\section{Introduction}
Since matching theory \cite{LP09} was established,
a number of generalizations of the matching problem have been proposed,
including
path-matchings \cite{CG97},
even factors \cite{CG01,Pap07,PS04},
triangle-free $2$-matchings \cite{CP80,Pap04},
square-free $2$-matchings \cite{Hart06,Pap07},
$K_{t,t}$-free $t$-matchings \cite{Fra03},
$K_{t+1}$-free $t$-matchings \cite{BV10},
2-matchings covering prescribed edge cuts \cite{BIT13,KS08},
and
$\U$-feasible{} $2$-matchings \cite{Tak17}.
For most of these generalizations,
important results in matching theory can be extended,
such as a min-max theorem, polynomial algorithms, and a linear programming formulation with dual integrality.
However,
while some similar structures are found,
in most cases,
they have been studied separately and
few connections among those similar structures have been identified.
In this paper,
we propose a new framework of \emph{optimal $t$-matchings excluding prescribed $t$-factors},
to demonstrate a unified understanding of these generalizations.
The proposed framework includes
all of the above generalizations
and
the arborescence problem.
Furthermore,
it includes
the traveling salesman problem (TSP)\@.
This broad coverage implies the intractability of the framework in general;
however, we propose a tractable class that
includes most of the efficiently solvable classes of the above problems.
Our main contributions are a min-max theorem and
a combinatorial polynomial algorithm that commonly extend those for
the matching and triangle-free $2$-matching problems in nonbipartite graphs,
the square-free $2$-matching problem in bipartite graphs,
and
the arborescence problem in directed graphs.
A key ingredient of the proposed algorithm is a technique to shrink the excluded $t$-factors.
This technique commonly extends the techniques used to
shrink odd cycles,
triangles,
squares,
and directed cycles
in a matching algorithm \cite{Edm65},
a triangle-free $2$-matching algorithm \cite{CP80},
square-free $2$-matching algorithms in bipartite graphs \cite{Hart06,Pap07},
and
an arborescence algorithm \cite{CL65,Edm67},
respectively.
We demonstrate that the proposed framework is
tractable in the class where this shrinking technique works.
\subsection{Previous Work}
The problems most relevant to our work are the \emph{even factor},
\emph{triangle-free $2$-matching},
and
\emph{square-free $2$-matching problems}.
\subsubsection{Even factor}
The even factor problem \cite{CG01} is a generalization of the nonbipartite matching problem,
which admits a further generalization:
the basic/independent even factor problem \cite{CG01,IT08} is
a common generalization with matroid intersection.
The origin of the even factor problem is the \emph{independent path-matching problem} \cite{CG97},
which is a common generalization of the nonbipartite matching and matroid intersection problems.
In \cite{CG97},
a min-max theorem,
totally dual integral polyhedral description,
and
polynomial solvability by the ellipsoid method were presented.
These were followed by further analysis of the min-max relation \cite{FS02}
and Edmonds-Gallai decomposition \cite{SS04}.
A combinatorial approach to the path-matchings was proposed in \cite{SW02}
and completed by Pap \cite{Pap07},
who addressed a further generalization,
\emph{the even factor problem} \cite{CG01}.
Here, let $D=(V,A)$ be a digraph.
A subset of arcs $F \subseteq A$ is called a \emph{path-cycle factor} if
it is a vertex-disjoint collection of directed cycles (dicycles) and directed paths (dipaths).
Equivalently,
an arc subset $F$ is a path-cycle factor
if,
in the subgraph $(V,F)$,
the indegree and outdegree of every vertex are at most one.
An \emph{even factor} is a path-cycle factor excluding
dicycles of odd length (odd dicycles).
While
the maximum even factor problem is NP-hard,
in \emph{odd-cycle symmetric{}} digraphs
it enjoys
min-max theorems \cite{CG01,PS04},
the Edmonds-Gallai decomposition \cite{PS04},
and
polynomial-time algorithms \cite{CG01,Pap07}.
A digraph is called \emph{odd-cycle symmetric{}} if every odd dicycle has its reverse dicycle.
Moreover,
a maximum-weight even factor can be found in polynomial time
in odd-cycle symmetric{} weighted digraphs,
which are odd-cycle symmetric{} digraphs with arc weights such that
the total weight of the arcs in an odd dicycle is
equal to that of its reverse dicycle.
The maximum-weight matching problem is straightforwardly reduced to
the maximum-weight even factor problem in odd-cycle symmetric{} weighted digraphs (see Sect.\ \ref{SECbief}).
The assumption of odd-cycle symmetry of (weighted) digraphs is supported by
its relation to
discrete convexity \cite{KT09}.
The independent even factor problem is
a common generalization of the
even factor and matroid intersection problems.
In odd-cycle symmetric{} digraphs
it
admits
combinatorial polynomial algorithms \cite{CG01,IT08} and
a decomposition theorem \cite{IT08},
which extends the Edmonds-Gallai decomposition and
the principal partition for matroid intersection \cite{Iri79,IF81}.
In odd-cycle symmetric{} weighted digraphs,
a linear program with dual integrality and
a combinatorial algorithm for the weighted independent even factor problem
have been presented in \cite{Tak12wief}.
The results are summarized in Table \ref{TABef}.
For more details,
readers are referred to a survey paper \cite{Tak10}.
\begin{table}
\caption{Results for path-matchings and even factors.
(E), (A), (C) denote the ellipsoid method, an algebraic algorithm, and a combinatorial algorithm,
respectively.}
\label{TABef}
\centering
\begin{tabular}{|l|l|l|}
\hline
& Path-matchings & Independent path-matchings \\ \hline
Min-max theorem & Cunningham--Geelen \cite{CG97} & Cunningham--Geelen \cite{CG97} \\
& Frank--Szeg\H{o} \cite{FS02} & \\
Algorithm & Cunningham--Geelen \cite{CG97} (E) & Cunningham--Geelen \cite{CG97} (E) \\
Decomposition theorem & Spille--Szeg\H{o} \cite{SS04} & Iwata--Takazawa \cite{IT08} \\
LP formulation & Cunningham--Geelen \cite{CG97} & Cunningham--Geelen \cite{CG97} \\
Algorithm (Weighted) & Cunningham--Geelen \cite{CG97} (E) & Cunningham--Geelen \cite{CG97} (E) \\\hline\hline
& Even factors & Independent even factors \\ \hline
Min-max theorem & Cunningham--Geelen \cite{CG01} & Iwata--Takazawa \cite{IT08} \\
& Pap--Szeg\H{o} \cite{PS04} & \\
Algorithm & Cunningham--Geelen \cite{CG01} (A) & Iwata--Takazawa \cite{IT08} (C) \\
& Pap \cite{Pap07} (C) & \\
Decomposition theorem & Pap--Szeg\H{o} \cite{PS04} & Iwata--Takazawa \cite{IT08} \\
LP formulation & Kir\'{a}ly--Makai \cite{KM04} & Takazawa \cite{Tak12wief} \\
Algorithm (Weighted) & Takazawa \cite{Tak08} (C) & Takazawa \cite{Tak12wief} (C)\\\hline
\end{tabular}
\end{table}
\subsubsection{Restricted $t$-matching}
The triangle-free $2$-matching and square-free $2$-matching problems are types of
the \emph{restricted $2$-matching problem},
wherein
a main objective is to provide a tight relaxation of the
TSP\@.
Here, let $G=(V,E)$ be a simple undirected graph.
For $v \in V$,
let $\delta(v) \subseteq E$ denote the set of edges incident to $v$.
For a positive integer $t$,
a vector $x \in \mathbf{Z}_+^E$ is called a \emph{$t$-matching} (resp.,\ \emph{$t$-factor}) if
$\sum_{e \in \delta(v)}x(e) \le t$ (resp.,\ $\sum_{e \in \delta(v)}x(e) = t$) for each $v \in V$.
A $2$-matching $x$ is called \emph{triangle-free} if it excludes a triple of edges $(e_1,e_2,e_3)$
such that
$e_1$, $e_2$, and $e_3$ form a cycle and $x(e_1)=x(e_2)=x(e_3)=1$.
For the maximum-weight triangle-free $2$-matching problem,
a combinatorial algorithm,
together with a totally dual integral formulation, has been designed \cite{CP80,Pap04}.
If we only deal with simple $2$-matchings $x \in \{0,1\}^E$,
the triangle-free $2$-matching problem becomes much more complicated \cite{HartD}.
A vector $x \in \{0,1\}^E$ is identified with an edge set $F \subseteq E$ such that $e \in F$ if and only if $x(e)=1$.
That is,
an edge set $F \subseteq E$ is called a simple $t$-matching if $|F \cap \delta(v)| \le t$ for each $v \in V$.
For a positive integer $k$,
a simple $2$-matching is called \emph{\c{k}-free} if it excludes cycles of length at most $k$.
Finding a maximum simple \c{k}-free $2$-matching is NP-hard for $k\ge 5$,
and open for $k=4$.
In contrast,
the simple \c{4}-free $2$-matching problem becomes tractable in bipartite graphs.
We often refer to a simple \c{4}-free $2$-matching in a bipartite graph as a \emph{square-free $2$-matching}.
For the square-free $2$-matching problem,
extensions of the classical matching theory,
such as
min-max theorems \cite{Fra03,Hart06,Kir99,Kir09},
combinatorial algorithms \cite{Hart06,Pap07},
and
decomposition theorems \cite{Tak17DAM},
have been established.
Further,
two generalizations of square-free $2$-matchings have been proposed.
Frank \cite{Fra03} introduced a generalization,
\emph{$K_{t,t}$-free $t$-matchings} in bipartite graphs,
and provided a min-max theorem.
Another generalization introduced in \cite{Tak17}
is \emph{$\U$-feasible{} $2$-matchings}.
For $\mathcal{U} \subseteq 2\sp{V}$,
a $2$-matching is \emph{$\U$-feasible} if it does not contain a $2$-factor of $G[U]$ for any $U \in \mathcal{U}$.
Takazawa \cite{Tak17} presented a min-max theorem,
a combinatorial algorithm,
and
decomposition theorems
for the case where each $U \in \mathcal{U}$ induces a Hamilton-laceable graph \cite{Sim78},
by extending the aforementioned theory for square-free $2$-matchings in bipartite graphs.
For the weighted case,
Kir\'aly \cite{Kir09} proved that finding a maximum-weight square-free $2$-matching is NP-hard (see also \cite{Fra03}).
However,
Makai \cite{Mak07} presented a linear programming formulation of the weighted $K_{t,t}$-free $t$-matching problem in bipartite graphs
with dual integrality for a special case where the weight is \emph{vertex-induced} on each $K_{t,t}$
(see Definition \ref{DEFvertexinduced}).
Takazawa \cite{Tak09} designed a combinatorial algorithm for this case.
The assumption on the weight is supported by discrete convexity in \cite{KST12},
which proved that
maximum-weight square-free $2$-matchings in bipartite graphs
induce an M-concave function on a jump system \cite{Mur06}
if and only if the edge weight is vertex-induced on every square.
The aforementioned results on simple restricted $t$-matchings are for bipartite graphs.
We should also mention another graph class in which restricted $t$-matchings are tractable:
degree-bounded graphs.
In subcubic graphs,
optimal $2$-matchings excluding the cycles of length three and/or four are tractable \cite{BK12,HL11,HL13,Kob14}.
Some of these results are generalized to $t$-matchings
excluding $K_{t+1}$ and $K_{t,t}$ in graphs with maximum degree of up to $t+1$ \cite{BV10,KY12}.
In bridgeless cubic graphs,
there always exists a $2$-factor covering all $3$- and $4$-edge cuts \cite{KS08},
and one can be found in polynomial time \cite{BIT13}.
A minimum-weight $2$-factor covering all $3$-edge cuts can also be found in polynomial time \cite{BIT13}.
\subsection{Contribution}
It is noteworthy that Pap \cite{Pap07} presented
combinatorial algorithms for
the even factor and square-free $2$-matching problems in the same paper.
These algorithms were based on similar techniques to shrink odd cycles and squares,
and
were improved in complexity by Babenko \cite{Bab12}.
However,
to the best of our knowledge,
there has been no comprehensive theory
that encompasses both algorithms.
In this paper,
we discuss \emph{$\U$-feasible{} $t$-matchings}
(see Definition \ref{DEFufeas}).
The $\U$-feasible{} $t$-matching problem generalizes not only
the $\U$-feasible{} $2$-matching problem \cite{Tak17}
but also
all of the aforementioned generalizations of the matching problem,
as well as the TSP and the arborescence problem (see Sect.\ \ref{SECef}).
The objective of this paper is to provide a unified understanding of these problems.
One example of such an understanding is
that $\mathcal{U}$-feasibility is a common generalization of the
blossom constraint for the nonbipartite matching problem and
the subtour elimination constraint for the TSP\@.
The main contributions of this paper are a min-max theorem and an efficient combinatorial algorithm for the maximum $\U$-feasible{} $t$-matching problem in bipartite graphs
under a plausible assumption.
Note that
the $\U$-feasible{} $t$-matching problem in \emph{bipartite} graphs can describe the \emph{nonbipartite} matching problem.
We also remark that it is reasonable to impose some assumption in order to
obtain a tractable class of the $\U$-feasible{} $t$-matching problem.
(Recall that it can describe the Hamilton cycle problem.)
Indeed,
we assume that an expanding technique is always valid for the excluded $t$-factors (see Definition \ref{DEFexpansion}).
This assumption is sufficiently broad to include the instances reduced from nonbipartite matchings,
even factors in odd-cycle symmetric{} digraphs,
triangle-free $2$-matchings,
square-free $2$-matchings,
and
arborescences.
We then show that
the \emph{$C_{4k+2}$-free $2$-matching problem},
a new class of the restricted $2$-matching problem,
is contained in our framework.
We prove that
the $C_{4k+2}$-free $2$-matching problem under a certain assumption
is described as the $\U$-feasible{} $2$-matching problem under our assumption,
and thus
obtain a new class of the restricted $2$-matching problem which can be solved efficiently.
The proposed algorithm commonly extends those for
nonbipartite matchings,
even factors,
triangle-free $2$-matchings,
square-free $2$-matchings,
and arborescences.
Generally,
the proposed algorithm runs in $\order{t(|V|^3 \alpha + |V|^2 \beta)}$ time,
where
$\alpha$ and $\beta$ are the time required to check the feasibility of an edge set
and expand the shrunk structures,
respectively.
The complexities $\alpha$ and $\beta$ are typically small,
i.e.,
constant or $\order{|V|}$,
in the above specific cases (see Sect.\ \ref{SECex}).
We further establish a linear programming description with dual integrality
and
a primal-dual algorithm for the maximum-weight $\U$-feasible{} $t$-matching problem in bipartite graphs.
The complexity of the algorithm is $\order{t(|V|^3 (|E|+\alpha) + |V|^2 \beta)}$.
For the weighted case,
we also assume the edge weight to be vertex-induced for each $U \in \mathcal{U}$.
Note that an assumption of this kind
is unavoidable,
because the maximum-weight square-free $2$-matching problem is NP-hard.
To be more precise,
our assumption
exactly corresponds to the previous assumptions for the maximum-weight even factor and square-free $2$-matching problems,
both of which are plausible from the discrete convexity perspective \cite{KST12,KT09}.
This would be an example of a unified understanding of
even factors and square-free $2$-matchings.
This paper is organized as follows.
In Sect.\ \ref{SECdef},
we present a precise definition of the proposed framework.
Sect.\ \ref{SECunweighted}
describes
a min-max theorem and a combinatorial algorithm for
the maximum $\U$-feasible{} $t$-matching problem.
In Sect.\ \ref{SECw},
we extend these results to a linear programming formulation with dual integrality and
a primal-dual algorithm for the maximum-weight $\U$-feasible{} $t$-matching problem.
Conclusions are presented in Sect.\ \ref{SECconcl}.
\section{Our Framework}
\label{SECdef}
In this section,
we define the proposed framework
and
explain how the previously mentioned problems are reduced.
\subsection{Optimal $t$-matching Excluding Prescribed $t$-factors}
\label{SECproblem}
Here,
let $G=(V,E)$ be a simple undirected graph.
An edge $e$ connecting $u,v \in V$ is denoted by $\{u,v\}$.
If $G$ is a digraph,
then
an arc from $u$ to $v$ is denoted by $(u,v)$.
For $X \subseteq V$,
let $G[X] =(X,E[X])$ denote the subgraph of $G$ induced by $X$,
i.e.,\
$E[X] = \{\{u,v\} \mid \mbox{$u,v\in X$}, \mbox{$\{u,v\} \in E$}\}$.
Similarly,
for $F \subseteq E$,
define $F[X] = \{\{u,v\} \mid \mbox{$u,v\in X$}, \mbox{$\{u,v\} \in F$}\}$.
If $X,Y \subseteq V$ are disjoint,
then $F[X,Y]$ denotes the set of edges in $F$ connecting $X$ and $Y$.
Recall that
$\delta(v) \subseteq E$ denotes the set of edges incident to $v \in V$.
For $F \subseteq E$ and $v \in V$,
let $\deg_F(v) = |F \cap \delta(v)|$.
Then,
$F$ is
a \emph{$t$-matching} if
$\deg_F(v) \le t$ for each $v \in V$,
and a \emph{$t$-factor} if $\deg_F(v) = t$ for every $v \in V$.
\begin{definition}
\label{DEFufeas}
For
a graph $G=(V,E)$ and
$\mathcal{U} \subseteq 2^V$,
a $t$-matching $F \subseteq E$ is called \emph{$\U$-feasible} if
\begin{align}
\label{EQdefinition}
|F[U]| \le \left\lfloor \frac{t|U|-1}{2} \right\rfloor
\end{align}
for each $U \in \mathcal{U}$.
\end{definition}
Equivalently,
a $t$-matching $F$ in $G$ is not $\U$-feasible{} if
$F[U]$ is a $t$-factor in $G[U]$ for some $U \in \mathcal{U}$.
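For small instances, Definition \ref{DEFufeas} can be checked directly. The following Python sketch is our own illustration (all names are invented, not part of the paper); it flags the full square $a$-$b$-$c$-$d$ with $t=2$, whose four edges form a $2$-factor on its vertex set, while any three of its edges are feasible.

```python
def is_u_feasible_t_matching(F, V, U_family, t):
    """Check Definition: F is a t-matching (deg_F(v) <= t for all v)
    with |F[U]| <= floor((t|U| - 1)/2) for every U in U_family.
    Edges are frozensets of two vertices."""
    deg = {v: 0 for v in V}
    for e in F:
        for v in e:
            deg[v] += 1
    if any(d > t for d in deg.values()):
        return False                        # not a t-matching
    for U in U_family:
        F_U = [e for e in F if e <= U]      # edges with both ends in U
        if len(F_U) > (t * len(U) - 1) // 2:
            return False                    # F[U] contains a t-factor
    return True

# Square-free 2-matchings: forbid the 4-cycle a-b-c-d as a 2-factor.
V = {"a", "b", "c", "d"}
square = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]]
U_family = [frozenset(V)]
# All four edges violate |F[U]| <= floor(7/2) = 3; any three are feasible.
```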
This concept is a further generalization of
the $\U$-feasible{} $2$-matchings introduced in \cite{Tak17}.
In what follows,
we consider the maximum $\U$-feasible{} $t$-matching problem,
whose goal is to find a $\U$-feasible{} $t$-matching $F$ maximizing $|F|$.
We further deal with the maximum-weight $\U$-feasible{} $t$-matching problem,
in which the objective is
to find a $\U$-feasible{} $t$-matching $F$ maximizing $w(F)=\sum_{e \in F}w(e)$
for a given edge-weight vector $w \in \mathbf{R}_+\sp{E}$.
For a vector $x \in \mathbf{R}\sp{E}$ and $F \subseteq E$,
in general we denote $x(F) = \sum_{e \in F}x(e)$.
In discussing the weighted version,
we assume that $w$ is \emph{vertex-induced on each $U \in \mathcal{U}$}.
\begin{definition}
\label{DEFvertexinduced}
For a graph $G=(V,E)$,
a vertex subset $U \subseteq V$,
and an edge-weight $w \in \mathbf{R}\sp{E}$,
$w$ is called \emph{vertex-induced on $U$} if
there exists a function $\pi_U: U \to \mathbf{R}$ on $U$ such that
$w(\{u,v\}) = \pi_U(u) +\pi_U(v) $ for each $\{u,v\} \in E[U]$.
\end{definition}
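Vertex-inducedness on $U$ is easy to test when $G[U]$ is bipartite, which is the setting relevant to this paper: fix $\pi_U$ to $0$ at an arbitrary root of each component and propagate along edges; since every edge joins the two color classes, the free additive shift cancels in every sum $\pi_U(u)+\pi_U(v)$. A hedged Python sketch of this check (our own illustration, with invented names):

```python
from collections import deque

def is_vertex_induced(U, weights):
    """Test Definition: does pi: U -> R exist with
    w({u, v}) = pi(u) + pi(v) for every edge inside U?
    weights maps frozenset({u, v}) -> w.  Assumes the graph induced on U
    is bipartite, so pi(root) = 0 may be fixed per component (the free
    shift cancels on every edge)."""
    adj = {u: [] for u in U}
    for e, w in weights.items():
        u, v = tuple(e)
        if u in adj and v in adj:
            adj[u].append((v, w))
            adj[v].append((u, w))
    pi = {}
    for root in U:
        if root in pi:
            continue
        pi[root] = 0.0
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v, w in adj[u]:
                if v not in pi:
                    pi[v] = w - pi[u]       # forced by w = pi(u) + pi(v)
                    queue.append(v)
                elif abs(pi[u] + pi[v] - w) > 1e-9:
                    return False            # inconsistent: not vertex-induced
    return True

# Square with pi = (1, 2, 3, 4): weights 3, 5, 7, 5 are vertex-induced.
w_ok = {frozenset(("a", "b")): 3, frozenset(("b", "c")): 5,
        frozenset(("c", "d")): 7, frozenset(("d", "a")): 5}
```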
Here,
as noted previously,
not only the maximum-weight square-free $2$-matching problem in \emph{bipartite} graphs,
but also
many generalizations in \emph{nonbipartite} graphs,
such as the maximum-weight matching, even factor, and
triangle-free $2$-matching,
and
arborescence problems,
are
reduced to the maximum-weight $\U$-feasible{} $t$-matching problem in bipartite graphs
under the assumption that $w$ is vertex-induced on each $U \in \mathcal{U}$.
The reduction is shown in Sect.\ \ref{SECef}.
\subsection{Special Cases of $\U$-feasible{} $t$-matching in Bipartite Graphs}
\label{SECef}
Here we
demonstrate how the problems in the literature are reduced to the $\U$-feasible{} $t$-matching problem.
In Sect.\ \ref{SECbia}--\ref{SECarb},
we demonstrate reductions to the $\U$-feasible{} $t$-matching problem in \emph{bipartite} graphs,
which is the primary focus of this paper.
How our algorithm works for those specific cases is described in Sect.\ \ref{SECex}.
In Sect.\ \ref{SECnonbi},
we show reductions to
the $\U$-feasible{} $t$-matching problem in \emph{nonbipartite} graphs.
While we do not discuss solvability in nonbipartite graphs in this paper,
we show these reductions in order to demonstrate the generality of the proposed framework.
\subsubsection{Restricted $2$-matchings and Hamilton Cycles in Bipartite Graphs}
\label{SECbia}
Let $G=(V,E)$ be a simple bipartite graph.
If $t=2$ and $\mathcal{U} = \{U \subseteq V \mid |U| = 4\}$,
then a $\U$-feasible{} $2$-matching in $G$ is exactly a square-free $2$-matching in $G$.
Generally,
a simple \c{k}-free $2$-matching in $G$ is exactly a $\U$-feasible{} $2$-matching in $G$ where
$\mathcal{U} = \{U \subseteq V \mid 1 \le |U| \le k\}$.
For example,
if $\mathcal{U} = \{U \subseteq V \mid 1\le |U| \le |V|-1\}$,
then
the maximum $\U$-feasible{} $2$-matching problem includes the Hamilton cycle problem,
i.e.,\
if a maximum $\U$-feasible{} $2$-matching is of size $|V|$,
it is a Hamilton cycle.
Square-free $2$-matchings are generalized to $K_{t,t}$-free $t$-matchings in bipartite graphs \cite{Fra03}.
A simple $t$-matching is called \emph{$K_{t,t}$-free} if it does not contain $K_{t,t}$ as a subgraph.
A $K_{t,t}$-free $t$-matching in a bipartite graph
is exactly a $\U$-feasible{} $t$-matching, where $\mathcal{U} = \{U \subseteq V \mid |U| = 2t\}$.
\subsubsection{Matchings and Even Factors in Nonbipartite Graphs}
\label{SECbief}
First, we show the reduction of the nonbipartite matching problem to the even factor problem.
Then,
we present the reduction of the even factor problem to the $\U$-feasible{} $t$-matching problem in bipartite graphs.
Consider the maximum-weight matching problem in a nonbipartite graph $G=(V,E)$ with weight $w \in \mathbf{R}\sp{E}$.
This
can be reduced to the maximum-weight even factor problem
in a digraph $D=(V,A)$,
where
$A=\{ (u,v),(v,u) \mid \{u,v\}\in E \}$,
and
an arc-weight $w' \in \mathbf{R}\sp{A}$ is defined by
$w'((u,v)) = w'((v,u)) = w(\{u,v\})$.
For a matching $M\subseteq E$ in $G$,
it is clear that
there exists
an even factor $F \subseteq A$ in $D$ with $w'(F) = 2w(M)$.
Conversely,
for an even factor $F \subseteq A$ in $D$,
there exists a matching $M \subseteq E$ with $w(M) \ge w'(F)/2$.
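This reduction is mechanical; the following Python fragment (an illustration under our own naming, not the paper's code) builds the odd-cycle symmetric weighted digraph from an undirected instance. Each matched edge $\{u,v\}$ then corresponds to the even (2-arc) dicycle $\{(u,v),(v,u)\}$ of weight $2w(\{u,v\})$.

```python
def to_even_factor_instance(edge_weights):
    """Replace every undirected edge {u, v} of weight w by the opposite
    arcs (u, v) and (v, u), both of weight w.  The resulting weighted
    digraph is odd-cycle symmetric by construction: every dicycle has
    its reverse, with the same total weight."""
    arc_weights = {}
    for e, w in edge_weights.items():
        u, v = tuple(e)
        arc_weights[(u, v)] = w
        arc_weights[(v, u)] = w
    return arc_weights

edges = {frozenset(("a", "b")): 5.0, frozenset(("b", "c")): 2.0}
arcs = to_even_factor_instance(edges)
# The matching {{a, b}} corresponds to the even factor {(a, b), (b, a)}
# of weight 2 * 5.0.
```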
Here,
let $D=(V,A)$ and $w \in \mathbf{R}^{A}$ be an arbitrary
instance of the maximum-weight even factor problem
(Fig.\ \ref{FIGef}).
Then,
define an instance of the maximum-weight $\U$-feasible{} $t$-matching problem as follows.
Let
$t=1$.
For each $u \in V$,
let
$\p{u}$ and $\m{u}$ be two copies of $u$,
and
define
$\p{\hat{V}} =\{\p{u} \mid u \in V\}$ and $\m{\hat{V}} =\{\m{u} \mid u \in V\}$.
For $U \subseteq V$,
denote $\hat{U} = \bigcup_{u \in U}\{\p{u},\m{u}\}$.
Now define a bipartite graph $\hat{G}=(\hat{V},\hat{E})$,
$\hat{\mathcal{U}} \subseteq 2^{\hat{V}}$,
and
an edge-weight $\hat{w} \in \mathbf{R}^{\hat{E}}$
by
\begin{align}
&{}\hat{E} = \{\{\p{u},\m{v}\} \mid (u,v) \in A\},
\quad
\hat{\mathcal{U}} = \{ \hat{U} \mid \mbox{$U \subseteq V$}, \mbox{$|U|$ is odd}\},
\label{EQef}\\
&{}
\hat{w}(\{\p{u},\m{v}\}) = w((u,v)) \quad (\{\p{u},\m{v}\} \in \hat{E}).
\end{align}
Note that a $1$-matching in $\hat{G}$ corresponds to a path-cycle factor in $D$.
For $\hat{U}\in \hat{\mathcal{U}}$,
a $1$-factor in $\hat{G}[\hat{U}]$ corresponds to
a vertex-disjoint collection of dicycles through $U$ in $D$,
which must contain at least one odd dicycle because $|U|$ is odd.
Thus,
$\hat{\mathcal{U}}$-feasibility of a $1$-matching in $\hat{G}$ exactly corresponds to
excluding odd cycles in a path-cycle factor in $D$,
which results in an even factor.
If $(D, w)$ is odd-cycle symmetric,
$\hat{G}[\hat{U}]$ is a symmetric bipartite graph and
$\hat{w}$ is vertex-induced on $\hat{U}$
for each $\hat{U} \in \hat{\mathcal{U}}$.
Thus,
the instance constructed in the reduction satisfies
our assumption
that $\hat{w}$ is vertex-induced on each $\hat{U} \in \hat{\mathcal{U}}$.
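Construction \eqref{EQef} can be sketched in Python as follows (an illustration with invented names; the explicit enumeration of $\hat{\mathcal{U}}$ is viable only for tiny instances, and the algorithm in this paper never lists $\hat{\mathcal{U}}$ but only queries feasibility).

```python
from itertools import combinations

def even_factor_to_bipartite(V, arc_weights):
    """Build E_hat and U_hat: split each u into u+ and u-, add an edge
    {u+, v-} of weight w((u, v)) per arc (u, v), and forbid the doubled
    copy of every odd-size vertex subset."""
    E_hat = {(u + "+", v + "-"): w for (u, v), w in arc_weights.items()}
    U_hat = [frozenset(x + sign for x in U for sign in ("+", "-"))
             for k in range(1, len(V) + 1, 2)       # odd sizes only
             for U in combinations(sorted(V), k)]
    return E_hat, U_hat

# Directed triangle a -> b -> c -> a (an odd dicycle):
V = {"a", "b", "c"}
arcs = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "a"): 1.0}
E_hat, U_hat = even_factor_to_bipartite(V, arcs)
# The 1-matching using all three edges is a 1-factor on the doubled
# {a, b, c}, hence excluded -- exactly the odd dicycle.
```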
\begin{figure}
\centering
\includegraphics[height=.25\textheight]{ef.png}
\caption{The maximum even factor problem in $D$ is reduced to the maximum $\hat{\mathcal{U}}$-feasible 1-matching problem in $\hat{G}$,
where $\hat{\mathcal{U}} = \{ \hat{U} \mid \mbox{$U \subseteq V$}, \mbox{$|U|$ is odd}\}$.
The set of thick arcs in $D$ is an even factor
that corresponds to the set of thick edges in $\hat{G}$,
which is a $\hat{\mathcal{U}}$-feasible $1$-matching.}
\label{FIGef}
\end{figure}
\subsubsection{Triangle-free $2$-matchings in Nonbipartite Graphs}
\label{SECtri}
Here,
let $G=(V,E)$ be an undirected nonbipartite graph
and $w \in \mathbf{R}_+^E$.
Now
define $\p{\hat{V}}$, $\m{\hat{V}}$, and $\hat{U}$
as described in Sect.\ \ref{SECbief}.
Let $t=1$
and
define $\hat{G} =(\hat{V}, \hat{E})$,
$\hat{\mathcal{U}} \subseteq 2\sp{\hat{V}}$,
and $\hat{w}\in \mathbf{R}\sp{\hat{E}}$
by
\begin{align*}
&{}\p{\hat{V}} =\{\p{u} \mid u \in V\},\qquad \m{\hat{V}} =\{\m{v} \mid v \in V\}, \\
&\hat{E} = \bigcup_{\{u,v\} \in E}\{ \{\p{u},\m{v}\}, \{\p{v},\m{u}\} \}, \qquad
\hat{\mathcal{U}} = \{ \hat{U} \mid \mbox{$U \subseteq V$, $|U| =3$} \},\\
&\hat{w}(\{\p{u},\m{v}\})=\hat{w}(\{\p{v},\m{u}\}) = w(\{u,v\}) \quad (\{u,v\} \in E).
\end{align*}
It is straightforward that
a triangle-free $2$-matching $F$ in $G$ corresponds to a $\hat{\mathcal{U}}$-feasible $1$-matching $\hat{F}$ in $\hat{G}$ such that $w(F) = \hat{w}(\hat{F})$,
and vice versa (Fig.\ \ref{FIGtriangle}).
\begin{figure}
\centering
\includegraphics[height=.18\textheight]{triangle.pdf}
\caption{The maximum triangle-free $2$-matching problem in $G$ is reduced to the maximum $\hat{\mathcal{U}}$-feasible 1-matching problem
in $\hat{G}$,
where $\hat{\mathcal{U}} = \{ \hat{U} \mid \mbox{$U \subseteq V$}, \mbox{$|U|=3$}\}$.
The set of thick edges in $G$ is a triangle-free $2$-matching that
corresponds to the set of thick edges in $\hat{G}$,
which is a $\hat{\mathcal{U}}$-feasible $1$-matching.}
\label{FIGtriangle}
\end{figure}
\subsubsection{Matroids and Arborescences}
\label{SECarb}
Here,
let $\mathbf{M}$ be a matroid with ground set $V$
and circuit family $\mathcal{C} \subseteq 2\sp{V}$.
The problem of finding a maximum-weight
independent set in $\mathbf{M}$ with respect to $w \in \mathbf{R}\sp{V}$ is described as
the maximum-weight $\hat{\mathcal{U}}$-feasible $t$-matching problem
in a bipartite graph $\hat{G}$ as follows.
Let $t=1$.
Define $\hat{G} =(\hat{V}, \hat{E})$,
$\hat{\mathcal{U}} \subseteq 2\sp{\hat{V}}$,
and $\hat{w}\in \mathbf{R}\sp{\hat{E}}$
by
\begin{align*}
&{}\hat{V}^+ = \{ \p{u} \mid u \in V \}, \quad
\hat{V}^- = \{ \m{v} \mid v \in V \}, \\
&{}\hat{E} = \{ \{\p{v}, \m{v}\} \mid v \in V \}, \quad
\hat{\mathcal{U}} = \left\{ \bigcup_{v \in C} \{\p{v}, \m{v}\} \mathrel{}\middle|\mathrel{} C \in \mathcal{C} \right\}, \\
&{}\hat{w}(\{\p{v}, \m{v}\}) = w(v).
\end{align*}
Then, it is straightforward that
$I \subseteq V$ is an independent set in $\mathbf{M}$
if and only if
the edge set $\{ \{\p{v}, \m{v}\} \mid v \in I \}$ is a
$\hat{\mathcal{U}}$-feasible $1$-matching in $\hat{G}$.
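As a toy instance (our own, for illustration only), take the uniform matroid $U_{2,4}$, whose circuits are exactly the $3$-element subsets of the ground set; the sketch below builds the reduction and checks the correspondence between independence and $\hat{\mathcal{U}}$-feasibility.

```python
from itertools import combinations

def matroid_to_bipartite(V, circuits, w):
    """One edge {v+, v-} of weight w(v) per element, one forbidden set
    (the doubled circuit) per circuit."""
    E_hat = {(v + "+", v + "-"): w[v] for v in V}
    U_hat = [frozenset(x + sign for x in C for sign in ("+", "-"))
             for C in circuits]
    return E_hat, U_hat

def independent_iff_feasible(I, circuits):
    """The edges {v+, v-}, v in I, form a U_hat-feasible 1-matching iff
    no circuit C is contained in I (otherwise those edges would be a
    1-factor on the doubled C)."""
    return not any(set(C) <= set(I) for C in circuits)

# Uniform matroid U_{2,4}: ground set {1,...,4}, circuits = 3-subsets.
V = ["1", "2", "3", "4"]
circuits = list(combinations(V, 3))
E_hat, U_hat = matroid_to_bipartite(V, circuits, {v: 1.0 for v in V})
```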
Arborescences in a digraph are a special case of matroid intersection.
Although we do not know how to describe matroid intersection in our framework,
the arborescence problem
can be reduced to the
$\U$-feasible{} $t$-matching problem in a bipartite graph as follows.
Let $D=(V,A)$
be a digraph in which we are asked to find a maximum-weight arborescence
with respect to an arc weight $w\in \mathbf{R}\sp{A}$.
Let $t=1$,
and
define $\hat{G} =(\hat{V}, \hat{E})$,
$\hat{\mathcal{U}} \subseteq 2\sp{\hat{V}}$,
and $\hat{w}\in \mathbf{R}\sp{\hat{E}}$
by
\begin{align*}
&{}\hat{V}^+ = \{ \p{a} \mid a \in A \}, \quad
\hat{V}^- = \{ \m{v} \mid v \in V \}, \quad
\hat{E} = \{ \{\p{a}, \m{v}\} \mid \mbox{$v$ is the head of $a$ in $D$} \}, \\
&{}\hat{\mathcal{U}} = \left\{ \{\p{a} \mid a \in A(C) \} \cup \{\m{v} \mid v \in V(C) \} \mid \mbox{$C$ is a directed cycle in $D$} \right\}, \\
&{}\hat{w}(\{\p{a}, \m{v}\}) = w(a),
\end{align*}
where $A(C)$ and $V(C)$ denote the sets of arcs and vertices of a directed cycle $C$,
respectively.
Again, it is straightforward that
$A' \subseteq A$ is an arborescence in $D$
if and only if
the edge set $$\{ \{\p{a}, \m{v}\} \mid \mbox{$a \in A'$, $v$ is the head of $a$ in $D$} \}$$
is a
$\hat{\mathcal{U}}$-feasible $1$-matching in $\hat{G}$.
\subsubsection{Special Cases of Nonbipartite $\U$-feasible{} $t$-matchings}
\label{SECnonbi}
The simple \c{k}-free $2$-matching problem in a nonbipartite graph $G=(V,E)$
is exactly the $\U$-feasible{} $2$-matching problem in $G$,
where
$\mathcal{U} = \{U \subseteq V \mid 1\le |U| \le k\}$.
For example,
if $\mathcal{U} = \{U \subseteq V \mid 1\le |U| \le|V|-1\}$,
then
a $\U$-feasible{} $2$-matching of size $|V|$ is
exactly a Hamilton cycle.
The $K_{t+1}$-free $t$-matching problem \cite{BV10}
is a generalization of the simple triangle-free $2$-matching problem.
A simple $t$-matching is called \emph{$K_{t+1}$-free}
if it does not contain $K_{t+1}$ as a subgraph.
Now
a $K_{t+1}$-free $t$-matching is exactly
a $\U$-feasible{} $t$-matching, where $\mathcal{U}=\{U\subseteq V \mid |U|= t+1\}$.
A \emph{$2$-factor covering prescribed edge cuts} is also described as a $\U$-feasible{} $2$-matching.
Here,
let $C \subseteq E$ be an edge cut,
i.e.,
$C$ is an inclusion-minimal edge subset such that
deleting $C$ makes $G$ disconnected.
A $2$-factor $F$ covers $C$ if $F \cap C \neq \emptyset$.
For a family $\mathcal{C}$ of edge cuts,
a $2$-factor covering every edge cut in $\mathcal{C}$ is a relaxed concept of Hamilton cycles,
i.e.,\
if $\mathcal{C}$ is the family of all edge cuts in $G$,
then a $2$-factor covering all edge cuts in $\mathcal{C}$ is a Hamilton cycle.
Now a $2$-factor covering every edge cut in $\mathcal{C}$ is described as a $\U$-feasible{} $2$-factor by
putting
$\mathcal{U} = \{ U \subseteq V \mid \mbox{$\delta(U) \in \mathcal{C}$} \}$,
where
$\delta(U)$ denotes $E[U, V \setminus U]$,
i.e.,\
the set of edges connecting $U$ and $V \setminus U$.
For example,
a $2$-factor covering all $3$- and $4$-edge cuts \cite{BIT13,KS08} is a $\U$-feasible{} $2$-factor,
where $\mathcal{U} =\{ U \subseteq V \mid \mbox{$\delta(U)$ is a $3$-edge cut or a $4$-edge cut} \}$.
\section{Maximum $\U$-feasible{} $t$-matching}
\label{SECunweighted}
In this section,
we present a min-max theorem and a combinatorial algorithm
for the maximum $\U$-feasible{} $t$-matching problem in bipartite graphs.
The proposed algorithm commonly extends those for
nonbipartite matchings \cite{Edm65},
even factors \cite{Pap07},
triangle-free $2$-matchings \cite{CP80},
square-free $2$-matchings \cite{Hart06,Pap07},
and
arborescences \cite{CL65,Edm67}.
We begin with a weak duality theorem in Sect.\ \ref{SECweak}.
The proposed algorithm is described
in Sect.\ \ref{SECalg},
and
its validity is proved in Sect.\ \ref{SECminmax}
together with
the min-max theorem (strong duality theorem).
In Sect.\ \ref{SECex},
we demonstrate
how the algorithm works in such special cases.
\subsection{Weak Duality}
\label{SECweak}
Here, let $G=(V,E)$ be an undirected graph
and
let
$\mathcal{U} \subseteq 2\sp{V}$.
For weak duality,
$G$ does not need to be bipartite.
For $X \subseteq V$,
define
$\mathcal{U}_X \subseteq \mathcal{U}$
and
$C_X \subseteq X$ by
\begin{align*}
&{}\mathcal{U}_X = \{ U \in \mathcal{U} \mid \mbox{$U$ forms a component in $G[X]$} \}, &
&{}C_X = X \setminus \bigcup_{U \in \mathcal{U}_X}U.
\end{align*}
Then,
the following inequality holds for an arbitrary $\U$-feasible{} $t$-matching $F \subseteq E$ and $X \subseteq V$.
\begin{lemma}
\label{LEMweak}
Let
$G=(V,E)$ be an undirected graph,
$\mathcal{U} \subseteq 2\sp{V}$,
and
$t$ be a positive integer.
For an arbitrary $\U$-feasible{} $t$-matching $F \subseteq E$
and $X \subseteq V$,
it holds that
\begin{align}
\label{EQweak}
|F| \le t|X| + |E[C_{V\setminus {X}}]| + \sum_{U \in \mathcal{U}_{V \setminus X}}\left\lfloor \frac{t|U|-1}{2} \right\rfloor.
\end{align}
\end{lemma}
\begin{proof}
By counting the number of edges in $F $ incident to $X$,
we obtain
\begin{align}
\label{EQsaturated}
&{}2|F[X]| + |F[X, V \setminus {X}]| \le t|X|.
\end{align}
In $G[V \setminus X]$,
it holds that
\begin{align}
\label{EQcritical}
&{} |F[V \setminus {X}]| \le |E[C_{V \setminus {X}}]| + \sum_{U \in \mathcal{U}_{V \setminus X}}\left\lfloor \frac{t|U|-1}{2} \right\rfloor.
\end{align}
By summing \eqref{EQsaturated} and \eqref{EQcritical},
we obtain
\begin{align}
\label{EQsumup}
&{}2|F[X]| + |F[X, V \setminus {X}]| + |F[V \setminus {X}]| \le t|X| + |E[C_{V \setminus {X}}]| + \sum_{U \in \mathcal{U}_{V \setminus X}}\left\lfloor \frac{t|U|-1}{2} \right\rfloor.
\end{align}
Since $|F| = |F[X]| + |F[X, V \setminus {X}]| + |F[V \setminus {X}]|$ is at most the left-hand side of \eqref{EQsumup},
we obtain \eqref{EQweak}.
\end{proof}
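Lemma \ref{LEMweak} can be verified numerically on small instances. The sketch below (Python, an illustration under our own naming) computes the right-hand side of \eqref{EQweak}; on the square with $\mathcal{U}$ containing its vertex set and $t=2$, the choice $X=\emptyset$ gives the bound $\lfloor 7/2 \rfloor = 3$, which is attained by any three edges of the square.

```python
def weak_duality_bound(V, E, U_family, t, X):
    """Right-hand side of the weak duality inequality:
    t|X| + |E[C]| + sum over U in U_{V \\ X} of floor((t|U| - 1)/2),
    where U_{V \\ X} collects the members of U_family that form a
    connected component of G[V \\ X] and C is the rest of V \\ X."""
    W = set(V) - set(X)
    adj = {v: set() for v in W}
    for e in E:
        u, v = tuple(e)
        if u in W and v in W:
            adj[u].add(v)
            adj[v].add(u)
    components, seen = [], set()
    for s in W:                      # DFS for the components of G[V \ X]
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(adj[u] - comp)
        components.append(frozenset(comp))
    U_W = [U for U in U_family if frozenset(U) in components]
    C = W - set().union(*map(set, U_W)) if U_W else W
    E_C = [e for e in E if set(e) <= C]
    return t * len(X) + len(E_C) + sum((t * len(U) - 1) // 2 for U in U_W)

V = {"a", "b", "c", "d"}
E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]]
U_family = [frozenset(V)]
# X = {}: the square is a component belonging to U, so the bound is 3.
```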
\subsection{Algorithm}
\label{SECalg}
Hereafter,
we assume that $G$ is bipartite.
Let $G=(V,E)$ be a simple undirected bipartite graph.
Here,
we denote the two color classes of $V$ by $\p{V}$ and $\m{V}$.
For $X \subseteq V$,
denote $\p{X} = X \cap \p{V}$ and $\m{X} = X \cap \m{V}$.
The endvertices of an edge $e\in E$ in $\p{V}$ and $\m{V}$ are denoted by $\partial^+ e$ and $\partial^- e$, respectively.
We begin by describing the shrinking of a forbidden structure $U \in \mathcal{U}$.
For concise notation,
we denote the input graph as $\h{G} = (\h{V}, \h{E})$
and
the graph generated by potential repeated shrinkings
as $G=(V,E)$.
Consequently,
we have $\mathcal{U} \subseteq 2\sp{\hat{V}}$.
The solution in hand is denoted by $F \subseteq E$.
Intuitively,
shrinking of $U$ involves
identifying all vertices in $\p{U}$ and $\m{U}$ to obtain new vertices $\p{u_U}$ and $\m{v_U}$, respectively,
and
deleting all edges in $E[U]$.
In a shrunk graph $G=(V,E)$,
we refer to a vertex $v \in V$ as a \emph{natural vertex} if $v$ is a vertex in the original graph $\h{G}$,
and as a \emph{pseudovertex} if it is a newly added vertex when shrinking some $U \in \mathcal{U}$.
We denote the set of natural vertices as $V_{\mathrm{n}}$,
and the set of pseudovertices as $V_{\mathrm{p}}$.
For $X \subseteq \h{V}$,
define
$X_{\mathrm{n}} = X \cap V_{\mathrm{n}}$
and
$X_{\mathrm{p}} = \bigcup\{\p{u_U}, \m{v_U} \mid \mbox{$\p{u_U}, \m{v_U} \in V_{\mathrm{p}}$, $U \cap X \neq \emptyset$} \}$.
For $X \subseteq V$,
define $\hat{X}\subseteq \hat{V}$ by
$\hat{X} = X_{\mathrm{n}} \cup \bigcup \{\p{U} \mid \p{u_U}\in X \cap V_{\mathrm{p}} \} \cup \bigcup \{\m{U} \mid \m{v_U}\in X \cap V_{\mathrm{p}} \}$.
A formal description of shrinking $U \in \mathcal{U}$ is given as follows.
\paragraph{Procedure $\mbox{\textsc{Shrink}}(U)$.}
Let $\p{u_U}$ and $\m{v_U}$ be new vertices,
and
reset the endvertices of an edge $e \in E \setminus E[U_{\mathrm{n}} \cup U_{\mathrm{p}}]$
with
$\p{\partial}e = u$ and
$\m{\partial}e = v$ by
\begin{align*}
{}&{}\p{\partial}e := \p{u_U} \quad \mbox{if $u \in \p{U_{\mathrm{n}}} \cup \p{U_{\mathrm{p}}}$}, \\
{}&{}\m{\partial}e := \m{v_U} \quad \mbox{if $v \in \m{U_{\mathrm{n}}} \cup \m{U_{\mathrm{p}}}$}.
\end{align*}
Then,
update $G$ by
\begin{align*}
&{}\p{V} := (\p{V} \setminus (\p{U_{\mathrm{n}}} \cup \p{U_{\mathrm{p}}})) \cup \{\p{u_U}\}, &
&{}\m{V} := (\m{V} \setminus (\m{U_{\mathrm{n}}} \cup \m{U_{\mathrm{p}}})) \cup \{\m{v_U}\}, &
&{}E := E \setminus E[U].
\end{align*}
Finally,
$F := F \cap E$
and
return $(G,F)$.
\medskip
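For concreteness, the bookkeeping in $\mbox{\textsc{Shrink}}(U)$ might be sketched as follows; the edge-list representation and names are our illustrative choices, not part of the formal development. Note that renaming may create parallel edges between $\p{u_U}$ and $\m{v_U}$ (cf.\ Fig.\ \ref{FIGtriShrink}), which the list representation preserves.

```python
def shrink(edges, F, U_plus, U_minus, u_new, v_new):
    """Sketch of Shrink(U): contract U_plus into u_new and U_minus into
    v_new, delete the edges inside E[U], and keep F restricted to the
    surviving (renamed) edges.  Parallel edges are preserved."""
    def inside(e):
        return e[0] in U_plus and e[1] in U_minus

    def rename(e):
        u, v = e
        return (u_new if u in U_plus else u,
                v_new if v in U_minus else v)

    new_edges = [rename(e) for e in edges if not inside(e)]
    new_F = [rename(e) for e in F if not inside(e)]
    return new_edges, new_F

# toy example: shrink U with U_plus = {0, 1}, U_minus = {10, 11}
edges = [(0, 10), (0, 11), (1, 10), (1, 11), (1, 12), (2, 11)]
F = [(0, 10), (1, 11)]
new_edges, new_F = shrink(edges, F, {0, 1}, {10, 11}, 'u+', 'v-')
assert set(new_edges) == {('u+', 12), (2, 'v-')}
assert new_F == []  # both F-edges were inside E[U]
```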
Procedure $\mbox{\textsc{Expand}}(G,F)$ is to execute
the reverse of $\mbox{\textsc{Shrink}}(U)$ for all shrunk $U \in \mathcal{U}$
to obtain the original graph $\hat{G}$.
Here,
a key point is that $\lfloor (t|U|-1)/2\rfloor$ edges are added to $F$ from $\hat{E}[U]$ for each maximal shrunk $U \in \mathcal{U}$.
\paragraph{Procedure $\mbox{\textsc{Expand}}(G,F)$.}
Let $G := \h{G}$.
For each inclusionwise maximal $U\in \mathcal{U}$ that is shrunk,
we add
$F_U \subseteq \hat{E}[U]$ of $\lfloor (t|U|-1)/2\rfloor$ edges
to $F$ such that $F$ is a $\U$-feasible{} $t$-matching in $\h{G}$.
Then return $(G,F)$.
\medskip
In Procedure $\mbox{\textsc{Expand}}(G,F)$,
the existence of $F_U$ is non-trivial.
To attain that
$\hat{F}=F \cup \bigcup\{ F_U \mid \mbox{$U \in \mathcal{U}$ is a maximal shrunk set}\}$
is a $t$-matching in $\hat{G}$,
it should be satisfied for
$F \subseteq E$ and
$F_U \subseteq \hat{E}[U]$
that
\begin{align}
\label{EQdegconst}
&\deg_F(u) \le
\begin{cases}
t & (u \in V_{\mathrm{n}}), \\
1 & (u \in V_{\mathrm{p}})
\end{cases}\\
\label{EQexpandt}
&\deg_{F_U}(u)
\begin{cases}
=t-1 & (\mbox{$u$ is incident to an edge in $F[U, V \setminus U]$}), \\
\le t & (\mbox{otherwise}).
\end{cases}
\end{align}
To achieve this,
we maintain that
$F$ satisfies the degree constraint \eqref{EQdegconst}.
Moreover,
we assume that,
for an arbitrary $F$ with \eqref{EQdegconst},
there exists $F_U$
satisfying $|F_U| = \lfloor (t|U|-1)/2\rfloor$
and \eqref{EQexpandt}
for every maximal shrunk set $U \in \mathcal{U}$.
This assumption is formally defined as follows.
\begin{definition}
\label{DEFexpansion}
Let $\hat{G}=(\hat{V}, \hat{E})$ be a bipartite graph,
$\mathcal{U} \subseteq 2\sp{\hat{V}}$,
and
$t$ be a positive integer.
For pairwise disjoint $U_1,\ldots, U_l \in \mathcal{U}$,
let $G=(V,E)$ denote the graph obtained from $\hat{G}$ by executing $\mbox{\textsc{Shrink}}(U_1)$, \ldots, $\mbox{\textsc{Shrink}}(U_l)$,
and
let $F \subseteq E$ be an arbitrary edge set satisfying \eqref{EQdegconst}.
If there exists $F_{U_i} \subseteq \hat{E}[U_i]$ satisfying $|F_{U_i}| = \lfloor (t|U_i|-1)/2 \rfloor$ and \eqref{EQexpandt} for each $i=1,\ldots, l$,
we say that \emph{$(\hat{G}, \mathcal{U},t)$ admits expansion}.
\end{definition}
In what follows,
we assume that
$(\hat{G},\mathcal{U},t)$ admits expansion.
This is exactly the class of $(\hat{G},\mathcal{U},t)$ to which our algorithm is applicable.
This assumption and the degree constraint \eqref{EQdegconst} guarantee that
we can always obtain a $t$-matching $\hat{F}=F \cup \bigcup\{ F_U \mid \mbox{$U \in \mathcal{U}$ is a maximal shrunk set}\}$ in $\hat{G}$.
Furthermore,
we should consider the $\mathcal{U}$-feasibility of $\hat{F}$.
We refer to $F$ in $G$ as \emph{feasible} if $\hat{F}$ is $\U$-feasible{}.
If there are several possibilities of $F_U$,
we say that $F$ is feasible if there is at least one $\U$-feasible{} $\hat{F}$.
In other words,
$F$ satisfying \eqref{EQdegconst} is not feasible
if,
for any possibility of $\hat{F}$,
\begin{align}
|\hat{F}[U']| = \frac{t|U'|}{2}
\label{EQviolate}
\end{align}
holds for some $U' \in \mathcal{U}$,
i.e.,
$\hat{F}$ contains a $t$-factor in $\hat{G}[U']$.
See Fig.\ \ref{FIGfeas} for an example.
Here, $t=2$, and we expand $U=\{u_1,u_2,u_3,u_4,v_1,v_2,v_3,v_4\}$
by adding $F_U$ of $\lfloor (t|U|-1)/2\rfloor = 7$ edges satisfying \eqref{EQexpandt}.
However,
$\hat{F}_1$ is \emph{not} $\U$-feasible{} because
it violates \eqref{EQdefinition} for $U'=\{u_3,u_4,u_5,u_6,v_3,v_4,v_5,v_6\}$.
On the other hand,
$\hat{F}_2$
satisfies \eqref{EQdefinition} for $U'$.
Thus,
$F$ in $G$ is feasible, and
we select $\hat{F}_2$ when expanding $U$.
\begin{figure}
\centering
\includegraphics[height=.25\textheight]{feasibility.pdf}
\caption{In expanding $U\in \mathcal{U}$, $\hat{F}_1$ is inappropriate because it contains a $t$-factor in $\hat{G}[U']$,
while $\hat{F}_2$ is appropriate.}
\label{FIGfeas}
\end{figure}
Here,
we describe our algorithm in detail.
The algorithm begins with $G=\h{G}$ and an arbitrary $\U$-feasible{} $t$-matching $F \subseteq \h{E}$,
typically $F = \emptyset$.
We first construct an auxiliary digraph.
\paragraph{Procedure $\mbox{\textsc{ConstructAuxiliaryDigraph}}(G,F)$.}
Construct a digraph $(V,A)$
defined by
$$A = \{ (u,v) \mid \mbox{$u\in \p{V}$, $v \in \m{V}$, $\{u,v\} \in E \setminus F$} \}
\cup \{ (v,u) \mid \mbox{$u\in \p{V}$, $v \in \m{V}$, $\{u,v\} \in F$} \}.$$
Define the sets of source vertices $S \subseteq \p{V}$ and
sink vertices $T \subseteq \m{V}$ by
\begin{align*}
&S = \{ u \in \p{V_{\mathrm{n}}} \mid \deg_F(u) \le t-1 \} \cup \{ \p{u_U} \in \p{V_{\mathrm{p}}} \mid \deg_F(\p{u_U}) = 0 \},\\
&T = \{ v \in \m{V_{\mathrm{n}}} \mid \deg_F(v) \le t-1 \} \cup \{ \m{v_U} \in \m{V_{\mathrm{p}}} \mid \deg_F(\m{v_U}) = 0 \}.
\end{align*}
Then, return $D=(V,A;S,T)$.
\medskip
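A minimal sketch of $\mbox{\textsc{ConstructAuxiliaryDigraph}}$ together with a breadth-first search for an $S$-$T$ path is given below; the distinction between natural vertices and pseudovertices is abstracted into a caller-supplied degree bound ($t$ or $1$), which is our simplification.

```python
from collections import deque

def auxiliary_digraph(V_plus, V_minus, E, F, bound):
    """Arcs point + -> - on non-matching edges and - -> + on matching
    edges; bound(v) is t for natural vertices and 1 for pseudovertices."""
    Fset = set(F)
    arcs = {v: [] for v in V_plus + V_minus}
    degF = {v: 0 for v in V_plus + V_minus}
    for (u, v) in F:
        degF[u] += 1
        degF[v] += 1
    for (u, v) in E:
        if (u, v) in Fset:
            arcs[v].append(u)
        else:
            arcs[u].append(v)
    S = [u for u in V_plus if degF[u] <= bound(u) - 1]
    T = {v for v in V_minus if degF[v] <= bound(v) - 1}
    return arcs, S, T

def find_st_path(arcs, S, T):
    """Breadth-first search; returns a vertex sequence from S to T."""
    parent = {s: None for s in S}
    queue = deque(S)
    while queue:
        v = queue.popleft()
        if v in T:
            path = [v]
            while parent[v] is not None:
                v = parent[v]
                path.append(v)
            return path[::-1]
        for w in arcs[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None

# toy run with t = 1: F = {(1, 2)}, augmenting path 0 -> 2 -> 1 -> 3
arcs, S, T = auxiliary_digraph([0, 1], [2, 3],
                               [(0, 2), (1, 2), (1, 3)], [(1, 2)],
                               lambda v: 1)
assert find_st_path(arcs, S, T) == [0, 2, 1, 3]
```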
Suppose that there exists a directed path $P=(e_1,f_1,\ldots, e_l, f_l, e_{l+1})$ in $D$ from $S$ to $T$.
Note that
$e_i \in E \setminus F$ ($i=1,\ldots, l+1$)
and ${f}_i \in F$ ($i=1,\ldots, l$).
We denote the symmetric difference $(F \setminus P) \cup (P \setminus F)$ of $F$ and $P$ by $F \triangle P$.
If $F\triangle P$ is feasible,
we execute $\mbox{\textsc{Augment}}(G,F,P)$ below.
We then execute
\textsc{Expand}$(G,F)$.
\paragraph{Procedure $\mbox{\textsc{Augment}}(G,F,P)$.}
Let $F:=F\triangle P$ and
return $F$.
\medskip
If $F\triangle P$ is not feasible,
we apply
$\mbox{\textsc{Shrink}}(U)$
after determining a set $U \in \mathcal{U}$ to be shrunk by the following procedure.
\paragraph{Procedure $\mbox{\textsc{FindViolatingSet}}(G,F,P)$.}
For $i=1,\ldots, l$,
define
$F_i = (F\setminus\{f_1,\ldots, f_i\}) \cup\{e_1,\ldots, e_{i}\}$.
Also define $F_0 = F$ and $F_{l+1} = F\triangle P$.
Let $i^*$ be the minimum index $i$ such that $F_i$ is not feasible,
and
let $U \in \mathcal{U}$ satisfy
\eqref{EQviolate} for $F = F_{i^*}$.
Then,
let
$F := F_{i^*-1}$,
and return $(F,U)$.
\medskip
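The scan over $F_1, \ldots, F_{l+1}$ in $\mbox{\textsc{FindViolatingSet}}$ can be sketched generically, with the problem-specific feasibility test passed in as an oracle; this abstraction is ours.

```python
def find_violating_prefix(F, path_edges, is_feasible):
    """Sketch of FindViolatingSet's scan.  path_edges is the alternating
    sequence [e_1, f_1, ..., e_l, f_l, e_{l+1}] with e_i outside F and
    f_i inside F.  Returns (F_{i*-1}, i*) for the smallest i with F_i
    infeasible; the caller knows that F_{l+1} = F triangle P is infeasible."""
    es = path_edges[0::2]   # e_1, ..., e_{l+1}: edges to be added
    fs = path_edges[1::2]   # f_1, ..., f_l: edges to be removed
    F_prev = set(F)
    for i in range(1, len(es) + 1):
        F_i = (set(F) - set(fs[:i])) | set(es[:i])
        if not is_feasible(F_i):
            return F_prev, i
        F_prev = F_i
    return F_prev, None  # no violation: F triangle P itself was feasible

# toy run: adding edge 3 is what breaks feasibility
result = find_violating_prefix({1}, [2, 1, 3], lambda s: 3 not in s)
assert result == ({2}, 2)
```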
Finally,
if $D$ does not have a directed path from $S$ to $T$,
we determine the minimizer ${X} \subseteq \h{V}$ of \eqref{EQmin} as follows.
\paragraph{Procedure $\mbox{\textsc{FindMinimizer}}(G,F)$.}
Let $R\subseteq V$ be the set of vertices reachable from $S$,
and let
$X := (\p{V} \setminus \p{R}) \cup \m{R}$.
If a natural vertex $v \in \m{V} \setminus X$ has $t$ edges in $F$ connecting $\p{R}$ and $v$,
then
$X := X \cup \{v\}$.
If a pseudovertex $\m{v}_U \in \m{V} \setminus X$ has one edge in $F$ connecting $\p{R}$ and $\m{v}_U$,
then
$X := X \cup \{\m{v}_U\}$.
Finally,
return $X:=\hat{X}$.
\medskip
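A sketch of $\mbox{\textsc{FindMinimizer}}$ follows: reachability by breadth-first search, then the two absorption rules for saturated minus-side vertices. The representation (arc lists, a predicate for pseudovertices) is our illustrative choice.

```python
from collections import deque

def find_minimizer(V_plus, V_minus, arcs, S, F, t, is_pseudo):
    """Sketch of FindMinimizer.  R = vertices reachable from S in the
    auxiliary digraph; start from X = (V_plus minus R) united with
    (V_minus intersect R), then absorb minus-side vertices whose full
    capacity is used by F-edges coming from R."""
    R = set(S)
    queue = deque(S)
    while queue:
        x = queue.popleft()
        for y in arcs.get(x, ()):
            if y not in R:
                R.add(y)
                queue.append(y)
    X = {u for u in V_plus if u not in R} | {v for v in V_minus if v in R}
    for v in V_minus:
        if v in X:
            continue
        from_R = sum(1 for (a, b) in F if b == v and a in R)
        capacity = 1 if is_pseudo(v) else t
        if from_R == capacity:
            X.add(v)
    return X  # in the algorithm, X is then mapped back into V-hat

# toy instance with no S-T path: the single edge is in F, so S is empty
X = find_minimizer([0], [2], {2: [0]}, [], [(0, 2)], 1, lambda v: False)
assert X == {0}
```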
We then
apply $\mbox{\textsc{Expand}}(G,F)$ and
the algorithm terminates by returning $F \subseteq \hat{E}$ and $X \subseteq \hat{V}$.
Now the description of the algorithm is completed.
The pseudocode of the algorithm is presented in Algorithm \ref{ALGunweighted}.
The optimality of $F$ and $X$ is proved
in Sect.\ \ref{SECminmax}.
We exhibit how the algorithm works in specific cases such as
the square-free $2$-matching, even factor,
triangle-free $2$-matching,
and arborescence problems,
and discuss the complexity for these cases in Sect.\ \ref{SECex}.
Before that,
we analyze the complexity of the algorithm for the general case.
\begin{algorithm}[t]
\caption{Maximum $\U$-feasible{} $t$-matching}
\label{ALGunweighted}
\begin{algorithmic}[1]
\State $G \leftarrow \hat{G}$, $F \leftarrow \emptyset$, $D \leftarrow \mbox{\textsc{ConstructAuxiliaryDigraph}}(G,F)$
\While{$D$ has an $S$-$T$ path $P$}
\If{$F\triangle P$ is feasible}
\State $F \leftarrow \mbox{\textsc{Augment}}(G,F,P)$
\State $(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F)$
\State $D \leftarrow \mbox{\textsc{ConstructAuxiliaryDigraph}}(G,F)$
\Else
\State $(F,U) \leftarrow \mbox{\textsc{FindViolatingSet}}(G,F,P)$
\State $(G,F) \leftarrow \mbox{\textsc{Shrink}}(U)$
\State $D \leftarrow \mbox{\textsc{ConstructAuxiliaryDigraph}}(G,F)$
\EndIf
\EndWhile
\State $X \leftarrow \mbox{\textsc{FindMinimizer}}(G,F)$,
$(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F)$\\
\Return $(F,X)$
\end{algorithmic}
\end{algorithm}
Here,
let $n = |\hat{V}|$ and $m=|\hat{E}|$.
The complexity of the algorithm varies according to the structure of $(G,\mathcal{U},t)$.
Recall that
$\alpha$ denotes the time required to determine the feasibility of $F$,
and $\beta$ denotes the time required to expand $U$.
To be precise,
$\alpha$ is the time required to check whether $F$ in a shrunk graph $G$ is feasible,
and,
if not,
find $U \in \mathcal{U}$ for which $F$ satisfies \eqref{EQviolate}.
Between augmentations,
we execute $\mbox{\textsc{Shrink}}(U)$ $\order{n}$ times.
For one $\mbox{\textsc{Shrink}}(U)$,
we check the feasibility of $F_1,\ldots, F_l$,
which requires $\order{n\alpha}$ time.
We then reconstruct the auxiliary digraph.
Here, we need only update the vertices and arcs on the $S$-$T$ path,
which takes $\order{n}$ time.
After augmentation,
we expand the shrunk vertex sets,
which takes $\order{n\beta}$ time in total.
Therefore,
the complexity for one augmentation is $\order{n^2 \alpha + n\beta}$.
Since augmentation occurs at most $tn/2$ times,
the total complexity of the algorithm is $\order{t(n^3\alpha + n^2 \beta)}$.
\subsection{Min-max Theorem: Strong Duality}
\label{SECminmax}
In this section,
we strengthen Lemma \ref{LEMweak} to be a min-max relation
and
prove the validity of Algorithm \ref{ALGunweighted}.
We show that
the output $(F,X)$ of the algorithm satisfies \eqref{EQweak} with equality.
This constructively proves the min-max relation
for the class of $(G,\mathcal{U},t)$ that admits expansion.
\begin{theorem}
\label{THminmax}
Let $G=(V,E)$ be a bipartite graph,
$\mathcal{U} \subseteq 2\sp{V}$,
and $t$ be a positive integer
such that $(G,\mathcal{U},t)$ admits expansion.
Then,
the maximum size of a $\U$-feasible{} $t$-matching is equal to
the minimum of
\begin{align}
\label{EQmin}
t|X| + |E[C_{V \setminus {X}}]| + \sum_{U \in \mathcal{U}_{V \setminus X}}\left\lfloor \frac{t|U|-1}{2} \right\rfloor ,
\end{align}
where $X$ runs over all subsets of $V$.
\end{theorem}
\begin{proof}
We denote the output of Algorithm \ref{ALGunweighted} by $(\hat{F},\hat{X}) $.
Here,
it is sufficient to
prove that \eqref{EQsaturated} and \eqref{EQcritical} hold with equality
for $(\hat{F},\hat{X})$.
Since $\hat{X}$ is defined based on reachability in the auxiliary digraph $D$,
it is straightforward that $\hat{F}[\hat{X}] = \emptyset$.
Moreover,
for every $v \in \hat{X}$,
$\deg_{\hat{F}}(v) = t$ holds;
thus, \eqref{EQsaturated} holds with equality.
Finally,
edges in
$\h{G}[\h{V}\setminus \hat{X}]$ are in $F$ before the last \textsc{Expand}$(G,F)$ or
are obtained by expanding pseudovertices $\p{u_U}$ and $\m{v_U}$,
which are isolated vertices in $G[V \setminus {X}]$.
This means that each such $U$ forms a component in $\h{G}[\h{V} \setminus \hat{X}]$;
thus, \eqref{EQcritical} holds with equality.
\end{proof}
\subsection{Applying our Algorithm to Special Cases}
\label{SECex}
Here, we demonstrate how Algorithm \ref{ALGunweighted} is applied to the special cases of
$C_{\le k}$-free $2$-matchings,
even factors (including nonbipartite matchings),
triangle-free $2$-matchings,
and
arborescences.
Differences appear in
determining feasibility in the shrunk graph and
the edges to be added by expansion.
\subsubsection{$C_{\le k}$-free 2-matchings in Bipartite Graphs}
The case where $k=4$,
i.e., square-free $2$-matchings in a simple bipartite graph,
is the most straightforward example.
In this case,
the family of the shrunk vertex sets never becomes nested,
i.e.,\
\textsc{Shrink}($U$) is always applied to a cycle of length four comprising four natural vertices.
Thus,
an edge set $F \subseteq E$ is feasible if and only if $F$ excludes a cycle of length four,
even if the graph is obtained by repeated shrinking.
Furthermore,
the feasibility of each $F_i$ ($i=1,\ldots, l$) can be checked in constant time
because it is sufficient to determine whether the new edge $e_i$ added to $F_i$ is in a square.
When expanding $U \in \mathcal{U}$,
it suffices to
choose $F_U$ consisting of three edges in $E[U]$ and
satisfying \eqref{EQexpandt} for $t=2$.
This always yields the $\mathcal{U}$-feasibility of $\hat{F}$
and can be performed in constant time for one square.
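Assuming the adjacency of $F$ is stored as sets, the constant-time test of whether a new edge closes a square might look as follows. This is a hypothetical helper, not the authors' implementation; the loops are constant-bounded because degrees in a $2$-matching are at most two.

```python
def closes_square(F_adj, u, v):
    """Return True iff adding the edge {u, v} creates a cycle of length
    four together with edges of F.  F_adj[x] is the set of F-neighbours
    of x; in a 2-matching each set has size at most 2, so both loops
    run a constant number of times."""
    for v2 in F_adj.get(u, ()):       # u - v2, with v2 != v
        if v2 == v:
            continue
        for u2 in F_adj.get(v2, ()):  # v2 - u2, with u2 != u
            if u2 != u and v in F_adj.get(u2, ()):
                return True           # square u - v2 - u2 - v - u
    return False

# F is the path u0 - v0 - u1 - v1; edge {u0, v1} closes a square
F_adj = {'u0': {'v0'}, 'v0': {'u0', 'u1'},
         'u1': {'v0', 'v1'}, 'v1': {'u1'}}
assert closes_square(F_adj, 'u0', 'v1')
assert not closes_square(F_adj, 'u0', 'v2')
```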
For the case where $k \ge 6$,
the problem becomes more involved.
Suppose that
$\mathcal{U}=\{U \subseteq \hat{V} \mid 1 \le |U| \le 6\}$ and
we expand $U \in \mathcal{U}$ with $|U|=6$.
We then select $F_U\subseteq E[U]$ with $|F_U| = 5$
according to \eqref{EQexpandt};
however, such $F_U$ might not exist.
Moreover,
even if such $F_U$ is found,
$F_U$ might contain a cycle of length four,
which violates $\mathcal{U}$-feasibility.
Such difficulty is inevitable because
the simple $C_{\le k}$-free $2$-matching problem in bipartite graphs is NP-hard when $k \ge 6$.
Thus,
we require an assumption for our algorithm to work.
One solution to this difficulty is to impose the connectivity of $F_U$,
i.e.,\
when expanding $U \in \mathcal{U}$,
we require that
there always exists
$F_U$ satisfying \eqref{EQexpandt} and that
$(U, F_U)$ is connected.
If $t = 2$,
this property amounts to the Hamilton-laceability of $G[U]$ (see \cite{Tak17}).
It is clear that $K_{t,t}$-free $t$-matchings in bipartite graphs \cite{Fra03}
also satisfy this assumption.
Under this assumption,
$F \subseteq E$ satisfying \eqref{EQdegconst} is feasible if and only if
$F$ does not contain a $K_{t,t}$ of natural vertices as a subgraph.
\subsubsection{Matchings and Even Factors in Nonbipartite Graphs}
\label{SECappef}
Since the nonbipartite matching problem is reduced to the even factor problem,
it suffices to discuss only the even factor problem.
Here,
let $D=(V,A)$ be an odd-cycle symmetric{} digraph and define $\h{G}=(\hat{V},\hat{E})$ and $\mathcal{U}$ by \eqref{EQef}.
Recall that,
if $D$ is odd-cycle symmetric{},
then
$\hat{G}[U]$ is a symmetric bipartite graph for each $U \in \mathcal{U}$.
In this case,
our algorithm is performed recursively,
i.e.,\
a $1$-matching $F \subseteq E$ in a shrunk graph $G$ is feasible
if $|F[U']| \le |U'|/2 - 1$ holds for $U' = \rmn{U} \cup \rmp{U}$ for every $U \in \mathcal{U}$.
This can be checked in $\order{n}$ time,
and
Procedure \textsc{Shrink}($U'$) is executed when a perfect matching in $G[U']$ is found in our solution.
See Fig.\ \ref{FIGefShrink} for an illustration.
\begin{figure}
\centering
\includegraphics[height=.25\textheight]{efShrink.pdf}
\caption{The maximum even factor problem in $D_0$ is reduced to the $\U$-feasible{} $1$-matching problem in $\hat{G}$.
If we find arc $(u_7^+, u_1^-)$ as an $S$-$T$ path in the auxiliary digraph,
we shrink $U=\{u_1^+,\ldots, u_7^+, u_1^-,\ldots, u_7^-\}$ to obtain $G$.}
\label{FIGefShrink}
\end{figure}
In \textsc{Expand}($G,F$),
we repeat expanding a maximal shrunk vertex set $U$,
where the proper shrunk subsets of $U$ remain shrunk.
We repeat this step until the original graph $\hat{G}$ is reconstructed.
See Fig.\ \ref{FIGefExpand1} for an illustration of expanding $U$.
Without loss of generality,
we can denote the perfect matching in $G[U']$ by $\bigcup_{i=1}^{k}\{\p{u_i},\m{u_{i+1}}\}$,
where $k=2k'+1$ is odd and $u_{k+1} = u_1$.
Furthermore,
assume that $\m{u_1}$ and $\p{u_j}$ are incident to an edge in $F[U', V \setminus U']$.
Now,
if $j = 2j'+1$ is odd,
let $F_{U'} = \bigcup_{i=1}^{j-1}\{\p{u_{i}}, \m{u_{i+1}}\} \cup \bigcup_{i=j'+1}^{k'}\{ \{\p{u_{2i}}, \m{u_{2i+1}}\}, \{\p{u_{2i+1}}, \m{u_{2i}}\} \}$.
If $j=2j'$ is even,
then let
$F_{U'} = \bigcup_{i=1}^{j'-1}\{ \{\p{u_{2i}}, \m{u_{2i+1}}\}, \{\p{u_{2i+1}}, \m{u_{2i}}\}\} \cup \bigcup_{i=j}^{k}\{\p{u_{i+1}}, \m{u_{i}} \}$.
It is straightforward that \textsc{Expand}($G,F$) can be performed in $\order{n}$ time.
Note that this procedure is possible because $k$ is odd and $G[U']$ is symmetric.
We also remark that this procedure corresponds to expanding an odd cycle in an even factor algorithm \cite{Pap07},
and expanding an odd cycle in Edmonds' blossom algorithm \cite{Edm65}.
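The two expansion formulas above can be checked mechanically. The following sketch (ours, with indices as in the text) constructs $F_{U'}$ for a given odd $k$ and position $j$, and verifies the size $k-1$ and the degree pattern \eqref{EQexpandt} with $t=1$: degree zero at $\p{u_j}$ and $\m{u_1}$, and degree at most one elsewhere.

```python
def expand_cycle(k, j):
    """Return F_{U'} as a set of pairs (p, m), each standing for the
    edge {u_p^+, u_m^-}, for the perfect matching {u_i^+ u_{i+1}^-}
    (k odd, u_{k+1} = u_1), where u_1^- and u_j^+ are matched outside."""
    assert k % 2 == 1
    def e(p, m):  # indices taken cyclically in 1..k
        return (((p - 1) % k) + 1, ((m - 1) % k) + 1)
    F = set()
    if j % 2 == 1:            # j = 2j' + 1
        jp, kp = (j - 1) // 2, (k - 1) // 2
        for i in range(1, j):
            F.add(e(i, i + 1))
        for i in range(jp + 1, kp + 1):
            F.add(e(2 * i, 2 * i + 1))
            F.add(e(2 * i + 1, 2 * i))
    else:                     # j = 2j'
        jp = j // 2
        for i in range(1, jp):
            F.add(e(2 * i, 2 * i + 1))
            F.add(e(2 * i + 1, 2 * i))
        for i in range(j, k + 1):
            F.add(e(i + 1, i))
    return F

def degrees(F, k):
    dp = {i: 0 for i in range(1, k + 1)}
    dm = {i: 0 for i in range(1, k + 1)}
    for (p, m) in F:
        dp[p] += 1
        dm[m] += 1
    return dp, dm

for k in (3, 5, 7):
    for j in range(1, k + 1):
        F = expand_cycle(k, j)
        dp, dm = degrees(F, k)
        assert len(F) == k - 1
        assert dp[j] == 0 and dm[1] == 0
        assert all(d <= 1 for d in list(dp.values()) + list(dm.values()))
```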
\begin{figure}
\centering
\includegraphics[height=.25\textheight]{efExpand1.pdf}
\medskip
\includegraphics[height=.25\textheight]{efExpand2.pdf}
\caption{Two types of expanding
$U$.
The set of thick edges in $\hat{G}$ is our $\U$-feasible{} $1$-matching,
where the dashed edges are those added when expanding $U$.
This $\U$-feasible{} $1$-matching corresponds to the even factor of thick arcs in $D_0$.}
\label{FIGefExpand1}
\end{figure}
\subsubsection{Triangle-free 2-matchings}
Recall the instance of the $\U$-feasible{} $t$-matching problem constructed in Sect.\ \ref{SECtri}.
Here, we denote a graph obtained from $\hat{G}$ by repeated shrinkings by $G$.
In $G$,
a pair of vertices $u \in \p{V}$ and $v \in \m{V}$ is referred to as \emph{twins} if
they are copies of the same original vertex or
they are pseudovertices added by the same shrinking procedure.
Here,
a $1$-matching $F \subseteq E$ is infeasible only if
it contains a matching of three edges covering three pairs of twins in the original graph $\hat{G}$.
In other words,
even if $F$ contains a matching of three edges covering three pairs of twins in $G$,
it is feasible if the endvertices of those edges are not three pairs of twins in $\hat{G}$.
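With each edge of the $1$-matching recorded as an ordered pair $(a,b)$ of original vertices (standing for the copy $\{u_a^+, u_b^-\}$), the infeasibility condition can be tested by the following brute-force sketch over vertex triples; this is our illustration of the stated condition, not the $\order{n}$ routine of the text.

```python
from itertools import combinations

def covers_three_twins(F):
    """F: set of pairs (a, b), each standing for the edge {u_a^+, u_b^-}
    with a, b original vertex names.  Returns True iff some triple of
    original vertices has all six of its copies covered by F."""
    vertices = {x for e in F for x in e}
    for trio in combinations(vertices, 3):
        inside = [e for e in F if e[0] in trio and e[1] in trio]
        if (len(inside) == 3
                and {e[0] for e in inside} == set(trio)
                and {e[1] for e in inside} == set(trio)):
            return True
    return False

assert covers_three_twins({(1, 2), (2, 3), (3, 1)})      # a triangle
assert not covers_three_twins({(1, 2), (2, 3), (3, 4)})  # a path
```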
For example,
recall the instance in Fig.\ \ref{FIGtriangle}.
In Fig.\ \ref{FIGtriAug},
$G$ is obtained from $\hat{G}$ by shrinking $U = \{u_3^+,u_4^+,u_5^+, u_3^-,u_4^-,u_5^-\}$,
and we have a feasible edge set $\{\{ u_1^+, u_2^- \}, \{ u_2^+, v_U^- \}\}$.
If we find an $S$-$T$ path $P_1$ consisting of a single arc resulting from $(u_5^+,u_1^-)$,
then $F \triangle P_1$ is feasible
and $\mbox{\textsc{Augment}}(G,F,P)$ and $\mbox{\textsc{Expand}}(G,F)$ follow.
In contrast,
in Fig.\ \ref{FIGtriShrink},
suppose that we find an $S$-$T$ path $P_2$ consisting of a single arc $(u_3^+,u_1^-)$.
Then,
$F \triangle P_2$ is \emph{not} feasible
and $\mbox{\textsc{Shrink}}(W)$ follows,
where
$W = \{ u_1^+,u_2^+,u_3^+, u_1^-,u_2^-,u_3^- \}$.
Note that
the family of vertex sets shrunk by our algorithm corresponds to a \emph{triangle cluster}
in the triangle-free $2$-matching algorithm due to Cornu\'ejols and Pulleyblank \cite{CP80}.
\begin{figure}
\centering
\includegraphics[height=.18\textheight]{triAug.pdf}
\caption{If we find an $S$-$T$ path of a single arc resulting from $(u_5^+,u_1^-)$ (dotted edge in $G$),
we execute $\mbox{\textsc{Augment}}(G,F,P)$ and $\mbox{\textsc{Expand}}(G,F)$.
The set of thick edges in $\hat{G}$ is the obtained $\U$-feasible{} $1$-matching,
where the dashed edges are those added by $\mbox{\textsc{Expand}}(G,F)$.
This $\U$-feasible{} $1$-matching corresponds to a triangle-free $2$-matching
(indicated by the thick edges)
in $G_0$.}
\label{FIGtriAug}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{triShrink.pdf}
\caption{If we find an $S$-$T$ path of a single arc resulting from $(u_3^+,u_1^-)$ (dotted edge in $G$),
we execute $\mbox{\textsc{Shrink}}(W)$,
where $W = \{ u_1^+,u_2^+,u_3^+, u_1^-,u_2^-,u_3^- \}$.
The parallel edges between $\p{u_{W}}$ and $\m{u_{W}}$ result from $\{\p{u_1},\m{u_5}\}$ and $\{\p{u_5},\m{u_1}\}$.
The shrunk triangles $\{u_1,u_2,u_3\}$ and $\{u_3,u_4,u_5\}$ in $G_0$ form a triangle cluster \cite{CP80}.}
\label{FIGtriShrink}
\end{figure}
For \textsc{Expand}($G,F$),
we expand each shrunk $U \in \mathcal{U}$ one by one.
We add two edges from $\hat{E}[U]$ to $F$ such that $F$ remains a $1$-matching,
which always obtains the $\mathcal{U}$-feasibility of the output $F$.
It is clear that the feasibility of $F$ can be determined in $\order{n}$ time,
and
that of $F_i$ in $\mbox{\textsc{FindViolatingSet}}(G,F,P)$ can be determined in constant time for each $i=1,\ldots, l$.
In addition,
\textsc{Expand}($G,F$) needs $\order{n}$ time.
\subsubsection{Matroids and Arborescences}
Recall the instances constructed in Sect.\ \ref{SECarb}.
If Algorithm \ref{ALGunweighted} is applied to finding a maximum independent set of a matroid,
then
an $S$-$T$ path always consists of one arc,
i.e.,\
the solution is greedily augmented.
However,
Procedure $\mbox{\textsc{Shrink}}(U)$ may be invoked,
which is simply the contraction of a circuit for matroids.
If Algorithm \ref{ALGunweighted} is applied to finding an arborescence,
then
an $S$-$T$ path always consists of one arc
and the solution is greedily augmented.
Here,
Procedure $\mbox{\textsc{Shrink}}(U)$ corresponds to shrinking a directed cycle.
\subsection{$C_{4k+2}$-free $2$-matchings: New Example in Our Framework}
So far,
we have viewed that some problems in the literature are described as the $\U$-feasible{} $t$-matching problem in bipartite graphs under the assumption that $(G,\mathcal{U},t)$ admits expansion.
Here we exhibit a new problem which falls in this framework.
Let $G=(V,E)$ be a simple bipartite graph.
A $2$-matching $F \subseteq E$ is called \emph{$C_{4k+2}$-free} if $F$ excludes cycles of length $4k+2$ for every positive integer $k$.
In other words,
the length of a cycle in $F$ must be a multiple of four.
Now $C_{4k+2}$-free $2$-matchings are described as $\U$-feasible{} $2$-matchings, where
$$\mathcal{U} = \{U \subseteq V \mid \mbox{$|\p{U}|=|\m{U}|=2k+1$ for some positive integer $k$}\}.$$
A solvable class of this form of the $\U$-feasible{} $2$-matching problem is obtained by recalling the instances for even factors:
a class where $G[U]$ is a symmetric bipartite graph
and all of the twins in $G[U]$ are connected by an edge for every $U \in \mathcal{U}$.
To be precise,
every $U \in\mathcal{U}$ is described as
$\p{U}=\{u_1,\ldots, u_{2k+1}\}$ and $\m{U}= \{v_1,\ldots, v_{2k+1}\}$,
where
$\{u_i, v_j\} \in E$ if and only if $\{u_j,v_i\} \in E$,
and
$\{u_i, v_i\} \in E$ for each $i=1,\ldots, 2k+1$.
Then,
it is straightforward to see that
$(G,\mathcal{U},t)$ admits expansion by following the arguments for even factors in Sect.\ \ref{SECbief} and \ref{SECappef}.
\section{Weighted $\U$-feasible{} $t$-matching}
\label{SECw}
In this section,
we extend the min-max theorem and the algorithm presented in Sect.\ \ref{SECunweighted} to the maximum-weight $\U$-feasible{} $t$-matching problem.
Recall that $G$ is a simple bipartite graph
and $(G,\mathcal{U},t)$ admits expansion.
We further assume that
$w$ is vertex-induced on each $U \in \mathcal{U}$,
which commonly extends the assumptions for the maximum-weight square-free and even factor problems.
\subsection{Linear Program}
Here,
we describe a linear programming relaxation of
the maximum-weight $\U$-feasible{} $t$-matching problem
in variable $x \in \mathbf{R}\sp{E}$:
\begin{alignat}{3}
\mbox{(P)}\quad &{}\mbox{maximize} \quad {}&&{} \sum_{e \in E}w(e) x(e) {}&& \\
&{}\mbox{subject to} \quad {}&&{} x(\delta(v)) \le t \quad {}&&{}(v \in V), \\
\label{EQufeas}
&{} {}&&{} x(E[U]) \le \left\lfloor \frac{t|U|-1}{2} \right\rfloor \quad {}&&{}(U \in \mathcal{U}), \\
&{} {}&&{} 0 \le x(e) \le 1 \quad {}&&{}(e \in E).
\end{alignat}
Note that
Constraint \eqref{EQufeas}, which describes $\mathcal{U}$-feasibility,
is a common extension of the blossom constraint for
the nonbipartite matching problem ($t=1$),
and
the subtour elimination constraints for the TSP ($t=2$).
Its dual program
in variables
$p\in \mathbf{R}\sp{V}$,
$q\in \mathbf{R}\sp{E}$,
and
$r \in \mathbf{R}\sp{\mathcal{U}}$
is given as follows:
\begin{alignat}{3}
\mbox{(D)}\quad &{}\mbox{minimize} \quad {}&&{} t\sum_{v \in V}p(v) + \sum_{e \in E}q(e) + \sum_{U\in \mathcal{U}}\left\lfloor \frac{t|U|-1}{2} \right\rfloor r(U) {}&& \\
&{}\mbox{subject to} \quad {}&&{} p(u) + p(v) + q(e) + \sum_{U \in \mathcal{U}\colon e \in E[U]}r(U) \ge w(e)\quad {}&&{}(e=\{u,v\} \in E), \\
&{} {}&&{} p(v) \ge 0 \quad {}&&{}(v \in V), \\
&{} {}&&{} q(e) \ge 0 \quad {}&&{}(e \in E), \\
&{} {}&&{} r(U) \ge 0 \quad {}&&{}(U \in \mathcal{U}).
\end{alignat}
We define $w'\in \mathbf{R}\sp{E}$ by
\begin{align*}
w'(e)
=
p(u) + p(v)+ q(e) + \sum_{U \in \mathcal{U}\colon e \in E[U]}r(U) - w(e) \quad(e=\{u,v\} \in E).
\end{align*}
The complementary slackness conditions for (P) and (D) are as follows.
\begin{alignat}{2}
\label{EQcsx}
&{}x(e) > 0 \Longrightarrow w'(e)=0 \quad{}&&{} (e \in E), \\
\label{EQcsp}
&{}p(v) > 0 \Longrightarrow x(\delta (v)) = t \quad{}&&{} (v \in V), \\
\label{EQcsq}
&{}q(e) > 0 \Longrightarrow x(e) = 1 \quad{}&&{} (e \in E), \\
\label{EQcsr}
&{}r(U) > 0 \Longrightarrow x(E[U]) = \left\lfloor \frac{t|U|-1}{2} \right\rfloor \quad{}&&{} (U \in \mathcal{U}).
\end{alignat}
\subsection{Primal-Dual Algorithm}
In this section,
we demonstrate a combinatorial primal-dual algorithm for the maximum-weight $\U$-feasible{} $t$-matching problem
in bipartite graphs,
where $(G,\mathcal{U},t)$ admits expansion and
$w$ is vertex-induced for each $U \in \mathcal{U}$.
We maintain primal and dual feasible solutions
that satisfy \eqref{EQcsx}, \eqref{EQcsq}, \eqref{EQcsr},
and \eqref{EQcsp} for $v \in \m{V}$.
The algorithm terminates when \eqref{EQcsp} is obtained for every $v \in \p{V}$.
Again, we denote the input graph by $\hat{G} = (\h{V}, \h{E})$,
and
the graph in hand,
i.e., the graph resulting from possibly repeated shrinkings,
by $G=(V,E)$.
The variables in the algorithm are
$F \subseteq E$,
$p \in \mathbf{R}\sp{\h{V}}$,
$q \in \mathbf{R}\sp{\h{E}}$,
and
$r \in \mathbf{R}\sp{\mathcal{U}}$.
Note that $p$ and $q$ are always defined on the original vertex and edge sets,
respectively.
Initially,
we set
\begin{align}
&F = \emptyset, &
&p(v) =
\begin{cases}
\max\{w(e) \mid e \in \delta(v)\} & (v \in \p{V}), \\
0 &(v \in \m{V}),
\end{cases}\notag\\
& q(e) = 0 \quad (e \in E), &
& r(U) = 0 \quad (U \in \mathcal{U}).
\label{EQinitial}
\end{align}
The auxiliary digraph $D$ is constructed as follows.
Here,
the major differences from Sect.\ \ref{SECalg} are that
we only use an edge $e$ with $w'(e)=0$,
and a vertex in $\p{V}$ can become a sink vertex.
\paragraph{Procedure \textsc{ConstructAuxiliaryDigraph}$(G,F,p,q,r)$.}
Here,
we define a digraph $(V,A)$ by
$$A = \{ (\partial^+e,\partial^-e) \mid \mbox{$e\in E \setminus F$, $w'(e)=0$} \}
\cup \{ (\partial^-e,\partial^+e) \mid e=\{u,v\} \in F \}.$$
The sets of source vertices $S \subseteq \p{V}$ and
sink vertices $T \subseteq \p{V} \cup \m{V}$ are
defined by
\begin{alignat*}{2}
&S = {}&&{} \{ u \in \p{V_{\mathrm{n}}} \mid \mbox{$\deg_F(u) \le t-1$, $p(u) > 0$} \} \\
&&&{} \cup \{ \p{u_U} \in \p{V_{\mathrm{p}}} \mid \mbox{$\deg_F(\p{u_U}) = 0$, $p(u) > 0$ for some $u \in \p{U}$} \}, \\
&T = {}&&{} \{ v \in \m{V_{\mathrm{n}}} \mid \deg_F(v) \le t-1 \} \cup \{ \m{v_U} \in \m{V_{\mathrm{p}}} \mid \deg_F(\m{v_U}) = 0 \}\\
&&&{} \cup \{ u \in \p{V_{\mathrm{n}}} \mid \mbox{$\deg_F(u) = t$, $p(u) = 0$} \} \\
&&&{} \cup \{ \p{u_U} \in \p{V_{\mathrm{p}}} \mid \mbox{$\deg_F(\p{u_U}) = 1$, $p(u) = 0$ for some $u \in \p{U}$} \}.
\end{alignat*}
Return $D=(V,A;S,T)$,
\medskip
Suppose that $D$ has a directed path $P$ from $S$ to $T$,
and let $F' := F \triangle P$.
If $F'$ is feasible,
we execute $\mbox{\textsc{Augment}}(G,F,P)$,
which is the same as in Sect.\ \ref{SECalg}.
Note that,
if $P$ ends in a vertex in $T \cap \p{V}$,
then $|F|$ does not increase.
However,
in this case,
the number of vertices satisfying \eqref{EQcsp} increases by one,
and we get closer to the termination condition (achieving \eqref{EQcsp} at every vertex).
If $F'$ is not feasible,
we apply \textsc{ViolatingSet}($G,F,P$) as in Sect.\ \ref{SECalg}.
For the output $U$ of \textsc{ViolatingSet}($G,F,P$),
if $p(u) = 0$ holds for some $u \in \p{U}$,
then we execute
$\mbox{\textsc{Modify}}(G,F,U)$ below.
Otherwise,
we apply
$\mbox{\textsc{Shrink}}(U)$ as in Sect.\ \ref{SECalg}.
\paragraph{Procedure $\mbox{\textsc{Modify}}(G,F,U)$.}
Let $u^* \in \p{U}$ satisfy $p(u^*) = 0$.
Then find $K \subseteq E[U]$ such that
\begin{align*}
\deg_{K}(u)=
\begin{cases}
t & (u \in \p{U_{\mathrm{n}}} \setminus \{u\sp{*}\}), \\
t-1 & (u = u^*), \\
0 & (u = \p{u_{U'}} \in \p{U_{\mathrm{p}}}, u^* \in U'), \\
\deg_{F[U]}(u) & (u \in \m{U_{\mathrm{n}}} \cup \m{U_{\mathrm{p}}}).
\end{cases}
\end{align*}
Here, return $F:= (F \setminus F[U]) \cup K$.
\medskip
If $D$ does not have a directed path from $S$ to $T$,
then
update the dual variables $p$, $q$, and $r$
by procedure $\mbox{\textsc{UpdateDualSolution}}(G,F,p,q,r)$ described below.
\paragraph{Procedure $\mbox{\textsc{UpdateDualSolution}}(G,F,p,q,r)$.}
Let $R \subseteq V$ be the set of vertices reachable from $S$ in the auxiliary digraph $D$.
Then,
\begin{align*}
&{}p(v) :=
\begin{cases}
p(v) - \epsilon & (v \in \p{\hat{R}}), \\
p(v) + \epsilon & (v \in \m{\hat{R}}), \\
p(v) & (v \in \hat{V} \setminus \hat{R}),
\end{cases} \\
&{}q(e) :=
\begin{cases}
q(e) + \epsilon &(\mbox{$\partial^+e \in \p{\hat{R}}$, $\partial^-e \in \m{\hat{V}}\setminus\m{\hat{R}}$}), \\
q(e) & (\mbox{otherwise}),
\end{cases} \\
&{}r(U) :=
\begin{cases}
r(U) + \epsilon & (\p{u_U} \in \p{R}, \m{v_U} \in \m{V} \setminus \m{R}), \\
r(U) - \epsilon & (\p{u_U} \in \p{V} \setminus \p{R}, \m{v_U} \in \m{R}), \\
r(U) & (\mbox{otherwise}),
\end{cases}
\end{align*}
where
\begin{align*}
{}&{}\epsilon = \min\{\epsilon_1,\epsilon_2,\epsilon_3\}, \quad
\epsilon_1 = \min\{w'(\{u,v\}) \mid u\in \p{\hat{R}}, v \in \m{\hat{V}} \setminus \m{\hat{R}}\},
\\
{}&{}
\epsilon_2 = \min\{p(u) \mid u \in \p{\hat{R}}\},
\quad
\epsilon_3 = \min\{r(U) \mid \p{u_U} \in \p{V} \setminus \p{R}, \m{v_U} \in \m{R} \}.
\end{align*}
Then
return $(p,q,r)$.
\medskip
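The step length $\epsilon$ combines three minima; a hedged sketch of its computation is given below, with our own encodings of the reachable sets and the shrunk sets (the tuple encoding of shrunk status is illustrative only).

```python
def compute_epsilon(E_hat, w_prime, p, r, R_plus, R_minus, shrunk_status):
    """Sketch of the step-length computation in UpdateDualSolution.
    eps_1: min reduced cost w'(e) over edges from R^+ to the unreached
    minus side; eps_2: min p(u) over reached plus-side vertices;
    eps_3: min r(U) over shrunk U with u_U^+ unreached and v_U^- reached.
    shrunk_status: list of (U, plus_reached, minus_reached)."""
    eps1 = min((w_prime[e] for e in E_hat
                if e[0] in R_plus and e[1] not in R_minus),
               default=float('inf'))
    eps2 = min((p[u] for u in R_plus), default=float('inf'))
    eps3 = min((r[U] for (U, plus_in_R, minus_in_R) in shrunk_status
                if not plus_in_R and minus_in_R),
               default=float('inf'))
    return min(eps1, eps2, eps3)

# toy numbers (illustrative only)
E_hat = [(0, 2), (1, 3)]
w_prime = {(0, 2): 5.0, (1, 3): 2.0}
p = {0: 4.0, 1: 7.0}
r = {"U1": 1.0}
eps = compute_epsilon(E_hat, w_prime, p, r,
                      R_plus={0, 1}, R_minus={2},
                      shrunk_status=[("U1", False, True)])
assert eps == 1.0  # eps_1 = 2.0, eps_2 = 4.0, eps_3 = 1.0
```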
Finally,
we expand every $U$ satisfying $r(U)=0$ after
\textsc{Augment}$(G,F,P)$,
\textsc{Modify}$(G,F,U)$,
and
\textsc{UpdateDualSolution}$(G,F,p,q,r)$.
If any $U' \subsetneq U$ satisfies $r(U')>0$,
which implies that $U'$ had been shrunk before $U$ was shrunk,
then $U'$ is maintained as shrunk.
\paragraph{Procedure $\mbox{\textsc{Expand}}(G,F,r)$.}
For each shrunk $U \in \mathcal{U}$ with $r(U)=0$,
execute the following procedures.
Update $G$ by replacing $\p{u_{U}}$ and $\m{v_U}$
by the graph induced by $U_{\mathrm{n}} \cup U_{\mathrm{p}}$ just before \textsc{Shrink}($U$) is applied.
Determine $F_U \subseteq E[U_{\mathrm{n}} \cup U_{\mathrm{p}}]$
of $\lfloor (t|U_{\mathrm{n}}| + |U_{\mathrm{p}}|-1)/2\rfloor - 1$ edges
such that $F' = F \cup F_U$ can be extended to a $\U$-feasible{} $t$-matching in $\hat{G}$.
Then return $F:= F'$.
\medskip
The pseudocode of the maximum-weight $\U$-feasible{} $t$-matching algorithm is presented in Algorithm \ref{ALGweighted}.
For the complexity,
one call of \textsc{UpdateDualSolution}($G,F,p,q,r$) requires $\order{m}$ time,
and it is executed $\order{n^3}$ times.
This is the only difference from the unweighted version;
thus,
the total complexity is $\order{t(n^3 (m+\alpha) + n^2 \beta)}$.
\begin{algorithm}[t]
\caption{Maximum-weight $\U$-feasible{} $t$-matching}
\label{ALGweighted}
\label{alg:general}
\begin{algorithmic}[1]
\State Set $F,p,q,r$ by \eqref{EQinitial}
\While{Condition \eqref{EQcsp} is violated}
\State $D \leftarrow \mbox{\textsc{AuxiliaryDigraph}}(G,x,p,q,r)$
\If{$D$ has an $S$-$T$ path $P$}
\If{$F \triangle P$ is $\U$-feasible}
\State $F \leftarrow \mbox{\textsc{Augment}}(G,F,P)$
\State $(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F,r)$
\Else
\State $(F,U) \leftarrow \mbox{\textsc{ViolatingSet}}(G,F,P)$
\If{$p(u)=0$ for some $u \in U$}
\State $F \leftarrow \mbox{\textsc{Modify}}(G,F,U)$
\State $(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F,r)$
\Else
\State $(G,F) \leftarrow \mbox{\textsc{Shrink}}(U)$
\EndIf
\EndIf
\Else
\State $(p,q,r) \leftarrow \mbox{\textsc{UpdateDualSolution}}(G,F,p,q,r)$
\State $(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F,r)$
\EndIf
\EndWhile
\State $(G,F) \leftarrow \mbox{\textsc{Expand}}(G,F,r)$ \\
\Return $(F,p,q,r)$
\end{algorithmic}
\end{algorithm}
It is clear that the optimal dual solution $(p,q,r)$ found by Algorithm \ref{ALGweighted} is integer if
the edge weight $w$ is integer.
Thus,
Algorithm \ref{ALGweighted} constructively proves the following theorem for the integrality of (P) and (D).
This is a common extension of dual integrality theorems for
nonbipartite matchings \cite{CM78},
even factors \cite{KM04},
triangle-free $2$-matchings \cite{CP80},
square-free $2$-matchings \cite{Mak07},
and
branchings \cite{Edm67}.
\begin{theorem}
If
$(G,\mathcal{U},t)$ admits expansion and
$w$ is vertex-induced
on each $U \in \mathcal{U}$,
then
the linear program \textsc{(P)} has an integer optimal solution.
Moreover,
the linear program
\textsc{(D)} also has an integer optimal solution such that
the number of sets $U \in \mathcal{U}$ with $r(U)>0$ is at most $n/2$.
\hfill \qedsymbol
\end{theorem}
\section{Conclusion}
\label{SECconcl}
We have presented a new framework for the optimal $\U$-feasible{} $t$-matching problem
and established a min-max theorem and combinatorial algorithm under the reasonable assumption that
$G$ is bipartite,
$(G,\mathcal{U},t)$ admits expansion,
and
$w$ is vertex-induced on each $U \in \mathcal{U}$.
Under this assumption,
our problem can describe a number of generalizations of the matching problem,
such as the matching and triangle-free $2$-matching problems in nonbipartite graphs,
the square-free $2$-matching problem in bipartite graphs,
and matroids and arborescences.
We have also obtained a new class of the restricted $2$-matching problem, the $C_{4k+2}$-free $2$-matching problem,
which can be solved efficiently under a corresponding assumption.
It is noteworthy that the $\mathcal{U}$-feasibility is a common generalization of
the blossom constraints for the nonbipartite matching problem
and
the subtour elimination constraints for the TSP\@.
We expect that this unified perspective will provide a
new approach to the TSP utilizing matching and matroid theories.
\section*{Acknowledgements}
I thank Yutaro Yamaguchi for the helpful comments regarding the draft of the paper.
I am also thankful to the anonymous referees for their careful reading and comments.
This work has been supported by
JSPS KAKENHI Grant Numbers 16K16012,
25280004,
and 26280001,
and
CREST, JST, Grant Number JPMJCR1402, Japan.
\bibliographystyle{myjorsj2}
\section{Introduction}
Compressive sensing aims to recover an unknown signal from the underdetermined linear measurements (see \cite{ek,fr} for a comprehensive view). It is known as phase retrieval or phaseless compressive sensing when there is no phase information. The phaseless compressive sensing problem has recently attracted considerable research interests and many algorithms have been proposed to solve this problem. Existing literature include \cite{cesv,cls,csv,cc,gx,njs,sbe}, to name a few. Specifically, the goal of phaseless compressive sensing is to recover $x\in\mathbb{R}^N$ up to a unimodular scaling constant from noisy magnitude measurements $y=|Ax|+e\in\mathbb{R}^m$ with the measurement matrix $A=(a_1,\cdots,a_m)^T\in\mathbb{R}^{m\times N}$, $|Ax|=(|\langle a_1,x\rangle|,\cdots,|\langle a_m,x\rangle|)^T$ and the noise term $e\in\mathbb{R}^m$. When $x$ is sparse or compressible, the stable recovery can be guaranteed by solving the following $\ell_1$ minimization problem
\begin{align}
\min\limits_{z\in\mathbb{R}^N}\,\lVert z\rVert_1\,\,\,\text{subject to\,\,\,$\lVert |Az|-y\rVert_2\leq \varepsilon$},
\end{align}
provided that the measurement matrix $A$ satisfies the strong restricted isometry property (SRIP) \cite{gwx,vx}. In the noiseless case, the first sufficient and necessary condition was presented in \cite{wx} by proposing a new version of null space property for the phase retrieval problem.
In this paper, we generalize the existing theoretical framework for phaseless compressive sensing to incorporate partial support information, where we consider the case that an estimate of the support of the signal is available. We follow the similar notations and arguments in \cite{fmsy,zy}. For an arbitrary signal $x\in\mathbb{R}^N$, let $x^k$ be its best $k$-term approximation, so that $x^k$ minimizes $\lVert x-f\rVert_{1}$ over all $k$-sparse vectors $f$. Let $T_0$ be the support of $x^k$, where $T_0\subset\{1,\cdots,N\}$ and $|T_0|\leq k$. Let $\tilde{T}$, the support estimate, be a subset of $\{1,2\cdots,N\}$ with cardinality $|\tilde{T}|=\rho k$, where $\rho\geq 0$ and $|\tilde{T}\cap T_0|=\alpha\rho k$ with $0\leq \alpha\leq 1$. Here the parameter $\rho$ determines the ratio of the size of the estimated support to the size of the actual support of $x^k$ (or the support of $x$ if $x$ is $k$-sparse), while the parameter $\alpha$ determines the ratio of the number of indices in the support of $x^k$ that are accurately estimated in $\tilde{T}$ to the size of $\tilde{T}$, i.e., $\alpha=\frac{|\tilde{T}\cap T_0|}{|\tilde{T}|}$. To incorporate prior support information $\tilde{T}$, we adopt the weighted $\ell_1$ minimization \begin{align}
\min\limits_{z\in\mathbb{R}^N}\sum\limits_{i=1}^N \mathrm{w}_i |z_i|,\,\,\,\text{subject to $\lVert |Az|-y\rVert_2\leq\varepsilon$},\,\,\,
\text{where $\mathrm{w}_i=\begin{cases}
\omega \in[0,1] &\text{$i \in\tilde{T}$,}\\
1 &\text{$i\in\tilde{T}^c$.}
\end{cases}$} \label{min}
\end{align}
We present the SRIP condition and weighted null space property condition to guarantee the success of the recovery via the weighted $\ell_1$ minimization problem above.
The paper is organized as follows. In Section 2, we introduce the definition of SRIP and present the stable recovery condition with this tool. In Section 3, the sufficient and necessary weighted null space property condition for the real sparse noise free phase retrieval is given. In Section 4, some numerical experiments are presented to illustrate our theoretical results. Finally, Section 5 is devoted to the conclusion.
Throughout the paper, for any vector $x\in\mathbb{R}^N$, we denote the $\ell_p$ norm by $\lVert x\rVert_p=(\sum_{i=1}^N |x_i|^p)^{1/p}$ for $p>0$ and the weighted $\ell_1$ norm as $\lVert x\rVert_{1,\mathrm{w}}=\sum_{i=1}^N \mathrm{w}_i |x_i|$. For any matrix $X$, $\lVert X\rVert_1$ denotes the entry-wise $\ell_1$ norm. For any set $T$, we denote its cardinality as $|T|$. The vector $x\in\mathbb{R}^N$ is called $k$-sparse if at most $k$ of its entries are nonzero, i.e., if $\lVert x\rVert_0=|\mathrm{supp}(x)|\leq k$, where $\mathrm{supp}(x)$ denotes the index set of the nonzero entries. We denote the index set $[N]:=\{1,2,\cdots,N\}$. For a matrix $A=(a_1,\cdots,a_m)^T\in\mathbb{R}^{m\times N}$ and an index set $I\subset[m]$, we denote by $A_I$ the sub-matrix of $A$ where only rows with indices in $I$ are kept, i.e., $A_I=(a_j,j\in I)^T$.
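As a quick numeric illustration of this notation (the data below are made up, not from the paper), the weighted $\ell_1$ norm and the support-estimate parameters $\rho$ and $\alpha$ can be computed as follows.

```python
# Illustrative data: rho = |T_tilde| / k, alpha = fraction of the
# estimate that lies in the true support T0.
def weighted_l1(x, T_tilde, omega):
    """||x||_{1,w} with w_i = omega on the support estimate, 1 elsewhere."""
    return sum((omega if i in T_tilde else 1.0) * abs(v) for i, v in enumerate(x))

x = [0.0, 3.0, 0.0, -2.0, 0.0, 1.0]       # k = 3 nonzeros, T0 = {1, 3, 5}
T0 = {i for i, v in enumerate(x) if v != 0}
T_tilde = {1, 3, 4}                        # support estimate of size rho * k
rho = len(T_tilde) / len(T0)               # 1.0
alpha = len(T_tilde & T0) / len(T_tilde)   # 2/3 of the estimate is accurate
print(rho, alpha)
print(weighted_l1(x, T_tilde, omega=0.5))  # 0.5*3 + 0.5*2 + 1*1 = 3.5
```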
\section{SRIP}
To recover sparse signals via $\ell_1$ minimization in the classical compressive sensing setting, \cite{ct} introduced the notion of restricted isometry property (RIP) and established a sufficient condition. We say a matrix $A$ satisfies the RIP of order $k$ if there exists a constant $\delta_k\in[0,1)$ such that for all $k$-sparse vectors $x$ we have \begin{align}
(1-\delta_k)\lVert x\rVert_2^2\leq\lVert Ax\rVert_2^2\leq (1+\delta_k)\lVert x\rVert_2^2.
\end{align}
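For intuition, the RIP constant of a small matrix can be computed by brute force; the sketch below (an illustration of ours, not part of the paper) handles $k\leq 2$ using the closed-form eigenvalues of the $2\times 2$ Gram matrices $A_S^TA_S$.

```python
import itertools, math

# Brute-force RIP constant delta_k for a tiny matrix (k <= 2 only):
# delta_k = max over k-column subsets S of max(1 - lambda_min, lambda_max - 1)
# where lambda are the eigenvalues of the Gram matrix of the columns in S.
def delta_k(A, k):
    m, N = len(A), len(A[0])
    col = lambda j: [A[i][j] for i in range(m)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    worst = 0.0
    for S in itertools.combinations(range(N), k):
        if k == 1:
            lams = [dot(col(S[0]), col(S[0]))]
        else:  # k == 2: closed-form eigenvalues of the 2x2 Gram matrix
            g11, g22 = dot(col(S[0]), col(S[0])), dot(col(S[1]), col(S[1]))
            g12 = dot(col(S[0]), col(S[1]))
            mid, rad = (g11 + g22) / 2, math.sqrt(((g11 - g22) / 2) ** 2 + g12 ** 2)
            lams = [mid - rad, mid + rad]
        worst = max(worst, *(abs(l - 1) for l in lams))
    return worst

c, s = 0.6, 0.8  # two unit columns with inner product 0.6
A = [[1.0, c], [0.0, s], [0.0, 0.0]]
print(delta_k(A, 1))  # ~0: every column has (numerically) unit norm
print(delta_k(A, 2))  # ~0.6: Gram eigenvalues are 1 +- 0.6
```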
Cai and Zhang \cite{cz} proved that the RIP of order $tk$ with $\delta_{tk}<\sqrt{\frac{t-1}{t}}$ where $t>1$ can guarantee the exact recovery in the noiseless case and stable recovery in the noisy case via $\ell_1$ minimization. This condition is sharp when $t\geq \frac{4}{3}$, see \cite{cz} for details. Very recently, Chen and Li \cite{cl} generalized this sharp RIP condition to the weighted $\ell_1$ minimization problem when partial support information was incorporated. We first present the following useful lemma, which is an extension of the result in \cite{cl}.\\
\begin{lemma}
Let $x\in\mathbb{R}^N, y=Ax+e\in\mathbb{R}^m$ with $\lVert e\rVert_2\leq \zeta$, and $\eta\geq 0$. Suppose that $A$ satisfies RIP of order $tk$ with $\delta_{tk}<\sqrt{\frac{t-d}{t-d+\gamma^2}}$ for some $t>d$, where $\gamma=\omega+(1-\omega)\sqrt{1+\rho-2\alpha\rho}$ and \begin{align}
d=\begin{cases}
1, &\text{$\omega=1$}\\
1-\alpha\rho+a, &\text{$0\leq \omega<1$}
\end{cases}
\end{align}
with $a=\max\{\alpha,1-\alpha\}\rho$. Then for any $$
\hat{x}\in\{z\in\mathbb{R}^N:\lVert z\rVert_{1,\mathrm{w}}\leq\lVert x\rVert_{1,\mathrm{w}}+\eta,\lVert Az-y\rVert_2\leq \varepsilon\},
$$
we have \begin{align}
\lVert \hat{x}-x\rVert_2\leq C_1(\zeta+\varepsilon)+C_2\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)}{\sqrt{k}}+C_2\frac{\eta}{\sqrt{k}},
\end{align}
where \begin{align*}
C_1&=\frac{\sqrt{2(t-d)(t-d+\gamma^2)(1+\delta_{tk})}}{(t-d+\gamma^2)(\sqrt{\frac{t-d}{t-d+\gamma^2}}-\delta_{tk})},\\
C_2&=\frac{\sqrt{2}\delta_{tk}\gamma+\sqrt{(t-d+\gamma^2)(\sqrt{\frac{t-d}{t-d+\gamma^2}}-\delta_{tk})\delta_{tk}}}{(t-d+\gamma^2)(\sqrt{\frac{t-d}{t-d+\gamma^2}}-\delta_{tk})}+\frac{1}{\sqrt{d}}.
\end{align*} \\
\end{lemma}
\noindent
{\bf Remark 1}\,\, Note that if $x^{\ell_2}$ is the solution of the weighted $\ell_1$ minimization problem: $$
\min\limits_{z\in\mathbb{R}^N}\,\,\lVert z\rVert_{1,\mathrm{w}},\,\,\text{subject to\,\,$\lVert Az-y\rVert_2\leq \varepsilon$},
$$ then $x^{\ell_2}\in\{z\in\mathbb{R}^N:\lVert z\rVert_{1,\mathrm{w}}\leq\lVert x\rVert_{1,\mathrm{w}}+\eta,\lVert Az-y\rVert_2\leq \varepsilon\}$ with $\eta=0$. Therefore, this lemma is an extension of Theorem 3.1 in \cite{cl} by letting $\zeta=\varepsilon$ and $\eta=0$. The proof follows from almost the same procedure for the proof of Theorem 3.1 in Section 4 of \cite{cl} via replacing the $P=\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)}{\sqrt{k}\gamma}$ with $P'=\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)+\eta}{\sqrt{k}\gamma}$, and letting $\zeta=\varepsilon$. In order not to repeat, we leave out all the details. In addition, this result also generalizes the Lemma 2.1 in \cite{gwx}, which is the special case with the noise term $e=0$, $\zeta=0$ and $\omega=1$. This lemma will play a crucial role in establishing the stable phaseless recovery result via weighted $\ell_1$ minimization later on.\\
To address the phaseless compressive sensing problem (\ref{min}), a stronger version of RIP is needed. Its definition is provided as follows.\\
\begin{definition} (SRIP \cite{gwx,vx})
We say a matrix $A=(a_1,\cdots,a_m)^T\in\mathbb{R}^{m\times N}$ has the Strong Restricted Isometry Property (SRIP) of order $k$ with bounds $\theta_{-},\theta_{+}\in(0,2)$ if \begin{align}
\theta_{-}\lVert x\rVert_2^2\leq \min\limits_{I\subseteq [m],|I|\geq m/2}\lVert A_{I}x\rVert_2^2\leq \max\limits_{I\subseteq [m],|I|\geq m/2}\lVert A_{I}x\rVert_2^2\leq \theta_{+}\lVert x\rVert_2^2 \label{srip}
\end{align}
holds for all $k$-sparse vectors $x\in\mathbb{R}^N$, where $[m]=\{1,\cdots,m\}$. We say $A$ has the Strong Lower Restricted Isometry Property of order $k$ with bound $\theta_{-}$ if the lower bound in (\ref{srip}) holds. Similarly, we say $A$ has the Strong Upper Restricted Isometry Property of order $k$ with bound $\theta_{+}$ if the upper bound in (\ref{srip}) holds.\\
\end{definition}
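To see the definition in action, the following brute-force sketch (illustrative only, and exponential in $m$) evaluates the SRIP bounds of order $k=1$ for a toy matrix; it shows how the strong lower bound can fail even when every column has unit norm, because some half of the rows may annihilate a column.

```python
import itertools

# Brute-force SRIP bounds of order k = 1: enumerate every row subset I
# with |I| >= m/2 and every column j, tracking min/max of ||A_I e_j||_2^2.
def srip_bounds_k1(A):
    m, N = len(A), len(A[0])
    vals = []
    for size in range(-(-m // 2), m + 1):          # |I| >= ceil(m/2)
        for I in itertools.combinations(range(m), size):
            for j in range(N):
                vals.append(sum(A[i][j] ** 2 for i in I))
    return min(vals), max(vals)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
theta_minus, theta_plus = srip_bounds_k1(A)
print(theta_minus, theta_plus)  # 0.0 2.0: the strong *lower* RIP fails,
# since I = {1, 3} annihilates the first column.
```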
Next, we present the conditions for the stable recovery via weighted $\ell_1$ minimization by using SRIP.
\begin{theorem}
Let $x\in\mathbb{R}^N, y=|Ax|+e\in\mathbb{R}^m$ with $\lVert e\rVert_2\leq \zeta$. Adopt the notations in Lemma 1 and assume that $A\in\mathbb{R}^{m\times N}$ satisfies the SRIP of order $tk$ with bounds $\theta_{-},\theta_{+}\in (0,2)$ such that \begin{align}
t\geq \max\left\{d+\frac{\gamma^2(1-\theta_{-})^2}{2\theta_{-}-\theta_{-}^2},d+\frac{\gamma^2(1-\theta_{+})^2}{2\theta_{+}-\theta_{+}^2}\right\}.\label{sripc}
\end{align}
Then any solution $x^{\sharp}$ of (\ref{min}) satisfies \begin{align}
\min\{\lVert x^{\sharp}-x\rVert_2,\lVert x^{\sharp}+x\rVert_2\}\leq C_1(\zeta+\varepsilon)+C_2\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)}{\sqrt{k}}. \label{stable}
\end{align}
where $C_1$ and $C_2$ are constants defined in Lemma 1. \\
\end{theorem}
\noindent
{\bf Remark 2}\,\, As it has been proved in \cite{vx} that Gaussian matrices with $m=O(tk\log(N/k))$ satisfy SRIP of order $tk$ with high probability, thus the stable recovery result (\ref{stable}) can be achieved by using Gaussian measurement matrix with appropriate number of measurements $m$.\\
\noindent
{\bf Remark 3}\,\, Note that when the weight $\omega=1$, we have $\gamma=d=1$. Then, by assuming $\zeta=\varepsilon=0$ and $x$ is exactly $k$-sparse, our theorem reduces to Theorem 2.2 in \cite{vx}. That is, if $A$ satisfies the SRIP of order $tk$ with bounds $\theta_{-},\theta_{+}$ and $t\geq\max\{\frac{1}{2\theta_{-}-\theta_{-}^2},\frac{1}{2\theta_{+}-\theta_{+}^2}\}$, then for any $k$-sparse signal $x\in\mathbb{R}^N$ we have $\mathop{\arg\min}_{z\in\mathbb{R}^N}\{\lVert z\rVert_{1}: |Az|=|Ax|\}=\{\pm x\}$. Similarly, if we let the noise term $e=0$, $\zeta=0$ and $\omega=1$, this theorem goes to Theorem 3.1 in \cite{gwx}.\\
\noindent
{\bf Remark 4}\,\, If $\alpha=\frac{1}{2}$, we have $\gamma=d=1$. The sufficient condition (\ref{sripc}) of Theorem 1 is identical to that of Theorem 2.2 in \cite{vx} and that of Theorem 3.1 in \cite{gwx}. And the constants $C_1=c_1=\frac{\sqrt{2(1+\delta_{tk})}}{1-\sqrt{t/(t-1)}\delta_{tk}}, C_2=c_2=\frac{\sqrt{2}\delta_{tk}+\sqrt{(\sqrt{t(t-1)}-\delta_{tk}t)\delta_{tk}}}{\sqrt{t(t-1)-\delta_{tk}t}}$ (see Theorem 3.1 in \cite{gwx}). In addition, if $0\leq \omega<1$ and $\alpha>\frac{1}{2}$, then $d=1$ and $\gamma<1$. The sufficient condition (\ref{sripc}) in Theorem 1 is weaker than that of Theorem 2.2 in \cite{vx} and that of Theorem 3.1 in \cite{gwx}. In this case, the constants $C_1<c_1, C_2<c_2$.\\
Set $t^{\omega}=\max\left\{d+\frac{\gamma^2(1-\theta_{-})^2}{2\theta_{-}-\theta_{-}^2},d+\frac{\gamma^2(1-\theta_{+})^2}{2\theta_{+}-\theta_{+}^2}\right\}$.
We illustrate how the constants $t^{\omega}$, $C_1$ and $C_2$ change with $\omega$ for different values of $\alpha$ in Figure 1. In all the plots, we set $\rho=1$. In the plot of $t^{\omega}$, we set $\theta_{-}=\frac{1}{2}$ and $\theta_{+}=\frac{3}{2}$, then $t^{\omega}=d+\frac{\gamma^2}{3}$. In the plots of $C_1$ and $C_2$, we fix $t=4$ and $\delta_{tk}=0.3$. Note that if $\omega=1$ or $\alpha=0.5$, then $t^{\omega}\equiv 1+\frac{1}{3}=\frac{4}{3}$, $C_1\equiv c_1$ and $C_2\equiv c_2$. The figure shows that $t^{\omega}$ decreases as $\alpha$ increases, which means that the sufficient condition (\ref{sripc}) becomes weaker as $\alpha$ increases. For each $\alpha>0.5$, the sufficient condition becomes stronger ($t^{\omega}$ increases) as $\omega$ increases. For instance, if $90\%$ of the support estimate is accurate ($\alpha=0.9$) and $\omega=0.6$, we have $t^{\omega}=1.2022$, while $t^{\omega}=1.3333$ for standard $\ell_1$ minimization ($\omega=1$). The opposite conclusion holds for the case $\alpha<0.5$. In addition, as $\alpha$ increases, the constant $C_1$ decreases with $t=4$ and $\delta_{tk}=0.3$. Meanwhile, the constant $C_2$ with $\alpha\neq 0.5$ is smaller than that with $\alpha=0.5$. \\
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth,height=0.4\textheight]{constant.eps}
\caption{Comparison of the constants $t^{\omega}$, $C_1$ and $C_2$ for various values of $\alpha$. In all the plots, we set $\rho=1$. In the plot of $t^{\omega}$, we set $\theta_{-}=\frac{1}{2}$ and $\theta_{+}=\frac{3}{2}$. In the plots of $C_1$ and $C_2$, we fix $t=4$ and $\delta_{tk}=0.3$.}\label{fig:1}
\end{figure}
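The quantities plotted above are easy to reproduce numerically. The sketch below (using the parameter choices of Figure 1; the function name is ours) evaluates $t^{\omega}$ and recovers, e.g., the value $4/3$ for $\omega=1$ and $\approx 1.2022$ for $(\omega,\alpha)=(0.6,0.9)$ quoted in the text.

```python
import math

# Evaluates t^omega = d + gamma^2 * max{f(theta_-), f(theta_+)} with
# theta_- = 1/2, theta_+ = 3/2 and rho = 1 as in Figure 1.
def t_omega(omega, alpha, rho=1.0, th_minus=0.5, th_plus=1.5):
    gamma = omega + (1 - omega) * math.sqrt(1 + rho - 2 * alpha * rho)
    d = 1.0 if omega == 1 else 1 - alpha * rho + max(alpha, 1 - alpha) * rho
    f = lambda th: (1 - th) ** 2 / (2 * th - th ** 2)
    return d + gamma ** 2 * max(f(th_minus), f(th_plus))

print(t_omega(1.0, 0.9))   # 4/3: standard l1 minimization, independent of alpha
print(t_omega(0.6, 0.9))   # ~1.2022: a weaker requirement when alpha > 1/2
print(t_omega(0.6, 0.25))  # > 4/3: a stronger requirement when alpha < 1/2
```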
\noindent
{\bf Proof of Theorem 1.} For any solution $x^{\sharp}$ of (\ref{min}), we have $$
\lVert x^{\sharp}\rVert_{1,\mathrm{w}}\leq \lVert x\rVert_{1,\mathrm{w}}
$$
and $$
\lVert |Ax^{\sharp}|-|Ax|-e\rVert_2\leq \varepsilon.
$$
If we divide the index set $\{1,2,\cdots,m\}$ into two subsets \begin{align*}
T=\{j: \mathrm{sign}(\langle a_j,x^{\sharp}\rangle)=\mathrm{sign}(\langle a_j,x\rangle)\}\,\,\,\text{and}\,\,\,T^c=\{j: \mathrm{sign}(\langle a_j,x^{\sharp}\rangle)=-\mathrm{sign}(\langle a_j,x\rangle)\},
\end{align*}
then it implies that \begin{align}
\max\left\{\lVert A_T x^{\sharp}-A_T x-e\rVert_2,\;\lVert A_{T^c} x^{\sharp}+A_{T^c} x-e\rVert_2\right\}\leq \varepsilon.
\end{align}
Here either $|T|\geq m/2$ or $|T^c|\geq m/2$. If $|T|\geq m/2$, we use the fact that \begin{align}
\lVert A_T x^{\sharp}-A_T x-e\rVert_2\leq \varepsilon.
\end{align}
Then, we obtain $$
x^{\sharp}\in\{z\in\mathbb{R}^N:\lVert z\rVert_{1,\mathrm{w}}\leq \lVert x\rVert_{1,\mathrm{w}},\lVert A_T z-A_T x-e\rVert_2\leq \varepsilon\}.
$$
Since $A$ satisfies SRIP of order $tk$ with bounds $\theta_{-},\theta_{+}$ and $$
t\geq \max\left\{d+\frac{\gamma^2(1-\theta_{-})^2}{2\theta_{-}-\theta_{-}^2},d+\frac{\gamma^2(1-\theta_{+})^2}{2\theta_{+}-\theta_{+}^2}\right\}>d,
$$
therefore, the definition of SRIP implies that $A_T$ satisfies the RIP of order $tk$ with \begin{align}
\delta_{tk}\leq\max\{1-\theta_{-},\theta_{+}-1\}\leq \sqrt{\frac{t-d}{t-d+\gamma^2}}.
\end{align}
Thus, by using Lemma 1 with $\eta=0$, we have $$
\lVert x^{\sharp}-x\rVert_2\leq C_1(\zeta+\varepsilon)+C_2\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)}{\sqrt{k}}.
$$
Similarly, if $|T^c|\geq m/2$, we obtain the other corresponding result $$
\lVert x^{\sharp}+x\rVert_2\leq C_1(\zeta+\varepsilon)+C_2\frac{2(\omega\lVert x_{T_0^c}\rVert_1+(1-\omega)\lVert x_{\tilde{T}^c\cap T_0^c}\rVert_1)}{\sqrt{k}}.
$$
The proof of Theorem 1 is now completed.
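The sign-based splitting used above rests on the elementary identity that, on rows where the signs of $\langle a_j,x^{\sharp}\rangle$ and $\langle a_j,x\rangle$ agree, the magnitude gap $\big||\langle a_j,x^{\sharp}\rangle|-|\langle a_j,x\rangle|\big|$ equals $|\langle a_j,x^{\sharp}-x\rangle|$, and equals $|\langle a_j,x^{\sharp}+x\rangle|$ otherwise. A quick numeric check (illustrative data of ours, noise-free case):

```python
import random

# Verifies the sign-split identity on random rows: agreeing signs give
# | |<a,x#>| - |<a,x>| | = |<a, x# - x>|, disagreeing signs give |<a, x# + x>|.
random.seed(0)
m, N = 50, 8
A = [[random.gauss(0, 1) for _ in range(N)] for _ in range(m)]
x = [random.gauss(0, 1) for _ in range(N)]
x_sharp = [random.gauss(0, 1) for _ in range(N)]  # plays the role of x#

dot = lambda a, v: sum(ai * vi for ai, vi in zip(a, v))
ok = 0
for a in A:
    u, v = dot(a, x_sharp), dot(a, x)
    gap = abs(abs(u) - abs(v))
    target = abs(u - v) if u * v > 0 else abs(u + v)
    assert abs(gap - target) < 1e-12
    ok += 1
print("identity verified on", ok, "rows")
```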
\section{Weighted Null Space Property}
In this section, we consider the noiseless weighted $\ell_1$ minimization problem, i.e., \begin{align}
\min\limits_{z\in\mathbb{R}^N}\,\,\lVert z\rVert_{1,\mathrm{w}},\,\,\,\text{subject to $|Az|=|Ax|$},\,\,\,
\text{where $\mathrm{w}_i=\begin{cases}
\omega \in[0,1], &\text{$i \in\tilde{T}$}\\
1, &\text{$i\in\tilde{T}^c$}
\end{cases}$}.
\end{align}
We denote the kernel space of $A$ by $\mathcal{N}(A):=\{h\in\mathbb{R}^N: Ah=0\}$ and denote the $k$-sparse vector space $\Sigma_{k}^N:=\{x\in\mathbb{R}^N:\lVert x\rVert_0\leq k\}$.\\
\begin{definition}
The matrix $A$ satisfies the $\mathrm{w}$-weighted null space property of order $k$ if for any nonzero $h\in \mathcal{N}(A)$ and any $T\subset[N]$ with $|T|\leq k$ it holds that \begin{align}
\lVert h_T\rVert_{1,\mathrm{w}}<\lVert h_{T^c}\rVert_{1,\mathrm{w}}, \label{nsp}
\end{align}
where $T^c$ is the complementary index set of $T$ and $h_T$ is the restriction of $h$ to $T$. \\
\end{definition}
\noindent
{\bf Remark 5}\,\, Obviously, when the weight $\omega=1$, the weighted null space property reduces to the classical null space property. And according to the specific setting of $\mathrm{w}_i$, the expression (\ref{nsp}) is equivalent to $$
\omega\lVert h_{T\cap\tilde{T}}\rVert_1+\lVert h_{T\cap \tilde{T}^c}\rVert_1<\omega\lVert h_{T^c\cap\tilde{T}}\rVert_1+\lVert h_{T^c\cap \tilde{T}^c}\rVert_1\Leftrightarrow \omega\lVert h_T\rVert_1+(1-\omega)\lVert h_G\rVert_1<\lVert h_{T^c}\rVert_1,
$$
where $G=(T\cap\tilde{T}^c)\cup(T^c\cap \tilde{T})$ (see \cite{ms} for more arguments).\\
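This equivalence is a finite identity and can be sanity-checked numerically; the following sketch (illustrative, with made-up data) verifies that the two formulations have exactly the same gap.

```python
import random

# Checks that ||h_T||_{1,w} - ||h_{T^c}||_{1,w} equals
# omega*||h_T||_1 + (1-omega)*||h_G||_1 - ||h_{T^c}||_1,
# where G = (T n T~^c) u (T^c n T~).
random.seed(1)
N, omega = 12, 0.3
h = [random.gauss(0, 1) for _ in range(N)]
T = set(random.sample(range(N), 4))
T_tilde = set(random.sample(range(N), 5))
Tc = set(range(N)) - T
G = (T - T_tilde) | (Tc & T_tilde)

w = lambda i: omega if i in T_tilde else 1.0
l1 = lambda S: sum(abs(h[i]) for i in S)
l1w = lambda S: sum(w(i) * abs(h[i]) for i in S)

lhs = l1w(T) - l1w(Tc)                       # weighted formulation
rhs = omega * l1(T) + (1 - omega) * l1(G) - l1(Tc)
print(abs(lhs - rhs) < 1e-9)  # True: the two gaps coincide
```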
It is known that a signal $x\in\Sigma_k^N$ can be recovered via the weighted $\ell_1$ minimization problem if and only if the measurement matrix $A$ has the weighted null space property of order $k$. We state it as follows (see \cite{zxwk}):\\
\begin{lemma}
Given $A\in\mathbb{R}^{m\times N}$, for every $k$-sparse vector $x\in\mathbb{R}^N$ it holds that $$\mathop{\arg\min}\limits_{z\in\mathbb{R}^N}\,\,\{\lVert z\rVert_{1,\mathrm{w}}: Az=Ax\}=x
$$ if and only if $A$ satisfies the $\mathrm{w}$-weighted null space property of order $k$. \\
\end{lemma}
Next, we extend Lemma 2 to the following theorem on phaseless compressive sensing for the real-valued signal reconstruction.\\
\begin{theorem}
The following statements are equivalent:\\
(a) For any $k$-sparse $x\in\mathbb{R}^N$, we have \begin{align}
\mathop{\arg\min}\limits_{z\in\mathbb{R}^N}\{\lVert z\rVert_{1,\mathrm{w}}:|Az|=|Ax|\}=\{\pm x\}.
\end{align}\\
(b) For every $S\subseteq [m]$, it holds\begin{align}
\lVert u+v\rVert_{1,\mathrm{w}}<\lVert u-v\rVert_{1,\mathrm{w}} \label{pnsp}
\end{align}
for all nonzero $u\in \mathcal{N}(A_S)$ and $v\in \mathcal{N}(A_{S^c})$ satisfying $\lVert u+v\rVert_0\leq k$.\\
\end{theorem}
\noindent
{\bf Remark 6}\,\, If $\omega=1$, then Theorem 2 reduces to Theorem 3.2 in \cite{wx}. Since $\mathrm{w}_i=\omega$ when $i\in\tilde{T}$, and $\mathrm{w}_i=1$ otherwise, the expression (\ref{pnsp}) is equivalent to \begin{align*}
\omega\lVert u+v\rVert_1+(1-\omega)\lVert (u+v)_{\tilde{T}^c}\rVert_1<\omega\lVert u-v\rVert_1+(1-\omega)\lVert (u-v)_{\tilde{T}^c}\rVert_1.
\end{align*}
\\
\noindent
{\bf Proof of Theorem 2.} The proof follows from the proof of Theorem 3.2 in \cite{wx} with minor modifications. First we show $(a)\Rightarrow(b)$. Assume (b) is false, that is, there exist nonzero $u\in \mathcal{N}(A_S)$ and $v\in \mathcal{N}(A_{S^c})$ such that $$
\lVert u+v\rVert_{1,\mathrm{w}}\geq\lVert u-v\rVert_{1,\mathrm{w}}
$$
and $u+v\in\Sigma_{k}^N$. Now set $x=u+v\in\Sigma_k^N$, obviously for $i=1,\cdots,m$, we have $$
|\langle a_i, x\rangle|=|\langle a_i,u+v\rangle|=|\langle a_i,u-v\rangle|,
$$
since either $\langle a_i,u\rangle=0$ or $\langle a_i,v\rangle=0$. In other words $|Ax|=|A(u-v)|$. Note that $u-v\neq-x$, for otherwise we would have $u=0$, which is a contradiction. Then, it follows from (a) that we obtain \begin{align*}
\lVert x\rVert_{1,\mathrm{w}}=\lVert u+v\rVert_{1,\mathrm{w}}<\lVert u-v\rVert_{1,\mathrm{w}}.
\end{align*}
This is a contradiction. Thus, (b) holds.\\
Next we prove $(b)\Rightarrow (a)$. Let $b=(b_1,\cdots,b_m)^T=|Ax|$ where $x\in\Sigma_k^N$. For a fixed $\sigma=(\sigma_1,\cdots,\sigma_m)^T\in\{-1,1\}^m$, we set $b^{\sigma}=(\sigma_1b_1,\cdots,\sigma_m b_m)^T$. We now consider the following weighted $\ell_1$ minimization problem: \begin{align}
\min\limits_{z\in\mathbb{R}^N}\,\,\lVert z\rVert_{1,\mathrm{w}}\,\,\,\text{subject to\,\,\,$Az=b^{\sigma}$}.
\end{align}
Its solution is denoted as $x^{\sigma}$. Then, we claim that for any $\sigma\in\{1,-1\}^m$, if $x^{\sigma}$ exists (it may not exist), we have $$
\lVert x^{\sigma}\rVert_{1,\mathrm{w}}\geq \lVert x\rVert_{1,\mathrm{w}}
$$ and the equality holds if and only if $x^{\sigma}=\pm x$.
To prove the claim, we assume $\sigma^{\star}\in\{1,-1\}^m$ such that $b^{\sigma^{\star}}=Ax$. First note that the statement (b) implies the classical weighted null space property of order $k$. To see this, for any nonzero $h\in \mathcal{N}(A)$ and $T\subseteq [N]$ with $|T|\leq k$, we set $u=h$, $v=h_T-h_{T^c}$ and $S=[m]$. Then, we have $u\in \mathcal{N}(A_S)$ and $v\in \mathcal{N}(A_{S^c})$. Therefore, the statement (b) now implies \begin{align*}
2\lVert h_T\rVert_{1,\mathrm{w}}=\lVert u+v\rVert_{1,\mathrm{w}}<\lVert u-v\rVert_{1,\mathrm{w}}=2\lVert h_{T^c}\rVert_{1,\mathrm{w}}.
\end{align*}
As a consequence, we have $x^{\sigma^{\star}}=x$ by Lemma 2, and similarly $x^{-\sigma^{\star}}=-x$. Next, for any $\sigma\in\{-1,1\}^m$ with $\sigma\neq \pm \sigma^{\star}$, if $x^{\sigma}$ does not exist then there is nothing to prove. Assume it does exist and set $S_{\star}=\{i: \sigma_i=\sigma^{\star}_i\}$. Then \begin{align*}
\langle a_i, x^{\sigma}\rangle=\begin{cases}
\langle a_i,x\rangle &i\in S_{\star},\\
-\langle a_i,x\rangle &i\in S_{\star}^c.
\end{cases}
\end{align*}
Set $u=x-x^{\sigma}$ and $v=x+x^{\sigma}$. Obviously, $u\in \mathcal{N}(A_{S_{\star}})$ and $v\in \mathcal{N}(A_{S_{\star}^c})$. Furthermore, $u+v=2x\in\Sigma_k^N$. Then, by the statement (b), we have $$
2\lVert x\rVert_{1,\mathrm{w}}=\lVert u+v\rVert_{1,\mathrm{w}}<\lVert u-v\rVert_{1,\mathrm{w}}=2\lVert x^{\sigma}\rVert_{1,\mathrm{w}}.
$$
This proves (a) and the proof is completed.
\section{Simulations}
In this section, we present some simple numerical experiments to illustrate the benefits of using weighted $\ell_1$ minimization to recover sparse and compressible signals when partial prior support information is available in the phaseless compressive sensing case. In order to facilitate the computation, we follow a non-standard noise model: \begin{align}
b=|Ax|^2+e=\{a_i^T xx^Ta_i\}_{1\leq i\leq m}+e,
\end{align}
where $e\in\mathbb{R}^m$ is a noise term with $\lVert e\rVert_2\leq \varepsilon$. Then the weighted $\ell_1$ minimization goes to \begin{align}
\min\limits_{z\in\mathbb{R}^N}\sum\limits_{i=1}^N \mathrm{w}_i |z_i|,\,\,\,\text{subject to $\lVert |Az|^2-b\rVert_2\leq\varepsilon$},\,\,\,
\text{where $\mathrm{w}_i=\begin{cases}
\omega \in[0,1] &\text{$i \in\tilde{T}$,}\\
1 &\text{$i\in\tilde{T}^c$.}
\end{cases}$} \label{phaseless}
\end{align}
Here we adopt the compressive phase retrieval via lifting (CPRL) algorithm developed in \cite{oyds} to solve this phaseless recovery problem.
By using a lifting technique, this problem can be rewritten as a semidefinite program (SDP). More specifically, given the ground truth signal $x\in\mathbb{R}^N$, let $X=xx^{T}\in\mathbb{R}^{N\times N}$ be an induced rank-1 semidefinite matrix. We further denote $\Phi_i=a_i a_i^T$, a linear operator $B$ of $Z=zz^{T}\in\mathbb{R}^{N\times N}$ as \begin{align*}
B: Z\mapsto \{\mathrm{Tr}(\Phi_i Z)\}_{1\leq i\leq m}\in\mathbb{R}^m
\end{align*} and the weight matrix $W=\mathrm{diag}\{\mathrm{w}_i, 1\leq i\leq N\}\in\mathbb{R}^{N\times N}$. Then the phaseless vector recovery problem (\ref{phaseless}) can be cast as the following rank-1 matrix recovery problem: \begin{align*}
\min_{Z\in\mathbb{R}^{N\times N}}\,\,&\lVert WZW^{T}\rVert_1, \\
\text{subject to}\,\,\, &\lVert B(Z)-b\rVert_2\leq \varepsilon,\\
&\mathrm{rank}(WZW^{T})=1, Z\succeq 0.
\end{align*}
This is of course still a non-convex problem due to the rank constraint. This issue is addressed by relaxing $\mathrm{rank}(WZW^{T})$ to its convex surrogate $\mathrm{Tr}(WZW^{T})$. This leads to an SDP:
\begin{align}
\min_{Z\in\mathbb{R}^{N\times N}}\,\,&\mathrm{Tr}(WZW^{T})+\lambda\lVert WZW^{T}\rVert_1, \nonumber \\
\text{subject to}\,\,\, &\lVert B(Z)-b\rVert_2\leq \varepsilon, \nonumber \\
&Z\succeq 0, \label{sdp}
\end{align}
where $\lambda>0$ is a design parameter. Then the estimate of $x$ can finally be found by computing the rank-1 decomposition of the recovered matrix via singular value decomposition.
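Two identities explain why the lifted program tracks the weighted $\ell_1$ problem: for $Z=xx^{T}$ one has $\mathrm{Tr}(\Phi_i Z)=\langle a_i,x\rangle^2$, and the entrywise norm satisfies $\lVert WZW^{T}\rVert_1=\lVert x\rVert_{1,\mathrm{w}}^2$. The following sketch (with made-up data of ours) checks both numerically.

```python
import random

# Illustrative check of the lifting identities for Z = x x^T.
random.seed(2)
N, m, omega = 6, 10, 0.5
x = [random.gauss(0, 1) for _ in range(N)]
A = [[random.gauss(0, 1) for _ in range(N)] for _ in range(m)]
T_tilde = {0, 2}                                  # made-up support estimate
w = [omega if i in T_tilde else 1.0 for i in range(N)]

dot = lambda a, v: sum(p * q for p, q in zip(a, v))
for a in A:  # Tr(a a^T x x^T) = (a^T x)^2
    tr = sum(a[i] * a[j] * x[i] * x[j] for i in range(N) for j in range(N))
    assert abs(tr - dot(a, x) ** 2) < 1e-9

entrywise = sum(abs(w[i] * x[i] * x[j] * w[j]) for i in range(N) for j in range(N))
wl1 = sum(w[i] * abs(x[i]) for i in range(N))     # ||x||_{1,w}
print(abs(entrywise - wl1 ** 2) < 1e-9)  # True
```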
The recovery performance is assessed by the average reconstruction signal to noise ratio (SNR) over 10 experiments. The SNR is measured in dB and it is given by \begin{align}
\mathrm{SNR}(x,x^{\sharp})=20\log_{10}\left(\frac{\lVert x\rVert_2}{\min\{\lVert x^{\sharp}-x\rVert_2,\lVert x^{\sharp}+x\rVert_2\}}\right),
\end{align}
where $x$ is the true signal and $x^{\sharp}$ is the recovered signal. For all the experiments, we fix the parameter $\lambda=1$. In the experiments where the measurements are noisy, we set the noise $\{e_i,1\leq i\leq m\}\overset{i.i.d}\sim N(0,\sigma^2)$ with $\sigma=0.1$ and $\varepsilon=\lVert e\rVert_2$.
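For concreteness, the SNR with the global sign ambiguity resolved can be computed as below (a toy example of ours, not taken from the experiments): an estimate that matches $-x$ up to a residual of norm $0.5$, with $\lVert x\rVert_2=5$, yields exactly $20$ dB.

```python
import math

# The reconstruction SNR, taking the closer of x_sharp and -x_sharp to x.
def snr(x, x_sharp):
    norm = lambda v: math.sqrt(sum(t * t for t in v))
    diff = min(norm([a - b for a, b in zip(x_sharp, x)]),
               norm([a + b for a, b in zip(x_sharp, x)]))
    return 20 * math.log10(norm(x) / diff)

x = [3.0, 4.0]              # ||x||_2 = 5
x_sharp = [-2.5, -4.0]      # matches -x up to an error of norm 0.5
print(snr(x, x_sharp))      # 20.0
```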
\subsection{Sparse Case}
We first consider the case that $x$ is exactly sparse with an ambient dimension $N=32$ and fixed sparsity $k=4$. The sparse signals are generated by choosing $k$ nonzero positions uniformly at random, and then choosing the nonzero values from the standard normal distribution for these $k$ nonzero positions. The recovery is done via (\ref{sdp}) using a support estimate of size $|\tilde{T}|=4$ (i.e., $\rho=1$).
Figure 2 shows the recovery performance for different $\alpha$ and $\omega$ with an increasing number of measurements $m$, both in the noise free and noisy cases. It can be observed that when $\alpha=0.75>0.5$, the best recovery is achieved for very small $\omega$, whereas $\omega=1$ results in the lowest SNR in both cases. On the other hand, when $\alpha=0.25<0.5$, the recovery performance is better for large $\omega$ than for small $\omega$, and $\omega=0$ results in the lowest SNR. When $\alpha=0.5$, the performance gaps for different $\omega$ are not particularly large, and a medium value $\omega=0.5$ appears to achieve the best recovery. In the noise free case, perfect recovery can be achieved as long as the number of measurements $m$ is large enough. As expected, in all settings the SNR is lower in the noisy case than in the noise free case. These findings are largely consistent with the theoretical results provided in Section 2.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth,height=0.4\textheight]{sparse_measurements.eps}
\caption{Performance of weighted $\ell_1$ recovery in terms of SNR averaged over 10 experiments for sparse signals $x$ with $N=32$, $k=4$, while varying the number of measurements $m$. From left to right, $\alpha=0.75$, $\alpha=0.5$ and $\alpha=0.25$. (a) Noise Free. (b) $\sigma=0.1$.}\label{fig:2}
\end{figure}
\subsection{Compressible Case}
Here we generate a signal $x$ whose coefficients decay like $j^{-\theta}$ where $j\in\{1,\cdots,N\}$ and $\theta=4.5$. This kind of signal is not itself sparse, but can be well approximated by an exactly sparse signal. For this experiment, we set $k=4$, i.e., we use the best 4-term approximation. We fix $\rho=1$ as in the sparse case. The phaseless recovery results are presented in Figure 3. They show that on average a moderate value of $\omega$ ($\omega=0.5$) results in the best recovery. In general, when $\alpha>0.5$, smaller $\omega$ favours better reconstruction results, and the opposite conclusion holds for the case $\alpha<0.5$. Therefore, as expected, the behaviors observed in the exactly sparse case also occur in the compressible case.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth,height=0.4\textheight]{compressible_measurements.eps}
\caption{Performance of weighted $\ell_1$ recovery in terms of SNR averaged over 10 experiments for compressible signals $x$ with $N=32$, $\theta=4.5$, while varying the number of measurements $m$. From left to right, $\alpha=0.75$, $\alpha=0.5$ and $\alpha=0.25$. (a) Noise Free. (b) $\sigma=0.1$.}\label{fig:3}
\end{figure}
\section{Conclusion}
In this paper, we established the sufficient SRIP condition and the sufficient and necessary weighted null space property condition for phaseless compressive sensing using partial support information via weighted $\ell_1$ minimization, and we conducted some numerical experiments to illustrate the theoretical results.
Some further problems are left for future work. As we only consider the real-valued signal reconstruction case, it will be challenging to generalize the present results to the complex-valued signal case. Besides, it would be very interesting to construct a measurement matrix $A\in\mathbb{R}^{m\times N}$ satisfying the weighted null space property given in (\ref{pnsp}) directly.
\section*{Acknowledgements}
This work is supported by the Swedish Research Council grant (Reg.No. 340-2013-5342).
\section{Introduction}
Neural encoder-decoder models have been successfully applied to various natural language generation tasks including machine translation~\cite{Sutskever:2014:SSL:2969033.2969173}, summarization~\cite{rush-chopra-weston:2015:EMNLP}, and caption generation~\cite{journals/corr/VinyalsTBE14}.
Still, it is necessary to control the output length for abstractive summarization, which generates a summary for a given text while satisfying a space constraint.
In fact, Figure \ref{fig:length_dist} shows a large variance in the lengths of output sequences produced by a widely used encoder-decoder model~\cite{luong-pham-manning:2015:EMNLP}, which has no mechanism for controlling the output length.
\begin{figure}[!t]
\centering
\includegraphics[width=7.5cm]{./length_dist_cutted.pdf}
\caption{Difference in number of characters between correct headlines and outputs of a widely used LSTM encoder-decoder~\cite{luong-pham-manning:2015:EMNLP} which is trained on sentence-headline pairs created by \newcite{rush-chopra-weston:2015:EMNLP} from the annotated English Gigaword corpus. The difference was investigated for 3,000 sentence-headline pairs randomly sampled from the test splits.}
\label{fig:length_dist}
\end{figure}
\newcite{W18-2706} trained embeddings that correspond to each output length to control the output sequence length.
Since the embeddings for different lengths are independent, it is hard to generate a sequence of a length that is infrequent in the training data.
Thus, a method that can model any lengths continuously is required.
\newcite{kikuchi-EtAl:2016:EMNLP2016} proposed two learning based methods for an LSTM encoder-decoder: LenEmb and LenInit.
LenEmb inputs an embedding representing the remaining length in each decoding step.
Since this approach also prepares embeddings for each length independently, it suffers from the same problem as that in \newcite{W18-2706}.
On the other hand, LenInit can handle arbitrary lengths because it combines the scalar value of a desired length with a trainable embedding.
LenInit initializes the LSTM cell of the decoder with the embedding depending on the scalar value of the desired length.
\newcite{D18-1444} incorporated such scalar values into the initial state of the decoder in a CNN encoder-decoder.
These approaches can handle any length, but it is preferable to incorporate the distance to the desired terminal position into each decoding step, as in LenEmb.
In this study, we focused on Transformer~\cite{NIPS2017_7181}, which recently achieved the state-of-the-art score on the machine translation task.
We extend the sinusoidal positional encoding, which represents a position of each token in Transformer~\cite{NIPS2017_7181}, to represent a distance from a terminal position on the decoder side.
In this way, the proposed method considers the remaining length explicitly at each decoding step.
Moreover, the proposed method can handle any desired length regardless of its appearance in a training corpus because it uses the same continuous space for any length.
We conduct experiments on the headline generation task.
The experimental results show that our proposed method is able to not only control the output length but also improve the ROUGE scores from the baselines.
Our code and constructed test data are publicly available at: \href{https://github.com/takase/control-length}{https://github.com/takase/control-length}.
\section{Positional Encoding}
Transformer~\cite{NIPS2017_7181} uses a sinusoidal positional encoding to represent the position of an input.
Transformer feeds the sum of the positional encoding and token embedding to the input layer of its encoder and decoder.
Let $pos$ be the position and $d$ be the embedding size.
Then, the $i$-th dimension of the sinusoidal positional encoding $PE_{(pos, i)}$ is as follows:
\begin{align}
PE_{(pos, 2i)} &= {\rm sin}\bigg(\frac{pos}{10000^{\frac{2i}{d}}}\bigg), \label{eq:sin}\\
PE_{(pos, 2i+1)} &= {\rm cos}\bigg(\frac{pos}{10000^{\frac{2i}{d}}}\bigg). \label{eq:cos}
\end{align}
In short, each dimension of the positional encoding corresponds to a sinusoid whose period is $10000^{2i / d} \times 2\pi$.
Since this function returns an identical value at the same position $pos$, the above positional encoding can be interpreted as representing the absolute position of each input token.
In this paper, we extend Equations (\ref{eq:sin}) and (\ref{eq:cos}) to depend on the given output length and the distance from the terminal position.
We propose two extensions: length-difference positional encoding ($LDPE$) and length-ratio positional encoding ($LRPE$).
Then we replace Equations (\ref{eq:sin}) and (\ref{eq:cos}) with (\ref{eq:ctrl_diff_sin}) and (\ref{eq:ctrl_diff_cos}) (or (\ref{eq:ctrl_ratio_sin}) and (\ref{eq:ctrl_ratio_cos})) on the decoder side to control the output sequence length.
We define $LDPE$ and $LRPE$ as follows:
\begin{align}
LDPE_{(pos, len, 2i)} &= {\rm sin}\bigg(\frac{len - pos}{10000^{\frac{2i}{d}}}\bigg), \label{eq:ctrl_diff_sin}\\
LDPE_{(pos, len, 2i+1)} &= {\rm cos}\bigg(\frac{len - pos}{10000^{\frac{2i}{d}}}\bigg), \label{eq:ctrl_diff_cos} \\
LRPE_{(pos, len, 2i)} &= {\rm sin}\bigg(\frac{pos}{len^{\frac{2i}{d}}}\bigg), \label{eq:ctrl_ratio_sin}\\
LRPE_{(pos, len, 2i+1)} &= {\rm cos}\bigg(\frac{pos}{len^{\frac{2i}{d}}}\bigg), \label{eq:ctrl_ratio_cos}
\end{align}
where $len$ presents the given length constraint.
$LDPE$ returns an identical value at the position where the remaining length to the terminal position is the same.
$LRPE$ returns a similar value at the positions where the ratio of the remaining length to the terminal position is similar.
Let us consider the $d$-th dimension as the simplest example.
Since we obtain ${\rm sin}(pos / len)$ (or ${\rm cos}(pos / len)$) at this dimension, the equations yield the same value when the remaining length ratio is the same, e.g., $pos = 5$, $len = 10$ and $pos = 10$, $len = 20$.
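As an illustrative sketch (our own function names; dimensions indexed from 0), the two encodings can be computed per dimension as follows, and one can check that $LDPE$ coincides whenever the remaining length $len - pos$ is the same:

```python
import math

def ldpe(pos, length, i, d):
    """Length-difference positional encoding: sin/cos of (length - pos)
    scaled by 10000**(2i/d), matching the LDPE definition above."""
    angle = (length - pos) / (10000 ** (2 * (i // 2) / d))
    return math.sin(angle) if i % 2 == 0 else math.cos(angle)

def lrpe(pos, length, i, d):
    """Length-ratio positional encoding: sin/cos of pos scaled by
    length**(2i/d), matching the LRPE definition above."""
    angle = pos / (length ** (2 * (i // 2) / d))
    return math.sin(angle) if i % 2 == 0 else math.cos(angle)
```

For instance, `ldpe(3, 10, i, d)` equals `ldpe(13, 20, i, d)` for every dimension, since both positions are 7 tokens away from the terminal position.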
We add $LDPE$ (or $LRPE$) to the input layer of Transformer in the same manner as in \newcite{NIPS2017_7181}.
In the training step, we assign the length of the correct output to $len$.
In the test phase, we control the output length by assigning the desired length to $len$.
\section{Experiments}
\subsection{Datasets}
We conduct experiments on the headline generation task on Japanese and English datasets.
The purpose of the experiments is to evaluate the ability of the proposed method to generate a summary of good quality within a specified length.
We used JAMUL corpus as the Japanese test set~\cite{Hitomi2019}.
This test set contains three kinds of headlines for 1,181\footnote{We obtained this test set by applying the pre-processing script at \href{https://github.com/asahi-research/Gingo}{https://github.com/asahi-research/Gingo} to the original JAMUL corpus.} news articles written by professional editors under the different upper bounds of headline lengths.
The upper bounds are 10, 13, and 26 characters ($len = 10, 13, 26$).
This test set is suitable for simulating the real process of news production because it is constructed by a Japanese media company.
In contrast, we have no English test sets that contain headlines of multiple lengths.
Thus, we randomly extracted 3,000 sentence-headline pairs that satisfy a length constraint from the test set constructed from annotated English Gigaword~\cite{napoles:2012:AG} by pre-processing scripts of \newcite{rush-chopra-weston:2015:EMNLP}\footnote{\href{https://github.com/facebookarchive/NAMAS}{https://github.com/facebookarchive/NAMAS}}.
We set three configurations for the number of characters as the length constraint: 0 to 30 characters ($len=30$), 30 to 50 characters ($len=50$), and 50 to 75 characters ($len=75$).
Moreover, we also evaluate the proposed method on the DUC-2004 task 1~\cite{Over:2007:DC:1284916.1285157} for comparison with published scores in previous studies.
\begin{table*}[!t]
\centering
\footnotesize
\begin{tabular}{| l | r r r | r r r | r r r |} \hline
& \multicolumn{3}{|c|}{$len = 10$} & \multicolumn{3}{c|}{$len = 13$} & \multicolumn{3}{c|}{$len = 26$} \\ \hline
Model & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} \\ \hline
\multicolumn{10}{|l|}{Baselines} \\ \hline
LenInit & 38.08 & 17.72 & 36.84 & 41.83 & 19.53 & 39.22 & 47.07 & 22.02 & 38.36 \\
LC & 35.88 & 15.73 & 34.80 & 40.28 & 18.86 & 38.16 & 42.62 & 19.38 & 35.61 \\
Transformer & 34.63 & 15.48 & 33.02 & 43.94 & 21.35 & 40.77 & 46.43 & 23.03 & 38.10 \\ \hline
\multicolumn{10}{|l|}{Proposed method} \\ \hline
Transformer+$LDPE$ & 42.84 & 21.07 & 41.31 & 46.51 & 22.83 & 43.76 & 50.89 & 24.18 & 40.82 \\
+$PE$ & 42.85 & 20.67 & 41.47 & 46.72 & 22.70 & 43.75 & {\bf 51.32} & {\bf 25.15} & {\bf 41.48} \\
Transformer+$LRPE$ & 42.70 & 21.62 & 41.35 & {\bf 47.05} & {\bf 23.70} & {\bf 44.13} & 50.68 & 24.70 & 41.23 \\
+$PE$ & {\bf 43.36} & {\bf 21.63} & {\bf 41.93} & 46.39 & 23.09 & 43.49 & 51.21 & 25.03 & 41.43 \\ \hline
\multicolumn{10}{|l|}{Proposed method trained on the dataset without headlines consisting of the target lengths} \\ \hline
Transformer+$LDPE$ & 41.91 & 20.01 & 40.69 & 45.88 & 22.61 & 43.16 & 50.90 & 24.37 & 40.48 \\
+$PE$ & 42.33 & 20.46 & 40.88 & 44.78 & 22.33 & 42.27 & 50.87 & 24.54 & 40.89 \\
Transformer+$LRPE$ & 41.91 & 20.10 & 40.52 & 46.01 & 22.87 & 43.47 & 50.33 & 24.37 & 41.00 \\
+$PE$ & 42.59 & 20.76 & 41.16 & 46.52 & 23.65 & 43.81 & 50.73 & 24.64 & 41.01 \\ \hline
\end{tabular}
\caption{Recall-oriented ROUGE scores for each length on Japanese test set. This test set contains three kinds of headlines, i.e., $len=10, 13, 26$, tied to a single article.\label{tab:jamul_result}}
\end{table*}
\begin{table*}[!t]
\centering
\footnotesize
\begin{tabular}{| l | r r r | r r r | r r r |} \hline
& \multicolumn{3}{|c|}{$len = 30$} & \multicolumn{3}{c|}{$len = 50$} & \multicolumn{3}{c|}{$len = 75$} \\ \hline
Model & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} \\ \hline
\multicolumn{10}{|l|}{Baselines} \\ \hline
LenInit& 44.58 & 25.90 & 43.34 & 48.42 & 25.47 & 45.56 & 50.78 & 25.74 & 46.42 \\
LC & 45.17 & 26.73 & 44.09 & 46.56 & 24.55 & 44.10 & 48.67 & 24.83 & 44.98 \\
Transformer & 47.48 & {\bf 29.77} & 46.17 & 50.02 & {\bf 28.04} & 47.29 & 47.31 & 24.83 & 43.75 \\ \hline
\multicolumn{10}{|l|}{Proposed method} \\ \hline
Transformer+$LDPE$ & 47.26 & 26.98 & 45.77 & 50.21 & 26.13 & 47.15 & 53.99 & 27.78 & 49.24 \\
+$PE$ & 48.13 & 27.18 & 46.43 & 50.29 & 25.97 & 47.17 & 53.65 & 27.65 & 49.06 \\
Transformer+$LRPE$ & 48.79 & 28.77 & 47.17 & 50.09 & 26.08 & 46.91 & 53.91 & 27.82 & 49.15 \\
+$PE$ & {\bf 49.23} & 29.26 & {\bf 47.68} & 50.41 & 26.37 & 47.39 & {\bf 54.21} & {\bf 27.84} & {\bf 49.38} \\ \hline
\multicolumn{10}{|l|}{Proposed method trained on the dataset without headlines consisting of the target lengths} \\ \hline
Transformer+$LDPE$ & 47.35 & 26.76 & 45.70 & 50.46 & 25.96 & 47.30 & 53.69 & 27.61 & 49.04 \\
+$PE$ & 47.44 & 27.42 & 45.99 & 50.67 & 26.07 & 47.57 & 53.76 & 27.53 & 49.03 \\
Transformer+$LRPE$ & 48.54 & 28.89 & 47.06 & 50.65 & 26.19 & 47.34 & 53.94 & 27.88 & 49.11 \\
+$PE$ & 49.08 & 29.09 & 47.58 & {\bf 50.78} & 26.64 & {\bf 47.60} & 53.77 & 27.68 & 48.93 \\ \hline
\end{tabular}
\caption{Recall-oriented ROUGE scores for each length on test data extracted from annotated English Gigaword.\label{tab:engiga_result}}
\end{table*}
\begin{table}[!t]
\centering
\footnotesize
\begin{tabular}{| l | r r r |} \hline
Model & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c|}{R-L} \\ \hline
\multicolumn{4}{|l|}{Baselines} \\ \hline
LenInit & 29.78 & 11.05 & 26.49 \\
LC & 28.68 & 10.79 & 25.72 \\
Transformer & 26.15 & 9.14 & 23.19 \\ \hline
\multicolumn{4}{|l|}{Proposed method} \\ \hline
Transformer+$LDPE$ & 30.95 & 10.53 & 26.79 \\
+$PE$ & 31.00 & 10.78 & 27.02 \\
+Re-ranking & 31.65 & 11.25 & 27.46 \\
Transformer+$LRPE$ & 30.74 & 10.83 & 26.69 \\
+$PE$ & 31.10 & 11.05 & 27.25 \\
+Re-ranking & 32.29 & 11.49 & 28.03 \\
+Ensemble (5 models) & {\bf 32.85} & {\bf 11.78} & {\bf 28.52} \\ \hline
\multicolumn{4}{|l|}{Previous studies for controlling output length} \\ \hline
\newcite{kikuchi-EtAl:2016:EMNLP2016} & 26.73 & 8.39 & 23.88 \\
\newcite{W18-2706} & 30.00 & 10.27 & 26.43 \\ \hline
\multicolumn{4}{|l|}{Other previous studies} \\ \hline
\newcite{rush-chopra-weston:2015:EMNLP} & 28.18 & 8.49 & 23.81 \\
\newcite{suzuki-nagata:2017:EACLshort} & 32.28 & 10.54 & 27.80 \\
\newcite{zhou-EtAl:2017:Long} & 29.21 & 9.56 & 25.51 \\
\newcite{li-EtAl:2017:EMNLP20174} & 31.79 & 10.75 & 27.48 \\
\newcite{C18-1121} & 29.33 & 10.24 & 25.24 \\ \hline
\end{tabular}
\caption{Recall-oriented ROUGE scores in DUC-2004.\label{tab:duc_result}}
\end{table}
Unfortunately, we have no large supervision data with multiple headlines of different lengths associated with each news article in both languages.
Thus, we trained the proposed method on pairs with a one-to-one correspondence between the source articles and headlines.
In the training step, we regarded the length of the target headline as the desired length $len$.
For Japanese, we used the JNC corpus, which contains a pair of the lead three sentences of a news article and its headline~\cite{Hitomi2019}.
The training set contains about 1.6M pairs\footnote{We obtained this training set by applying the pre-processing script at \href{https://github.com/asahi-research/Gingo}{https://github.com/asahi-research/Gingo}.}.
For English, we used sentence-headline pairs extracted from the annotated English Gigaword with the same pre-processing script used in the construction of the test set.
The training set contains about 3.8M pairs.
In this paper, we used a character-level decoder to control the number of characters.
On the encoder side, we used subword units to construct the vocabulary~\cite{sennrich-haddow-birch:2016:P16-12,P18-1007}.
We set the hyper-parameter to fit the vocabulary size to about 8k for Japanese and 16k for English.
\subsection{Baselines}
\begin{table*}[!t]
\centering
\footnotesize
\begin{tabular}{| l | r r r | r r r |} \hline
& \multicolumn{6}{c|}{Variance} \\ \hline
& \multicolumn{3}{c|}{Japanese dataset} & \multicolumn{3}{c|}{English Gigaword} \\ \hline
Model & $len = 10$ & $len = 13$ & $len = 26$ & $len = 30$ & $len = 50$ & $len = 75$ \\ \hline
\multicolumn{7}{|l|}{Baselines} \\ \hline
LenInit & 0.047 & 0.144 & 0.058 & 0.114 & 0.112 & 0.091 \\
LC & 0.021 & 0.028 & 0.040 & 0.445 & 0.521 & 0.871 \\
Transformer & 181.261 & 115.431 & 38.169 & 193.119 & 138.566 & 620.887 \\ \hline
\multicolumn{7}{|l|}{Proposed method} \\ \hline
Transformer+$LDPE$ & {\bf 0.000} & {\bf 0.000} & {\bf 0.000} & {\bf 0.015} & 0.012 & 0.013 \\
+$PE$ & 0.003 & 0.001 & 0.001 & 0.016 & {\bf 0.009} & {\bf 0.007} \\
Transformer+$LRPE$ & 0.121 & 0.210 & 0.047 & 0.082 & 0.071 & 0.187 \\
+$PE$ & 0.119 & 0.144 & 0.058 & 0.142 & 0.110 & 0.173 \\ \hline
\multicolumn{7}{|l|}{Proposed method trained on the dataset without headlines consisting of the target lengths} \\ \hline
Transformer+$LDPE$ & {\bf 0.000} & 0.002 & {\bf 0.000} & 0.018 & {\bf 0.009} & 0.009 \\
+$PE$ & 0.021 & 0.001 & 0.003 & 0.021 & 0.013 & 0.010 \\
Transformer+$LRPE$ & 0.191 & 0.362 & 0.043 & 0.120 & 0.058 & 0.133 \\
+$PE$ & 0.183 & 0.406 & 0.052 & 0.138 & 0.081 & 0.154 \\ \hline
\end{tabular}
\caption{Variances of generated headlines.\label{tab:var_length}}
\end{table*}
We implemented two methods proposed by previous studies to control the output length and handle arbitrary lengths.
We employed them and Transformer as baselines.
\paragraph{LenInit}
\newcite{kikuchi-EtAl:2016:EMNLP2016} proposed LenInit, which controls the output length by initializing the LSTM cell $m$ of the decoder as follows:
\begin{align}
m = len \times b,
\end{align}
where $b$ is a trainable vector.
We incorporated this method with a widely used LSTM encoder-decoder model~\cite{luong-pham-manning:2015:EMNLP}\footnote{We used an implementation at \href{https://github.com/mlpnlp/mlpnlp-nmt}{https://github.com/mlpnlp/mlpnlp-nmt}.}.
For a fair comparison, we set the same hyper-parameters as in \newcite{D18-1489} because they indicated that the LSTM encoder-decoder model trained with the hyper-parameters achieved a similar performance to the state-of-the-art on the headline generation.
\paragraph{Length Control (LC)}
\newcite{D18-1444} proposed a length control method that multiplies the desired length by input token embeddings.
We trained the model with their hyper-parameters.
\paragraph{Transformer}
Our proposed method is based on Transformer~\cite{NIPS2017_7181}\footnote{\href{https://github.com/pytorch/fairseq}{https://github.com/pytorch/fairseq}}.
We trained Transformer with the same hyper-parameters as the base model in \newcite{NIPS2017_7181}.
\subsection{Results}
Table \ref{tab:jamul_result} shows the recall-oriented ROUGE-1 (R-1), 2 (R-2), and L (R-L) scores of each method on the Japanese test set\footnote{To calculate ROUGE scores on the Japanese dataset, we used \href{https://github.com/asahi-research/Gingo}{https://github.com/asahi-research/Gingo}.}.
This table indicates that Transformer with the proposed method (Transformer+$LDPE$ and Transformer+$LRPE$) outperformed the baselines for all given constraints ($len=10, 13, 26$).
Transformer+$LRPE$ performed slightly better than Transformer+$LDPE$.
Moreover, we improved the performance by incorporating the standard sinusoidal positional encoding (+$PE$) on $len=10$ and $26$.
The results imply that the absolute position also helps to generate better headlines while controlling the output length.
Table \ref{tab:engiga_result} shows the recall-oriented ROUGE scores on the English Gigaword test set.
This table indicates that $LDPE$ and $LRPE$ significantly improved the performance on $len=75$.
Moreover, the absolute position ($PE$) also improved the performance in this test set.
In particular, $PE$ was very effective in the setting of very short headlines ($len=30$).
However, the proposed method yielded slightly lower ROUGE-2 scores than the bare Transformer on $len=30, 50$.
We infer that the bare Transformer can generate headlines whose lengths are close to 30 and 50 because the majority of the training set consists of headlines whose lengths are less than or equal to 50.
However, most of the generated headlines breached the length constraints, as explained in Section \ref{sec:analysis}.
To investigate whether the proposed method can generate good headlines for unseen lengths, we excluded headlines whose lengths are equal to the desired length ($len$) from the training data.
The lower parts of Table \ref{tab:jamul_result} and \ref{tab:engiga_result} show ROUGE scores of the proposed method trained on the modified training data.
These parts show that the proposed method achieved scores comparable to those obtained when trained on the whole training dataset.
These results indicate that the proposed method can generate high-quality headlines even if the length does not appear in the training data.
Table \ref{tab:duc_result} shows the recall-oriented ROUGE scores on the DUC-2004 test set.
Following the evaluation protocol~\cite{Over:2007:DC:1284916.1285157}, we truncated characters over 75 bytes.
The table indicates that $LDPE$ and $LRPE$ significantly improved the performance compared to the bare Transformer, and achieved better performance than the baselines except for R-2 of LenInit.
This table also shows the scores reported in the previous studies.
The proposed method outperformed the previous methods that control the output length and achieved the competitive score to the state-of-the-art scores.
Since the proposed method consists of a character-based decoder, it sometimes generated words unrelated to a source sentence.
Thus, we applied a simple re-ranking to each $n$-best headlines generated by the proposed method ($n=20$ in this experiment) based on the contained words.
Our re-ranking strategy selects a headline that contains source-side words the most.
Table \ref{tab:duc_result} shows that Transformer+$LRPE$+$PE$ with this re-ranking (+Re-ranking) achieved better scores than the state-of-the-art~\cite{suzuki-nagata:2017:EACLshort}.
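This word-overlap re-ranking can be sketched as follows (a simplified illustration with whitespace tokenization; the function name is ours):

```python
def rerank(candidates, source_words):
    """Select the n-best candidate containing the most source-side words.

    Ties are broken by the original n-best order (best model score first),
    since max() returns the first maximizer.
    """
    src = set(source_words)
    return max(candidates,
               key=lambda headline: sum(w in src for w in headline.split()))
```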
\subsection{Analysis of Output Length}\label{sec:analysis}
Following \newcite{D18-1444}, we used the variance of the generated summary lengths against the desired lengths as an indicator of the preciseness of the output lengths.
We calculated variance ($var$) for $n$ generated summaries as follows\footnote{\newcite{D18-1444} multiplies Equation (\ref{eq:var}) by $0.001$.}:
\begin{align}
var = \frac{1}{n} \sum_{i=1}^{n} |l_i - len|^{2}, \label{eq:var}
\end{align}
where $len$ is the desired length and $l_i$ is the length of the generated summary.
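In code, this indicator is simply the mean squared deviation of the generated lengths from the desired length (a direct transcription of the formula above):

```python
def length_variance(lengths, desired_len):
    """var = (1/n) * sum_i |l_i - desired_len|**2 over n generated summaries."""
    return sum((l - desired_len) ** 2 for l in lengths) / len(lengths)
```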
Table \ref{tab:var_length} shows the values of Equation (\ref{eq:var}) computed for each method and the desired lengths.
This table indicates that $LDPE$ could control the length of headlines precisely.
In particular, in contrast to LenInit and LC, $LDPE$ could generate headlines whose lengths are identical to the desired one.
$LRPE$ also generated headlines with a precise length but its variance is larger than those of previous studies in very short lengths, i.e., $len = 10$ and $13$ in Japanese.
However, we consider $LRPE$ sufficient for real applications because the average difference between its output length and the desired length is small, e.g., $0.1$ for $len = 10$.
The lower part of Table \ref{tab:var_length} shows the variances of the proposed method trained on the modified training data that does not contain headlines whose lengths are equal to the desired length, similar to the lower parts of Table \ref{tab:jamul_result} and \ref{tab:engiga_result}.
The variances for this part are comparable to those obtained when we trained the proposed method on the whole training dataset.
This fact indicates that the proposed method can generate an output that satisfies the constraint of the desired length even if the training data does not contain instances of such a length.
\section{Conclusion}
In this paper, we proposed length-dependent positional encodings, $LDPE$ and $LRPE$, that can control the output sequence length in Transformer.
The experimental results demonstrate that the proposed method can generate a headline with the desired length even if the desired length is not present in the training data.
Moreover, the proposed method significantly improved the quality of headlines on the Japanese headline generation task while preserving the given length constraint.
For English, the proposed method also generated headlines with the desired length precisely and achieved the top ROUGE scores on the DUC-2004 test set.
\section*{Acknowledgments}
The research results have been achieved by ``Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation'', the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.
\section{Introduction}
{
The development of internet technology has led to the generation of modern data that exhibits several challenges in statistical estimation:
\begin{enumerate}
\item The first challenge comes from the scalability of the data. In particular, modern large-scale data usually cannot be fit into memory or are collected in a distributed environment. For example, a personal computer usually has a limited memory size in GBs; while the data stored on a hard disk could have a size in TBs. In addition, sensor network data are naturally collected by many sensors. For these types of large-scale data, traditional methods, which load all the data into memory and run a certain optimization procedure (e.g., Lasso), are no longer applicable due to both storage and computation issues.
\item The second challenge comes from the dimensionality of data. High-dimensional data analysis has been an important research area in statistics over the past decade. A sparse model is commonly adopted in high-dimensional literature and support recovery is an important task for high-dimensional analysis (see, e.g., \cite{zhao2006model,wainwright2009sharp,buhlmann2011statistics,tibshirani2015statistical}). There are some recent work on statistical estimation for high-dimensional distributed data (see, e.g., \cite{zhao2014general}, \cite{lee2017communication}, \cite{battey2018distributed}). However, these work usually adopt a de-biased approach, which leads to a dense estimated coefficient vector. Moreover, the \emph{support recovery} problem in a distributed setting still largely remains open.
\item The third challenge comes from heavy-tailed noise, which is prevalent in practice (see, e.g., \cite{hsu2016loss,fan2017estimation,chen2018robust,sun2018adaptive,zhou2018new}). When the noise does not have finite variance, most existing theories based on least squares or the Huber loss in robust statistics are no longer applicable.
\end{enumerate}
}
The main purpose of the paper is to provide a new estimation approach for high-dimensional linear regression in a distributed environment and establish the theoretical results on both estimation and support recovery. More specifically, we consider the following linear model,
\begin{equation}\label{eq:model}
Y= \boldsymbol{X}^{\rm T} \boldsymbol{\beta}^* +e,
\end{equation}
where $\boldsymbol{X}=(1, X_{1}, \ldots, X_{p})^{\rm T}$ is a $(p+1)$-dimensional vector, $\boldsymbol{\beta}^*=(\beta^{*}_{0},\beta_1^*,\ldots, \beta^*_{p})^{\rm T}$ is the true regression coefficient, with $\beta^*_0$ being the intercept, and $e$ is the noise.
We only assume that $e$ is independent of the covariate vector $(X_{1}, \ldots, X_{p})^{\rm T}$ and the density function of $e$ exists. { It is worthwhile noting that the independence assumption has been adopted in estimating robust linear models when using a quantile loss function (see, e.g., \cite{zou2008composite,fan2014adaptive}). In Remark \ref{rmk:independent}, we will briefly comment on how to extend our method to the case when the noise is not independent with covariates.} Furthermore, we allow the dimension $p$ to be much larger than the sample size $n$ (e.g., $p=o(n^\nu)$ for some $\nu>0$). We assume that $\boldsymbol{\beta}^*$ is a sparse vector with $s$ non-zero elements.
In this paper, we allow a very heavy-tailed noise $e$, whose variance can be infinite (e.g., Cauchy distribution). For such a heavy-tailed noise, the squared-loss based Lasso approach is no longer applicable.
To address this challenge, we can assume without loss of generality that $\textsf{P}(e\leq 0)=\tau$ for a specified quantile level $\tau\in (0,1)$ (otherwise, we can shift the intercept to $\beta^*_0+q_{\tau}$ and redefine the noise as $e-q_{\tau}$ so that this assumption holds, where $q_{\tau}$ is the $\tau$-th quantile of $e$). Then, it is easy to see that
\begin{equation*}
\boldsymbol{\beta}^{*}=\mathop{\rm arg\min}_{\boldsymbol{\beta}\in \mathbb{R}^{p+1}}\mathbb{E}\rho_{\tau}(Y-\boldsymbol{X}^{\rm T}\boldsymbol{\beta}),
\end{equation*}
where $\rho_\tau(x)=x(\tau-\ind{x\leq 0})$ (see, e.g., \cite{koenker2005quantile}) is known as the quantile regression (QR) loss function.
Given $n$ \emph{i.i.d.} samples $(\boldsymbol{X}_{i},Y_{i})$ for $1\leq i\leq n$, the high-dimensional QR estimator takes the following form,
\begin{eqnarray}\label{eq:QR}
\widehat{\boldsymbol{\beta}}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{n}\sum_{i=1}^{n}\rho_{\tau}(Y_{i}-\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})+\lambda_{n}|\boldsymbol{\beta}|_{1},
\end{eqnarray}
where $|\boldsymbol{\beta}|_{1}$ is the $\ell_1$-norm of $\boldsymbol{\beta}$, and $\lambda_n$ is the regularization parameter.
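The check loss $\rho_\tau$ and the resulting penalized empirical objective can be written down directly (a sketch in NumPy; not an optimized solver, and the function names are ours):

```python
import numpy as np

def rho(x, tau):
    """Quantile (check) loss: rho_tau(x) = x * (tau - 1{x <= 0})."""
    x = np.asarray(x, dtype=float)
    return x * (tau - (x <= 0))

def penalized_qr_objective(beta, X, y, tau, lam):
    """(1/n) * sum_i rho_tau(y_i - x_i^T beta) + lam * |beta|_1."""
    return rho(y - X @ beta, tau).mean() + lam * np.abs(beta).sum()
```

For $\tau=0.5$, $\rho_\tau$ reduces to half the absolute loss, recovering median (LAD) regression.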
{ It is worthwhile noting that in the robust statistics literature, the MOM (median-of-means) approach has been applied to corrupted data in high-dimensional settings \citep{hsu2014heavy,lugosi2016risk,lecue2017robust,lugosi2017regularization,lecue2018learning}. However, the MOM is a multi-stage method that requires data splitting, and when the true regression coefficients are sparse, a support recovery guarantee is not available in the existing MOM literature. Meanwhile, the quantile loss has been a useful approach to deal with heavy-tailed noise; see, e.g., \cite{fan2014adaptive} for a single quantile level and \cite{zou2008composite} for multiple quantile levels. However, the existing literature does not address the challenging issue of efficient distributed implementation, which is the main focus of this paper.
}
Although the adoption of QR loss provides robustness to heavy-tailed noises, it also poses new challenges due to limited computation power and memory to store data especially when the sample size and dimension are both large. Therefore, distributed estimation procedure becomes increasingly important. The main purpose of the paper is to develop a new estimation approach for high-dimensional QR and establish the theoretical results on both \emph{estimation} and \emph{support recovery}. In fact, as we will survey in the next paragraph, the support recovery problem in a high-dimensional distributed setting still largely remains as an open problem.
In a distributed setting, let us assume $n$ samples are stored in $L$ local machines. In particular, we split the data index set $\{1,2,\ldots,n\}$ into $\mathcal{H}_{1},\ldots,\mathcal{H}_{L}$, where $\mathcal{H}_k$ denotes the set of indices on the $k$-th machine. For the ease of illustration, we assume that the data are evenly distributed ($n/L$ is an integer) and each local machine has the sample size $|\mathcal{H}_k|=m=n/L$ (see Remark \ref{rmk:batchsize} at the end of Section \ref{sec:theory} for the discussion on general data partitions). On each machine, one can construct a local estimator $\widehat{\boldsymbol{\beta}}_{k}$ by solving
\begin{eqnarray}\label{eq:local}
\widehat{\boldsymbol{\beta}}_{k}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{m}\sum_{i\in\mathcal{H}_{k}}\rho_\tau(Y_{i}-\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})+\lambda_{m}|\boldsymbol{\beta}|_{1}.
\end{eqnarray}
Then the final estimator of $\boldsymbol{\beta}^{*}$ can be naturally taken as the averaging estimator $\widehat{\boldsymbol{\beta}}_{avg}=\frac{1}{L}\sum_{k=1}^{L}\widehat{\boldsymbol{\beta}}_{k}$. This method is usually known as averaging divide-and-conquer approach (see, e.g., \cite{li2013statistical,zhao2016partially,fan2017distributed,shi2018massive,banerjee2019divide}). Although this method enjoys low communication cost (i.e., one-shot communication), the obtained estimator is usually no longer sparse. Instead of constructing the local estimator in its original form as in \eqref{eq:local}, there are a number of works that construct a de-biased estimator as the local estimator, and then take the average (see, e.g., \cite{zhao2014general,lee2017communication,battey2018distributed}). The de-biased estimator has been popular in high-dimensional statistics (see, e.g., \cite{belloni2013least,van2014asymptotically,zhang2014confidence,javanmard2014confidence} and references therein). \cite{zhao2014general} studied the averaging divide-and-conquer approach for high-dimensional QR based on de-biased estimator. There are several issues of the averaging de-biased estimator for high-dimensional distributed estimation. First, due to de-biasing, the local estimator on each machine is no longer sparse and thus the final averaging estimator cannot be used for support recovery. Second, the de-biased approach needs to estimate a $p\times p$ precision matrix $\boldsymbol{\Sigma}^{-1}$, which requires each machine to solve $p$ optimization problems (see, e.g., Eq. (3.17) in \cite{zhao2014general}), while each optimization problem involves computing a variant of the CLIME estimator \citep{cai2011constrained}. In other words, instead of solving one $p$-dimensional optimization as in \eqref{eq:local}, the de-biased estimator requires to solve $(p+1)$ optimization problems. This would be computationally very expensive especially when $p$ is large. 
Finally, the theoretical result of the averaging estimator requires that the number of machines $L$ is not too large. For example, in high-dimensional QR, the theoretical development in \cite{zhao2014general} requires $L=o(n^{1/3}/(s \log^{5/3}(\max(p,n))))$, where $s$ is the number of non-zero elements in $\boldsymbol{\beta}^*$.
How to remove such a constraint on $L$ is an interesting theoretical question. \cite{wang2017efficient}, \cite{jordan2018communication} and \cite{fan2019communication} develop iterative methods with multiple rounds of aggregation (instead of one-shot averaging), which relax the condition on the number of machines. However, their methods and theory require the loss function to be second-order differentiable and thus cannot be applied to the \emph{non-smooth} QR loss.
We also note that \cite{chen2019} studied distributed QR problem in a low dimensional setting, where $\boldsymbol{\beta}^*$ is dense and $p$ grows much more slowly than $n$.
In this paper, we propose a new distributed estimator for estimating a high-dimensional linear model with heavy-tailed noise. We first show that the estimation of the regression coefficient $\boldsymbol{\beta}^*$ can be recast as a penalized least squares optimization problem with a pseudo-response $\widetilde{Y}_{i}$ in place of $Y_{i}$. This leads to a pooled estimator, which essentially solves a Lasso problem with the squared loss based on $\widetilde{Y}_i$, without requiring any moment condition on the noise term. This pooled estimator is computationally much more efficient than solving the high-dimensional QR problem (\ref{eq:QR}) in a single machine setting.
Moreover, our result establishes an interesting connection between the QR estimation and the ordinary linear regression. This connection translates a non-smooth objective function to a smooth one, which greatly facilitates computation in a distributed setting. Given the transformed penalized least squares formulation, we further provide a communication efficient distributed algorithm, which runs iteratively and only communicates $(p+1)$-dimensional gradient information at each iteration (instead of the $(p+1) \times (p+1)$ matrix information). Our distributed algorithm is essentially an approximate Newton method (see, e.g., \cite{shamir2014communication}), which uses gradient information to approximate Hessian information and thus allows efficient communication. In this paper, we provide a more intuitive derivation of the method simply based on the standard Lasso theory.
Then we establish the theoretical properties of the proposed distributed estimator. We first establish the convergence rate in $\ell_2$-norm for one iteration (Theorem \ref{thm:betainf}). Based on this result, we further characterize the convergence rate for multiple iterations. We show that, after a constant number of iterations, our method achieves a near-oracle rate of $\sqrt{s\log(\max(p,n))/n}$ (Theorem \ref{thm:betainft}). This rate is identical to the rate of $\ell_1$-regularized QR in a single machine setting \citep{belloni2011l1}, and almost matches the oracle rate $\sqrt{s/n}$ (up to a logarithmic factor) where the true support is known. Furthermore, we provide the support recovery result of the distributed estimator. We first show that the estimated support is a subset of the true support with high probability (Theorems \ref{thm:support} and \ref{thm:supportt}). Then we characterize the ``beta-min'' condition for exact support recovery, and we show that the ``beta-min'' condition becomes weaker as the number of iterations increases (Theorem \ref{thm:supportt}). Again, after a constant number of iterations, the lower bound in our ``beta-min'' condition matches the ideal case with all the samples on a single machine. To the best of our knowledge, this is the first support recovery result for high-dimensional robust distributed estimation.
\subsection{Paper Organization and Notations}
The rest of our paper is organized as follows. In Section \ref{sec:method}, we define the estimator and provide our algorithm. In Section \ref{sec:theory}, we provide the theoretical guarantees for the convergence rate and support recovery of our estimator. Numerical experiments based on simulation are provided in Section \ref{sec:sim} to illustrate the performance of the estimator. Section \ref{sec:conclusion} gives some concluding remarks and future directions. The proofs of the main theoretical results are relegated to Appendix \ref{sec:proofsupp}.
For a vector $\bm{v}=(v_{1},\dots,v_{n})^{\rm T}$, define $\abs{\bm{v}}_{1}=\sum_{i=1}^{n}\abs{v_{i}}$ and $\abs{\bm{v}}_{2}=\sqrt{\sum_{i=1}^{n}v_{i}^{2}}$. For a matrix $\bm{A}=(a_{ij})\in\mathbb{R}^{p\times q}$, define $\abs{\bm{A}}_{\infty}=\max_{1\le i\le p,1\le j \le q}\abs{a_{ij}}$, $\norm{\bm{A}}_{L_{1}}=\max_{1\le j\le q}\sum_{i=1}^{p}\abs{a_{ij}}$, $\norm{\bm{A}}_{\mathrm{op}}=\max_{\abs{\bm{v}}_2=1} \abs{\bm{A}\bm{v}}_2$, and $\norm{\bm{A}}_{\infty}=\max_{1\le i\le p}\sum_{j=1}^{q}\abs{a_{ij}}$. For two sequences $a_n$ and $b_n$, we say $a_n \asymp b_n$ if and only if both $a_n = O(b_n)$ and $b_n = O(a_n)$ hold. For a matrix $\bm{A}$, define $\Lambda_{\text{max}}(\bm{A})$ and $\Lambda_{\text{min}}(\bm{A})$ to be the largest and smallest eigenvalues of $\bm{A}$, respectively. For a matrix $\bm{A}\in \mathbb{R}^{m\times n}$ and two subsets of indices $S=\{s_1,\ldots,s_r\}\subseteq\{1,\ldots,m\}$ and $T = \{t_1,\ldots,t_q\}\subseteq \{1,\ldots,n\}$, we use $\boldsymbol{A}_{S\times T}$ to denote the $r$ by $q$ submatrix given by $(a_{s_it_j})$. We use $C,c,c_0,c_1,\ldots$ to denote constants whose value may change from place to place, and which do not depend on $n$, $p$, $s$ and $m$.
\section{Methodology}\label{sec:method}
In this section, we introduce the proposed method. We start with a robust estimator with Lasso (REL), which establishes the connection between quantile regression (QR) and ordinary linear regression in a single machine setting. This proposed estimator will motivate the construction of our distributed estimator.
\subsection{Robust Estimator with Lasso (REL)}
Our method is inspired by the Newton-Raphson method. Consider the following stochastic optimization problem,
\begin{equation}\label{eq:sto_opt}
\boldsymbol{\beta}^{*}=\mathop{\rm arg\min}_{\boldsymbol{\beta}\in \mathbb{R}^{p+1}}\mathbb{E}[G(\boldsymbol{\beta};\boldsymbol{X},Y)],
\end{equation}
where $G(\boldsymbol{\beta};\boldsymbol{X},Y)$ is the loss function. In $G(\boldsymbol{\beta};\boldsymbol{X},Y)$, $\boldsymbol{X}$ and $Y$ are the random covariate vector and the response, and $\boldsymbol{\beta}$ is the coefficient vector of interest. To solve this stochastic optimization problem, the population version of the Newton-Raphson iteration takes the following form
\begin{eqnarray}\label{eq:onestep}
\tilde{\boldsymbol{\beta}}_{1}=\boldsymbol{\beta}_{0}-\boldsymbol{H}(\boldsymbol{\beta}_{0})^{-1}\mathbb{E}[g(\boldsymbol{\beta}_{0};\boldsymbol{X},Y)],
\end{eqnarray}
where $\boldsymbol{\beta}_0$ is an initial solution, $g(\boldsymbol{\beta};\boldsymbol{X},Y)$ is the subgradient of the loss function $G(\boldsymbol{\beta};\boldsymbol{X},Y)$ with respect to $\boldsymbol{\beta}$, and $\boldsymbol{H}(\boldsymbol{\beta}):=\partial \mathbb{E}[g(\boldsymbol{\beta};\boldsymbol{X},Y)]/\partial \boldsymbol{\beta} $ denotes the population Hessian matrix of $\mathbb{E}[G(\boldsymbol{\beta};\boldsymbol{X},Y)]$. In particular, let us consider the case where $G(\boldsymbol{\beta};\boldsymbol{X},Y)$ is the QR loss, i.e.,
\begin{equation}\label{eq:G}
G(\boldsymbol{\beta};\boldsymbol{X},Y) = \rho_\tau (Y-\boldsymbol{X}^{\rm T}\boldsymbol{\beta}).
\end{equation}
Given $G(\boldsymbol{\beta};\boldsymbol{X},Y)$ in \eqref{eq:G}, the subgradient and Hessian matrix take the form of
$g(\boldsymbol{\beta};\boldsymbol{X},Y)=\boldsymbol{X}(\ind{Y-\boldsymbol{X}^{\rm T}\boldsymbol{\beta}\leq 0}-\tau)$ and $\boldsymbol{H}(\boldsymbol{\beta})=\mathbb{E}[\boldsymbol{X}\X^{\rm T}f(\boldsymbol{X}^{\rm T}(\boldsymbol{\beta}-\boldsymbol{\beta}^{*}))]$, respectively. Here, $f(x)$
is the density function of the noise $e$. When the initial estimator $\boldsymbol{\beta}_{0}$ is close to the true parameter $\boldsymbol{\beta}^{*}$, $\boldsymbol{H}(\boldsymbol{\beta}_0)$ will be close to $\boldsymbol{H}(\boldsymbol{\beta}^*) = \boldsymbol{\Sigma} f(0)$, where $\boldsymbol{\Sigma} = \mathbb{E}[\boldsymbol{X}\X^{\rm T}]$ is the population covariance matrix of the covariates $\boldsymbol{X}$. Using $\boldsymbol{H}(\boldsymbol{\beta}^*)$ in \eqref{eq:onestep} motivates the following iteration,
\begin{align}\label{eq:newton}
\boldsymbol{\beta}_{1}=\boldsymbol{\beta}_{0}-\boldsymbol{H}(\boldsymbol{\beta}^*)^{-1}\mathbb{E}[g(\boldsymbol{\beta}_{0};\boldsymbol{X},Y)]= \boldsymbol{\beta}_{0}-\boldsymbol{\Sigma}^{-1}f^{-1}(0)\mathbb{E}[g(\boldsymbol{\beta}_{0};\boldsymbol{X},Y)].
\end{align}
Further, under some regularity conditions, we have the following Taylor expansion of $\mathbb{E}[g(\boldsymbol{\beta}_0;\boldsymbol{X},Y)]$ at $\boldsymbol{\beta}^*$,
\begin{align*}
\mathbb{E}[g(\boldsymbol{\beta}_0;\boldsymbol{X},Y)] =& \boldsymbol{H}(\boldsymbol{\beta}^*)(\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*})+O(|\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*}|_2^2)\\
=&\boldsymbol{\Sigma} f(0)(\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*})+O(|\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*}|_2^2).
\end{align*}
Combining this with \eqref{eq:newton}, it is easy to see that
\begin{align*}
|\boldsymbol{\beta}_{1}-\boldsymbol{\beta}^{*}|_{2} =& |\boldsymbol{\beta}_0-\boldsymbol{\Sigma}^{-1}f^{-1}(0)\left(\boldsymbol{\Sigma} f(0)(\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*})+O(|\boldsymbol{\beta}_0-\boldsymbol{\beta}^{*}|_2^2)\right)-\boldsymbol{\beta}^{*}|_2
\\
=&O(|\boldsymbol{\beta}_{0}-\boldsymbol{\beta}^{*}|^{2}_{2}).
\end{align*}
In summary, if we have a consistent estimator $\boldsymbol{\beta}_{0}$, we can refine it by the Newton-Raphson iteration in \eqref{eq:newton}.
Next, we show how to translate the Newton-Raphson iteration into a least squares optimization problem. First, we rewrite \eqref{eq:newton} as
\begin{eqnarray*}
\boldsymbol{\beta}_{1}&=&\boldsymbol{\Sigma}^{-1}\Big{(}\boldsymbol{\Sigma}\boldsymbol{\beta}_{0}-f^{-1}(0)\mathbb{E}[g(\boldsymbol{\beta}_{0};\boldsymbol{X},Y)]\Big{)}\cr
&=&\boldsymbol{\Sigma}^{-1}\mathbb{E}\Big{[}\boldsymbol{X}\Big{\{}\boldsymbol{X}^{\rm T}\boldsymbol{\beta}_{0}-f^{-1}(0)(\ind{Y\leq\boldsymbol{X}^{\rm T}\boldsymbol{\beta}_{0}}-\tau)\Big{\}}\Big{]}.
\end{eqnarray*}
Let us define a new response variable $\widetilde{Y}$ as
\begin{eqnarray*}
\widetilde{Y}=\boldsymbol{X}^{\rm T}\boldsymbol{\beta}_{0}-f^{-1}(0)(\ind{Y\leq\boldsymbol{X}^{\rm T}\boldsymbol{\beta}_{0}}-\tau).
\end{eqnarray*}
Then $\boldsymbol{\beta}_{1} = \boldsymbol{\Sigma}^{-1}\mathbb{E}[\boldsymbol{X}\widetilde{Y}]$ is the best linear regression coefficient of $\widetilde{Y}$ on $\boldsymbol{X}$, i.e., $\boldsymbol{\beta}_{1}=\mathop{\rm arg\min}_{\boldsymbol{\beta}\in \mathbb{R}^{p+1}}\mathbb{E}(\widetilde{Y}-\boldsymbol{X}^{\rm T}\boldsymbol{\beta})^{2}$.
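The identity $\boldsymbol{\beta}_{1}=\boldsymbol{\Sigma}^{-1}\mathbb{E}[\boldsymbol{X}\widetilde{Y}]$ can be checked numerically. The sketch below is our illustration, not part of the paper: it approximates the population quantities by large-sample averages under an assumed standard normal design and $N(0,1)$ noise with $\tau=0.5$, so that $f(0)=1/\sqrt{2\pi}$ is known, and verifies that one update contracts the error of a crude $\boldsymbol{\beta}_{0}$.

```python
import numpy as np

# Illustrative Monte-Carlo check (not from the paper) of the update
# beta_1 = Sigma^{-1} E[X * Ytilde]: expectations are replaced by
# large-sample averages; design and noise are assumed N(0, 1).
rng = np.random.default_rng(0)
n, p, tau = 200_000, 5, 0.5
beta_star = np.array([1.0, -0.5, 0.0, 2.0, 0.0])
X = rng.standard_normal((n, p))
Y = X @ beta_star + rng.standard_normal(n)   # N(0,1) noise: its tau=0.5 quantile is 0
f0 = 1.0 / np.sqrt(2.0 * np.pi)              # f(0) for N(0,1), treated as known here

beta0 = beta_star + 0.2 * rng.standard_normal(p)             # crude initial estimate
Ytilde = X @ beta0 - (1.0 / f0) * ((Y <= X @ beta0) - tau)   # pseudo-response

Sigma_hat = X.T @ X / n                      # sample proxy for Sigma = E[X X^T]
beta1 = np.linalg.solve(Sigma_hat, X.T @ Ytilde / n)

err0 = np.linalg.norm(beta0 - beta_star)
err1 = np.linalg.norm(beta1 - beta_star)
```

In repeated runs the refined error behaves like $O(|\boldsymbol{\beta}_{0}-\boldsymbol{\beta}^{*}|_{2}^{2})$ plus sampling noise, in line with the expansion above.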
To further encourage the sparsity of the estimator, it is natural to consider the following $\ell_1$-regularized problem,
\begin{eqnarray}\label{eq:l1}
\boldsymbol{\beta}_{1,\lambda}=\mathop{\rm arg\min}_{\boldsymbol{\beta}\in \mathbb{R}^{p+1}} \frac{1}{2}\mathbb{E}(\widetilde{Y}-\boldsymbol{X}^{\rm T}\boldsymbol{\beta})^{2}+\lambda|\boldsymbol{\beta}|_{1},
\end{eqnarray}
where $\boldsymbol{\beta}_{1,\lambda}$ is sparse and can achieve a better convergence rate than $\boldsymbol{\beta}_{0}$. So far, we have shown that if we have a consistent estimator $\boldsymbol{\beta}_{0}$ of $\boldsymbol{\beta}^{*}$, then the estimation of the high-dimensional sparse $\boldsymbol{\beta}^{*}$ can be implemented by solving a penalized least squares optimization in \eqref{eq:l1} instead of the penalized QR optimization.
It is well known that the latter optimization problem is computationally expensive when $n$ is large since the QR loss is non-smooth. More importantly, the transformation from QR loss to least squares will greatly facilitate the development of the distributed estimator. In particular, our distributed estimator is derived from the Lasso theory, which is based on the squared loss (see Section \ref{sec:dist}).
Now, we are ready to define the empirical version of $\boldsymbol{\beta}_{1,\lambda}$ in a single machine setting. Let $\widehat{\boldsymbol{\beta}}_{0}$ be an initial estimator of $\boldsymbol{\beta}^{*}$ and $\widehat{f}(0)$ be an estimator of the density $f(0)$. We use $\widehat{\boldsymbol{\beta}}_{0}$ to denote the empirical version of the initial estimator, which is distinguished from the population version $\boldsymbol{\beta}_0$. Given $n$ \emph{i.i.d.} samples $(\boldsymbol{X}_i,Y_i)$ from \eqref{eq:model}, for each $1\le i\le n$, we construct
\begin{eqnarray*}
\widetilde{Y}_{i}=\boldsymbol{X}^{\rm T}_{i}\widehat{\boldsymbol{\beta}}_{0}-\widehat{f}^{-1}(0)(\ind{Y_{i}\leq\boldsymbol{X}^{\rm T}_{i}\widehat{\boldsymbol{\beta}}_{0}}-\tau).
\end{eqnarray*}
It is natural to estimate $\boldsymbol{\beta}^{*}$ by the empirical version of \eqref{eq:l1}:
\begin{eqnarray}\label{eq:pool}
\widehat{\boldsymbol{\beta}}_{pool}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\Big{\{} \frac{1}{2n}\sum_{i=1}^{n}(\widetilde{Y}_{i}-\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})^{2}+\lambda_{n}|\boldsymbol{\beta}|_{1}\Big{\}}.
\end{eqnarray}
We note that in a single machine setting, computing this pooled estimator essentially solves a Lasso problem, which is computationally much more efficient than solving an $\ell_1$-regularized QR problem.
Finally, we choose $\widehat{f}(0)$ to be a kernel density estimator of $f(0)$:
\begin{eqnarray*}
\widehat{f}(0)=\frac{1}{nh}\sum_{i=1}^{n}K\Big{(}\frac{Y_{i}-\boldsymbol{X}^{\rm T}_{i}\widehat{\boldsymbol{\beta}}_{0}}{h}\Big{)},
\end{eqnarray*}
where $K(x)$ is a kernel function satisfying condition (C3) (see Section \ref{sec:theory}) and $h\to 0$ is the bandwidth. The selection of the bandwidth will be discussed in our theoretical results (see Section \ref{sec:theory}).
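For concreteness, the plug-in ingredients above can be sketched as follows. This is our illustration only: the biweight kernel is an assumed choice satisfying (C3), and the data-generating details are placeholders; the resulting pair $(\boldsymbol{X}_i,\widetilde{Y}_i)$ can then be fed to any off-the-shelf Lasso solver.

```python
import numpy as np

# Illustrative sketch (ours) of the plug-in pieces: the kernel density
# estimate fhat(0) and the pseudo-responses Ytilde_i.
rng = np.random.default_rng(1)
n, p, tau = 50_000, 5, 0.5
beta_star = np.array([1.0, -0.5, 0.0, 2.0, 0.0])
X = rng.standard_normal((n, p))
Y = X @ beta_star + rng.standard_normal(n)

beta0 = beta_star + 0.05 * rng.standard_normal(p)  # stand-in initial estimator
resid = Y - X @ beta0
h = 0.3                                            # bandwidth (h -> 0 in theory)

def biweight(u):
    # integrates to 1, vanishes for |u| >= 1, differentiable with bounded K'
    return (15.0 / 16.0) * (1.0 - u**2) ** 2 * (np.abs(u) < 1)

f0_hat = np.mean(biweight(resid / h)) / h
Ytilde = X @ beta0 - (1.0 / f0_hat) * ((Y <= X @ beta0) - tau)
```

With $N(0,1)$ noise as assumed here, `f0_hat` should be close to $1/\sqrt{2\pi}\approx 0.399$.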
In the next section, we will introduce a distributed robust estimator with Lasso which can estimate $\boldsymbol{\beta}^{*}$ with a near-oracle convergence rate.
\subsection{Distributed Robust Estimator with Lasso}
\label{sec:dist}
Given our newly proposed estimator $\widehat{\boldsymbol{\beta}}_{pool}$, we can use the approximate Newton method to solve the distributed estimation problem. To illustrate this technique from the perspective of Lasso theory, we first consider a general convex quadratic optimization problem as follows,
\begin{eqnarray}\label{eq:bhat}
\widehat{\boldsymbol{\beta}}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{2}\boldsymbol{\beta}^{\rm T}\boldsymbol{A}\boldsymbol{\beta}-\boldsymbol{\beta}^{\rm T}\boldsymbol{b}+\lambda_{n}|\boldsymbol{\beta}|_{1},
\end{eqnarray}
where $\boldsymbol{A}$ is a non-negative definite matrix and $\boldsymbol{b}$ is a vector in $\mathbb{R}^{p+1}$.
From standard Lasso theory (see \cite{buhlmann2011statistics}), we have the following proposition.
\begin{proposition}\label{prop:0}
Assume the following conditions hold
\begin{eqnarray}\label{cd1}
|\boldsymbol{A}\boldsymbol{\beta}^{*}-\boldsymbol{b}|_{\infty}\leq \lambda_{n}/2,
\end{eqnarray}
\begin{eqnarray}\label{cd2}
\min_{\delta: |\delta|_{1}\leq c_{1}\sqrt{s}|\delta|_{2}}\frac{\delta^{\mathrm{T}}\boldsymbol{A}\delta}{|\delta|^{2}_{2}}\geq c_{2},\quad c_{1},c_{2}>0,
\end{eqnarray}
where $s$ is the sparsity of $\boldsymbol{\beta}^{*}$, i.e., $s=\sum_{j=0}^{p}\ind{\beta^{*}_{j}\neq 0}$. Then we have
\begin{eqnarray}\label{prop4}
|\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}^{*}|_{2}\leq c\sqrt{s}\lambda_{n},
\end{eqnarray}
for some constant $c>0$.
\end{proposition}
Note that condition \eqref{cd2} is known as the compatibility condition, which is used to establish the $\ell_2$-consistency of the Lasso estimator. For completeness, we include a proof of Proposition \ref{prop:0} in Appendix \ref{sec:proofsupp}. As one can see from \eqref{cd1}, if we can choose a matrix $\boldsymbol{A}$ and a vector $\boldsymbol{b}$ such that $\lambda_{n}$ is as small as possible, we can obtain a fast convergence rate for $\widehat{\boldsymbol{\beta}}$.
Now let us discuss how to use Proposition \ref{prop:0} to develop our distributed estimator. Suppose that $n$ samples are stored in $L=n/m$ machines and each local machine has $m$ samples. We first split the data index set $\{1,2,\ldots,n\}$ into $\mathcal{H}_{1},\ldots,\mathcal{H}_{L}$ with $|\mathcal{H}_{k}|=m$ and the $k$-th machine stores samples $\{(\boldsymbol{X}_{i},Y_i): \; i\in \mathcal{H}_{k}\}$.
Let us define
\begin{eqnarray}\label{eq:local_hessian}
\widehat{\boldsymbol{\Sigma}}_{k}=\frac{1}{m}\sum_{i\in \mathcal{H}_{k}}\boldsymbol{X}_{i}\boldsymbol{X}^{\rm T}_{i},\quad\widehat{\boldsymbol{\Sigma}} = \frac{1}{n}\sum_{i=1}^n \boldsymbol{X}_i\boldsymbol{X}_i^{\rm T} = \frac{1}{L} \sum_{k=1}^L \widehat{\boldsymbol{\Sigma}}_k,
\end{eqnarray}
as the sample covariance matrix on the $k$-th machine and the sample covariance matrix of the entire dataset, respectively. It is worth noting that our algorithm does not need to explicitly compute or communicate $\widehat{\boldsymbol{\Sigma}}_{k}$ (for $k \neq 1$) (see Algorithm \ref{alg:1} for more details).
In Proposition \ref{prop:0}, we first choose $\boldsymbol{A} = \widehat{\boldsymbol{\Sigma}}_1$ to be the sample covariance matrix computed on the first machine. Our goal is to construct a vector $\boldsymbol{b}$ such that $|\boldsymbol{A}\boldsymbol{\beta}^* - \boldsymbol{b}|_\infty $ can be as small as possible. Note that
\begin{align}\label{eq:Ab}
\nonumber\boldsymbol{A}\boldsymbol{\beta}^*-\boldsymbol{b} =& \widehat{\boldsymbol{\Sigma}}_1\boldsymbol{\beta}^*-\boldsymbol{b}\\
=& \widehat{\boldsymbol{\Sigma}} \boldsymbol{\beta}^* +(\widehat{\boldsymbol{\Sigma}}_1-\widehat{\boldsymbol{\Sigma}})\boldsymbol{\beta}^* - \boldsymbol{b}.
\end{align}
It can be proved that $\widehat{\boldsymbol{\Sigma}}\boldsymbol{\beta}^*$ is close to $\boldsymbol{z}_n := \frac{1}{n}\sum_{i=1}^n \boldsymbol{X}_i\widetilde{Y}_i$ (see Proposition \ref{prop:Bnbeta0} in Appendix \ref{sec:proofsupp}). We note that $\boldsymbol{z}_n$ can be computed efficiently in a distributed setting since
$$\boldsymbol{z}_n = \frac{1}{L} \sum_{k=1}^L \boldsymbol{z}_{nk},\quad\boldsymbol{z}_{nk}=\frac{1}{m}\sum_{i\in \mathcal{H}_{k}}\boldsymbol{X}_{i}\widetilde{Y}_{i},$$
where
$\boldsymbol{z}_{nk}$ can be computed on the $k$-th local machine. Therefore, we can rewrite \eqref{eq:Ab} as
\begin{equation*}
\begin{aligned}
|\boldsymbol{A}\boldsymbol{\beta}^*-\boldsymbol{b}|_{\infty} =& |\widehat{\boldsymbol{\Sigma}}\boldsymbol{\beta}^*-\boldsymbol{z}_n+\boldsymbol{z}_n+(\widehat{\boldsymbol{\Sigma}}_1-\widehat{\boldsymbol{\Sigma}})\boldsymbol{\beta}^*-\boldsymbol{b}|_{\infty}\\
\leq & |\widehat{\boldsymbol{\Sigma}}\boldsymbol{\beta}^*-\boldsymbol{z}_n|_{\infty}+|\boldsymbol{z}_n+(\widehat{\boldsymbol{\Sigma}}_1-\widehat{\boldsymbol{\Sigma}})\boldsymbol{\beta}^*-\boldsymbol{b}|_{\infty}.
\end{aligned}
\end{equation*}
Since $\boldsymbol{\beta}^*$ is unknown, in order to make the second term as small as possible, it is natural to set $$\boldsymbol{b} = \boldsymbol{z}_n +(\widehat{\boldsymbol{\Sigma}}_1-\widehat{\boldsymbol{\Sigma}})\widehat{\boldsymbol{\beta}}_0.$$ For $\boldsymbol{A} = \widehat{\boldsymbol{\Sigma}}_1$ and $\boldsymbol{b} = \boldsymbol{z}_n +(\widehat{\boldsymbol{\Sigma}}_1-\widehat{\boldsymbol{\Sigma}})\widehat{\boldsymbol{\beta}}_0$, we can prove that (see Eq.~\eqref{easytoshow} in the proofs of Theorems \ref{thm:betainf} and \ref{thm:betainft})
\begin{eqnarray*}
|\widehat{\boldsymbol{\Sigma}}_{1}\boldsymbol{\beta}^{*}-\boldsymbol{b}|_{\infty}\leq \lambda_{n}/2,
\end{eqnarray*}
for some specified $\lambda_{n}$ (see Theorem \ref{thm:betainf}). With $\boldsymbol{A}$ and $\boldsymbol{b}$ in place, the equation \eqref{eq:bhat} leads to the following $\ell_1$-regularized quadratic programming,
\begin{align}\label{eq:beta_dist}
\widehat{\boldsymbol{\beta}}^{(1)}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{2m}\sum_{i\in \mathcal{H}_1}(\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})^{2}-\boldsymbol{\beta}^{\rm T}\Big{\{}\boldsymbol{z}_{n}+(\widehat{\boldsymbol{\Sigma}}_{1}-\widehat{\boldsymbol{\Sigma}})\widehat{\boldsymbol{\beta}}_{0}\Big{\}}+\lambda_{n}|\boldsymbol{\beta}|_{1}.
\end{align}
Note that when $m=n$, we have $\widehat{\boldsymbol{\beta}}^{(1)}=\widehat{\boldsymbol{\beta}}_{pool}$. In other words, when the data is pooled on a single machine, the proposed distributed estimator automatically reduces to $\widehat{\boldsymbol{\beta}}_{pool}$ in \eqref{eq:pool}. We also note that $\widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\beta}}_0$ in the vector $\boldsymbol{b}$ can be computed efficiently in a distributed manner. In particular, each local machine computes and communicates a $(p+1)$-dimensional vector $\widehat{\boldsymbol{\Sigma}}_k \widehat{\boldsymbol{\beta}}_{0}=\frac{1}{m}\sum_{i\in \mathcal{H}_{k}}\boldsymbol{X}_{i}(\boldsymbol{X}^{\rm T}_{i}\widehat{\boldsymbol{\beta}}_{0})$ to the first machine. Then the first machine computes $\widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\beta}}_0$ by
\[
\widehat{\boldsymbol{\Sigma}} \widehat{\boldsymbol{\beta}}_0 =\frac{1}{L} \sum_{k=1}^L \widehat{\boldsymbol{\Sigma}}_k \widehat{\boldsymbol{\beta}}_{0}.
\]
Our algorithm only communicates $\boldsymbol{z}_{nk} = \frac{1}{m}\sum_{i\in \mathcal{H}_k} \boldsymbol{X}_i\widetilde{Y}_i$ and $\widehat{\boldsymbol{\Sigma}}_k\widehat{\boldsymbol{\beta}}_0$ to the first machine at each iteration. Therefore, the per-iteration communication complexity is only $O(p)$ and there is no need to communicate the $(p+1)\times (p+1)$ sample covariance matrix $\widehat{\boldsymbol{\Sigma}}_k$.
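This communication pattern can be sketched in a few lines. The snippet below is our illustration, with randomly generated placeholders for the pseudo-responses and the initial estimator: each machine ships only two $O(p)$-dimensional vectors, and averaging them on the first machine reproduces the full-data quantities exactly.

```python
import numpy as np

# Illustrative sketch (ours) of the O(p) per-iteration messages: machine k
# ships only z_{nk} and Sigmahat_k @ beta0, never the (p+1) x (p+1) matrix.
rng = np.random.default_rng(2)
L, m, p = 8, 500, 10
n = L * m
X = rng.standard_normal((n, p))
Ytilde = rng.standard_normal(n)        # placeholder pseudo-responses
beta0 = rng.standard_normal(p)         # placeholder initial estimator

chunks = [slice(k * m, (k + 1) * m) for k in range(L)]
z_parts = [X[c].T @ Ytilde[c] / m for c in chunks]          # z_{nk}
Sb_parts = [X[c].T @ (X[c] @ beta0) / m for c in chunks]    # Sigmahat_k @ beta0

z_n = np.mean(z_parts, axis=0)             # averaged on the first machine
Sigma_beta0 = np.mean(Sb_parts, axis=0)    # equals Sigmahat @ beta0
```

Because all machines hold the same number of samples, the averages of the local messages coincide exactly with the pooled quantities $\boldsymbol{z}_n$ and $\widehat{\boldsymbol{\Sigma}}\widehat{\boldsymbol{\beta}}_0$.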
Given \eqref{eq:beta_dist} as the estimator from the first iteration, it is easy to construct an iterative estimator. In particular, let $\widehat{\boldsymbol{\beta}}^{(t-1)}$ be the distributed REL in the $(t-1)$-th iteration. Define
\[
\widehat{f}^{(t)}\left(0\right)=\frac{1}{nh_{t}}\sum_{i=1}^{n}K\left(\frac{Y_{i}-\bm{X}_{i}^{\rm T}\widehat{\bbeta}^{(t-1)}}{h_{t}}\right),
\]
as the density estimator in the $t$-th iteration where $h_{t}\to 0$ is the bandwidth for the $t$-th iteration. The bandwidth $h_t$ shrinks as $t$ grows, whose rate will be specified in Theorem \ref{thm:betainft}.
Let us define
\begin{equation}\label{eq:ytilde}
\widetilde{Y}_{i}^{(t)}=\bm{X}_{i}^{\rm T}\widehat{\bbeta}^{(t-1)}-(\widehat{f}^{(t)}\left(0\right))^{-1}\left(\Ind{Y_{i}\le \bm{X}_{i}^{\rm T}\widehat{\bbeta}^{(t-1)}}-\tau\right),
\end{equation}
and
\[
\boldsymbol{z}_{n}^{(t)}=\frac{1}{n}\sum_{i=1}^{n}\bm{X}_{i}\widetilde{Y}_{i}^{(t)}.
\]
As in \eqref{eq:beta_dist}, our distributed estimator $\widehat{\bbeta}^{(t)}$ is the solution of the following $\ell_1$-regularized quadratic programming problem:
\begin{align}\label{eq:betat}
\widehat{\boldsymbol{\beta}}^{(t)}=\mathop{\rm arg\min}_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{2m}\sum_{i\in \mathcal{H}_1}(\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})^{2}-\boldsymbol{\beta}^{\rm T}\left\{\boldsymbol{z}_{n}^{(t)}+\left(\widehat{\bSigma}_{1}-\widehat{\bSigma}\right)\widehat{\boldsymbol{\beta}}^{(t-1)}\right\}+\lambda_{n,t}\Abs{\boldsymbol{\beta}}_{1}.
\end{align}
It is worth noting that the convex optimization problem \eqref{eq:betat} has been extensively studied in the optimization literature and several efficient methods have been developed, e.g., FISTA \citep{beck2009fast}, the active set method \citep{solntsev2015algorithm}, and PSSgb (Projected Scaled Subgradient, Gafni-Bertsekas variant) \citep{schmidt2010graphical}. In our experiments, we adopt the PSSgb method for solving \eqref{eq:betat}. We present the entire distributed estimation procedure in Algorithm \ref{alg:1}.
\begin{algorithm}[!t]
\caption{{\small Distributed high-dimensional QR estimator}}
\label{alg:1}
\hspace*{\algorithmicindent} \hspace{-0.72cm} {\textbf{Input:} Data on local machines $\{\boldsymbol{X}_i,Y_i:\;i\in \mathcal{H}_k\}$ for $k=1,\ldots, L$, the number of iterations $t$, quantile level $\tau$, kernel function $K$, a sequence of bandwidths $h_g$ for $g=1,\ldots, t$ and the regularization parameters $\lambda_0$, $\lambda_{n,g}$ for $g=1,\ldots,t$.}
\begin{algorithmic}[1]
\State Compute the initial estimator $\widehat{\boldsymbol{\beta}}^{(0)} = \widehat{\boldsymbol{\beta}}_0$ based on $\{\boldsymbol{X}_i,Y_i:\;i\in \mathcal{H}_1\}$:
\begin{eqnarray}\label{ag0}
\widehat{\boldsymbol{\beta}}_{0} = \mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{m}\sum_{i\in\mathcal{H}_{1}}\rho_\tau(Y_{i}-\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})+\lambda_{0}|\boldsymbol{\beta}|_{1}.
\end{eqnarray}
\For{$g=1,2 \ldots, t$}
\State Transmit $\widehat{\boldsymbol{\beta}}^{(g-1)}$ to all local machines.
\For{$k=1,\dots, L$}
\State The $k$-th machine computes $ \widehat{f}^{(g,k)}\left(0\right):=\frac{1}{m}\sum_{i\in \mathcal{H}_k}K\left(\frac{Y_{i}-\bm{X}_{i}^{\rm T}\widehat{\bbeta}^{(g-1)}}{h_{g}}\right)$ and sends it back to the first machine.
\EndFor
\State The first machine computes $\widehat{f}^{(g)}\left(0\right)$ based on
\[
\widehat{f}^{(g)}\left(0\right)=\frac{1}{L}\sum_{k=1}^L\widehat{f}^{(g,k)}\left(0\right).
\]
\State Transmit $\widehat{f}^{(g)}\left(0\right)$ to all local machines.
\For{$k=1,\dots, L$}
\State The $k$-th machine computes $\widehat{\boldsymbol{\Sigma}}_k\widehat{\boldsymbol{\beta}}^{(g-1)}$ and $\boldsymbol{z}_{nk}=\frac{1}{m}\sum_{i\in \mathcal{H}_k}\boldsymbol{X}_i\widetilde{Y}_i^{(g)}$ based on \eqref{eq:ytilde} and sends them back to the first machine.
\EndFor
\State Compute the estimator $\widehat{\boldsymbol{\beta}}^{(g)}$ on the first machine based on \eqref{eq:betat}.
\EndFor
\end{algorithmic}
\textbf{Output:} The final estimator $\widehat{\boldsymbol{\beta}}^{(t)}$.
\end{algorithm}
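The subproblem \eqref{eq:betat} solved on the first machine is an $\ell_1$-regularized convex quadratic program of the form $\min_{\boldsymbol{\beta}}\frac{1}{2}\boldsymbol{\beta}^{\rm T}\boldsymbol{A}\boldsymbol{\beta}-\boldsymbol{\beta}^{\rm T}\boldsymbol{b}+\lambda|\boldsymbol{\beta}|_{1}$. The paper adopts PSSgb; as an illustrative stand-in (our sketch, on a synthetic instance of our own making), a plain proximal-gradient (ISTA) loop also solves it:

```python
import numpy as np

# Illustrative ISTA solver (ours; the paper uses PSSgb) for
#   min_beta 0.5 * beta' Sigma1 beta - beta' b + lam * |beta|_1,
# which is the form of the per-iteration subproblem.
def solve_l1_quadratic(Sigma1, b, lam, iters=500):
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    eta = 1.0 / np.linalg.eigvalsh(Sigma1)[-1]   # step size 1 / largest eigenvalue
    beta = np.zeros_like(b)
    for _ in range(iters):
        beta = soft(beta - eta * (Sigma1 @ beta - b), eta * lam)
    return beta

# toy instance standing in for the first machine's data
rng = np.random.default_rng(3)
m, p = 400, 20
X1 = rng.standard_normal((m, p))
Sigma1 = X1.T @ X1 / m
target = np.zeros(p)
target[:3] = [1.0, -2.0, 0.5]        # sparse vector the program should recover
b = Sigma1 @ target                  # b chosen so that `target` is the noiseless fit
beta_hat = solve_l1_quadratic(Sigma1, b, lam=0.05)
```

At the optimum the KKT conditions force $|\boldsymbol{A}\widehat{\boldsymbol{\beta}}-\boldsymbol{b}|_{\infty}\le\lambda$, mirroring condition \eqref{cd1}.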
For the choice of the initial estimator $\widehat{\boldsymbol{\beta}}_{0}$, we propose to solve the high-dimensional QR problem using the data on the first machine, i.e.,
\begin{equation}\label{eq:init}
\begin{aligned}
\widehat{\boldsymbol{\beta}}_{0} = \mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\frac{1}{m}\sum_{i\in\mathcal{H}_{1}}\rho_\tau(Y_{i}-\boldsymbol{X}^{\rm T}_{i}\boldsymbol{\beta})+\lambda_{0}|\boldsymbol{\beta}|_{1}.
\end{aligned}
\end{equation}
Note that although this paper uses \eqref{eq:init} as the initial estimator, one can adopt any estimator as $\widehat{\boldsymbol{\beta}}_0$ as long as it satisfies condition (C6) (see Section \ref{sec:theory}).
We assume the quantile level $\tau$ is pre-specified in Algorithm \ref{alg:1}. Our paper mainly focuses on the algorithm for distributed estimation under a general $\tau$ and develops the related theoretical results. Different choices of $\tau$ correspond to different loss functions and different parameters of interest. The choice of $\tau$ is a separate topic which clearly depends on the practical problem and the parameters of interest. For example, without the covariate $\bm{X}$ (for brevity), $\beta^{*}_{0}$ is the $\tau$-quantile of $Y$, and the choice of $\tau$ depends on which quantile of $Y$ we are interested in. In extreme climate studies, one may choose $\tau$ to be large (e.g., $0.9$ or $0.99$) or small (e.g., $0.1$ or $0.01$) to evaluate extreme climate behavior. In economics, to study problems associated with median salary, one can simply set $\tau=0.5$.
\section{Theoretical Results}\label{sec:theory}
In this section we provide the theoretical results for our distributed method. We define
\begin{eqnarray*}
S=\{0\leq i\leq p: \beta^{*}_{i}\neq 0\},
\end{eqnarray*}
as the support of $\boldsymbol{\beta}^*$ and $s=|S|$.
We assume the following regularity conditions.
\vspace{3mm}
(C1) The density function of the noise $f(\cdot)$ is bounded and Lipschitz continuous (i.e., $\abs{f(x)-f(y)}\le C_L\abs{x-y}$ for any $x,y\in\mathbb{R}$ and some constant $C_L>0$). Moreover, we assume $f(0)>c>0$ for some constant $c$.
(C2) Suppose that $\bm{\Sigma}=\mathbb{E}[\boldsymbol{X}\X^{\mathrm{T}}]$ satisfies
\begin{equation}\label{eqn:irres}
\Norm{\bm{\Sigma}_{S^{c}\times S}\bm{\Sigma}_{S\times S}^{-1}}_{\infty}\le 1-\alpha,
\end{equation}
for some $0<\alpha<1$. Also assume that $c_{0}^{-1}\le\Lambda_{\text{min}}(\bm{\Sigma})\le\Lambda_{\text{max}}(\bm{\Sigma})\le c_{0}$ for some constant $c_{0}>0$.
(C3) Assume that the kernel function $K(\cdot)$ is integrable with $\int_{-\infty}^\infty K(u)\mathrm{d}u = 1$. Moreover, assume that $K(\cdot)$ satisfies $K(u)=0$ if $|u|\ge 1$. Further, assume $K(\cdot)$ is differentiable and its derivative $K'(\cdot)$ is bounded.
(C4) We assume that the covariate $\bm{X}$ satisfies the sub-Gaussian condition for some $t>0$ and $C>0$, $$\sup_{\abs{\bm{\theta}}_{2}=1}\mathbb{E}\exp(t(\bm{\theta}^{\rm T}\bm{X})^2)\le C.$$
(C5) The dimension $p$ satisfies $p=O(n^{\nu})$ for some $\nu>0$. The local sample size $m$ on each machine satisfies $m\geq n^{c}$ for some $0<c<1$, and the sparsity level $s$ satisfies $s=O(m^{r})$ for some $0<r<1/3$.
(C6) The initial estimator $\widehat{\bbeta}_{0}$ satisfies $\abs{\widehat{\bbeta}_{0}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}(\sqrt{s(\log n)/m})$. Furthermore, assume that $\textsf{P}(\text{supp}(\widehat{\bbeta}_{0})\subseteq S)\rightarrow 1$.\vspace{2mm}
Condition (C1) is a regularity condition on the smoothness of the density function $f(\cdot)$. Condition (C2) is the standard irrepresentable condition, which is commonly adopted to establish support recovery in the high-dimensional statistics literature (see, e.g., \cite{zhao2006model,wainwright2009sharp,buhlmann2011statistics,tibshirani2015statistical}).
Condition (C3) is a standard condition on the kernel function $K(\cdot) $ (see an example of $K(\cdot)$ in Section \ref{sec:sim}).
Condition (C4) is a regularity condition on the distribution of $\boldsymbol{X}$, while Condition (C5) concerns the dimension $p$, the local sample size $m$, and the sparsity level $s$.
The conditions $m \geq n^c$ for some $0 < c <1$ and $s=O(m^r)$ make sure that our algorithm achieves the near-oracle convergence rate only using a finite number of iterations (see Eq.~\eqref{eq:t} below). Condition (C6) is a condition on the convergence rate and support recovery of the initial estimator. Note that in Algorithm \ref{alg:1}, the initial estimator $\widehat{\boldsymbol{\beta}}_0$ is proposed as the solution to the high-dimensional QR problem using data on the first machine.
It can be shown that $\widehat{\boldsymbol{\beta}}_0$ in \eqref{eq:init} fulfills condition (C6) under conditions (C1), (C2), (C4), (C5) and some regularity conditions \citep{fan2014adaptive}. In addition, Theorems \ref{thm:betainf}--\ref{thm:supportt} show that condition (C6) is also satisfied by the estimator $\widehat{\boldsymbol{\beta}}^{(t)}$ from the $t$-th iteration, which serves as the initial estimator for the $(t+1)$-th iteration. We also note that by $p=O(n^\nu)$ in (C5), we have $\log(\max(n,p))=C_1 \log(n)$ for some constant $C_1>0$. Therefore, we will use $\log(n)$ in our convergence rates (instead of $\log(\max(n,p))$) for notational simplicity.
Let $\{a_{n}\}$ be the convergence rate of the initial estimator, i.e., $\abs{\widehat{\bbeta}_{0}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}(a_{n})$. By condition (C6) we can assume that $a_{n}=\sqrt{s(\log n)/m}$. We first provide the convergence rate for $\widehat{\boldsymbol{\beta}}^{(1)}$ after one iteration.
\begin{theorem}\label{thm:betainf}
Let $\abs{\widehat{\bbeta}_{0}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}(a_{n})$, choose the bandwidth $h\asymp a_{n}$, and take
$$\lambda_n=C_{0}\left(\sqrt{\frac{\log n}{n}}+a_{n}\sqrt{\frac{s\log n}{m}}\right),$$ with $C_{0}$ being a sufficiently large constant. Under (C1)-(C6), we have
\begin{equation}\label{eqn:betainf}
\Abs{\widehat{\bbeta}^{(1)}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}\left(\sqrt{\frac{s\log n}{n}}+a_{n}\sqrt{\frac{s^{2}\log n}{m}}\right).
\end{equation}
\end{theorem}
With the bandwidth $h$ shrinking at the same rate as $a_{n}$, the bound \eqref{eqn:betainf} shows that one iteration refines the estimator, improving its rate from $a_{n}$ to $\max\{\sqrt{s(\log n)/n},a_{n}\sqrt{s^{2}(\log n)/m}\}$, where $\sqrt{s^{2}(\log n)/m} = o(1)$ by condition (C5). By recursive application of Theorem \ref{thm:betainf}, we obtain the convergence rate for the multi-iteration estimator $\widehat{\boldsymbol{\beta}}^{(t)}$. The next theorem shows that iteratively refining the initial estimator improves the estimation accuracy and achieves a near-oracle rate after a constant number of iterations.
In particular, let us define
\begin{eqnarray}\label{eq:a}
a_{n,g}=\sqrt{\frac{s \log n}{n}}+s^{(2g+1)/2}\left(\frac{\log n}{m}\right)^{(g+1)/2},\quad 0\leq g\leq t.
\end{eqnarray}
From Theorem \ref{thm:betainft} below, we can see that $a_{n,g}$ is the convergence rate of the estimator $\widehat{\boldsymbol{\beta}}^{(g)}$ after $g$ iterations.
\begin{theorem}\label{thm:betainft}
Assume that the initial estimator $\widehat{\bbeta}_{0}$ satisfies $\abs{\widehat{\bbeta}_{0}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}(\sqrt{s(\log n)/m})$. Let $h_{g}\asymp a_{n,g-1}$ for $1\le g\le t$, and take
\begin{equation}\label{eq:lambda}
\begin{aligned}
\lambda_{n,g}=C_{0}\left(\sqrt{\frac{\log n}{n}}+a_{n,g-1}\sqrt{\frac{s\log n}{m}}\right),
\end{aligned}
\end{equation} with $C_{0}$ being a sufficiently large constant. Under (C1)-(C6), we have
\begin{equation}\label{eq:bt}
\Abs{\widehat{\bbeta}^{(t)}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}\left(\sqrt{\frac{s \log n}{n}}+s^{(2t+1)/2}\left(\frac{\log n}{m}\right)^{(t+1)/2}\right).
\end{equation}
\end{theorem}
It can be shown that when the iteration number $t$ is sufficiently large, i.e.,
\begin{equation}\label{eq:t}
t\geq \frac{\log (n/m)}{\log (c_0m/(s^2\log n))},\quad \text{for some }c_0>0,
\end{equation}
the second term in \eqref{eq:bt} is dominated by the first term, and the convergence rate in \eqref{eq:bt} becomes $\abs{\widehat{\bbeta}^{(t)}-\bm{\beta}^{*}}_{2}=O_{\textsf{P}}(\sqrt{s(\log n)/n})$. We note that this rate matches the convergence rate of the $\ell_1$-regularized QR estimator in a single-machine setup (see \cite{belloni2011l1}). Moreover, it nearly matches the oracle convergence rate $\sqrt{s/n}$ (up to a logarithmic factor) attainable when the support of $\boldsymbol{\beta}^*$ is known. We also note that the conditions $m \geq n^c$ and $s = o(m^{1/3})$ in (C5) ensure that the right-hand side of \eqref{eq:t} is bounded by a constant, which implies that a constant number of iterations guarantees a near-oracle rate for $\widehat{\boldsymbol{\beta}}^{(t)}$.
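As a quick numerical companion to \eqref{eq:a} and \eqref{eq:t}, the sketch below evaluates the rate $a_{n,g}$ and the smallest iteration count satisfying \eqref{eq:t}; the function names and the default value of $c_0$ are illustrative, not part of the method.

```python
import math

def rate(n, m, s, g):
    """a_{n,g}: sqrt(s*log(n)/n) + s^{(2g+1)/2} * (log(n)/m)^{(g+1)/2}."""
    return (math.sqrt(s * math.log(n) / n)
            + s ** ((2 * g + 1) / 2) * (math.log(n) / m) ** ((g + 1) / 2))

def min_iterations(n, m, s, c0=1.0):
    """Smallest integer t with t >= log(n/m) / log(c0*m/(s^2*log(n)))."""
    denom = math.log(c0 * m / (s ** 2 * math.log(n)))
    if denom <= 0:
        raise ValueError("need c0*m > s^2*log(n), i.e. m large enough")
    return math.ceil(math.log(n / m) / denom)
```

For instance, with $n=10000$, $m=500$, $s=5$ (and $c_0=1$), four iterations already suffice, and $a_{n,g}$ decreases in $g$ as the theorem predicts.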
The following theorems provide results on support recovery of the proposed estimators $\widehat{\boldsymbol{\beta}}^{(1)}$ and $\widehat{\boldsymbol{\beta}}^{(t)}$. Recall $S=\{j:\beta^{*}_{j}\neq0\}$ is the support of $\bm{\beta}^{*}$. Let $\widehat{\boldsymbol{\beta}}^{(1)}=(\widehat{\beta}_{0}^{(1)},\widehat{\beta}_{1}^{(1)},\ldots,\widehat{\beta}_{p}^{(1)})^{\mathrm{T}}$ and
\[
\widehat{S}^{(1)}=\left\{j:\widehat{\beta}_{j}^{(1)}\neq0\right\}.
\]
\begin{theorem}\label{thm:support}
Assume that the conditions in Theorem \ref{thm:betainf} hold.
(i) We have $\widehat{S}^{(1)}\subseteq S$ with probability tending to one.
(ii) In addition, suppose that for a sufficiently large constant $C>0$,
\begin{equation}\label{eqn:sigcon}
\underset{j\in S}{\min}\Abs{\beta^{*}_{j}}\ge C\|\boldsymbol{\Sigma}^{-1}_{S\times S}\|_{\infty}\left(\sqrt{\frac{\log n}{n}}+a_{n}\sqrt{\frac{s\log n}{m}}\right).
\end{equation}
Then we have $\widehat{S}^{(1)}= S$ with probability tending to one.
\end{theorem}
Based on Theorem \ref{thm:support}, we can further obtain the support recovery result for $\widehat{\boldsymbol{\beta}}^{(t)}$, which requires a weaker condition on $\underset{j\in S}{\min}\Abs{\beta^{*}_{j}}$.
Denote $\widehat{\boldsymbol{\beta}}^{(t)}=(\widehat{\beta}_{0}^{(t)},\widehat{\beta}_{1}^{(t)},\ldots,\widehat{\beta}_{p}^{(t)})^{\mathrm{T}}$ and
\[
\widehat{S}^{(t)}=\left\{j:\widehat{\beta}_{j}^{(t)}\neq0\right\}.
\]
\begin{theorem}\label{thm:supportt} Assume the conditions in Theorem \ref{thm:betainft} hold.
(i) We have $\widehat{S}^{(t)}\subseteq S$ with probability tending to one.
(ii) In addition, suppose that for a sufficiently large constant $C>0$,
\begin{equation}\label{eqn:sigcont}
\underset{j\in S}{\min}\Abs{\beta^{*}_{j}}\ge C\|\boldsymbol{\Sigma}^{-1}_{S\times S}\|_{\infty}\left(\sqrt{\frac{\log n}{n}}+s^{t}\left(\frac{\log n}{m}\right)^{(t+1)/2}\right).
\end{equation}
Then we have $\widehat{S}^{(t)}= S$ with probability tending to one.
\end{theorem}
Note that the ``beta-min'' condition gets weaker as $t$ increases. When $t$ satisfies \eqref{eq:t}, the condition \eqref{eqn:sigcont} will reduce to $\underset{j\in S}{\min}\Abs{\beta^{*}_{j}}\ge C\|\boldsymbol{\Sigma}^{-1}_{S\times S}\|_{\infty}\sqrt{\frac{\log n}{n}}$, which matches the rate of the lower bound for the ``beta-min'' condition in Lasso in a single machine setting (see \cite{wainwright2009sharp}).
Furthermore, the results in Theorems \ref{thm:support} and \ref{thm:supportt} are stated as high-probability statements (``with probability tending to one''). This probability can in fact be written as $1-q_{n}$, where $q_{n}=O(1-\mathbb{P}(\text{supp}(\hat{\bm{\beta}}_{0})\subseteq S))+O(n^{-\gamma})$ is a small quantity that goes to $0$ as both $n$ and $p$ go to $\infty$. More specifically, it depends on the rate at which $\mathbb{P}(\text{supp}(\hat{\bm{\beta}}_{0})\subseteq S)\rightarrow 1$ for the initial estimator $\hat{\bm{\beta}}_0$. Below we provide two further remarks on our method.
\begin{remark}\label{rmk:batchsize}
It is worth noting that we assume the data is evenly split only for ease of discussion. In fact, the local sample size $m$ in our theoretical results is the sample size on the first machine in Algorithm \ref{alg:1} (a.k.a.\ the central machine in distributed computing). As long as the sample size $m$ on the first machine is specified, our method does not depend on how the rest of the dataset is partitioned.
\end{remark}
{
\begin{remark}\label{rmk:independent}
We note that the proposed estimator can be generalized to the case where the noise $e$ and the covariates $\boldsymbol{X}$ are not independent. More specifically, without the independence assumption, we assume $\textsf{P}(e\leq 0 |\boldsymbol{X}) = \tau$ for some specified $\tau \in (0,1)$. The Hessian matrix becomes $\boldsymbol{H}(\boldsymbol{\beta}^*)=\epsilon (\boldsymbol{X}\X^{\rm T}f(0|\boldsymbol{X}))$. Although $\boldsymbol{H}(\boldsymbol{\beta}^*)$ no longer takes the form $\boldsymbol{\Sigma} f(0)$ when the noise depends on the covariates, it can be approximated by
\[
\boldsymbol{D}_h(\boldsymbol{\beta}_0)=
\epsilon\left(\boldsymbol{X}\X^{\rm T}\frac{1}{h}K\left(\frac{Y-\boldsymbol{X}^{\rm T}\boldsymbol{\beta}_0}{h}\right)\right),
\]
for a positive kernel function $K(\cdot)$ (i.e., $K(x) >0$ for all $x$). Let $\widehat{\boldsymbol{\beta}}_{0}$ be an initial estimator of $\boldsymbol{\beta}^{*}$. Given $n$ \emph{i.i.d.} samples $(\boldsymbol{X}_i,Y_i)$ from \eqref{eq:model}, for each $1\le i\le n$, we construct the following quantities:
\[
\gamma_{i,h} = \sqrt{\frac{1}{h}K\left(\frac{Y_i-\boldsymbol{X}_i^{\rm T}\widehat{\boldsymbol{\beta}}_0}{h}\right)},\quad \widetilde{\boldsymbol{X}}_{i,h} = \gamma_{i,h} \boldsymbol{X}_i,\quad \widehat{\boldsymbol{D}}_h = \frac{1}{n} \sum_{i=1}^n \widetilde{\boldsymbol{X}}_{i,h}\widetilde{\boldsymbol{X}}_{i,h}^{\rm T},
\]
\begin{eqnarray*}
\widetilde{Y}_{i,h}=\widetilde{\boldsymbol{X}}_{i,h}^{\rm T}\widehat{\boldsymbol{\beta}}_{0}-\frac{\ind{Y_i\leq\boldsymbol{X}_i^{\rm T}\widehat{\boldsymbol{\beta}}_{0}}-\tau}{\gamma_{i,h}}.
\end{eqnarray*}
Then, we can construct the pooled estimator (i.e., the counterpart of \eqref{eq:pool}) by solving the following Lasso problem with both transformed input $\widetilde{\boldsymbol{X}}_{i,h}$ and response $\widetilde{Y}_{i,h}$:
\begin{eqnarray}\label{eq:dependent}
\widehat{\boldsymbol{\beta}}=\mathop{\rm arg\min}\limits_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\Big{\{} \frac{1}{2n}\sum_{i=1}^{n}(\widetilde{Y}_{i,h}-\widetilde{\boldsymbol{X}}^{\rm T}_{i,h}\boldsymbol{\beta})^{2}+\lambda_{n}|\boldsymbol{\beta}|_{1}\Big{\}}.
\end{eqnarray}
Using a similar distributed approach described in Section \ref{sec:dist}, the pooled estimator in Eq. \eqref{eq:dependent} can be extended into a distributed estimator.
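For concreteness, the transformation above might be sketched as follows; the Gaussian kernel here is merely one strictly positive choice of $K(\cdot)$, and the helper names are illustrative.

```python
import numpy as np

def gaussian_kernel(u):
    """A strictly positive kernel, as required here (K(x) > 0 for all x)."""
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def transform(X, Y, beta0, tau, h, K=gaussian_kernel):
    """Build the transformed pairs (tilde X_{i,h}, tilde Y_{i,h}) above.

    X: (n, p+1) design incl. intercept, Y: (n,) responses,
    beta0: initial estimate, tau: quantile level, h: bandwidth.
    """
    r = Y - X @ beta0                              # residuals Y_i - X_i^T beta0
    gamma = np.sqrt(K(r / h) / h)                  # gamma_{i,h}; needs K > 0
    X_t = gamma[:, None] * X                       # tilde X_{i,h}
    Y_t = X_t @ beta0 - ((r <= 0).astype(float) - tau) / gamma
    return X_t, Y_t
```

Feeding $(\widetilde{\boldsymbol{X}}_{i,h},\widetilde{Y}_{i,h})$ into any standard Lasso solver then yields the estimator in \eqref{eq:dependent}.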
Although the extension to the dependent case seems relatively straightforward, the nonparametric estimation of the conditional density $f(0|\boldsymbol{X})$ suffers from the ``curse of dimensionality'', especially when $\boldsymbol{X}$ is high-dimensional. Without strong assumptions on $f(0|\boldsymbol{X})$, a huge number of local samples is required to construct an accurate estimator $\widehat{\boldsymbol{D}}_{1,h} = \frac{1}{m} \sum_{i\in \mathcal{H}_1} \widetilde{\boldsymbol{X}}_{i,h}\widetilde{\boldsymbol{X}}_{i,h}^{\rm T}$ in the distributed implementation. We leave further investigation of the dependent-noise case to future work.
\end{remark}
}
\section{Simulation Study}\label{sec:sim}
In this section, we report simulation studies that illustrate the performance of our distributed REL.
\subsection{Simulation Setup}
We consider the following linear model
\[
Y_i = \boldsymbol{X}_i^\mathrm{T}\boldsymbol{\beta}^* +e_i, \quad i=1,2,\ldots,n,
\]
where $\boldsymbol{X}_i^\mathrm{T}=(1,X_{i,1},\ldots,X_{i,p})$ is a $(p+1)$-dimensional covariate vector and the $(X_{i,1},\ldots,X_{i,p})$'s are drawn \emph{i.i.d.}\ from a multivariate normal distribution $N(0,\boldsymbol{\Sigma})$. The covariance matrix $\boldsymbol{\Sigma}$ is constructed by $\boldsymbol{\Sigma}_{ij} = 0.5 ^{|i-j|}$ for $1\leq i,j\leq p$. We fix the dimension $p=500$ and choose the loss function to be the QR loss with quantile level $\tau = 0.3$. Other choices of $\tau$ lead to similar results; we provide additional experimental results for $\tau=0.5$ in the appendix. Let $s$ denote the sparsity level; the true coefficient is set to $$\boldsymbol{\beta}^* = (\frac{10}{s},\frac{20}{s},\frac{30}{s},\ldots,\frac{10(s-1)}{s},10,0,0,\ldots,0).$$ We consider the following three noise distributions:
\begin{enumerate}
\item Normal: the noise $e_i \sim \mathrm{N}(0,1)$.
\item Cauchy: the noise $e_i\sim \mathrm{Cauchy}(0,1)$.
\item Exponential: the noise $e_i\sim \mathrm{exp}(1)$.
\end{enumerate}
We note that the Cauchy distribution has no finite variance, so this is a heavy-tailed setting.
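The simulation design above can be sketched as follows. This is a minimal illustration; in particular, the placement of the $s$ nonzero entries relative to the intercept coordinate is an assumption made here for concreteness.

```python
import numpy as np

def make_data(n, p=500, s=20, noise="normal", seed=0):
    """Simulate (X_i, Y_i) from the linear model above."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # Sigma_ij = 0.5^{|i-j|}
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X = np.hstack([np.ones((n, 1)), X])                  # prepend intercept column
    beta = np.zeros(p + 1)
    beta[:s] = 10.0 * np.arange(1, s + 1) / s            # (10/s, 20/s, ..., 10, 0, ...)
    if noise == "normal":
        e = rng.standard_normal(n)
    elif noise == "cauchy":
        e = rng.standard_cauchy(n)
    else:
        e = rng.exponential(1.0, n)                      # exp(1) noise
    return X, X @ beta + e, beta
```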
The initial estimator is computed by directly solving the $\ell_1$-regularized QR problem using only the data on the first machine (see Eq.~\eqref{ag0}). At each iteration, the constant $C_0$ in the regularization parameter $\lambda_{n,g}$ in \eqref{eq:lambda} is chosen by validation: we choose $C_0$ to minimize the quantile loss on an independently generated validation dataset of sample size $n$. Alternatively, one could apply cross-validation or an information criterion such as BIC to choose $\lambda_{n}$.
For the choice of the kernel function $K(\cdot)$, we use a biweight kernel function
\[
K(x) = \begin{cases}
0, & \text{if} \quad x < -1,\\
-\frac{315}{64}x^6+\frac{735}{64}x^4-\frac{525}{64}x^2+\frac{105}{64}, & \text{if} \quad -1\leq x\leq 1,\\
0, & \text{if} \quad x > 1.
\end{cases}
\]
It is easy to verify that $K(\cdot)$ satisfies condition (C3). We also note that other choices of $K(\cdot)$ yield similar results.
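As a quick sanity check on this choice, the snippet below implements $K(\cdot)$ and verifies numerically that it integrates to one and has vanishing second moment (it is a higher-order kernel and takes negative values near $\pm 1$).

```python
import numpy as np

def K(x):
    """The polynomial kernel above, supported on [-1, 1]."""
    x = np.asarray(x, dtype=float)
    poly = (-315 * x**6 + 735 * x**4 - 525 * x**2 + 105) / 64
    return np.where(np.abs(x) <= 1, poly, 0.0)

# Riemann-sum check on a fine grid (endpoints contribute 0 since K(+-1) = 0)
xs = np.linspace(-1.0, 1.0, 200001)
dx = xs[1] - xs[0]
total = float(np.sum(K(xs)) * dx)           # ~1: integrates to one
second = float(np.sum(xs**2 * K(xs)) * dx)  # ~0: vanishing second moment
```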
From Theorems \ref{thm:betainf} and \ref{thm:betainft} in Section \ref{sec:theory}, the bandwidth is set to $h_g =ca_{n,g-1}$ for some constant $c>0$, where $a_{n,g-1}$ is defined in \eqref{eq:a}. In our simulation study, we choose $h_g = \sqrt{\frac{s \log n}{n}}+s^{-1/2}\left(c_0\frac{s^2\log n}{m}\right)^{(g+1)/2}$ (i.e., we set the constant $c=1$) for convenience. The constant $c_0$ is used to ensure that $c_0\frac{s^2\log n}{m}<1$, and we set $c_0=0.1$ in the following experiments. In fact, our algorithm is quite robust to the choice of the bandwidth (see the sensitivity analysis in Section \ref{sec:sensitivity}). All results reported in this section are averages over 100 independent runs.
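The bandwidth schedule used in the simulations can be computed as in the sketch below (the function name is illustrative); note that $h_g$ shrinks with $g$ precisely because $c_0 s^2 (\log n)/m < 1$ in these settings.

```python
import math

def bandwidth(n, m, s, g, c0=0.1):
    """h_g from the simulation study (scaling constant c = 1)."""
    return (math.sqrt(s * math.log(n) / n)
            + s ** (-0.5) * (c0 * s**2 * math.log(n) / m) ** ((g + 1) / 2))
```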
We compare the performance of the proposed distributed REL (dist REL for short) with the following two approaches:
\begin{enumerate}
\item Averaging divide-and-conquer (Avg-DC) which computes the $\ell_1$-regularized QR (see Eq. \eqref{eq:local}) on each local machine and combines the local estimators by taking the average.
\item Robust estimator with Lasso (REL) on a single machine with pooled data (see Eq. \eqref{eq:pool}), which is denoted by pooled REL.
\end{enumerate}
Note that the $\ell_1$-regularized QR estimator in \eqref{eq:QR} and the de-biased averaging divide-and-conquer estimator (see \cite{zhao2014general}) are not included in most comparisons because they are too computationally expensive to implement in our setting with large $n$ and $p$. Moreover, the de-biased estimator produces a dense coefficient estimate due to the de-biasing procedure. In the experiment on computational efficiency, we compare the running time of our method to that of the $\ell_1$-regularized QR estimator; the result shows that our method achieves similar statistical performance while being computationally much more efficient.
\subsection{Effect of the Number of Iterations}
We first show the performance of our distributed REL as the number of iterations varies. We fix the sample size $n=10000$, the local sample size $m=500$, the sparsity level $s=20$, and the dimension $p=500$. We plot the $\ell_2$-error from the true QR coefficient versus the number of iterations. Since Avg-DC only requires one-shot communication, we use a horizontal line to show its performance. The results are shown in Figure \ref{iter}.
\begin{figure}[!ht]
\centering
\addtolength{\leftskip} {-4cm}
\addtolength{\rightskip}{-4cm}
\subfloat[Normal noise]{
\includegraphics[width=0.39\textwidth]{./fig/iterations/fig1.png}
\label{iter_normal}}
\hspace{-1.5em}
\subfloat[Cauchy noise]{
\includegraphics[width=0.39\textwidth]{./fig/iterations/fig2.png}
\label{iter_cauchy}}
\hspace{-1.5em}
\subfloat[Exponential noise]{
\includegraphics[width=0.39\textwidth]{./fig/iterations/fig3.png}
\label{iter_exp}}
\caption{The $\ell_2$-error from the true QR coefficient versus the number of iterations. The sample size $n$ is fixed to $n=10000$ and the local sample size $m$ is 500.}\label{iter}
\end{figure}
From the results, both the pooled REL and the distributed REL outperform the Avg-DC algorithm and become stable after a few iterations. Therefore, for the rest of the experiments in this section, we run the algorithm for 50 iterations. Moreover, the distributed REL almost matches the performance of the pooled REL for all three noise distributions.
\subsection{Effect of the QR Loss Under Heavy-Tailed Noise}
We study the effect of the QR loss in the presence of heavy-tailed noise, comparing against the standard Lasso estimator in a single-machine setting with pooled data. We vary the sample size $n$ and compute the $F_1$-score and the $\ell_2$-error for the distributed REL, pooled REL, Avg-DC, and the Lasso estimator. The $F_1$-score is defined as
\[
F_{1}=\left({\frac {\mathrm {recall} ^{-1}+\mathrm {precision} ^{-1}}{2}}\right)^{-1}=2\cdot {\frac {\mathrm {precision} \cdot \mathrm {recall} }{\mathrm {precision} +\mathrm {recall} }},
\]
which is commonly used as an evaluation of support recovery (an $F_1$-score of 1 implies perfect support recovery). In Tables \ref{robust_normal}, \ref{robust_cauchy} and \ref{robust_exp}, we report the results for all three types of noise.
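Precision, recall, and the $F_1$-score above are computed from the estimated and true supports; a minimal sketch:

```python
import numpy as np

def support_metrics(beta_hat, beta_true):
    """Precision, recall and F1-score of the estimated vs. true support."""
    S_hat = set(np.flatnonzero(beta_hat))   # estimated support
    S = set(np.flatnonzero(beta_true))      # true support
    tp = len(S_hat & S)                     # correctly selected variables
    precision = tp / len(S_hat) if S_hat else 0.0
    recall = tp / len(S) if S else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```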
\begin{table}
\caption{The $F_1$-score and $\ell_2$-error of the distributed REL, pooled REL, Avg-DC, and Lasso estimator under different sample size $n$. Noises are generated from normal distribution. The local sample size is fixed to $m=500$.\label{robust_normal}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{$n$} & \multicolumn{2}{c|}{Dist REL} & \multicolumn{2}{c|}{Pooled REL} & \multicolumn{2}{c|}{Avg-DC} & \multicolumn{2}{c}{Lasso} \\ \cline{2-9}
& $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error \\ \hline
2500 & 0.90 & 0.189 & 0.83 & 0.183 & 0.23 & 0.255 & 1.00 & 0.161 \\
5000 & 0.95 & 0.138 & 0.91 & 0.132 & 0.14 & 0.221 & 1.00 & 0.113 \\
10000 & 0.97 & 0.102 & 0.93 & 0.097 & 0.10 & 0.203 & 1.00 & 0.079 \\
15000 & 0.98 & 0.085 & 0.96 & 0.083 & 0.09 & 0.196 & 1.00 & 0.065 \\
20000 & 0.99 & 0.073 & 0.96 & 0.069 & 0.08 & 0.192 & 1.00 & 0.056 \\
25000 & 0.99 & 0.067 & 0.97 & 0.050 & 0.08 & 0.196 & 1.00 & 0.050 \\ \hline
\end{tabular}%
}
\end{table}
\begin{table}
\caption{The $F_1$-score and $\ell_2$-error of the distributed REL, pooled REL, Avg-DC, and Lasso estimator under different sample size $n$. Noises are generated from Cauchy distribution. The local sample size is fixed to $m=500$.\label{robust_cauchy}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{$n$} & \multicolumn{2}{c|}{Dist REL} & \multicolumn{2}{c|}{Pooled REL} & \multicolumn{2}{c|}{Avg-DC} & \multicolumn{2}{c}{Lasso} \\ \cline{2-9}
& $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error \\ \hline
2500 & 0.84 & 0.320 & 0.75 & 0.312 & 0.25 & 0.436 & 0.25 & 151.4 \\
5000 & 0.92 & 0.229 & 0.85 & 0.221 & 0.16 & 0.380 & 0.26 & 138.8 \\
10000 & 0.96 & 0.168 & 0.89 & 0.160 & 0.11 & 0.349 & 0.27 & 128.3 \\
15000 & 0.98 & 0.139 & 0.92 & 0.132 & 0.09 & 0.338 & 0.25 & 132.1 \\
20000 & 0.97 & 0.118 & 0.93 & 0.113 & 0.08 & 0.329 & 0.26 & 121.0 \\
25000 & 0.98 & 0.107 & 0.94 & 0.101 & 0.08 & 0.330 & 0.23 & 120.8 \\ \hline
\end{tabular}%
}
\end{table}
\begin{table}
\caption{The $F_1$-score and $\ell_2$-error of the distributed REL, pooled REL, Avg-DC, and Lasso estimator under different sample size $n$. Noises are generated from exponential distribution. The local sample size is fixed to $m=500$.\label{robust_exp}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{$n$} & \multicolumn{2}{c|}{Dist REL} & \multicolumn{2}{c|}{Pooled REL} & \multicolumn{2}{c|}{Avg-DC} & \multicolumn{2}{c}{Lasso} \\ \cline{2-9}
& $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error \\ \hline
2500 & 0.96 & 0.093 & 0.91 & 0.089 & 0.25 & 0.115 & 1.00 & 0.102 \\
5000 & 0.98 & 0.069 & 0.92 & 0.066 & 0.15 & 0.101 & 1.00 & 0.094 \\
10000 & 0.99 & 0.051 & 0.96 & 0.048 & 0.10 & 0.092 & 1.00 & 0.069 \\
15000 & 0.99 & 0.043 & 0.97 & 0.040 & 0.09 & 0.089 & 1.00 & 0.054 \\
20000 & 1.00 & 0.037 & 0.98 & 0.034 & 0.08 & 0.086 & 1.00 & 0.048 \\
25000 & 0.99 & 0.033 & 0.98 & 0.031 & 0.08 & 0.087 & 1.00 & 0.043 \\ \hline
\end{tabular}%
}
\end{table}
As expected, when the noise is normal, the Lasso estimator has a smaller $\ell_2$-error and better support recovery. However, when the noise has a slightly heavier tail (e.g., exponential noise), both the distributed REL and the pooled REL outperform the Lasso estimator in $\ell_2$-error. In the case of heavy-tailed noise (e.g., Cauchy noise), the Lasso approach fails completely with very large $\ell_2$-errors, while the distributed REL is much better in both $\ell_2$-error and support recovery. It is clear that the Lasso estimator is not robust to heavy-tailed noise, and we therefore omit it from the rest of the simulation studies.
Another interesting phenomenon revealed in Tables \ref{robust_normal}--\ref{robust_exp} is that, in terms of the $F_1$-score, the distributed REL is slightly better than the pooled REL. This is an effect of the selection of the regularization parameter $\lambda_n$. According to Theorem \ref{thm:betainf}, we set $\lambda_n$ for the first round on the order of $s \log n/m$, where $m$ is the local sample size and $n$ the total sample size. For the pooled estimator, where $m=n$, this term becomes $s \log n/n$, which is smaller. Therefore, our distributed estimator already eliminates more features in the first round due to the larger regularization parameter, which leads to slightly better precision. This also happens in the following experiments.
\subsection{Effect of Sample Size and Local Sample Size}
\label{sec:effect}
In this section, we investigate how the performance of the distributed REL changes with the total sample size $n$ and the local sample size $m$. We also compare our estimator with the Communication-efficient Surrogate Likelihood (CSL) estimator proposed in \cite{jordan2018communication}. The original method in \cite{jordan2018communication} requires second-order differentiable loss functions and is thus not directly applicable to the quantile loss; we therefore adopt a smoothing technique for the QR loss as in \cite{horowitz1998bootstrap, chen2019}. We fix the sparsity level $s=20$, $p=500$, and vary the sample size $n\in\{5000,10000,20000\}$ and the local sample size $m\in\{200,500,1000\}$. The precision and recall of support recovery and the $\ell_2$-error are reported for each estimator. The results are shown in Tables \ref{mn_normal}, \ref{mn_cauchy} and \ref{mn_exp}.
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the four estimators under different combinations of the sample size $n$ and local sample size $m$. Noises are generated from the normal distribution.\label{mn_normal}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
\multicolumn{2}{c|}{$m$} & \multicolumn{3}{c|}{200} & \multicolumn{3}{c|}{500} & \multicolumn{3}{c}{1000} \\ \hline
\multicolumn{2}{c|}{$n$} & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.79 & 0.85 & 0.92 & 0.79 & 0.89 & 0.93 & 0.78 & 0.85 & 0.92 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.136 & 0.098 & 0.071 & 0.138 & 0.101 & 0.073 & 0.135 & 0.100 & 0.072 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 0.98 & 0.99 & 1.00 & 0.91 & 0.95 & 0.98 & 0.83 & 0.89 & 0.95 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.154 & 0.111 & 0.081 & 0.142 & 0.105 & 0.076 & 0.137 & 0.102 & 0.074 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.05 & 0.04 & 0.04 & 0.08 & 0.06 & 0.05 & 0.13 & 0.08 & 0.06 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.348 & 0.328 & 0.314 & 0.225 & 0.205 & 0.199 & 0.180 & 0.156 & 0.145 \\ \hline
\multirow{3}{*}{CSL} & Precision & 0.86 & 0.85 & 0.88 & 0.08 & 1.00 & 1.00 & 1.00 & 1.00& 1.00 \\
& Recall & 0.95 & 0.93 & 0.94 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.480 & 0.455 & 0.452 & 0.218 & 0.201 & 0.190 & 0.154 & 0.141 & 0.098 \\ \hline
\end{tabular}
}
\end{table}
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the four estimators under different combinations of the sample size $n$ and local sample size $m$. Noises are generated from the Cauchy distribution.\label{mn_cauchy}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
\multicolumn{2}{c|}{$m$} & \multicolumn{3}{c|}{200} & \multicolumn{3}{c|}{500} & \multicolumn{3}{c}{1000} \\ \hline
\multicolumn{2}{c|}{$n$} & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.72 & 0.84 & 0.89 & 0.75 & 0.82 & 0.88 & 0.70 & 0.81 & 0.87 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.220 & 0.159 & 0.118 & 0.221 & 0.161 & 0.116 & 0.221 & 0.156 & 0.114 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 0.98 & 0.99 & 1.00 & 0.86 & 0.91 & 0.95 & 0.76 & 0.87 & 0.92 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.251 & 0.181 & 0.134 & 0.230 & 0.169 & 0.122 & 0.223 & 0.158 & 0.117 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.05 & 0.04 & 0.04 & 0.08 & 0.06 & 0.04 & 0.14 & 0.08 & 0.06 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.704 & 0.671 & 0.667 & 0.375 & 0.355 & 0.332 & 0.291 & 0.245 & 0.235 \\ \hline
\multirow{3}{*}{CSL} & Precision & 0.09 & 0.12 & 0.12 & 0.28 & 0.35 & 0.48 & 0.64 & 0.77& 0.89 \\
& Recall & 0.91 & 0.93 & 0.90 & 0.97 & 0.97 & 0.98 & 0.98 & 0.98 & 0.99 \\
& $\ell_2$-error & 0.834 & 0.790 & 0.728 & 0.324 & 0.327 & 0.312 & 0.255 & 0.195 & 0.171 \\ \hline
\end{tabular}
}
\end{table}
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the four estimators under different combinations of the sample size $n$ and local sample size $m$. Noises are generated from the exponential distribution.\label{mn_exp}}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
\multicolumn{2}{c|}{$m$} & \multicolumn{3}{c|}{200} & \multicolumn{3}{c|}{500} & \multicolumn{3}{c}{1000} \\ \hline
\multicolumn{2}{c|}{$n$} & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 & 5000 & 10000 & 20000 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.90 & 0.98 & 0.98 & 0.88 & 0.94 & 0.96 & 0.86 & 0.93 & 0.96 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.060 & 0.045 & 0.031 & 0.059 & 0.042 & 0.032 & 0.059 & 0.042 & 0.030 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 1.00 & 1.00 & 1.00 & 0.95 & 0.98 & 0.99 & 0.91 & 0.95 & 0.98 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.076 & 0.061 & 0.043 & 0.062 & 0.044 & 0.034 & 0.060 & 0.042 & 0.031 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.05 & 0.04 & 0.04 & 0.07 & 0.06 & 0.04 & 0.15 & 0.09 & 0.05 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.168 & 0.162 & 0.154 & 0.090 & 0.084 & 0.079 & 0.072 & 0.062 & 0.054 \\ \hline
\multirow{3}{*}{CSL} & Precision & 0.86 & 0.85 & 0.88 & 0.08 & 1.00 & 1.00 & 1.00 & 1.00& 1.00 \\
& Recall & 0.95 & 0.93 & 0.94 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.480 & 0.455 & 0.452 & 0.218 & 0.201 & 0.190 & 0.154 & 0.141 & 0.098 \\ \hline
\end{tabular}
}
\end{table}
\begin{figure}[p]
\centering
\addtolength{\leftskip} {-4cm}
\addtolength{\rightskip}{-4cm}
\subfloat[Normal noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/m.png}}
\hspace{-1.5em}
\subfloat[Cauchy noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/mu.png}}
\hspace{-1.5em}
\subfloat[Exponential noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/me.png}}
\caption{The $\ell_2$-error from the true QR coefficient versus the local sample size $m$, with the total sample size fixed to $n = 20000$.}\label{m}
\end{figure}
\begin{figure}[p]
\centering
\addtolength{\leftskip} {-4cm}
\addtolength{\rightskip}{-4cm}
\subfloat[Normal noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/m_f1.png}}
\hspace{-1.5em}
\subfloat[Cauchy noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/mu_f1.png}}
\hspace{-1.5em}
\subfloat[Exponential noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/me_f1.png}}
\caption{The $F_1$-score versus the local sample size $m$, with the total sample size fixed to $n = 20000$.}\label{m_f1}
\end{figure}
From the results, we observe that both the distributed REL and the pooled REL outperform the Avg-DC algorithm and the CSL estimator in all settings. The $\ell_2$-error of the distributed REL improves as the local sample size $m$ grows and becomes close to that of the pooled REL when $m$ is large. This is expected since the pooled REL is a special case of the distributed REL with $m=n$. We also observe that the precision and recall of the distributed REL are both close to 1, which indicates good support recovery. In particular, the recall of our distributed REL is always 1, implying that all the relevant variables are selected, and its precision is close to 1, indicating that only a very small number of irrelevant variables are selected. On the other hand, the precision of Avg-DC is very small because the averaging procedure results in a dense estimator, especially when $m$ is small. In addition, the performance of the CSL estimator heavily depends on $m$; for example, for the Cauchy error distribution in Table \ref{mn_cauchy}, a smaller $m$ leads to relatively poor performance.
For better visualization, with the sample size $n=20000$ fixed, we vary the local sample size $m$ and plot the $\ell_2$-error and $F_1$-score of pooled REL, distributed REL and Avg-DC estimator.
The results are presented in Figures \ref{m} and \ref{m_f1}. Similarly, in Figures \ref{n} and \ref{n_f1}, we fix the local sample size $m=500$ and vary the total sample size $n$.
\begin{figure}[!t]
\centering
\addtolength{\leftskip} {-4cm}
\addtolength{\rightskip}{-4cm}
\subfloat[Normal noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/n.png}}
\hspace{-1.5em}
\subfloat[Cauchy noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/nu.png}}
\hspace{-1.5em}
\subfloat[Exponential noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/ne.png}}
\caption{The $\ell_2$-error from the true QR coefficient versus the sample size $n$, with the local sample size fixed to $m=500$.}\label{n}
\end{figure}
\begin{figure}[!t]
\centering
\addtolength{\leftskip} {-4cm}
\addtolength{\rightskip}{-4cm}
\subfloat[Normal noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/n_f1.png}}
\hspace{-1.5em}
\subfloat[Cauchy noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/nu_f1.png}}
\hspace{-1.5em}
\subfloat[Exponential noise]{
\includegraphics[width=0.39\textwidth]{./fig/plotmn/ne_f1.png}}
\caption{The $F_1$-score versus the sample size $n$, with the local sample size fixed to $m=500$.}\label{n_f1}
\end{figure}
From Figure \ref{m} we can see that the $\ell_2$-error of the distributed REL is close to that of the pooled REL when $m$ is not too small, and both outperform the Avg-DC estimator. From Figure \ref{n} we observe that the $\ell_2$-error of the distributed REL is close to that of the pooled REL and both errors decrease as the sample size $n$ grows. In contrast, the $\ell_2$-error of the Avg-DC estimator stays large and fails to converge as $n$ increases. From Figures \ref{m_f1} and \ref{n_f1} we can see that the $F_1$-scores of both the distributed REL and the pooled REL are close to 1, while the Avg-DC approach clearly fails at support recovery in high-dimensional settings.
\subsection{Sensitivity Analysis for the Bandwidth}\label{sec:sensitivity}
In this section, we study the sensitivity of the proposed REL to the scaling constant in the bandwidth. Recall that the bandwidth is $h = ca_{n,g}$, where $a_{n,g}$ is defined in \eqref{eq:a} and $c>0$ is the scaling constant. We vary the sample size $n$ and the constant $c$ from $0.5$ to $10$, and compute the $F_1$-score and the $\ell_2$-error of the distributed REL, the pooled REL, and the Avg-DC estimator. Due to space limitations, we report the Cauchy noise case as an example; for the other noise distributions, the performance is even less sensitive. The results are shown in Table \ref{bandwidth}.
\begin{table}
\caption{The $F_1$-score and $\ell_2$-error of the distributed REL, pooled REL, and Avg-DC under different sample size $n$ and choices of bandwidth constant $c$. Local sample size $m=500$. Noises are generated from Cauchy distribution.\label{bandwidth}}
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{c|c|cc|cc|cc}
\hline
\multirow{2}{*}{$n$} & \multirow{2}{*}{$c$} & \multicolumn{2}{c|}{Dist REL} & \multicolumn{2}{c|}{Pooled REL} & \multicolumn{2}{c}{Avg-DC} \\ \cline{3-8}
& & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error & $F_1$-score & $\ell_2$-error \\ \hline
5000 & 0.5 & 0.99 & 0.249 & 0.96 & 0.236 & 0.17 & 0.377 \\
10000 & 0.5 & 1.00 & 0.183 & 0.99 & 0.171 & 0.12 & 0.356 \\
20000 & 0.5 & 0.99 & 0.130 & 0.99 & 0.123 & 0.09 & 0.348 \\ \hline
5000 & 1 & 0.99 & 0.253 & 0.96 & 0.241 & 0.16 & 0.373 \\
10000 & 1 & 0.99 & 0.179 & 0.98 & 0.170 & 0.11 & 0.345 \\
20000 & 1 & 1.00 & 0.125 & 0.98 & 0.117 & 0.09 & 0.328 \\ \hline
5000 & 2 & 0.99 & 0.259 & 0.97 & 0.245 & 0.16 & 0.38 \\
10000 & 2 & 1.00 & 0.188 & 0.98 & 0.177 & 0.11 & 0.347 \\
20000 & 2 & 1.00 & 0.131 & 0.99 & 0.124 & 0.09 & 0.332 \\ \hline
5000 & 5 & 0.99 & 0.255 & 0.97 & 0.239 & 0.16 & 0.378 \\
10000 & 5 & 1.00 & 0.185 & 0.98 & 0.173 & 0.11 & 0.349 \\
20000 & 5 & 1.00 & 0.138 & 0.98 & 0.124 & 0.09 & 0.339 \\ \hline
5000 & 10 & 1.00 & 0.270 & 0.99 & 0.252 & 0.16 & 0.382 \\
10000 & 10 & 1.00 & 0.194 & 0.99 & 0.180 & 0.1 & 0.346 \\
20000 & 10 & 1.00 & 0.136 & 0.98 & 0.121 & 0.09 & 0.331 \\ \hline
\end{tabular}
}
\end{table}
From Table \ref{bandwidth}, we observe that both the distributed REL and the pooled REL exhibit good performance under all choices of the bandwidth constant. Therefore, even under a suboptimal choice of the bandwidth constant, the distributed REL still achieves a small $\ell_2$-error and good support recovery.
\subsection{Effect of the Sparsity}
In this section, we investigate how the performance of the distributed REL algorithm changes with the sparsity level of the true coefficient $\boldsymbol{\beta}^*$. We fix the sample size $n = 10000$ and the local sample size $m = 500$, and we set the constant $c_0$ in $h_g$ to be $0.01$. Recall that the true coefficient is set to $$\boldsymbol{\beta}^* = (\frac{10}{s},\frac{20}{s},\frac{30}{s},\ldots,\frac{10(s-1)}{s},10,0,0,\ldots,0).$$ We vary the sparsity level $s$ in $\{5,10,20,30,50,100\}$ and report the precision, recall, and $\ell_2$-error. Since the $\ell_2$-norm of the true coefficient $\boldsymbol{\beta}^*$ changes with the sparsity level $s$, we also report the relative $\ell_2$-error, defined as $|\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*|_2/|\boldsymbol{\beta}^*|_2$. The results are shown in Tables \ref{s_homo}, \ref{s_cauchy}, and \ref{s_exp}.
From the results, we observe that the $\ell_2$-errors of all three estimators become larger as the sparsity level $s$ increases, and that the distributed REL algorithm performs much better than the Avg-DC algorithm. Moreover, the performance of the distributed REL is very close to that of the pooled REL.
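The simulated coefficient and the scale-free error reported in the tables can be sketched as follows; the dimension $p=12$ below is only illustrative.

```python
import numpy as np

def make_beta_star(s, p):
    """True coefficient from the simulation: the first s entries are
    10*j/s for j = 1, ..., s; the remaining p - s entries are zero."""
    beta = np.zeros(p)
    beta[:s] = 10.0 * np.arange(1, s + 1) / s
    return beta

def relative_l2_error(beta_hat, beta_star):
    """Relative l2-error |beta_hat - beta*|_2 / |beta*|_2."""
    return np.linalg.norm(beta_hat - beta_star) / np.linalg.norm(beta_star)

# Because |beta*|_2 grows with s, the relative error is the
# scale-free quantity that is comparable across sparsity levels.
beta_star = make_beta_star(s=5, p=12)
rel = relative_l2_error(beta_star + 0.1, beta_star)
```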
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the three estimators with different sparsity level $s$. Noises are generated from normal distribution. The local sample size is fixed to $m=500$.\label{s_homo}}
\centering
\begin{tabular}{c|c|cccccc}
\hline
\multicolumn{2}{c|}{Sparsity $s$} & 5 & 10 & 20 & 30 & 50 & 100 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.98 & 0.96 & 0.86 & 0.82 & 0.73 & 0.66 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.063 & 0.080 & 0.096 & 0.117 & 0.141 & 0.191 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.426 & 0.408 & 0.360 & 0.361 & 0.341 & 0.329 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 1.00 & 0.98 & 0.94 & 0.93 & 0.91 & 0.88 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.065 & 0.082 & 0.101 & 0.123 & 0.150 & 0.202 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.441 & 0.418 & 0.379 & 0.379 & 0.363 & 0.347 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.02 & 0.03 & 0.06 & 0.08 & 0.11 & 0.20 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.147 & 0.175 & 0.204 & 0.243 & 0.280 & 0.368 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.988 & 0.890 & 0.760 & 0.751 & 0.675 & 0.633 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the three estimators with different sparsity level $s$. Noises are generated from Cauchy distribution. The local sample size is fixed to $m=500$.\label{s_cauchy}}
\centering
\begin{tabular}{c|c|cccccc}
\hline
\multicolumn{2}{c|}{Sparsity $s$} & 5 & 10 & 20 & 30 & 50 & 100 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.95 & 0.91 & 0.79 & 0.73 & 0.66 & 0.64 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.103 & 0.129 & 0.156 & 0.186 & 0.230 & 0.318 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.696 & 0.656 & 0.581 & 0.574 & 0.555 & 0.547 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 0.97 & 0.95 & 0.91 & 0.87 & 0.86 & 0.84 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.105 & 0.132 & 0.163 & 0.194 & 0.239 & 0.330 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.709 & 0.674 & 0.608 & 0.598 & 0.578 & 0.567 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.02 & 0.04 & 0.06 & 0.07 & 0.11 & 0.20 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.264 & 0.319 & 0.347 & 0.419 & 0.542 & 0.885 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 1.779 & 1.628 & 1.295 & 1.293 & 1.308 & 1.252 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{The $\ell_2$-error, precision, and recall of the three estimators with different sparsity level $s$. Noises are generated from exponential distribution. The local sample size is fixed to $m=500$.\label{s_exp}}
\centering
\begin{tabular}{c|c|cccccc}
\hline
\multicolumn{2}{c|}{Sparsity $s$} & 5 & 10 & 20 & 30 & 50 & 100 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Pooled\\ REL\end{tabular}} & Precision & 0.97 & 0.97 & 0.95 & 0.92 & 0.87 & 0.79 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.026 & 0.034 & 0.043 & 0.049 & 0.062 & 0.080 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.178 & 0.171 & 0.160 & 0.151 & 0.149 & 0.138 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Dist\\ REL\end{tabular}} & Precision & 0.99 & 0.99 & 0.98 & 0.98 & 0.98 & 0.99 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.027 & 0.035 & 0.045 & 0.052 & 0.066 & 0.092 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.185 & 0.180 & 0.169 & 0.161 & 0.160 & 0.158 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Avg\\ DC\end{tabular}} & Precision & 0.02 & 0.04 & 0.05 & 0.07 & 0.11 & 0.20 \\
& Recall & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
& $\ell_2$-error & 0.054 & 0.065 & 0.083 & 0.099 & 0.113 & 0.151 \\
& Relative $\ell_2$-error($\times 10^{-2}$) & 0.365 & 0.329 & 0.311 & 0.305 & 0.273 & 0.260 \\ \hline
\end{tabular}
\end{table}
\subsection{Computation Time Comparison}\label{sec:time}
We further study the computational efficiency of our proposed estimator. We fix the local sample size $m$ and the dimension $p$, and vary the sample size $n$. In Table \ref{time}, we report the $F_1$-score, $\ell_2$-error, and the computation time of the distributed REL, the pooled REL, Avg-DC, and the $\ell_1$-regularized QR estimator. To solve the $\ell_1$-regularized QR estimator, we formulate it as a standard linear program (LP) and solve it with Gurobi \citep{gurobi}, a state-of-the-art LP solver. We implement the three distributed algorithms (distributed REL, pooled REL, and Avg-DC) in a fully synchronized distributed setting.
\begin{table}
\caption{The $F_1$-score, $\ell_2$-error, and computation time of the distributed REL, pooled REL, Avg-DC, and $\ell_1$-regularized QR estimator under different sample size $n$. Noises are generated from Cauchy distribution. The local sample size is fixed to $m=500$.\label{time}}
\centering
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}{*}{$n$} & \multicolumn{3}{c|}{Dist REL} &\multicolumn{3}{c}{Pooled REL} \\ \cline{2-7}
& $F_1$-score & $\ell_2$-error & Time & $F_1$-score & $\ell_2$-error & Time \\ \hline
5000 & 0.95 & 0.137 & 0.40 &0.90 & 0.132 & 0.44\\
10000 & 0.97 & 0.099 & 0.42 & 0.92 & 0.095 & 0.45\\
15000 & 0.98 & 0.083 & 0.42 & 0.95 & 0.080 & 0.47 \\
20000 & 0.99 & 0.074 & 0.44 & 0.96 & 0.071 & 0.48\\
\hline
\multirow{2}{*}{$n$} & \multicolumn{3}{c|}{Avg-DC} & \multicolumn{3}{c}{$\ell_1$-QR} \\ \cline{2-7}
& $F_1$-score & $\ell_2$-error & Time & $F_1$-score & $\ell_2$-error & Time \\ \hline
5000 & 0.15 & 0.223 & 2.82 & 0.95& 0.132& 159.6\\
10000 & 0.10 & 0.202 & 3.08 & 0.97& 0.091& 576.1\\
15000 & 0.09 & 0.198 & 3.07 & 0.98& 0.077& 1223.1\\
20000 & 0.08 & 0.192 & 3.15 & 0.99& 0.068& 2059.3\\ \hline
\end{tabular}%
\end{table}
From Table \ref{time} we can see that the distributed REL is much faster than the $\ell_1$-regularized QR estimator. In fact, for larger sample sizes (i.e., $n> 20000$), we cannot run the $\ell_1$-regularized QR method due to memory and computation time issues. We also note that the computation time of the pooled REL is similar to that of the distributed version. This is because, for comparison purposes, the simulated datasets can still be fully stored in memory, so the pooled REL takes advantage of solving the entire optimization problem in memory. For large-scale datasets that cannot be stored in memory, the pooled REL is no longer applicable.
\section{Conclusions and Future Directions}\label{sec:conclusion}
In this paper, we address the problem of distributed estimation for high-dimensional linear models in the presence of heavy-tailed noise. The proposed method achieves the same convergence rate as the ideal case with pooled data. Furthermore, we establish the support recovery guarantee of the proposed method. One key insight from this work is that a non-smooth loss can be transformed into a smooth one by constructing a new response. Our method is essentially an iterative refinement approach in a distributed environment, which is superior to the averaging divide-and-conquer scheme.
One important future direction is to further investigate the inference problem. We note that \cite{zhao2014general} first provided an inference result based on averaging de-biased local QR estimators. As we mentioned, this approach might suffer from a heavy computational cost and requires a condition on the number of machines. It would be interesting to develop computationally efficient inference approaches without any restriction on the number of machines. Moreover, the idea of transforming to an $\ell_1$-regularized least-squares problem and the iterative distributed implementation can be generalized to other high-dimensional problems, e.g., $\ell_1$-regularized Huber regression in robust statistics. Our algorithm can also be generalized to handle other sparsity-inducing penalties, such as SCAD or MCP \citep{Fan-Li01,Zhang10}. Deriving the corresponding theoretical results for other sparsity-inducing penalties would be another interesting future direction.
\newpage
\section{Introduction \label{intro}}
A/B experimentation or A/B testing is a method for evaluating software changes in a quantifiable manner.
Continuous A/B testing is an important method in understanding and delivering measurable customer value.
Many web-facing companies have demonstrated success from A/B experiments, such as Booking.com \cite{Fabijan2018}, Google \cite{google2010} and Microsoft \cite{Kohavi2013, Gupta2018, Li2019}, just to list a few.
With the digitalisation of the automotive industry, software is becoming a main differentiator of products \cite{Mattos2018}.
A/B testing is an effective tool to evaluate software and support organisations in making data-driven decisions \cite{Fabijan2017a}.
However, the adoption of continuous A/B experiments in automotive embedded software is not without challenges.
Embedded software is subject to hardware constraints, which can manifest as limited computational power \cite{Giaimo2017}, long release cycles \cite{Mattos2018}, and often a dependency on suppliers \cite{Mattos2020}.
Data collection and handling are also believed to be challenging in automotive-specific applications \cite{Giaimo2019, Mattos2020}.
Although a fair number of publications point out the challenges in A/B experiment adoption \cite{Giaimo2017, Mattos2018, Giaimo2019, Mattos2020}, we identified a gap in the literature concerning architectural solutions to enable A/B experiments.
Furthermore, there are few, if any, reports on concluded or ongoing online A/B experiments in the automotive domain.
In this paper, we present an architecture that enables A/B experiments in the automotive domain and aim to address the challenges that are unique to this industry.
We present a literature review of A/B experiment architecture in embedded and web-facing environments.
Moreover, we conducted a case study of the architecture applied at scale and report the state-of-practice of A/B testing in the automotive industry.
Compared to the existing literature, the contribution of this paper is two-fold.
First, we present an architecture that enables A/B testing of automotive software.
We reviewed the literature and did not find a similar architecture for A/B experiments.
Secondly, we apply this architecture in practice, in fleets of considerable scale.
We present the case study and state-of-practice of two other automotive companies.
The rest of this paper is organised as follows.
In \cref{background}, we introduce the unique constraints in automotive industry for A/B testing.
In \cref{method}, we present our research method.
We summarise the existing A/B experiment frameworks and architecture in \cref{otherworks}.
In \cref{design_main}, we present our architecture design along with the case studies.
Discussions and conclusion are presented in \cref{discussion} and \cref{conclusion}.
\section{Background and constraints \label{background}}
In this section, we introduce the background on A/B testing and list the constraints of adopting the method in automotive embedded software.
\subsection{Background}
A/B testing is a type of continuous experimentation where users or systems are split into subgroups and issued with different variants of the same software.
By studying the responses from each cohort, A/B experiments can guide product development in an effective manner \cite{Kohavi2013, google2010, Fabijan2018}.
Typically, eligible users are split into two groups, the A version (control) and the B version (treatment).
For both user groups, their interactions with the functions are recorded and evaluated based on a set of carefully designed metrics reflecting business and/or customer values \cite{Dmitriev2017}.
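As a concrete sketch of this splitting mechanism, a deterministic hash of a user or vehicle identifier can assign the control and treatment groups without storing per-user state; the experiment name, the 50/50 share, and the metric aggregation below are illustrative assumptions, not part of any cited framework.

```python
import hashlib

def assign_group(user_id, experiment="exp-1", treatment_share=0.5):
    """Deterministically map a user to 'A' (control) or 'B' (treatment).

    Hashing the user id together with the experiment name gives a
    stable, reproducible split that differs across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000   # uniform in [0, 1)
    return "B" if bucket < treatment_share else "A"

def mean_by_group(observations):
    """observations: list of (group, metric_value) pairs."""
    sums, counts = {"A": 0.0, "B": 0.0}, {"A": 0, "B": 0}
    for g, v in observations:
        sums[g] += v
        counts[g] += 1
    return {g: sums[g] / max(counts[g], 1) for g in ("A", "B")}

# The treatment effect estimate is the difference of the group means.
groups = [assign_group(f"user-{i}") for i in range(1000)]
```

Because the assignment is a pure function of the identifier, a vehicle always sees the same variant without any central bookkeeping, which matters when connectivity is intermittent.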
Almost all well-established A/B testing frameworks are for web-facing businesses.
Such frameworks or models cannot be applied directly in an embedded environment as they do not address its specific challenges.
These challenges arise from many aspects: they can be technical, business-related, and organisational, as demonstrated by Mattos \textit{et al.}\ \cite{Mattos2018}.
As embedded software often depends on hardware, fast software releases become difficult to accomplish \cite{Giaimo2017, Giaimo2019, Mattos2020}.
Although challenging to adopt, many advantages of continuous experiments that were proven in web-facing businesses are also expected in the automotive industry \cite{Giaimo2019}.
\subsection{Constraints}
In addition to the challenges summarised in the literature \cite{Mattos2018, Giaimo2019, Mattos2020}, we list the specific constraints in the automotive domain that motivate our architecture design.
Automotive embedded software is distributed over hundreds of Electronic Control Units (ECUs).
Such software is traditionally developed using the ``V-model'', where the OEMs deliver specifications and suppliers deliver implementations \cite{Forsberg1992}.
This model has shown its limitations.
\subsubsection{Release cycles and speed\label{background_speed}}
Owing to strict standards combined with growing complexity, the automotive software release process is rigid.
First, the development and release of automotive embedded software is usually strongly dependent on suppliers.
Secondly, automotive companies have traditionally designed software release cycles based on their hardware release process \cite{Bosch2012}.
This process cannot handle rapid changes, as all integration and tests are planned at fixed periods.
Moreover, the most commonly adopted automotive software architecture AUTOSAR \footnote{\href{https://www.autosar.org/}{https://www.autosar.org/}} lacks flexibility in partial updates \cite{Mattos2020}.
If the new software is not backwards compatible, all ECUs in the vehicle need to be updated.
Last but not least, updating software that is governed by legislation might require renewal of certifications, adding further delays to the software release process.
\subsubsection{Sample size and management\label{background_sample_size}}
Controlling boundary conditions is impossible for online experiments, as vehicles can be driven anywhere and at any time.
Therefore, to detect treatment effects with sufficient confidence, A/B experiments need to be conducted on large, randomly selected sample groups.
This large group of users needs to be managed as online experiments require a flexible configuration of A/B or A/B/n groups.
However, the sample groups are difficult to manipulate when the software needs to be updated through physical contact with the cars.
The same challenge arises when an A/B test is concluded and the software needs to be reverted to the original version.
Managing sample groups longitudinally can be burdensome.
The performance of some automotive functions depends on temporal factors and exhibits seasonality effects; thus, experiments need to be conducted longitudinally.
Therefore, the ability to orchestrate the A/B groups over time is beneficial.
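As a rough illustration of why large, randomly selected fleets are needed, the textbook two-sample power calculation below estimates the vehicles required per group; the significance level, power, and effect sizes are illustrative assumptions, not figures from any studied company.

```python
import math

def samples_per_group(effect, sigma, z_alpha=1.96, z_power=0.8416):
    """Vehicles needed per group to detect a mean shift of `effect`
    against noise level `sigma`, at 5% two-sided significance and 80%
    power (the z-quantiles are hard-coded for those standard choices).

    Standard two-sample normal approximation:
        n = 2 * (z_alpha + z_power)^2 * (sigma / effect)^2
    """
    return math.ceil(2 * (z_alpha + z_power) ** 2 * (sigma / effect) ** 2)

# Detecting a 1% mean shift when the noise is 10% of the mean already
# requires on the order of 1,600 vehicles per group.
n_needed = samples_per_group(effect=0.01, sigma=0.10)
```

The quadratic dependence on $\sigma/\mathrm{effect}$ is why uncontrolled driving conditions (large $\sigma$) translate directly into large fleets.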
\subsubsection{Data infrastructure\label{background_data}}
To conclude a causal effect of the treatment, data collection for A/B experiments requires a certain level of accuracy.
Storing such data locally in each vehicle is not feasible, as the data become difficult to access and would require large on-board memory.
The success of an A/B experiment relies largely on appropriate assumptions when designing an experiment and fast feedback when conducting one.
Sharing data within a large organisation can be problematic \cite{Fabijan2016}.
In order to maximise the value of the data, all development teams need easy access to relevant data.
Without such access, companies suffer from misrepresentation of customer value.
\subsubsection{Safety requirements and fallback\label{background_fallback}}
Automotive software has high safety requirements.
In an A/B test, no alternative version may ever violate requirements that affect road safety and/or legal compliance.
Safeguarding these functional requirements while continuously releasing alternative versions seems impossible today.
Another practice to decrease hazards on the road is to have a built-in fallback for safety-critical functions.
For instance, one could install both the A and B alternatives on-board; the thoroughly tested and validated A alternative can then serve as a fallback.
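The on-board fallback idea can be sketched as follows; this is a minimal illustration with placeholder variant functions, not the safety mechanism of any studied OEM, and real automotive software would gate such a switch behind the safety architecture rather than a bare exception handler.

```python
class SafeVariantRunner:
    """Run the experimental B implementation of a function, falling
    back to the validated A implementation on any failure."""

    def __init__(self, variant_a, variant_b, use_b=True):
        self.variant_a = variant_a   # thoroughly tested fallback
        self.variant_b = variant_b   # experimental alternative
        self.use_b = use_b           # experiment-group flag
        self.fallbacks = 0           # count of B -> A degradations

    def __call__(self, *args):
        if self.use_b:
            try:
                return self.variant_b(*args)
            except Exception:
                self.fallbacks += 1  # record and degrade to A
        return self.variant_a(*args)

# Placeholder variants: B always fails, so every call degrades to A.
runner = SafeVariantRunner(variant_a=lambda x: x + 1,
                           variant_b=lambda x: 1 / 0)
result = runner(41)
```

Counting fallbacks also gives the experimenter a cheap health metric for the B variant itself.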
\section{Research method \label{method}}
In this paper, we combine a literature review with case studies.
We studied several existing A/B experiment frameworks inside and outside of the industry through a literature review, in order to compare our approach with them.
Furthermore, to validate the architecture designed, we conducted case studies based on a series of ongoing efforts in A/B experiments from three separate automotive manufacturers.
We explore the following research question:
\begin{itemize}
\item[] \textbf{RQ} How can we continuously experiment with automotive embedded software providing the challenges and limitations that are unique to this industry?
\end{itemize}
\subsection{Literature review}
This literature review was conducted to understand existing A/B experiment frameworks within and outside of the automotive domain.
To identify and explore work that is relevant to the research question, we follow the methodology described by Kitchenham \cite{Kit04}.
\subsubsection{Data collection}
We included the following terms in our search query: ("A/B testing" OR "A/B experiment" OR "online experiment" OR "bucket testing" OR "continuous experiment") AND ("software architecture") AND ("embedded software" or "automotive software").
Alternative terms are included as there is no standard terminology.
Keyword combinations with "automotive software" yielded no meaningful results, thus we expanded the search query to also include embedded software.
The databases included in our search process are IEEE Xplore, ScienceDirect, and Google Scholar, returning a total of 104 results excluding duplicates.
To ensure the results are relevant today, we limited the publications to the last ten years.
\subsubsection{Inclusion criteria}
Each paper resulting from the search process was reviewed by at least one of the authors.
We examined the keywords, abstracts, and body of each paper to identify A/B experiment frameworks and the sectors to which they apply.
We selected publications that focus on A/B experiment architectures and/or frameworks for embedded applications.
We did not include publications discussing the benefits, challenges, or feasibility of A/B testing.
These inclusion criteria resulted in a total of three papers.
Since the technique is well established in web-facing applications, we included work on A/B testing framework in the web domain.
In total, the 11 publications included in this review are \cite{google2010, Eklund2012, Bosch2012, Amatriain2013, Kohavi2013, Giaimo2017, Fagerholm2017, Fabijan2018, Gupta2018, Vasthimal2019, Li2019}.
\subsection{Case study}
Following guidelines from Runeson and H\"{o}st \cite{Runeson2008}, we conducted two sets of case studies with three separate automotive companies.
In study I, we examine the proposed architecture in practice on a cloud-based A/B experimentation in a vehicle fleet at scale.
We study the architecture for A/B testing in a fleet from one of the three companies.
The software for case study I was developed in-house in company A.
As online experiments are not commonly applied in the industry, to the best of our knowledge there is a lack of quantitative data to study.
To understand the state-of-practice, we conducted semi-structured interviews with two more OEMs as case study II.
\subsubsection{Case study attendees}
The three companies included in the case studies are large OEMs.
In each company, we conducted interviews and workshops with at least five different employees working on varying aspects of software development.
Their roles include software engineer, software architect, product owner, data engineer and data scientist.
\subsubsection{Data collection}
One of the authors was actively involved in the experimentation design from ground up and supported the entire process.
We document the process through meeting notes and design specifications in the project.
The questions for case study II were specifically designed to understand the current state-of-practice of A/B experiments in an automotive setting.
We also aim to understand the potential of cloud-based A/B testing in each company.
During the interviews, we presented our architecture design to the attendees along with questions regarding current practices adopted in their companies.
All the interviews were conducted by at least one of the authors. The responses were documented as meeting notes, which were distributed to the interview participants.
We recognise the limitations of our case study approach: the results were obtained from three companies, so the outcomes may be specific to these companies, and without further investigation we cannot generalise the conclusions to the automotive industry.
\section{Existing architectures \label{otherworks}}
In this section, we present the results from our literature review.
We included 11 publications \cite{google2010, Eklund2012, Bosch2012, Amatriain2013, Kohavi2013, Giaimo2017, Fagerholm2017, Fabijan2018, Gupta2018, Vasthimal2019, Li2019} that focus on describing A/B experiment architectures, in both embedded software and online applications.
From our literature review, we have discovered that there is a general gap in the literature on architectures or frameworks designed specifically for automotive software.
Based on the topic, we summarise the papers into four overlapping categories.
They are grouped firstly by their environment, i.e., embedded or web-facing. Papers \cite{Fagerholm2017, Li2019} are applicable to both groups.
We include OS and embedded applications in the same category, as they share many common challenges for instance, the devices can be offline \cite{Li2019}.
Second, we identified in these papers how a software variant is shipped to the users, namely whether a complete software change is required or whether variants can be introduced through parameter changes.
The categories are presented in Figure \ref{fig_litreview}. As can be seen, variant introduction through parameter change is not a widely explored method within embedded software.
Although the design process is vastly different, there are a number of shared components between embedded and web experiment architectures.
This includes experiment configuration, data collection, experiment analysis and metrics evaluation.
Therefore, some experiment models can be employed in various environments including web, operating systems, and embedded \cite{Fagerholm2017, Gupta2018}.
Tang \textit{et al.}\ \cite{google2010} and Kohavi \textit{et al.}\ \cite{Kohavi2013} both report a multi layered experiment configuration system that can handle multiple A/B experiments.
Users are assigned to the A or B variant in a consistent manner \cite{Gupta2018, Vasthimal2019}.
In the web environment, this is achieved by assigning unique IDs when users visit the web pages.
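Such consistent assignment is commonly obtained by hashing a stable identifier together with an experiment name. The sketch below is a minimal illustration of this idea; the function name and the 50/50 split are assumptions for illustration, not details of the cited systems:

```python
import hashlib

def assign_variant(user_id, experiment, treatment_pct=50):
    """Deterministically bucket a user into variant "A" or "B".

    Hashing the user ID together with the experiment name keeps the
    assignment stable within one experiment while decorrelating the
    assignments across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a bucket in [0, 100)
    return "B" if bucket < treatment_pct else "A"

# The same user always receives the same variant for a given experiment.
assert assign_variant("user-42", "exp-1") == assign_variant("user-42", "exp-1")
```

Because the assignment is a pure function of the identifiers, no per-user state has to be stored to keep the groups stable between visits.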
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{2_lit_review_NEW.pdf}}
\caption{Existing A/B experiment framework categorised by environment and variant generation methods.}
\label{fig_litreview}
\end{figure}
Data infrastructure is a major component in any experiment framework.
All researchers include data infrastructure as part of their experiment frameworks, particularly focused on trustworthiness \cite{google2010, Kohavi2013, Fagerholm2017, Fabijan2018, Gupta2018, Vasthimal2019}.
Such data collection is also required in embedded environments; however, it is more difficult due to hardware limitations \cite{Bosch2012}.
An experiment architecture for automotive software \cite{Eklund2012} used on-board data storage before uploading the data through the vehicle's telemetry.
Another key element for A/B experiments is rapid software release.
We found that all architectures for rapid experiments in an embedded environment rely on continuous deployment.
The "RIGHT Model" suggests that if a function is novel, continuous deployment might not be necessary \cite{Fagerholm2017}.
However, most frameworks in embedded environments \cite{Eklund2012, Bosch2012, Giaimo2017} require a well-established continuous deployment process to achieve rapid experimentation loops.
Software variant release through over-the-air (OTA) updates can increase delivery speed in automotive applications \cite{Eklund2012}.
In the web environment, rapid experimentation can be achieved more flexibly through an array of mechanisms.
For example, Amatriain \cite{Amatriain2013} demonstrates combined offline and online experiment systems at Netflix.
Existing data can be used to train the models before they are introduced to an online experiment, which allows faster and cheaper evaluation of software.
Another technique for increasing experiment speed is using parameter updates, as mentioned by Tang \textit{et al.}\ \cite{google2010}.
The A/B variants in target functions are parameterised and configured through data files.
These parameters are changed more frequently than code, which enables fast experiments provided the parameters exist.
\begin{figure*}[t]
\centerline{\includegraphics[width=\textwidth]{1_process.pdf}}
\caption{Process of the cloud-based A/B test architecture, illustrating the general workflow of conducting an A/B experiment with parametrised functions.}
\label{fig_process}
\end{figure*}
Furthermore, to fully utilise the benefits of A/B testing, all papers highlighted the importance of the organisational and cultural mindset of making data-driven decisions.
Kohavi \textit{et al.}\ \cite{Kohavi2013} summarise prerequisites which an organisation needs to adopt, highlighting the importance of data-driven decision-making mindsets.
In the "Experiment Growth Model" introduced by Fabijan \textit{et al.}\ \cite{Fabijan2018}, all components of their A/B experimentation model become more mature as the entire organisation evolves through different stages.
\section{Architecture and case studies \label{design_main}}
In this section, we present a software architecture that could enable A/B testing in the automotive domain.
A hybrid architecture is presented.
The essence of the architecture is to imitate an online environment for an otherwise offline application.
In doing so, automotive A/B testing can benefit from the flexibility of online experiments. We present the components of our architecture in Fig.\ref{fig_process}.
\subsection{System characteristics \label{desing_act}}
We present a hybrid architecture (Fig.\ref{fig_process}) combining on-board and cloud functionalities.
The system is composed of six main components: parameterised functions; a release process, which most companies have in place; a cloud host that writes parameters to the vehicles and collects data from the vehicles; and finally, a centralised data storage and a pipeline for distributing the measured data.
The system workflow can be described as follows. First, a function whose characteristics can be defined by a list of parameters is delivered.
There are two sets of parameters embedded in the function, the local set, which is the default and the cloud set that can receive incoming values externally.
The benefit of parameterisation in A/B testing was also highlighted by \cite{google2010}.
A set of observables which measure function performance is also predetermined.
The function and its parameters are delivered to a release process which will integrate with other functions and release the software to vehicles.
This release and installation of software can be done through workshop visits or OTA.
Once the software is introduced to the vehicles, users are identified through Vehicle Identification Numbers (VIN), which are unique and comprise vehicle meta-data.
This ensures that although the software is introduced to all cars, no experiments will be conducted unless the users are deemed eligible in advance.
Upon key-on of a vehicle, a vehicle will send its VIN to the A/B test cloud.
Since the A and B groups are configured in the cloud, the test cloud will match the VIN and then return a status indicator to the vehicle.
Ineligible cars will have no match in the cloud and receive no response.
For all eligible vehicles, they can be partitioned into A and B groups through remote configuration.
The control group will use the functions' local parameters and the treatment group will receive cloud parameters.
Since the parameter names are predefined, the vehicle cannot accept any other values, thus increasing security.
Furthermore, as the cloud parameters are blank values in the vehicles, cloud parameter changes can be made remotely.
This design enables function behaviour change through parameters provided the parameters exist.
Development teams can continuously A/B test and adjust the existing parameters without complete software change and independently from the company-wide release cadence.
A complete software update is required only when new parameters need to be added.
Data collection is done through the cloud and it measures a set of predefined observables.
The observables are measured and temporarily stored on board, then sent to the cloud at time intervals while driving.
The data are collected in a centralised data lake, cleaned, and then distributed to development teams.
During a trip, time series data is collected for dynamic observables. For stationary observables, only one or a few snapshots are measured.
After analysis of the A/B tests, further actions can be taken such as adjusting cloud parameters, re-partitioning A/B groups, or concluding the experiments.
When the experiments are concluded, the connection to the cloud will be interrupted and vehicles will revert back to the local parameters automatically.
Moreover, the local variant always serves as a safety fallback in critical situations.
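The parameter-resolution behaviour described above (cloud values for eligible, connected vehicles; local defaults otherwise and as the safety fallback) can be sketched as follows. The names and the flat dictionary structure are illustrative assumptions, not the actual on-board implementation:

```python
def resolve_parameters(local_params, cloud_params, eligible, connected):
    """Return the parameter set a function runs with.

    Local defaults are always present and act as the safety fallback;
    cloud values override them only while the vehicle is eligible for
    the experiment and connected to the A/B test cloud.  Only the
    predefined parameter names can be overridden.
    """
    if eligible and connected and cloud_params:
        return {name: cloud_params.get(name, default)
                for name, default in local_params.items()}
    return dict(local_params)

local = {"recuperation_level": 2, "battery_reserve_pct": 20}
cloud = {"recuperation_level": 3, "unknown_param": 99}  # unknown names are ignored

active = resolve_parameters(local, cloud, eligible=True, connected=True)
assert active == {"recuperation_level": 3, "battery_reserve_pct": 20}
# Losing the connection, or concluding the experiment, reverts to local defaults.
assert resolve_parameters(local, cloud, eligible=True, connected=False) == local
```

Keeping the local set as the unconditional fallback path is what allows the experiment to be concluded, or the connection lost, without any remote intervention.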
\subsection{Case study \label{test_case}}
The first case study was performed in company A on an energy management (EM) function that was developed internally.
The EM function has a local and a cloud set of parameters, which determine the local and the cloud energy management strategy, respectively.
By default, the vehicle will always run the local strategy unless a connection to the cloud is established and the vehicle is eligible.
The development team delivers the software through the company's existing release process.
There are 50 vehicles in this fleet of company cars, driven over a total period of 18 months, during which 58 observables were measured.
The experimenters also monitor how frequently the users manually interrupt the cloud connection.
A number of automated mechanisms were put in place in the cloud to ensure data quality.
The experimenters have access to the data collected in real-time.
The data collected was post-processed in an automated manner in a database, while the team can also choose to export the raw data.
The file size of data collected per week averages at 1.7 gigabytes when exported in CSV format.
The EM function software has dependencies on six ECUs that are mostly supplier parts.
Traditionally, changing the software means rebuilding of all these ECUs completely through suppliers and downloading the software to the vehicles physically.
The usual lead time for such changes is anywhere from three months to one year.
This system caters for an environment where continuous experiments run independently of release processes that can be lengthy at times.
On average, the total distance travelled by all eligible users is over 18,000 kilometres per week, and over 80\% of the vehicles are driven daily.
Compared to any test fleet, they generate measurements at a much larger scale.
The second case study is conducted to understand the state-of-practice in company B and C.
Although neither company has experience with large-scale A/B experiments, there are commonalities in the components.
Through our interviews, it was apparent that company B and C have adopted some level of capabilities, specifically the data collection capabilities.
Company B has invested intensively in an online data collection system for their vehicles.
A set of observables are measured, their data collected and distributed to the corresponding development teams through a centralised database.
Each functional team within the company can also request more observables to be measured from the fleet.
A similar approach was reported by company C.
A centralised database was built to distribute high quality data in a fast manner.
The teams have the freedom to determine the sampling frequency according to their measurement requirements.
\section{Discussion \label{discussion}}
In this paper, we presented a hybrid architecture that enables continuous A/B experiments in automotive embedded software.
Compared to existing A/B experiment architectures, our architecture offers the flexibility of being independent of continuous deployment processes.
By allowing parameter changes, functional changes can be experimented continuously without a complete software change.
However, we foresee some potential weaknesses in the design, and they are discussed here.
Firstly, the threshold of functional behaviour change through parameters is low compared to a complete software change.
The system enables A/B experiments for fine tuning of functions but not complete concept changes.
Secondly, many parameter changes are not independent from each other in an automotive setting.
When multiple experiments are running simultaneously, the configuration of experiments becomes critical, as suggested by \cite{google2010} and \cite{Kohavi2013} from their experience in online businesses.
Similar to web-facing applications, we need to consider contradicting and hierarchical functions and their parameters.
Performance of contradicting or hierarchical software variants cannot be determined individually.
Therefore, some centrally well-established and understood performance metrics need to be put in place before parallel/multiple experiments can be conducted.
Thirdly, the teams shall coordinate their experiment design when parameters or observables are shared between different functions.
Such coordination requires organisational support \cite{Fabijan2018}.
As many automotive companies are going through agile transformation \cite{Mattos2020}, the data-driven development mindsets and support structure are gradually improving.
The speed of the transformation will influence how quickly an A/B experiment framework can be implemented at scale.
Finally, receiving cloud parameters requires an active internet connection.
Although functions can be safeguarded by using local parameters as fallback, functions which require millisecond response time cannot rely on cloud connection.
A possible setup for time-critical functions could be to embed the A and B versions of parameters in the software itself, and use the cloud to trigger the switch between them.
As a trade-off, one will lose the freedom of tuning cloud parameters without complete software updates.
\section{Conclusion \label{conclusion}}
In recent years, some research effort was put in the adoption of A/B experiments in the automotive domain \cite{Giaimo2017, Giaimo2019, Mattos2020}.
In this paper, we raised a research question on how to enable continuous experiments in an automotive setting, and presented an architecture that demonstrates such capabilities.
Through a literature review, we found that embedded experiment architectures share many components with web-facing ones, however, lack the capability of rapid changes.
The architecture design is a hybrid A/B testing model that addresses many challenges in the industry.
Compared to existing frameworks, our hybrid architecture enables rapid software changes without compromising the high safety and security standards.
A similar framework for automotive software A/B testing has not previously been discussed in the literature.
We shared case studies of cloud-based A/B experiments at scale, which show the high potential of the parameterised hybrid architecture.
The components of our architecture were compared with the state-of-practice of two other large automotive manufacturers.
We found that the case study companies have applied many components, thus paving the way to an A/B experiment capable architecture.
\bibliographystyle{IEEEtran}
\begin{table*}[h]
\normalsize
\caption{Papers selected describing the architecture of A/B experiments, for web and embedded software.}
\begin{center}
{
\begin{tabular}{|p{0.03\textwidth} p{0.55\textwidth}p{0.15\textwidth}p{0.05\textwidth}p{0.105\textwidth}|}
\hline
\textbf{No.} &
\textbf{Title of publication} & \textbf{Authors} & \textbf{Year} & \textbf{Environment} \\ \hline
P1 & Overlapping Experiment Infrastructure:
More, Better, Faster Experimentation \cite{google2010} & Tang \textit{et al.}\ & 2010 & Web \\ \hline
P2 & Architecture for Large-Scale Innovation Experiment Systems \cite{Eklund2012} & Eklund \& Bosch & 2012 & Embedded \\ \hline
P3 & Eternal Embedded Software: Towards Innovation Experiment Systems \cite{Bosch2012} & Bosch \& Eklund & 2012 & Embedded \\ \hline
P4 & Beyond data: from user information to business value through personalized recommendations and consumer science
\cite{Amatriain2013} & Amatriain & 2013 & Web \\ \hline
P5 & Online controlled experiments at large scale \cite{Kohavi2013} & Kohavi \textit{et al.}\ & 2013 & Web \\ \hline
P6 & Design criteria to architect continuous experimentation for self-driving vehicles \cite{Giaimo2017} & Giaimo \& Berger & 2017 & Embedded \\ \hline
P7 & The RIGHT model for Continuous Experimentation \cite{Fagerholm2017} & Fagerholm \textit{et al.}\ & 2017 & Web \\ \hline
P8 & Experimentation growth: Evolving trustworthy A/B testing capabilities in online software companies
\cite{Fabijan2018} & Fabijan \textit{et al.}\ & 2018 & Web \\ \hline
P9 & The Anatomy of a Large-Scale Experimentation Platform
\cite{Gupta2018} & Gupta \textit{et al.}\ & 2018 & Web, app \& OS \\ \hline
P10 & Scalable Data Reporting Platform for A/B Tests
\cite{Vasthimal2019} & Vasthimal \textit{et al.}\ & 2019 & Web \\ \hline
P11 & Experimentation in the Operating System: The Windows Experimentation Platform
\cite{Li2019} & Li \textit{et al.}\ & 2019 & OS \\ \hline
\end{tabular}}
\end{center}
\label{table_slr}
\end{table*}
\section{Successive Approximations}
Here, double successive rough set approximations are considered, using two generally different equivalence relations. These are interesting because one can imagine a situation or model where the sets/information to be approximated are passed through two different approximations before the output is returned. It is possible, for example, that heuristics in the brain can be modelled using such layered approximations. Decomposing successive approximations into constituent parts is somewhat analogous to decomposing a wave into sine and cosine parts using Fourier analysis.
In our case, we have two equivalence relations $E_1$ and $E_2$ on a set $V$, with lower and upper approximation operators acting on its powerset $\mathscr{P}(V),$ denoted by $L_1, \ U_1$ and $L_2, \ U_2$ respectively. What if we knew the results of passing all the elements of $\mathscr{P}(V)$ through $L_1$ and then $L_2$, an operator which we denote by $L_2L_1$? Could we then reconstruct $E_1$ and $E_2$ from this information? In this paper, we will investigate this question and consider the four cases of being given a defined $L_2L_1, \ U_2U_1, \ U_2L_1, \ L_2U_1$ operator. We will find that two equivalence relations do not always produce unique such operators, but that some pairs do. We find and characterise conditions which the pairs of equivalence relations must satisfy for them to produce a unique operator. Cattaneo and Ciucci found that preclusive relations are especially useful for rough approximations in information systems in \cite{PR}. For the $L_2L_1$ case, we will show that these conditions form a preclusive relation between pairs of equivalence relations on a set, and so we can define a related notion of independence from it. After this, we will find a more conceptual but equivalent version of the conditions of the uniqueness theorem. These conditions are more illuminating in that we can more easily see why they work, while the conditions in the first version of the theorem are easier to use in practice. Lastly, we will consider the cases of the remaining operators, $U_2U_1, \ U_2L_1$ and $L_2U_1.$ We note that the $L_2L_1$ and $U_2U_1$ cases are dual to each other, and similarly for the $U_2L_1$ and $L_2U_1$ cases.
Rough set theory has quite a large number of practical applications. This is due in part to the computation of reducts and decision rules for databases. Predictions can be made after the data is mined to extract decision rules of manageable size (i.e. attribute reduction). In this way, rough set theory can be used to make decisions using data in the absence of major prior assumptions as argued in more detail in \cite{Baye}. Hence in retrospect, it is perhaps not so surprising that this leads to tremendous applications. Therefore, rough set analysis is added to the tools, which includes regression analysis and Bayes' Theorem, for pattern recognition and feature selection in data mining, see \cite{DM4, DM1, DM2, DM3, DM5, DM6, DM7, DM8}. The resulting applications include in medical databases \cite{MD1, MD2, MD3, MD4, MD5, MD6, MD7}, cognitive science \cite{CG1, CG2, CG3, CG4, CG5}, artificial intelligence and machine learning \cite{AC1, AC2, AC3, AC4, AC5, AC6, AC7} and engineering \cite{EN1, EN2, EN3, EN4, EN5}. Indeed in \cite{TSid}, Yao noted that there is currently an imbalance in the literature between the conceptual unfolding of rough set theory and its practical computational progress. He observed that the amount of computational literature currently far exceeds the amount of conceptual, theoretical literature. Moreover, he made the case that the field would prosper from a correction of this imbalance. To illustrate this, he began his recommendation in \cite{TSid} by formulating a conceptual example of reducts that unifies three reduct definitions used in the literature which on the surface look different. We strongly agree that more efforts to find conceptual formulations of notions and results would increase the discovery of unifying notions. This would greatly aid the aim of making a cohesive and coherent map of the present mass of existing literature. In this direction, we have developed subsections 4.2 and 4.3 in section 4.
\section{Basic Concepts of Rough Set Theory}
We go over some basic notions and definitions, which can be found in \cite{PZ}. Let $V$ be a set and $E$ an equivalence relation on $V$. Also, let the set of equivalence classes of $E$ be denoted by $V/E.$ If a set $X \subseteq V$ is equal to a union of some of the equivalence classes of $E$, then it is called \textit{E-exact}. Otherwise, $X$ is called \textit{E-inexact} or \textit{E-rough}, or simply \textit{rough} when the equivalence relation under consideration is clear from the context. Inexact sets may be approximated by two exact sets, the lower and upper approximations, as respectively defined below:
\begin{center}
$ \textit{\textbf{l}}_E (X) = \{ x \in V\ | \ [x]_E \subseteq X \} $,
\end{center}
\begin{equation} \label{eq:1}
\textit{\textbf{u}}_E (X) = \{ x\in V \ | \ [x]_E \cap X \neq \emptyset \}.
\end{equation}
Equivalently, we may use a granule-based definition instead of a pointwise definition:
\begin{center}
$ \textit{\textbf{l}}_E (X)= \bigcup \{ Y \in V/E \ | \ Y \subseteq X \}, $
\end{center}
\begin{equation} \label{eq:2}
\textit{\textbf{u}}_E (X)= \bigcup \{ Y \in V/E \ | \ Y \cap X \neq \emptyset \}.
\end{equation}
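A direct computational reading of the granule-based definition, with the equivalence relation represented by its partition $V/E$, can be sketched as follows (an illustrative sketch, not part of the formal development):

```python
def lower(partition, X):
    """l_E(X): union of the equivalence classes entirely contained in X."""
    return {x for block in partition if block <= X for x in block}

def upper(partition, X):
    """u_E(X): union of the equivalence classes that meet X."""
    return {x for block in partition if block & X for x in block}

# V = {1,...,6} with equivalence classes {1,2}, {3,4}, {5,6}
partition = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
X = {1, 2, 3}  # rough: it is not a union of classes

assert lower(partition, X) == {1, 2}
assert upper(partition, X) == {1, 2, 3, 4}
assert lower(partition, X) <= X <= upper(partition, X)
```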
The pair $(V, E)$ is called an \textit{approximation space}. It may be the case that several equivalence relations are considered over a set. Let $\mathscr{E} $ be a family of equivalence relations over a finite non-empty set $V$. The pair $K = (V, \mathscr{E} )$ is called a \textit{knowledge base}. If $ \mathscr{P} \subseteq \mathscr{E} $, we recall that $\bigcap \mathscr{P}$ is also an equivalence relation. The intersection of all equivalence relations belonging to $\mathscr{P}$ is denoted by $IND(\mathscr{P}) = \bigcap \mathscr{P}$. This is called the \textit{indiscernibility relation} over $\mathscr{P}$.\\
\noindent For two equivalence relations $E_1$ and $E_2,$ we say that $E_1 \leq E_2$ iff $E_1 \subseteq E_2.$ In this case we say that $E_1$ is \emph{finer} than $E_2$ or that $E_2$ is \emph{coarser} than $E_1.$\\
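In terms of partitions, $IND(\mathscr{P})$ corresponds to the common refinement of the partitions of the relations in $\mathscr{P}$: each of its classes is a non-empty intersection of classes taken from the constituent relations, and the result is finer than each constituent in the sense of the order just defined. A small illustrative sketch:

```python
def indiscernibility(*partitions):
    """Partition of IND(P): the common refinement of the given partitions."""
    blocks = list(partitions[0])
    for P in partitions[1:]:
        # Keep only the non-empty pairwise intersections of blocks.
        blocks = [b & c for b in blocks for c in P if b & c]
    return blocks

P1 = [frozenset({1, 2, 3}), frozenset({4, 5})]
P2 = [frozenset({1, 2}), frozenset({3, 4, 5})]

refined = {frozenset(b) for b in indiscernibility(P1, P2)}
assert refined == {frozenset({1, 2}), frozenset({3}), frozenset({4, 5})}
```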
\noindent We recall from \cite{TA} some definitions of different types of roughly definable and undefinable sets. Let $V$ be a set. Then for $X \subseteq V:$
\vspace{2mm}
\noindent(i) If $ \textit{\textbf{l}}_E (X)\neq \emptyset$ and $\textit{\textbf{u}}_E (X) \neq V,$ then $X$ is called \textit{roughly E-definable.}
\vspace{2mm}
\noindent (ii) If $ \textit{\textbf{l}}_E (X)= \emptyset$ and $\textit{\textbf{u}}_E (X) \neq V,$ then $X$ is called \emph{internally roughly E-undefinable.}
\vspace{2mm}
\noindent (iii) If $ \textit{\textbf{l}}_E (X) \neq \emptyset$ and $\textit{\textbf{u}}_E (X) = V,$ then $X$ is called \emph{externally roughly E-definable.}
\vspace{2mm}
\noindent (iv) If $ \textit{\textbf{l}}_E (X) = \emptyset$ and $\textit{\textbf{u}}_E (X) = V,$ then $X$ is called \emph{totally roughly E-definable.}
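The four cases above translate directly into a small classification routine. The sketch below reuses the granule-based approximations and is purely illustrative:

```python
def classify(partition, X, V):
    """Classify X in the approximation space (V, E) according to (i)-(iv)."""
    lo = {x for block in partition if block <= X for x in block}  # l_E(X)
    up = {x for block in partition if block & X for x in block}   # u_E(X)
    if lo and up != V:
        return "roughly definable"
    if not lo and up != V:
        return "internally undefinable"
    if lo and up == V:
        return "externally definable"
    return "totally undefinable"

partition = [frozenset({1, 2}), frozenset({3, 4})]
V = {1, 2, 3, 4}
assert classify(partition, {1, 2, 3}, V) == "externally definable"
assert classify(partition, {1, 3}, V) == "totally undefinable"
```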
\subsection{Properties Satisfied by Rough Sets}
In \cite{PZ}, Pawlak lists the following properties of lower and upper approximations. Let $V$ be a non-empty finite set and $X, Y \subseteq V$. Then the following hold:\\
\begin{onehalfspace}
\noindent $1) \textit{\textbf{l}}_E (X) \subseteq X \subseteq \textit{\textbf{u}}_E (X),$
\vspace{2mm}
\noindent $2) \textit{\textbf{l}}_E (\emptyset) = \textit{\textbf{u}}_E (\emptyset) = \emptyset; \quad \textit{\textbf{l}}_E (V) = \textit{\textbf{u}}_E (V)= V,$
\vspace{2mm}
\noindent $ 3) \textit{\textbf{u}}_E (X \cup Y) = \textit{\textbf{u}}_E (X) \cup \textit{\textbf{u}}_E (Y),$
\vspace{2mm}
\noindent $ 4) \textit{\textbf{l}}_E (X \cap Y) = \textit{\textbf{l}}_E (X) \cap \textit{\textbf{l}}_E (Y),$
\vspace{2mm}
\noindent $ 5) X \subseteq Y \Rightarrow \textit{\textbf{l}}_E (X) \subseteq \textit{\textbf{l}}_E (Y),$
\vspace{2mm}
\noindent $ 6) X \subseteq Y \Rightarrow \textit{\textbf{u}}_E (X) \subseteq \textit{\textbf{u}}_E (Y),$
\vspace{2mm}
\noindent $ 7) \textit{\textbf{l}}_E (X\cup Y) \supseteq \textit{\textbf{l}}_E (X) \cup \textit{\textbf{l}}_E (Y),$
\vspace{2mm}
\noindent $ 8) \textit{\textbf{u}}_E (X\cap Y) \supseteq \textit{\textbf{u}}_E (X) \cap \textit{\textbf{u}}_E (Y),$
\vspace{2mm}
\noindent $ 9) \textit{\textbf{l}}_E (-X) = -\textit{\textbf{u}}_E (X),$
\vspace{2mm}
\noindent $ 10) \textit{\textbf{u}}_E (-X) = -\textit{\textbf{l}}_E (X),$
\vspace{2mm}
\noindent $ 11) \textit{\textbf{l}}_E (\textit{\textbf{l}}_E (X)) = \textit{\textbf{u}}_E (\textit{\textbf{l}}_E (X)) = \textit{\textbf{l}}_E (X),$
\vspace{2mm}
\noindent $ 12) \textit{\textbf{u}}_E (\textit{\textbf{u}}_E (X)) = \textit{\textbf{l}}_E (\textit{\textbf{u}}_E (X)) = \textit{\textbf{u}}_E (X).$
\end{onehalfspace}
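\noindent On a small universe these properties can be verified exhaustively. A minimal sketch (Python; the universe and partition are our own toy example) checking a representative sample:

```python
import itertools

def lower(P, X): return frozenset(x for B in P if B <= X for x in B)
def upper(P, X): return frozenset(x for B in P if B & X for x in B)

V = frozenset(range(5))
P = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4})]
subsets = [frozenset(s) for r in range(len(V) + 1)
           for s in itertools.combinations(sorted(V), r)]

for X in subsets:
    assert lower(P, X) <= X <= upper(P, X)                   # property 1
    assert lower(P, V - X) == V - upper(P, X)                # property 9
    assert lower(P, lower(P, X)) == lower(P, X)              # property 11, in part
    for Y in subsets:
        assert upper(P, X | Y) == upper(P, X) | upper(P, Y)  # property 3
        assert lower(P, X & Y) == lower(P, X) & lower(P, Y)  # property 4
print("properties verified on all", len(subsets), "subsets")
```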
\subsection{ Dependencies in Knowledge Bases}
A database can also be represented in the form of a matrix of \emph{Objects} versus \emph{Attributes} with the entry corresponding to an object attribute pair being assigned the value of that attribute which the object satisfies. From the following definition, we can form equivalence relations on the objects for each given attribute. The set of these equivalence relations can then be used as our knowledge base.
\begin{definition}
Let $V$ be the set of objects and $P$ be the set of attributes. Let $Q \subseteq P.$ Then $Q$ induces an equivalence relation $\sim_Q$ on $V$ (with partition $V/Q$) as follows: $x\sim_Q y$ iff $q(x) = q(y)$ for every $q \in Q.$
\end{definition}
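\noindent For instance, an attribute subset $Q$ can be turned into a partition by grouping objects with identical values on every attribute in $Q$. A small sketch (Python; the table, object names and attribute names are hypothetical):

```python
from collections import defaultdict

# Hypothetical object-attribute table: rows are objects, columns attributes.
table = {
    "o1": {"colour": "red",  "size": "small"},
    "o2": {"colour": "red",  "size": "large"},
    "o3": {"colour": "blue", "size": "small"},
    "o4": {"colour": "red",  "size": "small"},
}

def partition_by(table, Q):
    """Blocks of ~_Q: x ~_Q y iff q(x) = q(y) for every attribute q in Q."""
    blocks = defaultdict(set)
    for obj, row in table.items():
        blocks[tuple(row[q] for q in sorted(Q))].add(obj)
    return [frozenset(b) for b in blocks.values()]

print(sorted(sorted(b) for b in partition_by(table, {"colour"})))
print(sorted(sorted(b) for b in partition_by(table, {"colour", "size"})))
```

Note that adding attributes refines the partition, matching the fact that $IND(\mathscr{P})$ is finer than each of its constituent relations.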
\noindent To construct decision rules, we may fix two sets of attributes called \emph{condition attributes} and \emph{decision attributes}, denoted by ${C}$ and ${D}$ respectively. We then use these to make predictions about the decision attributes based on the condition attributes. \emph{Decision rules} are made by recording which values of decision attributes correlate with which values of condition attributes. As this information can be of considerable size, one of the primary goals of rough set theory is to reduce the number of condition attributes without losing predictive power. A minimal set of condition attributes which retains the same predictive power as the full set of condition attributes is called a \emph{reduct} with respect to $D$.
\vspace{2mm}
\noindent Next we give the definition of the positive region of one equivalence relation with respect to another.
\begin{definition}
Let $C$ and $ D $ be equivalence relations on a finite non-empty set $V.$ The \emph{positive region} of the partition $D$ with respect to $C$ is given by,
\begin{equation}
POS_C(D) = \bigcup\limits_{X \in D} \textit{\textbf{l}}_C (X).
\end{equation}
\end{definition}
\begin{definition}
It is said that \emph{$ {D} $ depends on ${C} $ in a degree ${k}$}, where $0 \leq k \leq 1$, denoted by $C \Rightarrow_{k} D,$ if
\begin{equation}
k = \gamma(C,D) = \frac{|POS_C(D)|}{|V|}.
\end{equation}
\end{definition}
\noindent If $k = 1,$ then we say that $D$ depends totally on $C,$ i.e.\ $C \Rightarrow D.$
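\noindent The positive region and the degree of dependency can be computed directly from the two partitions. A minimal sketch (Python; the universe and the partitions $C$ and $D$ are our own example):

```python
def lower(P, X):
    return frozenset(x for B in P if B <= X for x in B)

def pos(C, D):
    """POS_C(D): union of the C-lower approximations of the blocks of D."""
    return frozenset(x for X in D for x in lower(C, X))

def gamma(C, D, V):
    """Degree k to which D depends on C."""
    return len(pos(C, D)) / len(V)

V = frozenset("abcdef")
C = [frozenset("ab"), frozenset("cd"), frozenset("ef")]
D = [frozenset("abc"), frozenset("def")]
print(pos(C, D), gamma(C, D, V))  # {a,b} and {e,f} classify cleanly; {c,d} straddles
```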
\vspace{2mm}
\noindent Let $K_1 = (V, \mathscr{P})$ and $K_2 = (V, \mathscr{Q}).$ We now define dependency of knowledge and then partial dependency. We say that \emph{$\mathscr{Q}$ depends on $\mathscr{P}$}, i.e. $ \mathscr{P} \Rightarrow \mathscr{Q},$ iff $IND(\mathscr{P}) \subseteq IND(\mathscr{Q}).$
\begin{proposition}
$IND(\mathscr{P}) \leq IND(\mathscr{Q})$ iff $ \mathscr{P} \Rightarrow \mathscr{Q}.$
\end{proposition}
\begin{proposition}
$POS_{IND(\mathscr{P})}(IND(\mathscr{Q})) = V$ iff $\mathscr{P} \Rightarrow \mathscr{Q}.$
\end{proposition}
\noindent Otherwise, in the above case, $\gamma(IND(\mathscr{P}), IND(\mathscr{Q})) = k <1 $ and then we say that $\mathscr{P} \Rightarrow_k \mathscr{Q}.$\\
\section{Properties of Successive Approximations}
Next, we see that in general, approximating with respect to $E_1$ and then approximating the result with respect to $E_2$ gives a different result than if we had done it in the reverse order. That is, successive approximations do not commute. We consider some properties of successive approximations below.
\begin{proposition}
Let $V$ be a set and $E_1$ and $E_2$ be equivalence relations on $V.$ Then for $Y \in \mathscr{P}(V),$ the following holds,\\
\noindent 1. $\textbf{l}_{E_1}(\textbf{l}_{E_2}(Y)) = Z \not\Rightarrow \textbf{l}_{E_2}(\textbf{l}_{E_1}(Y)) = Z, $\\
2. $\textbf{u}_{E_1}(\textbf{u}_{E_2}(Y)) = Z \not\Rightarrow \textbf{u}_{E_2}(\textbf{u}_{E_1}(Y)) = Z,$\\
3. $\textbf{u}_{E_1}(\textbf{l}_{E_2}(Y)) = Z \not\Rightarrow \textbf{l}_{E_2}(\textbf{u}_{E_1}(Y)) = Z,$\\
4. $\textbf{l}_{E_1}(\textbf{u}_{E_2}(Y)) = Z \not\Rightarrow \textbf{u}_{E_2}(\textbf{l}_{E_1}(Y)) = Z.$\\
\end{proposition}
\begin{proof}
We give a counterexample to illustrate the proposition. Let $V = \{a, b, c, d\}$ and let $E_1 = \{ \{a, b,c\}, \{ d \} \}$ and $E_2 = \{ \{a, b\}, \{c, d\} \}.$
\noindent To illustrate 1., let $Y = \{ a, b, c\}$. Then $\textbf{\textit{l}}_{E_1}(\textbf{\textit{l}}_{E_2}(Y)) = \emptyset$ while $\textbf{\textit{l}}_{E_2}(\textbf{\textit{l}}_{E_1}(Y)) = \{ a, b\}.$
\vspace{2mm}
\noindent For 2., let $Y = \{a\}$. Then $\textbf{\textit{u}}_{E_1}(\textbf{\textit{u}}_{E_2}(Y)) = \{ a, b, c \}$ while $\textbf{\textit{u}}_{E_2}(\textbf{\textit{u}}_{E_1}(Y)) = \{a, b, c, d\}.$
\vspace{2mm}
\noindent For 3., let $Y = \{a, b\}$. Then $\textbf{\textit{u}}_{E_1}(\textbf{\textit{l}}_{E_2}(Y)) = \{ a, b, c \}$ while $\textbf{\textit{l}}_{E_2}(\textbf{\textit{u}}_{E_1}(Y)) = \{a, b\}.$
\vspace{2mm}
\noindent For 4., let $Y = \{c, d\}$. Then $\textbf{\textit{l}}_{E_1}(\textbf{\textit{u}}_{E_2}(Y)) = \{ d \} $ while $\textbf{\textit{u}}_{E_2}(\textbf{\textit{l}}_{E_1}(Y)) = \{c, d\}.$
\end{proof}
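\noindent Non-commutation can also be confirmed by brute force: for each of the four compositions there is a witness set on which the two orders disagree. A sketch (Python; \texttt{lower} and \texttt{upper} are our own helpers) using the partitions from the counterexample:

```python
import itertools

def lower(P, X): return frozenset(x for B in P if B <= X for x in B)
def upper(P, X): return frozenset(x for B in P if B & X for x in B)

V = "abcd"
E1 = [frozenset("abc"), frozenset("d")]
E2 = [frozenset("ab"), frozenset("cd")]
subsets = [frozenset(s) for r in range(5) for s in itertools.combinations(V, r)]

compositions = [
    ("l1 l2 vs l2 l1", lambda Y: lower(E1, lower(E2, Y)), lambda Y: lower(E2, lower(E1, Y))),
    ("u1 u2 vs u2 u1", lambda Y: upper(E1, upper(E2, Y)), lambda Y: upper(E2, upper(E1, Y))),
    ("u1 l2 vs l2 u1", lambda Y: upper(E1, lower(E2, Y)), lambda Y: lower(E2, upper(E1, Y))),
    ("l1 u2 vs u2 l1", lambda Y: lower(E1, upper(E2, Y)), lambda Y: upper(E2, lower(E1, Y))),
]
for name, f, g in compositions:
    # Find some subset on which the two orders of application differ.
    witness = next(Y for Y in subsets if f(Y) != g(Y))
    print(name, "disagree at", sorted(witness))
```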
\noindent From Properties 1), 5) and 6) of lower and upper approximations in Section 2.1, we immediately get that,\\
(i) $\textbf{\textit{l}}_{E_1}(\textbf{\textit{l}}_{E_2}(Y)) \subseteq \textbf{\textit{l}}_{E_2}(Y),
\textbf{\textit{u}}_{E_1}(\textbf{\textit{u}}_{E_2}(Y)) \supseteq \textbf{\textit{u}}_{E_2}(Y),$\\
\phantom{(i)} $\textbf{\textit{u}}_{E_1}(\textbf{\textit{l}}_{E_2}(Y)) \supseteq \textbf{\textit{l}}_{E_2}(Y) \ \text{and}
\ \textbf{\textit{l}}_{E_1}(\textbf{\textit{u}}_{E_2}(Y)) \subseteq \textbf{\textit{u}}_{E_2}(Y).$\\
\hfill\\
\noindent If we do not know anything more about the relationship between $E_1$ and $E_2$ then nothing further may be deduced. However, if for example we know that $E_1 \leq E_2$ then the successive approximations are constrained as follows:
\begin{proposition}
If $E_1 \leq E_2$ then the following properties hold;
\vspace{2mm}
\noindent (ii) $\textbf{\textit{l}}_{E_1}(\textbf{\textit{l}}_{E_2}(Y)) = \textbf{\textit{l}}_{E_2}(Y) $\\
(iii) $ \textbf{\textit{l}}_{E_2}(\textbf{\textit{l}}_{E_1}(Y)) \subseteq \textbf{\textit{l}}_{E_2}(Y) $ \\
(iv) $ \textbf{\textit{u}}_{E_1}(\textbf{\textit{u}}_{E_2}(Y)) \supseteq \textbf{\textit{u}}_{E_1}(Y) $\\
(v) $ \textbf{\textit{u}}_{E_2}(\textbf{\textit{u}}_{E_1}(Y)) = \textbf{\textit{u}}_{E_2}(Y) $
\end{proposition}
\begin{proof}
Each of (ii)--(v) follows from monotonicity of the approximation operators together with the observation that, since $E_1 \leq E_2,$ every union of $E_2$-classes is also a union of $E_1$-classes and is therefore fixed by $\textbf{\textit{l}}_{E_1}$ and $\textbf{\textit{u}}_{E_1}.$
\end{proof}
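\noindent A quick exhaustive check of (ii)--(v) for a refinement pair $E_1 \leq E_2$ (Python sketch; the partitions are our own example):

```python
import itertools

def lower(P, X): return frozenset(x for B in P if B <= X for x in B)
def upper(P, X): return frozenset(x for B in P if B & X for x in B)

V = frozenset(range(6))
E2 = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
E1 = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4, 5})]  # E1 refines E2

for r in range(len(V) + 1):
    for s in itertools.combinations(sorted(V), r):
        Y = frozenset(s)
        assert lower(E1, lower(E2, Y)) == lower(E2, Y)      # (ii)
        assert lower(E2, lower(E1, Y)) <= lower(E2, Y)      # (iii)
        assert upper(E1, upper(E2, Y)) >= upper(E1, Y)      # (iv)
        assert upper(E2, upper(E1, Y)) == upper(E2, Y)      # (v)
print("(ii)-(v) hold for all 64 subsets")
```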
\begin{proposition}
Let $V$ be a finite non-empty set and let $E_1$ and $E_2$ be equivalence relations on $V.$ Let $ x\in V.$ Then $\textbf{\textit{l}}_{E_1}( \textbf{\textit{u}}_{E_2}(\{x\})) \subseteq POS_{E_1}({E_2}).$
\end{proposition}
\begin{corollary}
Let $V$ be a finite non-empty set and let $E_1$ and $E_2$ be equivalence relations on $V.$ Let $ X \subseteq V.$ Then $POS_{E_1}({E_2}) \cap X \subseteq \bigcup \limits_{x \in X} \textbf{\textit{l}}_{E_1}( \textbf{\textit{u}}_{E_2}(\{x\})).$
\end{corollary}
\begin{corollary}
Let $V$ be a finite non-empty set and let $E_1$ and $E_2$ be equivalence relations on $V.$ Then $POS_{E_1}({E_2}) = \bigcup \limits_{x \in V} \textbf{\textit{l}}_{E_1}( \textbf{\textit{u}}_{E_2}(\{x\})).$
\end{corollary}
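\noindent The last corollary expresses $POS_{E_1}({E_2})$ pointwise, since $\textbf{\textit{u}}_{E_2}(\{x\}) = [x]_{E_2}.$ A sketch verifying the identity on an example of our own (Python):

```python
def lower(P, X): return frozenset(x for B in P if B <= X for x in B)
def upper(P, X): return frozenset(x for B in P if B & X for x in B)

V = frozenset("abcdef")
E1 = [frozenset("ab"), frozenset("c"), frozenset("def")]
E2 = [frozenset("abc"), frozenset("de"), frozenset("f")]

# POS_{E1}(E2): union of E1-lower approximations of the E2-blocks.
pos = frozenset(x for X in E2 for x in lower(E1, X))
# Pointwise form from the corollary: union over x of l_{E1}(u_{E2}({x})).
pointwise = frozenset().union(
    *(lower(E1, upper(E2, frozenset({x}))) for x in V))
assert pos == pointwise
print(sorted(pos))
```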
\begin{center}
\includegraphics[scale = .5]{Approx5}

Figure 4.1: Successive approximations may become coarser when iterated.
\end{center}
\begin{proposition}
Let $V$ be a set and $E$ an equivalence relation on $V.$ Let $S_E$ be the family of all unions of equivalence classes of $E$ (the set of classes closed under union, including the empty union). Let $ F : \mathscr{P}(V) \rightarrow S_E$ be such that $F(X) = \bigcup\limits_{x \in X} [x]_E$ and let $Id: S_E \rightarrow \mathscr{P}(V) $ be such that $Id(Y) = Y.$ Then $F$ and $Id$ form a Galois connection.
\end{proposition}
\begin{proof}
It is clear from the definitions that both $F$ and $Id$ are monotone. We need that for $X \in \mathscr{P}(V)$ and $Y \in S_E, \ F(X) \subseteq Y$ iff $ X \subseteq Id(Y).$ If $F(X) \subseteq Y,$ then since $X \subseteq F(X)$ we get $X \subseteq Y = Id(Y).$ Conversely, if $X \subseteq Y$ and $Y$ is a union of $E$-classes, then $[x]_E \subseteq Y$ for every $x \in X,$ so $F(X) \subseteq Y.$
\end{proof}
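\noindent The adjunction can be verified exhaustively on a small universe; note that $F$ here is just the $E$-upper approximation. A sketch (Python; the universe and relation are our own example):

```python
import itertools

def upper(P, X):
    return frozenset(x for B in P if B & X for x in B)

V = "abcd"
E = [frozenset("ab"), frozenset("cd")]

def F(X):                      # F(X) = union of [x]_E over x in X
    return upper(E, X)

# S_E: all unions of E-classes (the empty union included).
S_E = [frozenset().union(*combo)
       for r in range(len(E) + 1) for combo in itertools.combinations(E, r)]
subsets = [frozenset(s) for r in range(5) for s in itertools.combinations(V, r)]

for X in subsets:
    for Y in S_E:
        assert (F(X) <= Y) == (X <= Y)   # F(X) <= Y  iff  X <= Id(Y)
print("Galois connection verified")
```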
\noindent \textbf{Remark 3.1}. Successive approximations break the Galois structure of single approximations. We can imagine that single approximations are a kind of sorting on the domain of a structure. We partition objects in the domain into boxes and in each box there is a special member (the lower or upper approximation) which identifies/represents any member in its respective box. We may say that objects are approximated by their representative.
For successive approximations, we have two different sortings of the same domain. Objects are sorted by the first approximation and only their representative members are then sorted by the second approximation. An object is then placed in the box that its representative member is assigned to in the second approximation, even though the object itself may be placed differently if the second approximation alone was used. Hence the errors `add' in some sense. In Figure 4.1, the final grouping as seen by following successive arrows, may be coarser than both the first and second approximations used singly. An interesting problem is how to correct/minimise these errors. It is also interesting how much of the individual approximations can be reconstructed from knowledge of the combined approximation. In the next section we will investigate this problem.
\section{Decomposing $L_2L_1$ Approximations}
What if we knew that a system contained exactly two successive approximations? Would we be able to decompose it into its individual components? Before getting into what can be done and what information can be extracted, we start with an example to illustrate this.\\
\noindent \textbf{Notation}: Let $V$ be a finite set. Let a function representing the output of a subset of $V$ when acted on by a lower approximation operator $L_1$ followed by a lower approximation operator $L_2,$ based on the equivalence relations $E_1$ and $E_2$ respectively, be denoted by $L_2L_1$ where $L_2L_1(X) = L_2(L_1(X))$ and $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V).$ Similarly, other combinations of successive lower and upper approximations examined will be denoted by $U_2U_1,\ L_2U_1, U_2L_1$ which denotes successive upper approximations, an upper approximation followed by a lower approximation and a lower approximation followed by an upper approximation respectively.
Sometimes when we know that the approximations are based on equivalence relations $P$ and $Q$ we may use the subscripts to indicate this for example; $L_QL_P.$
Lastly, if for a defined $L_2L_1$ operator there exists a pair of equivalence relation solutions $E_1$ and $E_2$ which are such that the lower approximation operators $L_1$ and $L_2$ are based on them respectively, then we may denote this solution by the pair $(E_1, E_2).$ Also, $(E_1, E_2)$ can be said to produce or generate the operators based on them.\\
\noindent \textbf{Example 4.1}\\
Let $V = \{a, b, c, d, e\}.$ Let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be the operator obtained by applying a lower approximation operator $L_1$ followed by a lower approximation operator $L_2,$ induced by equivalence relations $E_1$ and $E_2$ respectively, with values as follows:\\
\hfill\\
$L_2L_1(\emptyset) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, b, c, d, e\}) = \{a, b, c, d, e\} $ \\
$L_2L_1(\{a\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{b, c, d, e\})= \{e\} $\\
$L_2L_1(\{b\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, c, d, e\}) = \{c, d, e\} $\\
$L_2L_1(\{c\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, b, d, e\}) = \{e\} $\\
$L_2L_1(\{d\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, b, c, e\}) = \{a, b \} $\\
$L_2L_1(\{e\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, b, c, d\}) = \{a, b\} $\\
$L_2L_1(\{a, b\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{c, d, e\}) = \{e\} $\\
$L_2L_1(\{a, c\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{b, d, e\}) = \{e\} $\\
$L_2L_1(\{a, d\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{b, c, e\}) = \emptyset $\\
$L_2L_1(\{a, e\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{b, c, d\}) = \emptyset $\\
$L_2L_1(\{b, c\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{a, c, e\}) = \emptyset $\\
$L_2L_1(\{b, d\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{a, d, e\}) = \{e\} $\\
$L_2L_1(\{b, e\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{a, c, d\}) = \emptyset $\\
$L_2L_1(\{c, d\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{a, b, e\}) = \emptyset $\\
$L_2L_1(\{c, e\}) = \emptyset \qquad \qquad \qquad \qquad \qquad \qquad \quad L_2L_1(\{a, b, d\}) = \emptyset $\\
$L_2L_1(\{d, e\}) = \{ e\} \qquad \qquad \qquad \qquad \qquad \qquad L_2L_1(\{a, b, c\}) = \{a, b\} $\\
\hfill\\
We will now try to reconstruct $E_1$ and $E_2.$ The minimal non-empty sets in the output are $\{e\}$ and $\{a,b\}.$ Clearly, each of these is either an equivalence class of $E_2$ or a union of two or more equivalence classes of $E_2.$ Since $\{e\}$ is a singleton, it must be an equivalence class of $E_2.$ So far we have partially reconstructed $E_2$: it is equal to or finer than $\{ \{a, b\}, \{c, d\}, \{e\} \}.$
Let us consider the pre-images of these sets in $L_2L_1$ to try to reconstruct $E_1.$ Now, $L_2L_1^{-1}(\{e\}) = \{ \{ d,e \}, \{a, d, e \}, \{ b, d, e\}, \{ c, d, e\}, \{ a, b, d, e\}, \{ b, c, d, e\} \}.$ We see that this set has a minimum with respect to containment and it is $\{d, e\}.$ Hence either $\{d,e\}$ is an equivalence class of $E_1$ or both of $\{d\}$ and $\{e\}$ are equivalence classes of $E_1.$
Similarly, $L_2L_1^{-1}(\{a, b\}) = \{ \{ a, b, c \}, \{a, b, c, e\}, \{a, b, c, d\} \}.$ We see that this set has a minimum, which is $\{a, b, c\},$ hence this set is either an equivalence class or a union of equivalence classes in $E_1.$ Now, $L_2L_1^{-1}(\{ c, d, e\}) = \{\{a, c, d, e\}\}.$ Hence, $\{a, c, d, e\}$ also consists of a union of equivalence classes of $E_1.$ Since we know from above that $\{d,e\}$ consists of the union of one or more equivalence classes of $E_1$, this means that $\{a,c\}$ consists of the union of one or more equivalence classes of $E_1$ and $\{b\}$ is an equivalence class of $E_1.$ So far we have that $E_1$ is equal to or finer than $\{ \{a,c\}, \{b\}, \{d, e\} \} .$
Now we consider whether $\{a, c\} \in E_1$ or both of $\{a\}$ and $\{c\}$ are in $E_1.$ We can rule out the latter, for suppose it were the case. Then $L_2L_1(\{a, b\})$ would equal $\{a, b\},$ since we already have that $\{b\} \in E_1$ and $\{a, b\}$ is a union of equivalence classes in $E_2.$ Since this is not the case, we get that $\{a,c\} \in E_1.$ By a similar analysis, since $L_2L_1(\{a,c, d\})$ equals $\emptyset$ rather than $\{c, d\},$ we get that $\{ d, e\} \in E_1.$ Hence we have fully reconstructed $E_1$ and $E_1 = \{ \{a, c\}, \{b\}, \{d, e\} \}.$
With $E_1$ reconstructed we can complete the reconstruction of $E_2.$ Recall that $\{a, b\}$ is a union of equivalence classes in $E_2.$ Suppose that $\{a\} \in E_2.$ Then $L_2L_1(\{a,c\}) $ would equal $\{a\},$ since $\{a, c \} \in E_1,$ but from the given list we see that it does not. Hence, $\{a, b\} \in E_2.$ Similarly, we recall that $\{c, d \}$ is a union of equivalence classes in $E_2.$ Suppose that $\{d\} \in E_2.$ Then $L_2L_1(\{d, e\})$ would equal $\{d, e\},$ since $\{d,e\} \in E_1,$ but it is only equal to $\{e\}.$ Hence, $\{c, d\} \in E_2.$ We have now fully reconstructed $E_2$ and $E_2 = \{ \{a, b\}, \{ c, d\}, \{ e\} \}.$
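\noindent The reconstruction can be confirmed by regenerating the operator from the recovered pair and spot-checking the listed values (Python sketch; \texttt{lower} is our own helper):

```python
def lower(P, X):
    return frozenset(x for B in P if B <= X for x in B)

E1 = [frozenset("ac"), frozenset("b"), frozenset("de")]   # recovered E1
E2 = [frozenset("ab"), frozenset("cd"), frozenset("e")]   # recovered E2

def L2L1(X):
    return lower(E2, lower(E1, X))

# Spot-check against the values listed in the example.
assert L2L1(frozenset("de")) == frozenset("e")
assert L2L1(frozenset("abc")) == frozenset("ab")
assert L2L1(frozenset("acde")) == frozenset("cde")
assert L2L1(frozenset("bcde")) == frozenset("e")
assert L2L1(frozenset("ab")) == frozenset()
print("recovered (E1, E2) reproduces the listed values")
```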
\hfill\\
\noindent The next example shows that we cannot always uniquely decompose successive approximations.\\
\hfill\\
\textbf{Example 4.2}
\noindent Let $V = \{a, b, c, d\}$ and let $E_1 = \{ \{ a, b\}, \{ c, d\} \}, \ E_2 = \{ \{a, c \}, \{ b, d\} \}$ and $E_3 = \{ \{a, d \}, \{ b, c\} \}.$ We see that $L_1L_2(X) = L_1L_3(X) = \emptyset$ for all $X \in \mathscr{P}(V) - \{V\}$ and $L_1L_2(X) = L_1L_3(X) = V$ when $X = V.$ Then for all $X \subseteq V,$ $ L_1L_2(X) = L_1L_3(X)$ even though $E_2 \neq E_3.$ Hence, if we are given a double, lower successive approximation on $\mathscr{P}(V)$ which outputs $\emptyset$ for all $X \in \mathscr{P}(V) - \{V\}$ and $V$ for $X = V,$ then we would be unable to say whether it was produced by $L_1L_2$ or $L_1L_3.$ \\
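\noindent The non-uniqueness in Example 4.2 is easy to confirm exhaustively (Python sketch; \texttt{lower} is our own helper):

```python
import itertools

def lower(P, X):
    return frozenset(x for B in P if B <= X for x in B)

V = "abcd"
E1 = [frozenset("ab"), frozenset("cd")]
E2 = [frozenset("ac"), frozenset("bd")]
E3 = [frozenset("ad"), frozenset("bc")]

for r in range(5):
    for s in itertools.combinations(V, r):
        X = frozenset(s)
        # Two distinct second relations, one and the same composed operator.
        assert lower(E1, lower(E2, X)) == lower(E1, lower(E3, X))
print("L1L2 = L1L3 on every subset, although E2 != E3")
```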
\hfill\\
\noindent In the following we begin to build a picture of the conditions needed for the existence of unique solutions for double, successive approximations. \\
\begin{proposition}
Let $V$ be a set with equivalence relations $E_1$ and $E_2$ on $V.$ If for each $[x]_{E_1} \in E_1,$ $[x]_{E_1}$ is such that $L_2([x]_{E_1}) = \emptyset$ i.e $[x]_{E_1}$ is either internally $E_2$--undefinable or totally $E_2$--undefinable, then the corresponding approximation operator, $L_2L_1$ on $\mathscr{P}(V)$ will be such that $L_2L_1([x]_{E_1}) = \emptyset.$
\end{proposition}
\begin{proof}
Since $[x]_{E_1}$ is an equivalence class of $E_1,$ we have $L_1([x]_{E_1}) = [x]_{E_1}.$ Hence $L_2L_1([x]_{E_1}) = L_2([x]_{E_1}) = \emptyset$ by hypothesis.
\end{proof}
\noindent \textbf {Remark 4.1} We note that the union of $E$-undefinable sets is not necessarily $E$-undefinable. Consider Example 4.2. Here, $\{a, b\}$ and $\{c, d\}$ are both totally $E_2$--undefinable but their union, $\{a, b, c, d\}$ is $E_2$--definable.\\
\noindent \textbf{Algorithm 4.1: For Partial Decomposition of Double Successive Lower Approximations}
\vspace{4mm}
\noindent Let $V$ be a finite set. Given an input of a fully defined operator $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V),$ if a solution exists, we can produce a solution $(S, R)$, i.e. where $L_1$ and $L_2$ are the lower approximation operators of equivalence relations $S$ and $R$ respectively, by performing the following steps:
\vspace{3mm}
\noindent \textbf{1}. Let $J$ be the set of output sets of the given $L_2L_1$ operator. We form the relation $R$ such that for $a, b \in V,$ $a \sim_R b \iff (a \in X \iff b\in X)$ for every $X \in J.$ It is clear that $R$ is an equivalence relation.
\vspace{3mm}
\noindent \textbf{2}. For each non-empty output set $Y,$ find the minimum pre-image set $Y_m$ with respect to $\subseteq$ such that $L_2L_1(Y_m) = Y$. Collect all these minimum sets in a set $K.$ If for some non-empty output set $Y$ the minimum $Y_m$ does not exist, then the given operator has no solution and we return 0 signifying this.
\vspace{3mm}
\noindent \textbf{3}. Using $K,$ we form the relation $S$ such that for $a, b \in V,$ $a \sim_S b \iff (a \in X \iff b\in X)$ for every $X \in K.$ It is clear that $S$ is an equivalence relation.
\vspace{3mm}
\noindent \textbf{4}. Form the operator $L_RL_S : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ generated by $(S, R).$ If for all $X \in \mathscr{P}(V)$, the given $L_2L_1$ operator is such that $L_2L_1(X) = L_RL_S(X),$ then $(S, R)$ is a solution proving that a solution exists (note that it is not necessarily unique). Return $(S, R).$ Otherwise, discard $S$ and $R$ and return 0 signifying that no solution exists.\\
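\noindent The four steps above can be sketched directly (Python; \texttt{decompose}, \texttt{signature\_partition} and the representation of the operator as a function on frozensets are our own choices, not part of the original formulation):

```python
import itertools

def lower(P, X):
    return frozenset(x for B in P if B <= X for x in B)

def signature_partition(V, family):
    """Relate a ~ b iff a and b belong to exactly the same sets of the family."""
    sig = {v: frozenset(i for i, X in enumerate(family) if v in X) for v in V}
    blocks = {}
    for v in V:
        blocks.setdefault(sig[v], set()).add(v)
    return [frozenset(b) for b in blocks.values()]

def decompose(V, op):
    """Algorithm 4.1 sketch: return a pair (S, R) generating op, or None."""
    subsets = [frozenset(s) for r in range(len(V) + 1)
               for s in itertools.combinations(sorted(V), r)]
    outputs = list({op(X) for X in subsets})
    R = signature_partition(V, outputs)                       # step 1
    K = []
    for Y in outputs:
        if not Y:
            continue
        pre = [X for X in subsets if op(X) == Y]
        Ym = min(pre, key=len)
        if not all(Ym <= X for X in pre):                     # step 2: no minimum
            return None
        K.append(Ym)
    S = signature_partition(V, K)                             # step 3
    if all(lower(R, lower(S, X)) == op(X) for X in subsets):  # step 4
        return S, R
    return None

# Check on the operator generated by the pair from Example 4.1.
E1 = [frozenset("ac"), frozenset("b"), frozenset("de")]
E2 = [frozenset("ab"), frozenset("cd"), frozenset("e")]
S, R = decompose(frozenset("abcde"), lambda X: lower(E2, lower(E1, X)))
print(sorted(sorted(b) for b in S), sorted(sorted(b) for b in R))
```

On this input the algorithm recovers exactly the generating pair, in line with the theorem proved below for solutions satisfying conditions (i) and (ii).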
\noindent We will prove the claims in step 2 and step 4 in this section. Next, we prove step 2.
\begin{proposition} \label{l4}
Let $V$ be a set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ If for $Y \neq \emptyset$ in the range of $L_2L_1,$ there does not exist a minimum set $Y_m,$ with respect to $\subseteq$ such that $L_2L_1(Y_m) = Y,$ then there is no equivalence relation pair solution to the given operator.
\end{proposition}
\begin{proof}
Suppose, to get a contradiction, that a solution $(E_1, E_2)$ exists and there is no minimum set $Y_m$ such that $L_2L_1(Y_m) =Y.$ Since $V$ is finite, there exist at least two distinct minimal sets $Y_s$ and $Y_t,$ say, such that $L_2L_1(Y_s) =Y$ and $L_2L_1(Y_t) =Y.$ Note that $Y \subseteq Y_s$ and $Y \subseteq Y_t,$ since $L_2L_1(X) \subseteq X$ for every $X.$ Since $Y_s$ and $Y_t$ are minimal sets with the same output after two successive lower approximations, each must be a union of equivalence classes of $E_1$ containing $Y.$ Since they are unequal, WLOG there exists $[a]_{E_1} \in E_1$ such that $[a]_{E_1} \subseteq Y_s$ but $[a]_{E_1} \not\subseteq Y_t,$ and hence $[a]_{E_1} \cap Y_t = \emptyset.$ Since $Y_s$ is minimal, $[a]_{E_1} \cap Y \neq \emptyset$ (or else $L_2L_1(Y_s) = L_2L_1(Y_s - [a]_{E_1}) = Y,$ contradicting minimality). So let $ x \in [a]_{E_1} \cap Y.$ Then $x \notin Y_t,$ which contradicts $Y \subseteq Y_t.$
\end{proof}
\noindent We now prove three lemmas on the way to proving the claim in step 4.
\begin{lemma} \label{p15}
Let $V$ be a set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ Let $R$ and $S$ be equivalence relations defined on $V$ as constructed in the previous algorithm. If $(E_1, E_2)$ is a solution of $L_2L_1$ then $E_2\leq R$ and $E_1 \leq S.$
\end{lemma}
\begin{proof}
We first prove $E_2\leq R.$ Now the output set of a non-empty set in $\mathscr{P}(V)$ is obtained by first applying the lower approximation $L_1$ to it and then applying the lower approximation $L_2$ to the result. Hence by definition of $L_2,$ the non-empty output sets are unions of equivalence classes of the equivalence relation which corresponds to $L_2.$ If $a$ is in an output set but $b$ is not, then they cannot belong to the same equivalence class of $E_2$; i.e. $a \not\sim_R b$ implies that $a \not\sim_{E_2} b.$ Hence $E_2\leq R. $
Similarly, the minimal pre-image, $X$ say, of a non-empty output set which is a union of equivalence classes in $E_2,$ has to be a union of equivalence classes in $E_1.$ For suppose it was not. Let $Y = \{y \in X\ | \ [y]_{E_1} \not\subseteq X\}.$ By assumption, $Y \neq \emptyset.$ Then $L_1 (X) = L_1 (X - Y).$ Hence $L_2L_1(X) = L_2L_1 (X - Y)$ but $|X - Y| < |X|$ contradicting minimality of $X$. Therefore, if $a$ belongs to the minimal pre-image of a non-empty output set but $b$ does not belong to it, then $a$ and $b$ cannot belong to the same equivalence class in $E_1$ i.e. $a \not\sim_S b$ which implies that $a \not\sim_{E_1} b.$ Hence $E_1\leq S.$
\end{proof}
\noindent \textbf{Remark 4.2} The above lemma implies that, for a given $L_2L_1$ operator on $\mathscr{P}(V)$ for a set $V,$ the pair of relations $S$ and $R$ given by the algorithm, corresponding to $L_1$ and $L_2$ respectively, are the coarsest solutions for $E_1$ and $E_2$ compatible with the given, fully defined $L_2L_1$ operator. That is, for any other possible solution $(E_1, E_2)$ of the given $L_2L_1$ operator, $E_1 \leq S$ and $E_2 \leq R.$\\
\begin{lemma} \label{l2}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined operator. If there exists an equivalence relation pair solution $(E_1, E_2)$ to the operator such that there exist $[x]_{E_2}, [y]_{E_2} \in E_2$ with $[x]_{E_2} \neq [y]_{E_2}$ and $\textbf{u}_{E_1}([x]_{E_2}) = \textbf{u}_{E_1}([y]_{E_2}), $ then there exists another solution $(E_1, H_2)$, where $H_2$ is the equivalence relation formed from $E_2$ by combining $[x]_{E_2}$ and $[y]_{E_2}$ and leaving all other classes as in $E_2.$ That is, $[x]_{E_2} \cup [y]_{E_2} = [z] \in H_2$ and if $[w] \in E_2$ with $[w] \neq [x]_{E_2}$ and $[w] \neq [y]_{E_2},$ then $[w] \in H_2.$
\end{lemma}
\begin{proof}
Suppose that $(E_1, E_2)$ is a solution of a given $L_2L_1$ operator and $H_2$ is as defined above. Now, $L_2L_1(X) = Y $ iff the union of $E_1$-equivalence classes contained in $X$ contains the union of $E_2$-equivalence classes which is equal to $Y.$ So, in the $(E_1, H_2)$ solution, the only way that $L_{H_2}L_{E_1}(X)$ could differ from $L_{E_2}L_{E_1}(X)$ (which equals $L_2L_1(X)$) is if (i) $[x]_{E_2}$ is contained in $L_{E_2}L_{E_1}(X)$ while $[y]_{E_2}$ is not, or (ii) $[y]_{E_2}$ is contained in $L_{E_2}L_{E_1}(X)$ while $[x]_{E_2}$ is not. This is because in $H_2,$ $[x]_{E_2}$ and $[y]_{E_2}$ always occur together in an output set if they occur in it at all (recall that output sets are unions of equivalence classes), namely inside the class $[z]= [x]_{E_2} \cup [y]_{E_2},$ and all the other equivalence classes of $H_2$ are the same as in $E_2.$ However, neither (i) nor (ii) is the case, since $\textit{\textbf{u}}_{E_1}([x]_{E_2}) = \textit{\textbf{u}}_{E_1}([y]_{E_2}).$ That is, $[x]_{E_2}$ is contained in exactly the same union of $E_1$-equivalence classes as $[y]_{E_2}.$ Thus, any set $X$ containing a union of $E_1$-equivalence classes which contains $[x]_{E_2}$ must also contain $[y]_{E_2},$ and therefore $[z] \in H_2.$ Hence, if $(E_1, E_2)$ is a solution for the given operator, then so is $(E_1, H_2).$
\end{proof}
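\noindent Lemma \ref{l2} in action: merging two $E_2$-classes with identical $E_1$-upper approximations does not change the operator. A tiny sketch of our own (Python):

```python
import itertools

def lower(P, X): return frozenset(x for B in P if B <= X for x in B)
def upper(P, X): return frozenset(x for B in P if B & X for x in B)

V = "abcd"
E1 = [frozenset("abcd")]                  # a single class
E2 = [frozenset("ab"), frozenset("cd")]   # both classes have E1-upper = V
H2 = [frozenset("abcd")]                  # E2 with the two classes merged

# Hypothesis of the lemma: equal E1-upper approximations.
assert upper(E1, frozenset("ab")) == upper(E1, frozenset("cd"))
for r in range(5):
    for s in itertools.combinations(V, r):
        X = frozenset(s)
        assert lower(E2, lower(E1, X)) == lower(H2, lower(E1, X))
print("(E1, E2) and (E1, H2) generate the same L2L1 operator")
```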
\begin{lemma} \label{l3}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined operator. If there exists an equivalence relation pair solution $(E_1, E_2)$ to the operator such that there exist $[x]_{E_1}, [y]_{E_1} \in E_1$ with $[x]_{E_1} \neq [y]_{E_1}$ and $\textbf{u}_{E_2}([x]_{E_1}) = \textbf{u}_{E_2}([y]_{E_1}), $ then there exists another solution $(H_1, E_2)$, where $H_1$ is the equivalence relation formed from $E_1$ by combining $[x]_{E_1}$ and $[y]_{E_1}$ and leaving all other classes as in $E_1.$ That is, $[x]_{E_1} \cup [y]_{E_1} = [z] \in H_1$ and if $[w] \in E_1$ with $[w] \neq [x]_{E_1}$ and $[w] \neq [y]_{E_1},$ then $[w] \in H_1.$
\end{lemma}
\begin{proof}
Suppose that $(E_1, E_2)$ is a solution of a given $L_2L_1$ operator and $H_1$ is as defined above. Now, $L_2L_1(X) = Y $ iff the union of $E_1$-equivalence classes contained in $X$ contains the union of $E_2$-equivalence classes which is equal to $Y.$ So, in the $(H_1, E_2)$ solution, the only way that $L_{E_2}L_{H_1}(X)$ could differ from $L_{E_2}L_{E_1}(X)$ (which equals $L_2L_1(X)$) is if the union of equivalence classes in $X$ which is needed to contain $Y$ (i) contains $[x]_{E_1} $ but not $[y]_{E_1},$ or (ii) contains $[y]_{E_1} $ but not $[x]_{E_1}.$ However, this is not the case, since $\textit{\textbf{u}}_{E_2}([x]_{E_1}) = \textit{\textbf{u}}_{E_2}([y]_{E_1}).$ That is, $[x]_{E_1}$ intersects exactly the same equivalence classes in $E_2$ as $[y]_{E_1}.$ So if $[x]_{E_1}$ is needed to contain an equivalence class in $E_2,$ then $[y]_{E_1}$ is also needed. In other words, if $L_2L_1(X) = Y,$ then for any minimal set $Y_m \subseteq X$ such that $L_2L_1(Y_m) = Y,$ $[x]_{E_1}$ is contained in $Y_m$ iff $[y]_{E_1}$ is contained in $Y_m$ iff $[z] \in H_1$ is contained in $Y_m.$ Hence, if $(E_1, E_2)$ is a solution for the given operator, then so is $(H_1, E_2).$
\end{proof}
\noindent We now have enough to be able to prove the claim in step 4 of Algorithm 4.1 (actually we prove something stronger because we also show conditions which the solutions of the algorithm must satisfy).
\begin{theorem} \label{t2}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined operator. If there exists an equivalence relation pair solution to the operator, then there exists a solution $(E_1, E_2)$ which satisfies,
\vspace{2mm}
\noindent (i) for each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent (ii) for each $[x]_{E_1}, [y]_{E_1} \in E_1,$ if $[x]_{E_1} \neq [y]_{E_1}$ then $\textbf{u}_{E_2}([x]_{E_1}) \neq \textbf{u}_{E_2}([y]_{E_1})$.
\vspace{2mm}
\noindent Furthermore, $E_1 = S$ and $E_2 = R$ where $(S, R)$ are the solutions obtained by applying Algorithm 4.1 to the given $L_2L_1$ operator.
\end{theorem}
\begin{proof}
Suppose that there exists a solution $(C, D).$ Then either $(C, D)$ already satisfies conditions (i) and (ii) or it does not. If it does, take $ (E_1, E_2) = (C, D).$ If it does not satisfy condition (i), then use repeated applications of Lemma \ref{l2} until we arrive at a solution $(C, E_2)$ which does. Similarly, if $(C, E_2)$ does not satisfy condition (ii), use repeated applications of Lemma \ref{l3} until it does. Since $\mathscr{P}(V)$ is finite, this will take at most finitely many applications of the lemmas, after which we obtain a solution $(E_1, E_2)$ satisfying the conditions of the theorem. Since there is a solution, by Proposition \ref{l4} we will at least be able to reach step 4 of Algorithm 4.1. So let $S$ and $R$ be the relations formed by the algorithm after step 3. Next, we will show that $E_1= S$ and $E_2= R.$ Now, by Lemma \ref{p15}, we have that $E_1 \leq S$ and $E_2 \leq R.$
Consider the output sets of the given $L_2L_1$ operator. It is clear that these sets are unions of (possibly zero) equivalence classes of $E_2.$ Let $[y]_{E_2} \in E_2.$ Then $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2})) \supseteq [y]_{E_2}.$
\vspace{2mm}
\noindent \textbf{Claim 1:} $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2})) $ is the minimum output set of $L_2L_1$ such that it contains $[y]_{E_2}$ and $\textbf{\textit{u}}_{E_1}([y]_{E_2})$ is the minimum set $X$ such that $L_2L_1(X) \supseteq [y]_{E_2}.$
\vspace{2mm}
To see this, first note that $L_2L_1$ is a monotone function on $\mathscr{P}(V),$ being the composition of the monotone operators $L_1$ and $L_2.$ Hence if we can show that $\textbf{\textit{u}}_{E_1}([y]_{E_2})$ is the minimum set $X \in \mathscr{P}(V)$ such that $L_2L_1(X) \supseteq [y]_{E_2},$ then $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2}))$ will be the minimum output set which contains $[y]_{E_2}.$ Now, for $L_2L_1(X) \supseteq [y]_{E_2},$ the set $L_1(X)$ must contain each member of $[y]_{E_2}.$ We note that the range of $L_1$ contains only unions of equivalence classes of $E_1$ (counting the empty set as a union of zero classes). Hence for $L_1(X)$ to contain each element of $[y]_{E_2},$ it must contain each equivalence class of $E_1$ which contains any of these elements; in other words, it must contain $\textbf{\textit{u}}_{E_1}([y]_{E_2}).$ Indeed, suppose that $X$ is such that $X \not\supseteq \textbf{\textit{u}}_{E_1}([y]_{E_2})$ and $ L_2L_1(X) \supseteq [y]_{E_2}.$ Then for some $v \in [y]_{E_2},$ $[v]_{E_1} \not\subseteq X,$ so $v \notin L_1(X)$ and hence $v \notin L_2L_1(X).$ Thus $L_2L_1(X)$ does not contain $[y]_{E_2},$ which is a contradiction.
\vspace{2mm}
\noindent \textbf{Claim 2:} $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2})) $ is not the minimum output set with respect to containing any other $[z]_{E_2} \neq [y]_{E_2}.$
\vspace{2mm}
Suppose that for some $[z]_{E_2} \in E_2$ with $[z]_{E_2} \neq [y]_{E_2},$ the set $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2}))$ is the minimum output set containing $[z]_{E_2}.$ Then by the previous claim, we get that $L_2L_1(\textbf{\textit{u}}_{E_1}([y]_{E_2})) = L_2L_1(\textbf{\textit{u}}_{E_1}([z]_{E_2}))$ and that $\textbf{\textit{u}}_{E_1}([y]_{E_2}) \supseteq \textbf{\textit{u}}_{E_1}([z]_{E_2})$. But since $\textbf{\textit{u}}_{E_1}([y]_{E_2})$ is the minimum set $X$ such that $L_2L_1(X) \supseteq [y]_{E_2}$, the stated equality also gives us that $\textbf{\textit{u}}_{E_1}([y]_{E_2}) \subseteq \textbf{\textit{u}}_{E_1}([z]_{E_2}).$ Hence we have $\textbf{\textit{u}}_{E_1}([y]_{E_2}) = \textbf{\textit{u}}_{E_1}([z]_{E_2}),$ which contradicts condition (i) of the theorem.
Now we can reconstruct $E_2$ by relating elements which always occur together in the output sets. That is, $a \sim_R b \iff \ (a \in X \iff b \in X)$ for each $X$ in the range of $L_2L_1$. From the previous proposition we have that $E_2 \leq R.$ We claim that $R \leq E_2,$ hence $R = E_2.$ To show this, suppose that it is not the case. Then there exist $a, b \in V$ such that $a \sim_R b$ but $a \not\sim_{E_2} b.$ By Claim 1, $L_2L_1(\textbf{\textit{u}}_{E_1}([a]_{E_2}))$ is the minimum output set which contains $[a]_{E_2},$ and since $a \sim_R b$ it must contain $b,$ and consequently $[b]_{E_2}$ as well. Similarly by Claim 1, $L_2L_1(\textbf{\textit{u}}_{E_1}([b]_{E_2}))$ is the minimum output set which contains $[b]_{E_2},$ and since $a \sim_R b$ it must contain $a,$ and consequently $[a]_{E_2}$ as well. By minimality we therefore have both $L_2L_1(\textbf{\textit{u}}_{E_1}([a]_{E_2})) \subseteq L_2L_1(\textbf{\textit{u}}_{E_1}([b]_{E_2}))$ and $L_2L_1(\textbf{\textit{u}}_{E_1}([a]_{E_2})) \supseteq L_2L_1(\textbf{\textit{u}}_{E_1}([b]_{E_2})),$ which implies that $L_2L_1(\textbf{\textit{u}}_{E_1}([a]_{E_2})) = L_2L_1(\textbf{\textit{u}}_{E_1}([b]_{E_2})).$ This contradicts Claim 2 since $[a]_{E_2} \neq [b]_{E_2}.$ Hence $E_2 = R,$ and we can reconstruct $E_2$ by forming the equivalence relation $R$ defined by the output sets.
It remains to reconstruct $E_1.$ Next, we list the pre-images of the minimal output sets which contain $[y]_{E_2}$ for each $[y]_{E_2}$ in $E_2;$ by Claim 1 each such pre-image exists and is equal to $\textbf{\textit{u}}_{E_1}([y]_{E_2}).$ This implies that each such set is the union of some of the equivalence classes of $E_1.$ Now using this pre-image list we relate elements of $V$ in the following way: $a \sim_S b \iff \ (a\in X \iff b \in X)$ for each $X$ in the pre-image list. From the previous proposition we have that $E_1 \leq S.$ We claim that $S \leq E_1$ and hence $S = E_1.$ Suppose that it is not the case. That is, there exist $a, b \in V$ such that $a \sim_S b$ but $a \not\sim_{E_1} b.$ Hence $[a]_{E_1} \neq [b]_{E_1}.$ By condition (ii) of the theorem, we know that $\textit{\textbf{u}}_{E_2}([a]_{E_1}) \neq \textit{\textbf{u}}_{E_2}([b]_{E_1}).$ So WLOG suppose that $d \in \textit{\textbf{u}}_{E_2}([a]_{E_1})$ but $d \not\in \textit{\textbf{u}}_{E_2}([b]_{E_1}).$ Since these sets are unions of equivalence classes in $E_2,$ this implies that 1) $[d]_{E_2} \subseteq \textit{\textbf{u}}_{E_2}([a]_{E_1})$ and 2) $[d]_{E_2} \cap \textit{\textbf{u}}_{E_2}([b]_{E_1}) = \emptyset.$ Now by Claim 1, $\textit{\textbf{u}}_{E_1}([d]_{E_2})$ is the minimum set $X$ such that $L_2L_1(X)$ contains $[d]_{E_2},$ and so it is on the pre-image list from which the relation $S$ was formed. However, 1) implies that this set contains $a$ while 2) implies that this set does not contain $b.$ This contradicts $a \sim_S b.$ Hence $S = E_1$ and we can construct $E_1$ by constructing $S.$ The result is shown.
\end{proof}
\noindent Next we give a graph-theoretic version of the theorem, but first we define a graph showing the relationship between two equivalence relations on a set.
\begin{definition}
Let $C$ and $D$ be two equivalence relations on a set $V.$ Form a bipartite graph $B(C, D) = (G, E),$ where the node set is $G = \{ [u]_C \ | \ [u]_C \in C \} \cup \{ [u]_D \ | \ [u]_D \in D \}$ and the edge set is $E = \{ ([u]_C, [v]_D)\ | \ \exists \ x \in V : \ x \in [u]_C \ \text{and} \ x \in [v]_D \}.$ We call this the \textbf{incidence graph} of the pair $(C, D).$
\end{definition}
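\noindent For instance, with the illustrative relations $C = \{\{a, b\}, \{c\}\}$ and $D = \{\{a\}, \{b, c\}\}$ on $V = \{a, b, c\},$ the incidence graph $B(C, D)$ has the four nodes $\{a,b\}_C,$ $\{c\}_C,$ $\{a\}_D,$ $\{b,c\}_D$ and the three edges $(\{a,b\}_C, \{a\}_D),$ $(\{a,b\}_C, \{b,c\}_D)$ and $(\{c\}_C, \{b,c\}_D),$ shared through $a,$ $b$ and $c$ respectively. This graph is a path on four nodes, so its only component is not complete bipartite.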
\begin{theorem}
Let $V$ be a finite set and let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ If $(E_1, E_2)$ is a solution satisfying the conditions of Theorem \ref{t2}, then the incidence graph of $E_1$ and $E_2,$ $B(E_1, E_2),$ contains no complete bipartite subgraph as a component other than a single edge (that is, $K_2$).
\end{theorem}
\begin{proof}
This is a direct graph-theoretic translation of the previous theorem. Suppose that the incidence graph $B(E_1, E_2)$ contains a complete bipartite subgraph with more than two nodes as a component. Then either two distinct classes of $E_2$ in this component intersect exactly the same classes of $E_1,$ violating condition (i) of the theorem, or two distinct classes of $E_1$ in this component intersect exactly the same classes of $E_2,$ violating condition (ii).
\end{proof}
\begin{corollary}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If $(E_1, E_2)$ is a unique solution for the operator then $|E_1| < 2^{|E_2|} $ and $|E_2| < 2^{|E_1|}. $
\end{corollary}
\begin{proof}
This follows directly from the conditions since in the incidence graph of a unique solution $(E_1, E_2),$ each equivalence class in $E_1$ is mapped to a unique non-empty subset of equivalence classes in $E_2$ and vice versa.
\end{proof}
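\noindent For instance, the solution $S = \{\{a, c\}, \{b\}, \{d, e\}\},$ $R = \{\{a, b\}, \{c, d\}, \{e\}\}$ of Example 4.1 (shown later in this section to be unique) has $|S| = |R| = 3,$ and indeed $3 < 2^3 = 8,$ consistent with the bound.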
\noindent A natural question now arises: without assuming conditions on the equivalence relations, are there instances when the algorithm produces a unique solution? Example 4.1 is an example of a unique decomposition of a given $L_2L_1$ operator. What conditions result in a unique solution to a given $L_2L_1$ operator? Can we find characterising features of the pairs of equivalence relations which give a unique $L_2L_1$ operator?
We note that the algorithm always produces a solution for a fully defined $L_2L_1$ operator which has at least one solution. Hence, if there is a unique solution, then this pair of equivalence relations satisfies the conditions of Theorem \ref{t2}. Recall that in Example 4.2 we were given an $L_2L_1$ operator defined on $\mathscr{P}(V)$ for $V = \{a, b, c, d\}$ such that $L_2L_1(X) = \emptyset$ for all $X \neq V$ and $L_2L_1(V) = V.$ This example shows us that in addition to a solution satisfying the conditions of the theorem, which applying the algorithm gives us, namely $E_1 = \{\{a, b, c, d\}\}$ and $E_2 = \{\{a, b, c, d\}\},$ we also have solutions of the form $E_1 = \{ \{a, b\}, \{c, d\} \}$ and $E_2 = \{ \{a, c\}, \{b, d \}\},$ or $E_1 = \{ \{a, b\}, \{c, d\} \}$ and $E_2 = \{ \{a, d\}, \{b, c \}\},$ amongst others. In Lemma \ref{p15}, we showed that the solution given by the algorithm is the coarsest pair compatible with a given defined $L_2L_1$ operator. We now try to find a condition allowing us to decide, after applying the algorithm, whether or not the $(S, R)$ solution is unique. This leads us to the next section.
\subsection {Characterising Unique Solutions}
\begin{theorem} \label{t3}
Let $V$ be a finite set and let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined operator on $\mathscr{P}(V).$ If $(S, R)$ is returned by Algorithm 4.1, then $(S, R)$ is the unique solution of the operator iff the following holds:
\vspace{2mm}
\noindent (i) For any $[x]_R \in R,$
there exists $[z]_S \in S$ such that, $ |[x]_R \cap [z]_S| = 1.$ \\
(ii) For any $[x]_S \in S,$
there exists $[z]_R \in R$ such that, $ |[x]_S \cap [z]_R| = 1.$
\vspace{2mm}
\end{theorem}
\begin{proof}
We prove the $\Leftarrow$ direction first, so assume the conditions. We note that by Lemma \ref{p15}, any other solution $(E_1, E_2)$ to the given $L_2L_1$ operator must be finer than $(S, R).$ Thus, if there is another solution $(E_1, E_2),$ then at least one of $E_1 < S$ and $E_2 < R$ must hold.
First, assume towards a contradiction that there exists a solution $(E_1, E_2)$ with $E_1 < S.$ That is, $E_1$ splits at least one of the equivalence classes of $S,$ say $[a]_S.$ Hence $|[a]_S| \geq 2.$ By assumption there exists a $[z]_R \in R$ such that $|[a]_S \cap [z]_R| = 1,$ and hence there is a $[z]_{E_2} \in E_2$ such that $|[a]_S \cap [z]_{E_2}| = 1,$ since $E_2 \leq R.$ Call the element in this intersection $v,$ and note that $[v]_{E_2} = [z]_{E_2}.$ Now, as $[a]_S$ is split into smaller classes in $E_1,$ $v$ must lie in one of these classes, $[v]_{E_1}.$ Consider the minimal pre-image of the minimal output set of $L_2L_1$ which contains $[v]_R,$ and call this set $Y_{(S, R)}.$ For the solution $(S, R),$ $Y_{(S, R)}$ contains all of $[a]_S$ since $v \in [a]_S.$ But for the solution $(E_1, E_2),$ the minimal pre-image of the minimal output set of $L_2L_1$ which contains $[v]_R,$ say $Y_{(E_1, E_2)},$ satisfies $Y_{(E_1, E_2)} = (Y_{(S, R)} - [a]_S) \cup [v]_{E_1} \neq Y_{(S, R)}.$ Hence the pre-image list for $(E_1, E_2)$ is different from the given one, which is a contradiction.
Next, suppose towards a contradiction that there exists a solution $(E_1, E_2)$ with $E_2 < R.$ That is, $E_2$ splits at least one of the equivalence classes of $R,$ say $[a]_R.$ Hence $|[a]_R| \geq 2.$ By assumption there exists a $[z]_S \in S$ such that $|[a]_R \cap [z]_S| = 1,$ and hence there is a $[z]_{E_1} \in E_1$ such that $|[a]_R \cap [z]_{E_1}| = 1,$ since $E_1 \leq S.$ Call the element in this intersection $v,$ and note that $[v]_{E_1} = [z]_{E_1}.$ Now, as $[a]_R$ is split into smaller classes in $E_2,$ $v$ must lie in one of these classes, $[v]_{E_2}.$ Consider the set $[a]_R - [v]_{E_2}.$ The minimal pre-image of the minimal output set which contains this set in the $(S, R)$ solution, $Y_{(S, R)},$ contains $[v]_S,$ since here the minimal output set which contains $([a]_R - [v]_{E_2})$ must contain all of $[a]_R,$ which contains $v.$ If $(E_1, E_2)$ were the solution, then the minimal pre-image of the minimal output set which contains $([a]_R - [v]_{E_2}),$ say $Y_{(E_1, E_2)},$ would not contain $[v]_S,$ since $([a]_R - [v]_{E_2}) \cap [v]_S = \emptyset.$ That is, $Y_{(E_1, E_2)} \neq Y_{(S, R)}.$ Hence the pre-image list for $(E_1, E_2)$ is different from the given one, which is a contradiction.
Now we prove the $\Rightarrow$ direction. Suppose that $(E_1, E_2)$ is the unique solution, and assume that the conditions do not hold. By Theorem \ref{t2}, $(E_1, E_2) = (S, R).$ Then either there exists an $[x]_R \in R$ such that $|[x]_R \cap [y]_S| \geq 2$ for every $[y]_S \in S$ with $[x]_R \cap [y]_S \neq \emptyset,$ or there exists an $[x]_S \in S$ such that $|[x]_S \cap [y]_R| \geq 2$ for every $[y]_R \in R$ with $[x]_S \cap [y]_R \neq \emptyset.$
We consider the first case. Suppose that $[x]_R$ has non-empty intersection with $n$ sets in $S.$ We note that $n \geq 1.$ Form a sequence of these sets, $S_1, \dots, S_n.$ Since $|[x]_R \cap S_i| \geq 2$ for each $i = 1, \dots, n,$ let $\{a_{i1}, a_{i2} \}$ be contained in $[x]_R \cap S_i$ for each such $i.$ We split $[x]_R$ to form a finer $E_2$ as follows: let $P = \{a_{i1}\ | \ i= 1, \dots, n\}$ and $Q = [x]_R - P$ be equivalence classes in $E_2,$ and for the remaining equivalence classes in $E_2,$ let $[y] \in E_2$ iff $[y] \in R$ and $[y] \neq [x]_R.$ Now, $L_RL_S(X) = Y$ iff the union of the $S$-equivalence classes contained in $X$ contains exactly the union of $R$-equivalence classes which is equal to $Y.$ So, for the $(S, E_2)$ solution, the only way that $L_{E_2}L_S(X)$ could differ from $L_RL_S(X)$ is if there is a union of $S$-equivalence classes in $X$ which contains $P$ but not $Q,$ or which contains $Q$ but not $P$ (since $P$ and $Q$ always occur together as $[x]_R$ for the $(S, R)$ solution). However, this is not the case, as follows. Since $P$ and $Q$ exactly split all of the equivalence classes of $S$ which have non-empty intersection with $[x]_R,$ we have that $\textit{\textbf{u}}_S(P) = \textit{\textbf{u}}_S(Q).$ That is, $P$ intersects exactly the same equivalence classes of $S$ as $Q.$ Therefore, $P$ is contained by exactly the same unions of equivalence classes of $S$ as $Q.$ Hence, a union of $S$-equivalence classes in $X$ contains $P$ iff it contains $Q$ iff it contains $[x]_R.$ Hence, $L_RL_S(X) = L_{E_2}L_S(X)$ for all $X \in \mathscr{P}(V),$ and if $(S, R)$ is a solution for the given operator, then so is $(S, E_2),$ which contradicts the assumed uniqueness of $(S, R).$
We consider the second case. Suppose that $[x]_S$ has non-empty intersection with $n$ sets in $R.$ We note that $n \geq 1.$ Form a sequence of these sets, $R_1, \dots, R_n.$ Since $|[x]_S \cap R_i| \geq 2$ for each $i = 1, \dots, n,$ let $\{a_{i1}, a_{i2} \}$ be contained in $[x]_S \cap R_i$ for each such $i.$ We split $[x]_S$ to form a finer $E_1$ as follows: let $P = \{a_{i1}\ | \ i= 1, \dots, n\}$ be one equivalence class, let $Q = [x]_S - P$ be another, and for any $[y]_S \in S$ such that $[y]_S \neq [x]_S,$ let $[y] \in E_1$ iff $[y] \in S.$ Again, $L_RL_S(X) = Y$ iff the union of the $S$-equivalence classes contained in $X$ contains exactly the union of $R$-equivalence classes which is equal to $Y.$ So, for the $(E_1, R)$ solution, the only way that $L_RL_{E_1}(X)$ could differ from $L_RL_S(X)$ is if (i) $P$ is contained in $X$ while $Q$ is not, or (ii) $Q$ is contained in $X$ while $P$ is not. Since $P$ and $Q$ split all of the equivalence classes of $R$ which have non-empty intersection with $[x]_S,$ this implies that $\textit{\textbf{u}}_R(P) = \textit{\textbf{u}}_R(Q).$ That is, $P$ and $Q$ intersect exactly the same equivalence classes of $R.$ So if $P$ is needed to contain an equivalence class in $R$ for the $(S, R)$ solution, then $Q$ is also needed. In other words, if $L_2L_1(X) = Y,$ then for any minimal set $Y_m \subseteq X$ such that $L_2L_1(Y_m) = Y,$ $P$ is contained in $Y_m$ iff $Q$ is contained in $Y_m$ iff $[x]_S$ is contained in $Y_m.$ Hence, $L_RL_S(X) = L_RL_{E_1}(X)$ for all $X \in \mathscr{P}(V),$ and if $(S, R)$ is a solution for the given operator, then so is $(E_1, R),$ which contradicts the assumed uniqueness of $(S, R).$
\end{proof}
\noindent The following theorem sums up the results of Theorem \ref{t2} and Theorem \ref{t3}.
\vspace{2mm}
\begin{theorem} \label{t4}
Let $V$ be a finite set and let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined successive approximation operator on $\mathscr{P}(V).$ If $(E_1, E_2)$ is a solution of the operator then it is the unique solution iff the following holds:
\vspace{2mm}
\noindent (i) For each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent (ii) For each $[x]_{E_1}, [y]_{E_1} \in E_1,$ if $[x]_{E_1} \neq [y]_{E_1}$ then $\textbf{u}_{E_2}([x]_{E_1}) \neq \textbf{u}_{E_2}([y]_{E_1})$.
\vspace{2mm}
\noindent (iii) For any $[x]_{E_2} \in E_2,$
there exists $[z]_{E_1} \in E_1$ such that, $ |[x]_{E_2} \cap [z]_{E_1}| = 1.$
\vspace{2mm}
\noindent(iv) For any $[x]_{E_1} \in E_1,$
there exists $[z]_{E_2} \in E_2$ such that, $ |[x]_{E_1} \cap [z]_{E_2}| = 1.$
\end{theorem}
\noindent \textbf{Remark 4.3:} If an equivalence relation pair satisfies the conditions of Theorem \ref{t2}, then any other solution of the $L_2L_1$ operator based on those relations would be a finer pair of equivalence relations. On the other hand, if an equivalence relation pair satisfies the conditions of Theorem \ref{t3}, then any other solution of the $L_2L_1$ operator based on those relations would be a coarser pair. Hence, if an equivalence relation pair satisfies the conditions of both Theorem \ref{t2} and Theorem \ref{t3}, then it is the unique solution of the $L_2L_1$ operator it generates.
\begin{corollary} \label{c2}
Let $V$ be a finite set and let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined successive approximation operator on $\mathscr{P}(V).$ If the solution $(S, R)$ returned by Algorithm 4.1 is the unique solution, then the following holds:
\vspace{2mm}
\noindent For any $x \in V$ we have that;\\
(i) $[x]_S \not\supseteq [x]_R$ unless $|[x]_R| = 1,$ \\
(ii) $[x]_R \not\supseteq [x]_S$ unless $|[x]_S| = 1.$
\end{corollary}
\begin{proof}
This follows directly from the conditions in Theorem \ref{t3}.
\end{proof}
\noindent \textbf{Example 4.1} (\emph{revisited}): Consider again the given output vector of Example 4.1. First we form the $(S, R)$ pair using Algorithm 4.1. We get that $R = \{ \{a, b\}, \{c, d\}, \{e\} \}$ and $S = \{ \{a, c\}, \{b\}, \{d, e\} \}.$ Since this is the pair produced by Algorithm 4.1, we know that it satisfies the conditions of Theorem \ref{t2}. Now we need only check whether this pair satisfies the conditions of Theorem \ref{t3} to determine whether it is the unique solution. To keep track of which equivalence relation a set belongs to, we will index a set belonging to either $S$ or $R$ by $S$ or $R$ respectively. Then we see that $|\{a, b\}_R \cap \{b\}_S| = 1,$ $|\{c, d\}_R \cap \{a, c\}_S| = 1$ and $| \{e\}_R \cap \{d,e\}_S| = 1.$ This verifies both conditions of Theorem \ref{t3}, since the same three intersections witness condition (i) for each class of $R$ and condition (ii) for each class of $S.$ Therefore this is the unique solution of the given operator.
\begin{proposition} \label{p17}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If $(E_1, E_2)$ is a unique solution such that either $E_1 \neq Id$ or $E_2 \neq Id$ where $Id$ is the identity equivalence relation on $V$ then,
\vspace{2mm}
\noindent (i) $ E_1 \not\leq E_2, $\\
(ii) $ E_2 \not\leq E_1.$
\end{proposition}
\begin{proof}
We first observe that if $(E_1, E_2)$ is a unique solution and at least one of $E_1, E_2$ differs from $Id,$ then neither of them can equal $Id.$ For if, say, $E_1 = Id$ and $E_2 \neq Id,$ then both $(Id, E_2)$ and $(E_2, Id)$ would be solutions corresponding to the same operator, since each composition reduces to the lower approximation operator based on $E_2;$ hence the solution would not be unique. Therefore, each of $E_1$ and $E_2$ contains at least one equivalence class of size greater than or equal to two.
Suppose that $E_1 \leq E_2.$ Consider an $e \in E_2$ such that $|e| \geq 2.$ Then $e$ either contains an $f \in E_1$ such that $|f| \geq 2,$ or it contains two or more singletons of $E_1.$ The first case violates the condition of Corollary \ref{c2} and the second violates the second condition of Theorem \ref{t2}. Hence the solution cannot be unique. The argument when $E_2 \leq E_1$ is similar.
\end{proof}
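\noindent The unique solution of Example 4.1 illustrates this incomparability: for $S = \{\{a, c\}, \{b\}, \{d, e\}\}$ and $R = \{\{a, b\}, \{c, d\}, \{e\}\}$ we have $S \not\leq R,$ since the class $\{a, c\}_S$ meets both $\{a, b\}_R$ and $\{c, d\}_R,$ and $R \not\leq S,$ since the class $\{a, b\}_R$ meets both $\{a, c\}_S$ and $\{b\}_S.$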
\begin{corollary}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If there exists a unique solution $(E_1, E_2)$ such that either $E_1 \neq Id$ or $E_2 \neq Id$ where $Id$ is the identity equivalence relation on $V$ then,
\vspace{2mm}
\noindent (i) $ k = \gamma(E_1, E_2) = \frac {|POS_{E_1} (E_2)|}{|V|} < 1, $ that is, $E_1 \not\Rightarrow E_2$\\
(ii) $ k = \gamma(E_2, E_1) = \frac {|POS_{E_2} (E_1)|}{|V|} < 1,$ that is, $E_2 \not\Rightarrow E_1.$
\end{corollary}
\begin{proof}
This follows immediately from definitions.
\end{proof}
\begin{proposition} \label{p16}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If there exists a unique solution $(E_1, E_2),$ then,
\vspace{2mm}
\noindent (i) for any $[x]_{E_1} \in E_1,$ $|POS_{E_2}([x]_{E_1})| \leq 1$ \\
(ii) for any $[x]_{E_2} \in E_2,$ $|POS_{E_1}([x]_{E_2})| \leq 1.$
\end{proposition}
\begin{proof}
This follows from the conditions in Theorem \ref{t4} and Corollary \ref{c2}, which imply that for a unique pair solution $(E_1, E_2),$ an equivalence class of one of the relations cannot contain any class of the other relation of size greater than one, and can contain at most one singleton class of the other relation.
\end{proof}
\begin{corollary}
Let $V$ be a finite set where $|V| = l$ and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If there exists a unique solution $(E_1, E_2)$ such that $ |E_1| = n$ and $|E_2| = m,$ then,
\vspace{2mm}
\noindent (i) $ k = \gamma(E_1, E_2) = \frac {|POS_{E_1} (E_2)|}{|V|} \leq \frac{m}{l} $ \\
(ii) $ k = \gamma(E_2, E_1) = \frac {|POS_{E_2} (E_1)|}{|V|} \leq \frac{n}{l} .$
\end{corollary}
\begin{proof}
Let $(E_1, E_2)$ be the unique solution of the given $L_2L_1$ operator. The result follows directly from the previous proposition by summing $|POS_{E_1}([x]_{E_2})| \leq 1$ over the classes $[x]_{E_2}$ of $E_2$ for part (i), and symmetrically for part (ii).
\end{proof}
\begin{corollary}
Let $V$ be a finite set such that $|V| = n$ and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If there exists a unique solution $(E_1, E_2),$ then,
\vspace{2mm}
\noindent (i) if the minimum size $k_1$ of an equivalence class in $E_1$ satisfies $k_1 \geq 2,$ then \\ $ k = \gamma(E_1, E_2) = \frac {|POS_{E_1} (E_2)|}{|V|} = 0.$
\vspace{2mm}
\noindent (ii) if the minimum size $k_2$ of an equivalence class in $E_2$ satisfies $k_2 \geq 2,$ then \\ $ k = \gamma(E_2, E_1) = \frac {|POS_{E_2} (E_1)|}{|V|} = 0.$
\vspace{2mm}
\end{corollary}
\begin{proof}
Since no member of $E_2$ can contain any member of $E_1$ (because $E_1$ has no singletons), we get that $\frac {|POS_{E_1} (E_2)|}{|V|} = 0.$ Part (ii) is similar.
\end{proof}
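\noindent A pair of this kind exists. For instance, on $V = \{a, b, c, d, e, f\},$ take $E_1 = \{\{a, b\}, \{c, d\}, \{e, f\}\}$ and $E_2 = \{\{b, c\}, \{d, e\}, \{f, a\}\}.$ Every non-empty intersection of a class of one relation with a class of the other has exactly one element, and one may check that the sets $\textit{\textbf{u}}_{E_1}([x]_{E_2})$ (respectively $\textit{\textbf{u}}_{E_2}([x]_{E_1})$) are pairwise distinct, so all four conditions of Theorem \ref{t4} hold and $(E_1, E_2)$ is a unique solution. Neither relation has a singleton class, so no class of either relation contains a class of the other, and hence $\gamma(E_1, E_2) = \gamma(E_2, E_1) = 0.$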
\begin{proposition}
Let $V$ be a finite set and $L_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given defined operator. If there exists a unique solution $(E_1, E_2)$ such that $|E_1| = m$ and $|E_2| = n$ and $S_1$ is the number of singletons in $E_1$ and $S_2$ is the number of singletons in $E_2,$ then,
\vspace{2mm}
\noindent (i) $ S_1 \leq n$ \\
(ii) $ S_2 \leq m.$
\end{proposition}
\begin{proof}
We note that the conditions in Theorem \ref{t4} imply that no two singletons in $E_1$ can be contained by any equivalence class in $E_2$ and vice versa. The result thus follows on application of the pigeonhole principle between the singletons in one equivalence relation and the number of elements in the other relation.
\end{proof}
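\noindent In Example 4.1, for instance, $S = \{\{a, c\}, \{b\}, \{d, e\}\}$ has the single singleton $\{b\}$ and $R = \{\{a, b\}, \{c, d\}, \{e\}\}$ has the single singleton $\{e\},$ so both singleton counts equal $1 \leq 3,$ consistent with the bounds.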
\subsection{A Derived Preclusive Relation and a Notion of \\ Independence}
In \cite{PR}, Cattaneo and Ciucci found preclusive relations to be quite useful when applying rough approximations to information systems. In this direction, we will define a related notion of independence of equivalence relations.
Let $V$ be a finite set and let $\mathfrak{E}_V$ be the set of all equivalence relations on $V.$ Also, let $\mathfrak{E}_V^0 = \mathfrak{E}_V - \{Id_V\},$ where $Id_V$ is the identity relation on $V.$ From now on, where the context is clear, we will omit the subscript. We now define a relation $\not\Rightarrow_{\mathfrak{E^0}}$ on $\mathfrak{E^0}$ as follows:
\vspace{2mm}
Let $E_1$ and $E_2$ be in $\mathfrak{E^0}.$ Let $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ where $L_1$ and $L_2$ are lower approximation operators based on $E_1$ and $E_2$ respectively. Then,
\vspace{2mm}
\begin{center}
$E_1 \not\Rightarrow_{\mathfrak{E^0}} E_2$ iff $L_2L_1$ is a unique approximation operator.
\end{center}
\vspace{2mm}
\noindent That is, for no other $E_3$ and $E_4$ in $\mathfrak{E^0},$ where at least one of $E_1 \neq E_3$ and $E_2 \neq E_4$ holds, is it the case that $L_2L_1 = L_4L_3,$ where $L_3$ and $L_4$ are lower approximation operators based on $E_3$ and $E_4$ respectively.
\begin{definition}
Let $V$ be a set and $E_1, E_2 \in \mathfrak{E}_V^0$. We say that $E_1$ is $ \mathfrak{E}_V^0$--\textbf{independent} of $E_2$ iff $E_1 \not\Rightarrow_{\mathfrak{E}_V^0} E_2.$ If $\lnot (E_1 \not\Rightarrow_{\mathfrak{E}_V^0} E_2),$ we simply write $E_1 \Rightarrow_{\mathfrak{E}_V^0} E_2,$ and we say that $E_1$ is $ \mathfrak{E}_V^0$--\textbf{dependent} on $E_2.$
\end{definition}
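\noindent For example, for the relations of Example 4.1, the pair $(S, R)$ was shown to be the unique solution of its operator, so $S \not\Rightarrow_{\mathfrak{E}_V^0} R;$ that is, $S$ is $\mathfrak{E}_V^0$--independent of $R.$ In contrast, the pair $E_1 = \{\{a, b\}, \{c, d\}\}$ and $E_2 = \{\{a, c\}, \{b, d\}\}$ of Example 4.2 generates an operator with several solutions, so $E_1 \Rightarrow_{\mathfrak{E}_V^0} E_2.$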
\begin{proposition}
$\not\Rightarrow_{\mathfrak{E}_V^0}$ is a preclusive relation.
\end{proposition}
\begin{proof}
We recall that a preclusive relation is one which is irreflexive and symmetric. Let $E \in \mathfrak{E^0}_V.$ Since $E \neq Id,$ then by application of Proposition 4.2.3 $(E, E)$ does not generate a unique $L_2L_1$ operator and therefore $E \Rightarrow_{\mathfrak{E}_V^0} E.$ Hence $ \not\Rightarrow_{\mathfrak{E}_V^0}$ is irreflexive.
Now, suppose that $E_1, E_2 \in \mathfrak{E^0}_V$ are such that $ E_1 \not\Rightarrow_{\mathfrak{E}_V^0} E_2.$ Then $(E_1, E_2)$ satisfies the conditions of Theorem \ref{t4}. Since together, the four conditions of the theorem are symmetric (with conditions (i) and (ii) and conditions (iii) and (iv) being symmetric pairs), then $(E_2, E_1)$ also satisfies the conditions of the theorem. Then by this theorem, we will have that $ E_2 \not\Rightarrow_{\mathfrak{E}_V^0} E_1.$ Hence, $\not\Rightarrow_{\mathfrak{E}_V^0}$ is symmetric.
\end{proof}
\noindent \textbf{Remark 4.4:} From the previous proposition we see that the dependency relation $\Rightarrow_{\mathfrak{E}_V^0}$ is a similarity relation.
\begin{proposition}\label{p18}
If $E_1 \Rightarrow E_2$ then $E_1 \Rightarrow_{\mathfrak{E}_V^0} E_2.$
\end{proposition}
\begin{proof}
This follows from Proposition \ref{p17}.
\end{proof}
\begin{proposition} \label{p19}
It is not the case that $E_1 \Rightarrow_{\mathfrak{E}_V^0} E_2$ implies that $E_1 \Rightarrow E_2.$
\end{proposition}
\begin{proof}
In Example 4.2 we saw that $(E_1, E_2)$ does not give a corresponding unique $L_2L_1$ operator; hence $E_1 \Rightarrow_{\mathfrak{E}_V^0} E_2$ but $E_1 \not\Rightarrow E_2.$
\end{proof}
\noindent \textbf{Remark 4.5:} From Proposition \ref{p18} and Proposition \ref{p19}, we see that $ \mathfrak{E}_V^0$--\textit{\textbf{dependency}} is a more general notion of equivalence relation dependency than $\Rightarrow$ (or equivalently $\leq$). Similarly, $ \mathfrak{E}_V^0$--\textit{\textbf{independence}} is a stricter notion of independence than $\not\Rightarrow.$
\begin{theorem}
Let $V$ be a finite set and $E_1$ and $E_2$ equivalence relations on $V.$ Then \\ $E_1 \not\Rightarrow_{\mathfrak{E}_V^0} E_2$ iff the following holds:
\vspace{2mm}
\noindent (i) For each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent (ii) For each $[x]_{E_1}, [y]_{E_1} \in E_1,$ if $[x]_{E_1} \neq [y]_{E_1}$ then $\textbf{u}_{E_2}([x]_{E_1}) \neq \textbf{u}_{E_2}([y]_{E_1})$.
\vspace{2mm}
\noindent (iii) For any $[x]_{E_2} \in E_2,$
there exists $[z]_{E_1} \in E_1$ such that, $ |[x]_{E_2} \cap [z]_{E_1}| = 1.$
\vspace{2mm}
\noindent(iv) For any $[x]_{E_1} \in E_1,$
there exists $[z]_{E_2} \in E_2$ such that, $ |[x]_{E_1} \cap [z]_{E_2}| = 1.$
\end{theorem}
\begin{proof}
This follows directly from Theorem \ref{t4}.
\end{proof}
\subsection{Seeing One Equivalence Relation through Another}
We will first give a proposition which will show a more explicit symmetry between conditions (i) and (ii) and conditions (iii) and (iv) in Theorem \ref{t4} for unique solutions.
\begin{proposition}
Let $V$ be a finite set and let $E_1$ and $E_2$ be two equivalence relations on $V.$ Then;
\vspace{2mm}
\noindent For any $[x]_{E_1} \in E_1,$
$\exists[z]_{E_2} \in E_2$ such that, $ |[x]_{E_1} \cap [z]_{E_2}| = 1$ iff it is not the case that $ \exists Y, Z \in \mathscr{P}(V)$ such that $[x]_{E_1} = Y \cup Z,$ $Y \cap Z = \emptyset$ and $\textbf{u}_{E_2}(Y) = \textbf{u}_{E_2}(Z) = \textbf{u}_{E_2}([x]_{E_1}) .$
\end{proposition}
\begin{proof}
We prove the $\Rightarrow$ direction first. Let $[x]_{E_1} \in E_1$ and suppose that there exists $[z]_{E_2} \in E_2$ such that $|[x]_{E_1} \cap [z]_{E_2}| = 1.$ Let $[x]_{E_1} \cap [z]_{E_2} = \{t\}.$ Now, for any split of $[x]_{E_1},$ that is, for any $Y, Z \in \mathscr{P}(V)$ such that $[x]_{E_1} = Y \cup Z$ and $Y \cap Z = \emptyset,$ $t$ lies in exactly one of these sets. Thus exactly one of $ \textit{\textbf{u}}_{E_2}(Y), \ \textit{\textbf{u}}_{E_2}(Z)$ contains $[t]_{E_2} = [z]_{E_2}.$ Hence $ \textit{\textbf{u}}_{E_2}(Y) \neq \textit{\textbf{u}}_{E_2}(Z).$
We prove the converse by the contrapositive. Let $[x]_{E_1} \in E_1$ be such that for all $[z]_{E_2} \in E_2,$ whenever $[x]_{E_1} \cap [z]_{E_2} \neq \emptyset$ (and clearly some such $[z]_{E_2}$ must exist), we have that $|[x]_{E_1} \cap [z]_{E_2}| \geq 2.$ Suppose that $[x]_{E_1}$ has non-empty intersection with $n$ sets in $E_2.$ We note that $n \geq 1.$ Form a sequence of these sets, $R_1, \dots, R_n.$ Since $|[x]_{E_1} \cap R_i| \geq 2$ for each $i = 1, \dots, n,$ let $\{a_{i1}, a_{i2} \}$ be contained in $[x]_{E_1}\cap R_i$ for each such $i.$ Let $Y = \{a_{i1}\ | \ i= 1, \dots, n\}$ and let $Z = [x]_{E_1} - Y.$ Then $[x]_{E_1} = Y \cup Z,$ $Y \cap Z = \emptyset$ and $\textit{\textbf{u}}_{E_2}(Y) = \textit{\textbf{u}}_{E_2}(Z) = \textit{\textbf{u}}_{E_2}([x]_{E_1}).$
\end{proof}
\noindent Using the preceding proposition we obtain an equivalent form of Theorem \ref{t4}.
\begin{theorem}
Let $V$ be a finite set and $E_1$ and $E_2$ equivalence relations on $V.$ Then $(E_1, E_2)$ produces a unique $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ operator iff the following holds:
\vspace{2mm}
\noindent (i) For each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2})$
\vspace{2mm}
\noindent (ii) For each $[x]_{E_1}, [y]_{E_1} \in E_1,$ if $[x]_{E_1} \neq [y]_{E_1}$ then $\textbf{u}_{E_2}([x]_{E_1}) \neq \textbf{u}_{E_2}([y]_{E_1})$
\vspace{2mm}
\noindent (iii) For any $[x]_{E_2} \in E_2,$ if $ \exists Y, Z \in \mathscr{P}(V)$ such that $[x]_{E_2} = Y \cup Z$ and $Y \cap Z = \emptyset$ \\ \textcolor{white}{aaa} then $\textbf{u}_{E_1}(Y) \neq \textbf{u}_{E_1}(Z)$
\vspace{2mm}
\noindent(iv) For any $[x]_{E_1} \in E_1,$ if $ \exists Y, Z \in \mathscr{P}(V)$ such that $[x]_{E_1} = Y \cup Z$ and $Y \cap Z = \emptyset$ then \\ \textcolor{white}{aaa} $\textbf{u}_{E_2}(Y) \neq \textbf{u}_{E_2}(Z)$
\end{theorem}
\subsubsection{Conceptual Translation of the Uniqueness Theorem}
The conditions of the above theorem can be viewed conceptually as follows: (i) Through the eyes of $E_1,$ no two equivalence classes of $E_2$ are the same; (ii) Through the eyes of $E_2,$ no two equivalence classes of $E_1$ are the same; (iii) No equivalence class in $E_2$ can be broken down into two smaller equivalence classes which are equal to it through the eyes of $E_1;$ (iv) No equivalence class in $E_1$ can be broken down into two smaller equivalence classes which are equal to it through the eyes of $E_2.$
In other words, we view the set $V$ \textbf{mod} $E_1.$ That is, let $V$\textbf{mod}$E_1$ be the set obtained from $V$ after renaming the elements of $V$ with fixed representatives of their respective equivalence classes in $E_1.$ Similarly, let $V$\textbf{mod}$E_2$ be the set obtained from $V$ after renaming the elements of $V$ with fixed representatives of their respective equivalence classes in $E_2.$ We then have the following equivalent conceptual version of Theorem \ref{t4}.
\begin{theorem}
Let $V$ be a finite set and $E_1$ and $E_2$ equivalence relations on $V.$ Then \\ $(E_1, E_2)$ generates a unique $L_2L_1$ operator iff the following holds:
\vspace{2mm}
\noindent (i) No two distinct members of $E_2$ are equivalent in $V$\textbf{mod}$E_1.$
\vspace{2mm}
\noindent (ii) No two distinct members of $E_1$ are equivalent in $V$\textbf{mod}$E_2.$
\vspace{2mm}
\noindent (iii) No member of $E_2$ can be broken down into two smaller sets which are equivalent to it in $V$\textbf{mod}$E_1.$
\vspace{2mm}
\noindent(iv) No member of $E_1$ can be broken down into two smaller sets which are equivalent to it in $V$\textbf{mod}$E_2.$
\end{theorem}
\section{Decomposing $U_2U_1$ Approximations}
We now investigate the case of double upper approximations. This is dually related to the case of double lower approximations because of the relationship between upper and lower approximations given by the equation $U(X) = -L(-X)$ (see property 10 in Section 2.1.1). The following proposition shows that the problem of finding solutions for this case reduces to the case in the previous section: \\
\begin{proposition} \label{p20}
Let $V$ be a finite set and let $U_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ Then any solution $(E_1, E_2)$ is also a solution of the $L_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ operator where $L_2L_1(X) = -U_2U_1(-X)$ for any $X \in \mathscr{P}(V).$ Therefore, the solution $(E_1, E_2)$ for the defined $U_2U_1$ operator is unique iff the solution for the corresponding $L_2L_1$ operator is unique.
\end{proposition}
\begin{proof}
Recall that $ L_2L_1(X) = -U_2U_1(-X)$. Hence, if there exists a solution $(E_1, E_2)$ which corresponds to the given $U_2U_1$ operator, this solution corresponds to a solution for the $L_2L_1$ operator which is based on the same $(E_1, E_2)$ by the equation $ L_2L_1(X) = -U_2U_1(-X).$ Similarly for the converse.
\end{proof}
\noindent \textbf{Algorithm:} Let $V$ be a finite set and let $U_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ To solve for a solution, change it to solving for a solution for the corresponding $L_2L_1$ operator by the equation $ L_2L_1(X) = -U_2U_1(-X).$ Then, when we want to know the $L_2L_1$ output of a set, we look at the $U_2U_1$ output of its complement set and take the complement of that. Next, use Algorithm 4.1, and the solution found will also be a solution for the initial $U_2U_1$ operator.
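The complementation step can be made concrete in a few lines. In the Python sketch below (with illustrative names of our own choosing; subsets are frozensets and the given operator is a dictionary keyed by subsets), the derived $L_2L_1$ operator is computed pointwise from $U_2U_1$ via $L_2L_1(X) = -U_2U_1(-X)$:

```python
from itertools import combinations

def powerset(v):
    """All subsets of the finite set v, as frozensets."""
    s = list(v)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def l2l1_from_u2u1(v, u2u1):
    """Derive the L2L1 operator from a fully defined U2U1 operator
    via L2L1(X) = -U2U1(-X), where '-' is complement in v."""
    v = frozenset(v)
    return {x: v - u2u1[v - x] for x in powerset(v)}
```

The derived dictionary can then be handed directly to the decomposition algorithm for double lower approximations.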
\subsection{Characterising Unique Solutions}
\begin{theorem}
Let $V$ be a finite set and let $U_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ If $(E_1, E_2)$ is a solution then, it is unique iff the following holds:
\vspace{2mm}
\noindent (i) for each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent (ii) for each $[x]_{E_1}, [y]_{E_1} \in E_1,$ if $[x]_{E_1} \neq [y]_{E_1}$ then $\textbf{u}_{E_2}([x]_{E_1}) \neq \textbf{u}_{E_2}([y]_{E_1})$.
\vspace{2mm}
\noindent (iii) For any $[x]_{E_2} \in E_2,$
there exists $[z]_{E_1} \in E_1$ such that, $ |[x]_{E_2} \cap [z]_{E_1}| = 1.$
\vspace{2mm}
\noindent (iv) For any $[x]_{E_1} \in E_1,$
there exists $[z]_{E_2} \in E_2$ such that, $ |[x]_{E_1} \cap [z]_{E_2}| = 1.$
\end{theorem}
\begin{proof}
This follows from Proposition \ref{p20} using Theorem \ref{t4}.
\end{proof}
\section{Decomposing $U_2L_1$ Approximations}
For this case, we observe that $U_2L_1 (X) = -L_2(-L_1(X)) = U_2(-U_1(-X)).$ Since we cannot get rid of the minus sign between the $L$s (or $U$s), duality will not save us the work of further proof here like it did in the previous section. In this section, we will see that $U_2L_1$ approximations are tighter than $L_2L_1$ (or $U_2U_1$) approximations. For this decomposition we will use an algorithm that is very similar to Algorithm 4.1; however, notice the difference in step 2, where it only requires minimal sets with respect to $\subseteq$ instead of minimum sets (which may not necessarily exist).\\
\noindent \textbf{Algorithm 4.2: For Partial Decomposition of $U_2L_1$ Approximations}
\vspace{4mm}
\noindent Let $V$ be a finite set. Given an input of a fully defined operator $U_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V),$ if a solution exists, we can produce a solution $(S, R)$, i.e. where $L_1$ and $U_2$ are the lower and upper approximation operators of equivalence relations $S$ and $R$ respectively, by performing the following steps:
\vspace{4mm}
\vspace{3mm}
\noindent \textbf{1}. Let $J$ be the set of output sets of the given $U_2L_1$ operator. We form the relation $R$ to be such that for $a, b \in V,$ $a \sim_R b \iff (a \in X \iff b\in X)$ for any $X \in J.$ It is clear that $R$ is an equivalence relation.
\vspace{3mm}
\noindent \textbf{2}. For each non-empty output set $Y,$ find the minimal pre-image sets $Y_m$ with respect to $\subseteq$ such that $U_2L_1(Y_m) = Y$. Collect all these minimal sets in a set $K.$ Note that we can always find such minimal sets since $\mathscr{P}(V)$ is finite.
\vspace{3mm}
\noindent \textbf{3}. Using $K,$ we form the relation $S$ to be such that for $a, b \in V,$ $a \sim_S b \iff (a \in X \iff b\in X)$ for any $X \in K.$ It is clear that $S$ is an equivalence relation.
\vspace{3mm}
\noindent \textbf{4}. Form the operator $U_RL_S : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ generated by $(S, R).$ If for all $X \in \mathscr{P}(V)$, the given $U_2L_1$ operator is such that $U_2L_1(X) = U_RL_S(X),$ then $(S, R)$ is a solution proving that a solution exists (note that it is not necessarily unique). Return $(S, R).$ Otherwise, discard $S$ and $R$ and return 0 signifying that no solution exists.\\
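To make the four steps above concrete, the following Python sketch (a naive prototype; subsets are represented as frozensets, the operator as a dictionary, all helper names are our own, and no claim of efficiency is made) recovers a candidate pair $(S, R)$ and performs the verification of step 4:

```python
from itertools import combinations

def powerset(v):
    """All subsets of the finite set v, as frozensets."""
    s = list(v)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def classes_from_membership(v, family):
    """Equivalence classes of: a ~ b iff a and b belong to exactly
    the same sets of `family` (the relations of steps 1 and 3)."""
    sig = {a: frozenset(x for x in family if a in x) for a in v}
    groups = {}
    for a in v:
        groups.setdefault(sig[a], set()).add(a)
    return [frozenset(g) for g in groups.values()]

def decompose_u2l1(v, op):
    """Algorithm 4.2: attempt to recover (S, R) from a fully defined
    U2L1 operator `op`, a dict mapping frozensets to frozensets."""
    subsets = powerset(v)
    # Step 1: R from the output sets.
    outputs = set(op.values())
    R = classes_from_membership(v, outputs)
    # Step 2: minimal (w.r.t. inclusion) pre-images of each
    # non-empty output set, collected in K.
    K = set()
    for y in outputs:
        if not y:
            continue
        pre = [x for x in subsets if op[x] == y]
        K.update(x for x in pre if not any(x2 < x for x2 in pre))
    # Step 3: S from the minimal pre-images.
    S = classes_from_membership(v, K)
    # Step 4: rebuild U_R L_S and compare with the given operator.
    def lower(x, part):
        return frozenset().union(*(c for c in part if c <= x))
    def upper(x, part):
        return frozenset().union(*(c for c in part if c & x))
    if all(op[x] == upper(lower(x, S), R) for x in subsets):
        return S, R
    return None  # no equivalence-pair solution exists
```

If the verification in step 4 fails, the function returns None, signalling that no equivalence-pair solution exists.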
\noindent We will prove the claim in step 4 in this section. \\
\begin{lemma} \label{p21}
Let $V$ be a finite set and $U_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V),$ with $L_1$ and $U_2$ based on unknown equivalence relations $E_1$ and $E_2$ respectively. Let $R$ and $S$ be equivalence relations defined on $V$ as constructed in Algorithm 4.2. Then $E_2\leq R$ and $E_1 = S.$
\end{lemma}
\begin{proof}
We first prove $E_2\leq R.$ Now the output set of a non-empty set in $\mathscr{P}(V)$ is obtained by first applying the lower approximation $L_1$ to it and then applying the upper approximation $U_2$ to the result. Hence, by the definition of $U_2,$ the non-empty output sets are unions of equivalence classes of the equivalence relation which corresponds to $U_2.$ If $a$ is in an output set but $b$ is not in it, then they cannot belong to the same equivalence class of $E_2$; i.e. $a \not\sim_R b$ implies that $a \not\sim_{E_2} b.$ Hence $E_2\leq R. $
Now, the minimal pre-image, X say, of a non-empty output set which is a union of equivalence classes in $E_2,$ has to be a union of equivalence classes in $E_1.$ For suppose it was not. Let $Y = \{y \in X\ | \ [y]_{E_1} \not\subseteq X\}.$ By assumption, $Y \neq \emptyset.$ Then $L_1 (X) = L_1 (X - Y).$ Hence $U_2L_1(X) = U_2L_1 (X - Y)$ but $|X - Y| < |X|$ contradicting minimality of $X$. Therefore, if $a$ belongs to the minimal pre-image of a non-empty output set but $b$ does not belong to it, then $a$ and $b$ cannot belong to the same equivalence class in $E_1$ i.e. $a \not\sim_S b$ which implies that $a \not\sim_{E_1} b.$ Hence $E_1\leq S.$
We now prove the converse, that $ S \leq E_1.$ For suppose it was not; that is, $E_1 < S.$ Then there exists at least one equivalence class in $S$ which is split into smaller equivalence classes in $E_1.$ Call this equivalence class $[a]_S.$ Then there exist $w, t \in V$ such that $[w]_{E_1} \subset [a]_S$ and $[t]_{E_1} \subset[a]_S.$ Now consider the pre-images of a minimal output set of $U_2L_1$ containing $t.$ That is, $X$ such that $U_2L_1(X) = Y,$ where $Y$ is the minimal output set such that $t \in Y$ and for any $X_1 \subset X,$ $U_2L_1(X_1) \neq Y.$ The following is a very useful observation.
\vspace{2mm}
\noindent \textbf{Claim:} For any $v \in \textbf{\textit{u}}_{E_1}([y]_{E_2}),$ $[v]_S$ is a minimal set such that $U_2L_1([v]_S) \supseteq [y]_{E_2}.$
\vspace{2mm}
\noindent The above follows because 1) $U_2L_1([v]_S) \supseteq [y]_{E_2}$ since $v \in \textbf{\textit{u}}_{E_1}([y]_{E_2})$ and 2) For any $Z \subset [v]_S, \ U_2L_1(Z) = \emptyset$ since $L_1(Z) = \emptyset.$
Now for $U_2L_1(X)$ to contain $t,$ it must contain $[t]_{E_2}.$ Hence by the previous claim, $X = [t]_S$ is such a minimal pre-image of a set containing $t$. If $L_1$ is based on $S,$ then $X = [t]_S = [a]_S.$ However, if $L_1$ is based on $E_1,$ then $ X = [a]_S$ is not such a minimal set, because $X = [t]_{E_1}$ is such that $U_2L_1(X) = Y$ but $[t]_{E_1} \subset [a]_S.$ Hence, $U_RL_S(X)\neq U_{E_2}L_{E_1}(X)$ for some $X \in \mathscr{P}(V),$ which contradicts $(E_1, E_2)$ also being a solution for the given $U_2L_1$ operator. Thus we have that $E_1 = S.$
\end{proof}
\begin{lemma} \label{l6}
Let $V$ be a finite set and $U_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined operator. If there exists an equivalence-pair solution $(E_1, E_2)$ to the operator such that there exist $[x]_{E_2}, [y]_{E_2} \in E_2$ with $[x]_{E_2} \neq [y]_{E_2}$ and $\textbf{u}_{E_1}([x]_{E_2}) = \textbf{u}_{E_1}([y]_{E_2}),$ then there exists another solution, $(E_1, H_2),$ where $H_2$ is the equivalence relation formed from $E_2$ by combining $[x]_{E_2}$ and $[y]_{E_2}$ and leaving all other classes as in $E_2.$ That is, $[x]_{E_2} \cup [y]_{E_2} = [z] \in H_2,$ and if $[w] \in E_2$ is such that $[w] \neq [x]_{E_2}$ and $[w] \neq [y]_{E_2},$ then $[w] \in H_2.$
\end{lemma}
\begin{proof}
Suppose that $(E_1, E_2)$ is a solution of a given $U_2L_1$ operator and $H_2$ is as defined above. Now, $U_2L_1(X) = Y $ iff the union of $E_1$-equivalence classes in $X$ intersects the equivalence classes of $E_2$ whose union is equal to $Y.$ So, in the $(E_1, H_2)$ solution, the only way that $U_{H_2}L_{E_1}(X)$ could differ from $U_{E_2}L_{E_1}(X)$ (which equals $U_2L_1(X)$) is if there is some equivalence class of $E_1$ which either intersects $[x]_{E_2}$ but not $[y]_{E_2},$ or intersects $[y]_{E_2}$ but not $[x]_{E_2}.$ However, this is not the case, since we have that $\textit{\textbf{u}}_{E_1}([x]_{E_2}) = \textit{\textbf{u}}_{E_1}([y]_{E_2}).$ Hence, $U_{E_2}L_{E_1} (X) = U_{H_2}L_{E_1}(X)$ for all $X \in \mathscr{P}(V),$ and therefore if $(E_1, E_2)$ is a solution to the given operator then so is $(E_1, H_2).$
\end{proof}
\noindent Next, we prove the claim in step 4 of Algorithm 4.2.
\begin{theorem} \label{t5}
Let $V$ be a finite set and $U_2L_1 : \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ a fully defined operator. If there exists an equivalence relation pair solution, then there exists a solution $(E_1, E_2),$ which satisfies,
\vspace{2mm}
\noindent (i) for each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent Furthermore $E_1 =S$ and $E_2 = R,$ where $(S, R)$ are the relations obtained by applying Algorithm 4.2 to the given $U_2L_1$ operator.
\end{theorem}
\begin{proof}
Suppose that there exists a solution $(C, D).$ Then by Lemma \ref{p21}, $C = S,$ where $S$ is produced by Algorithm 4.2. If $(S, D)$ satisfies condition (i) of the theorem, then take $(E_1, E_2) = (S, D).$ Otherwise, use repeated applications of Lemma \ref{l6} until we obtain a solution $(S, E_2)$ which satisfies the condition of the theorem. Since $\mathscr{P}(V)$ is finite, this occurs after a finite number of applications of the lemma. Moreover, by Lemma \ref{p21}, $E_2 \leq R.$
Consider the minimal sets in the output list of the given $U_2L_1$ operator. It is clear that these sets are unions of one or more equivalence classes of $E_2.$ Let $[y]_{E_2} \in E_2.$ Then for any $v \in \textbf{\textit{u}}_{E_1}([y]_{E_2}),$ we have $U_2L_1([v]_S) \supseteq [y]_{E_2}$ (by the claim in Lemma \ref{p21}).
\vspace{2mm}
\noindent \textbf{Claim:} For any $[y]_{E_2} \neq [z]_{E_2} \in E_2,$ there exists an output set $U_2L_1(X)$ that contains at least one of $[y]_{E_2}$ or $[z]_{E_2}$ but does not contain both.
\vspace{2mm}
Suppose that $[y]_{E_2} \neq [z]_{E_2} \in E_2.$ By the assumed condition of the theorem, $\textbf{\textit{u}}_{E_1}([y]_{E_2}) \neq \textbf{\textit{u}}_{E_1}([z]_{E_2}).$ Hence either (i) there exists $a \in V$ such that $a \in \textbf{\textit{u}}_{E_1}([y]_{E_2})$ and $a \not\in \textbf{\textit{u}}_{E_1}([z]_{E_2}),$ or (ii) there exists $a \in V$ such that $a \not\in \textbf{\textit{u}}_{E_1}([y]_{E_2})$ and
$a \in \textbf{\textit{u}}_{E_1}([z]_{E_2}).$ Consider the first case. This implies that $[a]_S \cap [y]_{E_2} \neq \emptyset$ while $[a]_S \cap [z]_{E_2} = \emptyset.$
Therefore, $U_2L_1([a]_S) \supseteq [y]_{E_2}$ but $U_2L_1([a]_S) \not\supseteq [z]_{E_2}.$ Similarly, for the second case we will get that $U_2L_1([a]_S) \supseteq [z]_{E_2}$ but $U_2L_1([a]_S) \not\supseteq [y]_{E_2}$ and the claim is shown.
We recall that $a \sim_R b \iff \ (a \in X \iff b \in X)$ for each $X$ in the range of the given $U_2L_1$. From Lemma \ref{p21} we have that $E_2 \leq R.$ From the above claim we see that if $[y]_{E_2} \neq [z]_{E_2}$ in $E_2,$ then there is an output set that contains one of $[y]_{E_2}$ or $[z]_{E_2},$ but not the other. Hence, if $x \not\sim_{E_2} y$ then $x \not\sim_R y.$ That is, $R \leq E_2.$ Therefore we have that $R = E_2.$
\end{proof}
\subsection{Characterising Unique Solutions}
\begin{theorem} \label{t6}
Let $V$ be a finite set and let $U_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a fully defined successive approximation operator on $\mathscr{P}(V).$ If $(S, R)$ is returned by Algorithm 4.2, then $(S, R)$ is the unique solution of the operator iff the following holds:
\vspace{2mm}
\noindent (i) For any $[x]_R \in R,$
there exists $[z]_S \in S$ such that, $ |[x]_R \cap [z]_S| = 1.$
\vspace{2mm}
\end{theorem}
\begin{proof}
We prove the $\Leftarrow$ direction first. So assume the condition holds. Then by Theorem \ref{t5}, if there is a unique solution, it is the $(S, R)$ produced by Algorithm 4.2. We note that by Lemma \ref{p21}, any other solution $(E_1, E_2)$ to the given $U_2L_1$ operator must be such that $E_1 = S$ and $E_2 \leq R.$
So, suppose to get a contradiction, that there exists a solution $(E_1, E_2)$ which is such that $E_2 < R.$ That is, $E_2$ contains a splitting of at least one of the equivalence classes of $R,$ say $[a]_R.$ Hence $|[a]_R| \geq 2.$ By assumption there exists a $[z]_S \in S$ such that $|[a]_R \cap [z]_S| = 1.$ Call the element in this intersection $v.$ We note that $[v]_S= [z]_S.$ Now, as $[a]_R$ is split into smaller classes in $E_2,$ $v$ must be in one of these classes, $[v]_{E_2}.$ Now, $U_2L_1([v]_S),$ when $U_2$ is based on $E_2,$ contains $[v]_{E_2}$ but does not contain $[a]_R.$ This is because $[v]_S \cap ([a]_R - [v]_{E_2}) = \emptyset.$ That is, $U_{E_2}L_S([v]_S) \not\supseteq [a]_R$ but $U_RL_S([v]_S) \supseteq [a]_R.$ Hence $U_{E_2}L_S (X) \neq U_RL_S(X)$ for some $X \in \mathscr{P}(V),$ which contradicts $(S, E_2)$ also being a solution to the given $U_2L_1$ operator for which $(S, R)$ is a solution. Hence $E_2 = R.$
Now we prove the $\Rightarrow$ direction. Suppose that $(E_1, E_2)$ is the unique solution, and assume that the condition does not hold. By uniqueness, $(E_1, E_2) = (S, R).$ Then there exists an $[x]_R \in R$ such that for all $[y]_S \in S$ with $[x]_R \cap [y]_S \neq \emptyset$ we have $|[x]_R \cap [y]_S| \geq 2.$
Suppose that $[x]_R$ has non-empty intersection with $n$ sets in $S.$ We note that $n \geq 1.$ Form a sequence of these sets: $S_1, \dots, S_n.$ Since $|[x]_R \cap S_i| \geq 2$ for each $i = 1, \dots, n,$ let $\{a_{i1}, a_{i2}\} \subseteq [x]_R \cap S_i$ for each such $i.$ We split $[x]_R$ to form a finer $E_2$ as follows: let $P = \{a_{i1}\ | \ i= 1, \dots, n\}$ and $Q = [x]_R - P$ be two equivalence classes in $E_2,$ and for the rest of $E_2,$ for any $[y]_R \in R$ such that $[y]_R \neq [x]_R, $ let $[y] \in E_2$ iff $[y] \in R.$ Now, $U_RL_S(X) = Y $ iff the union of $S$-equivalence classes in $X$ intersects the equivalence classes of $R$ whose union is equal to $Y.$ So, for the $(S, E_2)$ solution, the only way that $U_{E_2}L_S(X)$ could differ from $U_RL_S(X)$ is if there is an equivalence class in $S$ which intersects $P$ but not $Q,$ or $Q$ but not $P.$ However, this is not the case, because $\textit{\textbf{u}}_S(P) = \textit{\textbf{u}}_S(Q).$ Hence, $U_RL_S(X) = U_{E_2}L_S(X)$ for all $X \in \mathscr{P}(V),$ and if $(S, R)$ is a solution for the given operator, then so is $(S, E_2),$ which contradicts the assumed uniqueness of $(S, R).$
\end{proof}
\noindent The following result sums up the effects of Theorem \ref{t5} and Theorem \ref{t6}.
\vspace{2mm}
\begin{theorem}
Let $V$ be a finite set and let $U_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ Then there exists a unique equivalence-relation-pair solution $(E_1, E_2)$ iff the following hold:
\vspace{2mm}
\noindent (i) for each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\vspace{2mm}
\noindent (ii) For any $[x]_{E_2} \in E_2,$
there exists $[z]_{E_1} \in E_1$ such that, $ |[x]_{E_2} \cap [z]_{E_1}| = 1.$
\end{theorem}
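Both conditions of this characterisation are mechanically checkable. A small Python sketch (helper names are ours; each equivalence relation is given as a list of its classes, represented as frozensets):

```python
def u_upper(x, part):
    """u_E(X): union of the E-classes that meet X."""
    return frozenset().union(*(c for c in part if c & x))

def is_unique_solution(E1, E2):
    """Check conditions (i) and (ii) above for a solution (E1, E2)
    of a U2L1 operator; relations are lists of frozenset classes."""
    # (i): distinct E2-classes have distinct E1-upper approximations.
    uppers = [u_upper(c, E1) for c in E2]
    if len(set(uppers)) != len(E2):
        return False
    # (ii): every E2-class meets some E1-class in exactly one element.
    return all(any(len(c & d) == 1 for d in E1) for c in E2)
```

Such a check can be run on the pair returned by Algorithm 4.2 to decide whether the recovered solution is the only one.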
\section{Decomposing $L_2U_1$ Approximations}
For this case we observe that $L_2U_1$ is dual to the case previously investigated $U_2L_1$ operator. Due to the duality connection between $L_2U_1$ and $U_2L_1$, the question of unique solutions of the former reduces to the latter as the following proposition shows. \\
\begin{proposition} \label{p22}
Let $V$ be a finite set and let $L_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ Then any solution $(E_1, E_2)$ is also a solution of the $U_2L_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ operator where $U_2L_1(X) = -L_2U_1(-X)$ for any $X \in \mathscr{P}(V).$ Therefore, the solution $(E_1, E_2)$ for the defined $L_2U_1$ operator is unique iff the solution for the corresponding $U_2L_1$ operator is unique.
\end{proposition}
\begin{proof}
Recall that $ U_2L_1(X) = -L_2U_1(-X)$. Hence, if there exists a solution $(E_1, E_2)$ which corresponds to the given $L_2U_1$ operator, this solution corresponds to a solution for the $U_2L_1$ operator which is based on the same $(E_1, E_2)$ by the equation $ U_2L_1(X) = -L_2U_1(-X).$ Similarly for the converse.
\end{proof}
\noindent \textbf{Algorithm:} Let $V$ be a finite set and let $L_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ To solve for a solution, change it to solving for a solution for the corresponding $U_2L_1$ operator by the equation $ U_2L_1(X) = -L_2U_1(-X).$ Then, when we want to know the $U_2L_1$ output of a set we look at the $L_2U_1$ output of its complement set and take the complement of that. Next, use Algorithm 4.2 and the solution found will also be a solution for the initial $L_2U_1$ operator.
\subsection{Characterising Unique Solutions}
\begin{theorem}
Let $V$ be a finite set and let $L_2U_1: \mathscr{P}(V) \rightarrow \mathscr{P}(V)$ be a given fully defined operator on $\mathscr{P}(V).$ If $(E_1, E_2)$ is a solution, then it is unique iff the following holds:
\vspace{2mm}
\noindent (i) for each $[x]_{E_2}, [y]_{E_2} \in E_2,$ if $[x]_{E_2} \neq [y]_{E_2}$ then $\textbf{u}_{E_1}([x]_{E_2}) \neq \textbf{u}_{E_1}([y]_{E_2}), $
\noindent (ii) For any $[x]_{E_2} \in E_2,$
there exists $[z]_{E_1} \in E_1$ such that, $ |[x]_{E_2} \cap [z]_{E_1}| = 1.$
\vspace{2mm}
\end{theorem}
\begin{proof}
This follows from Proposition \ref{p22}, Theorem \ref{t5} and Theorem \ref{t6}.
\end{proof}
\section{Conclusion}
We have defined and examined the consequences of double successive rough set approximations based on two, generally unequal, equivalence relations on a finite set. We have given algorithms to decompose a given defined operator into its constituent parts. Additionally, in sections 4.2 and 4.3 we have found a conceptual translation of the main results which is very much in the spirit of what Yao suggested in \cite{TSid}. These types of links are especially helpful in forming a coherent map of the mass of existing literature.
This type of analysis can be seen as somewhat analogous to decomposing a wave into constituent sine and cosine waves using Fourier analysis. In our case, we work out the possibilities of what can be reconstructed if we know that a system has in-built layered approximations. It is possible that some heuristics of how the brain works can be modelled using such approximations and cognitive science is a possible application for the theory which we have begun to work out.
\bibliographystyle{plain}
\section{Introduction and Motivation}
\label{sec:introduction}
The Standard Model (SM) of particle physics has been the most successful theory of elementary particles that provides excellent explanations of many physical phenomena occurring in Nature~\cite{Workman:2022ynf}. However, there are a plethora of motivations for physicists to go beyond the SM, for example, to explain the observed matter-antimatter asymmetry in the Universe, the existence of dark matter and dark energy, and the non-zero neutrino mass. The observed mass-induced flavor transition~\cite{Super-Kamiokande:2004orf, Mohapatra:2005wg, Strumia:2006db, Gonzalez-Garcia:2007dlo, Fantini:2018itu}, which requires the neutrinos to be massive, provides the first experimental signature of physics beyond the SM. The standard three-flavor neutrino oscillation framework involves three mixing angles ($\theta_{12}$, $\theta_{13}$, and $\theta_{23}$), one Dirac CP phase ($\delta_{\rm CP}$), and two independent mass-squared differences, $\Delta{m}^2_{21}~ (\equiv{m}^2_2-{m}^2_1~\text{in the solar sector})$ and $\Delta{m}^2_{31}~(\equiv{m}^2_3-{m}^2_1~\text{in the atmospheric sector})$. Now that the phenomenon of neutrino oscillation has been well established, the focus has shifted to measuring the oscillation parameters with utmost precision. Marvelous data from various neutrino oscillation experiments such as Super-K-Solar~\cite{Super-Kamiokande:2016yck}, SNO~\cite{SNO:2011hxd}, BOREXINO~\cite{BOREXINO:2014pcl}, Super-K-Atmospheric~\cite{Super-Kamiokande:2004orf, Super-Kamiokande:2010tar, Super-Kamiokande:2017yvm, Super-Kamiokande:2019gzr}, IceCube-DeepCore~\cite{IceCube:2017lak}, ANTARES~\cite{ANTARES:2018rtf}, Daya Bay~\cite{DayaBay:2018yms}, RENO~\cite{RENO:2018dro}, Double Chooz~\cite{DoubleChooz:2019qbj}, MINOS~\cite{MINOS:2013utc}, Tokai-to-Kamioka (T2K)~\cite{T2K:2019bcf}, and NuMI Off-axis $\nu_{e}$ Appearance (NO$\nu$A)~\cite{NOvA:2019cyt, NOvA:2021nfi} have already provided a first-order picture of the lepton mixing pattern in the three-flavor scenario.
There are three major issues in the three-flavor neutrino oscillation paradigm that are yet to be resolved, namely, the value of the CP phase ($\delta_{\rm CP}$), the octant of the atmospheric mixing angle ($\theta_{23}$), and the neutrino mass ordering. The Deep Underground Neutrino Experiment (DUNE)~\cite{DUNE:2015lol, DUNE:2020lwj, DUNE:2020ypp, DUNE:2020jqi, DUNE:2021cuw, DUNE:2021mtg}, with its wide-band neutrino beam, will play a crucial role in establishing the deviation of the atmospheric mixing angle ($\theta_{23}$) from its maximal value and settling its correct octant with outstanding precision~\cite{Agarwalla:2021bzs}. DUNE can measure the value of the atmospheric mass splitting at several $L/E$ values and settle the issue of neutrino mass ordering at high confidence level by exploiting the Earth's matter effect that it possesses due to its long baseline~\cite{DUNE:2020ypp}. DUNE is also capable of establishing leptonic CP violation by measuring the value of $\delta_{\rm CP}$ precisely~\cite{Agarwalla:2022xdo}. Another proposed long-baseline experiment, which spans from Tokai to Hyper-Kamiokande (T2HK)~\cite{Hyper-KamiokandeWorkingGroup:2014czz, Hyper-KamiokandeProto-:2015xww, Hyper-Kamiokande:2018ofw}, will also shed light on these pressing issues. In the T2HK setup, one detector is placed in Japan, 295 km away from the J-PARC facility, and will receive a highly intense narrow-band neutrino beam from the J-PARC source at an off-axis angle of $2.5^\circ$. T2HK will have a baseline about four times shorter than that of DUNE, and will in turn face a negligible Earth's matter effect; hence it can measure the value of $\delta_{\rm CP}$ and establish leptonic CP violation without the interference of the fake CP asymmetry induced by Earth's matter~\cite{Agarwalla:2022xdo}.
Apart from measuring the standard oscillation parameters with high precision, the long-baseline experiments will also search for new physics beyond the Standard Model (BSM)\footnote{For an extensive review on this topic, see Refs.~\cite{Arguelles:2019xgp, Arguelles:2022tki}.} through neutrino oscillation, namely, eV-scale sterile neutrinos~\cite{Berryman:2015nua, Agarwalla:2016mrc, Agarwalla:2016xxa, Agarwalla:2016xlg, Agarwalla:2018nlx, KumarAgarwalla:2019blx}, neutrino non-standard interactions~\cite{Coloma:2015kiu, Agarwalla:2016fkh}, non-unitary neutrino mixing~\cite{Escrihuela:2016ube, Agarwalla:2021owd}, long-range interactions~\cite{Chatterjee:2015gta}, neutrino decay~\cite{Choubey:2017dyu, Coloma:2017zpg}, and Lorentz Invariance Violation (LIV)~\cite{Barenboim:2018ctx, KumarAgarwalla:2019gdj, Fiza:2022xfw}. In this work, we mainly focus on LIV. Lorentz symmetry has so far been found to be an exact symmetry of Nature, and accordingly the Standard Model of particle physics conserves it. However, there exist several models, unifying the SM and general relativity, that violate the Lorentz and CPT symmetry at the Planck scale ($\sim10^{19}$ GeV). This can be realized at a low energy scale accessible to the current experiments under the Standard Model Extension (SME) framework. Various neutrino experiments are at the forefront of testing Lorentz Invariance Violation at a low energy scale. For example, in an attempt to understand the excess of $\nu_e$ signal events in the $\nu_{\mu}$ beam, the LSND collaboration~\cite{LSND:2005oop} searched for a possible signature of LIV in the context of neutrino oscillation. They did not find any signature of LIV and put strong constraints on the relevant LIV parameters.
Several other experiments have made efforts to search for LIV, which include MINOS~\cite{MINOS:2008fnv, MINOS:2010kat, MINOS:2012ozn}, MiniBooNE~\cite{MiniBooNE:2011pix}, Double Chooz~\cite{DoubleChooz:2012eiq}, Super-K~\cite{Super-Kamiokande:2014exs}, IceCube~\cite{IceCube:2010fyu} and T2K~\cite{Abe:2017eot}. None of these experiments found any positive signal of LIV and set competitive bounds on various LIV parameters. In addition to the aforementioned studies by the experimental collaborations, there are various independent works towards the exploration of LIV with accelerator neutrinos in long-baseline experiments~\cite{Dighe:2008bu, Barenboim:2009ts, Rebel:2013vc, Diaz:2015dxa, deGouvea:2017yvn, Barenboim:2017ewj, Barenboim:2018lpo, Barenboim:2018ctx, Majhi:2019tfi, Fiza:2022xfw, Majhi:2022fed}, reactor antineutrinos in short-baseline experiments~\cite{Giunti:2010zs}, atmospheric neutrinos~\cite{Datta:2003dg, Chatterjee:2014oda, SinghKoranga:2014mxh, Sahoo:2021dit, Sahoo:2022rns}, solar neutrinos~\cite{Diaz:2016fqd}, and high-energy neutrinos from astrophysical sources~\cite{Hooper:2005jp, Tomar:2015fha, Liao:2017yuy}. Recently, the KATRIN experiment, using the data from the first scientific run, placed limits on some of the oscillation-free LIV parameters that can not be probed by the time-of-flight or neutrino oscillation experiments~\cite{KATRIN:2022qou}. An exhaustive list of the constraints on all the CPT-violating and CPT-conserving LIV parameters can be found in Ref.~\cite{Kostelecky:2008ts}.
In the present work, we mainly focus on the capability of the upcoming long-baseline experiments, DUNE and T2HK in isolation and combination, to constrain the LIV parameters. We derive the sensitivities of these experiments to place competitive limits on the off-diagonal CPT-violating LIV parameters ($a_{\alpha\beta}$ where $\alpha, \beta = e, \mu, \tau$ and $\alpha\neq\beta$) and for the first time, the off-diagonal CPT-conserving LIV parameters ($c_{\alpha\beta}$ where $\alpha, \beta = e, \mu, \tau$ and $\alpha\neq\beta$). We study the impact of these LIV parameters and their associated phases at the probability level. Then we shift our attention to explore the possible degeneracies between the standard oscillation parameters ($\theta_{23}$ and $\delta_{\rm CP}$) and the above-mentioned LIV parameters. Finally, we derive the expected constraints on these LIV parameters using the upcoming long-baseline experiments, DUNE and T2HK in standalone mode and also in combination, considering their state-of-the-art simulation details. To understand various interesting features of our numerical results, we derive simple and compact analytical expressions of the oscillation probabilities for both appearance and disappearance channels.
This paper is organized as follows. In section~\ref{sec:LIV}, we discuss the formalism of neutrino oscillation in the presence of Lorentz Invariance Violation and the effects of various LIV parameters on the appearance and disappearance probabilities. In section~\ref{sec:LBL}, we give the details of the long-baseline experimental setups considered for our work and discuss the expected synergies between DUNE and T2HK in various aspects. Also, this section discusses the effect of LIV parameters at the event level. We dedicate section~\ref{sec:RnA} to describe the numerical technique used for our analyses. We present our results in section~\ref{sec:results}, where we show the correlations among different LIV parameters, the atmospheric mixing angle ($\theta_{23}$), and the Dirac CP-phase ($\delta_{\rm CP}$) and finally the expected bounds on the CPT-conserving and CPT-violating LIV parameters. We summarize our results and give our concluding remarks in section~\ref{sec:SnC}. In appendix~\ref{appndx}, we compare the numerical (exact) and analytical (approximate) probabilities.
\section{Neutrino Oscillation in Presence of Lorentz Invariance Violation}
\label{sec:LIV}
\subsection{Theoretical Formalism of LIV}
The Lorentz invariance has been considered to be an inalienable part of both the Standard Model (SM) of particle physics and General Relativity (GR), for the global as well as the local variables. However, a few proposed models in String Theory~\cite{Polyakov:1987ez, Kostelecky:1988zi, Kostelecky:1989jp, Kostelecky:1990pe, Kostelecky:1991ak, Kostelecky:1995qk, Kostelecky:1999mu, Kostelecky:2000hz} and Loop Quantum Gravity~\cite{Gambini:1998it, Alfaro:2002xz, Sudarsky:2002ue, Amelino-Camelia:2002aqz, Ng:2003jk} allow Lorentz invariance violation (LIV) while attempting a unification of gravity with the SM gauge fields at the Planck scale ($M_P \sim 10^{19}$ GeV). Here, we consider the mechanism proposed in string theory, which spontaneously breaks the CPT and Lorentz symmetry in a higher dimension $(>4)$ of space-time. A plausible extension of such a violation of Lorentz and CPT symmetries to the realistic four-dimensional space-time can be attempted by introducing new interaction coefficients to the minimal SM of particle physics as a tiny perturbation. In an observer-scalar effective field theory~\cite{Weinberg:1979sa}, the strength of such an interaction is expected to be suppressed by order of ($1/M_P$)~\cite{Colladay:1998fq, Kostelecky:2003fs, Colladay:1996iz, Kostelecky:2000mm, Kostelecky:2003cr, Bluhm:2005uj}, manifesting the effect of Planck-scale physics at low energy. The impacts of such LIV interactions can be experienced by the fundamental particles in a broad category of experiments via coherent, interference, or extreme effects.
By virtue of mass-induced neutrino flavor oscillations, the neutrinos are sensitive to the LIV effects while propagating through space-time. Under the minimal SM extension (SME), the Lagrangian density of the induced renormalizable and gauge-invariant LIV interaction terms for the left-handed neutrinos can be expressed as ~\cite{Kostelecky:2011gq, KumarAgarwalla:2019gdj, Antonelli:2020nhn, Sahoo:2021dit, Sahoo:2022rns} :
\begin{align}
\mathcal{L}_{\rm LIV} & = -\frac{1}{2}\left[a^{\mu}_{\alpha\beta}\,\overline{\psi}_\alpha\,\gamma_{\mu}\,P_L\,\psi_\beta - i c^{\mu\nu}_{\alpha\beta}\,\overline{\psi}_\alpha\,\gamma_{\mu}\,\partial_\nu P_L\,\psi_\beta \right] + h.c.\,,
\label{Eq:LIV-1}
\end{align}
where $P_L$ is the left-handed projection operator, and $a^{\mu}$ and $c^{\mu\nu}$ are the CPT-violating and CPT-conserving parameters, respectively. Here, $(\mu,\,\nu)$ are space-time indices, and $\alpha,\,\beta$ are the neutrino-flavor indices. The CPT-violating coefficient changes its sign under a CPT transformation, while the CPT-conserving one does not (see Refs.~\cite{Sahoo:2021dit, Kostelecky:2003cr}). Now, considering a realistic scenario where the neutrinos propagate through Earth matter, the effective Hamiltonian of an ultra-relativistic left-handed neutrino in the three-neutrino mixing scenario can be expressed in the flavor basis as~\cite{Kostelecky:2011gq, Sahoo:2021dit, Sahoo:2022rns}:
\begin{align}
\mathcal{H}_{\rm eff} & = \frac{1}{2E}\,U\,\Delta m^2\,U^\dagger
+ \frac{1}{E}\big(a^\mu_{L} p_\mu
- c^{\mu\nu}_{L} p_\mu p_\nu \big) + \sqrt{2}G_FN_e\tilde{I}.
\label{Eq:LIV-2}
\end{align}
In the first term of the above equation, $U$ represents the three-neutrino unitary mixing matrix, also known as the PMNS matrix, and $\Delta m^2$ contains the two independent mass-squared splittings in the form of a diagonal matrix: ${\rm diag}(0,\,\Delta{m}^2_{21},\,\Delta{m}^2_{31})$. The second term gives the strength of the potential induced on the left-handed neutrino due to LIV, where $p$ is the neutrino four-momentum. The third term is the effective matter potential induced by the elastic charged-current scattering between $\nu_e$ and electrons. Here, $G_F$ is the Fermi constant, $N_e$ is the number density of the ambient electrons present in matter, and $\tilde{I}$ is a diagonal matrix with components $(1,\,0,\,0)$. The scalar part of the last term can be parameterized in terms of the matter density as $\sqrt{2}G_FN_e$ $\approx$ $7.6\,\times 10^{-23}\cdot Y_e \cdot \rho\left(\rm g/cm^3\right)$ GeV, where $Y_e$ is the relative electron number density in the ambient matter with an average Earth-matter density $\rho$.\\
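As a quick numerical check of the parameterization above, one can evaluate the size of the matter potential; the values of $Y_e$ and $\rho$ below are illustrative assumptions for Earth matter, not fitted quantities:

```python
# Matter potential V_CC = sqrt(2) * G_F * N_e ~ 7.6e-23 * Y_e * rho[g/cm^3] GeV
Y_e = 0.5            # relative electron number density (typical assumption)
rho = 2.8            # g/cm^3, illustrative average density along a long baseline
V_CC = 7.6e-23 * Y_e * rho
print(f"V_CC ~ {V_CC:.2e} GeV")   # ~1e-22 GeV, comparable to dm31^2 / 2E
```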
In this study, we only focus on the time-like component ($\mu,\,\nu\,=\,0$) of LIV coefficients in an isotropic space-time coordinate. From here onwards, we will consider $(a^0_L)_{\alpha\beta} \equiv a_{\alpha\beta}$ and $(c^{00}_L)_{\alpha\beta} \equiv c_{\alpha\beta}$. Using the Sun-centered celestial-equatorial coordinate (see Ref.~\cite{Kostelecky:2008ts}) as an approximated inertial frame of reference, the eq. (\ref{Eq:LIV-2}) can be re-written as follows :
\begin{align}
\mathcal{H}_{\rm eff} = \; \frac{1}{2E}
U\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & \Delta m^{2}_{21} & 0 \\
0 & 0 & \Delta m^{2}_{31} \\
\end{array}\right)U^{\dagger}
+&\left(\begin{array}{ccc}
a_{ee} & a_{e\mu} & a_{e\tau} \\
a^*_{e\mu} & a_{\mu\mu} & a_{\mu\tau} \\
a^*_{e\tau} & a^*_{\mu\tau} & a_{\tau\tau}
\end{array} \right) \nonumber \\
-&\frac{4}{3} E
\left(
\begin{array}{ccc}
c_{ee} & c_{e\mu} & c_{e\tau} \\
c^*_{e\mu} & c_{\mu\mu} & c_{\mu\tau} \\
c^*_{e\tau} & c^*_{\mu\tau} & c_{\tau\tau}
\end{array}
\right)
+ \sqrt{2}G_{F}N_{e}
\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right).
\label{Eq:LIV-3}
\end{align}
For the case of right-handed antineutrinos, $U \to U^*$, $a_{\alpha\beta} \to -a_{\alpha\beta}^*$, $c_{\alpha\beta} \to c_{\alpha\beta}^*$, and $\sqrt{2}G_FN_e \to -\sqrt{2}G_FN_e$. Note that the extra factor of $4/3$ appears due to the choice of isotropic coordinates. It is essential to note that, at their origin, the LIV coefficients $a_{\alpha\beta}$ and $c_{\alpha\beta}$ are real-valued quantities. However, owing to hermiticity, the off-diagonal elements of these LIV interaction matrices can acquire imaginary components when they are embedded in the effective Hamiltonian.
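The effective Hamiltonian above can be diagonalized numerically to obtain exact oscillation probabilities. The sketch below (an illustration, not the GLoBES machinery used later in this work) builds $\mathcal{H}_{\rm eff}$ with a single CPT-violating element $a_{e\mu}$ switched on and evolves it over a constant-density baseline. The oscillation parameters follow Table~\ref{tab:params_value}, while the matter density and electron fraction are assumed values:

```python
import numpy as np

t12, t13, t23 = np.deg2rad([33.45, 8.62, 42.1])
dcp = np.deg2rad(230.0)
dm21, dm31 = 7.42e-5 * 1e-18, 2.51e-3 * 1e-18    # eV^2 -> GeV^2
KM = 5.0677e18                                    # 1 km in GeV^-1

def pmns():
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    ep, em = np.exp(1j * dcp), np.exp(-1j * dcp)
    return np.array([
        [c12*c13, s12*c13, s13*em],
        [-s12*c23 - c12*s23*s13*ep, c12*c23 - s12*s23*s13*ep, s23*c13],
        [s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13]])

def prob_mue(E, L_km, a_emu=0.0, phi=0.0, rho=2.8, Ye=0.5):
    U = pmns()
    H = U @ np.diag([0.0, dm21, dm31]) @ U.conj().T / (2 * E)   # vacuum term
    H[0, 1] += a_emu * np.exp(1j * phi)                          # CPT-violating a-term
    H[1, 0] += a_emu * np.exp(-1j * phi)
    H[0, 0] += 7.6e-23 * Ye * rho                                # matter potential
    lam, V = np.linalg.eigh(H)                                   # H is Hermitian
    S = V @ np.diag(np.exp(-1j * lam * L_km * KM)) @ V.conj().T  # exp(-i H L)
    return abs(S[0, 1])**2                                       # |<nu_e|S|nu_mu>|^2

p_si  = prob_mue(2.5, 1285.0)                       # near DUNE first osc. maximum
p_liv = prob_mue(2.5, 1285.0, a_emu=2e-23, phi=np.pi/2)
print(f"P_mue: SI = {p_si:.3f}, with a_emu = {p_liv:.3f}")
```

With the benchmark phase $\phi_{e\mu}=90^\circ$ the probability near the first oscillation maximum comes out larger than the SI case, in line with the discussion of Fig.~\ref{fig:app_prob} below.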
To estimate the strength of the LIV parameters that may affect the outcome of the long-baseline experiments under consideration, we compare the first three terms in the neutrino propagation Hamiltonian shown in Eq.~\ref{Eq:LIV-3}. The first term governs neutrino oscillation in vacuum, whereas the second and third terms are the contributions from CPT-violating and CPT-conserving LIV, respectively. For typical long-baseline experiments with neutrino energies in the GeV range, the relevant scale for standard atmospheric neutrino oscillation is $\Delta m^2_{31}/{2E}\sim 10^{-22}$ GeV. Hence, in order to have noticeable effects from LIV, the parameters in the second and third terms should be of the same order as the standard oscillation scale: both the CPT-violating ($a_{\alpha\beta}$) and CPT-conserving ($E\times c_{\alpha\beta}$) parameters need a strength of order $10^{-22}$ GeV to produce visible effects on top of standard neutrino oscillation in matter.
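This comparison is one line of arithmetic (the benchmark energy below is an assumption, typical of the GeV range):

```python
# Standard atmospheric oscillation scale dm31^2 / 2E at a GeV-range energy
dm31_GeV2 = 2.51e-3 * 1e-18       # Delta m^2_31: eV^2 -> GeV^2
E = 2.5                           # GeV, typical long-baseline energy (assumed)
scale = dm31_GeV2 / (2 * E)
print(f"dm31^2/2E ~ {scale:.2e} GeV")    # ~5e-22 GeV, i.e. of order 1e-22 GeV
```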
Following the above-discussed formalism, various neutrino experiments have given bounds on the CPT-violating and CPT-conserving LIV parameters. In particular, a recent publication by the IceCube collaboration~\cite{IceCube:2017qyp}, where the analysis has been performed in an effective two-flavor oscillation scenario, presented the most stringent bounds on the CPT-violating and the CPT-conserving LIV parameters in $\mu-\tau$ sector. Apart from this, there are also limits on both CPT-violating and CPT-conserving LIV parameters using the atmospheric neutrino data from Super-K~\cite{Super-Kamiokande:2014exs}. In Table~\ref{tab:existing_bounds}, we tabulate the existing limits on various off-diagonal LIV parameters from these two experiments.
Note that in our work, we represent the off-diagonal LIV coefficients as $|R|\cdot e^{i\phi}$, where $|R|$ represents the magnitude of the LIV coefficient, and $\phi$ is the phase of that corresponding quantity.
\begin{table}[h!]
\centering
\begin{center}
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c| c| c| c|}
\hline \hline
\multicolumn{4}{|c|}{Existing constraints on CPT-violating LIV parameters} \\ \hline
Experiments & $a_{e\mu} ~[ \rm GeV ]$ & $a_{e\tau} ~[ \rm GeV ]$ & $a_{\mu\tau} ~[ \rm GeV ]$ \\ \hline
\multirow{2}{*}{Super-K (95\% C.L.)} & $\mathrm{Re}(a_{e\mu}) < 1.8\times10^{-23}$ & $\mathrm{Re}(a_{e\tau}) < 4.1\times10^{-23}$ & $\mathrm{Re}(a_{\mu\tau}) < 0.65\times10^{-23}$ \\
& $\mathrm{Im}(a_{e\mu}) < 1.8\times10^{-23}$ & $\mathrm{Im}(a_{e\tau}) < 2.8\times10^{-23}$ & $\mathrm{Im}(a_{\mu\tau}) < 0.51\times10^{-23}$ \\
\hline
\multirow{2}{*}{IceCube (99\% C.L.)} &\multirow{2}{*}{--} & \multirow{2}{*}{--} & $|\mathrm{Re}(a_{\mu\tau})| < 0.29\times10^{-23}$ \\
&\multirow{2}{*}{--} & & $|\mathrm{Im}(a_{\mu\tau})| < 0.29\times10^{-23}$ \\
\hline
\hline
\multicolumn{4}{|c|}{Existing constraints on CPT-conserving LIV parameters} \\ \hline
Experiments & $c_{e\mu}$ & $c_{e\tau}$ & $c_{\mu\tau}$ \\ \hline
\multirow{2}{*}{Super-K (95\% C.L.)} & $\mathrm{Re}(c_{e\mu}) < 8.0\times10^{-27}$ & $\mathrm{Re}(c_{e\tau}) < 9.3\times10^{-25}$ & $\mathrm{Re}(c_{\mu\tau}) < 4.4\times10^{-27}$ \\
& $\mathrm{Im}(c_{e\mu}) < 8.0\times10^{-27}$ & $\mathrm{Im}(c_{e\tau}) < 1.0\times10^{-24}$ & $\mathrm{Im}(c_{\mu\tau}) < 4.2\times10^{-27}$ \\
\hline
\multirow{2}{*}{IceCube (99\% C.L.)} &\multirow{2}{*}{--} & \multirow{2}{*}{--} & $|\mathrm{Re}(c_{\mu\tau})| < 0.39\times10^{-27}$ \\
&\multirow{2}{*}{--} & & $|\mathrm{Im}(c_{\mu\tau})| < 0.39\times10^{-27}$
\\\hline\hline
\end{tabular}
\end{adjustbox}
\end{center}
\mycaption{Existing constraints on the off-diagonal CPT-violating and CPT-conserving LIV parameters from Super-K~\cite{Super-Kamiokande:2014exs} and IceCube~\cite{IceCube:2017qyp}.}
\label{tab:existing_bounds}
\end{table}
\FloatBarrier
\subsection{Analytical Expressions of the Oscillation Probabilities with LIV}
The presence of LIV terms in the neutrino Hamiltonian would affect neutrino propagation through a medium, consequently modifying the neutrino flavor transition probabilities. It is therefore possible to probe LIV with various neutrino oscillation experiments. To gain an analytical understanding of the neutrino flavor transition probabilities in the presence of LIV, we follow the approach of Refs.~\cite{Kikuchi:2008vq, Agarwalla:2016fkh, KumarAgarwalla:2019gdj}, where the authors use perturbation theory to calculate the neutrino evolution matrix in various BSM scenarios, such as neutral-current NSI and CPT-violating LIV. We use $\alpha~(\equiv \Delta m^2_{21}/\Delta m^2_{31})$, $\sin^2\theta_{13}$, and the LIV parameters $a'_{\alpha\beta}~(\equiv a_{\alpha\beta}/\sqrt{2}G_F N_e)$ and $c'_{\alpha\beta}~(\equiv c_{\alpha\beta}E/\sqrt{2}G_F N_e)$ ($\alpha,\beta = e,\mu,\tau$) as the expansion parameters. In this work, we mainly probe the three off-diagonal CPT-violating and CPT-conserving LIV parameters $a_{\alpha\beta}$ and $c_{\alpha\beta}$ ($\alpha,\beta=e,\mu,\tau;\,\alpha\neq\beta$).
\newline
\newline
\noindent
$\bullet$ \textbf{$\nu_\mu\to\nu_e$ Appearance Channel:}\\
\\
The expression for the $\nu_\mu\to\nu_e$ transition probability, considering terms up to first-order in the above mentioned expansion parameters, can be written as~\cite{KumarAgarwalla:2019gdj},
\begin{align}\label{eq:pme_liv}
P_{\mu e} \simeq P_{\mu e}(\text{SI}) + P_{\mu e}(a_{e\beta}/c_{e\beta}) + \mathcal{O}(\alpha^2, \alpha \sin^2\theta_{13},a'^2_{e\beta},c'^2_{e\beta},a'^2_{\mu\tau},c'^2_{\mu\tau}),\qquad\beta=\mu,\tau.
\end{align}
The first term on the right-hand side (RHS) is the standard $\nu_\mu\to\nu_e$ appearance probability in the absence of any new physics parameters,
\begin{align}\label{eq:p_si}
P_{\mu e}(\text{SI})
&\simeq \mathbb{X} + \mathbb{Y}\cos(\delta_{\text{CP}} + \Delta),
\end{align}
where,
\begin{align}\label{eq:si_coeff}
&\mathbb{X} = \sin^22\theta_{13}\sin^2\theta_{23} \frac{\sin^{2}\big[(1-\hat{A})\Delta \big]}{(1-\hat{A})^{2}};\nonumber \\
&\mathbb{Y} = \alpha \sin2\theta_{12} \sin2\theta_{13} \sin2\theta_{23} \frac{\sin \hat{A}\Delta}{\hat{A}} \frac{\sin \big[(1-\hat{A})\Delta\big]}{1-\hat{A}}; \nonumber \\
&\hat{A} = \frac{2\sqrt{2}G_{F}N_{e}E}{\ensuremath{\Delta m_{31}^2}}, \qquad \Delta = \frac{\ensuremath{\Delta m_{31}^2} L}{4E}.
\end{align}
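A minimal numerical sketch of the coefficients above, evaluated at the DUNE baseline, gives a feel for the size of $P_{\mu e}(\text{SI})$. The oscillation parameters follow Table~\ref{tab:params_value}; the matter density and electron fraction entering $\hat{A}$ are assumed values:

```python
import numpy as np

s23 = np.sin(np.deg2rad(42.1))
s2x12, s2x13, s2x23 = (np.sin(2 * np.deg2rad(a)) for a in (33.45, 8.62, 42.1))
alpha = 7.42e-5 / 2.51e-3                    # dm21^2 / dm31^2
dcp = np.deg2rad(230.0)
dm31 = 2.51e-3 * 1e-18                       # eV^2 -> GeV^2
E, L = 2.5, 1285.0 * 5.0677e18               # GeV; km -> GeV^-1
V = 7.6e-23 * 0.5 * 2.8                      # sqrt(2) G_F N_e (Ye=0.5, rho=2.8 assumed)
Ahat = 2 * V * E / dm31
Delta = dm31 * L / (4 * E)
X = s2x13**2 * s23**2 * np.sin((1 - Ahat) * Delta)**2 / (1 - Ahat)**2
Y = (alpha * s2x12 * s2x13 * s2x23
     * np.sin(Ahat * Delta) / Ahat * np.sin((1 - Ahat) * Delta) / (1 - Ahat))
P_si = X + Y * np.cos(dcp + Delta)
print(f"P_mue(SI) ~ {P_si:.3f}")             # close to the first oscillation maximum
```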
The second term on the RHS of Eq.~\ref{eq:pme_liv} is the contribution from the LIV parameters $a_{e\beta}/c_{e\beta}$ ($\beta = \mu,\tau$). For the CPT-violating LIV case, the expression of this term is
\begin{align}\label{eq:p_cptv_liv}
P_{\mu e}(a_{e\beta})
\simeq
2|a_{e\beta}|L\sin\ensuremath{\theta_{13}}\sin2\ensuremath{\theta_{23}} \sin\Delta\big[\mathbb{Z}_{e\beta}\sin(\delta_{\rm CP} + \varphi_{e\beta}) +
\mathbb{W}_{e\beta}\cos(\delta_{\rm CP} + \varphi_{e\beta})\big],
\end{align}
and for CPT-conserving case,
\begin{align}\label{eq:p_cptc_liv}
P_{\mu e}(c_{e\beta})
\simeq -\frac{8}{3}|c_{e\beta}|EL\sin\ensuremath{\theta_{13}}\sin2\theta_{23}\sin\Delta \big[\mathbb{Z}_{e\beta}\sin(\delta_{\rm CP} + \varphi_{e\beta}) +
\mathbb{W}_{e\beta}\cos(\delta_{\rm CP} + \varphi_{e\beta})\big],
\end{align}
where,
\begin{align}\label{eq:liv_coeff}
&\mathbb{Z}_{e\beta} =
\begin{cases}
- c_{23} \sin \Delta, & \text{if}\ \beta=\mu. \\
s_{23} \sin \Delta, & \text{if}\ \beta=\tau.
\end{cases} \nonumber \\
&\mathbb{W}_{e\beta} =
\begin{cases}
c_{23} \big(\frac{s_{23}^{2}\sin \Delta}{c_{23}^{2}\Delta} + \cos \Delta \big), & \text{if}\ \beta=\mu. \\
s_{23} \big(\frac{\sin \Delta}{\Delta} - \cos \Delta \big), & \text{if}\ \beta=\tau.
\end{cases}
\end{align}
Note that the other off-diagonal LIV parameter $\ensuremath{a_{\mu\tau}}/\ensuremath{c_{\mu\tau}}$ does not appear in the first-order terms; it may be present in higher-order terms and has a relatively smaller impact on the appearance probability. For the appearance probability in the antineutrino case, one needs to apply $a_{\alpha\beta}\rightarrow-a^{*}_{\alpha\beta}$, $c_{\alpha\beta} \to c_{\alpha\beta}^*$, and $\hat{A}\to -\hat{A}$ in Eqs.~(\ref{eq:p_si})--(\ref{eq:liv_coeff}). In Appendix~\ref{appndx}, we show the validity of the approximate analytical expression of the appearance probability derived in this section by comparing it with the exact probability calculated numerically.
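To see the size of this first-order LIV term, one can evaluate $P_{\mu e}(a_{e\mu})$ from Eq.~\ref{eq:p_cptv_liv} with the $\mathbb{Z}$, $\mathbb{W}$ coefficients of Eq.~\ref{eq:liv_coeff} at the benchmark strength $|a_{e\mu}|=2\times10^{-23}$ GeV used in Fig.~\ref{fig:app_prob}; the phase and energy below are illustrative choices:

```python
import numpy as np

t13, t23 = np.deg2rad(8.62), np.deg2rad(42.1)
s13, s23, c23 = np.sin(t13), np.sin(t23), np.cos(t23)
dcp = np.deg2rad(230.0)
dm31 = 2.51e-3 * 1e-18                       # eV^2 -> GeV^2
E, L = 2.5, 1285.0 * 5.0677e18               # GeV; km -> GeV^-1
Delta = dm31 * L / (4 * E)
# Z, W coefficients for beta = mu (Eq. liv_coeff)
Z = -c23 * np.sin(Delta)
W = c23 * (s23**2 * np.sin(Delta) / (c23**2 * Delta) + np.cos(Delta))
a, phi = 2e-23, np.deg2rad(90.0)             # benchmark strength and phase
dP = (2 * a * L * s13 * np.sin(2 * t23) * np.sin(Delta)
      * (Z * np.sin(dcp + phi) + W * np.cos(dcp + phi)))
print(f"first-order shift in P_mue: {dP:+.3f}")
```

The resulting shift is a few percent in absolute probability, i.e. a sizable fraction of the SI appearance probability near the first oscillation maximum.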
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/prob_CPTV_app_f1.pdf}
\vspace*{-10mm}
\mycaption{$\nu_\mu\rightarrow\nu_e$ appearance probability as a function of energy in the presence of off-diagonal CPT-violating LIV parameters $a_{e\mu}$ (left column), $a_{e\tau}$ (middle column), and $a_{\mu\tau}$ (right column). The top and bottom rows correspond to the baselines of DUNE ($L=1285$ km) and T2HK ($L=295$ km), respectively. The black line in each panel shows the SI case and four colored lines correspond to four benchmark values of the phases associated with the LIV parameters: $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$ with LIV strength $|a_{\alpha\beta}|=2.0\times10^{-23}$ GeV ($\alpha,\beta=e,\mu,\tau;\alpha\neq\beta$). The vertical grey-dashed lines in each panel show the energies at the first and second oscillation maxima. The values of the standard oscillation parameters used in this plot are mentioned in Table~\ref{tab:params_value}.}
\label{fig:app_prob}
\end{figure}
\begin{table}[htb!]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\theta_{12}$ & $\theta_{13}$ & $\theta_{23}$ & $\delta_{\text{CP}}$ & $\Delta{m^2_{21}}~[\rm{eV}^2]$ & $\Delta{m^2_{31}}~[\rm{eV}^2]$\\
\hline
$33.45^\circ$ & $8.62^\circ$ & $42.1^\circ$ & $230^\circ$ & $7.42\times10^{-5}$ & $2.51\times10^{-3}$\\
\hline
\end{tabular}
\caption{The benchmark values of the oscillation parameters used in our analysis~\cite{Esteban:2020cvm}. We consider normal mass ordering (NMO) throughout this work.}
\label{tab:params_value}
\end{table}
In Fig.~\ref{fig:app_prob}, we plot the $\nu_\mu\rightarrow\nu_e$ oscillation probability as a function of energy for the baselines $L=1285$ km (top row) and $L=295$ km (bottom row), in the SI case and in the presence of the CPT-violating LIV parameters. To plot the exact oscillation probability, we use the GLoBES software~\cite{Huber:2004ka, Huber:2007ji} with its probability calculator modified to incorporate LIV. The values of the standard oscillation parameters used to calculate the probability are given in Table~\ref{tab:params_value}. The left, middle, and right columns correspond to the appearance probability in the presence of $a_{e\mu}$, $a_{e\tau}$, and $a_{\mu\tau}$, one at a time, with strength $|a_{\alpha\beta}|=2\times10^{-23}$ GeV. In each panel, the solid black curve shows the SI case, and the four colored curves correspond to the probabilities in the presence of CPT-violating LIV for the four chosen values of the associated phase, namely, $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$. It is clear from the figure that the impact of $\ensuremath{a_{\mu\tau}}$ is marginal compared to $\ensuremath{a_{e\mu}}$ and $\ensuremath{a_{e\tau}}$. This behavior is evident from our analytical expression in Eq.~\ref{eq:pme_liv}, where $\ensuremath{a_{\mu\tau}}$ does not appear at the leading order, contrary to the other two off-diagonal CPT-violating LIV parameters. In the presence of $\ensuremath{a_{e\mu}}$, the appearance probability shows a significant deviation from the SI case, depending on the value of the associated phase. Near the first oscillation maximum, the probability is maximum when $\phi_{e\mu}=90^\circ$ and minimum when $\phi_{e\mu}=270^\circ$. This can be explained using the terms in Eq.~\ref{eq:p_cptv_liv}, which give the contribution of LIV to the oscillation probability. The sign of $\mathbb{Z}$ ($\mathbb{W}$) in Eq.~\ref{eq:p_cptv_liv} is negative (positive) near the first oscillation maximum.
So, the appearance probability is maximum (minimum) when the terms associated with $\mathbb{Z}$ and $\mathbb{W}$ are negative (positive) and positive (negative), respectively. Since $\delta_{\rm CP}=230^\circ$, this happens at $\phi_{e\mu} = 90^\circ$ ($270^\circ$). In the middle panels, however, we observe that in the presence of $\ensuremath{a_{e\tau}}$, the appearance probability is maximum (minimum) at $\phi_{e\tau}=180^\circ$ ($0^\circ$). This happens because now both $\mathbb{Z}$ and $\mathbb{W}$ are positive, so the maximum (minimum) probability corresponds to the value of $\phi_{e\tau}$ for which both $\sin(\delta_{\rm CP} + \varphi_{e\beta})$ and $\cos(\delta_{\rm CP} + \varphi_{e\beta})$ in Eq.~\ref{eq:p_cptv_liv} are positive (negative). This occurs at $\phi_{e\tau} = 180^\circ$ ($0^\circ$) for our benchmark value of $\delta_{\text{CP}}$.
Although all these features can be observed in both the top and bottom rows, the spread of the oscillation probability due to the phase is significantly smaller in the bottom row, which corresponds to the shorter baseline ($L=295$ km). This is because the LIV contribution in Eq.~\ref{eq:p_cptv_liv} is proportional to $L$.
\FloatBarrier
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/prob_CPTC_app_f1.pdf}
\vspace*{-10mm}
\mycaption{$\nu_\mu\rightarrow\nu_e$ appearance probability as a function of energy in the presence of off-diagonal CPT-conserving LIV parameters $c_{e\mu}$ (left column), $c_{e\tau}$ (middle column), and $c_{\mu\tau}$ (right column). The top and bottom rows correspond to the baselines of DUNE ($L=1285$ km) and T2HK ($L=295$ km), respectively. The black line in each panel shows the SI case and four colored lines correspond to four benchmark values of the phases associated with the LIV parameters: $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$ with LIV strength $|c_{\alpha\beta}|=1.0\times10^{-24}$ ($\alpha,\beta=e,\mu,\tau;\alpha\neq\beta$). The vertical grey-dashed lines in each panel show the energies at the first and second oscillation maxima. The values of the standard oscillation parameters used in this plot are mentioned in Table~\ref{tab:params_value}.}
\label{fig:cptc_app_prob}
\end{figure}
In Fig.~\ref{fig:cptc_app_prob}, we plot the appearance probability in the presence of three off-diagonal CPT-conserving LIV parameters, $\ensuremath{c_{e\mu}}$ (left column), $\ensuremath{c_{e\tau}}$ (middle column), and $\ensuremath{c_{\mu\tau}}$ (right column) with strength $1.0\times10^{-24}$.
Again, we use $L=1285$ km (top row) and $L=295$ km (bottom row). For DUNE ($L=1285$ km), in the presence of $c_{e\beta}$ ($\beta=\mu,\tau$), we observe that the impact of the phase is in the opposite order compared to the corresponding CPT-violating LIV case in Fig.~\ref{fig:app_prob}. This is because the LIV-contributing term carries the opposite sign in the CPT-conserving case, as shown in Eq.~\ref{eq:p_cptc_liv}. As expected, $c_{\mu\tau}$ has an almost negligible effect on the probability in the case of DUNE. For T2HK ($L=295$ km), all three parameters have almost no impact on the oscillation probabilities because of the smaller baseline and comparatively lower neutrino energy.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{./plots/disapp_prob_f1.pdf}
\vspace*{-10mm}
\mycaption{$\nu_\mu\rightarrow\nu_\mu$ disappearance probability as a function of energy in the presence of off-diagonal LIV parameters $a_{\mu\tau}$ (left column) and $c_{\mu\tau}$ (right column). The top and bottom rows correspond to the baselines of DUNE ($L=1285$ km) and T2HK ($L=295$ km), respectively. The black line in each panel shows the SI case and four colored lines correspond to four benchmark values of the phases associated with the LIV parameters: $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$ with LIV strength $|a_{\alpha\beta}|=2.0\times10^{-23}$ GeV, $|c_{\alpha\beta}|=1.0\times10^{-24}$ ($\alpha,\beta=e,\mu,\tau;\alpha\neq\beta$). The vertical grey-dashed lines in each panel show the energies at the first and second oscillation maxima. The values of the standard oscillation parameters used in this plot are mentioned in Table~\ref{tab:params_value}.}
\label{fig:disapp_prob}
\end{figure}
\newline
\newline
\noindent
$\bullet$ \textbf{$\nu_\mu\to\nu_\mu$ Disappearance Channel:}\\
\\
Now we discuss $\nu_\mu\to\nu_\mu$ disappearance probability, another relevant oscillation channel for the long-baseline experiments.
Following the same strategy as the appearance channel, we derive the compact analytical expression for $\nu_\mu\to\nu_\mu$ disappearance probability,
\begin{align}\label{eq:pmm}
P_{\mu\mu} \simeq P_{\mu\mu}(\text{SI}) + P_{\mu\mu}(\ensuremath{a_{\mu\tau}}/\ensuremath{c_{\mu\tau}}) +\mathcal{O}(\alpha^2, \alpha \sin^2\theta_{13},a'^2_{e\beta},c'^2_{e\beta},a'^2_{\mu\tau},c'^2_{\mu\tau}),\,\,\,\,\,\,\,\beta=\mu,\tau.
\end{align}
The first term on the RHS is the standard disappearance probability without any new physics contribution,
\begin{align}
P_{\mu\mu}(\text{SI}) = P_{\mu\mu}(\text{vacuum},\text{two flavor})+\alpha\mathbb{P}+\sin^2\theta_{13}\mathbb{Q}+\alpha\sin\theta_{13}\mathbb{R},
\end{align}
where,
\begin{align}
P_{\mu\mu}(\text{vacuum},\text{two flavor}) = 1-\sin^22\theta_{23}\sin^2\Delta.
\end{align}
$\mathbb{P}$, $\mathbb{Q}$, and $\mathbb{R}$ are defined as follows,
\begin{align}
&\mathbb{P} = \cos^2\theta_{12}\sin^22\theta_{23}\sin2\Delta,\\
&\mathbb{Q} = -4\sin^2\theta_{23}\frac{\sin^2(\hat{A}-1)\Delta}{(\hat{A}-1)^2}-\frac{2}{\hat{A}-1}\sin^22\ensuremath{\theta_{23}}\left(\sin\Delta\cos \hat{A}\Delta\frac{\sin(\hat{A}-1)\Delta}{\hat{A}-1}-\frac{\hat{A}}{2}\Delta\sin2\Delta\right),\\
&\mathbb{R}= 2\sin2\theta_{12}\sin2\theta_{23}\cos\delta_{\rm CP}\cos\Delta\frac{\sin \hat{A}\Delta}{\hat{A}}\frac{\sin(\hat{A}-1)\Delta}{\hat{A}-1}.
\end{align}
Note that in the above expression, we have also neglected the term proportional to $\alpha\sin\ensuremath{\theta_{13}}\cos2\ensuremath{\theta_{23}}$, since it is of the same order as $\alpha \sin^2\theta_{13}$. The contribution from the LIV parameters\footnote{The only off-diagonal LIV parameter that appears at first order in the disappearance probability is $\ensuremath{a_{\mu\tau}}/\ensuremath{c_{\mu\tau}}$. This has already been discussed in Refs.~\cite{Kopp:2007ne,Kikuchi:2008vq} in the case of NSI.} is given by,
\begin{align}\label{eq:Pmm_liv}
P_{\mu\mu}(\ensuremath{a_{\mu\tau}}/\ensuremath{c_{\mu\tau}}) = \frac{\sin^22\ensuremath{\theta_{23}}}{2}\left[2\sin^2\ensuremath{\theta_{13}}\Delta-\mathbb{S}\right] \sin2\Delta,
\end{align}
where,
\begin{align}\label{eq:S}
\mathbb{S} = \begin{cases}
2L\sin2\ensuremath{\theta_{23}}|a_{\mu\tau}|\cos\phi_{\mu\tau}, & \text{CPT-violating LIV}. \\
-\frac{8}{3}E L \sin2\ensuremath{\theta_{23}}|c_{\mu\tau}|\cos\phi_{\mu\tau}, & \text{CPT-conserving LIV}.
\end{cases}
\end{align}
In Appendix~\ref{appndx}, we compare the disappearance probability calculated using the analytical expressions derived above with the same calculated numerically.
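The sign of the LIV shift in the disappearance channel follows directly from Eqs.~\ref{eq:Pmm_liv} and \ref{eq:S}. The sketch below evaluates the pure $\mathbb{S}$ piece of Eq.~\ref{eq:Pmm_liv} for the CPT-violating case at an illustrative energy of 3 GeV, slightly above the first-oscillation-maximum energy at the DUNE baseline, where $2\Delta < \pi$ and hence $\sin 2\Delta > 0$:

```python
import numpy as np

t13, t23 = np.deg2rad(8.62), np.deg2rad(42.1)
dm31 = 2.51e-3 * 1e-18                    # eV^2 -> GeV^2
E, L = 3.0, 1285.0 * 5.0677e18            # GeV (illustrative); km -> GeV^-1
Delta = dm31 * L / (4 * E)

def dP_liv(a_mt, phi):
    # pure LIV piece of Eq. (Pmm_liv): -(sin^2 2theta23 / 2) * S * sin 2Delta,
    # with the CPT-violating S of Eq. (S)
    S = 2 * L * np.sin(2 * t23) * a_mt * np.cos(phi)
    return -np.sin(2 * t23)**2 / 2 * S * np.sin(2 * Delta)

for phi_deg in (0, 180):                  # benchmark phases
    shift = dP_liv(2e-23, np.deg2rad(phi_deg))
    print(f"phi = {phi_deg:3d} deg: shift in P_mumu = {shift:+.3f}")
```

A negative $\mathbb{S}$ ($\phi_{\mu\tau}=180^\circ$ here) raises the disappearance probability and a positive $\mathbb{S}$ lowers it, which is the sign argument used in the discussion of Fig.~\ref{fig:disapp_prob}.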
In Fig.~\ref{fig:disapp_prob}, we show the $\nu_\mu\to\nu_\mu$ disappearance probability as a function of energy for the baselines $L=1285$ km (top row) and $L=295$ km (bottom row), in the SI case as well as in the presence of LIV parameters. In the left column, we show the impact of the CPT-violating LIV parameter $\ensuremath{a_{\mu\tau}}$, whereas the right column shows the CPT-conserving LIV parameter $c_{\mu\tau}$. As discussed earlier, the other two off-diagonal parameters do not appear at the leading order and are expected to have a negligible impact on the disappearance probability. From the left panels, we observe that the phase associated with $\ensuremath{a_{\mu\tau}}$ has a significant impact, with a positive (negative) deviation from the SI case for $\phi_{\mu\tau}=180^\circ$ ($0^\circ$). We can explain this feature using the LIV-contributing terms in the analytical expression (cf. Eq.~\ref{eq:Pmm_liv}): when $\mathbb{S}$ (given in Eq.~\ref{eq:S}) is negative (positive), the disappearance probability is larger (smaller) than its corresponding SI value. As for the appearance probability, the impact of the phases becomes smaller for a shorter baseline, as shown in the lower panels, because the LIV-contributing term is proportional to the baseline length $L$. In the right column, we show the impact of the CPT-conserving LIV parameter. Here we observe that for $L=1285$ km, the disappearance probability is maximum (minimum) for $\phi_{\mu\tau}=0$ ($180^\circ$). This can be understood from the sign of the term $\mathbb{S}$ in Eq.~\ref{eq:S}: when it is negative ($\phi_{\mu\tau}=0$), $P_{\mu\mu}(c_{\mu\tau})$ is larger than when it is positive ($\phi_{\mu\tau}=180^\circ$).
\section{Long-baseline Experiments: DUNE and T2HK}
\label{sec:LBL}
\subsection{Essential Features of the Experimental Setups}
Accelerator-based neutrino oscillation experiments play a very important role in resolving the remaining issues in the standard $3\nu$ paradigm and in exploring various BSM physics in the neutrino sector. Precise information about the neutrino flux, cross-section, and baseline makes these experiments unique. In this work, we probe LIV in the context of the next-generation long-baseline experiments DUNE and T2HK. DUNE is a Fermilab-based future long-baseline experiment with an on-axis, high-intensity, wide-band neutrino beam produced at Fermilab~\cite{DUNE:2020lwj, DUNE:2020jqi, DUNE:2021cuw, DUNE:2021mtg}. The detector would be a 40 kt liquid-argon time projection chamber (LArTPC) placed underground at the Homestake mine, 1285 km from the source. On the other hand, T2HK~\cite{Hyper-KamiokandeProto-:2015xww, Hyper-Kamiokande:2018ofw} is another next-generation long-baseline experiment with an off-axis, narrow-band beam produced at the J-PARC proton synchrotron facility. The beam would be detected at Hyper-Kamiokande, a 187 kt water Cherenkov detector placed at a distance of 295 km from the source with an off-axis angle of $2.5^\circ$. In Table~\ref{tab:exp_details}, we tabulate other relevant information on these two experiments.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline \hline
& DUNE & T2HK\\
\hline
Detector Mass& 40 kt LArTPC & 187 kt WC \\
\hline
Baseline & 1285 km & 295 km \\
\hline
Proton Energy & 120 GeV & 80 GeV \\
\hline
Beam type & Wide-band, on-axis & Narrow-band, off-axis ($2.5^{\circ}$)\\
\hline
Beam power & 1.2 MW & 1.3 MW \\
\hline
P.O.T./year& $1.1\times10^{21}$ & $2.7\times10^{21}$\\
\hline
Run time ($\nu+\bar{\nu}$) & 5 yrs + 5 yrs & 2.5 yrs + 7.5 yrs \\
\hline
Normalization error & 2\% (app.) 5\% (disapp.) & 5\% (app.), 3.5\% (disapp.)\\
\hline\hline
\end{tabular}
\mycaption{Major features of LBL experiments, DUNE~\cite{DUNE:2020lwj} and T2HK~\cite{Hyper-KamiokandeProto-:2015xww} used in our simulation.}
\label{tab:exp_details}
\end{table}
As mentioned earlier, the DUNE setup uses an on-axis, wide-band neutrino beam with an energy range from 0.1 to 12 GeV. This allows DUNE to explore both the oscillation maximum and minimum for the 1285 km baseline, which are around 2.5 GeV and 0.6 GeV, respectively. On the other hand, T2HK has an off-axis, narrow-band beam with an energy peak around 0.6 GeV, which is the first oscillation maximum for T2HK. The narrow-band beam gives the advantage of high statistics at the first oscillation maximum, where the impact of various physics scenarios can be significant. DUNE will have equal runtimes in neutrino and antineutrino mode, which will give a larger number of expected neutrino events than antineutrino events, since the neutrino cross-section is almost three times that of the antineutrino. For T2HK, the antineutrino runtime is three times the neutrino runtime in order to compensate for the suppression in the cross-section. So, depending on the physics under probe, different ratios of neutrino and antineutrino events can be useful. Also, the 1285 km baseline of DUNE allows it to have a large matter effect compared to T2HK, whose baseline is 295 km. Apart from the complementarity in neutrino flux, baseline, and runtime, the detector properties of the two setups are different. As proposed by the collaborations, DUNE has normalization uncertainties of 2\% in the appearance channel and 5\% in the disappearance channel, while for T2HK they are 5\% and 3.5\%, respectively (see Table~\ref{tab:exp_details}). The lower systematics in the appearance channel can give DUNE a comparatively larger sensitivity to physics that mainly affects the appearance channel; a similar argument applies to T2HK in the disappearance channel.
\subsection{Expected Event Rates in the Presence of LIV}
\label{subsec:LBL-Evt-Sim}
As mentioned earlier, in this work, we consider two experimental configurations, namely, DUNE and T2HK, for our analysis. We calculate the expected event rates of these two configurations using the GLoBES software~\cite{Huber:2004ka, Huber:2007ji}. For the oscillation analysis with LIV parameters, we use the GLoBES extension \texttt{snu.c}~\cite{Kopp:2007ne}.
In Table~\ref{tab:total_events}, we give the expected $\nu_e/\bar{\nu}_e$ and $\nu_\mu/\bar{\nu}_\mu$ event rates for DUNE and T2HK in the SI case and in the presence of various LIV parameters. The assumed configurations of the experiments are discussed in detail in Sec.~\ref{sec:LBL} (see Table~\ref{tab:exp_details}). While generating the events, the strength of the CPT-violating (CPT-conserving) LIV parameters is taken to be $2.0\times10^{-23}$ GeV ($1.0\times10^{-24}$), one at a time. We consider the phases associated with the LIV parameters to be zero.
\begin{table}[h!]
\centering
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|ccc|*{8}{c|}}
\hline\hline
\multicolumn{3}{|c|}{\multirow{2}{*}{}} & \multicolumn{2}{|c}{$\nu_e$ appearance} & \multicolumn{2}{|c|}{$\bar\nu_e$ appearance}& \multicolumn{2}{|c}{$\nu_\mu$ disappearance} & \multicolumn{2}{|c|}{$\bar\nu_\mu$ disappearance}\\
\cline{4-11}
\multicolumn{3}{|c|}{} & DUNE & T2HK & DUNE & T2HK & DUNE & T2HK & DUNE & T2HK \\
\hline
\multicolumn{3}{|c|}{SI} & 1614 & 1613 & 292 & 727 & 15624 & 9577 & 9052 & 9074 \\
\cline{1-11}
\hline
\multicolumn{3}{|c|}{$|a_{e\mu}|$ (= $2\times10^{-23}$ GeV)} & 2276 & 1731 & 666 & 901 & 15446 & 9581 & 8891 & 9042\\
\hline
\multicolumn{3}{|c|}{$|a_{e\tau}|$ (= $2\times10^{-23}$ GeV)} & 817 & 1387 & 226 & 697 & 15613 & 9565 & 9063 & 9084\\
\hline
\multicolumn{3}{|c|}{$|a_{\mu\tau}|$ (= $2\times10^{-23}$ GeV)} & 1567 & 1598 & 303 & 735 & 14404 & 9366 & 9237 & 9373\\
\hline
\multicolumn{3}{|c|}{$|c_{e\mu}|$ (= $1.0\times10^{-24}$)} & 1757 & 1610 & 480 & 735 & 15315 & 9575 & 8800 & 9071\\
\hline
\multicolumn{3}{|c|}{$|c_{e\tau}|$ (= $1.0\times10^{-24}$)} & 1792 & 1623 & 296 & 724 & 15634 & 9578 & 9056 & 9074\\
\hline
\multicolumn{3}{|c|}{$|c_{\mu\tau}|$ (= $1.0\times10^{-24}$)} & 1620 & 1614 & 295 & 727 & 16263 & 9600 & 9411 & 9099\\
\hline
\hline
\end{tabular}
\end{adjustbox}
\mycaption{Total signal rate for the $\nu_{e}$ appearance channel and $\nu_{\mu}$ disappearance channel both in neutrino and antineutrino mode for the DUNE and T2HK setups in the SI case as well as in the presence of off-diagonal LIV parameters. The relevant features of these facilities are given in Table~\ref{tab:exp_details}. The strength of the CPT-violating (CPT-conserving) LIV parameters is taken to be $2\times10^{-23}$ GeV ($1.0\times10^{-24}$). The phases associated with the off-diagonal LIV parameters are considered to be zero. The values of the other standard oscillation parameters used to calculate the event rates are quoted in Table~\ref{tab:params_value}.}
\label{tab:total_events}
\end{table}
We make the following observations from Table~\ref{tab:total_events}:
\begin{itemize}
\item In the presence of $a_{e\mu}$ ($a_{e\tau}$), $\nu_e$ event deviates from the SI case by 41\% (49\%) for DUNE and by 7.3\% (14\%) for T2HK.
\item Similarly, we observe that the presence of LIV parameter $a_{\mu\tau}$ changes the expected $\nu_{\mu}$ disappearance event rates by 7.8\% for DUNE and 2.2\% for T2HK from the SI case. In the presence of other LIV parameters, changes in the event rates are very small ($\leq 1\%$).
\item In the presence of $c_{e\mu}$ ($c_{e\tau}$), the $\nu_{e}$ appearance event rates change by $\sim$9\% (11\%) for DUNE, but for T2HK, the changes are very small ($<$ 1\%). However, $c_{\mu\tau}$ changes the $\nu_{\mu}$ event rates by 4\% for DUNE and 0.2\% for T2HK.
\end{itemize}
All these observations are consistent with the results seen at the probability level in the previous section.
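These percentage deviations follow from a simple relative comparison of the LIV and SI event rates. A minimal sketch of this bookkeeping (the SI baseline rate below is an illustrative placeholder, not the actual simulated value) could look like:

```python
def deviation_percent(n_liv, n_si):
    """Relative change (in %) of an event rate in the presence of LIV
    with respect to the standard-interaction (SI) expectation."""
    return 100.0 * (n_liv - n_si) / n_si

# Hypothetical example: an assumed SI nu_e rate of ~1614 events for DUNE
# against the 2276 events quoted for a non-zero a_emu would give ~41%.
print(round(deviation_percent(2276, 1614), 1))
```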
\section{Numerical Analysis}
\label{sec:RnA}
One of the major goals of this work is to study the ability of DUNE and T2HK to constrain various LIV parameters. To do this, we examine the sensitivity of these two setups in realizing LIV.
We define our sensitivity in terms of the Poissonian $\chi^2$, defined as
\begin{equation}
\chi^2 (\vec{\lambda}) = \min_{\xi_{s},\,\xi_{b}}~\Big[{{2\sum_{i=1}^{n}}\big(y_{i}-x_{i}-x_{i} \ln\frac{y_{i}}{x_{i}}\big) + \xi_{s}^2 + \xi_{b}^2}~\Big],
\end{equation}
which gives the median sensitivity of the experiment, where $n$ is the total number of reconstructed energy bins. Here,
\begin{equation}
y_{i} = N^{th}_{i}
(\vec\rho)~[1 + \pi^{s}\xi_{s}] + N^{b}_{i}(\vec\rho)~[1+\pi^b\xi_{b}],
\end{equation}
where $N^{th}_{i}$ is the expected number of signal events in the $i$-th bin with the set of oscillation parameters $\vec\lambda$ = \{$\theta_{12}, \theta_{13}, \theta_{23}, \Delta{m}^2_{21}, \Delta{m}^2_{31}, \delta_{\rm CP}, a_{\alpha\beta}, \phi^{a}_{\alpha\beta}, c_{\alpha\beta}, \phi^{c}_{\alpha\beta}$\}. $\phi^{a}_{\alpha\beta}$ and $\phi^{c}_{\alpha\beta}$ are the phases associated with the CPT-violating and CPT-conserving LIV parameters, respectively. $N^{b}_{i}$ is the number of background events in the $i$-th energy bin. The systematic pulls on the signal and background are denoted by the variables $\xi_{s}$ and $\xi_{b}$, respectively. We marginalize the $\chi^2$ over the set of parameters $\vec\rho$ and also over the systematic pulls ($\xi_{s}$ and $\xi_{b}$) in the fit. The variables $\pi^s$ and $\pi^b$ stand for the normalization error on the signal and background. $x_{i} = N^{obs}_{i} + N^{b}_{i}$ embodies the prospective data from the experiment, where $N^{obs}_{i}$ is the number of charged current signal events and $N^b_{i}$, as mentioned before, is the number of background events.
We quantify our results in terms of the statistical significance given by Poissonian $\Delta{\chi}^2$ defined as,
\begin{equation}
\Delta{\chi}^2 = \min_{\vec\rho, \xi_{s}, \xi_{b}}\Big[\chi^2 (a_{\alpha\beta}/c_{\alpha\beta} \neq 0) - \chi^2 (a_{\alpha\beta}/c_{\alpha\beta} = 0)\Big],\,\,\,\,\,\alpha,\beta=e,\mu,\tau;\alpha\neq\beta.
\end{equation}
The first term on the right-hand side of the above equation is obtained when we fit the prospective data from the experiment with a theory that includes Lorentz invariance violation, and the second term is calculated by fitting with the standard case with no LIV in the theory. Owing to the suppression of statistical fluctuations, we can take $\chi^2 (a_{\alpha\beta}/c_{\alpha\beta} = 0) \sim 0$ while obtaining the median sensitivity of the experiment using the frequentist approach~\cite{Blennow:2013oma}.
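A minimal numerical sketch of this statistic (with a brute-force grid scan over the two pulls standing in for a proper minimizer, and purely illustrative bin contents) could look like:

```python
import math

def poisson_chi2(y, x):
    # Poissonian part: 2 * sum_i ( y_i - x_i - x_i * ln(y_i / x_i) )
    total = 0.0
    for yi, xi in zip(y, x):
        total += yi - xi
        if xi > 0:
            total -= xi * math.log(yi / xi)
    return 2.0 * total

def marginalized_chi2(n_th, n_bg, x, pi_s=0.05, pi_b=0.10):
    """chi^2 minimized over the signal/background pulls xi_s, xi_b
    by a coarse grid scan (a real analysis would use a minimizer)."""
    best = float("inf")
    pulls = [round(-3.0 + 0.1 * k, 1) for k in range(61)]
    for xi_s in pulls:
        for xi_b in pulls:
            # y_i = N_th*(1 + pi_s*xi_s) + N_b*(1 + pi_b*xi_b)
            y = [t * (1 + pi_s * xi_s) + b * (1 + pi_b * xi_b)
                 for t, b in zip(n_th, n_bg)]
            best = min(best, poisson_chi2(y, x) + xi_s**2 + xi_b**2)
    return best

# When the data x match the zero-pull prediction, the minimum is zero.
n_th = [52.0, 80.0, 31.0]   # illustrative signal bins
n_bg = [5.0, 8.0, 3.0]      # illustrative background bins
print(marginalized_chi2(n_th, n_bg, [t + b for t, b in zip(n_th, n_bg)]))
```

The $\Delta\chi^2$ of the next equation is then just this quantity evaluated with and without LIV in the fit hypothesis.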
For our analysis, we use the true values of the standard oscillation parameters as given in Table~\ref{tab:params_value}. In the fit, we keep the two mixing angles $\theta_{12}$ and $\theta_{13}$ and the two mass-splittings $\Delta m^2_{21}$ and $\Delta m^2_{31}$ fixed at their true values, since these parameters are already well measured. Also, we do not marginalize over the neutrino mass ordering, since the global oscillation data hint towards the normal mass ordering. For the correlation between the LIV parameters and $\delta_{\mathrm{CP}}$, we marginalize over $\theta_{23}$ in its allowed $3\sigma$ range~\cite{Esteban:2020cvm} and over the phase $\phi$ associated with the off-diagonal LIV parameters in the range [0, $360^\circ$]. Similarly, for the correlation between the LIV parameters and $\theta_{23}$, we marginalize over $\delta_{\mathrm{CP}}$ in its allowed $3\sigma$ range and over $\phi$ in its entire allowed range.
While deriving the limits on the off-diagonal LIV parameters, we marginalize over $\theta_{23}$, $\delta_{\rm CP}$, and $\phi$.
\section{Our Results}
\label{sec:results}
In this section, we present our results in two parts. First, we discuss the correlation between the LIV parameters and most unsettled standard oscillation parameters $\delta_{\mathrm{CP}}$ and $\theta_{23}$. This helps us to find if there is any degeneracy between the LIV parameters and the standard oscillation parameters. In the second part, we present the expected constraints on the LIV parameters from DUNE, T2HK individually, and their combination DUNE+T2HK.
\subsection{Correlations in test ($\delta_{\rm CP} - |a_{\alpha\beta}|/|c_{\alpha\beta}|$) and test ($\theta_{23} - |a_{\alpha\beta}|/|c_{\alpha\beta}|$) Planes}
\label{subsec:correlation}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/cor-dcp-CPTV-f4.pdf}
\vspace*{-10mm}
\mycaption{95\% C.L. (2 d.o.f.) contours in the $\delta_{\rm CP}-|a_{e\mu}|$ (top row), $\delta_{\rm CP}-|a_{e\tau}|$ (middle row), and $\delta_{\rm CP}-|a_{\mu\tau}|$ (bottom row) planes for DUNE, T2HK, and DUNE+T2HK. Three benchmark values of $\delta_{\mathrm{CP}}$ considered in the data are $180^\circ$ (left column), $230^\circ$ (middle column), and $300^\circ$ (right column), as shown by black dots in each panel. True values of other oscillation parameters are given in Table~\ref{tab:params_value}. In the fit, we marginalize over $\theta_{23}$ in its allowed $3\sigma$ range and new phase $\phi$ in its entire allowed range.}
\label{fig:correlation_dcp_cptv}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/cor-dcp-CPTC-f4.pdf}
\vspace*{-10mm}
\mycaption{95\% C.L. (2 d.o.f.) contours in the $\delta_{\rm CP}-|c_{e\mu}|$ (top row), $\delta_{\rm CP}-|c_{e\tau}|$ (middle row), and $\delta_{\rm CP}-|c_{\mu\tau}|$ (bottom row) planes for DUNE, T2HK, and DUNE+T2HK. Three benchmark values of $\delta_{\mathrm{CP}}$ considered in the data are $180^\circ$ (left column), $230^\circ$ (middle column), and $300^\circ$ (right column), as shown by black dots in each panel. True values of other oscillation parameters are given in Table~\ref{tab:params_value}. In the fit, we marginalize over $\theta_{23}$ in its allowed $3\sigma$ range and new phase $\phi$ in its entire allowed range.}
\label{fig:correlation_dcp_cptc}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/cor-th23-CPTV-f4.pdf}
\vspace*{-10mm}
\mycaption{ 95\% C.L. (2 d.o.f.) contours in the $\ensuremath{\theta_{23}}-|a_{e\mu}|$ (top row), $\ensuremath{\theta_{23}}-|a_{e\tau}|$ (middle row), and $\ensuremath{\theta_{23}}-|a_{\mu\tau}|$ (bottom row) planes for DUNE, T2HK, and DUNE+T2HK. Three benchmark values of $\ensuremath{\theta_{23}}$ considered in the data are $42.1^\circ$ (left column), $45^\circ$ (middle column), and $47.9^\circ$ (right column), as shown by black dots in each panel. True values of other oscillation parameters are given in Table~\ref{tab:params_value}. In the fit, we marginalize over $\delta_{\mathrm{CP}}$ in its allowed $3\sigma$ range and new phase $\phi$ in its entire allowed range.}
\label{fig:correlation_th23_cptv}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/cor-th23-CPTC-f4.pdf}
\vspace*{-10mm}
\mycaption{95\% C.L. (2 d.o.f.) contours in the $\ensuremath{\theta_{23}}-|c_{e\mu}|$ (top row), $\ensuremath{\theta_{23}}-|c_{e\tau}|$ (middle row), and $\ensuremath{\theta_{23}}-|c_{\mu\tau}|$ (bottom row) planes for DUNE, T2HK, and DUNE+T2HK. Three benchmark values of $\ensuremath{\theta_{23}}$ considered in the data are $42.1^\circ$ (left column), $45^\circ$ (middle column), and $47.9^\circ$ (right column), as shown by black dots in each panel. True values of other oscillation parameters are given in Table~\ref{tab:params_value}. In the fit, we marginalize over $\delta_{\mathrm{CP}}$ in its allowed $3\sigma$ range and new phase $\phi$ in its entire allowed range.}
\label{fig:correlation_th23_cptc}
\end{figure}
In Fig.~\ref{fig:correlation_dcp_cptv}, we show the correlations between the CPT-violating LIV parameters and the standard CP phase $\delta_{\rm CP}$. In the fit, we marginalize over the $\theta_{23}$ in its allowed $3\sigma$ range~\cite{Esteban:2020cvm} and the phase $\phi$ associated with the off-diagonal LIV parameters in the range [0, $360^\circ$]. As mentioned earlier, all the other standard oscillation parameters are fixed at their best-fit values given in Table~\ref{tab:params_value}, both in data and theory.
The top, middle, and bottom rows correspond to non-zero $a_{e\mu}$, $a_{e\tau}$, and $a_{\mu\tau}$, respectively, where we consider these LIV parameters one at a time in the data and in the fit. We take three different choices for the value of $\delta_{\rm CP}$ in the data that are allowed within the current $3\sigma$ limits, namely, $180^\circ$ (left panels), $230^\circ$ (middle panels), and $300^\circ$ (right panels), as shown by the black dot in each panel. The red, blue, and black curves in each plot correspond to DUNE, T2HK, and the combination DUNE+T2HK, respectively. Each contour represents the allowed region at 95\% C.L. (2 d.o.f.). We observe from the figure that for all the LIV parameters and all choices of $\delta_{\mathrm{CP}}$ in the data, the allowed regions in the $\delta_{\rm CP}-|a_{\alpha\beta}|$ planes are significantly smaller for DUNE than for T2HK. One can understand this from the analytical expressions of the oscillation probabilities discussed in Sec.~\ref{sec:LIV}. From Eq.~\ref{eq:p_cptv_liv} and Eq.~\ref{eq:Pmm_liv}, we see that the contribution from the CPT-violating LIV parameters is directly proportional to $L$. So DUNE, being the experiment with the longer baseline, shows better sensitivity to the CPT-violating LIV parameters and hence has smaller allowed regions in the $\delta_{\rm CP}-|a_{\alpha\beta}|$ plane compared to T2HK.
In the case of $a_{e\mu}$, we notice another degenerate solution in the $\delta_{\rm CP}-|a_{\alpha\beta}|$ plane, centered around a non-zero value of $|a_{e\mu}|$. This happens for both DUNE and T2HK, around $|a_{e\mu}|\approx1\times 10^{-23}$ GeV and $|a_{e\mu}|\approx4\times 10^{-23}$ GeV, respectively. It mainly occurs due to the degeneracy between $\theta_{23}$ and the complex phases ($\delta_{\rm CP}$ and $\phi$) in the LIV-contributing term in the appearance channel (see Eq.~\ref{eq:p_cptv_liv}), which plays a major role in constraining this parameter. For some combinations of $\theta_{23}$ and $\delta_{\mathrm{CP}}+\phi$, this term is minimized at a non-zero value of $|a_{e\mu}|$, resulting in a degeneracy with the standard oscillation case. As a result, we observe an allowed region around that value of $|\ensuremath{a_{e\mu}}|$. However, for the combined setup DUNE+T2HK, this degeneracy disappears, since the values of $L$ and $E$ in the LIV-contributing terms differ between the two experiments, and we do not observe any allowed region centered around a non-zero value of $|\ensuremath{a_{e\mu}}|$. We also observe that upon combining these two setups, the allowed regions shrink further. In the case of $|\ensuremath{a_{e\tau}}|$ (middle row) and $|\ensuremath{a_{\mu\tau}}|$ (bottom row), we do not observe such degenerate solutions at 95\% C.L. (2 d.o.f.). In both cases, DUNE+T2HK shows a small improvement in sensitivity compared to DUNE alone.
In Fig.~\ref{fig:correlation_dcp_cptc}, we repeat the above analysis for three off-diagonal CPT-conserving LIV parameters, $c_{e\mu}$ (top row), $c_{e\tau}$ (middle row), and $c_{\mu\tau}$ (bottom row). It is clear from the figure that DUNE shows a noticeable correlation between $c_{\alpha\beta}$ ($\alpha,\beta=e,\mu,\tau;\alpha\neq\beta$) and $\delta_{\mathrm{CP}}$. However, for T2HK, there is almost no correlation between those two parameters in all three cases. One can explain it from the fact that CPT-conserving parameters have negligible impact on both appearance and disappearance probabilities for T2HK as shown in the bottom row of Fig.~\ref{fig:app_prob} and bottom right panel of Fig.~\ref{fig:disapp_prob}. As discussed earlier,
this happens because of $L\times E$ dependencies in LIV contributing terms in the CPT-conserving case (see Eq.~\ref{eq:pme_liv} and Eq.~\ref{eq:Pmm_liv}). Since T2HK has a short baseline and low energy neutrino beam compared to DUNE, it shows almost no sensitivity to CPT-conserving LIV parameters.
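The different $L$ and $L\times E$ scalings can be made concrete with a back-of-the-envelope estimate. In the sketch below, the baselines and peak energies ($L\simeq1300$ km, $E\simeq2.5$ GeV for DUNE; $L\simeq295$ km, $E\simeq0.6$ GeV for T2HK) are assumed round numbers for illustration:

```python
KM_TO_INV_GEV = 5.07e18  # 1 km expressed in natural units (GeV^-1), via hbar*c

def phase_cpt_violating(a_gev, L_km):
    """Size of the LIV-induced oscillation phase ~ a * L (dimensionless)."""
    return a_gev * L_km * KM_TO_INV_GEV

def phase_cpt_conserving(c, L_km, E_gev):
    """Size of the LIV-induced oscillation phase ~ c * L * E (dimensionless)."""
    return c * L_km * KM_TO_INV_GEV * E_gev

# CPT-violating case (a = 1e-23 GeV): DUNE's advantage scales with L only.
ratio_a = phase_cpt_violating(1e-23, 1300.0) / phase_cpt_violating(1e-23, 295.0)
# CPT-conserving case (c = 1e-24): the advantage scales with L * E.
ratio_c = phase_cpt_conserving(1e-24, 1300.0, 2.5) / phase_cpt_conserving(1e-24, 295.0, 0.6)
print(ratio_a, ratio_c)  # DUNE/T2HK: ~4.4 for a-type terms, ~18 for c-type terms
```

The roughly fourfold enhancement for $a$-type terms grows to an order of magnitude for $c$-type terms, which is consistent with T2HK's near-insensitivity to the CPT-conserving coefficients.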
In Fig.~\ref{fig:correlation_th23_cptv}, we show the correlation between the CPT-violating LIV parameters $a_{\alpha\beta}$ ($\alpha,\beta=e,\mu,\tau$; $\alpha\neq\beta$) and $\theta_{23}$. We consider three values of $\theta_{23}$ in the data, namely $42.1^\circ$ (left column) in the lower octant, $45^\circ$ (middle column) for maximal mixing, and $47.9^\circ$ (right column) in the upper octant\footnote{We choose $\theta_{23}=42.1^\circ$ in the lower octant as it is the current best-fit value from the global fit of the oscillation parameters~\cite{Esteban:2020cvm}, and we take the corresponding value ($47.9^\circ$) in the upper octant.}. In the fit, we marginalize over $\delta_{\mathrm{CP}}$ in its allowed $3\sigma$ range and over the corresponding LIV phases in their entire allowed range. We observe that for both DUNE and T2HK, the best result is obtained when the true value of $\theta_{23}$ is in the lower octant, where the allowed region is relatively small compared to the other two cases.
Similar to the $\delta_{\mathrm{CP}}-a_{\alpha\beta}$ correlation, DUNE performs significantly better than T2HK for all three choices of true $\theta_{23}$. As discussed earlier, this is because of the $L$ dependence of the CPT-violating LIV-contributing terms in the oscillation probabilities.
Here also, we observe degenerate allowed regions at non-zero values of $a_{e\mu}$ and $a_{e\tau}$, which appear in the opposite octant of $\theta_{23}$ for both DUNE and T2HK. This happens because of the degeneracy between $\theta_{23}$ and the CP phases $\delta_{\mathrm{CP}}$ and $\phi$ in the LIV-contributing term of the oscillation probabilities.
When the data from DUNE and T2HK are combined, the allowed region becomes smaller, and interestingly, the degenerate regions appearing for the individual setups vanish, as shown by the black contour in each panel. In Fig.~\ref{fig:correlation_th23_cptc}, we show the same for the CPT-conserving LIV parameters. Here also, the best results are obtained when $\theta_{23}$ is in the lower octant for the individual setups. We observe that DUNE shows a notable correlation with $\theta_{23}$, whereas T2HK shows almost no correlation with the CPT-conserving LIV parameters. As mentioned before, T2HK has almost no sensitivity to the CPT-conserving LIV parameters because of its smaller baseline and lower neutrino energy.
However, we observe a slight improvement in the allowed region when we combine the data from DUNE and T2HK.
\subsection{Constraints on CPT-violating and CPT-conserving LIV parameters}
\label{subsec:constraints}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/chi-square-profile-f.pdf}
\vspace*{-10mm}
\mycaption{ The expected limits on the CPT-violating and CPT-conserving LIV parameters
from DUNE (red curves), T2HK (blue curves), and DUNE+T2HK (black curves).
The upper (lower) panels show the $\Delta\chi^2$ for the off-diagonal CPT-violating (CPT-conserving) LIV parameters, considered one at a time. The true values of $\theta_{23}$ and $\delta_{\mathrm{CP}}$ are kept at their best-fit values given in Table~\ref{tab:params_value}. We marginalize over $\theta_{23}$ and $\delta_{\mathrm{CP}}$ in their allowed $3\sigma$ ranges in the fit. Apart from $\theta_{23}$ and $\delta_{\mathrm{CP}}$, we also marginalize over the associated LIV phases in their entire allowed range [$0^{\circ}$, $360^{\circ}$].}
\label{fig:cptv_liv_bounds}
\end{figure}
In the previous section, we discussed the correlation of the LIV parameters with the most uncertain standard oscillation parameters, $\delta_{\mathrm{CP}}$ and $\theta_{23}$, in the context of DUNE, T2HK, and their combination. In this section, we present the limits on the off-diagonal LIV parameters that would be obtained by these three setups. As discussed earlier, in our simulation, we marginalize over $\theta_{23}$, $\delta_{\text{CP}}$, and the phase associated with the off-diagonal LIV parameters in the fit (see Sec.~\ref{sec:RnA} for details).
In Fig.~\ref{fig:cptv_liv_bounds}, we show $\Delta \chi^2$ as a function of the off-diagonal CPT-violating (top row) and CPT-conserving (bottom row) LIV parameters. The red, blue, and black lines in each panel correspond to the sensitivity of the DUNE, T2HK, and DUNE+T2HK setups, respectively. The top left panel corresponds to the $a_{e\mu}$ case, where we see that DUNE shows better sensitivity than T2HK at 95\% C.L. Interestingly, for both DUNE and T2HK, there is a local minimum in the $\Delta\chi^2$, around $1\times10^{-23}$ GeV and $4\times10^{-23}$ GeV, respectively. This feature can be explained using the correlation of the LIV parameters with the standard oscillation parameters $\theta_{23}$ and $\delta_{\mathrm{CP}}$ discussed in Sec.~\ref{subsec:correlation}. We observe that there are allowed regions in the $\delta_{\mathrm{CP}}-|a_{e\mu}|$ and $\theta_{23}-|a_{e\mu}|$ planes at around the same value of $|\ensuremath{a_{e\mu}}|$ where the local minimum occurs. Since this parameter is mainly constrained by the appearance channel, it hints towards a degeneracy between the appearance probability in the absence of any new physics and the same in the presence of LIV for some combination of $\theta_{23}$, $\delta_{\mathrm{CP}}$, and the new phase $\phi$. However, this degeneracy vanishes as we combine the data from DUNE and T2HK, giving a more stringent limit on $|\ensuremath{a_{e\mu}}|$. In the top middle panel, we show the sensitivity for $|\ensuremath{a_{e\tau}}|$. Here also, a local minimum of $\Delta\chi^2$ is observed for the individual setups DUNE and T2HK, which again occurs due to the degeneracy among $\theta_{23}$, $\delta_{\mathrm{CP}}$, and $\phi$ (see the top middle panels of Fig.~\ref{fig:correlation_dcp_cptv} and Fig.~\ref{fig:correlation_th23_cptv}). Adding the data from the two experiments removes this local minimum.
The top right panel shows the constraints on $\ensuremath{a_{\mu\tau}}$ for the three setups. We observe that DUNE gives significantly better limits on $\ensuremath{a_{\mu\tau}}$ than T2HK. Unlike for $\ensuremath{a_{e\mu}}$ and $\ensuremath{a_{e\tau}}$, we do not observe a local minimum of $\Delta\chi^2$ here. This is because $\ensuremath{a_{\mu\tau}}$ is mainly constrained by the disappearance channel, where such a degeneracy among $\theta_{23}$ and the phases does not occur.
In the lower panels of Fig.~\ref{fig:cptv_liv_bounds}, we show the constraints on the CPT-conserving LIV parameters. As is clear from the oscillation probability plots in Fig.~\ref{fig:cptc_app_prob} (bottom row) and Fig.~\ref{fig:disapp_prob} (bottom right panel), T2HK has almost no sensitivity to the CPT-conserving LIV parameters. However, when the data from T2HK and DUNE are combined, the sensitivity improves slightly for all three off-diagonal parameters.
We tabulate our results by showing the limits at 95\% C.L. in Table~\ref{tab:constraints_a}. The second and third columns show the limits from DUNE and T2HK individually, and the fourth column gives the limit from the combination DUNE+T2HK.
Note that for the bounds on $|\ensuremath{a_{e\mu}}|$ and $|\ensuremath{a_{e\tau}}|$, we consider the most conservative scenario, $\textit{i.e.}$, the largest value of $a_{e\beta}$ that reaches the 95\% C.L. value. For $a_{e\beta}$ ($\beta = \mu,\tau$), the constraints obtained from DUNE are almost five times better than those of T2HK. Moreover, combining the data from DUNE and T2HK improves the limits further, by a factor of $\approx 3$ for $\ensuremath{a_{e\mu}}$ and $\approx 2$ for $\ensuremath{a_{e\tau}}$. For $\ensuremath{a_{\mu\tau}}$ as well, the constraints from DUNE outperform T2HK by approximately a factor of five. However, for DUNE+T2HK, the improvement is small ($\approx 12\%$) compared to the other two off-diagonal CPT-violating LIV parameters. In the case of the CPT-conserving LIV parameters, the constraints from DUNE are far better than those of T2HK, as the latter shows almost no sensitivity to the CPT-conserving LIV parameters. However, combining the data from the two experiments yields a marginal improvement in the limits.
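The relative improvements discussed here can be read straight off the CPT-violating entries of Table~\ref{tab:constraints_a}; a short arithmetic check:

```python
# 95% C.L. upper limits on the CPT-violating parameters, in units of
# 1e-23 GeV, as listed in Table (tab:constraints_a).
limits = {
    "a_emu":   {"DUNE": 1.00, "T2HK": 5.15, "DUNE+T2HK": 0.32},
    "a_etau":  {"DUNE": 1.05, "T2HK": 5.30, "DUNE+T2HK": 0.55},
    "a_mutau": {"DUNE": 1.26, "T2HK": 5.50, "DUNE+T2HK": 1.10},
}

for name, b in limits.items():
    dune_over_t2hk = b["T2HK"] / b["DUNE"]    # how much tighter DUNE is
    combo_gain = b["DUNE"] / b["DUNE+T2HK"]   # gain from combining the setups
    print(f"{name}: DUNE tighter by x{dune_over_t2hk:.1f}, "
          f"DUNE+T2HK gains x{combo_gain:.1f} over DUNE")
```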
\begin{table}[h!]
\begin{center}
\begin{adjustbox}{width=0.7\textwidth}
\begin{tabular}{|c|c|c|c|c|}
\hline\hline
&DUNE& T2HK & DUNE+T2HK & T2K+NO$\nu$A\\
\hline
$|a_{e\mu}|~[10^{-23}~\rm {GeV}]$ & $<$ 1.0 & $<$ 5.15 & $<$ 0.32 & $<$ 6.1\\
\hline
$|a_{e\tau}|~[10^{-23}~\rm {GeV}]$& $<$ 1.05 & $<$ 5.3 & $<$ 0.55& $<$ 7.0\\
\hline
$|a_{\mu\tau}|~[10^{-23}~\rm {GeV}]$& $<$ 1.26 & $<$ 5.5 & $<$ 1.1& $<$ 8.3\\
\hline
$|c_{e\mu}|~[10^{-24}]$ & $<$ 0.66 & $<$ 17.1 & $<$ 0.64 & $<$ 11.0 \\
\hline
$|c_{e\tau}|~[10^{-24}]$& $<$ 1.65 & $<$ 71.1 & $<$ 1.49& $<$ 37.5\\
\hline
$|c_{\mu\tau}|~[10^{-24}]$& $<$ 0.97 & $<$ 42.4 & $<$ 0.95 & $<$ 29.0\\
\hline\hline
\end{tabular}
\end{adjustbox}
\mycaption{Expected bounds on the off-diagonal CPT-violating and CPT-conserving LIV parameters at 95\% C.L. (1 d.o.f.) using DUNE, T2HK, and the combination of DUNE and T2HK. Last column shows the results using the combination of T2K and NO$\nu$A with their full exposures. }
\label{tab:constraints_a}
\end{center}
\end{table}
For a comparison with the currently running long-baseline experiments T2K and NO$\nu$A, in the last column, we provide the expected bounds from the combination of T2K and NO$\nu$A considering their full exposures. We assume a total exposure of 84.4 kt$\cdot$MW$\cdot$yrs for T2K~\cite{T2K:2001wmr, T2K:2011qtm, T2K:2014xyt}, with five years of total runtime divided equally between neutrino and anti-neutrino modes. For NO$\nu$A~\cite{Ayres:2002ws, NOvA:2004blv, NOvA:2007rmc, Patterson:2012zs}, we consider a total exposure of 58.8 kt$\cdot$MW$\cdot$yrs with six years of runtime, three years each in neutrino and anti-neutrino modes. We observe that DUNE puts significantly better constraints on both the CPT-violating and CPT-conserving LIV parameters than the T2K+NO$\nu$A setup, because the former has a larger baseline and smaller systematic uncertainties. However, the constraints on the CPT-violating LIV parameters from the T2K+NO$\nu$A setup are close to those of T2HK. Since NO$\nu$A has a comparatively larger baseline ($L=810$ km), it has the upper hand in placing stringent bounds on the CPT-violating parameters, while T2HK has smaller systematic uncertainties that compensate for its short baseline. Similarly, for the CPT-conserving LIV parameters, the limits from the T2K+NO$\nu$A setup are of the same order as those of T2HK, with the former giving slightly better constraints. This is because, apart from the larger baseline, the neutrino beam energy is also higher for NO$\nu$A, which plays a vital role in constraining the CPT-conserving LIV parameters.
The fourth column of Table~\ref{tab:constraints_a} shows the ultimate constraints on the off-diagonal CPT-conserving and CPT-violating LIV parameters that would be set by the combination of the two next-generation long-baseline experiments DUNE and T2HK at 95\% C.L. In Table~\ref{tab:existing_bounds}, we show the existing bounds on some CPT-violating and CPT-conserving LIV parameters from the atmospheric neutrino experiment Super-K and the astrophysical neutrino experiment IceCube. Comparing them with our results in Table~\ref{tab:constraints_a}, we observe that DUNE alone may be able to give better constraints on $\ensuremath{a_{e\mu}}$ and $\ensuremath{a_{e\tau}}$ than the existing bounds from Super-K. Combining DUNE and T2HK can improve the constraints on $a_{e\beta}$ ($\beta=\mu,\tau$) by almost one order of magnitude. One reason for this is that $a_{e\beta}$ ($\beta=\mu,\tau$) are mainly constrained by the appearance channel, which is the most important oscillation channel for a next-generation long-baseline experiment like DUNE with a considerably larger baseline. For an atmospheric neutrino experiment like Super-K, the major channel is the $\nu_\mu\to\nu_\mu$ disappearance channel, in which these two LIV parameters do not appear at leading order. For $\ensuremath{a_{\mu\tau}}$, the projected results from the DUNE+T2HK setup are of the same order as Super-K, with the latter having a slightly better limit. In the case of the CPT-conserving LIV parameters, we observe that the existing limits from Super-K are at least one order of magnitude better for $\ensuremath{c_{e\mu}}$ and $\ensuremath{c_{\mu\tau}}$ compared to our results for DUNE+T2HK. For $\ensuremath{c_{e\tau}}$, the limits are comparable, with Super-K having a slightly better limit. This mainly happens because the contribution of the CPT-conserving LIV parameters to the oscillation probabilities in both channels is proportional to both the neutrino energy and the baseline.
An atmospheric neutrino experiment like Super-K probes a significantly larger range of neutrino energies and baselines than an LBL experiment like DUNE, so we expect Super-K to have better limits on the CPT-conserving LIV parameters.
\section{Summary and Conclusions}
\label{sec:SnC}
In the past few decades, data from outstanding neutrino oscillation experiments, either completed or currently operational, have almost settled the measurement of the standard three-flavor neutrino oscillation parameters with excellent precision. Apart from resolving a few remaining issues in the three-neutrino paradigm, another major goal of the next-generation neutrino oscillation experiments will be to search for various kinds of physics beyond the Standard Model, which will open up a new era in particle physics.
With that motivation, in this work, we probe Lorentz invariance violation and its impact on neutrino flavor transitions in the context of the two most anticipated upcoming long-baseline experiments, DUNE and T2HK. Lorentz invariance violation can be realized in low-energy effective field theories, where the LIV interaction terms in the Lagrangian arise as products of Lorentz-violating coefficients and Lorentz-violating operators of arbitrary mass dimension. The coefficients of the dimension-three and dimension-four operators are CPT-violating and CPT-conserving, respectively. In this work, for the first time, we have explored the CPT-conserving LIV parameters in the context of long-baseline experiments. Here, we focus on the isotropic components of the CPT-violating and CPT-conserving LIV parameters. The presence of non-zero CPT-violating and CPT-conserving parameters modifies the neutrino propagation Hamiltonian and hence the oscillation probabilities, making them worth studying in neutrino oscillation experiments.
To have an analytical understanding of the impact of the various LIV parameters on the neutrino oscillation probabilities, we derive simple approximate analytical expressions of the $\nu_\mu\to\nu_e$ appearance probability and the $\nu_\mu\to\nu_\mu$ disappearance probability using a perturbative approach, up to first order in $\alpha$, $\sin^2\theta_{13}$, and the LIV parameters $a_{\alpha\beta}/c_{\alpha\beta}$ ($\alpha,\beta = e,\mu,\tau;\alpha\neq\beta$). We find that for the appearance channel, $a_{e\mu}/c_{e\mu}$ and $a_{e\tau}/c_{e\tau}$ appear at leading order, whereas for the disappearance channel, only $a_{\mu\tau}/c_{\mu\tau}$ appears. Our analytical expressions explain various features of the oscillation probabilities shown in Figs.~\ref{fig:app_prob}-\ref{fig:disapp_prob}, where we plot the exact oscillation probabilities numerically. We explain how the impact of LIV on the oscillation probabilities depends on the value of the phase associated with the off-diagonal LIV parameters with the help of the LIV-contributing terms in the oscillation probability given in Eqs.~\ref{eq:p_cptv_liv},~\ref{eq:p_cptc_liv}, and~\ref{eq:Pmm_liv}. Also, we find that the LIV-contributing terms in our analytical expressions are proportional to $L$ in the CPT-violating case and to $L\times E$ in the CPT-conserving case, in both the appearance and disappearance channels. As a result, DUNE, the experiment with the larger baseline and higher neutrino beam energy, shows significantly larger sensitivity to LIV than T2HK. As shown in the lower panels of Fig.~\ref{fig:cptc_app_prob}, T2HK shows negligible sensitivity to the CPT-conserving LIV parameters.
Using the configurations of DUNE and T2HK as tabulated in Table~\ref{tab:exp_details}, we calculate the expected total event rates of the two setups in the standard case and in the presence of the off-diagonal CPT-conserving and CPT-violating LIV parameters, considered one at a time (see Table~\ref{tab:total_events}). As expected from the probability analysis, we observe a major change in the event rate from the SI case in the presence of $a_{e\mu}$ ($a_{e\tau}$), which shows a 41\% (49\%) increase in the event rate for DUNE and 7.3\% (14\%) for T2HK when we consider $|a_{e\beta}|=2\times10^{-23}$ GeV ($\beta=\mu,\tau$). In the disappearance channel, the presence of $a_{\mu\tau}$ modifies the event rate by 7.8\% for DUNE and 2.2\% for T2HK. The CPT-conserving parameters, for which we consider the strength $|c_{\alpha\beta}|=1\times 10^{-24}$, show comparatively small changes in the event rates, with a maximum change of 11\% for DUNE and $<1\%$ for T2HK.
We discuss the correlation between the various LIV parameters and the most uncertain oscillation parameters, $\theta_{23}$ and $\delta_{\mathrm{CP}}$ (see Sec.~\ref{subsec:correlation}). To demonstrate this, we show the allowed regions at 95\% C.L. (2 d.o.f.) in the $\delta_{\mathrm{CP}}-|a_{\alpha\beta}|/|c_{\alpha\beta}|$ (see Fig.~\ref{fig:correlation_dcp_cptv} and Fig.~\ref{fig:correlation_dcp_cptc}) and $\theta_{23}-|a_{\alpha\beta}|/|c_{\alpha\beta}|$ (see Fig.~\ref{fig:correlation_th23_cptv} and Fig.~\ref{fig:correlation_th23_cptc}) planes. In all cases, DUNE shows a more constrained allowed region in the above-mentioned planes, as expected from the probability plots. We notice almost no correlation between the CPT-conserving LIV parameters and the standard oscillation parameters for T2HK, as discussed earlier. In the $\delta_{\mathrm{CP}}-|a_{e\mu}|$ case, we observe some allowed regions centered around a non-zero value of $|a_{e\mu}|$ for both DUNE and T2HK. This happens due to the degeneracy between $\theta_{23}$ and the phases $\delta_{\mathrm{CP}}$ and $\phi$, which minimizes the contribution from LIV at some non-zero value of $|a_{e\mu}|$. Also, in the case of the $\theta_{23}-|a_{e\mu}|$ correlation, we observe such degenerate regions in the octant opposite to the value of $\theta_{23}$ considered in the data, again due to the degeneracy among these three parameters. In both cases, the degenerate allowed regions vanish upon combining the data from DUNE and T2HK. Moreover, the allowed regions in both the $\delta_{\mathrm{CP}}-|a_{\alpha\beta}|$ and $\theta_{23}-|a_{\alpha\beta}|$ planes improve for the DUNE+T2HK setup, although for the CPT-conserving case this improvement is marginal.
From the discussion of the correlation between the LIV parameters and the standard oscillation parameters $\delta_{\mathrm{CP}}$ and $\theta_{23}$ in DUNE and T2HK, we get an idea about the sensitivity of these two setups to the off-diagonal CPT-conserving and CPT-violating LIV parameters. To quantify the sensitivity, we present the limits (see Fig.~\ref{fig:cptv_liv_bounds} and Table~\ref{tab:constraints_a}) on the LIV parameters that DUNE, T2HK, and their combination are expected to set with their full exposures. For the CPT-violating LIV parameters, DUNE shows almost five times better constraints than T2HK.
Also, for $|a_{e\beta}|$ ($\beta = \mu,\tau$), we observe a local minimum in $\Delta\chi^2$, which deteriorates the constraints from DUNE and T2HK. This happens due to the degeneracy between $\theta_{23}$ and the phases $\delta_{\mathrm{CP}}$ and $\phi$, which is evident from the correlation plots.
For the DUNE+T2HK combination, the limits improve significantly for $|a_{e\beta}|$ ($\beta = \mu, \tau$) as the above-discussed degeneracies vanish. For $a_{\mu\tau}$ also, DUNE outperforms T2HK, and their combination results in a small improvement in the limits. T2HK shows almost no sensitivity to the CPT-conserving LIV parameters. As a result, T2HK produces much worse constraints than DUNE. To compare the limits from the future LBL experiments with the currently operating ones, we show the expected constraints obtained by combining the T2K and NO$\nu$A setups with their full exposures (see the last column of Table~\ref{tab:constraints_a}). We observe that for the CPT-violating LIV parameters, although the bounds from T2K+NO$\nu$A are worse than those from DUNE, they are comparable to those of T2HK. This is mainly because of the longer baseline of NO$\nu$A compared to T2HK, even though the latter has better systematic uncertainties. For the CPT-conserving parameters, T2K+NO$\nu$A gives slightly better constraints than T2HK, again because the former has a longer baseline and a higher energy of the neutrino beam. We also compare our results with the existing limits on the CPT-violating and CPT-conserving parameters from Super-K, listed in Table~\ref{tab:existing_bounds}. We find that for the CPT-violating parameters, especially $a_{e\beta}$ ($\beta=\mu,\tau$), the projected limits at 95\% C.L. for the DUNE and T2HK combination are better than those from Super-K. This is because the limits on $a_{e\beta}$ are mainly driven by the appearance channel, which is a major oscillation channel for LBL experiments, whereas atmospheric neutrino experiments like Super-K mainly probe the disappearance channel. For the CPT-conserving parameters, Super-K shows almost two orders of magnitude better constraints than the DUNE+T2HK setup. This is because of the higher energies of the neutrinos in atmospheric neutrino experiments.
In this work, we discuss the impact of LIV on neutrino flavor transition probabilities in the context of the next-generation long-baseline experiments DUNE and T2HK and explore the ability of these two setups to constrain the CPT-violating and CPT-conserving LIV parameters. We conclude that although these two setups have good complementarity in their configurations, the smaller baseline and lower energy of the neutrino beam for T2HK make it less favorable for probing LIV as compared to DUNE. However, combining DUNE and T2HK can improve the limits on the LIV parameters to a certain extent. We hope that our present study can be an important addition to the several interesting beyond-the-Standard-Model scenarios that can be probed in the next-generation long-baseline neutrino oscillation experiments.
\newpage
\subsubsection*{Acknowledgments}
We acknowledge the support from the Department of Atomic Energy (DAE), Govt. of India, under the Project Identification Number RIO 4001. S.K.A. is supported by the Young Scientist Research Grant [INSA/SP/YSP/144/2017/1578] from the Indian National Science Academy (INSA). S.K.A. acknowledges the financial support from the Swarnajayanti Fellowship (sanction order No. DST/SJF/PSA- 05/2019-20) provided by the Department of Science and Technology (DST), Govt. of India, and the Research Grant (sanction order No. SB/SJF/2020-21/21) provided by the Science and Engineering Research Board (SERB), Govt. of India, under the Swarnajayanti Fellowship project. We thank M. Singh and A. Kumar for useful communications. S.K.A would like to thank the United States-India Educational Foundation for providing the financial support through the Fulbright-Nehru Academic and Professional Excellence Fellowship (Award No. 2710/F-N APE/2021). The numerical simulations are carried out using SAMKHYA: High-Performance Computing Facility at Institute of Physics, Bhubaneswar.
\begin{appendix}
\renewcommand\thefigure{A\arabic{figure}}
\renewcommand\theequation{A\arabic{equation}}
\setcounter{figure}{0}
\setcounter{equation}{0}
\section{Comparison Between Numerical and Analytical Probabilities}
\label{appndx}
\begin{figure}[h!]
\centering
\includegraphics[width=0.86\textwidth]{plots/Probability-Pme.pdf}
\mycaption{Comparison between exact $\nu_{\mu}\to\nu_e$ appearance probability calculated numerically (dashed lines) with the same calculated analytically (solid lines) using Eq.~\ref{eq:pme_liv} for a baseline $L=1285$ km. The top (bottom) row corresponds to the probability in the presence of CPT-violating (CPT-conserving) LIV parameters $a_{e\beta}$ ($c_{e\beta}$) with $\beta=\mu$ in the left column and $\beta=\tau$ in the right column. Four colored curves correspond to four values of the associated phase, as mentioned in the legend. The values of the standard oscillation parameters used in the plot are given in Table~\ref{tab:params_value} with NMO.\label{fig:Pme-comparison}}
\end{figure}
In this section, we check the accuracy of the approximate analytical expressions of the oscillation probabilities derived in Sec.~\ref{sec:LIV}. In Fig.~\ref{fig:Pme-comparison}, we compare the $\nu_\mu\rightarrow\nu_e$ appearance probability in the presence of $a_{e\mu}/c_{e\mu}$ and $a_{e\tau}/c_{e\tau}$, calculated using the analytical expression, with the exact oscillation probability calculated numerically using the GLoBES software, for DUNE ($L = 1285$ km). We consider only these LIV parameters since they appear at first order in the expansion parameters. We show the results for four values of the phase associated with the LIV parameters, namely, $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$. We observe that for the CPT-violating LIV parameters, the agreement with the exact appearance probability is slightly worse than in the CPT-conserving case, as the assumed strength of the LIV parameters in the CPT-violating case is one order of magnitude larger than in the corresponding CPT-conserving case.
\begin{figure}[h!]
\centering
\includegraphics[width=0.86\textwidth]{plots/Probabilty-Pmm.pdf}
\mycaption{Comparison between exact $\nu_\mu\to\nu_{\mu}$ disappearance probability calculated numerically (dashed lines) with the same calculated analytically (solid lines) using Eq.~\ref{eq:pme_liv} for a baseline $L=1285$ km. The left (right) column corresponds to the probability in the presence of CPT-violating (CPT-conserving) LIV parameter $a_{\mu\tau}$ ($c_{\mu\tau}$). Four colored curves correspond to four values of the associated phase, as mentioned in the legend. The values of the standard oscillation parameters used in the plot are given in Table~\ref{tab:params_value} with NMO.\label{fig:Pmm-comparison}}
\end{figure}
However, the features of the appearance probabilities for different values of the LIV phases are preserved by the analytical expression. In Fig.~\ref{fig:Pmm-comparison}, we show the same for the $\nu_\mu\to\nu_\mu$ disappearance probability. Here, we show the effect of the LIV parameters $a_{\mu\tau}/c_{\mu\tau}$, since only these terms appear up to first order. Again, we consider four values of the associated phases as in Fig.~\ref{fig:Pme-comparison}. We find that the oscillation probabilities calculated analytically match the numerical ones quite well. This holds for all assumed values of the associated phases, both in the CPT-violating and CPT-conserving scenarios. The same is true for T2HK.
\end{appendix}
\bibliographystyle{JHEP}
\section{Introduction}
A tensor is a multi-dimensional array of numbers, which is a generalization of a matrix. Compared to a ``flat'' matrix, a tensor provides a richer and more natural representation for many data. In this paper, we focus on the third-order tensor which looks like a magic cube. This format of data is widely used in color image and gray-scale video inpainting \cite{bertalmio2000image, komodakis2006image, liu2013tensor, korah2007spatiotemporal, chan2011an, jiang2017a}, hyperspectral image (HSI) data recovery \cite{li2012coupled, zhao2013deblurring, li2010tensor, xing2012dictionary}, personalized web search \cite{sun2005cubesvd:}, high-order web link analysis \cite{kolda2005higher-order}, magnetic resonance imaging (MRI) data recovery \cite{varghees2012adaptive}, and seismic data reconstruction \cite{kreimer2012a}.
Like the matrix decomposition, the tensor decomposition is an important multilinear algebra tool. There are many different tensor decompositions. The CANDECOMP/PARAFAC (CP) decomposition \cite{CPdecomposition} and the Tucker decomposition \cite{tucker1966some} are the two most well-known ones. The CP decomposition can be considered as the higher order generalization of the matrix singular value decomposition (SVD). It tries to decompose a tensor into a sum of rank-one tensors. Similar to the rank-one matrix, a third-order rank-one tensor can be written as the outer product of three vectors. The CP-rank of a tensor is defined as the minimum number of rank-one tensors whose sum generates the original tensor. This definition is an analog of the definition of matrix rank. The Tucker decomposition is the higher order generalization of the principal component analysis (PCA). It decomposes a tensor into a core tensor multiplied by a matrix along each mode. The Tucker rank based on the Tucker decomposition is a vector whose $i$-th element is the rank of the mode-$i$ unfolding matrix.
In recent years, Kilmer and Martin \cite{kilmer2011factorization, martin2013an, kilmer2013third-order} proposed a third-order tensor decomposition called the tensor singular value decomposition (t-SVD). This decomposition strategy is based on the definition of the tensor product (see Section 2). After performing the one-dimensional discrete Fourier transformation (DFT) on the third dimension of the tensor, this tensor product makes tensor decomposition an analog of matrix decomposition. This strategy avoids the loss of structure information caused by matricization of the tensor. However, because the one-dimensional DFT is performed on the third dimension, the obtained tensor is complex. These complex numbers lead to a higher computational cost and are not necessary. A natural question is whether another transformation can be used instead of the DFT to avoid this disadvantage. The discrete cosine transformation (DCT) \cite{ng1999a}, which expresses a finite sequence in terms of a sum of cosine functions, is a natural first alternative.
DCT produces only real numbers for real input. This feature greatly reduces the amount of data processed in t-SVD, thus saving considerable time. There is another difference: DFT implies periodic boundary conditions (BCs), while DCT implies reflexive BCs, which yield a continuous extension at the boundaries \cite{ng1999a}. If the signal satisfies reflexive BCs (as real data often does), the new t-SVD based on DCT can achieve better results than the DFT-based one. We give the theoretical derivation of the DCT-based t-SVD and verify its superiority over the DFT-based one.
The rest of this paper is as follows. In Section 2, we introduce some related notations and the original t-SVD with DFT background. In Section 3, we propose the theoretical derivation of new t-SVD with DCT. Based on the new t-SVD, we introduce the new tensor nuclear norm in Section 4. We conduct extensive experiments to demonstrate the effectiveness of the proposed method in Section 5. In Section 6, we give some concluding remarks.
\section{Notations and Preliminaries}
In this section, we introduce the basic notations and give the definitions related to the t-SVD.
We use non-bold lowercase letters for scalars, e.g., $x$, boldface lowercase letters for vectors, e.g., $\mathbf{x}$, boldface capital letters for matrices, e.g., $\mathbf{X}$, boldface Calligraphy letters for tensors, e.g., $\mathcal{X}$. $\mathbb{R}$ and $\mathbb{C}$ represent the field of real number and complex number, respectively. For a third-order tensor $\mathcal{X}$, we use the MATLAB notations $\mathcal{X}(i,:,:)$, $\mathcal{X}(:,j,:)$, and $\mathcal{X}(:,:,k)$ to denote the horizontal, lateral, and frontal slices, respectively, and $\mathcal{X}(:,j,k)$, $\mathcal{X}(i,:,k)$, and $\mathcal{X}(i,j,:)$ to denote the columns, rows, and tubes, respectively. For convenience, we use $\mathbf{X}^{(k)}$ for the $k$th \textbf{frontal slice} and $\mathbf{x}_{ij:}$ for the $(i,j)$-th \textbf{tube} $\mathcal{X}(i,j,:)$. Both $\mathcal{X}(i,j,k)$ and $x_{ijk}$ represent the $(i,j,k)$-th element. The Frobenius norm of $\mathcal{X}$ is defined as $\left \| \mathcal{X} \right \|_{F} := (\sum_{i,j,k}|x_{ijk}|^{2})^{\frac{1}{2}}$. It is easy to see that $\left \| \mathcal{X} \right \|_{F}^{2} = \sum_{n=1}^{m_{3}} \left \| \mathbf{X}^{(n)} \right \|_{F}^{2}$.
Next, we introduce some definitions that are closely related to t-SVD. We use $\tilde{\mathcal{X}} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$ to represent the discrete Fourier transform of $\mathcal{X} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$ along each tube, i.e., $\tilde{\mathcal{X}}=\mathrm{fft}(\mathcal{X},[ \thinspace],3)$. The block circulant matrix \cite{martin2013an, kilmer2013third-order} is defined as
\begin{equation} \label{bcirc}
\text{bcirc}(\mathcal{X}):=\left[
\begin{array}{cccc}
\mathbf{X}^{(1)} & \mathbf{X}^{(m_{3})} & \cdots & \mathbf{X}^{(2)} \\
\mathbf{X}^{(2)} & \mathbf{X}^{(1)} & \cdots & \mathbf{X}^{(3)} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{X}^{(m_{3})} & \mathbf{X}^{(m_{3}-1)} & \cdots & \mathbf{X}^{(1)} \\
\end{array}
\right].
\end{equation}
The block diagonal matrix and the corresponding inverse operator \cite{martin2013an, kilmer2013third-order} are defined as
\begin{equation} \label{bdiag}
\text{bdiag}(\mathcal{X}):=\left[
\begin{array}{cccc}
\mathbf{X}^{(1)} & & & \\
& \mathbf{X}^{(2)} & & \\
& & \ddots & \\
& & & \mathbf{X}^{(m_{3})} \\
\end{array}
\right],
\end{equation}
$$
\text{unbdiag}(\text{bdiag}(\mathcal{X}))=\mathcal{X}.
$$
The unfold and fold operators in t-SVD \cite{martin2013an, kilmer2013third-order} are defined as
\begin{equation}\label{fold}
\text{unfold}(\mathcal{X}):=\left [
\begin{array}{c}
\mathbf{X}^{(1)} \\
\mathbf{X}^{(2)} \\
\vdots \\
\mathbf{X}^{(m_{3})}
\end{array} \right ], \quad
\text{fold}(\text{unfold}(\mathcal{X}))=\mathcal{X}.
\end{equation}
An important point is that the block circulant matrix can be block diagonalized.
\begin{theorem}[\cite{kilmer2011factorization}]\label{block diagonalized}
\begin{equation}
\text{bdiag}(\tilde{\mathcal{X}})=(\mathbf{F}_{m_{3}} \otimes \mathbf{I}_{m_{1}})\text{bcirc}(\mathcal{X})(\mathbf{F}^{H}_{m_{3}} \otimes \mathbf{I}_{m_{2}}),
\end{equation}
where $\otimes$ denotes the Kronecker product, $\mathbf{F}_{m_{3}}$ is an $m_{3} \times m_{3}$ DFT matrix and $\mathbf{I}_{m}$ is an $m \times m$ identity matrix.
\end{theorem}
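Theorem \ref{block diagonalized} can be checked numerically. The following Python sketch (ours, for illustration only; not part of the original derivation) builds the block circulant matrix of a random tensor and compares both sides of the identity, taking $\mathbf{F}_{m_{3}}$ to be the unitary DFT matrix so that $\mathbf{F}_{m_{3}}^{H} = \mathbf{F}_{m_{3}}^{-1}$:

```python
import numpy as np

def bcirc(X):
    """Block circulant matrix of an (m1, m2, m3) tensor, frontal slices as blocks."""
    m1, m2, m3 = X.shape
    out = np.zeros((m1 * m3, m2 * m3))
    for i in range(m3):        # block row
        for j in range(m3):    # block column: slice index (i - j) mod m3
            out[i*m1:(i+1)*m1, j*m2:(j+1)*m2] = X[:, :, (i - j) % m3]
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
m1, m2, m3 = X.shape

F = np.fft.fft(np.eye(m3)) / np.sqrt(m3)   # unitary DFT matrix
lhs = np.kron(F, np.eye(m1)) @ bcirc(X) @ np.kron(F.conj().T, np.eye(m2))

Xt = np.fft.fft(X, axis=2)                 # tube-wise DFT of X
rhs = np.zeros((m1 * m3, m2 * m3), dtype=complex)
for k in range(m3):
    rhs[k*m1:(k+1)*m1, k*m2:(k+1)*m2] = Xt[:, :, k]

print(np.allclose(lhs, rhs))  # True
```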
\begin{definition}[t-product \cite{kilmer2013third-order}]
Given $\mathcal{X} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$ and $\mathcal{Y} \in \mathbb{C}^{m_{2} \times m_{4} \times m_{3}}$, the t-product $\mathcal{X} \ast \mathcal{Y}$ is a third-order tensor of size $m_{1} \times m_{4} \times m_{3}$
\end{definition}
\begin{equation}\label{tproduct}
\mathcal{Z}=\mathcal{X} \ast \mathcal{Y} := \text{fold}(\text{bcirc}(\mathcal{X})\text{unfold}(\mathcal{Y})).
\end{equation}
This definition is the core of t-SVD. It is like a one-dimensional circular convolution of two vectors under periodic BCs, but with the frontal slices of the tensors playing the role of vector elements. With Theorem \ref{block diagonalized}, equation (\ref{tproduct}) can be rewritten as
\begin{equation}\label{t-product DFT}
\begin{split}
\tilde{\mathcal{Z}} &= \text{fold}(\text{bdiag}(\tilde{\mathcal{X}})((\mathbf{F}_{m_{3}} \otimes \mathbf{I}_{m_{2}})\text{unfold}(\mathcal{Y}))) \\
&=\text{fold}(\text{bdiag}(\tilde{\mathcal{X}})\text{unfold}(\tilde{\mathcal{Y}}))\\
&=\text{unbdiag}(\text{bdiag}(\tilde{\mathcal{X}})\text{bdiag}(\tilde{\mathcal{Y}})).
\end{split}
\end{equation}
Equation (\ref{t-product DFT}) means that the t-product in the spatial domain corresponds to the matrix multiplication of the frontal slices in the Fourier domain, which greatly simplifies the process of the algorithm.
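As an illustrative check (ours, not from the original papers), the equivalence between the spatial-domain t-product and the slice-wise matrix products in the Fourier domain can be verified as follows:

```python
import numpy as np

def t_product_spatial(X, Y):
    """t-product via the block circulant matrix: fold(bcirc(X) unfold(Y))."""
    m1, m2, m3 = X.shape
    _, m4, _ = Y.shape
    bc = np.zeros((m1 * m3, m2 * m3))
    for i in range(m3):
        for j in range(m3):
            bc[i*m1:(i+1)*m1, j*m2:(j+1)*m2] = X[:, :, (i - j) % m3]
    unf = Y.transpose(2, 0, 1).reshape(m2 * m3, m4)   # unfold(Y): stacked frontal slices
    Z = bc @ unf
    return Z.reshape(m3, m1, m4).transpose(1, 2, 0)    # fold

def t_product_fourier(X, Y):
    """t-product via slice-wise matrix products in the Fourier domain."""
    Xt, Yt = np.fft.fft(X, axis=2), np.fft.fft(Y, axis=2)
    Zt = np.einsum('ijk,jlk->ilk', Xt, Yt)             # per-slice matrix product
    return np.real(np.fft.ifft(Zt, axis=2))

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((3, 4, 5)), rng.standard_normal((4, 2, 5))
print(np.allclose(t_product_spatial(X, Y), t_product_fourier(X, Y)))  # True
```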
\begin{definition}[identity tensor \cite{kilmer2013third-order}]
The identity tensor $\mathcal{I} \in \mathbb{C}^{m_{1} \times m_{1} \times m_{3}}$ is a tensor whose first frontal slice is the identity matrix of size $m_{1} \times m_{1}$, and whose other frontal slices are all zeros.
\end{definition}
\begin{definition}[orthogonal tensor \cite{kilmer2013third-order}]
A tensor $\mathcal{Q} \in \mathbb{C}^{m_{1} \times m_{1} \times m_{3}}$ is orthogonal if it satisfies $\mathcal{Q} \ast \mathcal{Q}^{H} = \mathcal{Q}^{H} \ast \mathcal{Q} = \mathcal{I}$, where $\mathcal{Q}^{H}$ is the tensor conjugate transpose of $\mathcal{Q}$, which is obtained by conjugate transposing each frontal slice of $\mathcal{Q}$.
\end{definition}
\begin{definition}[f-diagonal tensor \cite{kilmer2013third-order}]
A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.
\end{definition}
\begin{theorem}[t-SVD \cite{kilmer2013third-order, kilmer2011factorization}]
Given a tensor $\mathcal{X} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$, the t-SVD of $\mathcal{X}$ is given by
\begin{equation}\label{tSVD}
\mathcal{X} = \mathcal{U} \ast \mathcal{S} \ast \mathcal{V}^{H} ,
\end{equation}
where $\mathcal{U} \in \mathbb{C}^{m_{1} \times m_{1} \times m_{3}}$,$\mathcal{V} \in \mathbb{C}^{m_{2} \times m_{2} \times m_{3}}$ are orthogonal tensors, and $\mathcal{S} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$ is a f-diagonal tensor.
\end{theorem}
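The factorization can be computed by slice-wise SVDs in the Fourier domain. The sketch below (ours, illustrative only) keeps the factors in the Fourier domain and verifies the reconstruction $\mathcal{X} = \mathcal{U} \ast \mathcal{S} \ast \mathcal{V}^{H}$:

```python
import numpy as np

def t_svd_fourier(X):
    """t-SVD factors, kept in the Fourier domain (illustrative sketch)."""
    m1, m2, m3 = X.shape
    Xt = np.fft.fft(X, axis=2)
    Ut = np.zeros((m1, m1, m3), dtype=complex)
    St = np.zeros((m1, m2, m3), dtype=complex)
    Vt = np.zeros((m2, m2, m3), dtype=complex)
    for k in range(m3):
        U, s, Vh = np.linalg.svd(Xt[:, :, k])   # full matrix SVD of each slice
        Ut[:, :, k], Vt[:, :, k] = U, Vh.conj().T
        np.fill_diagonal(St[:, :, k], s)         # f-diagonal slice
    return Ut, St, Vt

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5))
Ut, St, Vt = t_svd_fourier(X)

# Reconstruct slice-wise (U S V^H per slice), then invert the tube-wise FFT.
Xrec = np.real(np.fft.ifft(np.einsum('iak,abk,jbk->ijk', Ut, St, np.conj(Vt)), axis=2))
print(np.allclose(X, Xrec))  # True
```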
\begin{figure}[!htp]
\centering
\includegraphics[width=0.8\textwidth]{fig/tsvd.pdf}
\caption{the t-SVD of an $m_{1} \times m_{2} \times m_{3}$ tensor. }\label{figT-SVD}
\end{figure}
\begin{definition}[tensor multi-rank and tubal rank \cite{zhang2014novel}]
Given $\mathcal{X} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$, its multi-rank is a vector $\mathbf{r} \in \mathbb{R}^{m_{3}}$ whose $i$-th element is the rank of the $i$-th frontal slice of $\tilde{\mathcal{X}}$, i.e., $\mathbf{r}_{i} = rank(\tilde{\mathbf{X}}^{(i)})$. Its tubal rank is defined as the number of nonzero singular tubes, where the singular tubes of $\mathcal{X}$ are the nonzero tubes of $\mathcal{S}$.
\end{definition}
The tensor tubal rank is actually the largest element of the multi-rank.
\begin{definition}[tensor nuclear norm \cite{lu2016tensor, semerci2014tensor-based}]
Given $\mathcal{X} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$, based on the tensor multi-rank, the tensor nuclear norm (TNN) of $\mathcal{X}$ is defined as
\begin{equation}\label{tnn}
\left \| \mathcal{X} \right \|_{\ast} := \frac{1}{m_{3}} \sum_{k=1}^{m_{3}} \left \| \tilde{\mathbf{X}}^{(k)} \right \|_{\ast} .
\end{equation}
\end{definition}
To avoid confusion with the new definition of TNN proposed later, we refer to this definition as TNN-F in this paper.
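For concreteness, TNN-F can be evaluated directly from the Fourier-domain frontal slices; the following snippet is an illustrative sketch (the function name is ours):

```python
import numpy as np

def tnn_f(X):
    """TNN-F: average of the nuclear norms of the Fourier-domain frontal slices."""
    m3 = X.shape[2]
    Xt = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xt[:, :, k], 'nuc') for k in range(m3)) / m3

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 4, 5))
# For m3 = 1 the FFT is the identity, so TNN-F reduces to the matrix nuclear norm.
print(np.isclose(tnn_f(X[:, :, :1]), np.linalg.norm(X[:, :, 0], 'nuc')))  # True
```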
The computation of the t-SVD of an $m_{1} \times m_{2} \times m_{3}$ tensor needs two steps. The first step is to perform the DFT by the fast Fourier transformation (FFT) along each tube. The time complexity of the first step is $O(m_{1} m_{2} m_{3}\log(m_{3}))$. After the DFT, the obtained tensor is a complex tensor, which can be divided into a real part and an imaginary part. The second step is to compute the SVD of each frontal slice of the obtained tensor, which costs roughly as much as performing SVDs on the real part and the imaginary part separately. The time complexity of the second step is $O(2m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$, which usually dominates the total computational cost.
\section{Cosine Transform Based Tensor Singular Value Decomposition}
We discuss the DCT-based t-SVD and the resulting structure in this section. Since the corresponding block circulant matrices can be diagonalized by the DFT, the DFT-based t-SVD can be efficiently implemented via the fast Fourier transform (FFT). We will show that the corresponding structure of the DCT-based t-SVD can be block diagonalized by the DCT.
We define the shift of tensor $\mathcal{A} = \text{fold}\left [
\begin{array}{c}
\mathbf{A}^{(1)} \\
\mathbf{A}^{(2)} \\
\vdots \\
\mathbf{A}^{(m_{3})}
\end{array} \right ]$ as $\sigma(\mathcal{A}) = \text{fold}\left [
\begin{array}{c}
\mathbf{A}^{(2)} \\
\mathbf{A}^{(3)} \\
\vdots \\
\mathbf{A}^{(m_{3})}\\
\mathbf{O}
\end{array} \right ]$.
It is easy to prove that any tensor $\mathcal{X}$ can be uniquely decomposed as $\mathcal{A} + \sigma(\mathcal{A})$.
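The component $\mathcal{A}$ can be recovered from $\mathcal{X}$ by a backward recursion over the frontal slices, since $\mathbf{X}^{(m_{3})} = \mathbf{A}^{(m_{3})}$ and $\mathbf{X}^{(k)} = \mathbf{A}^{(k)} + \mathbf{A}^{(k+1)}$ for $k < m_{3}$. A small illustrative sketch (function names are ours):

```python
import numpy as np

def split_shift(X):
    """Recover A such that X = A + sigma(A), by backward recursion over slices."""
    A = np.zeros_like(X)
    m3 = X.shape[2]
    A[:, :, m3 - 1] = X[:, :, m3 - 1]
    for k in range(m3 - 2, -1, -1):
        A[:, :, k] = X[:, :, k] - A[:, :, k + 1]
    return A

def sigma(A):
    """Shift operator: slices move up by one, last slice becomes zero."""
    out = np.zeros_like(A)
    out[:, :, :-1] = A[:, :, 1:]
    return out

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 4, 5))
A = split_shift(X)
print(np.allclose(A + sigma(A), X))  # True
```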
We use $\bar{\mathcal{X}} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$ to represent the DCT along each tube of $\mathcal{X}$, i.e., $\bar{\mathcal{X}}=\mathrm{dct}(\mathcal{X},[\thinspace],3)=\mathrm{dct}(\mathcal{A}+\sigma(\mathcal{A}),[ \thinspace],3)$. We define the block Toeplitz matrix of $\mathcal{A}$ as
\begin{equation} \label{btplz}
\text{bt}(\mathcal{A}):=\left[
\begin{array}{ccccc}
\mathbf{A}^{(1)} & \mathbf{A}^{(2)} & \cdots & \mathbf{A}^{(m_{3}-1)} & \mathbf{A}^{(m_{3})} \\
\mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(m_{3}-2)} & \mathbf{A}^{(m_{3}-1)} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\mathbf{A}^{(m_{3}-1)} & \mathbf{A}^{(m_{3}-2)} & \cdots & \mathbf{A}^{(1)} & \mathbf{A}^{(2)} \\
\mathbf{A}^{(m_{3})} & \mathbf{A}^{(m_{3}-1)} & \cdots & \mathbf{A}^{(2)} & \mathbf{A}^{(1)} \\
\end{array}
\right].
\end{equation}
The block Hankel matrix is defined as
\begin{equation} \label{bhkl}
\text{bh}(\mathcal{A}):=\left[
\begin{array}{ccccc}
\mathbf{A}^{(2)} & \mathbf{A}^{(3)} & \cdots & \mathbf{A}^{(m_{3})} & \mathbf{O} \\
\mathbf{A}^{(3)} & \mathbf{A}^{(4)} & \cdots & \mathbf{O} & \mathbf{A}^{(m_{3})} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\mathbf{A}^{(m_{3})} & \mathbf{O} & \cdots & \mathbf{A}^{(4)} & \mathbf{A}^{(3)} \\
\mathbf{O} & \mathbf{A}^{(m_{3})} & \cdots & \mathbf{A}^{(3)} & \mathbf{A}^{(2)} \\
\end{array}
\right].
\end{equation}
The block Toeplitz-plus-Hankel matrix of $\mathcal{A}$ is defined as
\begin{equation} \label{btph}
\text{btph}(\mathcal{A}):= \text{bt}(\mathcal{A}) + \text{bh}(\mathcal{A}).
\end{equation}
The block Toeplitz-plus-Hankel matrix can be block diagonalized by the DCT, as stated in the following theorem.
\begin{theorem}\label{dct block diagonalized}
\begin{equation}
\text{bdiag}(\bar{\mathcal{X}})=(\mathbf{C}_{m_{3}} \otimes \mathbf{I}_{m_{1}})\text{btph}(\mathcal{A})(\mathbf{C}^{T}_{m_{3}} \otimes \mathbf{I}_{m_{2}}),
\end{equation}
where $\otimes$ denotes the Kronecker product, $\mathbf{C}_{m_{3}}$ is an $m_{3} \times m_{3}$ DCT matrix.
\end{theorem}
The proof of Theorem \ref{dct block diagonalized} can be obtained by using a similar argument as in \cite{ng1999a}.
We briefly illustrate this theorem with an example.
\begin{example}
Let the frontal slice of $\mathcal{X} \in \mathbb{R}^{2 \times 2 \times 2}$ be
$$
\mathbf{X}^{(1)} = \left[
\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}
\right], \quad \mathbf{X}^{(2)} = \left[
\begin{array}{cc}
5 & 6 \\
7 & 8
\end{array}
\right].
$$
So the component $\mathcal{A}$ is
$$
\mathbf{A}^{(1)} =\mathbf{X}^{(1)}-\mathbf{X}^{(2)} = \left[
\begin{array}{cc}
-4 & -4 \\
-4 & -4
\end{array}
\right], \quad \mathbf{A}^{(2)} =
\mathbf{X}^{(2)} = \left[
\begin{array}{cc}
5 & 6 \\
7 & 8
\end{array}
\right].
$$
The block Toeplitz matrix is
$$
\text{bt}(\mathcal{A}) =\left [
\begin{array}{cc}
\mathbf{A}^{(1)} & \mathbf{A}^{(2)} \\
\mathbf{A}^{(2)} & \mathbf{A}^{(1)} \end{array}
\right ] = \left[
\begin{array}{cccc}
-4 & -4 & 5 & 6 \\
-4 & -4 & 7 & 8 \\
5 & 6 & -4 & -4 \\
7 & 8 & -4 & -4
\end{array}
\right],
$$
and the block Hankel matrix is
$$
\text{bh}(\mathcal{A}) =\left [
\begin{array}{cc}
\mathbf{A}^{(2)} & 0 \\
0 & \mathbf{A}^{(2)} \end{array}
\right ] = \left[
\begin{array}{cccc}
5 & 6 & 0 & 0 \\
7 & 8 & 0 & 0 \\
0 & 0 & 5 & 6 \\
0 & 0 & 7 & 8
\end{array}
\right].
$$
Then the block Toeplitz-plus-Hankel matrix is
$$
\text{btph}(\mathcal{A}) = \text{bt}(\mathcal{A})+\text{bh}(\mathcal{A}) = \left[
\begin{array}{cccc}
1 & 2 & 5 & 6 \\
3 & 4 & 7 & 8 \\
5 & 6 & 1 & 2 \\
7 & 8 & 3 & 4
\end{array}
\right].
$$
By using stride permutations, we get
$$
\mathbf{P} \text{btph}(\mathcal{A}) \mathbf{P} = \left[
\begin{array}{cccc}
1 & 5 & 2 & 6 \\
5 & 1 & 6 & 2 \\
3 & 7 & 4 & 8 \\
7 & 3 & 8 & 4
\end{array}
\right] = \left[
\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
\mathbf{C} & \mathbf{D}
\end{array}
\right],
$$
where $\mathbf{P} = \left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right]$
and $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ are Toeplitz-plus-Hankel matrices.
So we have
$$
(\mathbf{C}_{2} \otimes \mathbf{I}_{2})\text{btph}(\mathcal{A})(\mathbf{C}^{T}_{2} \otimes \mathbf{I}_{2})=(\mathbf{C}_{2} \otimes \mathbf{I}_{2})\mathbf{P}\mathbf{P}\text{btph}(\mathcal{A})\mathbf{P}\mathbf{P}(\mathbf{C}^{T}_{2} \otimes \mathbf{I}_{2}),
$$
where $\mathbf{C}_{2}$ is a $2 \times 2$ DCT matrix. In this equation, it is easy to see that
$$
\mathbf{P}(\mathbf{C}_{2} \otimes \mathbf{I}_{2})\mathbf{P}= \left[
\begin{array}{cc}
\mathbf{C}_{2} & 0 \\
0 & \mathbf{C}_{2}
\end{array}
\right].
$$
Similarly,
$$
\mathbf{P}(\mathbf{C}_{2}^{T} \otimes \mathbf{I}_{2})\mathbf{P} = \left[
\begin{array}{cc}
\mathbf{C}_{2}^{T} & 0 \\
0 & \mathbf{C}_{2}^{T}
\end{array}
\right].
$$
Hence, we have
\begin{align}
(\mathbf{C}_{2} \otimes \mathbf{I}_{2})\text{btph}(\mathcal{A})(\mathbf{C}^{T}_{2} \otimes \mathbf{I}_{2}) &= \mathbf{P}\left[
\begin{array}{cc}
\mathbf{C}_{2} & 0 \\
0 & \mathbf{C}_{2}
\end{array}
\right]\left[
\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
\mathbf{C} & \mathbf{D}
\end{array}
\right]\left[
\begin{array}{cc}
\mathbf{C}_{2}^{T} & 0 \\
0 & \mathbf{C}_{2}^{T}
\end{array}
\right]\mathbf{P} \nonumber\\
&= \mathbf{P} \left[
\begin{array}{cc}
\mathbf{C}_{2} \mathbf{A} \mathbf{C}_{2}^{T} & \mathbf{C}_{2} \mathbf{B} \mathbf{C}_{2}^{T} \\
\mathbf{C}_{2} \mathbf{C} \mathbf{C}_{2}^{T} & \mathbf{C}_{2} \mathbf{D} \mathbf{C}_{2}^{T}
\end{array}
\right] \mathbf{P} \nonumber\\
& = \left[
\begin{array}{cccc}
6 & 8 & 0 & 0 \\
10 & 12 & 0 & 0 \\
0 & 0 & -4 & -4 \\
0 & 0 & -4 & -4
\end{array}
\right]. \nonumber
\end{align}
Now, it is easy to verify
\begin{align}
\text{bdiag}(\bar{\mathcal{X}}) &= \text{bdiag}(\text{dct}(\mathcal{A}+\sigma(\mathcal{A}),[\thinspace],3)) \nonumber \\
&= (\mathbf{C}_{2} \otimes \mathbf{I}_{2})\text{btph}(\mathcal{A})(\mathbf{C}^{T}_{2} \otimes \mathbf{I}_{2}). \nonumber
\end{align}
\end{example}
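The computation in the example above can be reproduced in a few lines (an illustrative sketch, not part of the paper); here $\mathbf{C}_{2}$ is taken as the orthonormal DCT matrix, which matches the numbers in the example:

```python
import numpy as np

# Frontal slices of the 2 x 2 x 2 tensor from the example.
X1 = np.array([[1., 2.], [3., 4.]])
X2 = np.array([[5., 6.], [7., 8.]])
A1, A2 = X1 - X2, X2                      # X = A + sigma(A)

btph = np.block([[A1 + A2, A2],           # bt(A) + bh(A) for m3 = 2
                 [A2, A1 + A2]])
C2 = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # 2 x 2 orthonormal DCT matrix
I2 = np.eye(2)

D = np.kron(C2, I2) @ btph @ np.kron(C2.T, I2)
# The diagonal blocks are X1 + X2 = [[6, 8], [10, 12]] and
# X1 - X2 = [[-4, -4], [-4, -4]], matching the example.
print(np.round(D).astype(int))
```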
\begin{definition}[DCT-based t-product]
Given $\mathcal{X} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$ and $\mathcal{Y} \in \mathbb{R}^{m_{2} \times m_{4} \times m_{3}}$, the DCT-based t-product $\mathcal{X} \ast \mathcal{Y}$ is a third-order tensor of size $m_{1} \times m_{4} \times m_{3}$:
\end{definition}
\begin{equation}\label{ntproduct}
\mathcal{Z}=\mathcal{X} \ast \mathcal{Y} := \text{fold}(\text{btph}(\mathcal{A})\text{unfold}(\mathcal{Y})),
\end{equation}
where $\mathcal{X} = \mathcal{A}+\sigma(\mathcal{A})$.
Equation (\ref{ntproduct}) can be rewritten as
\begin{equation}\label{tproduct dct}
\begin{split}
\bar{\mathcal{Z}} &= \text{fold}(\text{bdiag}(\bar{\mathcal{X}})((\mathbf{C}_{m_{3}} \otimes \mathbf{I}_{m_{2}})\text{unfold}(\mathcal{Y}))) \\
&=\text{fold}(\text{bdiag}(\bar{\mathcal{X}})\text{unfold}(\bar{\mathcal{Y}})).
\end{split}
\end{equation}
Based on this new t-product, the DCT-based t-SVD can be defined as follows:
\begin{theorem}[DCT-based t-SVD]
Given a tensor $\mathcal{X} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$, the DCT-based t-SVD of $\mathcal{X}$ is given by
\begin{equation}\label{ctSVD}
\mathcal{X} = \mathcal{U} \ast \mathcal{S} \ast \mathcal{V}^{T} ,
\end{equation}
where $\mathcal{U} \in \mathbb{R}^{m_{1} \times m_{1} \times m_{3}}$,$\mathcal{V} \in \mathbb{R}^{m_{2} \times m_{2} \times m_{3}}$ are orthogonal tensors, $\mathcal{S} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$ is a f-diagonal tensor, and $\mathcal{V}^{T}$ is the tensor transpose of $\mathcal{V}$, which is obtained by transposing each frontal slice of $\mathcal{V}$.
\end{theorem}
The proof of Theorem 4 can be obtained by using a similar argument as in \cite{kilmer2013third-order}.
By exploiting this structure, the DCT-based t-SVD can be efficiently calculated by performing the matrix singular value decomposition on each frontal slice of the third-order tensor after the DCT along each tube. For an $m_{1} \times m_{2} \times m_{3}$ tensor, the time complexity of performing the DCT along each tube in the first step is $O(m_{1} m_{2} m_{3}\log(m_{3}))$ for the DCT-based t-SVD, which is the same as that of the DFT-based t-SVD. Since the DCT produces only real numbers, the time complexity of calculating the SVDs is $O(m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$ for the DCT-based t-SVD, which is half that of the DFT-based t-SVD.
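A sketch of the DCT-based t-SVD follows (illustrative only; SciPy's norm='ortho' convention may differ from the paper's transform by constant scalings, but the reconstruction check below is scaling-independent):

```python
import numpy as np
from scipy.fft import dct, idct

def t_svd_dct(X):
    """DCT-based t-SVD sketch: orthonormal DCT along tubes, then real slice SVDs."""
    m1, m2, m3 = X.shape
    Xbar = dct(X, axis=2, norm='ortho')
    Ub = np.zeros((m1, m1, m3))
    Sb = np.zeros((m1, m2, m3))
    Vb = np.zeros((m2, m2, m3))
    for k in range(m3):
        U, s, Vh = np.linalg.svd(Xbar[:, :, k])  # real SVD: half the work of DFT case
        Ub[:, :, k], Vb[:, :, k] = U, Vh.T
        np.fill_diagonal(Sb[:, :, k], s)
    return Ub, Sb, Vb

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 4, 5))
Ub, Sb, Vb = t_svd_dct(X)
# Reconstruct slice-wise in the DCT domain, then invert the tube-wise DCT.
X_rec = idct(np.einsum('iak,abk,jbk->ijk', Ub, Sb, Vb), axis=2, norm='ortho')
print(np.allclose(X, X_rec))  # True
```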
\begin{table}[htbp]
\small
\centering
\setlength{\abovecaptionskip}{0pt}%
\setlength{\belowcaptionskip}{10pt}%
\renewcommand\arraystretch{0.9}
\caption{The time complexity of t-SVD and DCT-based t-SVD on an $m_{1} \times m_{2} \times m_{3}$ tensor.}
\begin{tabular}{cc}
\hline
tensor & $m_{1} \times m_{2} \times m_{3}$ \bigstrut\\
\hline
DFT & $O(m_{1} m_{2} m_{3}\log(m_{3}))$ \bigstrut[t]\\
SVD after DFT & $O(2m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$ \\
t-SVD & $O(m_{1} m_{2} m_{3}\log(m_{3}))+O(2m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$ \bigstrut[b]\\
\hline
\hline
DCT & $O(m_{1} m_{2} m_{3}\log(m_{3}))$ \bigstrut[t]\\
SVD after DCT & $O(m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$ \\
new t-SVD & $O(m_{1} m_{2} m_{3}\log(m_{3}))+O(m_{3} \min(m_{1} m_{2}^{2}, m_{2} m_{1}^{2}))$ \bigstrut[b]\\
\hline
\end{tabular}%
\label{tabTestCost1}%
\end{table}%
\section{Low-rank Tensor Completion by TNN-C}
Based on the DCT-based t-SVD, we propose a new definition of TNN, called TNN-C, in this section. Then, we establish the low-rank tensor completion model \cite{jiang2017a} based on TNN-C and develop an alternating direction method of multipliers (ADMM) to solve it.
\begin{definition}[TNN-C]
Given $\mathcal{X} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$, TNN-C of $\mathcal{X}$ is defined as
\begin{equation}\label{equTNN1}
\left \| \mathcal{X}\right \|_{\ast} =\frac{1}{m_{3}} \sum_{i=1}^{m_{3}} \left \| \bar{\mathbf{X}}^{(i)} \right \|_{\ast}.
\end{equation}
\end{definition}
It is easy to see that TNN-C of $\mathcal{X}$ is, up to the factor $1/m_{3}$, the sum of the singular values of all frontal slices of $\bar{\mathcal{X}}$. Meanwhile, the $i$-th element of the multi-rank is the rank of the $i$-th frontal slice of $\bar{\mathcal{X}}$. Thus, TNN-C is a convex surrogate of the $l_{1}$ norm of a third-order tensor's multi-rank.
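TNN-C can be evaluated analogously to TNN-F, with the tube-wise DCT in place of the FFT; an illustrative sketch (assuming SciPy's orthonormal DCT, whose scaling may differ from the paper's transform by constant factors):

```python
import numpy as np
from scipy.fft import dct

def tnn_c(X):
    """TNN-C: average of the nuclear norms of the DCT-domain frontal slices."""
    m3 = X.shape[2]
    Xbar = dct(X, axis=2, norm='ortho')
    return sum(np.linalg.norm(Xbar[:, :, k], 'nuc') for k in range(m3)) / m3

rng = np.random.default_rng(6)
X = rng.standard_normal((3, 4, 5))
# For m3 = 1 the orthonormal DCT is the identity, so TNN-C reduces to the
# matrix nuclear norm, as with TNN-F.
print(np.isclose(tnn_c(X[:, :, :1]), np.linalg.norm(X[:, :, 0], 'nuc')))  # True
```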
The low-rank tensor completion model is defined as
\begin{equation}\label{model}
\min_{\mathcal{X}} \left \| \mathcal{X} \right \|_{\ast}, \quad s.t. \quad \mathcal{X}_{\Omega} = \mathcal{B}_{\Omega}.
\end{equation}
Letting
$$
l_{\mathbb{S}}(\mathcal{X})= \begin{cases}
0,& \text{if } \mathcal{X} \in \mathbb{S}, \\
\infty,& \text{otherwise},
\end{cases}
$$
where $\mathbb{S} := \{ \mathcal{X} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}} , \mathcal{X}_{\Omega} = \mathcal{B}_{\Omega} \} $, (\ref{model}) can be rewritten as the following unconstrained problem:
\begin{equation}\label{unconstrained}
\min_{\mathcal{X}} \left \| \mathcal{X} \right \|_{\ast} + l_{\mathbb{S}}(\mathcal{X}).
\end{equation}
By introducing an auxiliary variable $\mathcal{Y}=\mathcal{X}$, the augmented Lagrangian function of (\ref{unconstrained}) is
\begin{equation}\label{Lagrangian}
\begin{split}
L(\mathcal{X},\mathcal{Y},\mathcal{M}) & := \left \| \mathcal{Y} \right \|_{\ast} + l_{\mathbb{S}}(\mathcal{X}) + \langle \mathcal{Y}-\mathcal{X}, \mathcal{M} \rangle + \frac{\beta}{2}\left \| \mathcal{Y}-\mathcal{X} \right \|^{2}_{F} \\
& = \left \| \mathcal{Y} \right \|_{\ast} + l_{\mathbb{S}}(\mathcal{X}) + \frac{\beta}{2} \left \| \mathcal{Y}-\mathcal{X}+\frac{1}{\beta} \mathcal{M} \right \|^{2}_{F}-\frac{1}{2\beta}\langle\mathcal{M},\mathcal{M}\rangle,
\end{split}
\end{equation}
where $ \mathcal{M} \in \mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$ is the Lagrangian multiplier and $\beta>0$ is the penalty parameter. According to the framework of ADMM \cite{boyd2011distributed, lin2010augmented, he2012alternating}, $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{M}$ are iteratively updated as
\begin{equation}\label{iterative}
\begin{cases}
\begin{aligned}
\text{Step 1: } & \mathcal{Y}^{l+1} \in \arg\min_{\mathcal{Y}} L(\mathcal{X}^{l},\mathcal{Y},\mathcal{M}^{l}), \\
\text{Step 2: } & \mathcal{X}^{l+1} \in \arg\min_{\mathcal{X}} L(\mathcal{X},\mathcal{Y}^{l+1},\mathcal{M}^{l}), \\
\text{Step 3: } & \mathcal{M}^{l+1} = \mathcal{M}^{l} + \beta (\mathcal{Y}^{l+1}- \mathcal{X}^{l+1}).
\end{aligned}
\end{cases}
\end{equation}
Now, we give the details for solving each subproblem.
In \textbf{Step 1}, the $\mathcal{Y}$-subproblem is:
\begin{equation}\label{step 1}
\arg\min_{\mathcal{Y}} \left \| \mathcal{Y} \right \|_{\ast} + \frac{\beta}{2} \left \| \mathcal{Y} - \mathcal{X}^{l} + \frac{1}{\beta} \mathcal{M}^{l} \right \|^{2}_{F},
\end{equation}
which can be solved by the following theorem \cite{lu2016tensor, semerci2014tensor-based}.
\begin{theorem}
Given $\mathcal{Z} \in \mathbb{C}^{m_{1} \times m_{2} \times m_{3}}$, a minimizer to
\begin{equation}\label{minimizer}
\min_{\mathcal{Y}} \left \| \mathcal{Y} \right \|_{\ast} + \frac{\beta}{2} \left \| \mathcal{Y} - \mathcal{Z} \right \|^{2}_{F}
\end{equation}
is given by the tensor singular value thresholding
\begin{equation}\label{wtsvt}
\mathcal{Y} = \mathcal{U} \ast \mathcal{D}_{\frac{1}{\beta}} \ast \mathcal{V}^{T},
\end{equation}
where $\mathcal{Z} = \mathcal{U} \ast \mathcal{S} \ast \mathcal{V}^{T}$ is the DCT-based t-SVD of $\mathcal{Z}$ and $\mathcal{D}_{\frac{1}{\beta}}$ is an f-diagonal tensor in $\mathbb{R}^{m_{1} \times m_{2} \times m_{3}}$, each of whose frontal slices in the discrete cosine domain is $\bar{\mathcal{D}}_{\frac{1}{\beta}}(i,i,j) = (\bar{\mathcal{S}}(i,i,j) - \frac{1}{\beta} )_{+}$.
\end{theorem}
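The thresholding in the theorem can be sketched as follows (a hedged re-implementation under my own helper names, using an explicit orthonormal DCT matrix along the third mode):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix C, so the inverse transform is C.T.
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_svt(Z, tau):
    # Shrink the singular values of every frontal slice of Z in the DCT
    # domain by tau, then transform back (cf. the thresholding theorem).
    m3 = Z.shape[2]
    C = dct_matrix(m3)
    Zbar = np.tensordot(Z, C.T, axes=([2], [0]))   # DCT along mode 3
    Ybar = np.empty_like(Zbar)
    for k in range(m3):
        U, s, Vt = np.linalg.svd(Zbar[:, :, k], full_matrices=False)
        Ybar[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.tensordot(Ybar, C, axes=([2], [0]))  # inverse DCT along mode 3
```

With $\tau = 1/\beta$ this solves the $\mathcal{Y}$-subproblem above; $\tau = 0$ returns $\mathcal{Z}$ unchanged.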
In \textbf{Step 2}, we solve the following problem:
\begin{equation}\label{step 2}
\arg\min_{\mathcal{X}} l_{\mathbb{S}}(\mathcal{X}) + \frac{\beta}{2} \left \| \mathcal{Y}^{l+1} - \mathcal{X} + \frac{1}{\beta} \mathcal{M}^{l} \right \|^{2}_{F},
\end{equation}
which has a closed-form solution
\begin{equation}\label{x solution}
\mathcal{X}^{l+1} = (\mathcal{Y}^{l+1} + \frac{1}{\beta} \mathcal{M}^{l})_{\Omega^{C}} + \mathcal{B},
\end{equation}
where $\Omega ^{C}$ is the complementary set of the index set $\Omega$.
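In code, the closed-form $\mathcal{X}$-update is simply an overwrite of the observed entries (a sketch; `mask` is a boolean array that is True on the index set $\Omega$, and the function name is mine):

```python
import numpy as np

def x_update(Y, M, beta, B, mask):
    # Closed-form solution of the X-subproblem: take Y + M / beta on the
    # complement of Omega and the observed data B on Omega.
    X = Y + M / beta
    X[mask] = B[mask]
    return X
```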
We summarize the proposed ADMM procedure in Algorithm 1. Every step of the ADMM has an explicit solution, so the proposed method can be implemented efficiently. The convergence of ADMM for convex problems with separable variables and linear constraints is guaranteed \cite{afonso2011an, han2012a}.\\[1ex]
\begin{tabular}{l}
\hline
\textbf{Algorithm 1} {\small ADMM for solving the proposed model (\ref{model}).}\\
\hline
\textbf{Input:} Observed data $\mathcal{B}$, index set $\Omega$, parameter $\beta$.\\
\textbf{Initialize:} $\mathcal{X}=\mathcal{B}$, $\mathcal{Y}=\mathbf{0}$, $\mathcal{M}=\mathbf{0}$, $\text{tol}=10^{-5}$, and $L=500$.\\
\indent 1: \textbf{while} $l<L$ and $\left \| \mathcal{X}^{l+1} - \mathcal{X}^{l} \right \|_{F} / \left \| \mathcal{X}^{l} \right \|_{F} > \text{tol}$ \textbf{do} \\
\indent 2: \quad $\mathcal{Z}=\mathcal{X}^{l} - \frac{1}{\beta} \mathcal{M}^{l}$; \\
\indent 3: \quad $\bar{\mathcal{Z}} = \mathrm{dct}(\mathcal{Z},[\thinspace],3)$;\\
\indent 4: \quad \textbf{for} $k=1$ to $m_{3}$ \textbf{do} \\
\indent 5: \quad \quad \indent $[\bar{\mathbf{U}}^{(k)},\bar{\mathbf{S}}^{(k)},\bar{\mathbf{V}}^{(k)}]=\mathrm{SVD}(\bar{\mathbf{Z}}^{(k)});$\\
\indent 6: \quad \quad \indent $\bar{\mathbf{D}}^{(k)} = (\bar{\mathbf{S}}^{(k)}-1/ \beta)_{+};$ \\
\indent 7: \quad \quad \indent $\bar{\mathbf{Z}}^{(k),l+1} = \bar{\mathbf{U}}^{(k)}\bar{\mathbf{D}}^{(k)}\bar{\mathbf{V}}^{(k) H};$ \\
\indent 8: \quad \textbf{end for}\\
\indent 9: \quad $\mathcal{Y}^{l+1} = \mathrm{idct}(\bar{\mathcal{Z}}^{l+1},[\thinspace],3);$\\
\indent 10: \quad $\mathcal{X}^{l+1} = (\mathcal{Y}^{l+1} + \frac{1}{\beta} \mathcal{M}^{l})_{\Omega^{c}} + \mathcal{B}$;\\
\indent 11: \quad $\mathcal{M}^{l+1} = \mathcal{M}^{l} + \beta (\mathcal{Y}^{l+1}- \mathcal{X}^{l+1}).$\\
\indent 12: \textbf{end while}\\
\textbf{Output:} The recovered tensor $\mathcal{X}$. \\
\hline
\end{tabular}
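The whole of Algorithm 1 can be sketched end-to-end as follows (a minimal re-implementation under assumed helper names, using an explicit orthonormal DCT matrix in place of Matlab's `dct`; it is not the authors' code):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix C (C @ C.T = I), so idct is multiplication by C.T.
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def admm_complete(B, mask, beta=1e-2, max_iter=500, tol=1e-5):
    # ADMM for min ||X||_* (TNN-C) s.t. X_Omega = B_Omega (Algorithm 1 sketch).
    m1, m2, m3 = B.shape
    C = dct_matrix(m3)
    X = B.copy()
    M = np.zeros_like(B)
    for _ in range(max_iter):
        # Step 1: Y-update by singular value thresholding in the DCT domain.
        Zbar = np.tensordot(X - M / beta, C.T, axes=([2], [0]))
        Ybar = np.empty_like(Zbar)
        for k in range(m3):
            U, s, Vt = np.linalg.svd(Zbar[:, :, k], full_matrices=False)
            Ybar[:, :, k] = (U * np.maximum(s - 1.0 / beta, 0.0)) @ Vt
        Y = np.tensordot(Ybar, C, axes=([2], [0]))
        # Step 2: X-update keeps the observed entries fixed.
        X_new = Y + M / beta
        X_new[mask] = B[mask]
        # Step 3: dual update.
        M = M + beta * (Y - X_new)
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1e-12):
            X = X_new
            break
        X = X_new
    return X
```

By construction, the returned tensor always agrees with the observed data on $\Omega$, and when every entry is observed the method returns $\mathcal{B}$ immediately.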
\section{Numerical Examples}
In this section, all experiments are conducted in Matlab (R2017a) on Windows 10 with an Intel(R) Core(TM) i7-7700k CPU at 4.20 GHz and 16 GB RAM.
\subsection{The Computational Time}
Computational savings are the main advantage of the DCT-based t-SVD. We illustrate this advantage by operating on random tensors.
We generate four groups of random tensors of different sizes and average the running time over 1000 runs.
Tab.\thinspace\ref{tabTestCost1} reports the average time cost of the original t-SVD and the DCT-based t-SVD, and confirms that the DCT-based t-SVD needs only about half the time of the original t-SVD.
\begin{table}[htbp]
\centering
\setlength{\abovecaptionskip}{0pt}%
\setlength{\belowcaptionskip}{10pt}%
\renewcommand\arraystretch{0.9}
\caption{The time cost of t-SVD and DCT-based t-SVD on the random tensors of different size.}
\begin{tabular}{ccccc}
\hline
size & $100\times100\times100$ & $100\times100\times400$ & $200\times200\times100$ & $400\times400\times100$ \bigstrut\\
\hline
FFT & 0.0041 & 0.0175 & 0.0176 & 0.0653 \bigstrut[t]\\
SVD after FFT & 0.0818 & 0.3250 & 0.3641 & 1.9015 \\
original t-SVD & 0.0859 & 0.3425 & 0.3817 & 1.9668 \bigstrut[b]\\
\hline
\hline
DCT & 0.0042 & 0.0150 & 0.0162 & 0.0601 \bigstrut[t]\\
SVD after DCT & 0.0439 & 0.1649 & 0.1978 & 0.8922 \\
new t-SVD & 0.0481 & 0.1799 & 0.2140 & 0.9523 \bigstrut[b]\\
\hline
\end{tabular}%
\label{tabTestCost1}%
\end{table}%
\subsection{Real Data}
We conduct video and multispectral image (MSI) completion experiments and compare TNN-C with TNN-F \cite{lu2016tensor}. In our experiments, the quality of a recovered image is measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), averaged over all bands. The PSNR of a band is defined as follows:
$$
\text{PSNR}=10\log_{10} \frac{m_{1}m_{2} \mathbf{X}^{2}_{\max}}{\left \| \hat{\mathbf{X}} - \mathbf{X} \right \|^{2}_{F}},
$$
where $\mathbf{X}$ is the original matrix, $\hat{\mathbf{X}}$ is the recovered matrix, and $\mathbf{X}_{\max}$ is the maximum pixel value of $\mathbf{X}$. SSIM measures the similarity between the recovered image and the original image. This indicator reflects the similarities in brightness, contrast, and structure of two images and is defined as
$$
\text{SSIM} = \frac{(2 \mu_{\mathbf{x}} \mu_{\hat{\mathbf{x}}} + c_{1}) (2 \sigma_{\mathbf{x} \hat{\mathbf{x}}} + c_{2}) }{( \mu^{2}_{\mathbf{x}} + \mu^{2}_{\hat{\mathbf{x}}} + c_{1}) ( \sigma^{2}_{\mathbf{x}} + \sigma^{2}_{\hat{\mathbf{x}}} + c_{2}) },
$$
where $\mu_{\mathbf{x}}$ and $\mu_{\hat{\mathbf{x}}}$ are the average values of the original matrix and the estimated matrix, respectively, $\sigma_{\mathbf{x}}$ and $\sigma_{\hat{\mathbf{x}}}$ are their standard deviations, $\sigma_{\mathbf{x} \hat{\mathbf{x}}}$ is their covariance, and $c_{1}$ and $c_{2}$ are small constants that stabilize the division.
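For concreteness, the two metrics can be sketched as follows (a global-statistics SSIM over the whole band; in practice SSIM is usually averaged over local windows, and the constants $c_1=(0.01L)^2$, $c_2=(0.03L)^2$ with dynamic range $L=255$ are conventional choices, not values stated in this paper):

```python
import numpy as np

def psnr(X, Xhat):
    # Band-wise PSNR: 10 * log10(Xmax^2 / MSE), equivalent to the formula above
    # since ||Xhat - X||_F^2 = m1 * m2 * MSE.
    mse = np.mean((Xhat - X) ** 2)
    return 10.0 * np.log10(X.max() ** 2 / mse)

def ssim_global(X, Xhat, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Global-statistics SSIM between a reference band X and an estimate Xhat.
    mx, my = X.mean(), Xhat.mean()
    vx, vy = X.var(), Xhat.var()
    cxy = ((X - mx) * (Xhat - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```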
For all the following experiments, we set the maximum number of iterations to 500 and the tolerance to $1 \times 10^{-8}$. This algorithm only needs one parameter $\beta$, and we set it to $1 \times 10^{-2}$.
\textbf{Video completion.} We test three videos: \textit{Akiyo}, \textit{Suzie}, and \textit{Salesman}. The size of \textit{Akiyo} and \textit{Salesman} is $144 \times 176 \times 300$, and the size of \textit{Suzie} is $144 \times 176 \times 150$. Tab.\thinspace\ref{tabTestVideo} reports the PSNR, SSIM, and time cost of TNN-F and TNN-C. TNN-C achieves better results and costs much less time than TNN-F in all experiments. Fig.\thinspace\ref{figVideoTube} shows one selected tube. The tube of the video recovered by TNN-C is closer to the true tube than that by TNN-F, especially near the boundary. Fig.\thinspace\ref{figVideoPsnr} shows the PSNR values of each frame of the videos recovered by TNN-F and TNN-C. When the sampling rate (SR) is $0.1$, the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few frames. This observation is consistent with our interpretation of BCs. Fig.\thinspace\ref{figTestVideo} shows the results recovered by TNN-F and TNN-C with $\text{SR} = 0.1$; TNN-C is visually better than TNN-F.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/akiyo-t-eps-converted-to.pdf}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/suzie-t-eps-converted-to.pdf}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/salesman-t-eps-converted-to.pdf}
\end{center}
\caption{The pixel value of a selected tube of videos \textit{Akiyo}, \textit{Suzie}, and \textit{Salesman}. }
\label{figVideoTube}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/akiyo-PSNR-10-eps-converted-to.pdf}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/suzie-PSNR-10-eps-converted-to.pdf}
\includegraphics[width=1.0\textwidth,height=3.5cm]{fig/salesman-PSNR-10-eps-converted-to.pdf}
\end{center}
\caption{The PSNR values of each frame of the recovered videos \textit{Akiyo}, \textit{Suzie}, and \textit{Salesman} obtained by TNN-F and TNN-C. }
\label{figVideoPsnr}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.23\textwidth]{fig/akiyo-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/akiyo-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/akiyo-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/akiyo-C-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/suzie-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/suzie-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/suzie-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/suzie-C-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/salesman-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/salesman-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/salesman-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/salesman-C-10-eps-converted-to.pdf}
\end{center}
\caption{A frame of the recovered videos with $\text{SR}=0.1$. From top to bottom: \textit{Akiyo}, \textit{Suzie}, and \textit{Salesman}. From left to right: the original image, the masked image, the results by TNN-F, and TNN-C. }
\label{figTestVideo}
\end{figure}
\begin{table}[htbp]
\setlength{\abovecaptionskip}{0pt}%
\setlength{\belowcaptionskip}{10pt}%
\caption{PSNR, SSIM, and time of two methods in video completion. In brackets, they are the time required for transformation and time required for performing SVD. The best results are highlighted in bold.}
\centering
\begin{tabular}{|cc|cc|cc|cc|}
\hline
\multicolumn{2}{|c|}{video} & \multicolumn{2}{c|}{\textit{akiyo}} & \multicolumn{2}{c|}{\textit{suzie}} & \multicolumn{2}{c|}{\textit{salesman}}
\bigstrut\\
\hline
SR & metric & TNN-F & TNN-C & TNN-F & TNN-C & TNN-F & TNN-C \\
\hline
\hline
\multirow{4}[2]{*}{0.05} & PSNR & 32.00 & \textbf{32.57 } & 25.50 & \textbf{26.02 } & 30.12 & \textbf{30.22 }
\bigstrut[t]\\
& SSIM & 0.934 & \textbf{0.941 } & 0.681 & \textbf{0.700 } & 0.895 & \textbf{0.897 }
\\
& \multirow{2}[1]{*}{time} & 156.2 & \textbf{91.9 } & 69.6 & \textbf{40.1 } & 148.5 & \textbf{85.6 } \\
& & (8.8+137.0) & \textbf{(6.2+70.9)} & (4.0+60.6) & \textbf{(2.9+30.6)} & (8.6+128.9) & \textbf{(6.0+65.3)}
\bigstrut[b]\\
\hline
\multirow{4}[2]{*}{0.1} & PSNR & 34.20 & \textbf{34.75 } & 27.73 & \textbf{27.93 } & 32.13 & \textbf{32.29 }
\bigstrut[t]\\
& SSIM & 0.958 & \textbf{0.963 } & 0.759 & \textbf{0.766 } & 0.928 & \textbf{0.931 }
\\
& \multirow{2}[1]{*}{time} & 141.8 & \textbf{86.3 } & 64.5 & \textbf{39.3 } & 139.5 & \textbf{84.9 }
\\
& & (8.1+122.9) & \textbf{(5.8+66.6)} & (3.8+55.2) & \textbf{(2.8+30.2)} & (8.3+120.3) & \textbf{(5.8+64.9)}
\bigstrut[b]\\
\hline
\multirow{4}[2]{*}{0.2} & PSNR & 37.44 & \textbf{38.11 } & 30.29 & \textbf{30.51 } & 35.01 & \textbf{35.20 }
\bigstrut[t]\\
& SSIM & 0.979 & \textbf{0.983 } & 0.838 & \textbf{0.844 } & 0.960 & \textbf{0.961 } \\
& \multirow{2}[1]{*}{time} & 145.2 & \textbf{79.8 } & 62.5 & \textbf{37.2 } & 135.1 & \textbf{81.3 } \\
& & (8.1+125.6) & \textbf{(5.4+60.3)} & (3.6+53.3) & \textbf{(2.8+28.6)} & (8.1+116.3) & \textbf{(5.5+61.6)}
\bigstrut[b]\\
\hline
\end{tabular}%
\label{tabTestVideo}%
\end{table}%
\textbf{MSI completion.} For MSI data, we add the spectral angle mapper (SAM) and the erreur relative globale adimensionnelle de synth\`{e}se (ERGAS), which are common quality metrics for MSI data. SAM computes the angle in spectral space between each pixel of the recovered tensor and the corresponding pixel of the reference tensor. ERGAS measures the fidelity of the recovered tensor based on the weighted sum of the mean squared errors (MSE) of all bands. The lower the values of these two indicators, the better the results. The size of the MSI data from the CAVE database is $512 \times 512 \times 31$, with wavelengths in the range of $400$--$700$ nm at an interval of $10$ nm.
We display one selected tube in Fig.\thinspace\ref{figMsiTube}. The tube of the tensor recovered by TNN-C is closer to the true tube than that by TNN-F, especially near the boundary. Moreover, we plot the PSNR values of the tensors recovered by TNN-C and TNN-F in Fig.\thinspace\ref{figMsiPsnr}. In general, the PSNR values of TNN-C are higher than those of TNN-F, especially for the first and last few bands. These observations verify that TNN-C produces more natural results than TNN-F, because more reasonable BCs are implied by TNN-C.
In Fig.\thinspace\ref{figTestMsi}, we show the first band of the test data recovered by the two methods with $\text{SR} = 0.1$. TNN-C clearly achieves better visual results than TNN-F. Tabs.\thinspace\ref{tabTestMsi1}-\ref{tabTestMsi2} report detailed results for the other test images. TNN-C not only performs better in PSNR, SSIM, SAM, and ERGAS, but also significantly reduces the time cost compared with TNN-F.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{fig/tube1-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/tube2-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/tube3-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/tube4-eps-converted-to.pdf}
\end{center}
\caption{The pixel values of a random tube of MSI \textit{Pompoms}, \textit{Stuffed toys}, \textit{Foods}, and \textit{Peppers}. }
\label{figMsiTube}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{fig/pompoms-PSNR-10-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/stuffed-PSNR-10-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/foods-PSNR-10-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{fig/peppers-PSNR-10-eps-converted-to.pdf}
\end{center}
\caption{The PSNR values of each band of the recovered MSIs \textit{Pompoms}, \textit{Stuffed toys}, \textit{Foods}, and \textit{Peppers} obtained by TNN-F and TNN-C. }
\label{figMsiPsnr}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.23\textwidth]{fig/pom-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pom-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pom-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pom-C-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/toy-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/toy-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/toy-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/toy-C-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/food-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/food-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/food-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/food-C-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pep-O-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pep-B-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pep-F-10-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{fig/pep-C-10-eps-converted-to.pdf}
\end{center}
\caption{The first band of recovered MSI images with $\text{SR}=0.1$. From top to bottom: \textit{Pompoms}, \textit{Stuffed toys}, \textit{Foods}, and \textit{Peppers}. From left to right: the original image, the masked image, the results by TNN-F, and TNN-C. }
\label{figTestMsi}
\end{figure}
\begin{table}
\setlength{\abovecaptionskip}{0pt}%
\setlength{\belowcaptionskip}{10pt}%
\caption{PSNR, SSIM, SAM, ERGAS, and time of two methods in MSI completion. In brackets, they are time required for transformation and time required for performing SVD. The best results are highlighted in bold.}
\centering
\begin{tabular}{|cc|cc|cc|}
\hline
\multicolumn{2}{|c|}{MSI} & \multicolumn{2}{c|}{\textit{Pompoms}} & \multicolumn{2}{c|}{\textit{Stuffed toys}}
\bigstrut\\
\hline
SR & metric & TNN-F & TNN-C & TNN-F & TNN-C
\bigstrut\\
\hline
\hline
\multirow{6}[2]{*}{0.05} & PSNR & 26.56 & \textbf{29.00 } & 28.44 & \textbf{31.84 }
\bigstrut[t]\\
& SSIM & 0.818 & \textbf{0.876 } & 0.892 & \textbf{0.941 } \\
& SAM & 0.22 & \textbf{0.16 } & 0.30 & \textbf{0.22 } \\
& ERGAS & 10.28 & \textbf{8.00 } & 9.80 & \textbf{6.74 } \\
& \multirow{2}[1]{*}{time} & 309.4 & \textbf{161.0 } & 320.6 & \textbf{183.4 } \\
& & (11.0+285.7) & \textbf{(8.9+135.3)} & (11.4+296.0) & \textbf{(10.3+153.4)}
\bigstrut[b]\\
\hline
\multirow{6}[2]{*}{0.1} & PSNR & 31.26 & \textbf{33.98 } & 33.37 & \textbf{36.63 }
\bigstrut[t]\\
& SSIM & 0.922 & \textbf{0.952 } & 0.955 & \textbf{0.978 } \\
& SAM & 0.13 & \textbf{0.09 } & 0.19 & \textbf{0.14 } \\
& ERGAS & 5.96 & \textbf{4.52 } & 5.53 & \textbf{3.84 } \\
& \multirow{2}[1]{*}{time} & 271.7 & \textbf{171.1 } & 320.2 & \textbf{164.5 } \\
& & (9.6+251.5) & \textbf{(9.6+143.9)} & (11.2+295.8) & \textbf{(9.2+138.1)}
\bigstrut[b]\\
\hline
\multirow{6}[2]{*}{0.2} & PSNR & 37.13 & \textbf{39.55 } & 39.14 & \textbf{41.94 }
\bigstrut[t]\\
& SSIM & 0.976 & \textbf{0.986 } & 0.986 & \textbf{0.994 } \\
& SAM & 0.07 & \textbf{0.05 } & 0.11 & \textbf{0.09 } \\
& ERGAS & 3.04 & \textbf{2.39 } & 2.82 & \textbf{2.06 } \\
& \multirow{2}[1]{*}{time} & 308.1 & \textbf{184.0 } & 278.9 & \textbf{165.8 } \\
& & (10.9+284.4) & \textbf{(10.2+154.2)} & (10.2+256.4) & \textbf{(9.2+138.7)}
\bigstrut[b]\\
\hline
\end{tabular}%
\label{tabTestMsi1}%
\end{table}%
\begin{table}
\setlength{\abovecaptionskip}{0pt}%
\setlength{\belowcaptionskip}{10pt}%
\caption{PSNR, SSIM, SAM, ERGAS, and time of two methods in MSI completion. In brackets, they are time required for transformation and time required for performing SVD. The best results are highlighted in bold.}
\centering
\begin{tabular}{|cc|cc|cc|}
\hline
\multicolumn{2}{|c|}{MSI}
& \multicolumn{2}{c|}{\textit{Foods}} & \multicolumn{2}{c|}{\textit{Peppers}}
\bigstrut\\
\hline
SR & metric & TNN-F & TNN-C & TNN-F & TNN-C
\bigstrut\\
\hline
\hline
\multirow{6}[2]{*}{0.05} & PSNR
& 31.48 & \textbf{33.33 } & 34.89 & \textbf{36.87 }
\bigstrut[t]\\
& SSIM
& 0.904 & \textbf{0.932 } & 0.946 & \textbf{0.965 } \\
& SAM
& 0.27 & \textbf{0.21 } & 0.21 & \textbf{0.15 } \\
& ERGAS
& 9.52 & \textbf{8.01 } & 6.31 & \textbf{5.21 } \\
& \multirow{2}[1]{*}{time}
& 281.0 & \textbf{164.8 } & 284.9 & \textbf{155.0 } \\
&
& \multicolumn{1}{p{5.625em}}{(10.3+258.7)} & \textbf{(9.2+137.9)} & (10.4+255.2) & \textbf{(8.8+128.7)}
\bigstrut[b]\\
\hline
\multirow{6}[2]{*}{0.1} & PSNR
& 35.31 & \textbf{37.73 } & 39.25 & \textbf{41.27 }
\bigstrut[t]\\
& SSIM
& 0.957 & \textbf{0.974 } & 0.980 & \textbf{0.989 } \\
& SAM
& 0.18 & \textbf{0.13 } & 0.13 & \textbf{0.09 } \\
& ERGAS
& 6.14 & \textbf{4.91 } & 3.86 & \textbf{3.18 } \\
& \multirow{2}[1]{*}{time}
& 291.4 & \textbf{167.7 } & 278.3 & \textbf{146.8 } \\
&
& (10.7+267.9) & \textbf{(9.4+140.2)} & (10.0+256.6) & \textbf{(8.6+124.9)}
\bigstrut[b]\\
\hline
\multirow{6}[2]{*}{0.2} & PSNR
& 43.13 & \textbf{40.30 } & 44.30 & \textbf{46.22 }
\bigstrut[t]\\
& SSIM
& 0.993 & \textbf{0.986 } & 0.995 & \textbf{0.997 } \\
& SAM
& 0.11 & \textbf{0.08 } & 0.07 & \textbf{0.05 } \\
& ERGAS
& 3.49 & \textbf{2.68 } & 2.19 & \textbf{1.82 } \\
& \multirow{2}[1]{*}{time}
& 289.7 & \textbf{164.0 } & 286.2 & \textbf{153.6 } \\
&
& (10.6+266.7) & \textbf{(9.3+137.4)} & (10.4+264.2) & \textbf{(9.0+138.5)}
\bigstrut[b]\\
\hline
\end{tabular}%
\label{tabTestMsi2}%
\end{table}%
\textbf{Parameter analysis.} We analyze the robustness of TNN-C to its parameter using the MSI data \textit{Stuffed toys} with $\text{SR} = 0.1$. TNN-C requires only one parameter $\beta$. As shown in Fig.\thinspace\ref{figParaAna}, different values of $\beta$ lead to nearly the same PSNR, although $\beta$ affects the convergence speed. After testing, we choose $\beta = 1 \times 10^{-2}$ for all experiments.
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig/parameter-eps-converted-to.pdf}
\end{center}
\caption{The PSNR values with respect to the iteration for different values of parameter $\beta$.}
\label{figParaAna}
\end{figure}
\section{Concluding Remarks}
We have introduced the DCT as an alternative to the DFT in the framework of t-SVD. Based on the resulting t-SVD, the DCT-based tensor nuclear norm (TNN-C) is suggested for the low-rank tensor completion problem. We have developed an efficient alternating direction method of multipliers (ADMM) to tackle the corresponding model. Numerical experiments demonstrate the superiority of the DCT-based t-SVD. In future work, tensor singular value decompositions based on other transforms can be considered; we expect such decompositions to handle data tensors from specific applications.
\section*{Acknowledgment}
The research is supported by NSFC (61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132),
the HKRGC GRF 1202715, 12306616,
12200317 and HKBU RC-ICRS/16-17/03.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}\label{sec1}
Treatment effect analyses often entail a measurement error problem as well as an endogeneity problem.
For example, \cite{black/sanders/taylor:2003} document a substantial measurement error in educational attainments in the 1990 U.S. Census.
At the same time, educational attainments are endogenous treatment variables in a return to schooling analysis, because unobserved individual ability affects both schooling decisions and wages \citep{card:2001}.
The econometric literature, however, has offered only a few solutions for addressing the two problems at the same time.
An instrumental variable is a standard technique for correcting endogeneity and measurement error \citep[e.g.,][]{angrist/krueger:2001}, but, to the best of my knowledge, no existing research has explicitly investigated the identifying power of an instrumental variable for the heterogeneous treatment effect when the treatment is both mismeasured and endogenous.\footnote{Many existing methods, including \cite{mahajan:2006} and \cite{lewbel:2007}, allow for the treatment effect to be heterogeneous due to observed variables. In this paper I focus on the heterogeneity due to unobserved variables by considering the local average treatment effect framework.}
I consider a mismeasured treatment in the framework of \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}, and focus on the local average treatment effect as a parameter of interest.
My analysis studies the identifying power of a binary instrumental variable under the following two assumptions: (i) the instrument affects the outcome and the measured treatment only through the true treatment (the exclusion restriction of an instrument), and (ii) the instrument weakly increases the true treatment (the deterministic monotonicity of the true treatment in the instrument).
These assumptions are an extension of \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996} into the framework with mismeasured treatment.
The local average treatment effect is the average treatment effect for the compliers, that is, the subpopulation whose true treatment status is strictly affected by an instrument.
Focusing on the local average treatment effect is meaningful for a few reasons.\footnote{\cite{deaton:2009} and \cite{heckman/urzua:2010} are cautious about interpreting the local average treatment effect as a parameter of interest. See also \cite{imbens:2010,imbens:2014} for a discussion.}
First, the local average treatment effect has been a widely used parameter to investigate the heterogeneous treatment effect with endogeneity.
My analysis offers a tool for a robustness check to those who have already investigated the local average treatment effect.
Second, the local average treatment effect can be used to extrapolate to the average treatment effect or other parameters of interest.
\cite{imbens:2010} emphasize the utility of reporting the local average treatment effect in addition to the other parameters of interest, because the extrapolation often requires additional assumptions and can be less credible than the local average treatment effect.
The mismeasured treatment prevents the local average treatment effect from being point-identified.
As in \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}, the local average treatment effect is the ratio of the intent-to-treat effect over the size of compliers.\footnote{The intent-to-treat effect is defined as the mean difference of the outcome between the two groups defined by the instrument. The size of compliers is the probability of being a complier, and it is the mean difference of the true treatment (\citealp{imbens/angrist:1994} and \citealp{angrist/imbens/rubin:1996}).}
Since the measured treatment is not the true treatment, however, the size of compliers is not identified and therefore the local average treatment effect is not identified.
The under-identification for the local average treatment effect is a consequence of the under-identification for the size of compliers; if I assumed no measurement error, I could compute the size of compliers based on the measured treatment and therefore the local average treatment effect would be the Wald estimand.\footnote{The Wald estimand in this paper is defined as the ratio of the intent-to-treat effect over the mean difference of the measured treatment between the two groups defined by the instrument.
Note that the Wald estimand is identified because it uses the measured treatment, but it is not the local average treatment effect because it does not use the true treatment.}
I take a worst case scenario approach with respect to the measurement error and allow for a general form of measurement error.
The only assumption concerning the measurement error is its independence of the instrumental variable.
(Section \ref{sec3Less} dispenses with this assumption and shows that it is still possible to bound the local average treatment effect.)
I consider the following types of measurement error.
First, the measurement error is nonclassical; that is, it can be dependent on the true treatment.
The measurement error for a binary variable is always nonclassical, because the error cannot be negative (positive) when the true variable takes the low (high) value.
Second, I allow the measurement error to be endogenous (or differential); that is, the measured treatment can be dependent on the outcome conditional on the true treatment.
For example, as \cite{black/sanders/taylor:2003} argue, the measurement error for educational attainment depends on the familiarity with the educational system in the U.S., and immigrants may have a higher rate of measurement error.
At the same time, the familiarity with the U.S. educational system can be related to the English language skills, which can affect the labor market outcomes.
\cite{bound/brown/mathiowetz:2001} also argue that measurement error is likely to be endogenous in some empirical applications.
(In Appendix D, I explore the identifying power of the exogeneity assumption on the measurement error. The additional assumption yields a tighter sharp identified set, but the local average treatment effect is still not point-identified in general.)
Third, there is no assumption concerning the marginal distribution of the measurement error.
It is not necessary to assume anything about the accuracy of the measurement.
In the presence of measurement error, I derive the identified set for the local average treatment effect (Theorem \ref{theorem1}).
\begin{figure}
\centering
\subfloat[When the intent-to-treat effect is positive]{
\begin{tikzpicture}[thick]
\draw[->] (0,1) to (10,1);
\node[] at (5,1) {$|$};
\node[] at (5,.5) {$0$};
\node[] at (6,1) {$\blacklozenge$};
\node[] at (6,1.5) {ITT};
\draw [line width=4] (6,1) to (8.5,1);
\node[] at (9.5,1) {$\blacklozenge$};
\node[] at (9.75,1.5) {Wald};
\end{tikzpicture}
}
\\
\subfloat[When the intent-to-treat effect is zero]{
\begin{tikzpicture}[thick]
\draw[->] (0,1) to (10,1);
\node[] at (5,.5) {$0$};
\node[] at (5,1) {$|$};
\draw [line width=4] (4.9,1) to (5.1,1);
\node[] at (5,1) {$\blacklozenge$};
\node[] at (5,1.5) {Wald};
\node[] at (5,2) {ITT};
\end{tikzpicture}
}
\\
\subfloat[When the intent-to-treat effect is negative]{
\begin{tikzpicture}[thick]
\draw[->] (0,1) to (10,1);
\node[] at (5,1) {$|$};
\node[] at (5,.5) {$0$};
\node[] at (4,1) {$\blacklozenge$};
\node[] at (4,1.5) {ITT};
\draw [line width=4] (1.5,1) to (4,1);
\node[] at (.5,1) {$\blacklozenge$};
\node[] at (.25,1.5) {Wald};
\end{tikzpicture}
}
\\
\caption{Identified set for the local average treatment effect.\label{GraphResult} ITT is the intent-to-treat effect and Wald is the Wald estimand. The thick line is the identified set for the local average treatment effect. Note that the identified set is $\{0\}$ when the intent-to-treat effect is zero.}
\end{figure}
Figure \ref{GraphResult} describes the relationship among the identified set for the local average treatment effect, the intent-to-treat effect, and the Wald estimand.
First, the intent-to-treat effect has the same sign as the local average treatment effect.
This is why Figure \ref{GraphResult} has three subfigures according to the sign of the intent-to-treat effect: (a) positive, (b) zero, and (c) negative.
Second, the intent-to-treat effect is the sharp lower bound on the local average treatment effect in absolute value.
Third, the Wald estimand is an upper bound on the local average treatment effect in absolute value.
The Wald estimand is the probability limit of the instrumental variable estimator in my framework, which ignores the measurement error but controls only for the endogeneity.
This point implies that an upper bound on the local average treatment effect is obtained by ignoring the measurement error.
\cite{frazis/loewenstein:2003} obtain a similar result in the homogeneous treatment effect model.
Last, and most importantly, the sharp upper bound in absolute value can be smaller than the Wald estimand.
This is a potential cost of ignoring the measurement error and using the Wald estimand.
Even when the object of interest is only an upper bound on the local average treatment effect, it is advisable to take the measurement error into account, which can yield a smaller upper bound than the Wald estimand.
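To make the ordering in Figure \ref{GraphResult} concrete, consider a stylized numerical example (mine, not part of the paper's formal analysis): compliers with share $p_c$ have $D(1)=1$ and $D(0)=0$, everyone else is a never-taker, the outcome is $Y = \tau D$ plus mean-zero noise, and the measured treatment misclassifies the true treatment with false-positive rate $a_0$ and false-negative rate $a_1$, independently of the instrument and nondifferentially. The population quantities can then be computed in closed form:

```python
# Hypothetical population values (illustration only).
pc, tau, a0, a1 = 0.4, 2.0, 0.1, 0.2  # complier share, LATE, misclassification rates

itt = pc * tau                             # intent-to-treat: E[Y|Z=1] - E[Y|Z=0]
measured_first_stage = (1 - a0 - a1) * pc  # E[T|Z=1] - E[T|Z=0], T = measured treatment
wald = itt / measured_first_stage          # probability limit of the IV estimator
late = itt / pc                            # the parameter of interest (equals tau)

# The ordering depicted in Figure 1: |ITT| <= |LATE| <= |Wald|.
assert abs(itt) <= abs(late) <= abs(wald)
```

Here the Wald estimand overstates the local average treatment effect by the factor $1/(1-a_0-a_1)$, while the intent-to-treat effect understates it by the factor $p_c$.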
Section \ref{sec3.1} investigates when the Wald estimand coincides with the sharp upper bound.
I extend the identification analysis to incorporate covariates other than the treatment variable.
In this setting, the instrumental variable satisfies the exclusion restriction after conditioning covariates.
Based on the insights from \cite{abadie:2003} and \cite{frolich:2007}, I show that the identification strategy of this paper works in the presence of covariates.
I construct a confidence interval for the local average treatment effect.
To construct the confidence interval, first, I approximate the identified set by discretizing the support of the outcome where the discretization becomes finer as the sample size increases.
The approximation for the identified set resembles many moment inequalities in \cite{menzel:2014} and \cite{chernozhukov/chetverikov/kato:2014}, who consider a finite but divergent number of moment inequalities.
I apply a bootstrap method in \cite{chernozhukov/chetverikov/kato:2014} to construct a confidence interval with uniformly valid asymptotic size control.
The confidence interval also rejects parameter values which do not belong to the sharp identified set.
An empirical exercise and a Monte Carlo simulation demonstrate the finite sample properties of the proposed inference method.
The empirical exercise is based on \cite{abadie:2003}, who studies the effects of 401(k) participation on financial savings, and considers a misclassification of the 401(k) participation.\footnote{The pension type is subject to a measurement error. See, for example, \cite{gustman/steinmeier/tabatabai:2007} for the pension type misclassification in the Health and Retirement Study.}
As an extension, I consider the dependence between the instrument and the measurement error.
In this case, there is no assumption on the measurement error, and therefore the measured treatment has no information on the local average treatment effect.
Even without using the measured treatment, however, I can still apply the same identification strategy and obtain finite (but less tight) bounds on the local average treatment effect.
Moreover, I offer a new use of repeated measurements as additional sources for identification.
The existing practice of repeated measurements uses one of them as an instrumental variable, as in \cite{hausman/ichimura/newey/powell:1991}, \cite{hausman/newey/powell:1995}, \cite{mahajan:2006}, and \cite{hu:2008}.\footnote{It is worthwhile to mention that \cite{lewbel:2007} allows for a certain form of the endogeneity in a repeated measurement, under which a repeated measurement still satisfies some exclusion restriction.}
However, when the true treatment is endogenous, the repeated measurements are likely to be endogenous and are not good candidates for an instrumental variable.
My identification strategy demonstrates that those variables are useful for bounding the local average treatment effect in the presence of measurement error, even if none of the repeated measurements is a valid instrumental variable.
The remainder of this paper is organized as follows.
Section \ref{sec1.1} explains several empirical examples motivating mismeasured endogenous treatments and Section \ref{sec1.2} reviews the related econometric literature.
Section \ref{sec2} introduces mismeasured treatments in the framework of \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}.
Section \ref{sec3} constructs the identified set for the local average treatment effect.
I also discuss two extensions. One extension describes how repeated measurements tighten the identified set even if I cannot use any of the repeated measurements as an instrumental variable, and the other dispenses with independence between the instrument and the measurement error.
Section \ref{sec4} proposes an inference procedure for the local average treatment effect.
Section \ref{sec5.1} presents an empirical illustration, and Section \ref{sec5} conducts Monte Carlo simulations.
Section \ref{sec6} concludes.
The Appendix collects proofs and remarks.
\subsection{Examples for mismeasured endogenous treatments}\label{sec1.1}
I introduce several empirical examples in which binary treatments can be both endogenous and mismeasured at the same time.
The first example is the return to schooling, in which the outcome is wages and the treatment is educational attainment, for example, whether a person has completed college or not.
Unobserved individual ability affects both the schooling decision and wage determination, which leads to the endogeneity of educational attainment in the wage equation (see, for example, \cite{card:2001}).
Moreover, survey datasets record educational attainments based on the interviewee's answers, and these self-reported educational attainments are subject to measurement error.
\cite{griliches:1977}, \cite{angrist/krueger:1999}, \cite{kane/rouse/staiger:1999}, \cite{card:2001}, and \cite{black/sanders/taylor:2003} have pointed out the mismeasurement of educational attainments.
For example, \cite{black/sanders/taylor:2003} estimate that the 1990 Decennial Census has a 17.7\% false positive rate of reporting a doctoral degree.
The second example is labor supply response to welfare program participation, in which the outcome is employment status and the treatment is welfare program participation.
Self-reported welfare program participation in survey datasets can be mismeasured \citep{hernandez/pudney:2007}.
The psychological cost for welfare program participation, welfare stigma, affects job search behavior and welfare program participation simultaneously; that is, welfare stigma may discourage individuals from participating in a welfare program, and, at the same time, affect an individual's effort in the labor market (see \cite{moffitt:1983} and \cite{besley/coate:1992} for a discussion on the welfare stigma).
Moreover, the welfare stigma gives welfare recipients some incentive not to reveal their participation status to the survey, which causes endogenous measurement error in that the unobserved individual heterogeneity affects both the measurement error and the outcome.
The third example is the effect of a job training program on wages.
As in the return to schooling example, unobserved individual ability plays a key role here.
Self-reported completion of a job training program is also subject to measurement error \citep{bollinger:1996}.
\cite{frazis/loewenstein:2003} develop a methodology for evaluating a homogeneous treatment effect with mismeasured endogenous treatment, and apply their methodology to evaluate the effect of a job training program on wages.
The last example is the effect of maternal drug use on infant birth weight.
\cite{kaestner/joyce/wehbeh:1996} estimate that a mother tends to underreport her drug use, but, at the same time, she tends to report it correctly if she is a heavy user.
When the degree of drug addiction is not observed, it becomes unobserved individual heterogeneity that affects infant birth weight and the measurement, in addition to the drug use itself.
\subsection{Literature review}\label{sec1.2}
Here I summarize the related econometric literature.
\cite{mahajan:2006}, \cite{lewbel:2007}, and \cite{hu:2008} use an instrumental variable to correct for measurement error in a binary (or discrete) treatment in the homogeneous treatment effect framework and they achieve nonparametric point identification of the average treatment effect.
They assume that the true treatment is exogenous, whereas I allow it to be endogenous.
Finite mixture models are related to my analysis.
I consider the unobserved binary treatment, whereas finite mixture models deal with unobserved type.
\cite{henry/kitamura/salanie:2014} and \cite{henry/jochmans/salanie:2015} are the most closely related.
They investigate the identification problem in finite mixture models, by using the exclusion restriction in which an instrumental variable only affects the mixing distribution of a type without affecting the component distribution (that is, the conditional distribution given the type).
If I applied their approach directly to my framework, their exclusion restriction would imply conditional independence between the instrumental variable and the outcome given the true treatment.
This conditional independence implies that the local average treatment effect does not exhibit essential heterogeneity \citep{heckman/schmierer/urzua:2010} and that the local average treatment effect is the mean difference between the control and treatment groups.\footnote{\label{FootNote3}This footnote uses the notation introduced in Section \ref{sec2}.
The conditional independence implies $E[Y\mid T^{\ast},Z]=E[Y\mid T^{\ast}]$.
Under this assumption,
\begin{eqnarray*}
E[Y\mid Z]
&=&
P(T^{\ast}=1\mid Z)E[Y\mid Z,T^{\ast}=1]+P(T^{\ast}=0\mid Z)E[Y\mid Z,T^{\ast}=0]\\
&=&
P(T^{\ast}=1\mid Z)E[Y\mid T^{\ast}=1]+P(T^{\ast}=0\mid Z)E[Y\mid T^{\ast}=0]
\end{eqnarray*}
and therefore $\Delta E[Y\mid Z]=\Delta E[T^{\ast}\mid Z](E[Y\mid T^{\ast}=1]-E[Y\mid T^{\ast}=0])$. I obtain the equality
$$
\frac{\Delta E[Y\mid Z]}{\Delta E[T^{\ast}\mid Z]}=E[Y\mid T^{\ast}=1]-E[Y\mid T^{\ast}=0].
$$
The above equation implies that the local average treatment effect does not depend on the compliers under consideration, which is in contrast with the essential heterogeneity of the treatment effect.
Furthermore, since $E[Y\mid T^{\ast}=1]-E[Y\mid T^{\ast}=0]$ is the local average treatment effect, I do not need to care about the endogeneity.}
Instead of applying the approaches in \cite{henry/kitamura/salanie:2014} and \cite{henry/jochmans/salanie:2015}, I use a different exclusion restriction in which the instrumental variable does not affect the outcome or the measured treatment directly.
A few papers have applied an instrumental variable to a mismeasured binary regressor in the homogeneous treatment effect framework.
They include \cite{aigner:1973}, \cite{kane/rouse/staiger:1999}, \cite{bollinger:1996}, \cite{black/berger/scott:2000}, \cite{frazis/loewenstein:2003}, and \cite{ditragliaand/garica-jimeno:2015}.
\cite{frazis/loewenstein:2003} and \cite{ditragliaand/garica-jimeno:2015} are the most closely related among them, since they allow for endogeneity.
Here I allow for heterogeneous treatment effects, and I contribute to the heterogeneous treatment effect literature by investigating the consequences of the measurement errors in the treatment.
\cite{kreider/pepper:2007}, \cite{molinari:2008}, \cite{imai/yamamoto:2010}, and \cite{kreider/pepper/gundersen/joliffe:2012} apply a partial identification strategy for the average treatment effect to the mismeasured binary regressor problem by utilizing the knowledge of the marginal distribution for the true treatment.
Those papers use auxiliary datasets to obtain the marginal distribution for the true treatment.
\cite{kreider/pepper/gundersen/joliffe:2012} is the most closely related, in that they allow for both treatment endogeneity and endogenous measurement error.
My instrumental variable approach can be an alternative strategy to deal with mismeasured endogenous treatment.
It is worthwhile because, as mentioned in \cite{schennach:2013}, the availability of an auxiliary dataset is limited in empirical research.
Furthermore, it is not always the case that the results from auxiliary datasets are transportable to the primary dataset \citep[][p.10]{carroll/crainiceanu/ruppert/stefanski:2012}.
Some papers investigate mismeasured endogenous continuous variables, instead of binary variables.
\cite{amemiya:1985,hsiao:1989,lewbel:1998,song/schennach/white:2015} consider nonlinear models with mismeasured continuous explanatory variables.
The continuity of the treatment is crucial for their analysis, because they assume classical measurement error.
The treatment in my analysis is binary and therefore the measurement error is nonclassical.
\cite{hu/shiu/woutersen:2015} consider mismeasured endogenous continuous variables in single index models.
However, their approach depends on taking derivatives of the conditional expectations with respect to the continuous variable.
It is not clear whether their approach can be extended to binary variables.
\cite{song:2015} considers the semi-parametric model when endogenous continuous variables are subject to nonclassical measurement error.
He assumes conditional independence between the instrumental variable and the outcome given the true treatment, which would impose some structure on the outcome equation when a treatment is binary (see Footnote \ref{FootNote3}).
Instead I propose an identification strategy without assuming any structure on the outcome equation.
\cite{chalak:2013} investigates the consequences of measurement error in the instrumental variable instead of the treatment.
He assumes that the treatment is perfectly observed, whereas I allow for it to be measured with error.
Since I assume that the instrumental variable is perfectly observed, my analysis does not overlap with \cite{chalak:2013}.
\cite{Manski:2003}, \cite{blundell/gosling/ichimura/meghir:2007}, and \cite{kitagawa:2010b} have a similar identification strategy in the context of sample selection models.
These papers also use the exclusion restriction of the instrumental variable for their partial identification results.
Particularly, \cite{kitagawa:2010b} derives the integrated envelope from the exclusion restriction, which is similar to the total variation distance in my analysis because both of them are characterized as a supremum over the set of the partitions.
First, and most importantly, I consider mismeasurement of the treatment, whereas the sample selection model considers truncation of the outcome.
It is not straightforward to apply their sample selection methodologies to the mismeasured treatment problem.
Second, I offer an inference method with uniform size control, whereas \cite{kitagawa:2010b} establishes only pointwise size control.
Last, \cite{blundell/gosling/ichimura/meghir:2007} and \cite{kitagawa:2010b} use their results for specification tests, but I cannot carry out a specification test because the sharp identified set in my analysis is always non-empty.
Finally, \cite{calvi/lewbel/tommasi:2017} and \cite{yanagi:2017} have recently discussed identification issues of the local average treatment effect in the presence of a measurement error in the treatment variable.
They build on results in the previous draft of this paper \citep{ura:2015} to derive novel and important results when there are additional variables in a dataset: multiple measurements of the true treatment variable \citep{calvi/lewbel/tommasi:2017} or multiple instrumental variables \citep{yanagi:2017}.
In contrast, the results of this paper are valid without these additional variables and require only the assumptions in \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}.
\section{Local average treatment effect framework with misclassification}\label{sec2}
My analysis considers a mismeasured treatment in the framework of \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}.
The objective is to evaluate the causal effect of a binary treatment $T^\ast\in\{0,1\}$ on an outcome $Y$, where $T^\ast=0$ represents the control group and $T^\ast=1$ represents the treatment group.
To deal with endogeneity of $T^\ast$, I use a binary instrumental variable $Z\in\{0,1\}$ which shifts $T^\ast$ exogenously without any direct effect on $Y$.
The treatment $T^\ast$ of interest is not directly observed, and instead there is a binary measurement $T\in\{0,1\}$ for $T^\ast$.
I put the $\ast$ symbol on $T^\ast$ to emphasize that the true treatment $T^\ast$ is unobserved.
I allow $Y$ to be discrete, continuous or mixed; $Y$ is only required to have some known dominating finite measure $\mu_Y$ on the real line.
For example, $\mu_Y$ can be the Lebesgue measure or the counting measure.
Let $\mathbf{Y}$ be the support for the random variable $Y$ and $\mathbf{T}=\{0,1\}$ be the support for $T$.
To describe the data generating process, I consider the counterfactual variables.
$T_{z}^{\ast}$ is the counterfactual true treatment when $Z=z$.
$Y_{t^\ast}$ is the counterfactual outcome when $T^\ast=t^\ast$.
$T_{t^\ast}$ is the counterfactual measured treatment when $T^\ast=t^\ast$.
The individual treatment effect is $Y_1-Y_0$.
It is not directly observed; $Y_0$ and $Y_1$ are not observed at the same time.
Only $Y_{T^\ast}$ is observable.
Using the notation, the observed variables $(Y,T,Z)$ are generated by the three equations:
\begin{eqnarray}
T&=&T_{T^\ast}\label{measurement}\\
Y&=&Y_{T^\ast}\label{outcome}\\
T^\ast&=&T^\ast_Z\label{treatment_assignment}.
\end{eqnarray}
Figure \ref{arrows} graphically describes the relationship among the instrument $Z$, the (unobserved) true treatment $T^{\ast}$, the measured treatment $T$, and the outcome $Y$.
\begin{figure}
\centering
\begin{tikzpicture}[]
\node[] at (0,0) {outcome $Y$};
\node[] at (5,0) {true treatment $T^\ast$};
\node[] at (5,-3) {measured treatment $T$};
\node[] at (10,0) {instrument $Z$};
\draw[->] (8,0) to (7,0);
\draw[->] (3,0) to (2,0);
\draw[->] (5,-1) to (5,-2);
\end{tikzpicture}
\caption{Graphical representation of dependencies among variables\label{arrows}}
\end{figure}
Equation (\ref{measurement}) is the measurement equation, which is the arrow from $T^\ast$ to $T$ in Figure \ref{arrows}.
$T-T^\ast$ is the measurement error; $T-T^\ast=1$ (or $T_0=1$) represents a false positive and $T-T^\ast=-1$ (or $T_1=0$) represents a false negative.
Equations (\ref{outcome}) and (\ref{treatment_assignment}) are the same as in \cite{imbens/angrist:1994} and \cite{angrist/imbens/rubin:1996}.
Equation (\ref{outcome}) is the outcome equation, which is the arrow from $T^\ast$ to $Y$ in Figure \ref{arrows}.
Equation (\ref{treatment_assignment}) is the treatment assignment equation, which is the arrow from $Z$ to $T^\ast$ in Figure \ref{arrows}.
A potentially non-zero correlation between $(Y_0,Y_1)$ and $(T^\ast_{0},T^\ast_{1})$ causes an endogeneity problem.
In a return to schooling analysis, $Y$ is wages, $T^\ast$ is the true indicator for college completion, $Z$ is the proximity to college, and $T$ is the self-reported college completion.
The treatment effect $Y_1-Y_0$ in the return to schooling is the effect of college completion $T^\ast$ on wages $Y$.
College completion is not correctly measured in a survey dataset, so that only the self-report $T$ is observed.
This section and Section \ref{sec3} impose only the following assumption.
\begin{assumption}\label{assumption1}
(i) For each $t^\ast=0,1$, $Z$ is independent of $(T_{t^\ast},Y_{t^\ast},T^\ast_{0},T^\ast_{1})$.
(ii) $T^\ast_{1}\geq T^\ast_{0}$ almost surely.
(iii) $0<P(Z=1)<1$.
\end{assumption}
Assumption \ref{assumption1} (i) is the exclusion restriction and I consider stochastic independence instead of mean independence.
Although it is stronger than the minimal conditions for the identification of the local average treatment effect without measurement error, most existing applied papers assume stochastic independence \citep[][p.405]{huber/mellace:2014}.
$Z$ is also independent of $T_{t^\ast}$ conditional on $(Y_{t^\ast},T^\ast_{0},T^\ast_{1})$, which is the only assumption on the measurement error for the identified set in Section \ref{sec3}.
(Section \ref{sec3Less} even dispenses with this assumption.)
Assumption \ref{assumption1} (ii) is the monotonicity condition for the instrument, in which the instrument $Z$ increases the value of $T^\ast$ for all the individuals.
\cite{dechaisemartin:2014} relaxes the monotonicity condition, and Appendix E shows that the identification results in my analysis still hold, with a slight modification, under the complier-defiers-for-marginals condition in \cite{dechaisemartin:2014}.
Note that Assumption \ref{assumption1} does not include a relevance condition for the instrumental variable.
The standard relevance condition $T^\ast_{1}\ne T^\ast_{0}$ does not affect the identification results in my analysis.
I will discuss the relevance condition in my framework after Theorem \ref{theorem1}.
Assumption \ref{assumption1} (iii) excludes that $Z$ is constant.
As I emphasized in the introduction, the framework here does not assume anything on measurement error $T_{t^\ast}$ except for its independence from $Z$.
Assumption \ref{assumption1} does not impose any restriction on the marginal distribution of the measurement error $T_{t^\ast}$ or on the relationship between the measurement error $T_{t^\ast}$ and $(Y_{t^\ast},T^\ast_{0},T^\ast_{1})$.
Particularly, the measurement error can be endogenous, that is, $T_{t^\ast}$ and $(Y_{t^\ast},T^\ast_{0},T^\ast_{1})$ can be correlated.\footnote{Although it has not been supported in validation data studies (e.g., \citealp{black/sanders/taylor:2003}), a majority of the literature on measurement error has assumed that the measurement error is exogenous (\citealp{bound/brown/mathiowetz:2001}). I also explore the identifying power of the exogenous measurement error assumption in Appendix D.}
I focus on the local average treatment effect, which is defined by
$$
\theta=E[Y_1-Y_0\mid T_0^\ast<T_1^\ast].
$$
The local average treatment effect is the average of the treatment effect $Y_1-Y_0$ over the subpopulation (the compliers) whose treatment status is strictly affected by the instrument.
\citet[][Theorem 1]{imbens/angrist:1994} show that the local average treatment effect equals
$$
\frac{\Delta E[Y\mid Z]}{\Delta E[T^*\mid Z]},
$$
where I define $\Delta E[X\mid Z]= E[X\mid Z=1]-E[X\mid Z=0]$ for a random variable $X$.
Note that $\Delta E[Y\mid Z]$ is the intent-to-treat effect, that is, the coefficient from the regression of $Y$ on $Z$.
The treatment is measured with error, and therefore the above fraction $\Delta E[Y\mid Z]/\Delta E[T^*\mid Z]$ is not the Wald estimand
$$
\frac{\Delta E[Y\mid Z]}{\Delta E[T\mid Z]}.
$$
Since $\Delta E[T^*\mid Z]$ is not identified, I cannot identify the local average treatment effect.
The failure of point identification comes purely from the measurement error, because the local average treatment effect would be point-identified under $T=T^\ast$.
In fact, my proposed methodology in this paper is essentially a bounding strategy of $\Delta E[T^*\mid Z]$ and I use the bound to construct the sharp identified set for the local average treatment effect.
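The attenuation of $\Delta E[T\mid Z]$ under misclassification can be made concrete with a stylized population. In the sketch below, the false positive and false negative rates are hypothetical inputs, and the measurement error is assumed independent of everything else purely for illustration; the identification analysis in this paper does not impose that assumption.

```python
# Stylized population: Wald estimand with the mismeasured T overstates the
# local average treatment effect. All numbers are hypothetical; the
# measurement error is assumed independent of everything else only here.
late = 2.0           # local average treatment effect, E[Y1 - Y0 | complier]
p_complier = 0.4     # P(T*_0 < T*_1)
p_always = 0.3       # always-takers: P(T*_0 = T*_1 = 1)
alpha, beta = 0.1, 0.2   # false positive / false negative rates

# True treatment probabilities by instrument value
p_tstar_z1 = p_always + p_complier   # P(T* = 1 | Z = 1)
p_tstar_z0 = p_always                # P(T* = 1 | Z = 0)

# Measured treatment: P(T = 1 | Z) = (1 - beta) P(T* = 1 | Z) + alpha P(T* = 0 | Z)
p_t_z1 = (1 - beta) * p_tstar_z1 + alpha * (1 - p_tstar_z1)
p_t_z0 = (1 - beta) * p_tstar_z0 + alpha * (1 - p_tstar_z0)

delta_y = late * p_complier            # intent-to-treat effect: theta * P(complier)
delta_tstar = p_tstar_z1 - p_tstar_z0  # = P(complier)
delta_t = p_t_z1 - p_t_z0              # attenuated: (1 - alpha - beta) * P(complier)

wald_true = delta_y / delta_tstar  # recovers the LATE
wald_obs = delta_y / delta_t       # Wald estimand with mismeasured T
print(wald_true, wald_obs)         # approximately 2.0 and 2.86
```

The computation reproduces the point of this section: replacing $\Delta E[T^*\mid Z]$ with $\Delta E[T\mid Z]$ in the denominator inflates the ratio, so the Wald estimand can only overstate the local average treatment effect in absolute value in this stylized setting.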
\section{Identified set for the local average treatment effect}\label{sec3}
This section shows how the instrumental variable partially identifies the local average treatment effect in the framework of Section \ref{sec2}.
Before defining the identified set, I express the local average treatment effect as a function of the underlying distribution $P^{\ast}$ of $(Y_0,Y_1,T_0,T_1,T^\ast_{0},T^\ast_{1},Z)$.
I use the $\ast$ symbol on $P^{\ast}$ to clarify that $P^{\ast}$ is the distribution of the unobserved variables.
I denote the expectation operator $E$ by $E_{P^{\ast}}$ when I need to clarify the underlying distribution.
The local average treatment effect is a function of the unobserved distribution $P^{\ast}$:
$$
\theta(P^{\ast})= E_{P^{\ast}}[Y_1-Y_0\mid T_0^\ast<T_1^\ast].
$$
I denote by $\Theta$ the parameter space for the local average treatment effect $\theta$, that is, the set of $\int yf_1(y)d\mu_Y(y)-\int yf_0(y)d\mu_Y(y)$ where $f_0$ and $f_1$ are density functions dominated by the known probability measure $\mu_Y$.
For example, $\Theta=[-1,1]$ when $Y$ is binary.
The identified set is the set of parameter values for the local average treatment effect that are consistent with the distribution of the observed variables.
I use $P$ for the distribution of the observed variables $(Y,T,Z)$.
The equations (\ref{measurement}), (\ref{outcome}), and (\ref{treatment_assignment}) induce the distribution of the observables $(Y,T,Z)$ from the unobserved distribution $P^{\ast}$, and I denote by $P^{\ast}_{(Y,T,Z)}$ the induced distribution.
When the distribution of $(Y,T,Z)$ is $P$, the set of $P^{\ast}$ which induces $P$ is $\{P^{\ast}\in\mathcal{P}^{\ast}: P=P^{\ast}_{(Y,T,Z)}\}$, where $\mathcal{P}^{\ast}$ is the set of $P^{\ast}$'s satisfying Assumptions \ref{assumption1}.
For every distribution $P$ of $(Y,T,Z)$, the (sharp) identified set for the local average treatment effect is defined as
$\Theta_I(P)=\{\theta(P^{\ast})\in\Theta: P^{\ast}\in\mathcal{P}^{\ast}\mbox{ and }P=P^{\ast}_{(Y,T,Z)}\}$.
\citet[][Theorem 1]{imbens/angrist:1994} provides a relationship between $\Delta E[Y\mid Z]$ and the local average treatment effect:
\begin{equation}\label{eqIA}
\theta(P^{\ast})P^{\ast}(T_0^\ast<T_1^\ast)=\Delta E_{P^{\ast}}[Y\mid Z].
\end{equation}
This equation gives two pieces of information about $\theta(P^{\ast})$.
First, the sign of $\theta(P^{\ast})$ is the same as $\Delta E_{P^{\ast}}[Y\mid Z]$.
Second, the absolute value of $\theta(P^{\ast})$ is at least the absolute value of $\Delta E_{P^{\ast}}[Y\mid Z]$.
The following lemma summarizes these two pieces.
\begin{lemma}\label{lemma1}
Under Assumption \ref{assumption1},
$$
\theta(P^{\ast})\Delta E_{P^{\ast}}[Y\mid Z]\geq 0
$$
and
$$
|\theta(P^{\ast})|\geq |\Delta E_{P^{\ast}}[Y\mid Z]|.
$$
\end{lemma}
I derive a new implication from the exclusion restriction for the instrumental variable in order to obtain an upper bound on $\theta(P^{\ast})$ in absolute value.
To explain the new implication, I introduce the total variation distance, which is half the $L^1$ distance between the conditional distributions of a random variable given $Z=1$ and given $Z=0$:
for any random variable $X$, define
$$
TV_X=\frac{1}{2}\int |f_{X\mid Z=1}(x)-f_{X\mid Z=0}(x)|d\mu_X(x),
$$
where $\mu_X$ is a dominating measure for the distribution of $X$.
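When $X=(Y,T)$ is discrete, $TV_{(Y,T)}$ is a finite sum over the support and can be computed directly from the two conditional joint pmfs. A minimal sketch, with hypothetical probabilities:

```python
# Total variation distance between two discrete conditional distributions:
# TV_X = (1/2) * sum_x |f_{X|Z=1}(x) - f_{X|Z=0}(x)|.
def tv_distance(f1, f0):
    """f1, f0: dicts mapping support points to probabilities."""
    support = set(f1) | set(f0)
    return 0.5 * sum(abs(f1.get(x, 0.0) - f0.get(x, 0.0)) for x in support)

# Hypothetical joint pmfs of (y, t) given Z = 1 and Z = 0
f_z1 = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
f_z0 = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.2}
print(tv_distance(f_z1, f_z0))  # approximately 0.3
```

For continuous or mixed $Y$ the sum becomes an integral against the dominating measure $\mu_X$; in practice the estimation section approximates it by discretizing the support of the outcome.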
\begin{lemma}\label{lemma2}
Under Assumption \ref{assumption1},
$$
TV_{(Y,T)}\leq TV_{T^\ast}=P^{\ast}(T_0^\ast<T_1^\ast).
$$
\end{lemma}
The first term, $TV_{(Y,T)}$, in Lemma \ref{lemma2} reflects the dependency of $f_{(Y,T)\mid Z=z}(y,t)$ on $z$, and it can be interpreted as the magnitude of the distributional effect of $Z$ on $(Y,T)$.
The second and third terms, $TV_{T^\ast}$ and $P^{\ast}(T_0^\ast<T_1^\ast)$, measure the effect of the instrument $Z$ on the true treatment $T^\ast$.
Based on Lemma \ref{lemma2}, the magnitude of the effect of $Z$ on $T^\ast$ is no smaller than the magnitude of the effect of $Z$ on $(Y,T)$.
The new implication in Lemma \ref{lemma2} gives a lower bound on $P^{\ast}(T_0^\ast<T_1^\ast)$, which, combined with equation (\ref{eqIA}), yields an upper bound on the local average treatment effect in absolute value:
$$
|\theta(P^{\ast})|=\frac{|\Delta E_{P^{\ast}}[Y\mid Z]|}{P^{\ast}(T_0^\ast<T_1^\ast)}\leq \frac{|\Delta E_{P^{\ast}}[Y\mid Z]|}{TV_{(Y,T)}}
$$
as long as $TV_{(Y,T)}>0$.
Theorem \ref{theorem1} shows that the above observations characterize the sharp identified set for the local average treatment effect.
\begin{theorem}\label{theorem1}
Suppose that Assumption \ref{assumption1} holds, and consider an arbitrary data distribution $P$ of $(Y,T,Z)$.
The identified set $\Theta_I(P)$ for the local average treatment effect is characterized as follows:
$\Theta_I(P)=\Theta$ if $TV_{(Y,T)}=0$; otherwise,
$$
\Theta_I(P)
=
\begin{cases}
\left[\Delta E_P[Y\mid Z],\frac{\Delta E_P[Y\mid Z]}{TV_{(Y,T)}}\right]&\mbox{ if }\Delta E_P[Y\mid Z]>0\\
\{0\}&\mbox{ if }\Delta E_P[Y\mid Z]=0\\
\left[\frac{\Delta E_P[Y\mid Z]}{TV_{(Y,T)}},\Delta E_P[Y\mid Z]\right]&\mbox{ if }\Delta E_P[Y\mid Z]<0.
\end{cases}
$$
\end{theorem}
The total variation distance $TV_{(Y,T)}$ plays two roles in determining the sharp identified set in this theorem.
First, $TV_{(Y,T)}$ measures the strength of the instrumental variable, that is, $TV_{(Y,T)}>0$ is the relevance condition in my identification analysis.
When $TV_{(Y,T)}>0$, the interval in the above theorem is always nonempty and bounded, which implies that $Z$ has some identifying power for the local average treatment effect.
By contrast, $TV_{(Y,T)}=0$ means that the instrumental variable $Z$ does not affect $Y$ and $T$, in which case $Z$ has no identifying power for the local average treatment effect.
In this case, $f_{(Y,T)\mid Z=1}=f_{(Y,T)\mid Z=0}$ almost everywhere over $(y,t)$ and particularly $\Delta E_P[Y\mid Z]=0$.
Note that, in this case, the characterization in Theorem \ref{theorem1} places no restriction on $\theta$.
Second, $TV_{(Y,T)}$ determines the length of the sharp identified set.
The length is $|\Delta E_P[Y\mid Z]|(TV_{(Y,T)}^{-1}-1)$, which is a decreasing function in $TV_{(Y,T)}$.
In general, the lower and upper bounds of the sharp identified set are not equal to the local average treatment effect.
The lower bound is weakly smaller (in absolute value) than the local average treatment effect, because the size of the compliers is weakly smaller than one.
The upper bound is weakly larger (in absolute value) than the local average treatment effect, because $TV_{(Y,T)}$ is weakly smaller than the size of the compliers due to the mismeasurement of the treatment variable.
The standard relevance condition $\Delta E_P[T\mid Z]\ne 0$ is not required in Theorem \ref{theorem1}.
$\Delta E_P[T\mid Z]\ne 0$ is a necessary condition to define the Wald estimand, but the sharp identified set does not depend directly on the Wald estimand.
In fact, $TV_{(Y,T)}>0$ in Theorem \ref{theorem1} is weaker than $\Delta E_P[T\mid Z]\ne 0$.
Note that the sharp identified set is always non-empty.
There are no testable implications for the distribution of the observed variables, and therefore it is impossible to conduct a specification test for Assumption \ref{assumption1}.
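The case distinction in Theorem \ref{theorem1} is mechanical once $\Delta E_P[Y\mid Z]$ and $TV_{(Y,T)}$ are known. The sketch below maps these two quantities to the endpoints of the identified set, defaulting to an unbounded parameter space $\Theta$; the numerical inputs are hypothetical.

```python
def identified_set(delta_y, tv, theta_lo=float("-inf"), theta_hi=float("inf")):
    """Endpoints of the identified set in Theorem 1.

    delta_y: intent-to-treat effect Delta E[Y|Z];  tv: TV_{(Y,T)};
    theta_lo/theta_hi: bounds of the parameter space Theta.
    """
    if tv == 0:
        return (theta_lo, theta_hi)   # Z has no identifying power: full Theta
    if delta_y > 0:
        return (delta_y, delta_y / tv)
    if delta_y < 0:
        return (delta_y / tv, delta_y)
    return (0.0, 0.0)                 # zero intent-to-treat effect

print(identified_set(0.8, 0.28))      # approximately (0.8, 2.86)
print(identified_set(-0.5, 0.5))      # (-1.0, -0.5)
print(identified_set(0.0, 0.4))       # (0.0, 0.0)
```

The lower endpoint is the intent-to-treat effect itself, and the length of the interval shrinks as the total variation distance grows, matching the discussion after the theorem.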
\subsection{Wald estimand and the identified set}\label{sec3.1}
The Wald estimand $\Delta E_P[Y\mid Z]/\Delta E_P[T\mid Z]$ can be outside the identified set.
One necessary and sufficient condition for the Wald estimand to be included in the identified set is given as follows.
\begin{lemma}\label{Wald_lemma}
The Wald estimand is in the identified set if and only if
\begin{equation}
f_{(Y,T)\mid Z=1}(y,1)\geq f_{(Y,T)\mid Z=0}(y,1)\mbox{ and }f_{(Y,T)\mid Z=1}(y,0)\leq f_{(Y,T)\mid Z=0}(y,0).
\label{TESTABLEIMPLICAT}
\end{equation}
\end{lemma}
The conditions in (\ref{TESTABLEIMPLICAT}) are the testable implications of the local average treatment effect framework without measurement error (\citealp{balke/pearl:1997} and \citealp{heckman/vytlacil:2005}).
The recent papers by \cite{huber/mellace:2014}, \cite{kitagawa:2014}, and \cite{mourifie/wan:2014} propose testing procedures for (\ref{TESTABLEIMPLICAT}).
Based on the results in Theorem \ref{theorem1}, their testing procedures can be re-interpreted as tests of the null hypothesis that the Wald estimand lies within the sharp bounds on the local average treatment effect.\footnote{Unfortunately, (\ref{TESTABLEIMPLICAT}) cannot be used for testing the existence of a measurement error. Even if there is non-zero measurement error, (\ref{TESTABLEIMPLICAT}) can still hold.}
\subsection{Conditional exogeneity of the instrumental variable}
As in \cite{abadie:2003} and \cite{frolich:2007}, this section considers the conditional exogeneity of the instrumental variable $Z$, in which $Z$ is exogenous given a set of covariates $V$; this is weaker than the unconditional exogeneity in Assumption \ref{assumption1}.
\begin{assumption}\label{assumption1condi}
There is some variable $V$ taking values in a set $\mathbf{V}$ satisfying the following properties.
(i) For each $t^\ast=0,1$, $Z$ is conditionally independent of $(T_{t^\ast},Y_{t^\ast},T^\ast_{0},T^\ast_{1})$ given $V$.
(ii) $T^\ast_{1}\geq T^\ast_{0}$ almost surely.
(iii) $0<P(Z=1\mid V)<1$.
\end{assumption}
I define the $V$-conditional total variation distance by
$$
TV_{X\mid V}=\frac{1}{2}\int |f_{X\mid Z=1,V}(x)-f_{X\mid Z=0,V}(x)|d\mu_X(x).
$$
Note that $TV_{X\mid V}$ is a random variable as a function of $V$.
Under the conditional exogeneity of $Z$, Theorem \ref{theorem1} extends as follows.
\begin{theorem}\label{theorem1conditional}
Suppose that Assumption \ref{assumption1condi} holds, and consider an arbitrary data distribution $P$ of $(Y,T,Z,V)$.
The identified set $\Theta_I(P)$ for the local average treatment effect is characterized as follows: $\Theta_I(P)=\Theta$ if $E_P[TV_{(Y,T)\mid V}]=0$; otherwise,
$$
\Theta_I(P)
=
\begin{cases}
\left[E_P[\Delta E_P[Y\mid Z,V]],\frac{E_P[\Delta E_P[Y\mid Z,V]]}{E_P[TV_{(Y,T)\mid V}]}\right]&\mbox{ if }E_P[\Delta E_P[Y\mid Z,V]]>0\\
\{0\}&\mbox{ if }E_P[\Delta E_P[Y\mid Z,V]]=0\\
\left[\frac{E_P[\Delta E_P[Y\mid Z,V]]}{E_P[TV_{(Y,T)\mid V}]},E_P[\Delta E_P[Y\mid Z,V]]\right]&\mbox{ if }E_P[\Delta E_P[Y\mid Z,V]]<0.
\end{cases}
$$
\end{theorem}
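The case structure of the bounds translates directly into a small helper function; the sketch below is a hypothetical illustration that maps the two population quantities, $E_P[\Delta E_P[Y\mid Z,V]]$ and $E_P[TV_{(Y,T)\mid V}]>0$, into the interval endpoints.

```python
def late_bounds(delta_ey, e_tv):
    """Bounds on the LATE from the case split above (sketch).
    delta_ey: E_P[Delta E_P[Y|Z,V]]; e_tv: E_P[TV_{(Y,T)|V}] > 0."""
    if delta_ey > 0:
        return (delta_ey, delta_ey / e_tv)
    if delta_ey < 0:
        return (delta_ey / e_tv, delta_ey)
    return (0.0, 0.0)

print(late_bounds(0.35, 0.175))  # (0.35, 2.0)
```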
\subsection{Identifying power of repeated measurements}
The identification strategy in the above analysis offers a new use of repeated measurements as additional sources for identification.
Repeated measurements \citep[for example,][]{hausman/ichimura/newey/powell:1991} are a popular approach in the literature on measurement error, but they cannot serve as instrumental variables in this framework.
This is because the true treatment $T^\ast$ is endogenous, and it is natural to suspect that a measurement of $T^\ast$ is also endogenous.
The more accurate the measurement is, the more likely it is to be endogenous.
Nevertheless, the identification strategy incorporates repeated measurements as additional information to tighten the identified set for the local average treatment effect, when they are coupled with the instrumental variable $Z$.
Unlike other papers on repeated measurements, I do not need to assume the independence of measurement errors among multiple measurements.
The strategy also benefits from having more than two measurements, unlike \cite{hausman/ichimura/newey/powell:1991} who achieve point identification with two measurements.
Consider a repeated measurement $R$ for $T^\ast$.
I do not require that $R$ be binary, so $R$ can be discrete or continuous.
Like $T=T_{T^\ast}$, I model $R$ using counterfactual outcome notation.
$R_1$ is a counterfactual second measurement when the true treatment $T^\ast$ is $1$, and $R_0$ is a counterfactual second measurement when the true treatment $T^\ast$ is $0$.
Then the data generation of $R$ is
$$
R=R_{T^\ast}.
$$
I strengthen Assumption \ref{assumption1} by assuming that the instrumental variable $Z$ is independent of $R_{t^\ast}$ conditional on $(Y_{t^\ast},T_{t^\ast},T^\ast_{0},T^\ast_{1})$.
\begin{assumption}\label{assumption8}
(i) $Z$ is independent of $(R_{t^\ast},T_{t^\ast},Y_{t^\ast},T^\ast_{0},T^\ast_{1})$ for each $t^\ast=0,1$.
(ii) $T^\ast_{1}\geq T^\ast_{0}$ almost surely.
(iii) $0<P(Z=0)<1$.
\end{assumption}
Note that I do not assume independence between $R_{t^\ast}$ and $T_{t^\ast}$, whereas the independence between the measurement errors is a key assumption when the repeated measurement is used as an instrumental variable.
Assumption \ref{assumption8} tightens the identified set for the local average treatment effect as follows.
The requirement on $R$ does not restrict $R$ to have the same support as $T^\ast$.
In fact, $R$ can be any variable which depends on $T^\ast$.
For example, $R$ can be another outcome variable than $Y$.
\begin{theorem}\label{theorem4}
Suppose that Assumption \ref{assumption8} holds, and consider an arbitrary data distribution $P$ of $(R,Y,T,Z)$.
The identified set $\Theta_I(P)$ for the local average treatment effect is characterized as follows:
$\Theta_I(P)=\Theta$ if $TV_{(R,Y,T)}=0$; otherwise,
$$
\Theta_I(P)
=
\begin{cases}
\left[\Delta E_P[Y\mid Z],\frac{\Delta E_P[Y\mid Z]}{TV_{(R,Y,T)}}\right]&\mbox{ if }\Delta E_P[Y\mid Z]>0\\
\{0\}&\mbox{ if }\Delta E_P[Y\mid Z]=0\\
\left[\frac{\Delta E_P[Y\mid Z]}{TV_{(R,Y,T)}},\Delta E_P[Y\mid Z]\right]&\mbox{ if }\Delta E_P[Y\mid Z]<0.
\end{cases}
$$
\end{theorem}
The identified set in Theorem \ref{theorem4} is weakly smaller than the identified set in Theorem \ref{theorem1}.
The total variation distance $TV_{(R,Y,T)}$ in Theorem \ref{theorem4} is weakly larger than that in Theorem \ref{theorem1}, because, using the triangle inequality,
\begin{eqnarray*}
TV_{(R,Y,T)}
&=&
\frac{1}{2}\sum_{t=0,1}\iint|(f_{(R,Y,T)\mid Z=1}-f_{(R,Y,T)\mid Z=0})(r,y,t)|d\mu_R(r)d\mu_Y(y)\\
&\geq&
\frac{1}{2}\sum_{t=0,1}\int|\int(f_{(R,Y,T)\mid Z=1}-f_{(R,Y,T)\mid Z=0})(r,y,t)d\mu_R(r)|d\mu_Y(y)\\
&=&
\frac{1}{2}\sum_{t=0,1}\int|(f_{(Y,T)\mid Z=1}-f_{(Y,T)\mid Z=0})(y,t)|d\mu_Y(y)\\
&=&
TV_{(Y,T)}
\end{eqnarray*}
and the strict inequality holds unless the sign of $(f_{(R,Y,T)\mid Z=1}-f_{(R,Y,T)\mid Z=0})(r,y,t)$ is constant in $r$ for every $(y,t)$.
Therefore, it is possible to test whether the repeated measurement $R$ has additional information, by testing whether the sign of $(f_{(R,Y,T)\mid Z=1}-f_{(R,Y,T)\mid Z=0})(r,y,t)$ is constant in $r$.
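The inequality $TV_{(R,Y,T)}\geq TV_{(Y,T)}$ can also be checked numerically: marginalizing over $R$ inside the absolute value can only shrink the $L_1$ distance. The joint pmfs below are randomly generated, hypothetical examples on a small discrete support.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical joint pmfs of (R, Y, T) given Z on a 2 x 2 x 2 grid
# (axis order: r, y, t), drawn from a Dirichlet distribution.
f_z1 = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
f_z0 = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)

tv_ryt = 0.5 * np.abs(f_z1 - f_z0).sum()                         # TV of (R, Y, T)
tv_yt = 0.5 * np.abs(f_z1.sum(axis=0) - f_z0.sum(axis=0)).sum()  # TV of (Y, T)
assert tv_yt <= tv_ryt + 1e-12   # triangle inequality, as in the display above
print(tv_yt, tv_ryt)
```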
\subsection{Dependence between measurement error and instrumental variable}\label{sec3Less}
It is still possible to apply the same identification strategy and obtain finite (but less tight) bounds on the local average treatment effect, even without the independence between the instrumental variable and the measurement error. (Assumption \ref{assumption1} (i) implies that $Z$ is independent of $T_{t^\ast}$ for each $t^\ast=0,1$.)
Instead, Assumption \ref{assumption1} is weakened to allow the measurement error $T_{t^\ast}$ to be correlated with the instrumental variable $Z$.
\begin{assumption}\label{assumption1-less}
(i) $Z$ is independent of $(Y_{t^\ast},T^\ast_{0},T^\ast_{1})$ for each $t^\ast=0,1$.
(ii) $T^\ast_{1}\geq T^\ast_{0}$ almost surely.
(iii) $0<P(Z=0)<1$.
\end{assumption}
Theorem \ref{theorem1-less} shows that the above observations characterize the identified set for the local average treatment effect under Assumption \ref{assumption1-less}.
\begin{theorem}\label{theorem1-less}
Suppose that Assumption \ref{assumption1-less} holds, and consider an arbitrary data distribution $P$ of $(Y,T,Z)$.
The identified set $\Theta_I(P)$ for the local average treatment effect is characterized as follows:
$\Theta_I(P)=\Theta$ if $TV_Y=0$; otherwise,
$$
\Theta_I(P)
=
\begin{cases}
\left[\Delta E_P[Y\mid Z],\frac{\Delta E_P[Y\mid Z]}{TV_Y}\right]&\mbox{ if }\Delta E_P[Y\mid Z]>0\\
\{0\}&\mbox{ if }\Delta E_P[Y\mid Z]=0\\
\left[\frac{\Delta E_P[Y\mid Z]}{TV_Y},\Delta E_P[Y\mid Z]\right]&\mbox{ if }\Delta E_P[Y\mid Z]<0.
\end{cases}
$$
\end{theorem}
The difference from Theorem \ref{theorem1} is that Theorem \ref{theorem1-less} does not depend on the measured treatment $T$.
Although it is observed in the dataset, $T$ does not have any information on the local average treatment effect because Assumption \ref{assumption1-less} does not restrict $T$.
When $TV_Y>0$, there are nontrivial upper and lower bounds on the local average treatment effect even without using the measured treatment $T$.
\section{Inference}\label{sec4}
Based on the sharp identified set in the presence of covariates (Theorem \ref{theorem1conditional}), this section constructs a confidence interval for the local average treatment effect based on an i.i.d. sample $\{W_i:1\leq i\leq n\}$ of $W=(Y,T,Z,V)$.
The confidence interval described below controls the asymptotic size uniformly over a class of data generating processes, and rejects all the fixed alternatives.
The identified set in Theorem \ref{theorem1conditional} is characterized by moment inequalities as follows.
\begin{lemma}\label{lemma3}
Let $P$ be an arbitrary data distribution of $W=(Y,T,Z,V)$.
Under Assumption \ref{assumption1condi},
$\Theta_I(P)$ is the set of $\theta\in\Theta$ in which
\begin{eqnarray}
&&E_P\left[-\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\mathrm{sgn}(\theta)Y\right]\leq 0\label{ID_Cond1}\\
&&E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\mathrm{sgn}(\theta)Y-|\theta|\right]\leq 0\label{ID_Cond2}\\
&&E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\left(|\theta|h(Y,T,V)-\mathrm{sgn}(\theta)Y\right)\right]\leq 0\mbox{ for all }h\in\mathbf{H}\label{ID_Cond3},
\end{eqnarray}
where $\pi(V)=P(Z=1\mid V)$, $\mathbf{H}$ is the set of measurable functions on $\mathbf{Y}\times\mathbf{T}\times\mathbf{V}$ taking a value in $\{-.5,.5\}$ and $\mathrm{sgn}(x)\equiv\mathbbm{1}\{x\geq 0\}-\mathbbm{1}\{x<0\}$.
\end{lemma}
I construct a $(1-\alpha)$-confidence interval for the local average treatment effect $\theta$, treating $\pi$ as a nuisance parameter, for given $\alpha\in(0,0.5)$.
I assume that a $(1-\delta)$-confidence interval $\mathcal{C}_{\pi,n}(\delta)$ for $\pi$ is available to researchers for given $\delta\in(0,\alpha)$.
Given $\mathcal{C}_{\pi,n}(\delta)$, I construct the $(1-\alpha-\delta)$-confidence interval $\mathcal{C}_{\theta,n}(\alpha+\delta)$ for the local average treatment effect as
$$
\mathcal{C}_{\theta,n}(\alpha+\delta)=\bigcup_{\pi\in\mathcal{C}_{\pi,n}(\delta)}\{\theta\in\Theta: T(\theta,\pi)\leq c(\alpha,\theta,\pi)\},
$$
where $T(\theta,\pi)$ and $c(\alpha,\theta,\pi)$ are defined below using the bootstrap-based testing \citep{chernozhukov/chetverikov/kato:2014}.
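The testing step can be sketched as follows; this is a minimal, hypothetical implementation of a one-step Gaussian-multiplier bootstrap in the spirit of \cite{chernozhukov/chetverikov/kato:2014}, not the exact procedure in the appendix.

```python
import numpy as np

def max_t_stat(m):
    """Test statistic: max over inequalities of the studentized mean.
    m is an (n, p) array of moment-function evaluations."""
    n = m.shape[0]
    return np.sqrt(n) * (m.mean(axis=0) / m.std(axis=0, ddof=1))

def multiplier_cv(m, alpha, n_boot=1000, seed=0):
    """One-step Gaussian-multiplier bootstrap critical value (sketch)."""
    rng = np.random.default_rng(seed)
    n = m.shape[0]
    centered = (m - m.mean(axis=0)) / m.std(axis=0, ddof=1)
    # each bootstrap draw multiplies the centered moments by i.i.d. N(0,1)
    draws = rng.standard_normal((n_boot, n)) @ centered / np.sqrt(n)
    return np.quantile(draws.max(axis=1), 1.0 - alpha)
```

A candidate $\theta$ is rejected when the maximum studentized moment exceeds the critical value, and the confidence interval collects the non-rejected values of $\theta$ over $\pi\in\mathcal{C}_{\pi,n}(\delta)$.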
The number of the moment inequalities in Lemma \ref{lemma3} can be finite or infinite, which determines whether some of the existing methods can be applied directly to the inference on the local average treatment effect.
When $(Y,V)$ has finite supports and therefore $\mathbf{H}$ is finite, the sharp identified set is characterized by a finite number of inequalities, and therefore I can apply inference methods based on unconditional moment inequalities.\footnote{The literature on conditional and unconditional moment inequality models is broad and growing. See \cite{canay/shaikh:2016} for a recent survey on this literature.}
To the best of my knowledge, however, inference for the local average treatment effect in my framework does not fall directly into the existing moment inequality models when either $Y$ or $V$ is continuous. When either $Y$ or $V$ is continuous, the sharp identified set is characterized by an uncountably infinite number of inequalities. In the current literature on partially identified parameters, an infinite number of moment inequalities are mainly considered in the context of conditional moment inequalities. The identified set in this paper is not characterized by conditional moment inequalities.\footnote{\cite{chernozhukov/lee/rosen:2013} also considers an infinite number of unconditional moment inequalities in which the moment functions are continuously indexed by a compact subset in a finite dimensional space. It is not straightforward to verify the continuity condition (Condition C.1 in their paper) for the moment inequalities in Lemma \ref{lemma3}, in which the moment functions would need to be continuously indexed by a compact subset of a finite dimensional space.}\footnote{\cite{andrews/shi:2016} considers an infinite number of unconditional moment inequalities in which the moment functions satisfy a manageability condition. I cannot apply their approach here because $h(Y,T,V)$ takes discrete values in $\{-.5,.5\}$ and then the packing numbers depend on the sample size.}
I consider a sequence of finite sets $\mathbf{H}_n$ which converges to $\mathbf{H}$ as the sample size increases. (The convergence is formally defined in Assumption \ref{H_conditions}, and an example of $\mathbf{H}_n$ appears after Assumption \ref{H_conditions}.)
Note that, when $\mathbf{H}$ is finite, $\mathbf{H}_n$ can be equal to $\mathbf{H}$.
If $\mathbf{H}$ is replaced with $\mathbf{H}_n$ in Lemma \ref{lemma3}, the number of the moment inequalities becomes finite.
At the same time, as $\mathbf{H}_n$ approaches $\mathbf{H}$, the approximation error from using $\mathbf{H}_n$ converges to zero, and the number of the inequalities can be increasing, in particular diverging to infinity when $\mathbf{H}$ has infinitely many elements.
The approximated identified set is characterized by a finite number of the following moment inequalities:
\begin{eqnarray}
&&E_P\left[-\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\mathrm{sgn}(\theta)Y\right]\leq 0\label{Approx1}\\
&&E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\mathrm{sgn}(\theta)Y-|\theta|\right]\leq 0\label{Approx2}\\
&&E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}\left(|\theta|h(Y,T,V)-\mathrm{sgn}(\theta)Y\right)\right]\leq 0\mbox{ for all }h\in\mathbf{H}_n.\label{Approx3}
\end{eqnarray}
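Given data and a known propensity score $\pi$, the sample analogs of (\ref{Approx1})-(\ref{Approx3}) are straightforward to evaluate; the function below is a sketch, with \texttt{h\_list} a hypothetical finite collection standing in for $\mathbf{H}_n$.

```python
import numpy as np

def moment_values(theta, y, t, z, v, pi, h_list):
    """Sample analogs of the moment inequalities; theta is in the
    estimated set only if every entry of the result is <= 0."""
    pv = pi(v)
    w = (z - pv) / (pv * (1.0 - pv))         # propensity-score weight
    s = 1.0 if theta >= 0 else -1.0          # sgn(theta)
    m = [np.mean(-w * s * y),                # lower-bound inequality
         np.mean(w * s * y - abs(theta))]    # Wald-type upper bound
    for h in h_list:                         # refinements indexed by h
        m.append(np.mean(w * (abs(theta) * h(y, t, v) - s * y)))
    return np.array(m)

# Tiny deterministic check with pi(v) = 0.5, so the weight is +2 or -2
y = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([1.0, 1.0, 0.0, 0.0])
t = np.zeros(4); v = np.zeros(4)
h = lambda y, t, v: np.where(y > 2, 0.5, -0.5)
m_hat = moment_values(1.0, y, t, z, v, lambda v: 0.5 + 0.0 * v, [h])
print(m_hat)  # [ 2. -3.  1.]
```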
Denote by $p_n$ the resulting number of moment inequalities, that is, the number of elements in $\mathbf{H}_n$ plus $2$.
Note that, when $K_n=1$, the moment inequalities in (\ref{Approx3}) are equivalent to using the Wald estimand as the upper bound for $|\theta|$. (The grid size $K_n$ is defined after Assumption \ref{H_conditions}.)
For the size $\alpha\in(0,.5)$, I construct a test statistic $T(\theta,\pi)$ and a critical value $c(\alpha,\theta,\pi)$ via the multiplier bootstrap \citep{chernozhukov/chetverikov/kato:2014} for many moment inequality models (described in Section \ref{CCKbootstrap}).\footnote{In this paper I focus on the one-step multiplier bootstrap in \cite{chernozhukov/chetverikov/kato:2014}. It is also possible to use the two- or three-step empirical/multiplier bootstrap in this paper, but I do not compare them because the comparison of these methods is beyond the scope of this paper.}
\cite{chernozhukov/chetverikov/kato:2014} studies the testing problem for moment inequality models in which the number of the moment inequalities is finite but growing.
Since the number of the moment inequalities in (\ref{Approx1})-(\ref{Approx3}) is finite but growing, their results are applicable to construct a confidence interval based on (\ref{Approx1})-(\ref{Approx3}).
\begin{assumption}\label{asympt_assumption}
Given positive constants $C_2$ and $\eta$, the class of data generating processes, denoted by $\mathcal{P}_0$, and the parameter spaces $\Theta\times\Pi$ satisfy
\begin{itemize}
\item[(i)] $\max\{E_P[Y^3]^{2/3},E_P[Y^4]^{1/2}\}<C_2$,
\item[(ii)] $\Theta\subset\mathbb{R}$ is bounded,
\item[(iii)] Each of the $p_n$ random variables inside $E_P$ in (\ref{Approx1})-(\ref{Approx3}) has a non-zero variance for every $\theta\in\Theta$,
\item[(iv)] $\liminf_{n\rightarrow\infty}\inf_{P\in\mathcal{P}_0}P(\pi(P)\in \mathcal{C}_{\pi,n}(\delta))\geq 1-\delta$,
\item[(v)] $\eta<\pi(V)<1-\eta$ for every $\pi\in\Pi$.
\end{itemize}
\end{assumption}
The first assumption (i) is a regularity condition.
The second assumption (ii) requires researchers to know ex ante upper and lower bounds on the parameter.
The third assumption (iii) guarantees that the test statistic is well-defined.
The fourth assumption (iv) is that the confidence interval for $\pi$ controls the size uniformly over $\mathcal{P}_0$.
The last assumption (v) is that the propensity score $\pi(v)=P(Z=1\mid V=v)$ is bounded away from zero and one.
In this paper I assume that $\{\mathbf{H}_n\}$ satisfies the following conditions.
\begin{assumption}\label{H_conditions}
(i) $\mathbf{H}_n\subset\mathbf{H}_{n+1}$. (ii) The convergence
\begin{equation}\label{approx_vanish}
\sup_{h\in\mathbf{H}}E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}h(Y,T,V)\right]-\max_{h\in\mathbf{H}_n}E_P\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}h(Y,T,V)\right]\rightarrow 0
\end{equation}
holds uniformly over $\pi\in\Pi$ and $P\in\mathcal{P}_0$.
(iii) The number of elements in $\mathbf{H}_n$ satisfies
\begin{equation}\label{number_moments}
\log^{7/2}(p_nn)\leq C_1n^{1/2-c_1}\mbox{ and }\log^{1/2}p_n\leq C_1n^{1/2-c_1}
\end{equation}
for some $c_1\in (0,1/2)$ and $C_1>0$.
\end{assumption}
An example of $\mathbf{H}_n$ is obtained by discretizing the space $\mathbf{Y}\times\mathbf{T}\times\mathbf{V}$.
Consider a partition $I_{n,1},\ldots,I_{n,K_n}$ over $\mathbf{Y}\times\mathbf{T}\times\mathbf{V}$, in which the intervals $I_{n,k}$ and the grid size $K_n$ depend on the sample size $n$.
Let $h_{n,j}$ be a generic function of $\mathbf{Y}\times\mathbf{T}\times\mathbf{V}$ into $\{-.5,.5\}$ that is constant over $I_{n,k}$ for every $1\leq k\leq K_n$.
Let $\mathbf{H}_n=\{h_{n,1},\ldots,h_{n,2^{K_n}}\}$ be the set of all such functions.
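For small $K_n$, the set $\mathbf{H}_n$ can be enumerated explicitly; the snippet below is a hypothetical construction that, for simplicity, partitions only the $y$-coordinate and lists all $2^{K_n}$ cell-wise sign functions.

```python
from itertools import product
import numpy as np

def build_h_n(cell_edges):
    """Enumerate all functions into {-0.5, 0.5} that are constant on the
    K_n intervals of y defined by the increasing break points cell_edges."""
    k = len(cell_edges) + 1          # K_n cells
    h_list = []
    for signs in product([-0.5, 0.5], repeat=k):
        signs = np.array(signs)
        # t and v are accepted for interface compatibility but unused here
        h_list.append(lambda y, t, v, s=signs:
                      s[np.searchsorted(cell_edges, y)])
    return h_list

h_n = build_h_n([0.5])   # K_n = 2 cells: y <= 0.5 and y > 0.5
print(len(h_n))          # 4 = 2^2 functions
```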
Lemma \ref{H_example} shows that this construction of $\mathbf{H}_n$ satisfies Eq. (\ref{approx_vanish}) under conditions on $I_{n,k}$ and $f_{(Y,T)\mid Z=z,V}$.
The conditions in Lemma \ref{H_example} guarantee that the approximation error from the discretization vanishes as the sample size $n$ increases.
It is worthwhile to mention that, when $K_n=1$, the implied upper bound in (\ref{Approx3}) is equal to the Wald estimand.
It can be smaller than the Wald estimand as long as $K_n\geq 2$.
\begin{lemma}\label{H_example}
Assumption \ref{H_conditions} holds if
\begin{itemize}
\item[(i)] the partition $I_{n+1,1},\ldots,I_{n+1,K_{n+1}}$ is a refinement of the partition $I_{n,1},\ldots,I_{n,K_n}$;
\item[(ii)] $p_n=2^{K_n}+2$ satisfies (\ref{number_moments});
\item[(iii)] there is a positive constant $D_1$ such that $I_{n,k}$ is a subset of some open ball with radius $D_1/K_n$ in $\mathbf{Y}\times\mathbf{T}\times\mathbf{V}$; and
\item[(iv)] the density function $f_{(Y,T)\mid Z=z,V}$ is H\"older continuous in $(y,t,v)$ with the H\"older constant $D_0$ and exponent $d$.
\end{itemize}
\end{lemma}
Theorem \ref{fixed_alternative} shows asymptotic properties of the confidence interval $\mathcal{C}_{\theta,n}(\alpha+\delta)$. The first result (i) is the uniform asymptotic size control and the second result (ii) is the consistency against all the fixed alternatives.
\begin{theorem}\label{fixed_alternative}
Suppose that Assumptions \ref{asympt_assumption} and \ref{H_conditions} hold.
(i) The confidence interval controls the asymptotic size uniformly:
$$
\liminf_{n\rightarrow\infty}\inf_{P\in\mathcal{P}_0, \theta\in\Theta_I(P)}P(\theta\in \mathcal{C}_{\theta,n}(\alpha+\delta))\geq 1-\alpha-\delta.
$$
(ii) If Eq. (\ref{approx_vanish}) holds, the confidence interval excludes all the fixed alternatives:
$$
\lim_{n\rightarrow\infty}P(\theta\in \mathcal{C}_{\theta,n}(\alpha+\delta))=0\mbox{ for every }(\theta,P)\in\Theta\times\mathcal{P}_0\mbox{ with }\theta\not\in\Theta_I(P).
$$
\end{theorem}
\section{Empirical illustrations}\label{sec5.1}
This section studies the effects of 401(k) participation on financial savings using the inference method in Section \ref{sec4}.
I introduce a measurement error problem to the analysis of \cite{abadie:2003}, which investigates the local average treatment effect using eligibility for the 401(k) program.
The robustness to misclassification is empirically relevant, because the retirement pension plan type is subject to a measurement error in survey datasets.
Using the Health and Retirement Study, for example, \cite{gustman/steinmeier/tabatabai:2007} estimate that around one fourth of the survey respondents misclassified their pension plan type.
The dataset in my analysis is from the Survey of Income and Program Participation (SIPP) of 1991. It has been used in various analyses, e.g., \cite{poterba/venti/wise:1995} and \cite{abadie:2003}.
I follow the data construction in \cite{abadie:2003}.
The sample consists of households in which at least one person is employed, that have no income from self-employment, and whose annual family income is between \$10,000 and \$200,000.
The resulting sample size is 9,275.
The outcome variable $Y$ is the net financial assets, the measured treatment variable $T$ is the self-reported participation in 401(k), $Z$ is the eligibility for 401(k) and $R$ is the participation in an individual retirement account (IRA). The control variables $V$ include a constant, family income, age and its square, marital status, and family size. I compute the summary statistics for these variables in Table \ref{table_summary}.
The 401(k) participation can be endogenous, because participants in 401(k) might be more informed or plan more about retirement savings than non-participants.
To control for the endogeneity problem, this paper uses 401(k) eligibility as an instrumental variable.
I use the linear probability model for the regression of the instrumental variable $Z$ on the control variables $V$, that is,
$E[Z\mid V]=\pi(V)=V'\beta_{\pi}$.
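The linear probability model can be fit by least squares; the snippet below is a sketch on synthetic data (the coefficients and covariate design are hypothetical, not the empirical estimates).

```python
import numpy as np

def fit_lpm(z, v):
    """Linear probability model pi(V) = V' beta via least squares."""
    beta, *_ = np.linalg.lstsq(v, z, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 1000
v = np.column_stack([np.ones(n), rng.normal(size=n)])     # constant + one covariate
z = (rng.uniform(size=n) < 0.3 + 0.1 * v[:, 1]).astype(float)
beta = fit_lpm(z, v)
pi_hat = v @ beta        # fitted propensity scores pi(V)
print(beta)              # roughly [0.3, 0.1]
```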
For a comparison purpose, I compute the Wald estimator, $\$16,290$, with a 95\% bootstrapped confidence interval $[5,976,\ 27,611]$.\footnote{The Wald estimator here is the sample analogue of $E\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}Y\right]/E\left[\frac{Z-\pi(V)}{\pi(V)(1-\pi(V))}T\right]$ with $\pi$ being estimated in the linear probability model.}
The intent-to-treat effect $\Delta E[Y\mid Z]$ is estimated as $\$10,981$ with a 95\% bootstrapped confidence interval $[4,169,\ 18,558]$.
Tables \ref{table_empri1}-\ref{table_empri3} show the confidence intervals for the local average treatment effect under different assumptions (Theorems \ref{theorem1}, \ref{theorem4}, and \ref{theorem1-less}, respectively).\footnote{I use 2000 draws for the bootstrap and set $\beta=0.1\%$ for the moment selection and $\delta=1\%$ for the estimation of $\beta_{\pi}$. See Appendix A for the multiplier bootstrap critical value.}
The confidence intervals in these tables are robust to a misclassification of the treatment variable.
They are wider than the 95\% confidence interval $[5,976,\ 27,611]$ for the Wald estimator, but are in general comparable to the confidence interval for the Wald estimator.
The confidence intervals in this exercise do not shrink as $K_n$ increases from $1$ to $4$. (Note that, when $K_n=1$, the moment inequalities in (\ref{Approx3}) are equivalent to using the Wald estimand as the upper bound for $|\theta|$.)
This is possibly because the data generation process does not violate the conditions in (\ref{TESTABLEIMPLICAT}) to a large extent, and therefore the Wald estimand is close (even if not equal) to the sharp upper bound for the local average treatment effect.
Table \ref{table_empri2} summarizes the confidence intervals with the IRA participation $R$ as an additional measurement, as discussed in Theorem \ref{theorem4}.
It shows similar values to Table \ref{table_empri1}, which can be interpreted as indicating that the IRA participation $R$ has only little identifying power on the local average treatment effect in this empirical exercise.
Table \ref{table_empri3} summarizes the confidence intervals without using the measured treatment $T$, as in Theorem \ref{theorem1-less}.
The lower bound of the confidence intervals does not change from those in Table \ref{table_empri1}, because the lower bound of the identified set does not change without the information from the measured treatment $T$.
The upper bound is 3-4 times larger than those in Table \ref{table_empri1}, which is the cost of not using $T$.\footnote{When $K_n=1$, $TV_Y$ becomes zero and then there is no finite upper bound on the local average treatment effect.}
\begin{table}
\centering
\begin{tabular}{l|cc}
\hline
\hline
&mean&standard deviation\\
\hline
$Y$: family net financial assets & 19,071 & 63,963\\
$T$: 401(k) participation & 0.2762 & 0.4472\\
$Z$: 401(k) eligibility & 0.3921 & 0.4883\\
$R$: IRA participation & 0.2543 & 0.4355\\
\hline
\end{tabular}
\caption{Summary statistics for $Y$, $T$, $Z$, and $R$}
\label{table_summary}
\bigskip\bigskip
\begin{tabular}{c|rlrl}
\hline
$K_n$ & \multicolumn{2}{c}{90\% CI} & \multicolumn{2}{c}{95\% CI} \\
\hline
1 & [4,899 & 26,741] & [3,626& 28,945]\\
2 & [4,883 & 26,994] & [3,619& 29,077]\\
3 & [4,860 & 27,002] & [3,596& 29,124]\\
4 & [4,851 & 27,171] & [3,595& 29,302]\\
\hline
\end{tabular}
\caption{90\% and 95\% confidence intervals for the local average treatment effect for different $K_n$. (Based on Theorem \ref{theorem1})}
\label{table_empri1}
\bigskip\bigskip
\begin{tabular}{c|rlrl}
\hline
$K_n$ & \multicolumn{2}{c}{90\% CI} & \multicolumn{2}{c}{95\% CI} \\
\hline
1 & [4,899 & 26,994] & [3,619& 29,241]\\
2 & [4,859 & 27,148] & [3,602& 29,273]\\
3 & [4,830 & 27,396] & [3,590& 29,331]\\
4 & [4,832 & 27,581] & [3,582& 29,488]\\
\hline
\end{tabular}
\caption{90\% and 95\% confidence intervals with using $R$ as in Theorem \ref{theorem4}}
\label{table_empri2}
\bigskip\bigskip
\begin{tabular}{c|rlrl}
\hline
$K_n$ & \multicolumn{2}{c}{90\% CI} & \multicolumn{2}{c}{95\% CI} \\
\hline
1 & [4,912 & Inf] & [3,647& Inf]\\
2 & [4,889 & 79,281 ] & [3,625& 85,330]\\
3 & [4,876 & 102,242] & [3,614& 109,829]\\
4 & [4,878 & 123,440] & [3,618& 132,642]\\
\hline
\end{tabular}
\caption{90\% and 95\% confidence intervals without $T$ as in Theorem \ref{theorem1-less}}
\label{table_empri3}
\end{table}
\newpage
\section{Numerical example and Monte Carlo simulations}\label{sec5}
This section considers a numerical example to illustrate the theoretical properties in the previous section.
I consider the following data generating process:
\begin{eqnarray*}
Z&\sim&Bernoulli(0.5)\\
T^\ast&=&1\{-3/4+1/2Z+U_1\geq 0\}\\
Y&=&2T^\ast+\Phi(U_2)\\
T&=&T^\ast+(1-2T^\ast)1\{U_3\leq\gamma\}
\end{eqnarray*}
where $\Phi$ is the standard normal cdf and, conditional on $Z$, $(U_1,U_2,U_3)$ is drawn from the Gaussian copula with the correlation matrix $$\left(\begin{array}{ccc}1&0.25&0.25\\ 0.25&1&0.25\\ 0.25&0.25&1\end{array}\right).$$
I set $\gamma=0, 0.2, 0.4$, which captures the degree of the misclassification.
In this design, the treatment variable is endogenous since $U_1$ and $U_2$ are correlated.
In addition, the misclassification is endogenous in that $U_2$ and $U_3$ are correlated.
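The data generating process can be simulated directly; the snippet below interprets the Gaussian copula as correlated standard normals transformed to uniforms by $\Phi$, and otherwise follows the displayed equations with $\gamma=0.2$ (a sketch of the simulation design, not the exact implementation used for the tables).

```python
from math import erf, sqrt
import numpy as np

def Phi(x):
    """Standard normal cdf, vectorized via math.erf."""
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(x, float) / sqrt(2.0)))

rng = np.random.default_rng(0)
n, gamma = 20_000, 0.2
corr = np.full((3, 3), 0.25)
np.fill_diagonal(corr, 1.0)
L = np.linalg.cholesky(corr)

z = rng.integers(0, 2, n).astype(float)        # Z ~ Bernoulli(0.5)
u = Phi(rng.standard_normal((n, 3)) @ L.T)     # copula uniforms (U1, U2, U3)
t_star = (-0.75 + 0.5 * z + u[:, 0] >= 0).astype(float)
y = 2.0 * t_star + Phi(u[:, 1])
t = t_star + (1.0 - 2.0 * t_star) * (u[:, 2] <= gamma)

print(np.mean(t != t_star))   # misclassification rate, roughly gamma
```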
Table \ref{table_poppara} lists the three population objects: the local average treatment effect, the Wald estimand, and the identified set for the local average treatment effect.
Note that, unless $\gamma=0$, the distribution for $(Y,T,Z)$ violates the conditions in (\ref{TESTABLEIMPLICAT}).
When there is no measurement error, the sharp upper bound is equal to the Wald estimand, which is the case for $\gamma=0$.
When there is a measurement error, the sharp upper bound for the local average treatment effect can be smaller than the Wald estimand.
In order to focus on the finite sample properties of the test $\mathbbm{1}\{T(\theta,\pi)>c(\alpha,\theta,\pi)\}$, I only evaluate coverage probabilities given $\pi=0.5$ for various values of $\theta$.
The partition of grids is equally spaced over $\mathbf{Y}$ with the number of partitions $K_n=1,\ldots,4$.
Coverage probabilities are calculated as how often the $95\%$ confidence interval includes a given parameter value out of $1000$ simulations.
The sample size is $n=500$ for Monte Carlo simulations. I use $1000$ bootstrap repetitions to construct critical values.
I set $\beta=0.1\%$ for the moment selection.
Figures \ref{fig1}-\ref{fig3} describe the coverage probabilities of the confidence intervals for each parameter value.
When the degree of measurement error is zero ($\gamma=0$), the confidence interval with $K_n=1$ has slightly better power than those with $K_n\geq 2$. This may be because the number of moment inequalities is larger for $K_n\geq 2$ and then the critical value is bigger.
As the degree of measurement error becomes larger, the power of the confidence intervals with $K_n\geq 2$ becomes better than that with $K_n=1$. This results from the fact that the sharp upper bound for the local average treatment effect is smaller than the Wald estimand.
Next, I investigate the identifying power of an additional measurement $R$, generated as
$$
R=T^\ast+(1-2T^\ast)1\{U_4\leq\gamma\}
$$
where $(U_1,U_2,U_3,U_4)$ is drawn from the Gaussian copula with the correlation matrix
$$\left(\begin{array}{cccc}1&0.25&0.25&0.25\\ 0.25&1&0.25&0.25\\ 0.25&0.25&1&0.25\\ 0.25&0.25&0.25&1\end{array}\right).$$
Table \ref{table_poppara2} lists the three population objects: the local average treatment effect, the Wald estimand, and the identified set for the local average treatment effect.
Figures \ref{fig4}-\ref{fig6} describe the coverage probabilities of the confidence intervals for each parameter value.
The comparison among different $K_n$'s are similar to the previous figures.
Last, I consider the dependence between measurement error and instrumental variable, as in Section \ref{sec3Less}.
Table \ref{table_poppara3} lists the three population objects and Figures \ref{fig7}-\ref{fig9} describe the coverage probabilities of the confidence intervals.
Since they do not use any information from the measured treatment $T$, the identified sets and the confidence intervals show that the upper bounds on the local average treatment effect are larger than those under the independence between measurement error and instrumental variable.
The difference becomes smaller when the degree of the measurement error is larger.
This can be interpreted as follows: when the misclassification happens too often, the measured treatment $T$ has only little information about the true treatment, and therefore there is a small difference between the identified sets.
\begin{table}
\centering
\begin{tabular}{l|ccc}
\hline
\hline
$\gamma$ & LATE & Identified set & Wald estimand \\
\hline
$0$ & 2.00 & [1,\ 2.00] & 2.00 \\
$0.2$ & 2.00 & [1,\ 2.41] & 3.01 \\
$0.4$ & 2.00 & [1,\ 2.64] & 8.72\\
\hline
\end{tabular}
\caption{Population parameters (Theorem \ref{theorem1})}
\label{table_poppara}
\bigskip\bigskip
\begin{tabular}{l|ccc}
\hline
\hline
$\gamma$ & LATE & Identified set & Wald estimand \\
\hline
$0$ & 2.00 & [1,\ 2.00] & 2.00 \\
$0.2$ & 2.00 & [1,\ 2.26] & 3.01 \\
$0.4$ & 2.00 & [1,\ 2.62] & 8.72\\
\hline
\end{tabular}
\caption{Population parameters with using $R$ (Theorem \ref{theorem4})}
\label{table_poppara2}
\bigskip\bigskip
\begin{tabular}{l|ccc}
\hline
\hline
$\gamma$ & LATE & Identified set & Wald estimand \\
\hline
$0$ & 2.00 & [1,\ 2.67] & 2.00 \\
$0.2$ & 2.00 & [1,\ 2.67] & 3.01 \\
$0.4$ & 2.00 & [1,\ 2.68] & 8.72\\
\hline
\end{tabular}
\caption{Population parameters without $T$ (Theorem \ref{theorem1-less}) }
\label{table_poppara3}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig1_2.png}
\caption{Coverage of the confidence interval (Theorem \ref{theorem1}) for $\gamma=0$.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig2_2.png}
\caption{Coverage of the confidence interval (Theorem \ref{theorem1}) for $\gamma=0.2$.}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig3_2.png}
\caption{Coverage of the confidence interval (Theorem \ref{theorem1}) for $\gamma=0.4$.}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig1_3.png}
\caption{Coverage of the confidence interval with using $R$ (Theorem \ref{theorem4}) for $\gamma=0$.}
\label{fig4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig2_3.png}
\caption{Coverage of the confidence interval with using $R$ (Theorem \ref{theorem4}) for $\gamma=0.2$.}
\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig3_3.png}
\caption{Coverage of the confidence interval with using $R$ (Theorem \ref{theorem4}) for $\gamma=0.4$.}
\label{fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig1_1.png}
\caption{Coverage of the confidence interval without $T$ (Theorem \ref{theorem1-less}) for $\gamma=0$.}
\label{fig7}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig2_1.png}
\caption{Coverage of the confidence interval without $T$ (Theorem \ref{theorem1-less}) for $\gamma=0.2$.}
\label{fig8}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,keepaspectratio]{fig3_1.png}
\caption{Coverage of the confidence interval without $T$ (Theorem \ref{theorem1-less}) for $\gamma=0.4$.}
\label{fig9}
\end{figure}
\newpage
\section{Conclusion}\label{sec6}
This paper studies the identifying power of an instrumental variable in the heterogeneous treatment effect framework when a binary treatment is mismeasured and endogenous.
The assumptions in this framework are the monotonicity of the instrumental variable $Z$ on the true treatment $T^\ast$ and the exogeneity of $Z$.
I use the total variation distance to characterize the identified set for the local average treatment effect $E[Y_1-Y_0\mid T_0^\ast<T_1^\ast]$.
I also provide an inference procedure for the local average treatment effect.
Unlike the existing literature on measurement error, the identification strategy does not rely on a specific structure of the measurement error; the only assumption on the measurement error is its independence of the instrumental variable.
There are several directions for future research.
First, the choice of the partition $\mathbf{I}_n$ in Section \ref{sec4}, particularly the choice of $K_n$, is an interesting direction.
To the best of my knowledge, the literature on many moment inequalities has not investigated how econometricians should choose the number of moment inequalities; see, e.g., \cite{andrews/shi:2016}.
Second, it may be worthwhile to investigate the other parameter for the treatment effect.
This paper has focused on the local average treatment effect, but the literature on heterogeneous treatment effects has emphasized the importance of choosing an adequate treatment effect parameter in order to answer relevant policy questions.
Third, it is also interesting to investigate various assumptions on the measurement errors.
In some empirical settings, for example, it may be reasonable to assume that the measurement error is one-directional (e.g., misclassification happens only when $T^\ast=1$).
Fourth, it is not trivial how the analysis of this paper can be extended to an instrumental variable taking more than two values. For a general instrumental variable, it is always possible to focus on two values of the instrumental variable and apply the analysis of this paper to the subpopulation with the instrumental variable taking these two values.
However, different pairs of values can have different compliers, so that the parameter of interest is not common across pairs, as in \cite{heckman/vytlacil:2005}.
\section{Introduction}
The study of the phase diagram of QCD is very important for astrophysics, cosmology, and particle physics, yet the phase diagram is still incompletely understood. The reason is that quarks and gluons constitute a strongly interacting system, which complicates the use of analytical methods. One of the possible approaches to the study of such a system is lattice QCD (LQCD), where functional integrals are numerically evaluated by the Monte Carlo method. Calculations in LQCD allow an ab initio study of the properties of the quark-gluon plasma (QGP)~\cite{bib:A_Bazavov}.
However, calculations in LQCD at finite chemical potential are currently impossible due to the sign problem of the fermionic determinant. We therefore study a simpler theory, a QFT with the $SU(2)$ gauge group and two degenerate flavors, where no sign problem arises~\cite{bib:JB_Kogut_sign,bib:T_Makiyama,bib:S_Hands_sign}, in order to better understand the qualitative features of the QCD phase diagram and to analyze the effect of a non-zero chemical potential on the properties of the QGP. The point is that $QC_2D$ obeys a specific relation for the Dirac operator~\cite{bib:JB_Kogut_sign,bib:T_Makiyama}:
\begin{eqnarray}\label{Dirac_relation}
\det\Bigl[ M(\mu_q) \Bigr] = \det\Bigl[ \left( \tau_2 C \gamma_5 \right)^{-1} M(\mu_q) \left( \tau_2 C \gamma_5 \right) \Bigr] = \det\Bigl[M(\mu_q^*)\Bigr]^*,
\end{eqnarray}
where $M(\mu_q) = \gamma_\mu D_\mu + m_q - \mu_q \gamma_4$ is the Dirac operator in the continuum $R^4$, $\mu_q = \mu_B / 2$ is the quark chemical potential, $C = \gamma_2 \gamma_4$, and $\tau_2$ is a generator of the $SU(2)$ group. Relation~(\ref{Dirac_relation}) guarantees that $\det\bigl[ M(\mu_q) \bigr]$ is real for real $\mu_q$. One can also prove~\cite{bib:S_Hands_sign} that the spectrum of $M(\mu_q)^\dagger M(\mu_q)$ at finite real $\mu_q$ is strictly positive, both in the continuum and for the lattice formulation~(\ref{Dirac_operator}), if the quark mass is non-zero.
Our aim is to study the influence of the chemical potential on the Polyakov loop, the chiral condensate, and the baryon number density. It is especially interesting to understand the effect of a non-zero baryon density on the breaking/restoration of the chiral symmetry. Similar investigations were performed in~\cite{bib:T_Makiyama,bib:S_Cotter,bib:JI_Skullerud} for $N_f = 2$ with Wilson fermions and in~\cite{bib:JB_Kogut_1,bib:JB_Kogut_2} with $N_f = 4$ and 8 flavors of staggered fermions, respectively. However, Wilson fermions explicitly violate the chiral symmetry~\cite{bib:Gattringer_Lang}, so they may not reveal all the phase transition lines in the $QC_2D$ phase diagram. In this paper we consider $N_f=2$ flavors of staggered fermions, taking the fourth root of $\det\bigl[ M(\mu_q)^\dagger M(\mu_q) \bigr]$ via the R-algorithm~\cite{bib:HJ_Rothe}.
\section{Lattice formulation}
The partition function of the system under study has the form:
\begin{eqnarray}\label{Z}
Z = \int DU\,\det\Bigl[M^\dagger(\mu_q) M(\mu_q) \Bigr]^{ \frac{1}{4} } e^{- S_G[U]},
\end{eqnarray}
where the functional integration is performed over the $SU(2)$ group manifold, $M(\mu_q)$ is the lattice Dirac operator for
Kogut-Susskind fermions with the baryon chemical potential, and $S_G[U]$ is the Wilson gauge action~\cite{bib:KG_Wilson}:
\begin{eqnarray}\label{S_G}
S_G = \beta \sum_{x}\sum_{\mu < \nu = 1}^4 \Bigl(1 - \frac{1}{2} {\rm Tr} \, U_{x, \mu\nu} \Bigr).
\end{eqnarray}
Here $\beta = \frac{4}{g^2}$, and $U_{x, \mu\nu} = U_{x, \mu} U_{x + \hat{\mu}, \nu} U^\dagger_{x + \hat{\nu}, \mu} U^\dagger_{x, \nu}$. The lattice Dirac operator $M(\mu_q)$ in~(\ref{Z}) has the form:
\begin{eqnarray}\label{Dirac_operator}
M_{xy} = ma\delta_{xy} + \frac{1}{2}\sum_{\mu = 1}^4 \eta_\mu(x)\Bigl[ U_{x, \mu}\delta_{x + \hat{\mu}, y}e^{\mu_q a\delta_{\mu, 4}} - U^\dagger_{x - \hat{\mu}, \mu}\delta_{x - \hat{\mu}, y}e^{- \mu_q a\delta_{\mu, 4}} \Bigr],
\end{eqnarray}
where $a$ is the lattice spacing, $m$ is the mass of the quark, and the functions $\eta_1(x) = 1,\, \eta_2(x) = (-1)^{x_1},\, \eta_3(x) = (-1)^{x_1 + x_2},\, \eta_4(x) = (-1)^{x_1 + x_2 + x_3}$ are the $\gamma$-matrices after the Kogut-Susskind transformation.
The chemical potential $\mu_q$ is introduced in~(\ref{Dirac_operator}) by multiplying the time links by the exponential factors $e^{\pm \mu_q a}$. This way of introducing the chemical potential makes it possible to avoid additional divergences and reproduces the known result for free fermions in the continuum limit~\cite{bib:RV_Gavai}.
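As a minimal numerical cross-check of the reality of the determinant and the positivity of the spectrum of $M^\dagger M$ discussed above, one can assemble the staggered operator for a random $SU(2)$ configuration on a tiny lattice. This sketch is not the production code: the $2^4$ lattice, periodic boundary conditions (the thermal theory would use antiperiodic time boundaries), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dims = (2, 2, 2, 2)                 # tiny 2^4 lattice, periodic BCs (demo only)
V = int(np.prod(dims))
ma, mu_a = 0.1, 0.3                 # illustrative mass and chemical potential

# Pauli matrices; a random SU(2) link is U = a0*1 + i*(a . sigma), |a| = 1
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[k + 1] * sig[k] for k in range(3))

U = {(x, mu): random_su2() for x in np.ndindex(*dims) for mu in range(4)}

def eta(x, mu):                     # Kogut-Susskind phases eta_mu(x)
    return (-1) ** sum(x[:mu])

# Assemble M_xy per the staggered operator with e^{+-mu_q a} on time links
M = np.zeros((2 * V, 2 * V), complex)
for x in np.ndindex(*dims):
    i = np.ravel_multi_index(x, dims)
    M[2*i:2*i+2, 2*i:2*i+2] += ma * np.eye(2)
    for mu in range(4):
        fwd = np.exp(mu_a) if mu == 3 else 1.0
        bwd = np.exp(-mu_a) if mu == 3 else 1.0
        xp = tuple((x[k] + (k == mu)) % dims[k] for k in range(4))
        xm = tuple((x[k] - (k == mu)) % dims[k] for k in range(4))
        j, l = np.ravel_multi_index(xp, dims), np.ravel_multi_index(xm, dims)
        M[2*i:2*i+2, 2*j:2*j+2] += 0.5 * eta(x, mu) * fwd * U[x, mu]
        M[2*i:2*i+2, 2*l:2*l+2] += -0.5 * eta(x, mu) * bwd * U[xm, mu].conj().T

sign, _ = np.linalg.slogdet(M)      # sign = det M / |det M|
lam_min = np.linalg.eigvalsh(M.conj().T @ M).min()
print(abs(sign.imag), lam_min)      # det M real; spectrum of M^dag M positive
```

The reality of $\det M$ here follows from the pseudo-reality of $SU(2)$, $\sigma_2 U^* \sigma_2 = U$, which is the lattice counterpart of the relation quoted in the introduction.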
\begin{figure}[t]
\begin{center}
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width = 1.0\textwidth]{P_loop.eps}
\caption{Polyakov loop as a function of $T$ for three values of the baryon chemical potential.}
\label{fig:P_loop}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width = 1.0\textwidth]{ch_cond_all.eps}
\caption{Chiral condensate as a function of $T$ for three values of the baryon chemical potential. The ordinate axis
is given on a logarithmic scale.}
\label{fig:Ch_cond}
\end{minipage}
\end{center}
\end{figure}
For partition function~(\ref{Z}) to correspond in the continuum limit to two dynamical quark flavors, we extract the fourth root of the fermionic determinant using a rational approximation with an accuracy of $O(10^{-15})$~\cite{bib:MA_Clark}. Configurations were generated by means of the hybrid Monte Carlo method; the $\Phi$-algorithm~\cite{bib:Gattringer_Lang} was employed. We considered a $16^3 \times 6$ lattice with the bare fermion mass $ma = 0.01$, $\beta = 1.6 \ldots 2.7$, and $\mu_q a = 0.0 \ldots 0.6$ (for each set of parameters, 400 independent configurations were generated). The program code was written using CUDA. The calculations were performed on the ITEP supercomputer and the IHEP cluster.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.75\textwidth]{bar_density_on_mu_conf.eps}
\caption{Baryon number density as a function of~$\mu_q$ in the low-temperature phase ($\beta = 1.6$). A linear fit is shown.}
\label{fig:Bar_density}
\end{center}
\end{figure}
\section{Numerical results and discussion}
To study the physical properties of the system, we considered the following observables (angular brackets denote thermodynamic averaging):
\begin{itemize}
\item Polyakov loop:
\begin{eqnarray}\label{Pol_loop}
\< L \> = \frac{1}{N_s^3} \sum_{x_1,x_2,x_3 = 0}^{N_s - 1} \frac{1}{2} \< {\rm Tr} \, \prod_{x_4 = 0}^{N_\tau - 1} U_{x, 4} \> ;
\end{eqnarray}
\item chiral condensate:
\begin{eqnarray}\label{chiral_condensate}
a^3\<\overline{\psi}\psi\> = - \frac{1}{N_s^3 N_\tau}\frac{1}{8}\frac{\partial}{\partial (ma)} \log Z = \frac{1}{N_s^3 N_\tau}\frac{1}{8} \< {\rm Tr} \, M^{-1} + {\rm Tr} \, (M^\dagger)^{-1} \> ;
\end{eqnarray}
\item baryon number density:
\begin{eqnarray}\label{bar_number}
a^3 \<n_B\> = \frac{1}{N^3_s N_\tau} \frac{1}{16} \frac{\partial (\log Z)}{\partial (\mu_q a)} = \frac{1}{N^3_s N_\tau} \frac{1}{8} \< {\rm Re} \, {\rm Tr} \, \Bigl[ \frac{\partial M}{\partial (\mu_q a)} M^{-1} \Bigr] \> .
\end{eqnarray}
\end{itemize}
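For a single gauge configuration, the Polyakov loop above is the normalized trace of the ordered product of time-like links, averaged over spatial sites. A minimal sketch follows; the array layout and the "cold" test configuration (all time links set to the identity) are illustrative choices, not the production code.

```python
import numpy as np

Ns, Nt = 4, 6
# Cold demo configuration: every time-like link is the identity, for which
# the ordered product P is the identity and (1/2) Tr P = 1 at every site.
U4 = np.tile(np.eye(2, dtype=complex), (Ns, Ns, Ns, Nt, 1, 1))

def polyakov_loop(U4):
    Ns, Nt = U4.shape[0], U4.shape[3]
    acc = 0.0
    for x1 in range(Ns):
        for x2 in range(Ns):
            for x3 in range(Ns):
                P = np.eye(2, dtype=complex)
                for x4 in range(Nt):          # ordered product over time
                    P = P @ U4[x1, x2, x3, x4]
                acc += 0.5 * np.trace(P).real
    return acc / Ns**3                        # spatial-volume average

L_cold = polyakov_loop(U4)                    # -> 1.0 for the cold configuration
```

In a real measurement the same routine would be applied to each thermalized configuration and the results averaged.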
In order to fix the scale we employed the Sommer parameter $r_0 = 0.468(4)$~fm~\cite{bib:A_Bazavov} and performed the measurements on $16^3 \times 32$ lattices with $m_q a = 0.01$. Lattice spacings and pion masses are listed in Table~\ref{tabular:pion_masses}.
\begin{table}[b!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\beta$ & $a$, fm & $M_{\pi}$, MeV \\
\hline
1.7 & 0.45(1) & 109(3) \\
1.9 & 0.20(1) & 216(6) \\
2.1 & 0.135(2) & 430(13) \\
2.2 & 0.097(1) & 551(16) \\
\hline
\end{tabular}
\end{center}
\caption{Lattice spacings and pion masses.}
\label{tabular:pion_masses}
\end{table}
Figure~\ref{fig:P_loop} shows the dependence of the Polyakov loop on temperature for three values of $\mu_q$; a crossover phase transition is observed. It may also be seen in the figure that an increase in the baryon chemical potential results in a slight increase in $\< L \>$ at the same temperature. However, the susceptibilities of $\< L \>$ cannot be measured with the existing statistics, so the influence of the baryon chemical potential on $T_c$ cannot be determined.
Figure~\ref{fig:Ch_cond} shows the dependence of the chiral condensate on $T$ for various values of $\mu_q$. It can be seen that an increase in the baryon chemical potential leads to a significant decrease in $\< \overline{\psi} \psi \>$, i.e., to the restoration of the chiral symmetry. These results are in agreement with the previous results obtained for Wilson fermions~\cite{bib:S_Cotter} and with the outcomes of~\cite{bib:JB_Kogut_ChPT}. Figure~\ref{fig:Bar_density} shows the dependence of the baryon number density on $\mu_q$ in the confinement phase. One can see that at $\mu_q \approx m_\pi / 2$ it begins to rise linearly, and at larger $\mu_q$ the growth becomes nonlinear. Such a behaviour agrees well with the ChPT predictions~\cite{bib:JB_Kogut_ChPT,bib:DT_Son_ChPT}.
\begin{acknowledgments}
The authors are grateful to M.~M{\"u}ller-Preussker and V.~G.~Bornyakov for stimulating discussions. Numerical calculations were performed at the ITEP systems "Graphyn" and "Stakan" (authors are much obliged to A.~V.~Barylov, A.~A.~Golubev, V.~A.~Kolosov, I.~E.~Korolko and M.~M.~Sokolov for the help) and at the IHEP cluster (authors are much obliged to V.~V.~Kotlyar for the help). This work was supported by the Russian Foundation for Basic Research (projects nos. 14-02-01185-a, 15-02-07596-a, and 15-32-21117 mol\_a\_ved), by the Council of the President of the Russian Federation for Support of Young Scientists and Leading Scientific Schools (project no. MD-3215.2014.2), and by the FAIR-Russia Research Centre. The work of A.~Yu.~Kotov was supported by the Dynasty Foundation.
\end{acknowledgments}
\section{Introduction}
The Gamow-Teller (GT) properties of nuclei in the medium mass
region of the periodic
table are crucial determinants of
the precollapse evolution of a supernova~\cite{bethe:1992}.
The core of a massive
star at the end of hydrostatic burning is stabilized by electron degeneracy
pressure as long as its mass does not exceed the appropriate Chandrasekhar
mass $M_{CH}$. If the core mass exceeds $M_{CH}$, electrons are captured by
nuclei. For many of the nuclei that determine the electron
capture rate in this early stage of the presupernova \cite{aufderheide:1994},
Gamow-Teller (GT) transitions contribute significantly.
Due to insufficient experimental information, the GT$_+$
transition rates have so far been treated only qualitatively in
collapse simulations, assuming the GT$_+$ strength to reside in a single
resonance whose energy relative to the daughter ground state has been
parametrized phenomenologically \cite{FFN:1980};
the total GT$_+$ strength has
been taken from the single-particle model. However, recent $(n,p)$ experiments
\cite{williams:1995,alford:1990,vetterli:1989,elkateb:1994,ronnquist:1993},
show that the GT$_+$ strength is
fragmented over many states, and that the total strength is significantly
quenched compared to the single-particle model. (A recent update of the
GT$_+$ rates for use in supernova simulations assumed a constant quenching
factor of 2 \cite{aufderheide:1994}.)
In this paper, we describe our calculations of
Gamow-Teller strength distributions in iron region nuclei:
the shell model Monte Carlo
(SMMC) technique is used to obtain the response functions of the Gamow-Teller
operators in the full $0\hbar\omega$ $fp$-shell model space.
These response functions are related to the strength distributions
through an inverse Laplace transformation, which we carry out
using a Maximum Entropy method.
Our starting point is the
interacting shell model~\cite{mayer:1955}, which gives an accurate and
consistent description of the properties of
light nuclei~\cite{cohen:1965,wildenthal:1988} when an appropriate interaction
is used.
In the shell model, nucleons
occupy a spectrum of single-particle orbitals that are formed by the
presence of an assumed mean field. These nucleons interact
through a residual effective interaction,
which is derived from a realistic nucleon-nucleon potential
through the $G$-matrix formalism\cite{hj:1995}.
The resultant interaction matrix elements require some minimal tuning
to optimally account for known spectroscopic properties.
In the conventional approach,
the solution to the shell model
is obtained by diagonalizing the
nuclear Hamiltonian in a suitably chosen basis of many-particle
configurations.
Since the
Hamiltonian matrix to be diagonalized grows combinatorially with
the size of the single-particle basis and the number of valence
nucleons, realistic calculations are feasible in
the full $fp$-shell only for nuclei with $A \le 50$.
Hence, the traditional calculation of various nuclear properties
for medium-heavy and heavy nuclei lies beyond the scope of
direct-diagonalization methods except in a severely truncated model
space.
The SMMC method
\cite{johnson:1992,lang:1993,ormand:1994,physrep:1997}
scales more gently with the problem
size than do traditional direct-diagonalization techniques,
allowing larger, and
hence more realistic, calculations.
This method exploits the fact that most of the billions
of configurations in nuclei are unimportant for general
nuclear properties, so that only a subset of the relevant configurations
needs to be sampled.
Observables are calculated as thermal averages in a
canonical ensemble of nuclear configurations, so that
nuclei at finite temperature can be studied quite naturally.
SMMC methods were used in the first
complete $0\hbar\omega$ calculations for a number of
ground-state~\cite{alhassid:1994,dean:1994,karli:1995},
and finite-temperature properties~\cite{dean:1995} of mid-$fp$ shell nuclei.
These studies used both the
Richter-Brown~\cite{richter:1991} and the
KB3~\cite{kb3:1981} residual interactions.
For the purposes of investigating
Gamow-Teller transitions, the KB3 interaction
(obtained by minimally modifying the monopole strength in the
original Kuo-Brown matrix elements~\cite{kbrown:1968})
is well-suited for full $0\hbar\omega$ studies throughout the lower-$fp$ shell
region~\cite{caurier:1994}.
Observables that have been calculated with this interaction
in the SMMC approach include
the energy $\langle H\rangle$,
the total $B(E2)$, $B(M1)$, GT strengths, and
various pairing properties; the calculated
ground-state properties compare very well with experiment.
Importantly, these studies showed that the experimentally observed
quenching of the total GT strength is consistently reproduced by the
correlations within the full $fp$-shell if a renormalization of the spin
operator by the factor 0.8 is invoked \cite{karli:1995,caurier:1994}.
The same
renormalization factor had already been deduced from sd-shell
\cite{wildenthal:1988} and
$fp$-shell nuclei with $A \leq 49~$\cite{caurier2:1996,poves:1997}
and thus appears to be universal.
In Section II, we review the SMMC method and its application
to response
functions. We apply a Maximum Entropy (MaxEnt) method
to perform the required inverse
Laplace transform of the SMMC response functions;
our implementation of MaxEnt for SMMC
is discussed in
Section III. Section IV includes a validation of these methods
against direct diagonalization for GT transitions in {$^{48}$Ti},
and we present GT strength functions for several
heavier nuclei in the $fp$-shell ($A=48-64$)
where experimental data are available.
We also discuss the evolution of these distributions with
temperature. A brief conclusion follows in Section V.
\section{The Shell Model Monte Carlo Method} \label{smmc}
The SMMC method is based on a
statistical formulation of the
nuclear many-body problem. In the finite-temperature
version of this approach, an observable is
calculated as the canonical expectation value
of a corresponding operator $\hat{\cal A}$ at a given temperature $T$
and is given by~\cite{johnson:1992,lang:1993,ormand:1994,physrep:1997}
\begin{equation}
\langle \hat {\cal A}\rangle=
{{\rm Tr_A} [\hat {\cal A} e^{-\beta \hat H}]\over
{\rm Tr_A} [e^{-\beta \hat H}]},
\end{equation}
where $\hat U=\exp(-\beta \hat H)$ is the imaginary-time
many-body propagator,
${\rm Tr_A} \hat U$ is the canonical partition function
for $A$ nucleons, $\hat H$ is the shell model Hamiltonian,
and $\beta=1/T$ is the inverse temperature.
In terms of a spectral expansion, the total strength of a transition
operator $\hat {\cal A}$
is then given by the following expectation value:
\begin{equation}
B({\cal A}) \equiv \langle \hat {\cal A}^\dagger \hat {\cal A}\rangle =
{{\sum_{i,f} e^{-\beta E_i} \vert \langle
f \vert \hat {\cal A} \vert i \rangle \vert ^2} \over {\sum_i e^{-\beta
E_i}}},
\end{equation}
where $\vert i\rangle$ ($\vert f\rangle$) are the many-body states of the
initial (final)
nucleus with energy $E_i$ ($E_f$). The total strength from the ground state
can be obtained by choosing a sufficiently large value for $\beta$ such
that only
the ground state contributes due to the Boltzmann weight.
In addition to the ``static'' strength [Eq. (2)], one can calculate
for an imaginary-time $\tau$,
the response function, $R_{\cal A}(\tau)$, which describes
dynamical behavior and contains information about the nuclear spectrum:
\begin{equation}
\label{response function}
R_{\cal A}(\tau)\equiv\langle \hat {\cal A}^\dagger(\tau) \hat {\cal
A}(0)\rangle =
{ {\rm Tr}_A[e^{-(\beta-
\tau) \hat H} \hat {\cal A}^\dagger e^{-\tau \hat H} \hat {\cal A}]
\over {\rm Tr}_A [e^{-\beta \hat H}]}
={\sum_{if} e^{-\beta E_i} e^{-\tau(E_f-E_i)} {\vert \langle f \vert
\hat{\cal A}
\vert i \rangle \vert}^2 \over \sum_i e^{-\beta E_i} }.
\end{equation}
The strength distribution
\begin{equation}
S_{\cal A}(E)={{\sum_{if} \delta(E-E_f+E_i)
e^{-\beta E_i} \vert \langle f \vert
\hat {\cal A} \vert i \rangle \vert ^2} \over {\sum_i e^{-\beta E_i}}}
\end{equation}
is related to
$R_{\cal A}(\tau)$
by a Laplace Transform,
\begin{equation}
\label{laplace transform}
R_{\cal A}(\tau) = \int_{-\infty}^\infty S_{\cal A} (E) e^{-\tau E} dE.
\end{equation}
Note from Eq.~(3) that
ground-state to ground-state transitions require large
$(\beta-\tau)$ in addition to large $\beta$.
The large-$\tau$ behavior of $R_{\cal A}$ allows, in principle,
a measurement of the specific transition between the ground state
and the lowest final state allowed by the operator; in this limit
the slope of $\ln R(\tau)$ is the negative of the transition energy, and the
intercept measures the transition strength.
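This large-$\tau$ extraction can be illustrated on a toy spectrum: fitting $\ln R(\tau)$ on the tail recovers the lowest transition energy and its strength. The three transitions below are invented for the demonstration (energies in MeV, strengths arbitrary).

```python
import numpy as np

# Toy ground-state response: R(tau) = sum_f B_f exp(-tau (E_f - E_i))
trans = [(2.0, 0.6), (5.5, 0.3), (9.0, 0.1)]   # hypothetical (E, B) pairs
tau = np.linspace(0.0, 4.0, 81)
R = sum(B * np.exp(-tau * E) for E, B in trans)

# On the large-tau tail only the lowest transition survives:
# ln R ~ ln B_1 - E_1 tau, so a linear fit yields E_1 and B_1.
slope, intercept = np.polyfit(tau[-20:], np.log(R[-20:]), 1)
E1, B1 = -slope, np.exp(intercept)
print(E1, B1)   # close to 2.0 and 0.6
```

In practice the statistical noise of the SMMC response limits how far in $\tau$ such a fit can be pushed.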
The SMMC canonical expectation values are based on
the discretization of the many-body propagator, $e^{-\beta H}$,
into a finite number of ``time'' slices, $N_t$, each of duration
$\Delta\beta=\beta/N_t$. At each time slice the many-body propagator
is linearized via the Hubbard-Stratonovich
transformation~\cite{hubbard:1957,strato:1957};
observables are thus expressed as path integrals of one-body
propagators in fluctuating auxiliary fields. The integration
is carried out by a Metropolis random walk~\cite{Met:1953}.
To circumvent the ``sign problem'' encountered in the SMMC
calculations with realistic interactions,
we use the extrapolation procedure
outlined in Refs.~\cite{alhassid:1994,dean:1995}.
A distinct source of the sign problem arises when the canonical
expectation values are evaluated for an odd number of
nucleons~\cite{physrep:1997}.
We overcome this problem by a number-projection technique,
first employed in
\cite{dean:1994} and subsequently used in
\cite{physrep:1997}, that allows us to extract information concerning
odd-$A$ nuclei from the neighboring even-even system.
\section{The method of Maximum Entropy} \label{maxent}
Once we have the Gamow-Teller response functions, they must be inverted
to obtain strength distributions.
The inverse of the Laplace transform (5) required to extract the strength
functions is
an ill-conditioned numerical problem~\cite{press:1992}. The kernel (which in
this case is $e^{-\tau E}$) acts as a smoothing operator, and thus
the solution, for which the kernel must be inverted, is extremely
sensitive
to small changes ({\it i.e.}, to errors) in the input data. In
this section, we describe a Maximum Entropy procedure to carry out
the inversion~\cite{physrep:1997}.
Consider the $\chi^2$-deviation of the data, $r_i \equiv
R(\tau=i\Delta\beta)$,
with errors, $\sigma_i$,
from the fit values $F_i\{S\}$ produced by the trial inverse and
obtained according to
Eq.~(\ref{laplace transform}),
\begin{equation}
\label{chisq}
\chi^2\{S\}= \sum_i {({r_i - F_i\{S\} \over \sigma_i})}^2.
\end{equation}
Direct minimization of $\chi^2$ is numerically stable only in the
simplest of circumstances (such as few-parameter data fitting).
Combining $\chi^2$ with some other
auxiliary well-conditioned functional, $P\left\{S\right\}$, such
that $P\{S\}$ has a minimum at the smooth solution, $S(E)$, and
penalizes strongly oscillating functions, leads to a compromise
between fitting the data and the expected smoothness of the inverse.
Thus one minimizes the joint
functional
\begin{equation}
\label{functional}
{{1}\over{2}}\chi^2\left\{S\right\}+P\left\{S\right\} \; .\end{equation}
The functional $P\{S\}$ is chosen as the information theoretic entropy,
\begin{equation}
\label{pofs}
P\left\{S\right\}=\alpha \int dE \left[ m(E)-S(E)+
S(E)\ln\left({{S(E)}\over{m(E)}}\right)\right]\;,
\end{equation}
where $m(E)$ is a default model and $\alpha$ is an adjustable
parameter that both specify the
{\it a priori} knowledge of $S(E)$.
In order to minimize the functional (\ref{functional}),
we employ the technique
of Ref.~\cite{meshkov:1994}, which involves an iterative
sequence of linear programming problems. We first
expand Eq.~(\ref{pofs}) to second order in $S(E)$ about some positive function
$f(E)$ to obtain
\begin{equation}
\label{expansion of pofs}
P\left\{f\mid S\right\}=\alpha \int dE \left\{\left(m-{{f}\over{2}}\right)
+\left[\ln\left({{f}\over{m}}\right)-1\right]S
+{{S^2}\over{2f}}\right\} \; .
\end{equation}
If the true minimum $S(E)$ of the non-quadratic functional in
Eq.~(\ref{pofs}) is taken as the point of expansion $f(E)$ in
Eq.~(\ref{expansion of pofs}), then
it also gives the minimum of the corresponding quadratic functional
\begin{equation}
S(E)=\min_a \left[{{1}\over{2}}\chi^2\left\{a\right\} +
P\left\{S\mid a\right\}\right] \; . \end{equation}
Since we require the extraction of a positive strength function, we iterate
while partially retaining the result of the previous iteration:
\begin{equation}
S^{(n+1)}=\min_{S\ge 0}\left[{{1}\over{2}}\chi^2\left\{S\right\}+
P\left\{f^{(n)}\mid S\right\}\right]\;, \end{equation}
with
\begin{equation}
f^{(n)}(E)=\xi S^{(n-1)}(E)+(1-\xi)S^{(n)}(E)\;, \end{equation}
and the default model as the starting approximation to $S$,
\begin{equation}
S^{(0)}(E)=S^{(-1)}(E) \equiv m(E)\;. \end{equation}
The rate of convergence and stability are controlled by the
mixing parameter $0 < \xi < 1$;
a value of $\xi=0.3$ is a reasonable choice
to guarantee stability. Typically, convergence to the ``true'' solution
is obtained in less than 40 iterations.
In this way, the minimization of a general functional that is intrinsic
to a Maximum Entropy approach is reduced to an iterative procedure
in which each step requires the minimization of a quadratic functional
with linear inequality constraints.
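A compact numerical sketch of this scheme follows, in which each quadratic step is solved as a linear system and the inequality constraint $S \ge 0$ is imposed by simple clipping rather than the full linear-programming step of Ref.~\cite{meshkov:1994}. The synthetic two-peak spectrum, grids, error bars, and default-model width are all illustrative assumptions.

```python
import numpy as np

E = np.linspace(0.0, 10.0, 201); dE = E[1] - E[0]
gauss = lambda x, c, w: np.exp(-(x - c)**2 / (2 * w**2)) / (w * np.sqrt(2*np.pi))
S_true = 2.0 * gauss(E, 3.0, 0.5) + 1.0 * gauss(E, 6.0, 0.7)   # "unknown" answer

tau = np.linspace(0.0, 1.5, 31)
K = np.exp(-np.outer(tau, E))          # Laplace kernel of Eq. (5)
r = K @ S_true * dE                    # noiseless "data" for the demo
sigma = 0.01                           # assumed uniform error bar

B0 = r[0]                              # R(0) equals the total strength
alpha = 1.0 / B0                       # alpha = 1 / total strength
cent = (np.log(r[0]) - np.log(r[1])) / (tau[1] - tau[0])   # centroid guess
m = B0 * gauss(E, cent, 2.0)           # Gaussian default model, width 2 MeV

S, f, xi = m.copy(), m.copy(), 0.3
for n in range(40):
    # Minimize (1/2)chi^2 + P{f|S}: the quadratic form gives a linear system
    A = (K.T / sigma**2) @ K * dE**2 + alpha * dE * np.diag(1.0 / f)
    b = (K.T / sigma**2) @ r * dE - alpha * dE * (np.log(f / m) - 1.0)
    S_new = np.clip(np.linalg.solve(A, b), 1e-10, None)   # enforce S >= 0
    f = xi * S + (1.0 - xi) * S_new    # damped update of the expansion point
    S = S_new

misfit = np.abs(K @ S * dE - r).max() / r[0]
total = S.sum() * dE
centroid = (E * S).sum() * dE / total
print(misfit, total, centroid)         # tiny misfit; total ~ 3, centroid ~ 4
```

The reconstruction reproduces the data, the total strength, and the centroid; as in any MaxEnt inversion, fine structure narrower than the resolution set by the $\tau$ window is smoothed toward the default model.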
Some general remarks regarding this inversion technique are called for.
Since $R(\tau)$ is calculated
at discrete values of imaginary time and, in principle, up to an imaginary
time $\beta$, the smallest energy that
can be resolved in $S(E)$ is of order $1/\beta$, and the largest
is the inverse of the discretization size, $1/\Delta\beta$. In practice,
numerical noise forces a cut-off in the largest $\tau$ value that can
be used, thus decreasing the energy resolution.
As we mentioned above, the default model can be chosen by investigating
the characteristics of the response function.
From Eq.~(3), one sees that
$-d\ln[R(\tau)]/d\tau \vert_{\tau=0}$
gives the centroid of the
distribution in the parent nucleus, and thus
in the case of the GT$_+$ operator
we choose for the default model
a Gaussian
with a peak at this energy and with a width of $1.5-2$ MeV;
this width can be estimated from $d^2\ln[R(\tau)]/d\tau^2 \vert_{\tau=0}$,
which gives the variance of the distribution.
The parameter $\alpha$ is the inverse of the total strength of the
distribution, and is calculated from the default model as
$\alpha=\left[\int dE m(E)\right]^{-1}$.
In the case of the GT$_-$ operator, we make
a better guess for the default model by including some features
of the distribution. Experimental distributions typically have
three regions: the $T=T_z$, and $T=T_z+1$
regions distributed around 6 MeV and 12 MeV, respectively,
and a more fragmented region at lower energies.
We choose for our GT$_-$ default model two Gaussians
with the same widths, each centered at the appropriate energy.
The lower energy part of the distributions is governed by the high
$\tau$ region of the response function. Although this
region of the response function
is sometimes contaminated by large statistical fluctuations,
the reconstruction tends to give a low-energy peak that well
describes these more discrete transitions.
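The derivative relations used to fix the default model can be checked directly on a toy ground-state response function; the discrete spectrum below is invented for the demonstration.

```python
import numpy as np

# For R(tau) = sum_f B_f exp(-tau E_f), the log-derivatives at tau = 0 give
# the strength-weighted centroid and the variance of the distribution.
E = np.array([1.0, 3.0, 4.5])          # hypothetical transition energies (MeV)
B = np.array([0.2, 0.5, 0.3])          # hypothetical strengths
dt = 0.01
lnR = np.log(np.array([(B * np.exp(-t * E)).sum() for t in (0.0, dt, 2*dt)]))

centroid = -(lnR[1] - lnR[0]) / dt               # ~ sum(B*E)/sum(B) = 3.05
variance = (lnR[2] - 2*lnR[1] + lnR[0]) / dt**2  # ~ <E^2> - <E>^2 = 1.47
width = np.sqrt(variance)
print(centroid, width)
```

The finite-difference estimates carry $O(dt)$ biases, which is why in practice the width is only used to set the scale (here $\sim 1.2$ MeV) of the Gaussian default model.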
\section{Gamow-Teller strength distributions}
The GT operators are defined as
${\bf GT}_{\pm}=\sum_l \mbox{\boldmath $\sigma$}_l\tau^{\pm}_l$, where
$\mbox{\boldmath $\sigma$}_l$ is the Pauli spin operator
for nucleon $\it l$ and
$\tau^-_l$ ($\tau^+_l$) is the isospin lowering (raising)
operator that changes a neutron (proton) into a proton (neutron);
they thus describe charge-changing decay modes.
GT strength distributions play an important role in two
very different contexts. In the astrophysical context, medium-heavy nuclei
at a finite temperature in the core of a pre-supernova
capture electrons. A strong phase space dependence makes the relevant
electron capture rates more sensitive to GT {\it distributions}
than to total strengths~\cite{aufderheide1:1993,aufderheide2:1993} and
thus necessitates complete $0\hbar\omega$
calculations of these distributions.
GT strengths are also
important in studies of double beta decay~\cite{boehm:1987}. The two-neutrino
mode of this decay, which
provides important
confidence in extracting the neutrino mass from zero-neutrino decay
experiments, is equivalent to a description of the GT
strength functions from the ground states of the parent and daughter
nuclei.
Thus, any reliable calculation of the two-neutrino matrix element
must accurately describe these strength distributions.
In the following
sections we demonstrate and validate the MaxEnt method for the GT
operator by comparing our results with direct diagonalization. We then
compare our results with experimentally obtained distributions for various
$fp$-shell nuclei. In what follows we will use
the renormalized GT operator corresponding to
${\bf GT_{\pm}}/1.26$~\cite{karli:1995,caurier:1994}.
\subsection{Comparison with direct diagonalization}
Direct-diagonalization results in the complete $fp$-shell
can be obtained for nuclei with $A \leq 48$.
We choose {$^{48}$Ti} for a comparison
and in Fig. 1, we show our results for this nucleus.
The lower left-most panel shows the GT$_+$
response function, $R(\tau)$, for {$^{48}$Ti} as measured in the parent
and the middle lower panel shows the extracted strength distribution,
$S(E)$, in the daughter, {$^{48}$Sc}.
Also shown in the same panel is the direct-diagonalization
result~\cite{caurier:1990}.
The discrete transitions found in the direct diagonalization have been
smeared with a Gaussian of width 0.25~MeV in order to
facilitate comparisons.
While the SMMC total strength ({\it i.e.}, the area under the
curve) $B(GT_+) = 0.72\pm0.11$~\cite{karli:1995}
compares very well with the
direct-diagonalization value of $0.79$~\cite{caurier:1994},
the SMMC can recover only
gross features of this distribution.
In particular, the peak is somewhat too narrow,
mainly due to the information lost by the Laplace
transform. This attribution was checked by calculating the response
function $R(\tau)$ for the
direct diagonalization
distribution. (The peaks were smeared by Gaussians of 0.25 MeV width to
account for the SMMC finite discretization.)
This response function is shown in the lower left panel of Fig.~1, and
agrees well with the SMMC result.
The lower right-most panel in Fig. 1 shows the energy dependence of
the cumulative strength, $\int_0^{E^\star} S(E^\prime) dE^\prime$, where
$E^\star$ is the excitation energy in the daughter. One
can see that the SMMC recovers the centroid and the width of the
distribution reasonably.
A brief remark about the possible sources
of error is in order. Since our MaxEnt procedure
provides a most probable
extraction of the strength function, the strength distributions do not have
error bars associated with them. However, from the SMMC error bars
for $R(\tau)$, we estimate the error in the position of the centroid
to be about $0.5$ MeV. In addition, we note that
the response functions are measured in
the parent nucleus, and to obtain the energy in the daughter we use
the experimental mass excesses and a parametrization of the Coulomb
energy as defined in~\cite{caurier:1994}. [In the
test case ($^{48}$Ti), we exactly calculate this mass difference.]
This parametrization
provides a good overall description of the masses
of the nuclei in this region~\cite{karli:1995}.
We find an average deviation between $0.1$ MeV
(for $A=48$ nuclei) and $0.5$ MeV (for $A=54$ nuclei) of our
calculated binding energies from experimental values,
suggesting that our procedure is quite justified.
The upper panels of Fig. 1 show our results for the GT$_-$ operator in
{$^{48}$Ti}. The total strength, $B(GT_-)$, can be readily obtained
from the renormalized Ikeda sum rule, $B(GT_-)-B(GT_+)=3(N-Z)/(1.26)^2 $
which is obeyed
by both the SMMC and direct-diagonalization calculations.
The GT$_-$ operator takes the $N>Z$ parent nucleus (with $T=T_z+1$)
to $T=T_z$(dotted),
$T=T_z+1$ (dashed), and $T=T_z+2$ (not shown) states in
the $^{48}$V daughter. The $T=T_z$ states
are the lowest in energy and contain most (85\% in this case) of
the strength. Assuming in the default model
that the centroid
of the $T=T_z+1$ states is located 5 MeV higher than the centroid of
the $T=T_z$ states, we obtain a good reproduction of both
components of the
strength distribution. This general assumption
is experimentally valid in the even-even nuclei in this region.
We also see at low energy
a hint of the discrete low-energy states in the reconstruction.
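The renormalized Ikeda sum rule quoted above can be checked numerically; a minimal sketch for $^{48}$Ti ($N=26$, $Z=22$), with the function name chosen here for illustration:

```python
def ikeda_sum_rule(n_neutrons, z_protons, quench=1.26):
    """Renormalized Ikeda sum rule: B(GT-) - B(GT+) = 3(N - Z)/quench^2,
    with the 1/1.26 renormalization of the GT transition operator."""
    return 3.0 * (n_neutrons - z_protons) / quench**2

# 48Ti: N = 26, Z = 22
print(f"B(GT-) - B(GT+) = {ikeda_sum_rule(26, 22):.2f}")  # ~7.56
```

Without the renormalization (`quench=1`) this reduces to the bare sum rule $3(N-Z)=12$ for $^{48}$Ti.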
\subsection{Comparison with experiment}\label{comparison with experiment}
Experimental GT distributions are obtained from
intermediate-energy charge exchange $(n,p)$ [or $(p,n)$] cross sections
at forward angles, which are proportional to
the GT strength~\cite{bertsch:1987}.
These experimental
distributions typically extend only to $8$ MeV in the daughter nucleus
to exclude contributions from other multipolarities.
We first compare our {$^{48}$Ti} result for the GT$_+$ distribution against
experiment, as shown in Fig.~2.
To simulate the finite experimental resolution
and presentation of the data, the SMMC results have been
smeared with Gaussians of standard
deviation of $1.77$ MeV, following Ref.~\cite{caurier:1995}. Our
results are represented by the dotted line in Fig.~2, while the
diagonalization
results are shown as a solid histogram. The smeared diagonalization result
is shown by the dashed line in the figure.
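The Gaussian smearing applied to the discrete results can be sketched as below; the strengths and energies used in the example are hypothetical placeholders, not values from the calculation:

```python
import math

def smear(strengths, energies, sigma=1.77):
    """Return a smeared strength density S(E) (MeV^-1): each discrete
    strength B_i at E_i is replaced by a Gaussian of std. dev. sigma (MeV)."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    def density(e):
        return sum(b * norm * math.exp(-0.5 * ((e - e_i) / sigma) ** 2)
                   for b, e_i in zip(strengths, energies))
    return density

# Two hypothetical states carrying strengths 0.4 and 0.3 at 2 and 5 MeV.
s = smear([0.4, 0.3], [2.0, 5.0])
print(f"S(2 MeV) = {s(2.0):.3f} MeV^-1")
```

The smearing conserves the total strength (here 0.7), while mimicking the finite experimental resolution.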
The experimental $B(GT_+)$ distribution, shown as solid dots,
sums $1.42\pm0.2$ \cite{alford:1990}
compared to our
renormalized value of $0.71\pm0.11$. We find that the calculated
$0\hbar\omega$ GT$_+$ strength extends only over the region
E$^\star < 8$ MeV
(in agreement with the experimental value for this range of energy
$B(GT_+)$ = 0.77 $\pm$ 0.1).
This suggests that
the GT$_+$ strength observed for E$^\star > 8$ MeV
corresponds to correlations
outside our $0\hbar\omega$ model space. A similar conclusion has been
reached in Ref.~\cite{caurier:1994}.
We note that the inadequacy of a $0\hbar\omega$ model space to
describe the GT$_+$ distribution at E$^\star > 8$ MeV
might have some relevance to
the $\beta\beta$ decay of {$^{48}$Ca}~\cite{poves:1995},
where considerable $2\nu\beta\beta$
strength could be obtained
from the overlap of this distribution with that of {$^{48}$Ca} in the $(p,n)$
direction for these energies.
However, the measured $2\nu\beta\beta$ decay rate of $^{48}$Ca
\cite{balysh:1997} agrees
well with the calculation based on the $0\hbar\omega$ shell model, which
includes the $1/1.26$ normalization of the GT transition operator.
We now turn to a comparison of SMMC results with experiment for
nuclei in the mid-$fp$-shell
where complete direct-diagonalization calculations are not possible.
We first consider the $(n,p)$ reaction and
in Fig.~3 we show our results for all even-even nuclei with $A=48-64$
for which data are available~\cite{williams:1995,vetterli:1989,elkateb:1994}.
The SMMC results have been smeared with Gaussians of standard
deviation of $1.77$ MeV to account for
the finite experimental resolution, following Ref.~\cite{caurier:1995}.
Experimentally, the GT$_+$ strength is significantly
fragmented over many states; the centroids and the widths of
these distributions are reproduced very well in the SMMC approach.
Our results for the total strengths are given in Table I.
They agree with the data very well except for $^{64}$Ni, where
our calculation underestimates the total experimental
strength~\cite{williams:1995},
suggesting the need to augment the model space with the
$g_{9/2}$ and the $g_{7/2}$
orbitals. This shortcoming of the present model space is also visible
in the GT$_+$ distribution, which places the GT$_+$
peak approximately 1.5 MeV above the experimental peak and misses
the second peak at E$\approx 6$ MeV, which is possibly due to
$(g_{9/2}-g_{7/2})$ transitions.
SMMC results for odd-$A$ nuclei in the $(n,p)$ direction are
shown in Fig.~4, where again the centroids and widths of the
distributions are in good agreement with the
data~\cite{elkateb:1994,rapaport:1984,alford:1993}.
Calculations for odd-$A$ nuclei are performed
at a finite temperature of $0.8$ MeV.
(The temperature dependence of
these distributions will be discussed later in Section~\ref{distributions and
temperature}.) The response functions for the three nuclei in
Fig.~4 are sampled from the partition functions of their
neighbors, {\it i.e.},
$^{51}$V from $^{52}$Cr, $^{55}$Mn from $^{56}$Fe, and $^{59}$Co from
$^{60}$Ni. The peaks of the observed GT$_+$ distributions in odd-$A$
nuclei in Fig.~4 are consistently at higher excitation energies in the
daughter compared to the
even-even cases in Fig.~3, a feature reproduced by the SMMC
calculations. These higher excitation energies cause some $0\hbar\omega$
strength to lie above the typical
8 MeV cut-off in odd-$A$ nuclei.
The data for $^{51}$V and $^{59}$Co have been analyzed
for additional strength above 8
MeV~\cite{aufderheide1:1993,aufderheide2:1993}
(see Table I), while, to our knowledge,
$^{55}$Mn has not been reanalyzed for potential GT strength at
E$^\star > 8$ MeV.
For even-even nuclei the $0\hbar\omega$ GT$_+$ strength appears to
be exhausted at energies below 8 MeV, in agreement with the SMMC
results shown in Fig.~3. Our
results for $^{51}$V
and $^{55}$Mn show some strength above 8 MeV, but this
is not the case for $^{59}$Co.
In Fig.~5 we compare the GT$_-$ distributions for a few nuclei
where experimental data are available~\cite{vetterli:1989,rapaport:1983};
the experimental data for
$^{56}$Fe have been obtained from Ref.~\cite{caurier:1995}.
From the cumulative strengths in the
right panels of Fig.~5,
we can conclude that the
SMMC approach reproduces the experimental distribution moderately well
for the cases of $^{54}$Fe and $^{56}$Fe. For the Ni isotopes,
only partial information is available about these distributions.
For $^{58}$Ni
the peaks shown in the experimental data~\cite{rapaport:1983}
have finite widths of 1.3 MeV, 0.7 MeV, and 0.5 MeV
at 9.2 MeV, 11.2 MeV, and 13.0 MeV, respectively. The strength
in the giant resonance region
between 6.4 MeV and 13.0 MeV is quoted as
5.5,
while we obtain $6.1$, which is consistent within the
uncertainty in the excitation energy.
For $^{60}$Ni the experimental value of the total
GT$_-$ strength~\cite{rapaport:1983}
is $7.2\pm1.8$, whereas we obtain $10.87\pm0.23$. As our
calculation obeys the renormalized Ikeda sum rule
and reproduces the measured GT$_+$ strength,
the lower
experimental value indicates some strength beyond the
experimental window, at E$^\star > 14$ MeV. We also note that
while Ref.~\cite{rapaport:1983} quotes an integrated strength of
$6.22$ between $4.0$ and $14.0$ MeV, we obtain a value of $4.65$.
\subsection{Temperature dependence of GT strengths}
\label{distributions and temperature}
We now turn to the temperature evolution of GT$_+$
strength functions. Representative strength
distributions for two nuclei, $^{59}$Co and $^{60}$Ni, at
several temperatures are shown in Fig.~6. Both are plotted as
a function of $E$, the energy transfer to the parent nucleus.
We note that
the restriction of the model space to the $fp$-shell
renders our calculation quantitatively unreliable for even-even nuclei
at $T \gtrsim 1.4$ MeV~\cite{dean:1995},
while for the odd-$A$ cases this temperature is likely even lower.
With increasing temperature, three distinct effects occur that
influence the GT strength distributions.
\begin{itemize}
\item The number of states contributing to the thermal ensemble
increases. Due to the pairing gap in even-even nuclei, this occurs at
higher temperatures in even-even nuclei than in odd-$A$ nuclei.
\item GT transitions which are Pauli blocked at low temperatures due to
closed neutron subshells (e.g. the $f_{7/2}$ orbital) can become
thermally unblocked as neutrons are moved to excited orbitals with
increasing temperature. Similarly, protons which are thermally excited to
higher orbitals can undergo allowed GT transitions.
\item The ground state in even-even nuclei is dominated by like-nucleon
pairing. As indicated by SMMC calculations, these pairs break at around
$T=1$ MeV. Thus at low temperatures, a GT$_+$ transition involves
breaking a proton pair associated with an extra energy of 1-2 MeV. This
``penalty energy'' is removed at higher temperatures in states of higher
excitation energy, in which the pair correlations are diminished.
\end{itemize}
As we will discuss in the following, these three effects allow for an
understanding of the temperature dependence of the GT$_+$ strength
distributions.
In the case of $^{59}$Co, with increasing temperature, the entire
distribution shifts to lower excitation energies.
The total strength decreases
and the width of the distribution increases
marginally with increasing temperatures. (We have
checked that in the
high-$T$ limit, $B(GT_+)$
rises to the single-particle value as expected.)
Due to the lack of pairing of the odd particle in an odd-$A$ nucleus,
states of various spins are more quickly populated than in the
even-even systems. These states
then make transitions to daughter states by the GT operator.
Thus, a plethora of states is easily accessible at moderate temperatures,
and the required excitation energy in the daughter is lower.
For $^{60}$Ni, the peak in the strength
distribution remains roughly constant with increasing temperature,
while the width increases with
the appearance of low-lying strength due to transitions from the
thermally occupied to the empty excited orbitals.
Note also that the centroid of the distribution remains constant at
low temperatures and then shifts to lower excitation energies at higher
temperatures.
The near constancy of the peak position in $^{60}$Ni at low temperatures
supports the shifting assumption
[attributed to D.~Brink in Ref.~\cite{aufderheide:1991}],
which states that the centroid corresponding to
each parent excited state is shifted upward in the daughter nucleus by
the energy of the parent state. This hypothesis
assumes that the internal configuration of the low-lying states is
roughly the same. With increasing temperature, however, states with
other internal configurations gain statistical weight, and in particular,
the pair correlations in these excited states decrease. SMMC
calculations indicate that pairs break around $T=1$ MeV in even-even
nuclei, allowing for a dramatic increase in thermally populated states
in the parent at and above this temperature.
For these excited states, no coherence energy has to be paid as penalty
to break a proton pair in the GT transition, and the peak in the GT
distribution moves to smaller energies. We also note that at
temperatures $T \leq 1.3$ MeV the thermal ensemble already includes the
lowest excited $T+1$ states, allowing for transitions at $E=0$.
In contrast, these transitions are not observed in $^{59}$Co at the
temperatures considered here, since the $T+1$ states in
this nucleus are at higher excitation energies due to the larger neutron
excess. We also observe a gradual decrease of the peak position with
temperature in accordance with the fact that no pairing gap has to be
overcome in odd-$A$ nuclei.
\section{Summary and Conclusions}
As mentioned in the Introduction,
electron capture on iron region nuclei
plays an important role at the onset of core collapse in a
massive star. Under these conditions, nuclei have a finite temperature of
0.2$-$0.6 MeV. It is well known that for nuclei with an open
$fp$-shell
neutron configuration, $GT_+$ transitions dominate the electron capture
rate, and a strong phase-space dependence makes the rate sensitive to the
full $GT_+$ distribution, rather than only to the total strength.
Unfortunately, the $GT_+$ strength is not experimentally accessible for
those nuclei of importance in the presupernova collapse. Thus, collapse
studies have to rely on theoretical estimates which, until recently, could
not be performed with great confidence. This has now changed. As SMMC
calculations reproduce the measured data from first principles without
nucleus-specific data fitting (which has been necessary in previous
studies), they are reliable enough to predict the $GT_+$ distributions
for those astrophysically important nuclei not experimentally
accessible. SMMC calculations for these nuclei are in progress.
In this paper, we have calculated response
functions for the Gamow-Teller operators for
several nuclei in the $fp$-shell. We use the KB3 interaction, which
is well suited for $0\hbar\omega$ calculations.
Using an implementation of the MaxEnt technique, we have
then obtained the corresponding strength distributions.
The extracted Gamow-Teller distributions compare very well
with both direct-diagonalization calculations and the experimentally
obtained distributions. We note that we invoke the
standard renormalization factor
of $1/1.26$ for the transition operator, in keeping with the observation
in $sd$- and $fp$-shell nuclei that complete $0\hbar\omega$ calculations
require this overall renormalization for agreement with experiment.
We have also studied the effect of finite temperature on Gamow-Teller
distributions and have demonstrated for sample nuclei that our
calculations at $T=0.8$ MeV should be adequate to describe the
distributions required to calculate electron capture rates
for the pre-supernova problem~\cite{aufderheide:1994}.
Studies of the Gamow-Teller strengths
and electron capture rates for nuclei relevant
to the presupernova collapse will be described elsewhere.
\acknowledgements
We acknowledge support from the
U.S. National Science Foundation
under Grants PHY94-12818 and PHY94-20470.
Oak Ridge National Laboratory is managed by Lockheed Martin Energy
Research Corp. for the U.S. Department of Energy under contract number
DE-AC05-96OR22464. D.J.D acknowledges an E.~P.~Wigner Fellowship from ORNL.
KL has been partly supported by the Danish Research Council.
Grants of computational resources were provided by the Center
for Advanced Computational Research at Caltech and the
Center of Computational Science at ORNL.
\section{Introduction}
In the expanding ejecta of a supernova, dust grains condense from cooling metal-rich gas. These newly formed grains are injected into the interstellar medium (ISM), where they cause interstellar extinction and diffuse infrared emission, serve as a coolant and opacity source, catalyze H$_2$ formation, and serve as building blocks for planets and smaller rocky bodies.
In particular, the origin of dust has been fiercely debated since the discoveries of a huge amount of dust grains at redshifts higher than $z$ = 5 \citep{2011A&ARv..19...43G}. In the early universe, core-collapse SNe from massive stars are likely to be the dominant source of dust \citep{2007ApJ...662..927D}. Infrared-submillimeter studies of SN1987A \citep{2011Sci...333.1258M, 2015ApJ...800...50M, 2014ApJ...782L...2I, 2012A&A...541L...1L, 2015ApJ...810...75D}, SNR G54.1+0.3 \citep{2017ApJ...836..129T}, Cas A \citep{2010ApJ...719.1553S, 2010A&A...518L.138B}, and the Crab Nebula \citep{2012ApJ...760...96G}, and several other supernova remnants (SNRs) \citep{2018arXiv181100034C}, as well as emission-line asymmetry studies of SN1980K, SN1993J, and Cas A \citep{2017MNRAS.465.4044B}, have reported a subsolar mass of cool dust formed in the ejecta which has not yet been destroyed by the supernova reverse shock \citep{2016A&A...590A..65M, 2016A&A...587A.157B}. What fraction of the dust can survive the shock depends on their sizes after formation \citep[e.g.,][]{2007ApJ...666..955N, 2006ApJ...648..435N}, so understanding both the mass and size of dust produced in supernovae is important.
In previous dust formation studies, the effect of the pulsar wind nebula (PWN) emission has usually been neglected, even though nebulae have been found in several SNRs with dust. In particular, the dust found in the Crab Nebula was smaller than predicted from models \citep{2009ASPC..414...43K, 2012ApJ...753...72T}; it is possible that this may have been due to the early PWN energy injection. Dust formation in SN1987A also occurred later than predicted by most condensation models \citep{1993ApJS...88..477W, 1991A&A...249..474K, 2015A&A...575A..95S}, which may have been due to PWN emission, even though no compact object has yet been detected. It is unknown if a PWN could be energetic enough to delay dust formation, yet be weak enough to remain below the detection limits.
Early supernova remnants have been searched for signals of a PWN, and recent optical studies of Type Ic SLSN 2015bn and peculiar Type Ib SN 2012au have presented tentative spectroscopic evidence of a central engine in the nebular phase \citep{2016ApJ...828L..18N, 2018ApJ...864L..36M}. Some studies have suggested X-rays \citep{kot+13,Metzger_et_al_2013, Kashiyama+16, MKM16} and gamma-rays \citep{kot+13, MKM16} as possible probes for PWNe.
X-ray studies have produced some tentative candidates \citep{Perna_Stella_2004,per+08,Margutti_et_al_17}, but are not strongly constraining; detecting gamma-ray signals should provide a more direct probe of the pulsar, but is more challenging and has not yet produced any candidates \citep{2015ApJ...805...82M,Renault-Tinacci:2017gon}.
Recently, the idea has emerged to test the pulsar-driven model by detecting early radio and submillimetre PWNe emission \citep{MKM16} from broad-line Type Ic hypernovae and from Type Ic superluminous supernovae (SLSNe), which are both hypothesized by some to be pulsar-driven \citep[e.g.,][]{2010ApJ...724L..16P,qui+11,Inserra_et_al_2013,2014MNRAS.444.2096N,2016ApJ...828L..18N,Metzger_et_al_15,Wang_et_al_2015,Dai_et_al_2016}.
Radio observations with facilities such as the Karl G. Jansky Very Large Array (VLA) are promising, but the ejecta attenuates signals at this wavelength for around 10-100 years \citep{MKM16,2018MNRAS.474..573O}, which roughly matches the age of our oldest SLSN candidates, so radio detections may still not be viable for a few years; recent studies have only placed weak constraints on the model \citep{2018MNRAS.473.1258S, 2018ApJ...857...72H}. Submillimetre observations with facilities such as the Atacama Large Millimeter/submillimetre Array (ALMA) are also promising, as the ejecta only attenuates signals at this wavelength for around 1-10 years \citep{MKM16, 2018MNRAS.474..573O}. However, ALMA has previously been used to study dust in SNRs \citep[e.g.,][]{2014ApJ...782L...2I}, and dust emission in SLSN remnants may interfere with the detection of PWN emission. Therefore, comparing dust emission spectra in SLSNe to PWN spectra would tell us if the dust will interfere with ALMA observations; be detectable in another band, such as infrared; or be subdominant to the PWN emission in all cases.
To study dust formation and emission in pulsar-driven supernovae, we use a steady-state model, which is overviewed in Section \ref{sec:thy}. So far, only the sublimation of previously formed dust has been studied \citep[e.g.,][]{2000ApJ...537..796W, 2011EP&S...63.1067K}. The PWN emission can delay the formation of dust due to the added energy injection and is capable of sublimating dust as it forms, leading to longer formation times and possibly preventing dust formation altogether. The PWN emission can also ionize the ejecta gas before dust formation, leading to increased temperature and Coulomb repulsion between ions, which may also prevent dust formation. However, once dust has formed, the grains can absorb emission in the optical/UV band, greatly increasing their temperature compared to the case without a central pulsar. These hot dust grains will re-emit in the infrared or submillimetre, and this emission might be detectable with telescopes like ALMA, Herschel, Spitzer, and the James Webb Space Telescope (JWST). This gives an indirect signal, to complement the direct radio and submillimetre detection discussed in \cite{MKM16} and \cite{2018MNRAS.474..573O}, by which we can detect newborn pulsars.
\section{Theory} \label{sec:thy}
The system we consider is shown schematically in Figure~\ref{fig:cartoon}, and we examine it from the supernova explosion until the beginning of the Sedov phase, when a reverse shock is driven back into the ejecta and is expected to destroy smaller dust grains via sputtering. The spin-down of the neutron star generates a PWN which pushes against and injects energy into the ejecta at $R_{\rm w}$, the edge of the shocked wind region. We use the one zone model approximation, where all the ejecta is contained between $R_{\text{w}}$ and $R_{\text{ej}}$. The inner region of the ejecta can be either a sublimation region of radius $R_c$, if the optical/UV luminosity is high enough to heat the dust above its sublimation temperature, or an ionization region of radius $R_s$, if high energy radiation can ionize most of the gas in the region. Both of these possibilities should prevent dust formation in that region; we discuss the conditions for these regions in Sections \ref{sec:dustsub} and \ref{sec:gasio}. Outside this region, there will be a thin region where most of the optical/UV emission will be absorbed by dust and re-emitted in the infrared; this absorption region has a thickness given by $\tau_{\rm opt/UV} \sim$ 1 and its emission is described in Section \ref{sec:dustem}. Outside of the absorption region is the cold, dusty region, where the dust is not being heated by non-thermal radiation, and cools via adiabatic and radiative cooling; we neglect the emission of this region entirely, since it is expected to be much cooler than the absorption region. If $R_c$ or $R_s$ is $>$ $R_{\text{ej}}$, then no dust will form and the entire ejecta will be the sublimation/ionization region with no absorption or cold, dusty region.
\begin{figure}
\includegraphics[width=\linewidth]{cartoon}
\caption{The system examined in this paper; not to scale. The PWN generated by the central NS pushes on and injects energy into the ejecta at $R_{\rm w}$, which we consider to have three layers: the sublimation or ionization region on the inside, where dust can not form due to non-thermal emission from the PWN; the absorption region with thickness $\ll$ $R_{\rm ej}$ at the edge of the sublimation or ionization region, where optical and UV photons are absorbed and infrared photons are emitted; and the cold/dusty region on the outside, where the dust is optically thin to infrared emission and is not heated by non-thermal emission. Note that all three regions do not always appear, depending on parameters and time evolution.}
\label{fig:cartoon}
\end{figure}
\subsection{Pulsar Spin Down, Ejecta Dynamics, and Non-Thermal Emission}
To calculate pulsar spin-down, ejecta dynamics, and non-thermal emission we use the model from \cite{Kashiyama+16}.
The velocity of the ejecta $v_{\rm ej}$ is calculated from the kinetic energy $E_K = M_{\rm ej}v_{\rm ej}^2/2$, which evolves with time as
\begin{equation}
\frac{dE_K}{dt} = \frac{E_{\rm int}}{t_{\rm dyn}}.
\label{eqn:dekrej}
\end{equation}
Here $M_{\rm ej}$ is the mass of the ejecta, $E_{\rm int}$ is the total internal energy, and $t_{\rm dyn} = R_{\rm ej}/v_{\rm ej}$ is the dynamical timescale of the ejecta, where $R_{\rm ej}$ is the outer radius of the ejecta.
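As a one-step illustration (not the full model, which also evolves $E_{\rm int}$ through the heating and cooling terms described below), the kinetic-energy equation can be advanced with an explicit Euler step; all numbers in the example are hypothetical:

```python
def step_kinetic_energy(e_k, e_int, r_ej, m_ej, dt):
    """One explicit Euler step of dE_K/dt = E_int/t_dyn, with
    t_dyn = R_ej/v_ej and E_K = M_ej v_ej^2 / 2 (cgs units)."""
    v_ej = (2.0 * e_k / m_ej) ** 0.5    # ejecta velocity, cm/s
    t_dyn = r_ej / v_ej                 # dynamical timescale, s
    e_k_new = e_k + e_int / t_dyn * dt  # internal energy feeds E_K
    r_ej_new = r_ej + v_ej * dt         # outer edge expands at v_ej
    return e_k_new, r_ej_new

# Hypothetical state: E_K = 1e51 erg, E_int = 1e49 erg, R_ej = 1e14 cm,
# M_ej = 1e34 g (~5 Msun), dt = 1e4 s.
e_k, r_ej = step_kinetic_energy(1.0e51, 1.0e49, 1.0e14, 1.0e34, 1.0e4)
print(f"E_K -> {e_k:.4e} erg, R_ej -> {r_ej:.4e} cm")
```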
Note that $E_{\rm int}$ is determined by the balance between heating (via PWN irradiation and the radioactive decay of $^{56}$Ni and $^{56}$Co) and cooling (via radiative energy loss and expansion).
The radius of the inner edge of the ejecta $R_{\rm w}$ increases as:
\begin{equation}
v_{\rm w} = \frac{dR_{\rm w}}{dt} = v_{\rm nb} + \frac{R_{\rm w}}{t}
\label{eqn:drwdt}
\end{equation}
where $v_{\rm nb}$, the velocity of the shocked wind region, is determined by the pressure balance at $R_{\rm w}$ via
\begin{equation}
v_\text{nb} \approx \sqrt{\frac{7}{18}\frac{\int L_\text{SD} \times \min[1,\tau^\text{nb}_\text{T}v_\text{nb}/c]dt}{M_\text{ej}}\left(\frac{R_\text{ej}}{R_\text{w}}\right)^3},
\label{eqn:vnb}
\end{equation}
where the factor $\min[1,\tau^\text{nb}_\text{T}v_\text{nb}/c]$ is the fraction of spin-down luminosity deposited in the SN ejecta and $\tau^\text{nb}_\text{T}=(R_\text{w}/R_\text{ej})\tau_\text{ej}$, where $\tau_\text{ej}$ is the optical depth of the ejecta,
\begin{equation}
\tau_\text{ej} = \frac{3\kappa M_\text{ej}}{4\pi R^2_\text{ej}},
\label{eqn:tauejesc}
\end{equation}
where $\kappa$ is the Thomson opacity.
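For orientation, the optical depth and the deposited fraction $\min[1,\tau^{\rm nb}_{\rm T}v_{\rm nb}/c]$ of Eq.~(\ref{eqn:vnb}) can be evaluated; the mass, radius, and velocity below are illustrative assumptions, with $R_{\rm w}=0.8R_{\rm ej}$:

```python
import math

M_SUN = 1.989e33   # g
C = 3.0e10         # cm/s
KAPPA = 0.1        # cm^2 g^-1, opacity value used in the text

def tau_ejecta(m_ej_msun, r_ej, kappa=KAPPA):
    """tau_ej = 3 kappa M_ej / (4 pi R_ej^2)."""
    return 3.0 * kappa * m_ej_msun * M_SUN / (4.0 * math.pi * r_ej**2)

# Illustrative numbers: M_ej = 5 Msun, R_ej = 1e16 cm, v_nb = 1e9 cm/s.
tau = tau_ejecta(5.0, 1.0e16)
tau_nb = 0.8 * tau                    # tau_T^nb = (R_w/R_ej) tau_ej
frac = min(1.0, tau_nb * 1.0e9 / C)   # fraction of L_SD deposited
print(f"tau_ej ~ {tau:.2f}, deposited fraction ~ {frac:.3f}")
```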
We assume that the ejecta mass $M_{\rm ej}$ is confined in a volume of $V_{\rm ej} = (4\pi/3)(R_{\rm ej}^3-R_{\rm w}^3)$ with a uniform density $\rho_{\rm ej} = M_{\rm ej}/V_{\rm ej}$; the density profile is always constant, regardless of the compression of the ejecta due to the PWN. The initial conditions of the model have $R_{\rm ej} = 1.0 \times 10^{11}$ cm and $R_{\rm w} = 0.1R_{\rm ej}$, although our results are not sensitive to these values.
In particular, for cases with rapidly rotating pulsars, the expanding nebula compresses the ejecta into a thin shell ($R_{\rm ej}-R_{\rm w} \ll R_{\rm ej}$). Here, if $R_{\rm w} \geq 0.8R_{\rm ej}$, we set $R_{\rm w} = 0.8R_{\rm ej}$; this value gives a shock compression ratio of 5, which is in between the values of 4 and 7 for adiabatic and isothermal/radiative shocks respectively. For simplicity, we take this value due to the uncertainty of the nature of the shock. Also note that once a pulsar injects energy comparable to the initial explosion energy into the ejecta, the ejecta will undergo a hot bubble breakout~\citep{2017MNRAS.466.2633S}. This can modify the density structure especially of the outer ejecta, which is not taken into account in our model.
In general, embryonic PWN spectra are obtained by solving kinetic equations that take into account electromagnetic cascades in the nebula~\citep{2015ApJ...805...82M}.
For simplicity, in this work, the fiducial PWN spectrum is approximated to be:
\begin{equation}
\nu F_\nu = \frac{\epsilon_{\rm e} L_{\rm SD}(t)}{{\cal R}_b}
\begin{cases}
(E_\gamma/E^b_\text{syn})^{2-\alpha_1} & (E_\gamma < E^b_\text{syn}), \\
(E_\gamma/E^b_\text{syn})^{2-\alpha_2} & (E^b_\text{syn} < E_\gamma),
\end{cases}
\label{eqn:wns}
\end{equation}
where $\alpha_1 = 1.5-1.8$ and $\alpha_2 = 2.15$ unless otherwise noted, and $\epsilon_{\rm e}$ is the fraction of spin-down energy that goes into the emission, which we take to be a free parameter of order unity.\footnote{In the very early phase with a high compactness of the source, Compton cascades are fully developed in the so-called saturated regime, in which a flat energy spectrum is expected \citep[e.g.][]{Metzger_et_al_2013}. At later phases, the spectrum has two humps of synchrotron and inverse-Compton emissions \citep{2015ApJ...805...82M}. Because we focus on the dust-forming phase, which happens at longer timescales, we should take the above values expected at low frequencies.}
In the fast cooling limit, one has $\alpha_{1,2}=(2+q_{1,2})/2$ for the electron-positron injection index $q_{1,2}$.
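A minimal sketch of the broken power-law spectrum of Eq.~(\ref{eqn:wns}); the choice $\alpha_1=1.6$ (within the quoted range) and the spin-down luminosity used below are illustrative:

```python
def nu_f_nu(e_gamma, e_break, l_sd, alpha1=1.6, alpha2=2.15, eps_e=1.0):
    """Broken power-law nu*F_nu (erg/s) as a function of photon energy."""
    r_b = 1.0 / (2.0 - alpha1) + 1.0 / (alpha2 - 2.0)  # normalization R_b
    slope = (2.0 - alpha1) if e_gamma < e_break else (2.0 - alpha2)
    return eps_e * l_sd / r_b * (e_gamma / e_break) ** slope

l = 1.0e44  # erg/s, hypothetical instantaneous spin-down luminosity
peak = nu_f_nu(1.0, 1.0, l)  # the spectrum peaks at the break energy
print(f"peak nu*F_nu = eps_e L_SD / R_b = {peak:.2e} erg/s")
```

Below the break $\nu F_\nu$ rises as $E^{2-\alpha_1}$ and above it falls as $E^{2-\alpha_2}$, so the spectrum is maximal at $E_\gamma=E^b_{\rm syn}$.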
Also, $L_{\rm SD}$ is the total electromagnetic emission from the spin-down of the pulsar
\begin{equation}
L_{\rm SD}(t) \approx L_{\rm SD,0}f_{\rm SD}(t),
\label{eqn:lsd}
\end{equation}
where
\begin{align}
L_{\rm SD,0} &\simeq 8.6 \times 10^{45} \text{ erg/s } \left(\frac{B}{10^{13}\text{ G}}\right)^{2} \left(\frac{P}{1 \text{ ms}}\right)^{-4}, \\
f_{\rm SD}(t) &= \begin{cases}
1 & (t < t_{\rm SD}), \\
(t/t_{\rm SD})^{-2} & (t > t_{\rm SD}),
\end{cases} \\
t_{\rm SD} &\simeq 37 \text{ days } \left(\frac{B}{10^{13}\text{ G}}\right)^{-2} \left(\frac{P}{1 \text{ ms}}\right)^{2},
\end{align}
where $P$ and $B$ are the initial rotation period and dipole magnetic field of the pulsar, which we vary; ${\cal R}_b$ is a normalization factor given by
${\cal R}_b \sim 1/(2-\alpha_1)+1/(\alpha_2-2)$, and the break photon energy is
\begin{equation}
E^b_\text{syn} = \frac{3}{2} \hbar \gamma^2_b \frac{eB_{\rm PWN}}{m_{\text{e}}c},
\label{eqn:ebsyn}
\end{equation}
where $\gamma_b = 3 \times 10^5$ (we discuss the uncertainty of this parameter in Section \ref{sec:disgammab}) and $B_{\rm PWN}$ is the magnetic field in the PWN
\begin{equation}
B_{\rm PWN}^2 \approx \frac{6\epsilon_B L_{\rm SD,0}}{v_{\rm w}^3 t_{\rm SD}^2}
\begin{cases}
(t/t_{\rm SD})^{-2} & (t < t_{\rm SD}), \\
(t/t_{\rm SD})^{-3} & (t > t_{\rm SD}),
\end{cases}
\label{eqn:bpwn}
\end{equation}
where $\epsilon_B = 3 \times 10^{-3}$ is the fraction of spin-down energy that goes into the PWN magnetic energy.
At $t = t_{\rm SD}$,
this has a value of:
\begin{equation}
B_{\rm PWN} (t_{\rm SD})\sim 113 \text{ G } \left(\frac{v_{\rm w}}{10^9 \text{ cm s$^{-1}$}}\right)^{-3/2} \left(\frac{B}{10^{13}\text{ G}}\right)^{3} \left(\frac{P}{1 \text{ ms}}\right)^{-4}.
\label{eqn:bpwntsd}
\end{equation}
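The quoted prefactor can be cross-checked numerically; with the rounded values $L_{\rm SD,0}=8.6\times10^{45}$ erg s$^{-1}$ and $t_{\rm SD}=37$ days, the sketch below returns $\sim$123 G, within $\sim$10\% of the quoted 113 G (the offset reflects the rounding of the prefactors):

```python
import math

DAY = 86400.0  # s

def l_sd0(b13, p_ms):
    """Initial spin-down luminosity L_SD,0 in erg/s."""
    return 8.6e45 * b13**2 * p_ms**-4

def t_sd(b13, p_ms):
    """Spin-down time t_SD in seconds."""
    return 37.0 * DAY * b13**-2 * p_ms**2

def b_pwn_tsd(v_w9, b13, p_ms, eps_b=3.0e-3):
    """B_PWN(t_SD) from B_PWN^2 ~ 6 eps_B L_SD,0 / (v_w^3 t_SD^2)."""
    v_w = 1.0e9 * v_w9  # cm/s
    return math.sqrt(6.0 * eps_b * l_sd0(b13, p_ms)
                     / (v_w**3 * t_sd(b13, p_ms)**2))

# Fiducial values B = 1e13 G, P = 1 ms; the released energy L_SD,0 * t_SD
# is ~2.7e52 erg, of order the rotational energy of a millisecond pulsar.
print(f"L_SD,0 t_SD ~ {l_sd0(1.0, 1.0) * t_sd(1.0, 1.0):.1e} erg")
print(f"B_PWN(t_SD) ~ {b_pwn_tsd(1.0, 1.0, 1.0):.0f} G")
```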
This field evolution assumes that the field energy $B_{\rm PWN}^2$ is proportional to the total energy of the PWN.
Although it may not be completely true, such an assumption can be justified when the magnetic field is toroidally dominated.
This assumed spectrum is motivated by previous studies~\citep{2015ApJ...805...82M, MKM16,2018MNRAS.474..573O}, which assume a broken power-law injection of relativistic electrons and positrons into the PWN, inferred from Galactic PWN observations~\citep{tt10, tt13}, and calculate synchrotron emission, inverse Compton scattering, pair production, and the consequent electromagnetic cascades.
Equation \ref{eqn:wns} can be a good approximation especially for the early phase where all the injected electrons and positrons are in the fast cooling regime. However, this is not necessarily the case in the late phase; relatively low-energy electrons and positrons are in the slow cooling regime. As a result, in general, the fast-cooling limit spectrum overestimates the PWN flux at low-energy bands, e.g., in the infrared, submm, and radio bands. In more realistic cases, previously injected electrons (sometimes called relic electrons) can change the power-law spectrum~\citep{MKM16}. We discuss effects of these relic electrons and uncertainties in the PWN spectrum in Appendix \ref{sec:disrealspec}.
The non-thermal flux can be reduced if a significant fraction of the spin-down energy remains unconverted to radiation or $\epsilon_e$ is decreased, but the optical flux should also be reduced correspondingly.
Note that only the luminosity in the optical band is important for dust temperature and sublimation.
Throughout this paper, we take the ejected nickel mass $M_{\text{Ni}}$,
the initial kinetic energy of the ejecta $E_{\text{SN}}$, and the opacity $\kappa$ to be 0.1 $M_{\sun}$, $10^{51}$ erg, and 0.1 cm$^{2}$ g$^{-1}$ respectively, as in \cite{2018MNRAS.474..573O}.
\subsection{Dust Formation}
Dust formation in SN ejecta has mainly been studied with classical nucleation theory and its extension \citep{1989ApJ...344..325K, 1991A&A...249..474K, 2003ApJ...598..785N, 2008ApJ...684.1343N, 2010ApJ...713..356N, 2011ApJ...736...45N, 2001MNRAS.325..726T, 2007MNRAS.378..973B}. In this theory, dust condensation is described by the formation of stable seed nuclei and their growth, where the formation rate is derived by assuming the nucleation current to be in a steady state \citep{2013ApJ...776...24N}. This theory has allowed us to predict the size distribution and mass of condensing grain species, and these results have nicely explained the mass of dust formed in SN1987A \citep{1991A&A...249..474K} and the formation and evolution processes of dust in Cas A \citep{2010ApJ...713..356N}.
The model we use for dust formation is the steady-state model, first developed by \cite{1987PThPh..77.1402K} by introducing the concept of a key species or key molecule, which has the lowest collisional frequency among gaseous reactants, and then generalized by \cite{2013ApJ...776...24N}, whose formulation we take here. In this formulation, collisions between gaseous key molecules and clusters of $n$ key molecules, which we refer to as $n$-mers, control the reaction kinetics.
As the gas cools, dust condensation proceeds via the formation of clusters and subsequent attachment of key molecules to those clusters. The concentration of gas $c_1$ (we denote the concentration of $n$-mers $c(n,t) = c_n$) is given by
\begin{equation}
c_1 = \frac{M_{\rm ej}f_{\text{KM}}(1 - f_{\text{con}})}{V_{\rm ej} m_1}
\label{eqn:cevo}
\end{equation}
\noindent
where
$f_{\text{KM}}$ is the initial mass fraction of the key molecule in the ejecta, $f_{\text{con}}$ is the condensation efficiency, and $m_1$ is the mass of the key molecule.
The growth rate of grains, which we assume are spherical, is given by
\begin{equation}
\frac{da}{dt} = s\Omega_0 \left( \frac{kT_{\text{gas}}}{2\pi m_1}\right)^{\frac{1}{2}}c_1\left( 1 - \frac{1}{S}\right),
\label{eqn:dadt}
\end{equation}
\noindent
where $a$ is the grain radius, $s$ is the sticking probability of the key molecule onto grains, $\Omega_0$ is the volume of the condensate per key molecule, $k$ is the Boltzmann constant, $T_{\text{gas}}$ is the gas temperature, and $S$ is the supersaturation ratio
\begin{align}
\ln S = \frac{A}{T_{\text{gas}}} - B + \ln \left( \frac{c_1kT_{\text{gas}}}{p_s}\right) + \ln \Xi,
\label{eqn:ssr}
\end{align}
\noindent
where $A$ and $B$ are thermodynamic constants given in \cite{2003ApJ...598..785N},
$p_s = 1\,{\rm bar} = 10^6\,{\rm erg\,cm^{-3}}$, and
\begin{equation}
\Xi = \frac{\prod_{k=1}^i (p_k^{\mathcal A}/p_s)^{\nu_k}}{\prod_{k=1}^j (p_k^{\mathcal B}/p_s)^{\eta_k}},
\label{eqn:xidef}
\end{equation}
\noindent
where $\nu_k$ and $\eta_k$ are the stoichiometric coefficients and $p_k^{\mathcal A}$ and $p_k^{\mathcal B}$ ($k = 1$-$i$ and $1$-$j$ respectively) are the partial pressures for the gaseous reactants and products, $\mathcal{A}_k$ and $\mathcal{B}_k$, respectively, in the general chemical reaction below
\begin{equation}
\mathcal{Z}_{n-1} + ( \mathcal{X} + \nu_1\mathcal{A}_1 + ... + \nu_i\mathcal{A}_i ) \rightleftharpoons \mathcal{Z}_n + (\eta_1\mathcal{B}_1 + ... + \eta_j\mathcal{B}_j),
\label{eqn:chemreac}
\end{equation}
\noindent
where $\mathcal{Z}_n$ is an $n$-mer cluster generated from the nucleation of $n$ key molecules $\mathcal{X}$.
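As a concrete illustration, the supersaturation ratio (Equation~\ref{eqn:ssr}) and the grain growth rate $da/dt$ above can be sketched in Python. This is a minimal sketch rather than our production code: the carbon thermodynamic constants are taken from Table~\ref{tbl:gprop}, and the gas concentration used below is an arbitrary illustrative value, not a model output.

```python
import math

# Physical constants (cgs)
K_B = 1.380649e-16   # Boltzmann constant [erg K^-1]
P_S = 1.0e6          # standard pressure, 1 bar [erg cm^-3]

# Thermodynamic constants for carbon grains (Table of grain properties)
A_C = 8.64726e4      # [K]
B_C = 19.0422

def ln_supersaturation(T_gas, c1, A=A_C, B=B_C, Xi=1.0):
    """ln S = A/T_gas - B + ln(c1 k T_gas / p_s) + ln(Xi).
    For the reaction C_{n-1} + C <-> C_n there are no additional
    gaseous reactants or products, so Xi = 1."""
    return A / T_gas - B + math.log(c1 * K_B * T_gas / P_S) + math.log(Xi)

def growth_rate(T_gas, c1, ln_S, m1, Omega0, s=0.8):
    """da/dt = s Omega_0 sqrt(k T_gas / (2 pi m1)) c1 (1 - 1/S);
    negative (net sublimation) when the gas is undersaturated (S < 1)."""
    S = math.exp(ln_S)
    return (s * Omega0 * math.sqrt(K_B * T_gas / (2.0 * math.pi * m1))
            * c1 * (1.0 - 1.0 / S))
```

For carbon, $m_1 = 12$ amu and $\Omega_0 = 4\pi a_0^3/3$ with $a_0 = 1.281$~{\AA}; with these illustrative numbers and $c_1 = 10^{10}$ cm$^{-3}$, the gas switches from undersaturated to supersaturated as it cools through $\sim$2200 K.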
In the steady-state approximation, the current density $J_n$ is independent of $n$, being equal to the steady-state nucleation rate $J_s$. Starting from the reaction equation above, and following the derivation from \cite{2013ApJ...776...24N}, the steady-state nucleation rate is
\begin{equation}
J_s=s\Omega_0 \left( \frac{2\sigma_{\text{ten}}}{\pi m_1}\right)^{\frac{1}{2}}c_1^2 \Pi \exp \left( -\frac{4}{27} \frac{\mu^3}{(\ln S)^2} \right),
\label{eqn:jsdef}
\end{equation}
\noindent
where $\sigma_{\text{ten}}$ is the surface tension of the condensate taken from \cite{2003ApJ...598..785N}, $\mu = 4\pi a_0^2 \sigma_{\text{ten}}/kT_{\text{gas}}$ is the ratio between the surface energy of the condensate due to tension and the thermal energy of the gas, $a_0 = (3 \Omega_0/4 \pi)^{1/3}$ is the hypothetical grain radius per key molecule, which is calculated in \cite{2003ApJ...598..785N}, and the correction factor $\Pi$ is given by
\begin{equation}
\Pi = \left( \frac{\prod_{k=1}^i (c_k^{\mathcal{A}}/c_1)^{\nu_k}}{\prod_{k=1}^j (c_k^{\mathcal{B}}/c_1)^{\eta_k}} \right)^{\frac{1}{\omega}},
\label{eqn:pidef}
\end{equation}
\noindent
where
\begin{equation}
\omega = 1 + \sum_{k=1}^i \nu_k - \sum_{k=1}^j \eta_k.
\label{eqn:omedef}
\end{equation}
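The steady-state nucleation rate of Equation~\ref{eqn:jsdef} can likewise be sketched; the sharp exponential dependence on $\ln S$ is what produces the sudden onset of nucleation seen later in our results. This is an illustrative sketch, with $\Pi = 1$ as appropriate for the single-key-species carbon reaction.

```python
import math

K_B = 1.380649e-16   # Boltzmann constant [erg K^-1]

def nucleation_rate(T_gas, c1, ln_S, m1, a0, sigma_ten, s=0.8, Pi=1.0):
    """Steady-state nucleation rate J_s (Equation jsdef).
    Pi = 1 for a single-key-species reaction such as C_{n-1} + C <-> C_n."""
    if ln_S <= 0.0:
        return 0.0   # no net nucleation in undersaturated gas
    Omega0 = 4.0 * math.pi * a0**3 / 3.0
    mu = 4.0 * math.pi * a0**2 * sigma_ten / (K_B * T_gas)
    return (s * Omega0 * math.sqrt(2.0 * sigma_ten / (math.pi * m1))
            * c1**2 * Pi * math.exp(-(4.0 / 27.0) * mu**3 / ln_S**2))
```

Because $\mu \sim 10$ for carbon at the relevant temperatures, the exponential term suppresses $J_s$ by many orders of magnitude until $\ln S$ reaches a few, after which nucleation switches on essentially at once.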
Once $J_s$ is calculated, dividing by $\tilde{c}_1$, the concentration of key molecules if no dust had formed, gives us $I_s$, which is used to calculate
\begin{equation}
\frac{dK_i}{dt} =
\begin{cases}
I_s(t)n_*^{\frac{i}{3}} + \frac{i}{a_0} \left( \frac{da}{dt} \right) K_{i-1} & \text{for $i=1,2,3$} \\
I_s(t) & \text{for $i=0$}.
\end{cases}
\label{eqn:dkdt}
\end{equation}
Here $K_0$ represents the number of dust grains per key molecule ($K_0 = n_{\text{dust}}/\tilde{c}_1$), and $K_3$ represents the number fraction of key molecules locked in dust grains. Therefore, we can calculate the condensation efficiency $f_{\text{con}}(t)$ and average radius $a_{\text{ave}}(t)$ by
\begin{align}
f_{\text{con}} =& K_3, \\
a_{\text{ave}} =& a_0 \left( \frac{K_3}{K_0}\right)^{\frac{1}{3}}.
\label{eqn:fconaave}
\end{align}
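The moment equations above can be integrated with any standard ODE scheme; a minimal explicit-Euler sketch follows. The nucleation rate $I_s$ and growth rate $da/dt$ would be supplied by the preceding calculations; the values in the consistency check below are hypothetical.

```python
def step_moments(K, I_s, dadt, a0, n_star, dt):
    """One explicit-Euler step of the moment equations dK_i/dt, i = 0..3."""
    new = list(K)
    new[0] += I_s * dt
    for i in (1, 2, 3):
        new[i] += (I_s * n_star ** (i / 3.0)
                   + (i / a0) * dadt * K[i - 1]) * dt
    return new

def condensation_and_size(K, a0):
    """f_con = K_3 and a_ave = a0 (K_3 / K_0)^(1/3) (Equation fconaave)."""
    f_con = K[3]
    a_ave = a0 * (K[3] / K[0]) ** (1.0 / 3.0) if K[0] > 0.0 else 0.0
    return f_con, a_ave
```

A useful consistency check: with nucleation only ($da/dt = 0$), every grain contains exactly $n_*$ key molecules, so $a_{\text{ave}} = a_0 n_*^{1/3}$ regardless of $I_s$.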
We tested the formation of dust for two initial dust compositions (mass fractions are given in Table~\ref{tbl:comps}), which we call the Ib and Ic compositions after the supernova types they correspond to. These compositions are based on nucleosynthesis calculations \citep{2010ApJ...725..940Y} used in recent radiative transfer simulations of various types of supernovae with various types of progenitors, which account for nuclear fusion during the explosion \citep{2011MNRAS.414.2985D, 2012MNRAS.426L..76D, 2015MNRAS.453.2189D, 2016MNRAS.458.1253V, 2017A&A...603A..51D}. The Ib composition is similar to that of a small (ZAMS mass of 15-25 M$_{\sun}$) Wolf-Rayet star in a binary with roughly solar metallicity; one would expect about 3-5 M$_{\sun}$ of ejecta in this case. The Ib composition is also fairly similar to that of a low-metallicity Wolf-Rayet star without a binary companion with a ZAMS mass of around 25 M$_{\sun}$; the ejecta mass in this case would be $\sim$ 15 M$_{\sun}$. The Ic composition is similar to that of a large solar-metallicity Wolf-Rayet star with a ZAMS mass of around 60 M$_{\sun}$ evolved without a binary companion; one would expect about 5-7 M$_{\sun}$ of ejecta in this case.
The biggest differences between the two are the lack of Si in the Ic composition and the lower overall mass fractions in the Ib composition. While the Si mass fraction is not zero in real SNe, the simulations give a mass fraction about 10 times lower than that of Mg for the Ic-composition progenitor; this is small enough that we expect MgO grains to be formed in much greater quantity than MgSiO$_3$ or Mg$_2$SiO$_4$, so we neglect Si completely for the Ic composition. The Ib composition has lower mass fractions because a large fraction of the gas is still He, which does not form dust and is thus neglected in this study. The large fraction of He means that observed SNe with the Ib composition would be either Type Ib or IIb, depending on whether any H gas remained as well, while observed SNe with the Ic composition would be seen as Type Ic.
We examine two different types of dust growth for each composition. For the Ib composition we examine the formation of C and MgSiO$_3$ grains, which we expect to be formed preferentially over Mg$_2$SiO$_4$ by about a factor of 3 \citep{2010ApJ...713..356N}. For the Ic composition, since there is not enough Si to form large quantities of MgSiO$_3$ or Mg$_2$SiO$_4$, we examine growth of C and MgO grains. The growth reaction equations for these clusters are
\begin{align}
\text{C}_{n-1} + \text{C} & \rightleftharpoons \text{C}_n, \label{eqn:creac} \\
\text{MgSiO}_{3,n-1} + \text{Mg} + \text{SiO} + \text{O} & \rightleftharpoons \text{MgSiO}_{3,n}, \label{eqn:mgsio3reac} \\
\text{MgO}_{n-1} + \text{Mg} + \text{O} & \rightleftharpoons \text{MgO}_n . \label{eqn:mgoreac}
\end{align}
The physical properties of each dust grain used in the calculation are listed in Table~\ref{tbl:gprop}. We assume for the Ib composition that the concentrations of Mg and Si gas remain equal, and we assume that the number of oxygen atoms remains fixed, since the ejecta is oxygen dominated and grain formation will not significantly affect the concentration.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
Composition & $f_{\text{C}} $ & $f_{\text{O}}$ & $f_{\text{Mg}}$ & $f_{\text{Si}}$ \\ \hline
Ib & 0.1 & 0.3 & 0.03 & 0.03 \\
Ic & 0.3 & 0.6 & 0.05 & 0 \\ \hline
\end{tabular}
\caption{Initial mass fractions of the different gaseous elements in the ejecta.}
\label{tbl:comps}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Grain Type & C$_{\text{(s)}}$ & MgSiO$_{3\text{(s)}}$ & MgO$_{\text{(s)}}$ \\
Key Species & C$_{\text{(g)}}$ & Mg$_{\text{(g)}}$ & Mg$_{\text{(g)}}$ \\
$A/10^4$ (K) & 8.64726 & 25.0129 & 11.9237 \\
$B$ & 19.0422 & 72.0015 & 33.1593 \\
$a_0$ ({\AA}) & 1.281 & 2.319 & 1.646 \\
$\sigma_\text{ten}$ (erg cm$^{-2}$) & 1400 & 400 & 1100 \\ \hline
\end{tabular}
\caption{The properties of the dust grains considered in this study. The subscripts (s) and (g) represent solids and gases respectively. Since Mg and Si have the same concentration in the Ib composition, either one can be used as the key species. Values are taken from \protect\cite{2003ApJ...598..785N}.}
\label{tbl:gprop}
\end{table}
We ignore the formation of CO molecules, even though in oxygen-rich ejecta (which both compositions have) it is expected that carbon dust will not form in large quantities due to the preferential formation of CO molecules. Since our model is a one-zone model, including CO formation would mean that carbon dust formation would be greatly suppressed. In more complicated models, supernovae have both an oxygen-rich shell where silicate and Mg-molecule-based dust formation is dominant and a carbon-rich shell where carbon dust formation is dominant \citep[e.g.,][]{2008ApJ...684.1343N,2010ApJ...713..356N}. For most supernovae, we would only expect carbon dust formation in the carbon-dominant shell, which surrounds the oxygen-rich shell and usually contains $\sim$ 50\% of the carbon atoms, but particularly in SLSNe, turbulent mixing mixes the gas and homogenizes the ejecta, meaning that carbon dust will not form. For this reason, we treat the formation of each species independently, without accounting for shielding due to the early formation of one type of dust.
We take $s = 0.8$ and $n_* = 100$, the sticking probability of a colliding gas molecule and the minimum number of molecules for a cluster to be considered a dust grain, respectively, for all calculations; as long as $n_*$ is large enough, the results do not qualitatively change, as discussed in Appendix B of \cite{2013ApJ...776...24N}.
\subsection{Dust Sublimation}\label{sec:dustsub}
Once the gas has cooled enough for dust to form, the dust can still be sublimated by the PWN optical/UV emission. The equation for dust grains in radiative equilibrium, absorbing PWN emission and emitting thermally in the IR band, is
\begin{equation}
\frac{L_{\text{opt/UV}}}{4\pi r^2}Q_{\text{opt/UV}}\pi a^2 = \langle Q \rangle _T 4\pi a^2 \sigma T_{\text{dust}}^4,
\label{eqn:graineq}
\end{equation}
\noindent
where $L_{\text{opt/UV}}$ is the non-thermal luminosity in the band between 2-6 eV (0.2-0.6 $\mu$m), $\sigma$ is the Stefan-Boltzmann constant, $r$ is the radial position of the dust grain, $Q_{\text{opt/UV}}$ is the absorption efficiency factor averaged over the optical/UV spectrum, which we assume is $\approx$ 1, and finally
\begin{align}
\langle Q \rangle _T = & \frac{\int B_{\nu}(T_{\text{dust}})Q_{\text{abs,}\nu}d\nu}{\int B_{\nu}(T_{\text{dust}})d\nu} \\
\approx & \frac{Da_{-5}(T_{\text{dust}}/2300 \text{ K})}{1+Da_{-5}(T_{\text{dust}}/2300 \text{ K})},
\label{eqn:qt}
\end{align}
\noindent
where $a_{-5} = a/10^{-5}$ cm and $D$ is a constant ($\sim$ 0.3 for C dust grains, $\sim$ 0.03 for silicates and MgO) \citep{1984ApJ...285...89D}. These choices of emissivities are consistent with studies examining the sublimation of previously formed dust grains larger than $10^{-5}$ cm \citep{2000ApJ...537..796W}, but are not completely accurate while the dust is growing, or if the dust does not grow to $10^{-5}$ cm. \cite{1984ApJ...285...89D} calculated the emissivities of both graphite and silicates using their dielectric functions and found that they varied strongly and non-linearly with both grain size and absorption frequency. We discuss this approximation further in Section \ref{sec:dis}.
Dust will be sublimated if its equilibrium temperature is greater than the critical temperature $T_c$ for supersaturation, which can be calculated by setting $S=1$ in Equation~\ref{eqn:ssr}. From Equation~\ref{eqn:graineq}, the critical radius for dust sublimation is:
\begin{equation}
R_c = \left( \frac{L_{\text{opt/UV}}}{16\pi \sigma T_c^4} \frac{Q_{\text{opt/UV}}}{\langle Q \rangle _{T_c}}\right)^{\frac{1}{2}}.
\label{eqn:tdust}
\end{equation}
If $R_c > R_{\text{ej}}$ (the edge of the ejecta), then no dust can form anywhere in the ejecta, since any grain would be sublimated by the PWN emission. Any dust that would have formed at this point is converted back to gas in our calculation.
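The sublimation condition can be sketched as follows; the luminosity, temperature, and grain sizes in the check below are illustrative values only. Note that the critical radius grows as the grain size, and hence $\langle Q \rangle_T$, shrinks, which is why small growing grains are especially easy to sublimate.

```python
import math

SIGMA_SB = 5.670374e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def emissivity(a_cm, T_dust, D):
    """Frequency-averaged emissivity <Q>_T (Equation qt); a_-5 = a / 1e-5 cm."""
    x = D * (a_cm / 1.0e-5) * (T_dust / 2300.0)
    return x / (1.0 + x)

def sublimation_radius(L_optuv, T_c, a_cm, D, Q_optuv=1.0):
    """Critical radius R_c (Equation tdust): grains inside R_c reach
    equilibrium temperatures above T_c and are sublimated."""
    return math.sqrt(L_optuv * Q_optuv
                     / (16.0 * math.pi * SIGMA_SB * T_c**4
                        * emissivity(a_cm, T_c, D)))
```

As expected, $R_c$ shrinks as the PWN luminosity decays, which is what eventually lets dust form near the outer edge of the ejecta.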
\subsection{Dust Emission}\label{sec:dustem}
Once dust can start to form without being sublimated, it emits thermally in the infrared band. The optical/UV optical depth
\begin{equation}
\tau_{\text{opt/UV}} = \int_{R_c}^{R_{\text{ej}}} n_{\text{dust}}(r) \pi a^2 dr
\label{eqn:uvopdep}
\end{equation}
\noindent
is $\gg$ 1 in the dusty ejecta, so only a thin layer (the absorption region) will be heated by the PWN emission. This region will be located just outside $R_c$ and will emit just below $T_c$ if $R_c > R_{\text{w}}$, or be located at $R_{\text{w}}$ and emit at $T_\text{dust}(R_{\text{w}})$, with a blackbody luminosity,
\begin{equation}
L_{\nu} = 4\pi R^2 \langle Q \rangle _T \pi \frac{2h\nu^3}{c^2} \frac{1}{e^{\frac{h\nu}{k_BT}}-1}.
\label{eqn:thicklv}
\end{equation}
Although the reprocessed emission is sometimes modelled with a frequency-dependent emissivity with $L_{\nu} \propto \nu^{2+\beta}$ for $h\nu < kT$ \citep[e.g.,][]{1991ApJ...381..250B, 2007ApJ...663..866D, 2010ApJ...708..127S}, we use the frequency-averaged emissivity from Equation~\ref{eqn:qt}. Since we are only interested in the peak of the spectrum, the exact spectral index in the Rayleigh-Jeans limit is not important to our results.
Since the reprocessed emission lies at longer wavelengths than the absorbed PWN emission, which are longer than the typical size of the dust grains, the rest of the dust will appear optically thin for this dust emission. The thermal emission at $T_c$ or $T_\text{dust}(R_{\text{w}})$ will be directly observable.
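As a sketch, the dust luminosity of Equation~\ref{eqn:thicklv} can be evaluated numerically. Because our $\langle Q \rangle_T$ is frequency-averaged, the spectrum is simply a diluted blackbody whose peak sits at the Planck-function peak, $h\nu \approx 2.82\,kT$; the radius, temperature, and emissivity below are illustrative.

```python
import math

H = 6.62607015e-27    # Planck constant [erg s]
K_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
C = 2.99792458e10     # speed of light [cm s^-1]

def dust_L_nu(nu, R, T_dust, Q_T):
    """L_nu = 4 pi R^2 <Q>_T pi B_nu(T_dust) (Equation thicklv)."""
    B_nu = (2.0 * H * nu**3 / C**2) / (math.exp(H * nu / (K_B * T_dust)) - 1.0)
    return 4.0 * math.pi * R**2 * Q_T * math.pi * B_nu

def peak_frequency(R, T_dust, Q_T, n=20000):
    """Brute-force search for the spectral peak over 0 < h nu <= 10 kT."""
    nus = [(i + 1) * 10.0 * K_B * T_dust / (H * n) for i in range(n)]
    return max(nus, key=lambda nu: dust_L_nu(nu, R, T_dust, Q_T))
```

For $T_{\text{dust}} \sim 1500$ K this peak falls at $\sim$3.4 $\mu$m, in the near- to mid-infrared.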
\subsection{Gas Ionization}\label{sec:gasio}
Ionization of the gas can cause a temperature increase due to the collisions with free electrons, as well as increased Coulomb repulsion between charged ions which may prevent dust formation. However, ion-molecule reactions proceed more quickly due to ions inducing dipole moments in neutral atoms, enhancing their electrostatic attraction \citep{2005fost.book.....S}. Although the extent to which these effects compete and the ionization states in which they dominate are not well known, it is important to identify the region in which they will be important.
We calculate the radius out to which the gas can be ionized by the non-thermal radiation using the standard formula for the Str{\"o}mgren radius $R_s$, slightly modified because the ejecta is a shell extending from $R_{\text{w}}$ to $R_{\text{ej}}$. The formula becomes
\begin{equation}
R_s = \left(\frac{3}{4\pi}\frac{\Phi}{c_1^2 \beta_2} + R_{\text{w}}^3 \right)^{\frac{1}{3}},
\label{eqn:stromrad}
\end{equation}
where $\beta_2$ is the total recombination rate, which depends on electron temperature and chemical composition, and $\Phi$ is the flux of ionizing photons from the source. $\Phi$ can be calculated from the spectrum in Equation~\ref{eqn:wns}, by
\begin{align}
& \Phi= \frac{F^b_{\nu}}{E^b_\text{syn}} \times \nonumber\\
& \begin{cases}
\frac{1}{\alpha_1-1}[(\frac{E_I}{E^b_\text{syn}})^{-(\alpha_1-1)} -1] + \frac{1}{\alpha_2-1} & (E_I < E^b_{\rm syn}), \\
\frac{1}{\alpha_2-1}\left(\frac{E_I}{E^b_\text{syn}}\right)^{-(\alpha_2-1)} & (E^b_\text{syn} < E_I)
\end{cases}
\label{eqn:ionflux}
\end{align}
where $E_I$ is the ionization energy of the gas atom, which depends on the atoms being ionized but is between 5 and 15 eV for all atoms of interest here.
For gas density we use the concentration if no dust is formed $\tilde{c}_1$, since $R_s$ is not physically relevant to this study after dust is formed, but we examine multiple types of dust and want to treat their formation and ionization independently. We do not couple this calculation to the dust formation and sublimation calculation, as it is not well known what effect partial ionization of the ejecta will have on dust formation.
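The ionizing photon flux and the modified Str{\"o}mgren radius can be sketched as follows. The spectral indices $\alpha_1 = 0.5$ and $\alpha_2 = 1.5$ and the recombination rate used in the check are illustrative assumptions, not fitted values; note that the two branches of Equation~\ref{eqn:ionflux} join continuously at $E_I = E^b_{\rm syn}$.

```python
import math

def ionizing_flux(F_b, E_b, E_I, alpha1, alpha2):
    """Photon flux Phi above ionization energy E_I for the broken
    power-law PWN spectrum (Equation ionflux)."""
    if E_I < E_b:
        return (F_b / E_b) * ((1.0 / (alpha1 - 1.0))
                              * ((E_I / E_b) ** (-(alpha1 - 1.0)) - 1.0)
                              + 1.0 / (alpha2 - 1.0))
    return (F_b / E_b) * (1.0 / (alpha2 - 1.0)) \
        * (E_I / E_b) ** (-(alpha2 - 1.0))

def stromgren_radius(Phi, c1, beta2, R_w):
    """Shell-modified Stromgren radius (Equation stromrad)."""
    return ((3.0 / (4.0 * math.pi)) * Phi / (c1**2 * beta2)
            + R_w**3) ** (1.0 / 3.0)
```

In the limit $\Phi \to 0$ the radius collapses to $R_{\text{w}}$, i.e., no part of the shell is ionized.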
This formulation produces results that are mostly consistent with recent results by \cite{2018arXiv180605690M}, who calculated the ionization state of hydrogen- and oxygen-rich ejecta in a system with a $B$ = 10$^{14}$ G and $P$ = 1 ms rotating pulsar and 10 M$_{\sun}$ of ejecta. They find that the density averaged ionization fraction increases slowly for the hydrogen-rich ejecta and stays roughly constant for oxygen-rich ejecta.
However, their ejecta profile consists of a homogeneous core below $R_{\text{w}}$ and a high-velocity tail instead of a thin shell from $R_{\text{w}}$-$R_{\text{ej}}$, like ours, which will change the fraction of ejecta which becomes ionized.
Note that the gas ionization is important for the detectability of radio and submm emission because of various absorption processes. \cite{MKM16} studied the effects of synchrotron self-absorption and free-free absorption, and found that free-free absorption becomes irrelevant after $\gtrsim3-30$~yr at GHz frequencies ($\gtrsim1-3$~yr at 100~GHz) assuming a singly ionized state, which has been confirmed by \cite{2018arXiv180605690M}.
\section{Results}
We perform a parameter study over the initial pulsar rotation period $P$ and the initial magnetic field $B$. The overall PWN flux is multiplied by the factor $\epsilon_e$, which is analogous to changing the power-law spectrum or the dust absorption bandwidth, since only the total luminosity in the optical band is important for dust temperature and sublimation (Sec. \ref{sec:dustem}). We investigate five sets of ejecta and PWN parameters, shown in Table~\ref{tbl:planrun}; they give us qualitative information on the effects of changing the ejecta mass, the PWN spectrum, and the composition, as well as serving as case studies for typical binary Wolf-Rayet progenitors (Ib5-1), low-metallicity single progenitors (Ib15-1), large solar-metallicity single progenitors (Ic5-1), and millisecond pulsar-driven superluminous supernovae (Ic15-1). It is also worth noting that Ib15-1 and Ic5-1 have the same total amount of carbon, so comparing these two gives insight into the effects of significant changes to the dynamics.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
ID & Composition & $M_{\rm ej}$ M$_{\sun}$ & $\epsilon_e$ \\ \hline
Ib5-1 & Ib & 5 & 1 \\
Ib5-05 & Ib & 5 & 0.5 \\
Ib15-1 & Ib & 15 & 1 \\
Ic5-1 & Ic & 5 & 1 \\
Ic15-1 & Ic & 15 & 1 \\ \hline
\end{tabular}
\caption{The five sets of ejecta and PWN parameters we study. $\epsilon_e$ is a multiplying factor for the PWN flux.}
\label{tbl:planrun}
\end{table}
\subsection{Effects of a Pulsar}
\begin{figure}
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{props_np.pdf}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{props_p.pdf}
\end{subfigure} \\
\caption{The time evolution of $\ln(S)$, $I_s$, $f_{\text{con}}$ and $a_{\text{ave}}$ for both C and MgSiO$_3$ dust in the Ib5-1 composition without a pulsar (top) and with a $P$ = 2 ms, $B$ = 10$^{13}$ G pulsar (bottom). The pulsar makes dust formation occur later but more quickly, and the parameter evolution is qualitatively similar in both cases.}
\label{fig:ourtd}
\end{figure}
In Figure~\ref{fig:ourtd}, we compare the time evolution of $\ln(S)$, $I_s$, $f_{\text{con}}$ and $a_{\text{ave}}$ for both types of dust in the Ib5-1 composition without a pulsar (top) and with a $P$ = 2 ms, $B$ = 10$^{13}$ G pulsar (bottom). The pulsar delays the onset of formation (which we refer to throughout this paper as the formation timescale) and decreases the time from the beginning of dust formation until $f_{\text{con}} \sim 1$ (which we refer to throughout this paper as the condensation timescale). The formation timescale is increased from $\sim$ 180 to $\sim$ 590 days for C dust and from $\sim$ 215 to $\sim$ 800 days for MgSiO$_3$ dust, and the condensation timescale is decreased from $\sim$ 5 days to less than 1 day for both types of dust. These effects are due to the slowed cooling caused by the energy injection from the PWN, and to the delay in dust formation caused by sublimation, which raises $\ln(S)$ well above 1 by the time dust begins to form.
However, the evolution of these properties is qualitatively similar to the case with no pulsar. There is a spike in $I_s$ corresponding to the sudden nucleation of dust throughout the ejecta, which causes $f_{\text{con}}$ to jump to $\sim$ 1 within the condensation timescale. After this, the supersaturation ratio drops because of the drop in gas concentration, and this causes $I_s$ to fall to $\sim$ 0. As time goes on, the nucleated grains grow in size by accreting free key molecules, causing the gas concentration to drop further, but more slowly than during nucleation. There is also nucleation of new grains as the temperature drops further, but the growth rate is small and the concentration evolution is dominated by growth of previously nucleated grains. However, the growth of these grains is very slow once $f_{\text{con}} \sim 1$, as $a_{\text{ave}}$ stays relatively constant after this time. We see $\ln(S)$ fall and then rise again at later times; this second rise corresponds to the point when grain growth drops off. These results are also qualitatively similar to the high-density case from \cite{2013ApJ...776...24N}, where the steady-state formulation agreed with the more rigorous non-steady-state model, although our condensation timescale is shorter because our ejecta is confined to a smaller volume and is thus even denser.
\subsection{Formation Timescale and Ionization}
In Figure~\ref{fig:radev}, we show the evolution of the ejecta inner radius $R_{\text{w}}$ and outer radius $R_{\text{ej}}$, the critical (sublimation) radii $R_{\text{c}}$ for both types of dust, and the Str{\"o}mgren (ionization) radius $R_{\text{s}}$, for the Ib5-1 parameters with initial dipole field $B$ = 5 $\times$ 10$^{12}$ G and initial rotation periods $P$ = 1, 3, and 10 ms. The sublimation radius is shown from the point where the supersaturation ratio $S$ first becomes greater than 1, and thus $T_{\rm gas}$ becomes less than $T_c$, even if dust would be sublimated as soon as it starts to form.
The PWN emission gets stronger as $P$ decreases, and we see multiple effects because of this. The formation time for both types of dust can increase, due to the emission increasing the temperature of the ejecta. All the radii also increase as the emission gets stronger; $R_{\text{w}}$ and $R_{\text{ej}}$ increase because the magnetized wind from the PWN accelerates the ejecta expansion, and $R_{\text{c}}$ and $R_{\text{s}}$ both increase due to increased luminosity in the optical/UV band and above the ionization energy, respectively; the increase in $R_{\text{w}}$ can lead to enhanced adiabatic cooling, which can decrease the formation timescale. We also see the ejecta become thicker at high $P$ due to the low acceleration of the inside edge of the ejecta.
However, the increase in $R_{\text{c}}$ and $R_{\text{s}}$ as period decreases is greater than the increase in $R_{\text{w}}$, and this leads to qualitatively different dust formation behaviour. For $P$ = 10 ms, dust formation begins in the outer region for both dust types and in the inner region for C grains as soon as the supersaturation ratio $S$ = 1; the inside region at first has MgSiO$_3$ dust grains sublimated as soon as they begin to form, but as $R_{\text{c}}$ decreases these regions will eventually begin to form dust. $R_{\text{s}}$ is only slightly greater than $R_{\text{w}}$, so only the very inner region will be ionized. For $P$ = 3 ms, the sublimation radii for both types of dust are outside the edge of the ejecta when the dust first starts to form, so the dust is immediately sublimated once it becomes large enough to absorb optical/UV radiation; due to smaller dust grains having lower infrared emissivity than larger ones, they are unable to radiate heat as efficiently and are thus easier to sublimate. As the PWN luminosity decreases, $R_{\text{c}}$ drops below $R_{\text{ej}}$ and dust begins to form near the outer edge of the ejecta. The dust-forming region will grow larger as time passes until $R_{\text{c}}$ = $R_{\text{s}}$, at which point the gas in the inner region will remain ionized while in the outer region the dust will remain unsublimated, although maybe partially ionized. For $P$ = 1 ms, the entire ejecta will be at least partially ionized before dust begins to form, so it seems unlikely that a large amount of dust will be able to form at all.
\begin{figure}
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{rplotb_p1}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{rplotb_p2}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{rplotb_p4}
\end{subfigure}
\caption{The time evolution of the ejecta inner $R_{\text{w}}$ and outer radius $R_{\text{ej}}$ (blue), critical (sublimation) radii $R_{\text{c}}$ for both C (solid red) and MgSiO$_3$ (dashed red) dust, and Str{\"o}mgren (ionization) radius $R_{\text{s}}$ (green), for the Ib5-1 parameters with $B$ = 5 $\times$ 10$^{12}$ G and $P$ = 1 (top), 3 (middle), and 10 ms (bottom). The sublimation radius is shown from the point where the supersaturation ratio $S$ first becomes greater than 1.}
\label{fig:radev}
\end{figure}
In Figure~\ref{fig:formt}, we show the formation timescale for C and MgSiO$_3$ (in the Ib composition) or MgO (in the Ic composition) dust for all parameters shown in Table~\ref{tbl:planrun}. The dashed black line indicates when dust formation starts to be delayed due to sublimation, and the solid black line indicates where the ejecta is fully ionized before dust formation begins, which may stop dust formation altogether. Numerical values for the minimum and maximum formation times, as well as the formation time with no pulsar, are given in Table~\ref{tbl:formt}.
Each graph has some qualitative features in common. The shortest formation timescale is for low $P$ and high $B$; this is because the high initial energy injection causes the ejecta to expand very quickly, which makes adiabatic gas cooling more effective, and the fast spin-down time of the pulsar means the PWN luminosity drops very quickly, so the ejecta heating is minimal. As the $B$ field drops, the lower ejecta velocity and the slowly declining PWN emission increase the formation timescale, and sublimation inside the dashed region increases it even more, as in Figure~\ref{fig:radev} (middle). For some parameters, there is a $P$-$B$ region with the longest formation time where the ejecta will be fully ionized before dust starts to form, as in Figure~\ref{fig:radev} (top). In this region, it is likely that the ionization breakout will prevent dust formation entirely. However, decreasing $B$ even further will cause the formation timescale to drop as the effects of the PWN weaken and eventually become negligible. As $P$ increases, the PWN also gets weaker, which can either increase the formation timescale, as the ejecta velocity and thus adiabatic cooling decrease, or decrease it, as the heating from the PWN decreases. The balance between these two determines the formation timescale.
The different types of dust have different formation timescales due to both the mass fractions of their constituent gas atoms in the ejecta and their thermodynamic properties. The formation timescale of MgSiO$_3$ dust is about 120\% of the timescale for C dust for accelerated or pulsar-free formation and 140-150\% for delayed formation. The parameter space for sublimation delay and for ionization expands slightly for MgSiO$_3$ compared to C. The factor difference between the MgO and C formation times varies more than for MgSiO$_3$. For pulsar-free formation, the MgO timescale is similar to MgSiO$_3$, being around 120\% of the timescale for C. For pulsar-accelerated formation, the difference is about 150\% for high ejecta mass and almost 250\% for low ejecta mass, because formation is delayed longer by sublimation at low mass. Pulsar-delayed formation gives the highest discrepancy, though, with MgO dust taking roughly 5 times longer to form than C. The parameter space for sublimation delay and for ionization expands significantly for MgO compared to C, with a significant fraction of the parameter space we examined delayed by sublimation.
The parameter sets have many quantitative differences due to the effects of mass and luminosity on expansion and energy injection. Decreasing the PWN luminosity (compare Ib5-1 and Ib5-05) decreases the ejecta acceleration, thermal energy injection, and non-thermal ionization and sublimation. This decreases the maximum formation timescale and increases the minimum timescale, bringing everything closer to the pulsar-free scenario, which is the limit of decreased PWN luminosity. However, even though the luminosity was cut by 50\%, the formation timescales only changed by 20-30\%, and the parameter spaces for sublimation delay and ionization do not change very much. Increasing the mass (compare Ib5-1 and Ib15-1, and Ic5-1 and Ic15-1) slows the expansion of the ejecta, slowing adiabatic cooling and increasing the energy flux from the PWN, heating up the ejecta. As a result, the formation timescale is increased for all scenarios, varying from increases of around 30\% for delayed formation to 100\% for accelerated formation, except for MgO, which only increases by around 20\% for accelerated formation.
\begin{figure*}
\settoheight{\tempdima}{\includegraphics[width=.33\linewidth]{pbdia_b15_c}}%
\centering\begin{tabular}{@{}c@{ }c@{ }c@{}}
&\textbf{C dust} & \textbf{MgSiO$_3$/MgO dust} \\
\rowname{Ib5-1}&
\includegraphics[width=.33\linewidth]{pbdia_b15_c}&
\includegraphics[width=.33\linewidth]{pbdia_b15_m}\\[-1ex]
\rowname{Ib5-05}&
\includegraphics[width=.33\linewidth]{pbdia_b0p55_c}&
\includegraphics[width=.33\linewidth]{pbdia_b0p55_m}\\[-1ex]
\rowname{Ib15-1}&
\includegraphics[width=.33\linewidth]{pbdia_b115_c}&
\includegraphics[width=.33\linewidth]{pbdia_b115_m}\\[-1ex]
\rowname{Ic5-1}&
\includegraphics[width=.33\linewidth]{pbdia_s15_c}&
\includegraphics[width=.33\linewidth]{pbdia_s15_m}\\[-1ex]
\rowname{Ic15-1}&
\includegraphics[width=.33\linewidth]{pbdia_s115_c}&
\includegraphics[width=.33\linewidth]{pbdia_s115_m}\\[-1ex]
\end{tabular}
\caption{Dependence of formation timescale for C and MgSiO$_3$ (in the Ib composition) or MgO (in the Ic composition) dust on $B$ and $P$. The dashed black line indicates when dust formation starts to be delayed due to sublimation, and the solid black line indicates where the ejecta is fully ionized before dust formation begins, which may stop dust formation altogether (e.g., Figure \ref{fig:radev}). Numerical values for the minimum and maximum formation times, as well as the formation time with no pulsar, are given in Table~\ref{tbl:formt}.}%
\label{fig:formt}
\end{figure*}
\begin{table*}
\begin{tabular}{|c|ccc|ccc|ccc|} \hline
& \multicolumn{3}{c}{C dust}&\multicolumn{3}{c}{MgSiO$_3$ dust}&\multicolumn{3}{c}{MgO dust}\\
ID & $t_{\text{max}}$ & $t_{\text{min}}$ & $t_{\text{no pulsar}}$ & $t_{\text{max}}$ & $t_{\text{min}}$ & $t_{\text{no pulsar}}$ & $t_{\text{max}}$ & $t_{\text{min}}$ & $t_{\text{no pulsar}}$ \\ \hline
Ib5-1 & 1118 & 58 & 180 & 1649 & 73 & 215 &&& \\
Ib5-05 & 883 & 72 & 180 & 1263 & 88 & 215 &&& \\
Ib15-1 & 1498 & 120 & 316 & 2180 & 141 & 376 &&& \\
Ic5-1 & 1072 & 58 & 177 &&&& 5030 & 143 & 216 \\
Ic15-1 & 1420 & 118 & 311 &&&& 6657 & 175 & 378 \\ \hline
\end{tabular}
\caption{Numerical values in days for the minimum and maximum formation times, as well as the formation time with no pulsar, for all parameters shown in Table~\ref{tbl:planrun}. The formation timescale dependence on $B$ and $P$ is shown in Figure~\ref{fig:formt}.}
\label{tbl:formt}
\end{table*}
\subsection{Dust Size Distribution}
In Figure~\ref{fig:adist}, we show the final average size distribution for C and MgSiO$_3$ (in the Ib composition) or MgO (in the Ic composition) dust for all parameter sets shown in Table~\ref{tbl:planrun}, and show the minimum and maximum size for each in Table \ref{tbl:forma}. The size of the dust is heavily dependent on the gas concentration at the time of formation. Thus, in parameter regions where dust formation is delayed by sublimation, or the ejecta is accelerated by the PWN, the dust size is significantly smaller. Dust size is largest when the effect of the PWN on the ejecta becomes weaker or negligible, which is why the large $P$ and small $B$ region produces the largest dust; the spin-down timescale of this neutron star is over 10$^4$ years. Increasing ejecta mass (compare Ib5-1 and Ib15-1, and Ic5-1 and Ic15-1) and gas concentration (compare C dust in Ib5-1 and Ic5-1, and Ib15-1 and Ic15-1) also increases dust size.
\begin{figure*}
\settoheight{\tempdima}{\includegraphics[width=.33\linewidth]{pbdia_b15_c_adist}}%
\centering\begin{tabular}{@{}c@{ }c@{ }c@{}}
&\textbf{C dust} & \textbf{MgSiO$_3$/MgO dust} \\
\rowname{Ib5-1}&
\includegraphics[width=.33\linewidth]{pbdia_b15_c_adist}&
\includegraphics[width=.33\linewidth]{pbdia_b15_m_adist}\\[-1ex]
\rowname{Ib5-05}&
\includegraphics[width=.33\linewidth]{pbdia_b0p55_c_adist}&
\includegraphics[width=.33\linewidth]{pbdia_b0p55_m_adist}\\[-1ex]
\rowname{Ib15-1}&
\includegraphics[width=.33\linewidth]{pbdia_b115_c_adist}&
\includegraphics[width=.33\linewidth]{pbdia_b115_m_adist}\\[-1ex]
\rowname{Ic5-1}&
\includegraphics[width=.33\linewidth]{pbdia_s15_c_adist}&
\includegraphics[width=.33\linewidth]{pbdia_s15_m_adist}\\[-1ex]
\rowname{Ic15-1}&
\includegraphics[width=.33\linewidth]{pbdia_s115_c_adist}&
\includegraphics[width=.33\linewidth]{pbdia_s115_m_adist}\\[-1ex]
\end{tabular}
\caption{Dependence of final average dust size for C and MgSiO$_3$ (in the Ib composition) or MgO (in the Ic composition) dust on $B$ and $P$.}%
\label{fig:adist}
\end{figure*}
\begin{table}
\begin{tabular}{|c|cc|cc|cc|} \hline
& \multicolumn{2}{c}{C dust}&\multicolumn{2}{c}{MgSiO$_3$ dust}&\multicolumn{2}{c}{MgO dust}\\
ID & $a_{\text{max}}$ & $a_{\text{min}}$ & $a_{\text{max}}$ & $a_{\text{min}}$ & $a_{\text{max}}$ & $a_{\text{min}}$ \\ \hline
Ib5-1 & 8.4 & 0.6 & 3.6 & 1.1 && \\
Ib5-05 & 8.7 & 0.6 & 3.7 & 1.1 && \\
Ib15-1 & 30 & 0.6 & 10.0 & 1.1 && \\
Ic5-1 & 21 & 0.6 &&& 2.3 & 0.8 \\
Ic15-1 & 85 & 0.7 &&& 8.1 & 0.8 \\ \hline
\end{tabular}
\caption{Numerical values in nm (10$^{-7}$ cm) for the minimum and maximum final average dust size, for all parameters shown in Table~\ref{tbl:planrun}. The size distribution dependence on $B$ and $P$ is shown in Figure~\ref{fig:adist}.}
\label{tbl:forma}
\end{table}
Numerical simulations \citep{2010ApJ...715.1575S, 2012ApJ...748...12S} suggest that grains below 100 nm will be almost completely destroyed by the SN reverse shock, and larger ones will be sputtered to a smaller size \citep{2010ApJ...713..356N}. By this criterion, most of the dust will be destroyed by the reverse shock, since the average dust radius is always below 100 nm. However, since the dust distribution found by \cite{2013ApJ...776...24N} spans about one order of magnitude, it is possible that the largest C dust in non-pulsar-driven Type Ic SNe, or in SNe with large ejecta masses, may survive the reverse shock; the presence of a pulsar wind nebula increases the likelihood that most of the dust will be destroyed. Silicates will always be destroyed by the reverse shock unless the ejecta mass is higher than we model, as will carbon in the Ib5-1 and Ib5-05 cases, regardless of the presence of a pulsar. It is therefore unlikely that pulsar-driven SNe will contribute greatly to the overall dust concentration in the ISM.
It is worth noting that the average dust size is almost constant over the entire area where formation is delayed due to sublimation (compare to the dashed lines in Figure \ref{fig:formt}); this is likely due to our use of the steady-state approximation. This size is close to the minimum size for dust to be considered a grain, and indicates that dust cannot grow far beyond the point where it can efficiently absorb continuum energy, and that the final size of this dust actually depends on detailed microphysics beyond the scope of this paper.
\subsection{Dust Emission} \label{sec:resdustem}
We are interested in the possible detection of dust emission in Type-Ic SLSN remnants, so we examine the emission for two fiducial parameter sets: the P1 set, with $P$ = 1 ms, $B$ = 10$^{14}$ G, and $M_{\rm ej}$ = 15 M$_{\sun}$; and the P2 set, with $P$ = 2 ms, $B$ = 2 $\times$ 10$^{13}$ G, and $M_{\rm ej}$ = 5 M$_{\sun}$. These are chosen to roughly match the $P_{\rm min}$ and $M_{\rm max}$ cases from \cite{2018MNRAS.474..573O}, and both have the Ic composition.
The spectra of the PWN and dust are compared for the two cases in Figure~\ref{fig:slsn}. We account for uncertainty in the PWN spectra by showing the region for $\alpha_1$ between 1.8 and 1.5, and discuss this spectral uncertainty further in Appendix \ref{sec:disrealspec}. The dust spectrum shown was calculated with $\alpha_1$ = 1.8, but is expected to be lower by a factor of $\lesssim$ 2 in the first decade after the explosion when calculated with $\alpha_1$ = 1.5. We see that the detectability of the dust emission depends heavily on the spectral index, ranging from undetectable if
$\alpha_1$ = 1.8 to easily detectable after 2 years in both cases if $\alpha_1$ = 1.5.
For the $\alpha_1$ = 1.8 case (in which the non-thermal flux is likely to be overestimated), the dust spectrum approaches the PWN spectrum as time passes, because the lower absorbed energy gives the dust spectrum a lower peak frequency, but it remains subdominant for at least 20 years in both cases. For the case with lower $\alpha_1$, the dust luminosities in the first few years are around $\nu L_\nu \sim 10^{37}-10^{39}$ erg/s at around $10^4-10^5$ GHz depending on the case, which would be visible within $\sim$ 100-1000 Mpc using 2500 s observations from either Spitzer or JWST. It is also worth noting that the dust emission is not significant below $10^3$ GHz, so PWN observations with ALMA (100-250 GHz) should not be significantly affected by dust. Thus, results on the detectability of non-thermal submm emission \citep{MKM16,2018MNRAS.474..573O} are unaffected.
If $\epsilon_e < 1$, then the luminosity of the dust emission will decrease, but so will its peak wavelength, so its relative luminosity compared to the PWN spectrum will increase, possibly making the emission detectable for close supernovae even with $\alpha_1$ $\sim$ 1.8.
\begin{figure}
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{slsnspec_p1}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{slsnspec_p2}
\end{subfigure}
\caption{The PWN (dotted/shaded) and dust (dotted) spectra 2 (green), 5 (magenta), and 20 (cyan) years after the explosion. The P1 case is shown above and the P2 case below. The dotted lines represent the PWN spectra with $\alpha_1$ = 1.5 and 1.8, with the shaded region in between. The dust emission was calculated with $\alpha_1$ = 1.8, and is expected to be lower by a factor of $\lesssim$ 2 in the first decade after the explosion when calculated with $\alpha_1$ = 1.5.}
\label{fig:slsn}
\end{figure}
\subsection{Applications to Previous SNe}
SN1987A is the most well studied supernova to date, in part because of the dust formed in its ejecta. The explosion of the $\sim$ 18-20 M$_{\sun}$ blue supergiant Sk-69 202 produced $\sim$ 15 M$_{\sun}$ of ejecta \citep{1987Natur.328..318G}. The ejecta contained a $\sim$ 10 M$_{\sun}$ hydrogen envelope with a $\sim$ 5 M$_{\sun}$ core of heavier elements \citep{1988PASAu...7..355W, 2006NuPhA.777..424N}. Mg and Si were both produced in roughly equal amounts of about 0.1 M$_{\sun}$, while about 0.15-0.25 M$_{\sun}$ of C was produced \citep{1990ApJ...349..222T, 2011Sci...333.1258M}, giving mass fractions of 0.007 and 0.01-0.02 respectively; both of these are a factor of $\sim$ 5 lower than in our Ib composition. There is evidence for the formation of both carbon and silicate dust, with almost all of the carbon gas ending up in dust grains, and about 0.4 M$_{\sun}$ of MgSiO$_3$ produced \citep{2011Sci...333.1258M, 2015ApJ...810...75D, 2015A&A...575A..95S}. There were no silicate lines detected in the early spectra \citep{1993ApJS...88..477W}, but that could be because the emission features were absorbed by the carbon dust \citep{2015ApJ...810...75D}. There has not yet been any detection of a compact remnant, although a pulsar with initial spin $P >$ 100 ms and $B \sim$ 10$^{11-12}$ G is still not ruled out \citep{2007AIPC..937..134M}. Dust was hypothesized to condense in the ejecta between 415 to 615 days \citep{1993ApJS...88..477W} or even at timescales longer than 1000 days \citep{2015A&A...575A..95S, 2015MNRAS.446.2089W}; this timescale is longer than expected for a system without a pulsar, even though the C dust concentration is a factor of $\sim$ 5 lower than in our Ib composition (the Ic composition has a C concentration a factor of 3 higher but condenses only 3 days faster).
Based on our results, a pulsar with $P >$ 100 ms and $B \sim$ 10$^{11-12}$ G cannot explain the delay in dust formation, and a pulsar which could explain the delay would have produced detectable non-thermal radiation \citep{2016ARA&A..54...19M} and would have heated the dust to over 1000 K, which is hotter than predicted by any model \citep{1993ApJS...88..477W, 2011Sci...333.1258M}.
The more recent 2012au presents an interesting case. It is a Type Ib supernova with an ejecta mass of 5-7 M$_{\sun}$ \citep{2013ApJ...772L..17T}, making our Ib5-1 case a good approximation. No dust emission has been reported yet, and after $\sim$ 1 year the spectrum was consistent with radioactive heating \citep{2013ApJ...770L..38M}, but [OIII] emission lines have been reported at 6.2 years, which may require another heating source. \cite{2018ApJ...864L..36M} proposed a Crab-like pulsar on the basis of limiting the velocity of the PWN at late times, but the supernova was more luminous than a regular Type Ib supernova and had a kinetic energy of $\sim$ 10$^{52}$ erg \citep{2013ApJ...770L..38M}, similar to hypernovae, which requires a faster spinning central pulsar. It is more likely that the central pulsar has a magnetic field of $\sim 10^{14}$ G and a period close to 1 or 2 ms; this would contribute to the luminosity and kinetic energy at early times and still provide energy to ionize the ejecta after more than 6 years, something not likely with a Crab-like pulsar. This could possibly be tested by trying to observe dust emission, as ejecta with a Crab-like pulsar would likely have colder, larger dust and ejecta with a faster, stronger field pulsar would have smaller, hotter dust, or possibly none at all.
Some other Galactic SNRs have been observed to have both dust and a neutron star or PWN, such as Kes 75 \citep{2008Sci...319.1802G, 2012ApJ...745...46T}, SNR G54.1+0.3 \citep{2010ApJ...710..309T, 2001A&A...370..570L}, Cas A \citep{2013ApJ...777...22E, 2010ApJ...713..356N}, and the Crab Nebula \citep{2012ApJ...753...72T}. These SNe produced between 0.01-1 M$_{\sun}$ of dust, and a reverse shock has not yet passed through any of these SNRs. However, the initial spin periods of these pulsars were all likely $>$ 10 ms, so the PWNe likely did not have a strong effect on their dust formation. Since the dust found in the Crab Nebula was reported to be smaller than expected \citep{2009ASPC..414...43K, 2012ApJ...753...72T}, this may be evidence that the initial pulsar rotated with $P <$ 10 ms and suppressed grain growth, although this may simply be due to the dust mass being derived using an inaccurate distance to the Crab Nebula, as recent studies suggest the distance may be greater than previously thought \citep{2018AJ....156...58B,2018arXiv181112272F}.
\section{Discussion} \label{sec:dis}
\subsection{Dependence on $\gamma_b$} \label{sec:disgammab}
The value of $\gamma_b$, which determines the break in the photon spectrum (see Equation \ref{eqn:ebsyn}), is not well constrained for very young PWNe \citep{2007whsn.conf...40V, tt13, 2014JHEAp...1...31T}. Our value of 3 $\times$ 10$^5$ gives a spectral break in the optical to X-ray range, depending on timescale, but a value closer to 10$^2$ moves the spectral break into the submillimetre to infrared range. We calculated the time evolution of $\ln(S)$, $I_s$, $f_{\text{con}}$ and $a_{\text{ave}}$ for the same pulsar and ejecta as Figure~\ref{fig:ourtd} (bottom), and show it in Figure \ref{fig:gbtev}. The formation timescale is much closer to the non-pulsar case than the case with $\gamma_b$ = 3 $\times$ 10$^5$, with C dust forming around 280 days and MgSiO$_3$ forming around 370 days.
\begin{figure}
\includegraphics[width=\linewidth]{props_p_gbl}
\caption{The same as Figure~\ref{fig:ourtd} (bottom), but with $\gamma_b$ = 3 $\times$ 10$^2$ instead of 3 $\times$ 10$^5$.}
\label{fig:gbtev}
\end{figure}
The behaviour of the parameters for both types of dust is qualitatively similar to the pulsar cases with $\gamma_b$ = 3 $\times$ 10$^5$. Both $\ln(S)$ and $I_s$ rise to a very high value before their drop at the formation time. The drop-off in $I_s$ and rise in $f_{\text{con}}$ are also much steeper than in the non-pulsar case, signifying a very short condensation timescale. A likely interpretation of these results is that the dust was initially sublimated upon formation, and the more diffuse ejecta after the temperature dropped, combined with the large cluster formation rate, show that as soon as the temperature dropped, all the gas immediately formed dust without having a chance to grow by further accreting gas particles. This interpretation also explains why the average dust size is similar to the $\gamma_b$ = 3 $\times$ 10$^5$ case.
We also calculated the formation timescale distribution and average dust size distribution for the Ib5-1 parameter set, shown in Figures \ref{fig:g3e2ftd} and \ref{fig:g3e2adsd}. We find that the effects of the pulsar are greatly reduced, with the maximum formation timescale for C and MgSiO$_3$ reduced from 1118 to 460 days and 1649 to 623 days respectively, the minimum formation timescale for C and MgSiO$_3$ increased from 58 to 93 days and 73 to 111 days respectively, ionization breakouts not occurring, and the parameter space with formation delayed by sublimation decreasing. We also find that a 1 ms pulsar barely affects dust size for magnetar strength fields, but reduces the size around $B = 10^{12}-10^{13}$ G just as much as the $\gamma_b$ = 3 $\times$ 10$^5$ case.
\begin{figure}
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{pbdia_b15_c_gb}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{pbdia_b15_m_gb}
\end{subfigure}
\caption{The formation timescale distribution for C (top) and MgSiO$_3$ (bottom) dust for the Ib5-1 parameter set with $\gamma_b$ = 3 $\times$ 10$^2$.}
\label{fig:g3e2ftd}
\end{figure}
\begin{figure}
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{pbdia_b15_c_adist_gb}
\end{subfigure} \\
\begin{subfigure}
\centering
\includegraphics[width=\linewidth]{pbdia_b15_m_adist_gb}
\end{subfigure}
\caption{The average dust size distribution for C (top) and MgSiO$_3$ (bottom) dust for the Ib5-1 parameter set with $\gamma_b$ = 3 $\times$ 10$^2$.}
\label{fig:g3e2adsd}
\end{figure}
With $\gamma_b$ = 3 $\times$ 10$^2$, any trace of a pulsar engine would be difficult to detect. The effects on formation time are much weaker than in the fiducial case and occur over a smaller parameter region, and a noticeable effect on dust size is confined to a smaller region as well. Detection of a re-emitted signal is unlikely as well, since the spectral break is now in the submillimetre/infrared, so the reprocessed emission of the dust, which will be colder due to a low optical/UV flux, will be dominated by the non-thermal emission close to the peak of the spectrum.
\subsection{Emissivities}
We mention in Section~\ref{sec:dustsub} that the emissivities we use are the biggest uncertainties in our model, because the shape of the emissivities changes greatly with grain size \citep[e.g.,][Figures 4 and 5]{1984ApJ...285...89D}. For C dust, the emissivity for $a > 10^{-5}$ cm is $\sim$ 1 between 1-12 eV, and decreases in this range as dust size decreases; however, the emissivity rises at higher energies as dust size decreases. For silicates, the emissivity starts to drop off at $a >10^{-4}$~cm in the optical/UV band, but is otherwise similar to that of C. Both types of grains absorb almost all radiation across roughly one order of magnitude, and since our spectrum is fairly flat in $\nu F_\nu$, the total energy absorbed should be comparable for all dust sizes except where $a < 10$~nm, where the emissivity drops off over the entire spectrum by a factor of 2-3. Our bandwidth is only about half an order of magnitude, so it is likely that we slightly underestimate the temperature of the dust. The total energy absorbed should be about twice as high as in our model for $a >10$~nm, so the temperature will be higher by a factor of $2^{1/4} \approx 1.2$; this will make sublimation more effective, but also make the emission more detectable. The differences in formation time and dust size will be similar to the differences between our Ib5-05 and Ib5-1 parameter sets. However, since we found that final grain sizes in many cases were $<$ 10 nm, this will decrease the energy absorbed by small grains by a factor of 2-3 or more, leading to sublimation being less effective for newly formed grains.
\subsection{Other Uncertainties}
Our model also has a few other caveats. We assume a steady state, where the current density $J_n$ from $(n-1)$-mer to $n$-mer is independent of $n$, being identical to the steady-state nucleation rate $J_s$ from Equation~\ref{eqn:jsdef}. \cite{2013ApJ...776...24N} show that this model is only applicable if the saturation timescale satisfies $\tau_{\rm sat} \gtrsim 30\tau_{\rm coll}$, where $\tau_{\rm coll}$ is the collisional timescale, which they show to occur at higher densities; otherwise, the steady-state model condenses slightly quicker, but with smaller grains, than the non-steady-state one.
We assume spherical symmetry, which ignores the non-sphericity of the PWN emission, as well as the formation of clumps or fluid instabilities in the ejecta \citep{1991A&A...249..474K,2015ApJ...810...75D}. Our calculation is based on one-zone modeling, so the dust formation rate is independent of radius; in reality, it should be sharply affected by the density profile and shell structure of the ejecta.
Since we are interested in emission at early times, we assume no reverse shock has propagated through the ejecta; if the supernova is surrounded by circumstellar medium (CSM), the ejecta-CSM collision could send a reverse shock through the ejecta and destroy the dust \citep{2016ARA&A..54...19M, 2018MNRAS.478..110S}.
We also consider only spherical dust grains, instead of ellipsoidal grains \citep{2015ApJ...810...75D, 2015ApJ...800...50M} or more irregular shapes \citep{2003asdu.confE.170M}. We calculate the extent of ionization in the ejecta, but we ignore the effects of the increased electron temperature and charge separation on the rest of the ejecta. We ignore sputtering, which should decrease the average grain size \citep{1979ApJ...231...77D, 1979ApJ...231..438D, 1996ApJ...469..740J}, although the effect should not be very significant due to the thermal velocity of the grains not being extremely large \citep{1946BAN....10..187O}. We neglect the shape of the grain distribution altogether, calculating only the average grain size; this will affect the dust emission, since even though the emission region is optically thick, there should be a range of temperatures emitted (due to different emissivities), not a single one as we assume.
\section{Summary}
Using a model of dust formation, sublimation, emission, and gas ionization, we calculated the dust abundances, sizes, and radiation for several pulsar-driven supernovae. We found that dust formation is qualitatively similar with and without pulsars, but it can be accelerated by $\sim$ ms rotating pulsars with super-critical magnetic fields due to the increased effectiveness of adiabatic cooling. It can also be delayed for lower fields and higher periods due to thermal energy injection and sublimation, or stopped altogether due to ionization breakout. Carbon dust forms before silicates, and MgSiO$_3$ forms in much shorter timescales than MgO when the pulsar can delay the formation, even though they form at similar timescales when pulsars accelerate formation or do not have a significant effect on the ejecta. Increasing ejecta mass, lowering the PWN luminosity, and lowering the key molecule concentration all delay the dust formation as well. The typical formation timescales range from $\sim$ a few months for accelerated formation, to $\sim$ a year for no significant PWN effect, to $\sim$ 4-6 years for delayed formation of C or MgSiO$_3$ dust, to $\sim$ 15 years for delayed formation of MgO dust.
We found that the average size of the dust is decreased to $\sim$ a few nanometers or less when a pulsar either accelerates or delays formation, from the $\gtrsim$ 10 nm dust formed when pulsar energy injection is not significant, meaning that dust from pulsar-driven supernovae will likely not survive the SN reverse shock. However, the emission from the pulsar-heated dust could be detectable out to $\sim$ 100-1000 Mpc from typical SLSNe depending on the low-energy PWN spectral index.
Applying this model to SNRs with both dust and PWNe is not particularly insightful, as the newborn pulsar in each case was expected to have a long enough period that PWN energy injection would not have significantly affected dust formation, although the small dust size found in the Crab Nebula could be evidence for a pulsar effect. Applying this model to SN1987A could explain the delayed formation of dust in some models, but predicts a pulsar with non-thermal luminosity well above previous detection limits. Applying this model to SN2012au could provide insight into the nature of the central engine depending on the possible detection and properties of dust.
We caution that the uncertainties in the PWN spectrum at early times can greatly affect the strength of the pulsar effects. If the value of the spectral break is greatly decreased, the effects of the pulsar are almost negligible outside of a small parameter space, where the timescale effects are weakened and the effect on dust size may be weakened or amplified.
Our model relies on several assumptions. The most significant assumption we make is to fix the dust absorption emissivity and assume an emission emissivity which may not be valid over the entire range of dust sizes we examine, but due to the shape of our PWN spectrum we do not expect the dust temperature to change by more than a factor of 1.2 in most cases. Our model is also spherically symmetric, which ignores clumping and inhomogeneity in the ejecta, is one-zone, which ignores the radial dependence of concentration on dust formation, and does not fully account for effects like ionization and sputtering. Despite this, our calculations give some insight into what emission may be expected and detectable for a pulsar-driven supernova, the timescales for which these observations may be feasible, and the fate of the dust as the SNR evolves.
\section*{Acknowledgements}
We thank Keiichi Maeda, Takaya Nozawa, and Akihiro Suzuki for discussion.
C. M. B. O. has been supported by the Grant-in-aid for the Japan Society for the Promotion of Science (18J21778). K. K. acknowledges financial support from JSPS KAKENHI grant 18H04573 and 17K14248. K. M. acknowledges financial support from the Alfred P. Sloan Foundation and NSF grant PHY-1620777.
\bibliographystyle{mnras}
Magnetic fields are ubiquitous in the photosphere and interact with convective plasmas. Photospheric convective plasma flows advect magnetic fields, and eventually concentrate them. This interaction may cause magnetic reconnections and the excitation of magnetohydrodynamic waves, which could contribute to the heating of the solar corona \citep{Alfven47, Parker98, Hughes03, DePontieu07, Tomczyk07, vanBallegooijen11, Stangalini13a, Stangalini13b, Giannattasio14}. Thus, it is important to investigate the interaction between small-scale magnetic elements and plasma flows \citep{Hughes03, Viticchie06}. Given that magnetic fields are passively transported by plasma flows, the motion of magnetic elements is a manifestation of intrinsic organizing processes and can be described in terms of diffusion.
Bright points in the photosphere are thought to be the foot-points of the magnetic flux tubes that the convective motions of granules push violently into intergranular lanes \citep{Stenflo85, Solanki93}. G-band observations are suitable for studying bright points because they appear brighter due to a reduced abundance of the CH molecule at higher temperatures \citep{Steiner01}, and are often referred to as G-band bright points (hereafter GBPs). Therefore, it is possible to measure the dynamics of photospheric magnetic flux tubes, although only a fraction of magnetic elements are thought to be associated with GBPs according to observations, magneto-hydrodynamical simulations, and semi-empirical models of flux concentrations \citep{Keller92, Berger01, Sanchez01, Steiner01, Schussler03, Carlsson04, Shelyag04, Beck07, Ishikawa07, deWijn08}.
In the past decades, many authors have focused on the photospheric dispersal of magnetic elements or bright points in a field of view (FOV), such as an active region (AR) or a quiet Sun (QS) region. The efficiency of dispersal in the photosphere is characterized by a diffusion index, $\gamma$, which quantifies the transport process with respect to normal diffusion (a random walk). Historically, normal diffusion ($\gamma$\,=\,1) was the first known diffusion process. It characterizes a trajectory that consists of successive random steps and is described by the simplest form of diffusion theory \citep{Fick55, Einstein05, Lemons97}. \citet{Leighton64} first held that the dispersal rate of magnetic regions in the photosphere is normal. Later, \citet{Jokipii68} and \citet{Muller94} suggested that photospheric magnetic elements move in a random walk due to supergranular flows.
When $\gamma$\,$\neq$\,1, the process is termed anomalous diffusion. The motion of magnetic elements in ARs or in network regions of the QS is sub-diffusive ($\gamma$\,$<$\,1) due to trapping at stagnation points (i.e. points with nearly zero horizontal velocity; sinks of the flow field; \citealt{Lawrence93, Simon95, Cadavid99}).
More evidence based on high-resolution data shows that the photospheric BP motion is probably super-diffusive ($\gamma$\,$>$\,1; \citealt{Berger98, Lawrence01, Keys14, Yang15}). Typically, \citet{Abramenko11} indicated that the $\gamma$ value increases from a plage area in an AR ($\gamma$\,=\,1.48) to a QS region ($\gamma$\,=\,1.53), and to a coronal hole (CH; $\gamma$\,=\,1.67).
The diffusion coefficient, $K$, expresses the rate of increase of the dispersal area per unit time for magnetic elements or GBPs. The $K$ values range from 60 to 176\,$\rm km^{2}$\,$\rm s^{-1}$ in ARs or network regions in the QS \citep{Wang98, Schrijver90, Berger98, Hagenaar99, Giannattasio14, Keys14}, and from 190 to 400\,$\rm km^{2}$\,$\rm s^{-1}$ in QS regions \citep{Schrijver90, Berger98, Giannattasio14, Jafarzadeh14, Yang15}. In particular, \citet{Abramenko11} compared the $K$ values in an AR and a QS region, which were about 12 and 19\,$\rm km^{2}$\,$\rm s^{-1}$, respectively, whereas it was 22\,$\rm km^{2}$\,$\rm s^{-1}$ in a CH region.
However, studies focusing on the diffusion of GBPs at different magnetic field strengths are scarce.
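In practice, the diffusion index and coefficient quoted above are obtained by fitting the ensemble mean-squared displacement of tracked elements, $\langle (\Delta r)^2 \rangle = C\tau^{\gamma}$, with $K(\tau) = C\gamma\tau^{\gamma-1}/4$ in the convention of \citet{Abramenko11}. A minimal sketch of such a fit (illustrative code with made-up function names, not the analysis pipeline of any of the cited works):

```python
import numpy as np

def ensemble_msd(trajs):
    """Mean-squared displacement <|r(t) - r(0)|^2> averaged over tracks.
    trajs: array of shape (n_tracks, n_steps, 2), positions in km."""
    disp = trajs - trajs[:, :1, :]          # displacement from the start point
    return (disp ** 2).sum(axis=2).mean(axis=0)

def diffusion_fit(tau, sd):
    """Fit sd(tau) = C * tau**gamma on log-log axes; return (gamma, K)
    with K = C * gamma * tau**(gamma - 1) / 4 evaluated at the largest
    lag, following the convention of Abramenko et al. (2011)."""
    gamma, log_c = np.polyfit(np.log(tau), np.log(sd), 1)
    c = np.exp(log_c)
    k = c * gamma * tau[-1] ** (gamma - 1) / 4.0
    return gamma, k
```

Applied to a synthetic normal random walk, this recovers $\gamma \approx 1$; $\gamma > 1$ or $\gamma < 1$ then flags super- or sub-diffusive transport.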
Recent high spatial and temporal resolution magnetograms acquired with the $Hinode$/Solar Optical Telescope (SOT: \citealt{Kosugi07, Ichimoto08, Suematsu08}) provide an unprecedented opportunity to map a GBP to its co-spatial magnetic field strength. For instance, \citet{Utz13} extracted the corresponding magnetic field strength of each GBP from the longitudinal magnetogram within the spectro-polarimeter (SP) data. Nevertheless, high-cadence SP data have such a narrow FOV that they are unsuitable for tracking the complete trajectories of GBPs. Instead, Stokes $I$ and $V$ images of the Narrow-band Filter Imager (NFI) are co-spatial and co-temporal with the G-band images, and retain a large FOV. Thus, it is feasible to extract the simultaneous longitudinal magnetic field strength of each GBP during its lifetime.
The aim of this paper is to study the relation between the dispersal of GBPs and the associated longitudinal magnetic field strengths. It will shed light on how magnetic elements with different longitudinal fields diffuse on the solar surface, and thereby assist the study of the interaction between convection and magnetic fields.
The layout of the paper is as follows. The observations and data sets are described in Section 2. The data reduction and analysis are detailed in Section 3. In Section 4, the relation between the diffusion of GBPs and the longitudinal magnetic field strength is presented. Finally, the discussion and conclusion are given in Sections 5 and 6, respectively.
\section{DATA SETS}
We used two data sets of different magnetized environments acquired with $Hinode$/SOT. Each data set comprises G-band filtergrams (BFI) and Stokes $I$ and $V$ images (NFI). The G-band data, with a wavelength of 4305 {\AA}, are well suited to bright-point-sensitive photospheric investigations. The Stokes $I$ and $V$ images can be used to measure longitudinal magnetic fields via circular polarization.
Data set \uppercase\expandafter{\romannumeral1} covers an AR (NOAA 10960) observed between 2007 May 30 and June 14. The region produced numerous C- and M-class flares during that period. We adopted a timeseries of G-band images taken between 22:31:21 and 23:37:26\,$\rm UT$ on June 9. The images were obtained at a 30\,$\rm s$ cadence with an exposure time of 0.15\,$\rm s$. The spatial sampling is 0.109$''$ over a 111.6$''\times$111.6$''$ FOV. The center of the FOV is $x$\,=\,429.5$''$ and $y$\,=\,-158.5$''$, corresponding to a heliocentric angle of 28.2$^{\circ}$, equaling a cosine value of 0.88. We also used a co-spatial and co-temporal timeseries of Stokes $I$ and $V$ images taken 200 m{\AA} blueward of the Na \textsc{i} D 5896{\AA} spectral line with a spatial sampling of 0.16$''$ over a 327.7$''\times$163.8$''$ FOV.
Data set \uppercase\expandafter{\romannumeral2} was recorded on 2007 March 31. It covers a quiet Sun region near the solar disc center. The FOV of the G-band data is 111.6$''\times$55.8$''$ with a resolution of 0.109$''$, and its center pointed to solar coordinates of $x$\,=\,207.7$''$ and $y$\,=\,-130.3$''$. This position corresponds to a heliocentric angle of 12.4$^{\circ}$, equaling a cosine value of 0.98. The data set spans 11:36 to 12:40\,$\rm UT$, with a temporal sampling of 35\,$\rm s$. The timeseries of co-spatial and co-temporal Stokes $I$ and $V$ images, taken 120 m{\AA} blueward of the Fe \textsc{i} 6302.5{\AA} spectral line, has an FOV of 163.8$''\times$81.9$''$ with a resolution of 0.16$''$.
\section{DATA REDUCTION AND ANALYSIS}
The filtergrams were calibrated and reduced to level-1 using the standard data reduction routine fg\_prep.pro. The projection effects of both data sets were corrected according to the heliocentric longitude and latitude of each pixel.
\subsection{Alignment}
The G-band images and the Stokes $I$ and $V$ images were aligned carefully. The temporally closest images were chosen first, and the difference in spatial sampling was removed by bicubically interpolating the Stokes $I$ and $V$ images to the spatial sampling of the G-band images. After that, a sub-pixel-level image registration procedure \citep{Feng12, Yang15} was employed for spatial alignment: all G-band images in the timeseries were aligned to the first image, and then the Stokes $I$ and $V$ images were aligned to the G-band images according to the displacement between each Stokes $I$ image and the simultaneous G-band image. Finally, all of the images were cropped to the same FOV.
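The displacement measurement at the heart of such a registration step can be sketched with a plain FFT cross-correlation; this toy version (our own illustrative code) finds only integer-pixel shifts, whereas the procedure of \citet{Feng12} and \citet{Yang15} refines the result to sub-pixel accuracy:

```python
import numpy as np

def shift_between(ref, img):
    """Integer-pixel displacement (dy, dx) of img relative to ref,
    taken from the peak of the circular FFT cross-correlation."""
    cc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    ny, nx = cc.shape
    # map peak coordinates to signed shifts
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)
```

Rolling the second image by the negative of the returned shift then brings it into alignment with the reference.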
\subsection{Calibration of NFI Magnetogram}
The NFI Stokes $I$ and $V$ images of the two data sets were measured with the Na \textsc{i} D 5896{\AA} and Fe \textsc{i} 6302.5{\AA} spectral lines, respectively. Calibration of the NFI magnetograms is needed because they do not allow a full Stokes inversion to be performed. We calibrated the NFI longitudinal magnetic field strengths with reference to the longitudinal magnetograms in the level-2 SP data, which are inverted with the MERLIN code \citep{Lites07}. As indicated in previous studies of magnetic fields outside sunspots, a linear relation, $B_{\parallel} = \beta V/I$, can be calibrated between the circular polarization, $V/I$, and the longitudinal field, $B_{\parallel}$, in the weak-field approximation \citep{Jefferies89, Chae07, Ichimoto08, Zhou10}.
The NFI $V/I$ images were downsampled to match the spatial sampling of the SP longitudinal magnetograms. Then narrow stripes of the timeseries NFI $V/I$ images were cut and aligned against the co-spatial and co-temporal stripes of the SP longitudinal magnetograms using image registration techniques. The SP observation of data set \uppercase\expandafter{\romannumeral2} was simultaneous with the NFI images in the Fe spectral line, so the calibration coefficient of data set \uppercase\expandafter{\romannumeral2} was measured using the simultaneous SP data directly. Unfortunately, no co-spatial and co-temporal SP data were taken with the NFI images of data set \uppercase\expandafter{\romannumeral1}, so another NFI observation in the same Na spectral line as data set \uppercase\expandafter{\romannumeral1}, which did have simultaneous SP data, was employed to measure the $\beta$ value of the Na line. This observation was recorded in an AR between 20:20:05 and 20:59:29\,$\rm UT$ on 2007 July 1, temporally close to data set \uppercase\expandafter{\romannumeral1}. Both SP data sets were obtained by scanning a narrow slit over an area of 297.1$''\times$163.8$''$ with a resolution of 0.30$''\times$0.32$''$ per pixel.
Figure~\ref{fig1} shows the plots of NFI $V/I$ versus $B_{sp}$ for the Na and Fe spectral lines, respectively. The SP longitudinal magnetograms were multiplied by the corresponding filling factors. Pixels whose values were below the noise level of the SP longitudinal magnetograms or the NFI $V/I$ images were excluded. In data set \uppercase\expandafter{\romannumeral1}, only pixels whose absolute field strengths are less than 1000\,$\rm G$ were taken into account. From linear regression analyses of the relations for the Na and Fe spectral lines, the $\beta$ values are 13.1 and 35.6\,$\rm kG$ with correlation coefficients of 0.95 and 0.87, respectively. Consequently, the longitudinal magnetic field at any location in the NFI data of data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} was determined by multiplying $V/I$ by the corresponding $\beta$ value. The mean longitudinal magnetic field strengths of data set \uppercase\expandafter{\romannumeral1} (excluding the sunspot region) and data set \uppercase\expandafter{\romannumeral2} are 132 and 64\,$\rm G$, respectively. In addition, we selected a relatively quiet region (about 1$''\times$1$''$) in the NFI magnetograms to quantify their noise. The standard deviations ($\sigma$) are 10 and 20\,$\rm G$, respectively.
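The calibration reduces to fitting one slope per spectral line. A minimal sketch, assuming a zero-intercept fit in the weak-field approximation (the noise thresholds and the exact masking of the authors' pipeline are not specified, so they appear here as illustrative parameters):

```python
import numpy as np

def calibrate_beta(v_over_i, b_sp, noise_v=0.0, noise_b=0.0, b_max=1000.0):
    """Zero-intercept least-squares slope beta of B_par = beta * (V/I),
    excluding pixels below the noise floors or above |B| = b_max (G)."""
    mask = (np.abs(v_over_i) > noise_v) & (np.abs(b_sp) > noise_b) \
           & (np.abs(b_sp) < b_max)
    x, y = v_over_i[mask], b_sp[mask]
    beta = np.sum(x * y) / np.sum(x * x)   # minimizes sum (y - beta*x)^2
    r = np.corrcoef(x, y)[0, 1]            # linear correlation coefficient
    return beta, r
```

Once $\beta$ is known, every NFI $V/I$ pixel is converted to a longitudinal field strength by a single multiplication.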
\begin{figure}
\epsscale{1.1}
\plottwo{figure1a.eps}{figure1b.eps}
\caption{Plots of NFI $V/I$ versus $B_{sp}$ for different spectral lines. Panel (a): plot of NFI $V/I$ versus $B_{sp}$ at 200 m{\AA} blueward of the Na \textsc{i} D 5896{\AA} spectral line. The data were recorded between 20:20:05 and 20:59:29\,$\rm UT$ on 2007 July 1. Panel (b): plot of NFI $V/I$ versus $B_{sp}$ of data set \uppercase\expandafter{\romannumeral2}, at 120 m{\AA} blueward of the Fe \textsc{i} 6302.5{\AA} spectral line. See the electronic edition of the Journal for a color version of this figure.\label{fig1}}
\end{figure}
To estimate the variation of the calibration coefficients among different observations, we also processed several other data sets observed in the Na \textsc{i} D 5896{\AA} line. Their $\beta$ values are stable, varying within a range of 10\%. Therefore, it is feasible to calibrate the NFI magnetograms of data set \uppercase\expandafter{\romannumeral1} using the calibration coefficient of the Na spectral line measured from another observation.
\subsection{Detecting and Tracking GBPs}
A Laplacian and morphological dilation algorithm \citep{Feng13} was used to detect GBPs in each G-band image, and then a three-dimensional segmentation algorithm \citep{Yang14} was employed to track the evolution of GBPs in the timeseries of G-band images. Figure~\ref{fig2} shows a G-band image of data set \uppercase\expandafter{\romannumeral1} and the corresponding NFI magnetogram, in which the positions of the GBPs are highlighted in red. The GBPs cover 3.5\% of the selected FOV. Figure~\ref{fig3} shows the images of data set \uppercase\expandafter{\romannumeral2}, in which the GBPs cover only 1\%. \citet{Sanchez04} reported that GBPs cover 0.7\% of internetwork regions. Later, \citet{Sanchez10} refined this value to between 0.9\% and 2.2\% in QS regions. Recently, the fractional area occupied by GBPs in ARs was measured to be two to three times larger than that in QS regions \citep{Romano12, Feng13}.
\begin{figure}
\epsscale{1.1}
\plottwo{figure2a.eps}{figure2b.eps}
\caption{Panel (a): a G-band image of data set \uppercase\expandafter{\romannumeral1} (NOAA 10960), with a size of 104.1$''\times$105.9$''$. Panel (b): the co-aligned NFI magnetogram; the GBPs identified from the G-band image are highlighted in red. The $x$ and $y$ coordinates are ticked in arcseconds. \label{fig2}}
\end{figure}
\begin{figure}
\includegraphics[angle=0,scale=.60]{figure3a.eps}
\\
\\
\includegraphics[angle=0,scale=.666]{figure3b.eps}
\caption{Panel (a): a G-band image of data set \uppercase\expandafter{\romannumeral2} (a quiet Sun region), with a size of 109.8$''\times$54.3$''$. Panel (b): the co-aligned NFI magnetogram; the GBPs identified from the G-band image are highlighted in red. The $x$ and $y$ coordinates are ticked in arcseconds. \label{fig3}}
\end{figure}
To reduce detection errors, GBPs were discarded if (1) their equivalent diameters are less than 100\,$\rm km$ or greater than 500\,$\rm km$, (2) their lifetimes are shorter than 60\,$\rm s$, (3) their horizontal velocities exceed 7\,$\rm km\,s^{-1}$, (4) their life cycles are not complete, or (5) they merge or split during their lifetimes. As a result, a total of 103,023 bright points remain in 132 images, yielding 18,494 evolving GBPs in data set \uppercase\expandafter{\romannumeral1}; 19,349 bright points remain in 110 images, yielding 3,242 evolving GBPs in data set \uppercase\expandafter{\romannumeral2}.
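The five selection criteria amount to a simple per-track filter. A sketch, in which the track record format is a hypothetical dictionary, and the cadence (35\,s) and pixel scale (0.109$''$ $\approx$ 79 km at 725 km per arcsecond) are taken from the G-band data description:

```python
import math

def keep_track(track, cadence=35.0, pixel_km=79.0):
    """Apply the five GBP selection criteria to one tracked feature."""
    # (1) equivalent diameter within 100-500 km in every frame
    if not all(100.0 <= d <= 500.0 for d in track["diameters_km"]):
        return False
    # (2) lifetime of at least 60 s
    if (len(track["xy"]) - 1) * cadence < 60.0:
        return False
    # (3) horizontal velocity never exceeding 7 km/s
    for (x0, y0), (x1, y1) in zip(track["xy"], track["xy"][1:]):
        if math.hypot(x1 - x0, y1 - y0) * pixel_km / cadence > 7.0:
            return False
    # (4) complete life cycle and (5) no merging or splitting
    return track["complete"] and not track["merges_or_splits"]
```

Only tracks passing all five tests contribute to the evolving-GBP counts quoted above.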
\subsection{Extraction of the Magnetic field Strengths of GBPs}
The peak longitudinal magnetic field strength of each bright point was extracted from the corresponding NFI magnetogram after its region was identified. A peak value was adopted rather than the mean or median because GBPs are so small that slight errors in image alignment can significantly degrade average or median values \citep{Berger07}.
The absolute strongest magnetic field strength, $B$, during the evolution of each GBP was taken because it represents the peak state. Taking the noise of the NFI magnetograms into account, we discarded the GBPs of data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} whose $B$ is below 50 and 100\,$\rm G$ (about 4$\sigma$), respectively. About 97\% of the GBPs of data set \uppercase\expandafter{\romannumeral1} fall into the range 50\,--\,1000\,$\rm G$, while 97.5\% of those of data set \uppercase\expandafter{\romannumeral2} fall into 100\,--\,450\,$\rm G$. Consequently, the GBPs of data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} were categorized into 19 and 7 groups, respectively, using a common bin width of 50\,$\rm G$.
\subsection{Diffusion Approaches}
A traditional analysis of the diffusion process is the Lagrangian approach, which is efficient for treating the diffusive properties of tracers within turbulent fluid flows \citep{Monin75}. The approach consists of two main steps: (1) computing the spatial displacement, $\triangle l$, of an individual GBP as a function of the time interval, $\tau$, measured from its first appearance; (2) calculating the mean-square displacement, $\langle(\triangle l)^{2}\rangle$, for each time interval as a function of $\tau$. The power index, $\gamma$, of the mean-square displacement is defined by
\begin{equation}\label{eqa1}
\langle(\triangle l)^{2}\rangle = C\tau^{\gamma},
\end{equation}
where $C$ is the coefficient of proportionality. Usually, $\gamma$ and $C$ are derived as the slope and the exponential of the intercept on the $y$ axis, respectively, of the spectrum over a range of $\tau$ on a log-log scale. However, this approach is not ideal for studying the random motions of GBPs \citep{Dybiec09, Jafarzadeh14}. These authors indicated that the second step might mix different diffusive processes. Thus, they suggested that the square displacement of each GBP ought to be calculated separately; the $\gamma$ value can then be measured as the slope of its own square displacement on a log-log scale. The mean diffusion index and the associated standard deviation (the square root of the variance) can be obtained by fitting the distribution of the $\gamma$ values. We call this approach the distribution of diffusion indices (DDI).
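Applied to a single trajectory, equation (\ref{eqa1}) amounts to a straight-line fit in log-log space. A sketch of the per-track computation (the trajectory format is assumed; the cadence and pixel scale are those of the G-band data):

```python
import numpy as np

def diffusion_index(xy, cadence=35.0, pixel_km=79.0):
    """Diffusion index gamma and coefficient C of one GBP trajectory:
    slope and intercept of its squared displacement (measured from the
    first appearance) versus time interval tau on a log-log scale."""
    xy = np.asarray(xy, dtype=float) * pixel_km        # pixels -> km
    tau = cadence * np.arange(1, len(xy))              # s
    sq_disp = np.sum((xy[1:] - xy[0]) ** 2, axis=1)    # km^2
    gamma, log_c = np.polyfit(np.log(tau), np.log(sq_disp), 1)
    return gamma, np.exp(log_c)
```

A purely ballistic track returns $\gamma = 2$, a random walk $\gamma \approx 1$, so the returned slope directly classifies the diffusion regime of that GBP.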
In this study, both approaches were adopted and compared.
\section{RESULT}
\subsection{Lagrangian Approach}
The mean-square displacements of GBPs in different longitudinal magnetic field strength bins versus time for data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} are displayed on a log-log scale in Figures~\ref{fig4} (a) and (b), respectively.
Many previous authors have proposed that the estimation of $\gamma$ using the Lagrangian approach may be strongly affected by the timescale. In particular, \citet{Abramenko11} argued that the $\gamma$ value measured for small timescales ($\lesssim$ 300\,$\rm s$) represents the intrinsic diffusion of GBPs. As shown in Figure~\ref{fig4}, the lengths of the mean-square displacement curves differ and their tails are irregular because only a few GBPs have long lifetimes. If the $\gamma$ value were calculated by fitting the whole mean-square displacement, it would be dominated by a few long-lived GBPs. Therefore, we analyzed the mean-square displacements for timescales $\tau$\,$\lesssim$ 300\,$\rm s$ to probe the intrinsic diffusion index. The $\gamma$ values and the associated standard deviations were obtained from the slopes of the mean-square displacements for $\tau$\,$\lesssim$ 300\,$\rm s$ on a log-log scale within 95\% confidence intervals.
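The Lagrangian average restricted to $\tau \lesssim$ 300\,s can be sketched as follows (trajectory format assumed; displacements are in squared pixels and would be scaled to km$^2$ before fitting):

```python
import numpy as np

def lagrangian_msd(tracks, cadence=35.0, max_tau=300.0):
    """Mean-square displacement <(dl)^2>(tau), averaged over all
    tracks at each time interval up to max_tau (Lagrangian approach)."""
    nmax = int(max_tau // cadence)
    sums = np.zeros(nmax)
    counts = np.zeros(nmax)
    for xy in tracks:
        xy = np.asarray(xy, dtype=float)
        n = min(len(xy) - 1, nmax)
        sq = np.sum((xy[1:n + 1] - xy[0]) ** 2, axis=1)
        sums[:n] += sq
        counts[:n] += 1
    tau = cadence * np.arange(1, nmax + 1)
    good = counts > 0
    return tau[good], sums[good] / counts[good]
```

A subsequent `np.polyfit(np.log(tau), np.log(msd), 1)` then yields $\gamma$ and $\log C$ exactly as in equation (\ref{eqa1}).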
\begin{figure}
\includegraphics[angle=0,scale=.6]{figure4a.eps}
\\
\\
\includegraphics[angle=0,scale=.6]{figure4b.eps}
\caption{The mean-square displacement $\langle(\triangle l)^{2}\rangle$ of GBPs as a function of time, $\tau$, on a log-log scale in different longitudinal magnetic field strength bins by the Lagrangian approach. Panel (a): the mean-square displacement of data set \uppercase\expandafter{\romannumeral1}, in which the longitudinal magnetic field strength bins range from 50 to 1000\,$\rm G$. Panel (b): the mean-square displacement of data set \uppercase\expandafter{\romannumeral2}, in which the longitudinal magnetic field strength bins range from 100 to 450\,$\rm G$.\label{fig4}}
\end{figure}
In Figure~\ref{fig5} (a), the mean diffusion indices and the associated standard deviations for the different longitudinal magnetic field strength bins of data set \uppercase\expandafter{\romannumeral1} are illustrated with error bars in a black solid line. Taking the number of GBPs in each bin as a weight, we analyzed the relation between the diffusion indices and the longitudinal magnetic field strengths using weighted curve fitting. The relation is fitted well by an exponential function (black dashed line). The $\gamma$ value and its standard deviation are 1.61$\pm$0.17 (super-diffusion) in the 50\,--\,100\,$\rm G$ bin. As the longitudinal magnetic field strength increases, the $\gamma$ value decreases. At strong longitudinal magnetic field strengths the gradient becomes small and the $\gamma$ value approaches $\sim$1.00 (normal diffusion). The empirical formula deduced from the fit is:
\begin{equation}\label{eqa2}
\hat{\gamma}(B)= a e^{b B} + c,
\end{equation}
where $B$ is the longitudinal magnetic field strength in kG. The parameters $a$, $b$, and $c$ are 0.77$\pm$0.10, -1.95$\pm$0.91, and 0.96$\pm$0.15 at the 95\% confidence level, respectively.
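A self-contained way to perform the weighted fit of equation (\ref{eqa2}) is to scan the nonlinear rate $b$ on a grid and solve for the linear parameters $a$ and $c$ at each trial value. This is a sketch; the fitting tool actually used by the authors is not specified, and the grid limits are illustrative:

```python
import numpy as np

def fit_exponential(B, y, weights):
    """Weighted least-squares fit of y(B) = a*exp(b*B) + c.  The
    nonlinear rate b is scanned on a coarse grid; for each trial b,
    the amplitude a and offset c follow from a weighted linear solve."""
    B = np.asarray(B, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    best = (np.inf, 0.0, 0.0, 0.0)
    for b in np.linspace(-10.0, -0.1, 500):
        X = np.column_stack([np.exp(b * B), np.ones_like(B)])
        (a, c), *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
        chi2 = np.sum(w ** 2 * (y - a * np.exp(b * B) - c) ** 2)
        if chi2 < best[0]:
            best = (chi2, a, b, c)
    return best[1:]  # (a, b, c)
```

The same routine applies unchanged to the $K(B)$ relation of equation (\ref{eqa4}), since both fits share the functional form $a\,e^{bB}+c$.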
\begin{figure}
\epsscale{1.0}
\plottwo{figure5a.eps}{figure5b.eps}
\caption{Panel (a): the relation between the diffusion indices of GBPs and the longitudinal magnetic field strengths for timescales less than 300\,$\rm s$ by the Lagrangian approach. Panel (b): the relation between the diffusion coefficients of GBPs and the longitudinal magnetic field strengths for timescales less than 300\,$\rm s$ by the Lagrangian approach. The mean values and the standard deviations of data set \uppercase\expandafter{\romannumeral1} are illustrated using error bars in black solid lines, which are fitted well with exponential functions (black dashed lines). The relation for data set \uppercase\expandafter{\romannumeral2} is illustrated with a red dotted line.\label{fig5}}
\end{figure}
The diffusion indices of data set \uppercase\expandafter{\romannumeral2} are also shown in Figure~\ref{fig5} (a) with a red dotted line. Compared with data set \uppercase\expandafter{\romannumeral1}, the trend is simple because the longitudinal magnetic field strengths of the GBPs only range from 100 to 450\,$\rm G$. The $\gamma$ value mostly decreases from 1.70$\pm$0.07 to 1.27$\pm$0.10, except that the first value is smaller than the second and the fifth is slightly smaller than the sixth. In Figure~\ref{fig4} (b), it can be seen that a sudden drop of the mean-square displacement happens at $\tau \simeq$ 300\,$\rm s$ in the first longitudinal field strength bin. This leads to a small $\gamma$ value with a large standard deviation. Because of the limited range of longitudinal magnetic field strengths, we did not fit an empirical formula to this data set.
We then established the diffusion coefficient, $K$, of anomalous diffusion with the formula \citep{Monin75}:
\begin{equation}\label{eqa3}
K(\tau)= \frac{C\gamma}{4}\tau^{\gamma-1},
\end{equation}
where $\gamma$, $\tau$, and $C$ are deduced from equation (\ref{eqa1}). Figure~\ref{fig5} (b) shows the relations between $K$ and $B$ for timescales $\tau\,\lesssim$ 300\,$\rm s$. The $K$ value of data set \uppercase\expandafter{\romannumeral1} decreases exponentially from 143$\pm$50 to 26$\pm$4\,$\rm km^{2}$\,$\rm s^{-1}$. The empirical formula is:
\begin{equation}\label{eqa4}
\hat{K}(B)= a e^{b B} + c,
\end{equation}
where $B$ is the longitudinal magnetic field strength in kG. The parameters $a$, $b$, and $c$ are 165.30$\pm$15, -3.23$\pm$0.87, and 17.22$\pm$12 at the 95\% confidence level, respectively.
The trend of $K$ for data set \uppercase\expandafter{\romannumeral2} is similar to that of the corresponding $\gamma$. The $K$ value decreases from 200$\pm$31 to 69$\pm$15\,$\rm km^{2}$\,$\rm s^{-1}$, except that the first value is smaller than the second and the fifth is smaller than the sixth. The $K$ values are distinctly higher than those of data set \uppercase\expandafter{\romannumeral1}. In equations (\ref{eqa1}) and (\ref{eqa3}), $\gamma$ relates to the acceleration and $C$ relates to the initial velocity. The $C$ values deduced from equation (\ref{eqa1}) for data set \uppercase\expandafter{\romannumeral2} are greater than those for data set \uppercase\expandafter{\romannumeral1}. This is in agreement with previous studies, in which the horizontal velocities of GBPs were found to be attenuated in ARs compared to QS regions \citep{Berger98, Mostl06, Keys11}, and it is the main reason why the $K$ values of data set \uppercase\expandafter{\romannumeral2} are distinctly higher than those of data set \uppercase\expandafter{\romannumeral1}.
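For reference, equation (\ref{eqa3}) can be evaluated directly once a $(\gamma, C)$ pair has been fitted (with $C$ in km$^2$\,s$^{-\gamma}$ and $\tau$ in s, $K$ comes out in km$^2$\,s$^{-1}$):

```python
def diffusion_coefficient(c_coef, gamma, tau):
    """K(tau) = C * gamma / 4 * tau**(gamma - 1), equation (3)."""
    return c_coef * gamma / 4.0 * tau ** (gamma - 1.0)
```

Note that for normal diffusion ($\gamma = 1$) the coefficient reduces to the familiar timescale-independent $K = C/4$, while for $\gamma > 1$ the chosen $\tau$ matters, which is why the Lagrangian ($\tau \lesssim 300$\,s) and DDI (per-lifetime) estimates of $K$ differ.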
\subsection{DDI Approach}
Figure~\ref{fig6} (a) shows the relation between $\gamma$ and $B$ in data set \uppercase\expandafter{\romannumeral1} using the DDI approach in a two-dimensional histogram. The $\gamma$ value of each GBP is measured as the slope of its own square displacement versus time on a log-log scale. The distribution of $\gamma$ in each bin is fitted well with a Gaussian function, and the marginal distribution of $\gamma$ is projected on the $yoz$ plane. All distributions have similar shapes, but shift with increasing longitudinal field strength. The ranges of $\gamma$ values in the different bins show no significant difference: about 83\%\,--\,90\% of the $\gamma$ values range from 0 to 4. The mean $\gamma$ and the associated standard deviation are calculated by fitting the Gaussian distribution of each bin. The mean $\gamma$ value is 1.66$\pm$0.05 in the first longitudinal field strength bin, then mostly decreases with increasing longitudinal field strength, finally reaching 1.33$\pm$0.07; the associated standard deviations remain in a small range from 0.04 to 0.07. On the upper $xoy$ plane, the mean $\gamma$ values of all bins and a quadratic curve fit of the mean $\gamma$ values are drawn in red.
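The per-bin statistics can be approximated from the moments of the $\gamma$ histogram over the meaningful range 0\,--\,4, a moment-based stand-in for the Gaussian fit used here (the bin count is illustrative):

```python
import numpy as np

def gamma_stats(gammas, lo=0.0, hi=4.0, nbins=40):
    """Mean and standard deviation of the diffusion-index distribution
    in one field-strength bin, estimated from histogram moments over
    the meaningful range [lo, hi]; pathological slopes are dropped."""
    g = np.asarray(gammas, dtype=float)
    g = g[(g > lo) & (g < hi)]
    counts, edges = np.histogram(g, bins=nbins, range=(lo, hi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean = np.average(centers, weights=counts)
    std = np.sqrt(np.average((centers - mean) ** 2, weights=counts))
    return mean, std
```

For a near-Gaussian distribution the histogram moments and a Gaussian fit agree closely; the restriction to $0 < \gamma < 4$ mirrors the exclusion of meaningless slopes discussed in Section 5.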
\begin{figure}
\includegraphics[angle=0,scale=.6]{figure6a.eps}
\\
\\
\includegraphics[angle=0,scale=.6]{figure6b.eps}
\caption{The two-dimensional histograms of the diffusion indices of GBPs and the longitudinal magnetic field strengths by the DDI approach. The marginal distribution of the longitudinal magnetic field strengths is projected on the $xoz$ plane. The marginal distribution of the diffusion indices is projected on the $yoz$ plane. On the upper $xoy$ plane, the mean $\gamma$ values of all bins, and a quadratic curve fit of the mean $\gamma$ values are drawn in red. Panel (a): the histogram of data set \uppercase\expandafter{\romannumeral1}. Panel (b): the histogram of data set \uppercase\expandafter{\romannumeral2}. \label{fig6}}
\end{figure}
Figure~\ref{fig6} (b) shows the relation between $\gamma$ and $B$ in data set \uppercase\expandafter{\romannumeral2}. The longitudinal field strengths range from 100 to 450\,$\rm G$. The distribution of $\gamma$ in each bin is also fitted well with a Gaussian function. In the different bins, about 90\%\,--\,95\% of the $\gamma$ values range from 0 to 4. The $\gamma$ values decrease from 1.84$\pm$0.07 to 1.37$\pm$0.14, except for a value of 1.68$\pm$0.07 in the sixth bin.
To explore the relation between $\gamma$ and $B$ more clearly, we redrew the mean $\gamma$ values and the associated standard deviations of data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} using error bars in Figure~\ref{fig7} (a) with a black solid line and a red dotted line, respectively. The trend of the mean $\gamma$ values of data set \uppercase\expandafter{\romannumeral1} is fitted well with an exponential function (black dashed line). By weighted curve fitting, the empirical formula was deduced as equation (\ref{eqa2}), where the parameters $a$, $b$, and $c$ are 0.32$\pm$0.07, -2.15$\pm$1.58, and 1.41$\pm$0.06 at the 95\% confidence level, respectively.
Note that the marginal distributions of $B$ of data set \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} are projected on the $xoz$ plane in Figure~\ref{fig6} (a) and (b), respectively, which are fitted with a double log-normal distribution with two peaks at 214$\pm$72 and 662$\pm$91\,$\rm G$, and a log-normal distribution with a peak at 277$\pm$49\,$\rm G$. Interestingly, there are two peaks in data set \uppercase\expandafter{\romannumeral1}, but only one peak in data set \uppercase\expandafter{\romannumeral2}. The first peak value of data set \uppercase\expandafter{\romannumeral1} is close to the peak value of data set \uppercase\expandafter{\romannumeral2}.
The $K$ value of each GBP is calculated from equation (\ref{eqa3}) using its lifetime as the timescale $\tau$. The mean $K$ values are determined from the distributions of the $K$ values in the different bins. Figure~\ref{fig7} (b) shows the relations between $K$ and $B$ for data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}. The $K$ value of data set \uppercase\expandafter{\romannumeral1} decreases exponentially from 89$\pm$5 to 41$\pm$4\,$\rm km^{2}$\,$\rm s^{-1}$. The empirical formula follows equation (\ref{eqa4}), where $a$ is 66.21$\pm$11.01, $b$ is -1.57$\pm$0.94, and $c$ is 32.80$\pm$15.32 at the 95\% confidence level. The $K$ value of data set \uppercase\expandafter{\romannumeral2} decreases from 139$\pm$6 to 67$\pm$24\,$\rm km^{2}$\,$\rm s^{-1}$, except for a value of 124$\pm$9\,$\rm km^{2}$\,$\rm s^{-1}$ in the first bin. We find that the $K$ values in the first bins of both data sets are small, although the corresponding $\gamma$ values are not. The main reason is that the lifetimes of GBPs with weak field strengths are shorter than those with strong ones.
\begin{figure}
\epsscale{1.0}
\plottwo{figure7a.eps}{figure7b.eps}
\caption{Panel (a): the relations between the diffusion indices of GBPs and the longitudinal magnetic field strengths by the DDI approach using error bars. Panel (b): the relation between the diffusion coefficients of GBPs and the longitudinal magnetic field strengths. The mean values and the associated standard deviations of data set \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} are drawn with a black solid line and red dotted line, respectively. Both relations of data set \uppercase\expandafter{\romannumeral1} are fitted very well with exponential functions (black dashed lines). \label{fig7}}
\end{figure}
\section{DISCUSSION}
We used high spatial and temporal resolution G-band images and simultaneous NFI Stokes $I$ and $V$ images acquired with Hinode/SOT. The corresponding longitudinal magnetic field strength of each GBP was extracted from the calibrated NFI magnetogram after carefully aligning the magnetograms with the G-band images. This point-to-point method makes it feasible to investigate the diffusion regimes of magnetic flux tubes at different longitudinal magnetic field strengths.
\subsection{Lagrangian and DDI Approach}
The Lagrangian approach and the DDI approach were adopted to measure the $\gamma$ and $K$ values separately. The relations of $\gamma$ and $K$ with $B$ are both fitted well with exponential functions regardless of which approach was used, although the resulting values differ somewhat.
The traditional Lagrangian approach is typically used to analyze the diffusion of tracers within fluid flows. The mean-square displacement step averages the displacements of individual GBPs and thus diminishes the displacement signal; this is the main reason why the resulting $\gamma$ values are smaller than those of the DDI approach. In addition, this approach inevitably depends on the timescale: a short timescale generally results in a small diffusion index, and vice versa. Most GBPs have short lifetimes because the lifetime distribution follows an exponential function with a mean of about 150\,$\rm s$. Therefore, a few random long-lived GBPs determine the diffusion efficiency at long timescales, as illustrated in Figure~\ref{fig4}. The timescale was chosen differently in previous studies: some authors fitted the whole mean-square displacement directly, while others cut part of its tail based on a percentage, the goodness-of-fit, etc. We fitted the mean-square displacements of all longitudinal field strength bins for timescales $\lesssim$ 300\,$\rm s$ because this range is not affected by a few long-lived GBPs.
The other approach is the DDI, in which the mean diffusion index is obtained from the distribution of the diffusion indices of individual GBPs. We analyzed all trajectories of individual GBPs separately. This approach avoids mixing GBPs in different diffusion regimes, so it sheds light on the intrinsic properties of their proper motions. However, if a GBP moves in an erratic or circular path, the slope of the linear fit on a log-log scale can be very small (even below zero) or very large (greater than four), with a low goodness-of-fit. Examples of such pathological cases are given by \citet{Jafarzadeh14}. These cases result in meaningless diffusion indices. The percentages of meaningless diffusion indices in data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} are 13\% and 8\%, respectively. In detail, very small (below zero) and very large (greater than four) diffusion indices account for 9.6\% and 3.5\% of data set \uppercase\expandafter{\romannumeral1}, and 6\% and 2\% of data set \uppercase\expandafter{\romannumeral2}, respectively. Besides, the $K$ value of each GBP is calculated using its lifetime as the timescale. Because the lifetime distribution of GBPs follows an exponential function, most $K$ values are calculated over very short lifetimes. According to equation (\ref{eqa3}), a shorter timescale gives rise to a smaller $K$ value. This directly leads to $K$ values smaller than those calculated for timescales $\tau\,\lesssim$ 300\,$\rm s$ by the Lagrangian approach.
Above all, each of the two approaches has its advantages and disadvantages. We prefer the DDI approach for estimating the diffusion efficiency of GBPs because it reflects the diffusion regimes of individual GBPs.
\subsection{Diffusion Index and Diffusion Coefficient}
The $\gamma$ and $K$ values decrease with increasing longitudinal magnetic field strength, and for data set \uppercase\expandafter{\romannumeral1} they decrease exponentially. This suggests that, in the same environment, strong magnetic elements diffuse less than weak ones. In addition, Figures~\ref{fig5} and \ref{fig7} indicate that the $\gamma$ and $K$ values of GBPs in strongly magnetized environments are smaller than those of GBPs with the same longitudinal field strengths in weak ones. The $\gamma$ values of data sets \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2} by the Lagrangian approach are 1.30$\pm$0.09 and 1.61$\pm$0.06 for timescales $\tau$\,$\lesssim$ 300\,$\rm s$, and by the DDI approach they are 1.53$\pm$0.01 and 1.79$\pm$0.01, respectively. The $K$ values using the Lagrangian approach are 56$\pm$14 and 165$\pm$41\,$\rm km^{2}$\,$\rm s^{-1}$ for $\tau$\,$\lesssim$ 300\,$\rm s$, and 78$\pm$29 and 130$\pm$54\,$\rm km^{2}$\,$\rm s^{-1}$, respectively, using the DDI approach.
Some authors analyzed the diffusion in particular regions. In network regions of the QS or in ARs, \citet{Berger98} obtained $\gamma$\,=\,1.34$\pm$0.06. \citet{Cadavid99} found $\gamma$\,=\,0.76$\pm$0.04 for timescales shorter than 22 minutes and 1.10$\pm$0.24 for timescales longer than 25 minutes. \citet{Lawrence01} indicated $\gamma$\,=\,1.13$\pm$0.01. \citet{Wang98} reported $K$\,=\,150\,$\rm km^{2}$\,$\rm s^{-1}$. \citet{Schrijver96} argued that a corrected diffusivity of 600\,$\rm km^{2}$\,$\rm s^{-1}$ is in good agreement with a well-performing model of magnetic flux transport during a solar cycle \citep{Wang94}. By tracking magnetic features in MDI magnetograms, \citet{Hagenaar99} found $K$\,=\,70\,--\,90\,$\rm km^{2}$\,$\rm s^{-1}$. The smallest $K$ value, 0.87\,$\rm km^{2}$\,$\rm s^{-1}$, was reported by \citet{Chae08} from Hinode/SOT NFI magnetograms. Based on solving the magnetic induction equation, they modeled the difference of the magnetic field in individual pixels between two frames to measure the magnetic diffusivity. The speculated reason for such a small $K$ value is that their method considered individual pixels between two frames separated by a 10\,$\rm minute$ interval. In QS regions, \citet{Cameron11} described $K$ lying in the range 100\,--\,340\,$\rm km^{2}$\,$\rm s^{-1}$. \citet{Chitta12} reported $\gamma$\,=\,1.59 and $K$\,=\,90\,$\rm km^{2}$\,$\rm s^{-1}$. \citet{Yang15} proposed $\gamma$\,=\,1.50 and $K$\,=\,191$\pm$20\,$\rm km^{2}$\,$\rm s^{-1}$. \citet{Jafarzadeh14} obtained $\gamma$\,=\,1.69$\pm$0.08 and $K$\,=\,257$\pm$32\,$\rm km^{2}$\,$\rm s^{-1}$ from Ca \textsc{ii} H data using the DDI approach.
Some authors also compared the diffusion in different regions. \citet{Schrijver90} found $K$ values of 110 and 250\,$\rm km^{2}$\,$\rm s^{-1}$ in the core of a plage region and the surrounding quiet regions, respectively. \citet{Berger98} determined 60$\pm$11\,$\rm km^{2}$\,$\rm s^{-1}$ for network GBPs by assuming normal diffusion ($K =\langle(\bigtriangleup l)\rangle^{2}/4\tau$). However, when they reconstructed the velocity field using correlation tracking techniques, they obtained $K$ values of 77, 176, and 285\,$\rm km^{2}$\,$\rm s^{-1}$ in magnetic, network, and quiet regions, respectively. \citet{Abramenko11} used TiO data obtained with the New Solar Telescope (NST) of the Big Bear Solar Observatory (BBSO). They indicated that $\gamma$ and $K$ are 1.48 and 12\,$\rm km^{2}$\,$\rm s^{-1}$ for an AR, 1.53 and 19\,$\rm km^{2}$\,$\rm s^{-1}$ for a QS region, and 1.67 and 22\,$\rm km^{2}$\,$\rm s^{-1}$ for a CH. \citet{Giannattasio14} defined a network area and an internetwork area in the QS using Hinode/SOT NFI magnetograms. They found $\gamma$\,=\,1.27$\pm$0.05 and 1.08$\pm$0.11 in the network area (at smaller and larger scales, respectively), and 1.44$\pm$0.08 in the internetwork area. They also found $K$ values ranging from 80 to 150\,$\rm km^{2}$\,$\rm s^{-1}$ in the network area, and from 100 to 400\,$\rm km^{2}$\,$\rm s^{-1}$ in the internetwork area. However, some authors held different opinions. \citet{Keys14} estimated $\gamma$ values of $\sim$1.2 and $K$ values of $\sim$120$\pm$45\,$\rm km^{2}$\,$\rm s^{-1}$ for three subfields of varying magnetic flux density, and proposed that the diffusion regimes in all three subfields were the same. Their $\gamma$ and $K$ values are consistent with those reported by other authors for network regions, and resemble the saturation value that we find at strong longitudinal magnetic field strengths.
Previous works suggested that magnetized elements diffuse more slowly in strongly magnetized regions than in less magnetized ones, although the range of reported $\gamma$ and $K$ values is large. Such a large range may be due to different instruments, data, and methods, and especially to the magnetic flux densities of the environments. Our results agree with their conclusions. The absence of strong magnetic fields in the medium allows the bright points to diffuse faster (perhaps because of fewer interactions), resulting in larger diffusion indices and diffusion coefficients. Importantly, we find that strong magnetic elements diffuse less than weak elements, both in strongly magnetized environments and in weak ones.
\subsection{Diffusion Regime}
The magnetic field is ubiquitous in the photosphere and interacts with convective flows on different scales. Strongly magnetized elements are not easily perturbed by convective (supergranular, mesogranular, granular) motions. They withstand the perturbing action of convective flows much better than weak magnetic elements, so their motions are slower and result in smaller diffusion indices. \citet{Giannattasio14} interpreted that magnetic elements are transported effectively by convection where a weak-field regime holds, especially in the internetwork; whereas in the network, where the field is strong, magnetic elements cannot be further transported and concentrated due to reduced convection. \citet{Abramenko11} indicated that GBPs in strong magnetic fields are crowded within narrow intergranular lanes, whereas in a weak magnetic environment GBPs can move freely owing to a lower population density.
Additionally, the difference has also been interpreted as the combined effect of granular, mesogranular, and supergranular flows. \citet{Spruit90} showed that mesogranular flows decrease when approaching mesogranular boundaries. \citet{Orozco12} found that magnetic elements in the internetwork start accelerating radially outward at the center of a supergranular cell and decelerate as they approach the boundaries of supergranules. \citet{Jafarzadeh14} argued that this is caused by a velocity that increases with radial distance in the supergranular flow profile, as required by mass conservation assuming a constant upflow over most of a supergranule. Supergranular flows systematically advect all GBPs toward the boundaries of supergranules, while granular and mesogranular flows impart additional velocity to the GBPs. GBPs in the interior of a supergranular cell tend to follow a super-diffusive regime because their motions are driven mainly by the supergranular flow and intergranular turbulence. Once these GBPs reach the boundaries of supergranules, they decelerate because they are trapped in the sinks formed by inflows from opposite directions (i.e., from neighbouring supergranules). Granular motions remain active, however, causing GBPs to undergo normal or even sub-diffusive processes.
Note that this study only involved isolated GBPs. Non-isolated GBPs, which are most likely located at stagnation points, were excluded because of the difficulty of measuring their displacements. Therefore, sub-diffusive GBPs might be underestimated.
\section{CONCLUSION}
We have presented a study of the dispersal of GBPs at different longitudinal magnetic field strengths. Two different environments are considered, namely, a strongly magnetized AR and a weakly magnetized quiet Sun region, characterized by different mean longitudinal magnetic field strengths of 132 and 64\,$\rm G$, respectively. The corresponding data sets were acquired with Hinode/SOT, comprising FG G-band filtergrams (BFI) and Stokes $I$ and $V$ images (NFI). After identifying and tracking GBPs in the G-band images, we extracted the corresponding longitudinal magnetic field strength of each GBP from the co-aligned and calibrated NFI magnetograms, and then categorized the GBPs into different groups by the strongest longitudinal magnetic field strength attained during their lifetimes. The Lagrangian approach and the distribution of indices (DDI) approach were adopted separately to explore the diffusion efficiency of GBPs in the different longitudinal magnetic field strength groups. The values of the diffusion index obtained by the Lagrangian approach are generally smaller than those obtained by the DDI approach, mainly because the mean-square displacement step in the Lagrangian approach averages over the displacements of individual GBPs and thus diminishes the measured displacement.
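The mean-square-displacement step of the Lagrangian approach can be sketched as follows. This is a minimal illustration on a single synthetic trajectory, not the paper's exact pipeline; the function name and the log-log fit are our own choices.

```python
import numpy as np

def diffusion_index(track, dt):
    # Estimate the diffusion index gamma by fitting msd(tau) ~ tau^gamma
    # in log-log space; `track` is an (n, 2) array of positions.
    track = np.asarray(track, dtype=float)
    lags = np.arange(1, len(track) // 2)
    msd = np.array([np.mean(np.sum((track[k:] - track[:-k])**2, axis=1))
                    for k in lags])
    gamma, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return gamma

# ballistic toy track: msd grows as tau^2, i.e. gamma = 2 (super-diffusive)
t = np.arange(200, dtype=float)[:, None]
ballistic = np.hstack([0.1 * t, 0.05 * t])
```

A gamma near 1 would indicate normal diffusion, below 1 sub-diffusion, above 1 super-diffusion.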
We find that the values of the diffusion index and the diffusion coefficient both decrease exponentially with increasing longitudinal magnetic field strength. The empirical formulas and parameters deduced from the exponential fitting are presented in Section 4. Stronger elements show lower diffusion indices and diffusion coefficients both in active regions and in QS regions. Additionally, the diffusion indices and coefficients are generally larger in regions of low mean magnetic flux (i.e., in the quiet Sun).
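The exponential fitting can be sketched on synthetic data as follows. The functional form $a\,e^{-B/b}+c$ and the numbers used here are illustrative assumptions, not the paper's fitted parameters from Section 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(B, a, b, c):
    # assumed form: diffusion index falls off exponentially with field strength B
    return a * np.exp(-B / b) + c

B = np.linspace(0.0, 600.0, 50)          # longitudinal field strength (G), synthetic
gamma = exp_decay(B, 0.6, 200.0, 0.9)    # noiseless synthetic "measurements"
popt, _ = curve_fit(exp_decay, B, gamma, p0=(0.5, 150.0, 1.0))
```

On noiseless data the fit recovers the generating parameters; on real measurements one would also propagate the uncertainties returned by `curve_fit`.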
The different diffusion regimes of GBPs are mainly set by convection. Strongly magnetized elements are not easily perturbed by convective motions, whereas weakly magnetized ones are. They have slower motions and smaller diffusion indices and diffusion coefficients because they withstand the perturbing action of convective flows much better than weak elements: their magnetic energy is not negligible compared with the kinetic energy of the gas, and therefore the flows cannot perturb them as easily.
\acknowledgments
The authors are grateful to the anonymous referee for constructive comments and detailed suggestions to this manuscript. The authors are grateful to the support received from the National Natural Science Foundation of China (No: 11303011, 11263004, 11463003, 11163004, U1231205), Open Research Program of the Key Laboratory of Solar Activity of the Chinese Academy of Sciences (No: KLSA201414, KLSA201309). This work is also supported by the Opening Project of Key Laboratory of Astronomical Optics \& Technology, Nanjing Institute of Astronomical Optics \& Technology, Chinese Academy of Sciences (No. CAS-KLAOT-KF201306) and the open fund of the Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, China. The authors are grateful to the $Hinode$ team for the possibility to use their data. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the $Hinode$ mission is conducted by the $Hinode$ science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA (U.S.A.), ESA, and NSC (Norway).
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\newcommand{\subsectiono}[1]{\subsection{#1}\setcounter{equation}{0}}
\def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}}
\def{\hbox{ 0\kern-1.5mm 0}}{{\hbox{ 0\kern-1.5mm 0}}}
\def{\wh a}{{\widehat a}}
\def{\wh b}{{\widehat b}}
\def{\wh c}{{\widehat c}}
\def{\wh d}{{\widehat d}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\begin{document}
\baselineskip 24pt
\begin{center}
{\Large \bf BPS State Counting in N=8 Supersymmetric String Theory for
Pure D-brane Configurations}
\end{center}
\vskip .6cm
\medskip
\vspace*{4.0ex}
\baselineskip=18pt
\centerline{\large \rm Abhishek Chowdhury, Richard S.~Garavuso,
Swapnamay Mondal, Ashoke Sen}
\vspace*{4.0ex}
\centerline{\large \it Harish-Chandra Research Institute}
\centerline{\large \it Chhatnag Road, Jhusi,
Allahabad 211019, India}
\vspace*{1.0ex}
\centerline{\small E-mail: abhishek,garavuso,swapno,[email protected]}
\vspace*{5.0ex}
\centerline{\bf Abstract} \bigskip
Exact results for the BPS index are known for a class of BPS dyons in type II string theory
compactified on a six dimensional torus. In this paper we set up the problem of counting the same
BPS states in a duality frame in which the states carry only Ramond-Ramond charges.
We explicitly count the number of states carrying the lowest possible charges and find agreement
with the result obtained in other duality frames. Furthermore,
we find that after factoring out the supermultiplet structure, each of these states
carry zero angular momentum. This is in agreement
with the prediction obtained from a representation of these states as supersymmetric black
holes.
\vfill \eject
\baselineskip=18pt
\tableofcontents
\sectiono{Introduction} \label{s1}
Understanding the microscopic origin of Bekenstein-Hawking entropy is one of the important
problems in any theory of quantum gravity, and in particular in string theory. In recent years
there has been considerable progress towards this direction, including precision counting of
microscopic states in certain string theories with 16 or more unbroken
supersymmetries\cite{9607026,0510147,0609109,0802.1556,0803.2692,9903163,0506151}.
One of these theories is type IIA or IIB compactified on a six dimensional
torus. In this theory, for certain configurations carrying a combination of Kaluza-Klein (KK) monopole
charge, momentum along one of the circles of the torus and D-brane wrapping charges along
some of the cycles of the torus, one can carry out the exact counting for the number of microscopic
BPS states\cite{0506151}. On the other hand, for large charges this system can be described by a
supersymmetric
black hole with a finite area event horizon. Thus, by comparing the logarithm of the number of
microstates with the Bekenstein-Hawking entropy of the corresponding black hole, one can
verify the equality of the macroscopic and microscopic entropy of the black hole.
Although the counting of microscopic states was carried out for a specific system of KK
monopoles and D-branes carrying momentum along a compact circle, using duality symmetry
we can map it to other systems. In particular it is possible to map this configuration to a system
that contains only D-brane charges. Duality symmetry predicts that the
BPS index of this system computed from
microscopic counting should give the same result as the original system to which
it is dual. Nevertheless, it is of some interest to count the number of microscopic states of the new
system directly. At the least, this will provide us with another non-trivial test of duality symmetry
which, although it has been tested in many ways, has not been proven. Another motivation for this
is that
by learning how to count states of pure D-brane systems in type
II string theory on $T^6$ we may eventually gain some insight into similar counting for D-branes
wrapped on various cycles of Calabi-Yau manifolds.
Indeed,
for type II compactification on Calabi-Yau manifolds, all charges are associated with
D-branes wrapped on various cycles of Calabi-Yau manifolds as there are no non-contractible
circles and hence
no momentum, KK monopole charges or winding numbers of fundamental strings and
NS 5-branes. Earlier attempts to count states of pure D-brane systems describing a black hole
can be found in \cite{0509168,0607010,0702146}.
Another reason for studying representations of black holes as pure D-brane systems is as follows.
One knows on general grounds that supersymmetric black holes in 3+1 dimensions describe
an ensemble of states each of which carries strictly zero angular
momentum\cite{0903.1477,1009.3226} after factoring out
the fermion zero mode contribution whose quantization
generates the supermultiplet structure.
This leads to many non-trivial conjectures about the sign of the index of supersymmetric black holes
which have been verified explicitly\cite{1008.4209,1208.3476}.
However, in
microscopic counting of the same system, one often finds BPS states carrying non-zero angular
momentum. This does not represent a contradiction between microscopic and macroscopic
results, since only the index, and not the detailed information about angular momentum, is
protected as we go from the weak coupling regime where microscopic calculation is valid, to the
strong coupling regime where the black hole computation is valid. Nevertheless, one could
ask if there is a duality frame in which the detailed information about the angular momentum in the microscopic and macroscopic descriptions matches.
Since in the macroscopic description all black
holes carry zero angular momentum, in the microscopic description this will demand that all
states are singlets under the SU(2) rotation group.
Recent analysis of some microstates of ${\cal N} = 2$
supersymmetric black holes revealed that when we describe them as D-branes wrapped on certain internal cycles of Calabi-Yau manifolds we indeed get exactly zero angular momentum for the microstates of single centered black holes\cite{1205.5023,1205.6511,1207.0821}.
Assuming this to be a general phenomenon led to the conjectured Coulomb branch formula
for computing the spectrum of quiver quantum mechanics and of general systems of
multicentered black holes\cite{1103.1887,1207.2230,1302.5498}.
Now in ${\cal N}=2$ supersymmetric string theories, the above
analysis is made complicated due to the fact that
the index receives a contribution from both single and
multi-centered black holes. Since the latter do not necessarily carry zero angular momentum,
we need to carefully subtract the contribution from
multi-centered black holes before we can verify that D-brane microstates representing single
centered black holes carry zero angular momentum.
This can be
done\cite{0807.4556,1103.1887}, and was used in the analysis of
\cite{1103.1887,1207.2230,1302.5498}.
However, in type II string theory on $T^6$, which has ${\cal N}=8$ supersymmetry,
the multi-centered black holes do not
contribute to the index, and hence we expect that only single centered black holes will
survive at a generic point in the moduli space of the theory\cite{0803.1014}.
Generalization of the observations in ${\cal N}=2$ supersymmetric string theories made
above would then suggest that
representing
a supersymmetric black hole in type II on $T^6$
as a system carrying only
RR charges associated with various D-brane sources, we may get a system whose microstates
would have strictly zero angular momentum after factoring out the goldstino fermion modes
whose quantization generates the supermultiplet structure. Now, after factoring out these
fermionic zero modes and the bosonic zero modes
associated with various translational symmetries, the BPS states of the D-brane system
correspond to the cohomology of the moduli space of classical
solutions of the world-line theory of the system,
with the space-time rotation group acting as the Lefschetz SU(2) action on the
cohomology\cite{9907100,1205.5023}.
This shows that
in order to get only zero angular momentum states, all states must come from the middle
cohomology. Since any compact manifold has a non-trivial 0-form and a top form, the only way
that a manifold can have only middle cohomology is if it becomes zero dimensional, i.e.\ a collection
of points.\footnote{We thank Boris Pioline for discussion on this point.}
Verification of this conjecture is another motivation for our analysis.
In this paper we shall analyze a pure D-brane system in type II theory
compactified on $T^6$ that is dual to the system for which
the microscopic result is known, and test the result by direct computation of the microscopic
index of the D-brane system. We introduce the system in \S\ref{ssystem}, and derive its
world-line theory for the lowest possible values of the charges
in \S\ref{s2}. In \S\ref{s3} we explicitly count the index of supersymmetric
states of this system.
This is shown to reduce to counting the number of independent solutions of a
set of polynomial equations -- a problem that can be easily solved.
We find that the solution contains a set of isolated points provided we work at a generic
point in the moduli space of the theory parametrized by constant background values of
the metric and 2-form fields along the internal torus.
Hence, at least in this example, the microstates carry strictly
zero angular momentum in agreement with the macroscopic results.
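The reduction to counting solutions of polynomial equations can be illustrated with a toy system, unrelated to the actual D-brane equations analyzed later; the example and variable names are ours.

```python
import sympy as sp

x, y = sp.symbols('x y')
# toy system of two polynomial equations whose common roots are isolated points
eqs = [x**2 + y**2 - 5, x*y - 2]
sols = sp.solve(eqs, [x, y])
# the index-style count is just the number of isolated common roots
```

For generic coefficients a system like this has a finite set of isolated solutions, mirroring the situation for the D-brane moduli space at a generic point in the moduli space of the theory.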
In \S\ref{sb} we briefly discuss possible generalization of our analysis to cases where
we
replace each D-brane of the system described in \S\ref{ssystem} by a stack of
parallel D-branes. We conclude with a discussion of our results in \S\ref{sconc}.
In appendix \ref{sa}
we derive the
relation between some of the parameters of the D-brane world-volume theory and the background
values of the metric and 2-form field along $T^6$. In appendix \ref{sc} we describe the chain
of dualities that relate the system under consideration to the system analyzed in
\cite{0506151}. Finally, in appendix \ref{sd} we give explicit solutions to the polynomial equations
which appear in the analysis of \S\ref{s3}.
\sectiono{The system} \label{ssystem}
Let us consider for definiteness a type IIA string theory on $T^6$ labelled by the coordinates
$x^4,\ldots, x^9$ and in this theory we take a system
containing $N_1$ D2-branes wrapped along
4-5 directions, $N_2$ D2-branes wrapped along 6-7 directions, $N_3$ D2-branes wrapped
along 8-9 directions,
$N_4$ D6-branes wrapped along 4-5-6-7-8-9 directions and $N_5$ D4-branes along
6-7-8-9 directions.
By a series of duality transformations
described in appendix \ref{sc},
this configuration is related to a system
of $N_1$ KK monopoles associated with the 4-direction, $-N_2$ units of momentum along the
5-direction, $N_3$ D1-branes along the 5-direction, $N_4$ D5-branes along 5-6-7-8-9 directions
and $-N_5$ units of momentum along the 4-direction.
The microscopic index of this system was computed explicitly in \cite{0506151}
for $N_1=1$. By a further series of
U-duality transformations reviewed {\it e.g.} in \cite{0708.1270}, this system can be mapped to a
system in type IIA string theory on $T^6$ with only NS-NS sector charges,
containing $-N_2$ units of momentum along the 5-direction, $N_1$ fundamental strings
wound along the 5-direction, $N_4$ KK monopoles associated with the 4-direction, $-N_3$
NS 5-branes wrapped along 5-6-7-8-9 directions and $N_5$ NS 5-branes along
4-6-7-8-9 directions. In the notation of
\cite{0708.1270}, the electric charge vector $Q$ and magnetic charge vector $P$
for this state in the latter description are represented as
\begin{equation}
Q=\begin{pmatrix}0\cr -N_2 \cr 0\cr -N_1\end{pmatrix}, \quad P = \begin{pmatrix}
N_3\cr N_5\cr N_4\cr 0\end{pmatrix}\, .
\end{equation}
The T-duality invariant
inner product matrix between charges
is given by $\begin{pmatrix}0 & I_2\cr I_2 & 0\end{pmatrix}$.
With this we get
\begin{equation}
Q^2 =2\, N_1 N_2, \quad P^2 = 2\, N_3 N_4, \quad Q\cdot P=-N_1 N_5\, .
\end{equation}
We also define
\begin{eqnarray}\displaystyle
\ell_1 &=& {\rm gcd}\{Q_i P_j - Q_j P_i\} =
{\rm gcd}\{ N_1 N_3, N_1 N_4, N_2 N_3, N_2 N_4, N_5 N_1\}, \nonumber \\
\ell_2 &=& {\rm gcd}\{Q^2/2, P^2/2, Q\cdot P\} =
{\rm gcd}\{ N_1 N_2, N_3 N_4, N_1 N_5\}\, .
\end{eqnarray}
We shall consider configurations for which
\begin{equation}
{\rm gcd}\{\ell_1, \ell_2\}=1, \quad i.e.\ \quad
{\rm gcd}\{ N_1 N_3, N_1 N_4, N_2 N_3, N_2 N_4, N_1 N_2, N_3 N_4, N_1 N_5\}=1 \, .
\end{equation}
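The coprimality condition is straightforward to check for any given charges; a minimal sketch (helper name ours), evaluated on the configuration $N_1=N_2=N_3=N_4=1$, $N_5=0$ studied later:

```python
from math import gcd
from functools import reduce

def overall_gcd(vals):
    # gcd of a list of integers (gcd with 0 leaves the other argument unchanged)
    return reduce(gcd, vals)

N1 = N2 = N3 = N4 = 1
N5 = 0
ell1 = overall_gcd([N1*N3, N1*N4, N2*N3, N2*N4, N5*N1])  # gcd{Q_i P_j - Q_j P_i}
ell2 = overall_gcd([N1*N2, N3*N4, N1*N5])                # gcd{Q^2/2, P^2/2, Q.P}
```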
In this case, following \cite{0702150,0712.0043} one can show that there is a further
series of duality transformations that map this system to one with
$N_1=1$\cite{0804.0651} for which the
microscopic index is known from the analysis of
\cite{0506151}. Expressed in terms of the more general set of variables $(N_1,\cdots,N_5)$,
the result for the BPS index for this system, which in this case corresponds to the
14-th helicity supertrace $B_{14}$, takes the form\cite{0908.0039}
\begin{equation}
B_{14} = (-1)^{Q\cdot P+1} \sum_{s|\ell_1 \ell_2} s\, \widehat c(\Delta/s^2) \, , \quad \Delta \equiv
Q^2 P^2 - (Q\cdot P)^2 = 4\, N_1 N_2 N_3 N_4 - (N_1 N_5)^2
\, ,
\end{equation}
where $\widehat c(u)$ is defined through the
relation\cite{9903163,0506151}
\begin{equation} \label{ek6.5}
-\vartheta_1(z|\tau)^2 \, \eta(\tau)^{-6} \equiv \sum_{k,l} \widehat c(4k-l^2)\,
e^{2\pi i (k\tau+l z)}\, .
\end{equation}
$\vartheta_1(z|\tau)$ and $\eta(\tau)$ are respectively the odd Jacobi
theta function and the Dedekind eta function.
In this paper we shall analyze the simplest of these configurations with
\begin{equation} \label{echarge}
N_1=N_2=N_3=N_4=1, \quad N_5=0\, .
\end{equation}
For this, \refb{ek6.5} predicts
\begin{equation} \label{eb14}
B_{14} = 12\, .
\end{equation}
We shall verify this by direct counting of microstates of the D-brane
system.
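As a numerical cross-check, $\widehat c(u)$ can be extracted from the product representation $-\vartheta_1(z|\tau)^2\,\eta(\tau)^{-6} = (y-2+y^{-1})\prod_{n\ge 1}(1-q^n y)^2(1-q^n y^{-1})^2(1-q^n)^{-4}$, with $q=e^{2\pi i\tau}$, $y=e^{2\pi i z}$. The sketch below (series helpers ours) truncates the $q$-expansion and reads off $\widehat c(4k-l^2)$:

```python
from collections import defaultdict

def mul(a, b, qmax):
    # multiply series stored as {(k, l): coeff} for coeff * q^k * y^l,
    # discarding powers of q above qmax
    out = defaultdict(int)
    for (k1, l1), c1 in a.items():
        for (k2, l2), c2 in b.items():
            if k1 + k2 <= qmax:
                out[(k1 + k2, l1 + l2)] += c1 * c2
    return dict(out)

def hat_c(qmax=2):
    # -theta_1(z|tau)^2 eta(tau)^{-6}
    #   = (y - 2 + 1/y) prod_{n>=1} (1-q^n y)^2 (1-q^n/y)^2 (1-q^n)^{-4}
    f = {(0, 1): 1, (0, 0): -2, (0, -1): 1}
    for n in range(1, qmax + 1):
        for sign in (1, -1):
            factor = {(0, 0): 1, (n, sign): -1}
            f = mul(mul(f, factor, qmax), factor, qmax)
        # (1 - q^n)^{-4} = sum_j binom(j+3, 3) q^{n j}
        inv = {(n * j, 0): (j + 1) * (j + 2) * (j + 3) // 6
               for j in range(qmax // n + 1)}
        f = mul(f, inv, qmax)
    # Jacobi-form property: the coefficient of q^k y^l depends only on 4k - l^2
    return {4 * k - l * l: c for (k, l), c in f.items()}

c = hat_c()
B14 = -c[4]   # (-1)^{Q.P + 1} hat_c(Delta) with Q.P = 0, Delta = 4
```

The expansion gives $\widehat c(-1)=1$, $\widehat c(0)=-2$, $\widehat c(3)=8$, $\widehat c(4)=-12$, so $B_{14}=-\widehat c(4)=12$, in agreement with \refb{eb14}.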
\sectiono{The low energy dynamics of the D-brane system} \label{s2}
The combined system of four D-branes that we have introduced in \S\ref{ssystem}
with the choice of $N_i$'s given in \refb{echarge}
preserves 4 out of the 32 supersymmetries. This
is equivalent to ${\cal N}=1$ supersymmetry in 3+1 dimensions.
Since we are dealing with a quantum mechanical
system, we can effectively regard this as an ${\cal N}=1$ supersymmetric theory
in 3+1 dimensions, dimensionally reduced to 0+1 dimensions. Thus we can
use the ${\cal N}=1$ superfield
formalism, but ignore all spatial derivatives and integration over spatial directions
while writing the action. We shall follow the normalization conventions
of \cite{9701069} in constructing this
action.
Since the four D-branes we have are
related to each other by T-duality, each of them individually has the same low energy theory
given by the dimensional reduction of ${\cal N}=4$ supersymmetric U(1) gauge theory
from 3+1 to 0+1 dimensions.
We begin with one of the four different D-branes. In the language
of ${\cal N}=1$ supersymmetry in 3+1 dimensions, each D-brane has one U(1)
vector superfield $V$ and
three chiral superfields $\Phi_1,\Phi_2, \Phi_3$.
A vector multiplet, after dimensional reduction to 0+1 dimensions,
has three scalars corresponding to three spatial components of the gauge field
and a gauge field $A_0$. We can use the gauge $A_0=0$ and interpret the three scalars as
the coordinates giving the location of the D-brane along the three non-compact directions.
We shall denote these three scalars by $X_1, X_2, X_3$.
The three chiral multiplets $\{\Phi_i\}$ contain three complex scalars
$\{\Phi_i\}$.\footnote{Following usual notation, we shall use the same symbol to denote a
superfield and its scalar component.}
These complex scalars give the coordinates or Wilson lines along $x^4+i x^5$, $x^6+i x^7$ and
$x^8+i x^9$ directions respectively.
For example, for the D6-brane all three complex scalars
correspond to Wilson lines, while for the D2-brane wrapped along the
4-5 directions, $\Phi_1$ corresponds
to a Wilson line along $x^4+ix^5$ but $\Phi_2$ and $\Phi_3$ correspond to positions of the brane
along $x^6+ix^7$ and $x^8+i x^9$ respectively. Finally, we shall use a superscript $(k)$ to
label the four different D-branes, with $k=1,2,3$ corresponding to D2-branes wrapped along the
4-5, 6-7 and 8-9 directions and $k=4$ corresponding to the D6-brane along 4-5-6-7-8-9
directions.
Besides these fields, for every pair of D-branes labelled by
$(k,\ell)$ we have two
chiral superfields $Z^{(k\ell)}$ and
$Z^{(\ell k)}$ arising from open strings stretched between the two D-branes. They
carry respectively 1 and $-1$ units of charge
under the vector superfield $V^{(k)}$ and $-1$ and 1 units of
charge under the vector superfield $V^{(\ell)}$.
We shall now write down the action involving these fields. To begin with we shall assume that the
six circles of $T^6$ are orthonormal to each other, with each circle having radius
$\sqrt{\alpha'}$ and that there is no background 2-form field along $T^6$.
From now on, we shall set $\alpha'=1$.
In this case the action
takes the form
\begin{equation}
S_{kinetic} + \int dx^0 \left[ \int d^4\theta \sum_{k=1}^4 \sum_{\ell =1 \atop \ell\ne k}^4
\left\{ \bar Z^{(k\ell)} e^{2V^{(\ell)} - 2V^{(k)}} Z^{(k\ell)} \right\}
+ \int d^2 \theta \, {\cal W} + \int d^2\bar \theta \, \overline{{\cal W}} \right]\, ,
\end{equation}
where $S_{kinetic}$ denotes the kinetic terms for the vector
superfields $V^{(k)}$ and the gauge neutral chiral superfields $\Phi^{(k)}_i$. These
have the standard form and will not be written down explicitly. The superpotential
${\cal W}$ has two different components. The first component describes the coupling of the
superfields $\Phi^{(k)}$ to $Z^{(k\ell)}$ and takes the form
\begin{equation} \label{ew1}
{\cal W}_1 = \sqrt 2 \left[\sum_{k,\ell,m=1}^3 \varepsilon^{k\ell m} \, \Phi^{(k)}_m
\, Z^{(k\ell)} Z^{(\ell k)} + \sum_{k=1}^3 \Big(\Phi^{(k)}_k - \Phi^{(4)}_k\Big) Z^{(4k)} Z^{(k 4)} \right]
\, ,
\end{equation}
where $\varepsilon^{k\ell m}$ is the totally antisymmetric symbol with $\varepsilon^{123}=1$. The second component
describes the cubic self-coupling between the $Z^{(k\ell)}$'s and takes the form
\begin{equation} \label{ew2}
{\cal W}_2 = \sqrt 2 \, C\, \sum_{k,\ell, m=1\atop k<\ell,m; \, \ell\ne m }^4 Z^{(k\ell)} Z^{(\ell m)} Z^{(m k)}\, ,
\end{equation}
where $C$ is a constant whose value can be computed in principle by analyzing the coupling
between open strings stretched between different branes, but we shall not need it for
our analysis.
The sum over $k,\ell, m$ runs over all distinct values of $k$, $\ell$ and $m$
which are not related
by cyclic permutations of $(k,\ell,m)$.
There could also be gauge invariant quartic
and higher order terms involving the $Z^{(k\ell)}$'s, but as we shall see, these can be ignored
in our analysis.
So far we have assumed that the background metric along $T^6$ is diagonal and that there are no
background 2-form fields. We shall now study the effect of switching on small background
values of the
off-diagonal components of the metric and 2-form fields. As reviewed in appendix
\ref{sa}, this has two
effects. First, it introduces a Fayet-Iliopoulos (FI) term with coefficient
$c^{(k)}$ for each of the four vector superfields, satisfying
\begin{equation} \label{eck0}
\sum_{k=1}^4 c^{(k)} = 0\, .
\end{equation}
Second, it generates a linear term in the superpotential of the form
\begin{equation} \label{ew3}
{\cal W}_3 = \sqrt 2\left[
\sum_{k,\ell,m=1}^3 c^{(k\ell)} \, \varepsilon^{k\ell m} \Phi^{(k)}_m
+ \sum_{k=1}^3 c^{(k4)} \, \Big(\Phi^{(k)}_k - \Phi^{(4)}_k\Big) \right], \quad
c^{(\ell k)} = c^{(k\ell)} \quad \hbox{for} \quad 1\le k<\ell\le 4\, .
\end{equation}
Explicit expressions for $c^{(k)}$ and $c^{(k\ell)}$ for $1\le k <\ell \le 4$ in terms of the off-diagonal
components of the metric and 2-form fields have also been given in appendix \ref{sa}.
Let us now write down the potential involving the scalar fields derived from the above action.
This consists of three pieces. The first comes from the usual quartic coupling between the gauge
field components $X^{(k)}_i$ and the charged scalars $Z^{(k\ell)}$ and takes the form
\begin{equation} \label{egauge}
V_{gauge} = \sum_{i=1}^3 \sum_{k=1}^4 \sum_{\ell =1\atop \ell \ne k}^4 \,
(X^{(k)}_i - X^{(\ell)}_i) (X^{(k)}_i - X^{(\ell)}_i)
\Big( \bar Z^{(k\ell)} Z^{(k\ell)} + \bar Z^{(\ell k)} Z^{(\ell k)} \Big)\, ,
\end{equation}
where `bar' denotes complex conjugation.
The second component of the potential is the D-term contribution. This takes the form
\begin{equation} \label{evd}
V_D = {1\over 2} \,
\sum_{k=1}^4 \Big\{ \sum_{\ell=1\atop \ell \ne k}^4 \Big(\bar Z^{(k\ell)} Z^{(k\ell)}
- \bar Z^{(\ell k)} Z^{(\ell k)} \Big) - c^{(k)}\Big\}^2\, .
\end{equation}
The third component is the F-term potential which takes the form
\begin{equation} \label{evf}
V_F=\sum_{k=1}^4 \sum_{i=1}^3 \left| {\partial W\over \partial \Phi^{(k)}_i} \right|^2
+ \sum_{k=1}^4 \sum_{\ell=1\atop \ell \ne k}^4 \left| {\partial W\over \partial Z^{(k\ell)} }\right|^2\, .
\end{equation}
For finding a supersymmetric configuration we have to look for configurations with vanishing
potential. Since the potential is a sum of squares, this requires setting each of these
terms to zero. In \S\ref{s3} we shall look for solutions to these conditions.
Note that the potential has the following flat directions
\begin{eqnarray}\displaystyle \label{eflat}
&& \Phi^{(k)}_m \to \Phi^{(k)}_m+\xi_m, \quad \hbox{for} \quad 1\le k\le 3, \quad k \ne m\, ;
\quad 1\le m\le 3,
\nonumber \\
&& \Phi^{(k)}_k \to \Phi^{(k)}_k + \zeta_k, \quad \Phi^{(4)}_k \to
\Phi^{(4)}_k+\zeta_k, \quad \hbox{for} \quad 1\le k\le 3\, , \nonumber \\
&& X^{(k)}_i \to X^{(k)}_i + a_i \, ,\quad \hbox{for} \quad 1\le k\le 4, \quad 1\le i\le 3\, ,
\end{eqnarray}
where $\xi_m$ and $\zeta_k$ are arbitrary complex numbers and $a_i$ are arbitrary
real numbers. The $a_i$'s represent overall translation of the system along the non-compact
directions. The symmetries generated by $\xi_m$ and $\zeta_k$ imply
that the potential has six complex flat directions.\footnote{These directions are
all compact since they are associated with translations along $T^6$ and the dual
torus $\widetilde T^6$.
Thus the quantization of the zero modes associated with
these bosonic flat directions does not cause any problem and
gives a unique zero energy ground state.}
This corresponds to six exactly massless chiral multiplets. Since each chiral multiplet
contains a Weyl fermion in 3+1 dimensions which has four real components, we have
altogether $6\times 4=24$ real fermion zero modes after dimensional reduction to 0+1
dimensions. The vector superfield
$\sum_{k=1}^4 V^{(k)}$ also decouples from the action, reflecting the symmetry parametrized
by the $a_i$'s. The Majorana fermion belonging
to this multiplet gives 4 more fermion zero modes. Thus altogether we have
$24+4=28$ fermion zero modes.
These are the Goldstino modes associated with supersymmetry breaking; since
a 1/8 BPS black hole in ${\cal N}=8$ supersymmetric string theory preserves 4 out of
32 supersymmetries, we expect $32-4=28$ broken supersymmetries.
Quantization of these 28
fermion zero modes
gives the $2^{14}$ fold degenerate supermultiplet which is the right degeneracy for a 1/8 BPS
state in a theory with 32 supersymmetries.
\sectiono{Supersymmetric solutions} \label{s3}
We shall now look for configurations preserving supersymmetry, i.e.\ configurations which
make the potential vanish. As noted below \refb{evf}, this requires setting each term in
$V_{gauge}$, $V_D$ and $V_F$ to zero. Furthermore, due to the $U(1)^4$ gauge symmetry of
the original theory, we need to classify solutions up to equivalence relations under these
$U(1)$ gauge symmetries:
\begin{equation}
Z^{(k\ell)} \to \exp\left[i\left(\theta^{(k)}-\theta^{(\ell)}\right)\right] Z^{(k\ell)}\, ,
\end{equation}
where $\theta^{(k)}$ for $1\le k\le 4$
are the gauge transformation parameters. Note that the overall $U(1)$ --
obtained by setting all the $\theta^{(k)}$'s equal -- acts trivially on the $Z^{(k\ell)}$'s.
Furthermore, since we have fixed $A^{(k)}_0=0$ gauge, we need to demand equivalence
only under the subgroup of the gauge group that preserves this gauge condition.
This leaves us with the global part of the gauge group,
labelled by time independent $\theta^{(k)}$'s.
We begin by examining the equations $\partial {\cal W}/ \partial \Phi^{(k)}_i=0$ for $1\le k\le 4$ and
$1\le i\le 3$. Using \refb{ew1}, \refb{ew3} we see that this gives
\begin{equation} \label{eqzkl}
Z^{(k\ell)} Z^{(\ell k)} = - c^{(k\ell)} \quad \hbox{for} \quad 1\le k< \ell\le 4 \, .
\end{equation}
It follows from this that as long as the $c^{(k\ell)}$ are non-zero for every $k,\ell$
in the range $1\le k<\ell\le 4$,
none of the $Z^{(k\ell)}$'s can vanish. Eq.\refb{egauge} now gives
\begin{equation}
X^{(k)}_i=0 \quad \hbox{for} \quad 1\le k\le 4, \quad 1\le i\le 3\, ,
\end{equation}
up to the translation symmetry parametrized by the constants $a_i$ in eq.\refb{eflat}.
Next we consider the $\partial{\cal W} /\partial Z^{(k\ell)}=0$ equations. This gives
\begin{eqnarray}\displaystyle \label{ezeq}
&& \sum_{m=1}^3 \varepsilon^{k\ell m} \Big(\Phi^{(k)}_m - \Phi^{(\ell)}_m\Big)
\, Z^{(\ell k)} + C\, \sum_{m=1\atop m\ne k,\ell}^4 Z^{(\ell m)} Z^{(mk)} = 0 \quad \hbox{for} \quad
1\le k, \ell \le 3\, , \nonumber \\
&& \Big(\Phi^{(k)}_k - \Phi^{(4)}_k\Big) Z^{(k 4)}
+ C\, \sum_{\ell=1\atop \ell \ne k}^3 Z^{(k\ell)} Z^{(\ell 4)} = 0\quad \hbox{for} \quad
1\le k \le 3\, ,\nonumber \\
&& \Big(\Phi^{(k)}_k - \Phi^{(4)}_k\Big) Z^{(4 k)}
+ C\, \sum_{m=1\atop m \ne k}^3 Z^{(4m)} Z^{(mk)} = 0 \quad \hbox{for} \quad
1\le k \le 3\, .
\end{eqnarray}
These equations serve two purposes. First they determine the combinations
\begin{equation}
\Phi^{(k)}_m - \Phi^{(\ell)}_m \quad \hbox{for} \quad 1\le k,\ell, m\le 3, \quad k,\ell,m\, \,
\, \hbox{distinct},
\qquad \hbox{and} \qquad \Phi^{(k)}_k - \Phi^{(4)}_k \quad \hbox{for} \quad 1\le k\le 3\, ,
\end{equation}
in terms of the $Z^{(k\ell)}$'s.
This gives 6 linear combinations of the 12 complex scalars $\Phi^{(k)}_i$. The rest
of the $\Phi^{(k)}_i$'s are associated with flat directions and hence remain undetermined.
Second they give the following relations among the $Z^{(k\ell)}$'s:
\begin{eqnarray}\displaystyle \label{ezref}
&& Z^{(k\ell )} \sum_{m=1\atop m\ne k,\ell}^4 Z^{(\ell m)} Z^{(mk)}
= Z^{(\ell k)} \sum_{m=1\atop m\ne k,\ell}^4 Z^{(k m)} Z^{(m\ell)} \quad \hbox{for}
\quad 1\le k,\ell\le 3\, , \nonumber \\
&& Z^{(4 k)} \, \sum_{\ell=1\atop \ell \ne k}^3 Z^{(k\ell)} Z^{(\ell 4)}
= Z^{(k 4)} \sum_{m=1\atop m \ne k}^3 Z^{(4m)} Z^{(mk)}
\quad \hbox{for}
\quad 1\le k \le 3\, .
\end{eqnarray}
Finally let us turn to the D-term constraints. It is well known that the effect of the D-term
constraints together with quotienting by the $U(1)$ gauge groups is to convert the space
spanned by the coordinates $Z^{(k\ell)}$ to a toric variety. This is parametrized by the
coordinates $Z^{(k\ell)}$ modded out by the complexified $U(1)$ gauge groups {\it after
removing appropriate submanifolds of complex codimension $\ge 1$ from the space spanned
by the $Z^{(k\ell)}$'s}. These submanifolds are obtained by setting one or more $Z^{(k\ell)}$'s
to zero, and depend on the FI parameters $c^{(k)}$. However, since we have seen that the
F-term constraints force all the $Z^{(k\ell)}$'s to be non-zero, removal of these complex
submanifolds has no effect on the final solutions.\footnote{Put another way,
for a generic toric variety, if some equation is given in terms of
homogeneous coordinates, it may have solutions in more than one patch.
Thus, when we translate the equations in terms of coordinates of any single
patch (which does not cover the whole variety) and look for the solutions,
we always have the risk of not having all the solutions.
Fortunately this is not the case here. If we look closely at
the regions that are not covered by an arbitrary single patch, we
see that these are the regions where some of the coordinates vanish.
But our $Z^{(k\ell)}$'s cannot vanish
due to the constraint $Z^{(k\ell)}Z^{(\ell k)} = m_{k\ell}$. Thus, although such
regions exist in the toric variety, they are not part of the solution of
our equations. Hence it is enough to work in a single patch only, which is what we
do.
}
Thus, we can proceed by
parametrizing the variety by an appropriate set of gauge invariant polynomials and forget
about the D-term constraints. Since to start with there are $4\times 3=12$
independent $Z^{(k\ell)}$'s,
and we quotient by a $U(1)^3$ gauge group -- the overall $U(1)$ having trivial action on all
the $Z^{(k\ell)}$'s -- we need $12-3=9$ independent gauge invariant coordinates. We take them
to be
\begin{equation}
\label{polynomial-ring-basis-Abelian-vacuum}
\begin{aligned}
u_1 &\equiv Z^{(12)} Z^{(21)} \, ,
&u_2&\equiv Z^{(23)} Z^{(32)} \, ,
&u_3 &\equiv Z^{(31)} Z^{(13)} \, , \\[1ex]
u_4 &\equiv Z^{(14)} Z^{(41)} \, ,
&u_5 &\equiv Z^{(24)} Z^{(42)} \, ,
&u_6 &\equiv Z^{(34)} Z^{(43)} \, , \\[1ex]
u_7 &\equiv Z^{(12)} Z^{(24)} Z^{(41)} \, ,
&u_8 &\equiv Z^{(13)} Z^{(34)} Z^{(41)} \, ,
&u_9 &\equiv Z^{(23)} Z^{(34)} Z^{(42)} \, .
\end{aligned}
\end{equation}
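As a quick consistency check (our own, not part of the original derivation), one can verify by elementary charge bookkeeping that the monomials $u_1,\ldots,u_9$ are neutral under the $U(1)^4$ gauge group, and that the charge matrix of the twelve $Z^{(k\ell)}$'s has rank $3$, so that $12-3=9$ is indeed the expected number of independent invariant coordinates. The sketch assumes the standard bifundamental convention that $Z^{(k\ell)}$ carries charge $+1$ under $U(1)_k$ and $-1$ under $U(1)_\ell$.

```python
from fractions import Fraction

def charge(k, l):
    """Charge vector of Z^{(kl)} under U(1)^4 (stacks are 1-indexed);
    assumes the standard bifundamental convention."""
    q = [0, 0, 0, 0]
    q[k - 1] += 1
    q[l - 1] -= 1
    return q

def total_charge(monomial):
    """Sum the charge vectors of a product of Z^{(kl)} factors."""
    q = [0, 0, 0, 0]
    for (k, l) in monomial:
        for i, c in enumerate(charge(k, l)):
            q[i] += c
    return q

# The nine invariants u_1, ..., u_9 defined above:
invariants = {
    "u1": [(1, 2), (2, 1)], "u2": [(2, 3), (3, 2)], "u3": [(3, 1), (1, 3)],
    "u4": [(1, 4), (4, 1)], "u5": [(2, 4), (4, 2)], "u6": [(3, 4), (4, 3)],
    "u7": [(1, 2), (2, 4), (4, 1)],
    "u8": [(1, 3), (3, 4), (4, 1)],
    "u9": [(2, 3), (3, 4), (4, 2)],
}
all_neutral = all(total_charge(m) == [0, 0, 0, 0] for m in invariants.values())

def matrix_rank(rows):
    """Rank over the rationals via Gauss-Jordan elimination."""
    rows = [[Fraction(x) for x in r] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        piv = rows[rank][col]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                f = rows[i][col] / piv
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# 12 bifundamentals Z^{(kl)}, k != l; the overall U(1) acts trivially,
# so the charge matrix should have rank 3, leaving 12 - 3 = 9 invariants.
charge_matrix = [charge(k, l) for k in range(1, 5) for l in range(1, 5) if k != l]
rank = matrix_rank(charge_matrix)
```
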
We now note that \refb{eqzkl} fixes $u_1,\ldots, u_6$ completely. Thus, the only remaining
variables are $u_7,u_8,u_9$ and the equations to be solved are given in
\refb{ezref}. These actually give three independent equations
\begin{eqnarray}\displaystyle \label{ezfin}
Z^{(23)} Z^{(31)} Z^{(12)} + Z^{(23)} Z^{(34)} Z^{(42)} &=& Z^{(32)} Z^{(21)} Z^{(13)}
+ Z^{(32)} Z^{(24)} Z^{(43)}
\, , \nonumber \\
Z^{(24)} Z^{(41)} Z^{(12)} + Z^{(24)} Z^{(43)} Z^{(32)} &=& Z^{(42)} Z^{(21)} Z^{(14)}
+ Z^{(42)} Z^{(23)} Z^{(34)}
\, , \nonumber \\
Z^{(34)} Z^{(41)} Z^{(13)} + Z^{(34)} Z^{(42)} Z^{(23)} &=& Z^{(43)} Z^{(31)} Z^{(14)}
+ Z^{(43)} Z^{(32)} Z^{(24)}
\, .
\end{eqnarray}
Defining
\begin{equation}
m_{k\ell} = m_{\ell k} = - c^{(k\ell)} \quad \hbox{for} \quad 1\le k<\ell \le 4\,
\end{equation}
and using the solutions for $u_1,\ldots, u_6$ given in \refb{eqzkl},
eqs.~\refb{ezfin} can be expressed as
\begin{eqnarray}\displaystyle
\label{u_7-u_8-u_9-system}
u_7 \, u_8^{-1} &=& m_{24} \left( \frac{ m_{24} \, m_{23} \, m_{12} \, u_9^{-1} - u_7 \, u_8^{-1} \,
u_9 } { m_{31} \, u_7 \, u_8^{-1} \, u_9 - m_{23} \, m_{24}^{2} \, m_{34} \, u_9^{-1}} \right),
\nonumber \\
u_7 \,u_9 &=& m_{24} \left( \frac{ m_{12} \, m_{14} \, u_9 - m_{34} \, m_{23} \, u_7 } { u_7 - u_9 }
\right), \nonumber \\
u_8 \,u_9^{-1} &=& \left( \frac{ m_{34} \,m_{31} \,m_{14}- u_8 \, u_9 }{ u_8 \, u_9- m_{34} \, m_{23} \,m_{24}
} \right),
\end{eqnarray}
respectively.
The solutions to the system
\eqref{u_7-u_8-u_9-system}
are given in Table \ref{u_7-u_8-u_9-solutions} of appendix \ref{sd}.
The important point to note from
Table
\ref{u_7-u_8-u_9-solutions} is that there are 12 distinct solutions. This shows that there
are 12 supersymmetric ground states, in perfect agreement with the prediction
\refb{eb14} from the dual description. Furthermore since the moduli space of solutions is
zero dimensional, all the solutions carry zero angular momentum after factoring out the
contribution of the goldstino fermion zero modes. This is in agreement with the prediction
from the black hole side.
It is clear from the form of the potential as well as the solutions given in
Table~\ref{u_7-u_8-u_9-solutions} that under a uniform scaling of all the $c^{(k)}$'s and
$c^{(k\ell)}$'s by a real parameter
$\lambda$, the $Z^{(k\ell)}$'s and $\Phi^{(k)}_m$'s
at the solution (except the ones associated with flat directions)
scale as $\lambda^{1/2}$. Thus by taking
$\lambda$ to be small we can ensure that each $Z^{(k\ell)}$ and $\Phi^{(k)}_m$
at the solution is small.
In this case the contributions from the
quartic and higher order terms in the superpotential are small compared to the cubic terms
that we have included. This justifies our ignoring such terms for studying these solutions.
This also justifies our ignoring the fact that $\Phi^{(k)}_i - \Phi^{(\ell)}_i$ for
$1\le k,\ell\le 3$ and $\Phi^{(k)}_k - \Phi^{(4)}_k$ for $1\le k\le 3$ are periodic variables
while solving the eqs.~\refb{ezeq}.
Note however that we have not ruled out the existence of solutions where
$\Phi^{(k)}_i - \Phi^{(k)}_j$ and the $Z^{(k\ell)}$'s are of order unity measured in the string
scale. In such cases we must
take into account possible higher order terms in the superpotential, and must also
include the effect of the $\Phi^{(k)}_i$'s being periodic variables, so that our
analysis must also include open string states which wind around the various circles on their way
from one D-brane to another.
In other words, full stringy dynamics is needed
for examining the existence of these states. Our experience with BPS state counting
tells us however that the BPS states arise only from low energy fluctuations on the branes
and hence it seems unlikely that there will be new BPS states from the stringy configurations
of the type described above.
\sectiono{Non-abelian generalization} \label{sb}
\def{~ \rm Tr}{{~ \rm Tr}}
In this section we shall generalize the analysis of \S\ref{s2} to the case
where some of the stacks have more than one brane, i.e. the $N_i$'s introduced in
\S\ref{ssystem} are not all equal to 1. We shall focus on the scalar fields and their
potential since this is what is needed for the counting of supersymmetric solutions.
We begin with a discussion of how the scalar degrees of freedom change in this case.
First of all, the complex scalars $\Phi^{(k)}_i$ and the real scalars $X^{(k)}_i$
become $N_k\times N_k$ hermitian matrices transforming
in the adjoint representation of $U(N_k)$. On the other hand, the complex scalar
$Z^{(k\ell)}$ becomes an $N_k\times N_\ell$ complex matrix transforming in the
$(N_k,\bar N_\ell)$ representation of $U(N_k)\times U(N_\ell)$.
Let us now describe the modification of the potential. The superpotential ${\cal W}_1$ given in
\refb{ew1} is generalized to
\begin{eqnarray}\displaystyle \label{ew1gen}
{\cal W}_1 &=& \sqrt 2\left[\sum_{k,\ell,m=1}^3 \varepsilon^{k\ell m} {~ \rm Tr} \, \Big(\Phi^{(k)}_m
\, Z^{(k\ell)} Z^{(\ell k)} \Big) + \sum_{k=1}^3 {~ \rm Tr} \, \Big( Z^{(4k)} \Phi^{(k)}_k Z^{(k 4)} \Big)\right.
\nonumber \\ && \left.
\qquad - \sum_{k=1}^3 {~ \rm Tr} \, \Big( \Phi^{(4)}_k Z^{(4k)} Z^{(k 4)} \Big)\right]
\, .
\end{eqnarray}
The generalization of \refb{ew2} takes the form
\begin{equation} \label{ew2gen}
{\cal W}_2 = \sqrt 2\, C \,
\left[\sum_{k,\ell, m=1\atop k<\ell,m; \, \ell\ne m}^4 {~ \rm Tr} \Big(Z^{(k\ell)} Z^{(\ell m)} Z^{(m k)}
\Big)\right]\, .
\end{equation}
The generalization of \refb{ew3} is
\begin{equation} \label{ew3gen}
{\cal W}_3 =\sqrt 2\left[\sum_{k,\ell,m=1}^3 c^{(k\ell)} \, \varepsilon^{k\ell m} \, N_\ell
{~ \rm Tr} \, \Big(\Phi^{(k)}_m \Big)
+ \sum_{k=1}^3 c^{(k4)} \, \Big[ N_4
{~ \rm Tr} \, \Big(\Phi^{(k)}_k\Big) - N_k {~ \rm Tr}\Big( \Phi^{(4)}_k\Big) \Big]\right]\, .
\end{equation}
There is also an additional superpotential
\begin{equation} \label{ew4gen}
{\cal W}_4 = -\sqrt 2 \sum_{k=1}^4 {~ \rm Tr} \Big( \Phi^{(k)}_1 \left[\Phi^{(k)}_2, \Phi^{(k)}_3\right] \Big)\, .
\end{equation}
Eq.~\refb{egauge} generalizes to
\begin{eqnarray}\displaystyle \label{egaugegen}
V_{gauge} &=&\sum_{k=1}^4 \sum_{\ell =1\atop \ell \ne k}^4 \sum_{i=1}^3 \,
{~ \rm Tr} \Big[\Big( X^{(k)}_i Z^{(k\ell)} - Z^{( k\ell)}
X^{(\ell)}_i \Big)^\dagger \Big( X^{(k)}_i Z^{(k\ell)} - Z^{( k\ell)}
X^{(\ell)}_i \Big)\Big] \nonumber \\
&& + \sum_{k=1}^4 \sum_{i,j=1}^3 {~ \rm Tr} \Big(\big[X^{(k)}_i, \Phi^{(k)}_j\big]^\dagger
\big[X^{(k)}_i, \Phi^{(k)}_j\big]\Big) + {1\over 4} \sum_{k=1}^4 \sum_{i,j=1}^3
{~ \rm Tr} \Big( [X^{(k)}_i, X^{(k)}_j]^\dagger [X^{(k)}_i, X^{(k)}_j]\Big)\, . \nonumber \\
\end{eqnarray}
Finally, the D-term potential \refb{evd} is generalized to
\begin{equation} \label{evdgen}
V_D = {1\over 2} \,
\sum_{k=1}^4 {~ \rm Tr} \bigg[\Big( \sum_{\ell=1\atop \ell \ne k}^4 Z^{(k\ell)} Z^{(k\ell)\dagger}
- \sum_{\ell=1\atop \ell \ne k}^4 Z^{(\ell k)\dagger} Z^{(\ell k)} +
\sum_{i=1}^3 [\Phi^{(k)}_i, \Phi^{(k)\dagger}_i] - c^{(k)} I_{N_k} \Big)^2 \bigg]\, ,
\end{equation}
where $I_{N_k}$ denotes the $N_k\times N_k$ identity matrix. The FI parameters
$c^{(k)}$ now satisfy
\begin{equation} \label{ecknonabelian}
\sum_{k=1}^4 c^{(k)} N_k = 0\, .
\end{equation}
The coefficients $c^{(k\ell)}$ and $c^{(k)}$
are to be chosen so that they reproduce the masses of the $Z^{(k\ell)}$'s correctly.
The equations take the form of \refb{eckl1} and \refb{eckl2} with identical right hand sides,
but the left hand sides are different since the masses of $Z^{(k\ell)}$'s expressed in terms
of $c^{(k)}$'s and $c^{(k\ell)}$'s have additional dependence on the $N_k$'s.
The potential given above has a shift symmetry generalizing \refb{eflat}
\begin{eqnarray}\displaystyle \label{eflatgen}
&& \Phi^{(k)}_m \to \Phi^{(k)}_m+\xi_m I_{N_k}, \quad \hbox{for}
\quad 1\le k \le 3, \quad k \ne m; \quad 1\le m\le 3,
\nonumber \\
&& \Phi^{(k)}_k \to \Phi^{(k)}_k + \zeta_k I_{N_k}, \quad \Phi^{(4)}_k \to
\Phi^{(4)}_k+\zeta_k I_{N_4}, \quad \hbox{for} \quad 1\le k\le 3\, , \nonumber \\
&& X^{(k)}_i \to X^{(k)}_i + a_i \, I_{N_k}\, , \quad \hbox{for} \quad 1\le i\le 3\, .
\end{eqnarray}
This generates six complex translations along compact directions and three real translations
along the non-compact directions.
The $\partial{\cal W}/\partial\Phi^{(k)}_m=0$ equations give
\begin{eqnarray}\displaystyle \label{eqzklgen}
Z^{(k\ell)} Z^{(\ell k)} &=&
-c^{(k\ell)} \, N_\ell I_{N_k} +[\Phi^{(k)}_k, \Phi^{(k)}_\ell]
\quad \hbox{for} \quad 1\le k, \ell \le 3\, , \nonumber \\
Z^{(k4)}Z^{(4k)} &=& - c^{(k4)} \, N_4 I_{N_k} + \sum_{\ell, m=1}^3 \varepsilon^{k\ell m} \Phi^{(k)}_\ell \Phi^{(k)}_m\, ,
\quad 1\le k \le 3\, , \nonumber \\
Z^{(4k)}Z^{(k4)} &=& - c^{(k4)} \, N_k I_{N_4} - \sum_{\ell, m=1}^3 \varepsilon^{k\ell m} \Phi^{(4)}_\ell \Phi^{(4)}_m\, ,
\quad 1\le k \le 3\, ,
\end{eqnarray}
generalizing \refb{eqzkl}. The $\partial{\cal W}/\partial Z^{(k\ell)}=0$ equations give
\begin{eqnarray}\displaystyle \label{ezeqgen}
&& \sum_{m=1}^3 \varepsilon^{k\ell m} \Big( Z^{(\ell k)} \, \Phi^{(k)}_m -\Phi^{(\ell)}_m \, Z^{(\ell k)}
\Big)
+ C\, \sum_{m=1\atop m\ne k,\ell}^4 Z^{(\ell m)} Z^{(mk)} = 0 \quad \hbox{for} \quad
1\le k, \ell \le 3\, , \nonumber \\
&& \Big(\Phi^{(k)}_k Z^{(k 4)} - Z^{(k 4)} \Phi^{(4)}_k\Big)
+ C\, \sum_{\ell=1\atop \ell \ne k}^3 Z^{(k\ell)} Z^{(\ell 4)} = 0 \quad \hbox{for} \quad
1\le k \le 3\, ,
\nonumber \\
&& \Big(Z^{(4 k)} \, \Phi^{(k)}_k - \Phi^{(4)}_k \, Z^{(4 k)} \Big)
+ C\, \sum_{m=1\atop m \ne k}^3 Z^{(4m)} Z^{(mk)} = 0 \quad \hbox{for} \quad
1\le k \le 3\, ,
\end{eqnarray}
generalizing \refb{ezeq}.
It seems reasonable to assume that up to the translation symmetry described in the last
line of \refb{eflatgen}, all the $X^{(k)}_i$'s vanish at the zeroes of the potential
since this makes all the terms in $V_{gauge}$ vanish.
This will also make the classical bound state have zero size in the non-compact directions.
Furthermore, the effect of D-term constraints
is to take the quotient of the space of solutions to \refb{eqzklgen}, \refb{ezeqgen} by
complexified $\prod_{k=1}^4 U(N_k)$ gauge transformations.
Let ${\cal M}$ be the space
of gauge inequivalent solutions to
\refb{eqzklgen}, \refb{ezeqgen} after factoring out the zero mode directions
associated with the shift symmetry \refb{eflatgen}.
The number of supersymmetric states (or more precisely the index $B_{14}$) will be given by the
Euler number of ${\cal M}$.
Thus, duality symmetry of string theory predicts that
\begin{equation}
\chi({\cal M}) = - \widehat c(4 N_1 N_2 N_3 N_4)\, ,
\end{equation}
where $\widehat c(u)$ has been defined in \refb{ek6.5}.
If ${\cal M}$ is zero dimensional, then $\chi({\cal M})$ just counts the number of solutions as in the
abelian case. In that case all the microstates would carry strictly zero angular momentum
after factoring out the contribution due to the goldstino fermion modes.
\sectiono{Conclusion} \label{sconc}
In this paper we have set up the general equations whose solutions describe the BPS
states of type II string theory compactified on $T^6$ carrying only RR charges. We have
been able to solve the equations explicitly when the charges take the lowest possible values.
The result is in perfect agreement with the counting of the same states in a U-dual
description.
Admittedly, this is only a small beginning of a much more ambitious project. Nevertheless
even at this level our analysis provides a non-trivial test of duality symmetry, since the counting
leading to the magic number 12 is very different from the one that was used to arrive at the
formula \refb{ek6.5}. As far as the test of black hole entropy is concerned, a black hole carrying
the charges given in \refb{echarge} has large curvature at the horizon and hence the Bekenstein-Hawking
entropy is not expected to agree with $\ln 12$. Nevertheless, explicit computation of the Bekenstein-Hawking
entropy, together with one loop logarithmic corrections\cite{1005.3044,1106.0080},
give a macroscopic entropy
\begin{equation}
S_{macro} = \pi \sqrt\Delta - 2 \ln \Delta +\cdots \simeq 2\pi - 2 \ln 4 \simeq 3.51\, ,
\end{equation}
which is not very different from the microscopic entropy
\begin{equation} \label{emicro}
S_{micro} = \ln 12 = 2.48\, .
\end{equation}
Thus it is not unreasonable to regard our analysis as the counting of microstates of a black hole
made solely of D-branes although the curvature at the horizon of the black hole is large.
Just for comparison we note that for $\Delta=100$, $\ell_1\ell_2=1$ we shall have
\begin{equation}
S_{macro} = \pi \sqrt{100} - 2 \ln 100 +\cdots \simeq 22.2056, \quad S_{micro} = \ln 3627000060
= 22.012 \, .
\end{equation}
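The numerical values quoted above are easy to reproduce. The following sketch (a check of ours, not part of the paper) evaluates the two displayed terms of $S_{macro}$ and the microscopic logarithms for $\Delta=4$ and $\Delta=100$.

```python
import math

# Leading Bekenstein-Hawking term plus the one-loop logarithm,
# S_macro ~ pi*sqrt(Delta) - 2*ln(Delta); higher corrections ("+ ...")
# are dropped, which is how the quoted figures were obtained.
def s_macro(delta):
    return math.pi * math.sqrt(delta) - 2.0 * math.log(delta)

s_macro_4 = s_macro(4)               # 2*pi - 2*ln 4 ~ 3.51
s_micro_4 = math.log(12)             # ln 12 ~ 2.48
s_macro_100 = s_macro(100)           # ~ 22.2056
s_micro_100 = math.log(3627000060)   # ~ 22.012
```
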
In recent years there has also been progress in computing the macroscopic entropy of
these black holes by evaluating the supergravity path integral in the near horizon geometry
of the black hole using localization
techniques\cite{0905.2686,1012.0265,1111.1161,1208.6221,1404.0033}.
In this approach one regards the ${\cal N}=8$
supersymmetric theory in 3+1 dimensions as an ${\cal N}=2$ supersymmetric theory with
vector, hyper, gravitino and Weyl multiplets and evaluates the path integral.
Although the arguments are not complete
due to the inability to extend the analysis to include hypermultiplets and gravitino
multiplets in the language of ${\cal N}=2$ supergravity,
if we ignore this problem then the result of localization gives the following
result for $S_{macro}$ from the leading saddle point\cite{1111.1161}
\begin{equation}
S_{macro} \simeq \ln\left[\sqrt 2\pi \, \Delta^{-7/4} \, I_{7/2}(\pi\sqrt\Delta)\right]\, ,
\end{equation}
where $I_n(x)$ is the modified Bessel function of the first kind. For $\Delta=4$ this gives
\begin{equation}
S_{macro}=2.50\, ,
\end{equation}
which is quite close to the microscopic result \refb{emicro}.
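Since the order $7/2$ is half-integral, $I_{7/2}$ reduces to elementary functions, so the quoted value $S_{macro}\simeq 2.50$ can be checked directly. The sketch below (an independent numerical check of ours) builds $I_{7/2}$ from $I_{1/2}(x)=\sqrt{2/\pi x}\,\sinh x$ and $I_{3/2}(x)=\sqrt{2/\pi x}\,(\cosh x - \sinh x/x)$ via the standard recurrence $I_{\nu+1}(x)=I_{\nu-1}(x)-(2\nu/x)I_\nu(x)$.

```python
import math

def bessel_i_half(order_times_2, x):
    """I_{n/2}(x) for odd n >= 1, via the upward three-term recurrence."""
    pref = math.sqrt(2.0 / (math.pi * x))
    i_prev = pref * math.sinh(x)                      # I_{1/2}
    if order_times_2 == 1:
        return i_prev
    i_cur = pref * (math.cosh(x) - math.sinh(x) / x)  # I_{3/2}
    v = 1.5
    while v < order_times_2 / 2.0:
        # I_{v+1} = I_{v-1} - (2v/x) I_v
        i_prev, i_cur = i_cur, i_prev - (2.0 * v / x) * i_cur
        v += 1.0
    return i_cur

def s_macro_localization(delta):
    """S_macro = ln[ sqrt(2) * pi * Delta^{-7/4} * I_{7/2}(pi*sqrt(Delta)) ]."""
    return math.log(math.sqrt(2.0) * math.pi * delta ** (-7.0 / 4.0)
                    * bessel_i_half(7, math.pi * math.sqrt(delta)))

s_loc_4 = s_macro_localization(4)   # ~ 2.50
```

Note that the argument of the logarithm at $\Delta=4$ comes out close to $12.2$, i.e. close to the microscopic degeneracy $12$ itself.
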
Finally we must mention that there is one important aspect of our result which could have
significant impact on our understanding of black hole microstates in the future.
All the microstates of the D-brane system we have constructed have zero angular momentum
after factoring out the contribution due to fermion zero modes, in agreement
with the prediction from the black hole side. Although the D-brane and black hole descriptions
hold in different regions of the moduli space of the theory, and hence the detailed results
on the angular momentum need not match, the results mentioned above indicate that the
D-brane description may be closer to the actual microstates of the black hole than what one
might naively expect. This could eventually help us identify the microstates of the black hole
in the region of the moduli space where the black hole description is actually valid.
\bigskip
\noindent {\bf Acknowledgements:}
We wish to thank Anirban Basu and Boris Pioline
for useful discussions.
This work was
supported in part by the
DAE project 12-R\&D-HRI-5.02-0303.
The work of A.S. was also supported in
part by the
J. C. Bose fellowship of
the Department of Science and Technology, India.
In this paper, we present a simple proof of Perelman's collapsing
theorem for $3$-manifolds (cf. Theorem 7.4 of \cite{Per2003a}),
which Perelman used to verify Thurston's Geometrization Conjecture
on the classification of $3$-manifolds.
\subsection{Statement of Perelman's collapsing theorem and
the main difficulties in its proof.}
\begin{theorem}[\textbf{Perelman's Collapsing Theorem}]\label{thm0.1}
Let $\{ (M^3_\alpha, g_{ij}^\alpha)\}_{\alpha \in \mathbb Z}$ be a
sequence of compact oriented Riemannian $3$-manifolds, closed or
with convex incompressible toral boundary, and let
$\{\omega_\alpha\}$ be a sequence of positive numbers with $
\omega_\alpha \to 0$. Suppose that
{\rm (1)} for each $x \in M^3_{\alpha} $ there exists a radius $\rho
= \rho_{\alpha}(x) $, $0 < \rho < 1$, not exceeding the diameter of
the manifold, such that the metric ball $B_{g^{\alpha}}(x, \rho)$ in
the metric $ g_{ij}^{\alpha} $ has volume at most $\omega_{\alpha}
\rho^3$ and sectional curvatures of $g^\alpha_{ij}$ at least $ -
\rho^{-2}$;
{\rm (2)} each component of toral boundary of $M^3_\alpha$ has
diameter at most $\omega_\alpha$, and has a topologically trivial
collar of length one and the sectional curvatures of $M^3_\alpha$
are between $(-\frac{1}{4} - \varepsilon) $ and $ (-\frac{1}{4} +
\varepsilon)$.
Then, for sufficiently large $\alpha$, $M^3_{\alpha}$ is
diffeomorphic to a graph-manifold.
\end{theorem}
If a $3$-manifold $M^3$ admits an almost free circle action, then we
say that $M^3$ admits a {\it Seifert fibration} structure or is {\it
Seifert fibred}. A {\it graph-manifold} is a compact $3$-manifold
that is a connected sum of manifolds each of which is either
diffeomorphic to the solid torus or can be cut apart along a finite
collection of incompressible tori into Seifert fibred $3$-manifolds.
Perelman indeed listed an extra assumption (3) in \autoref{thm0.1}
above. However, Perelman (cf. page 20 of \cite{Per2003a}) also
pointed out that, if the proof of his stability theorem (cf.
\cite{Kap2007}) is available, then his extra smoothness assumption
(3) is in fact redundant.
The conclusion of \autoref{thm0.1} fails if the assumption of toral
boundary is removed. For instance, the product $3$-manifold $S^2
\times [0, 1]$ of a $2$-sphere and an interval can be collapsed
while keeping curvatures non-negative. However, $M^3 = S^2 \times
[0, 1]$ is not a graph-manifold.
Under the assumption of two-sided curvature bounds $-
\frac{C}{\rho^{2}} \le {\rm curv}_{M^3_\alpha} \le \frac{C}{\rho^{2}}$,
the collapsed manifold $M^3_\alpha$ above admits an $F$-structure of
positive rank, by the Cheeger-Gromov collapsing theory (cf.
\cite{CG1990}). It is well-known that a $3$-manifold $M^3_\alpha$
admits an $F$-structure of positive rank if and only if $M^3_\alpha$
is a graph-manifold (cf. \cite{R1993}).
On page 153 of \cite{SY2005}, Shioya and Yamaguchi stated a version
of \autoref{thm0.1} above for the case of closed manifolds, but
their proof works for case of manifolds with incompressible convex
toral boundary as well. Morgan and Tian \cite{MT2008} presented a
proof of \autoref{thm0.1} without assumptions on diameters but with
Perelman's extra smoothness assumption (3) which we discuss below.
It turned out that the diameter assumption is related to the study
of diameter-collapsed $3$-manifolds with curvature bounded from
below. To see this relation, we state a local re-scaled version of
Perelman's collapsing \autoref{thm0.1}.
\medskip
\noindent\textbf{Theorem 0.1'.} (Re-scaled version of
\autoref{thm0.1}) {\itshape Let $\{ (M^3_\alpha,
g_{ij}^\alpha)\}_{\alpha \in \mathbb Z} $ be a sequence of Riemannian $3$-manifolds and let
$\{\omega_\alpha\}$ be a sequence of positive numbers as in
\autoref{thm0.1} above, $x_\alpha \in M^3_\alpha$ and $ \rho_\alpha=
\rho_\alpha (x_\alpha)$. Suppose that there exists a re-scaled
family of pointed Riemannian manifolds $\{ ((M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha), x_\alpha)\}_{\alpha \in \mathbb Z}
$ satisfying the following conditions:
\begin{enumerate}[{\rm (i)}]
\item The re-scaled Riemannian manifold $(M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha)$ has ${\rm curv} \ge -1$ on the ball
$B_{\rho^{-2}_\alpha g_{ij}^\alpha }(x_\alpha, 1)$;
\item The diameters of the re-scaled manifolds $\{( M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha) \}$ are uniformly bounded from below
by $1$, i.e.,
\begin{equation}\label{eq:0.1}
{\rm diam}( M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha) \ge 1
\end{equation}
\item The volumes of unit metric balls collapse to zero, i.e.:
$${\rm Vol}[ B_{\rho^{-2}_\alpha g_{ij}^{\alpha}}(x_{\alpha}, 1)] \le
\omega_\alpha \to 0,$$ as $\alpha \to \infty$.
\end{enumerate}
Then, for sufficiently large $\alpha$, the collapsing $3$-manifold
$M^3_\alpha$ is a graph-manifold.}
\medskip
Without inequality \eqref{eq:0.1} above, the volume-collapsing
$3$-manifolds could be diameter-collapsing. Perelman's condition
\eqref{eq:0.1} ensures that the normalized family $\{( M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha)\}$ can {\it not} collapse to a point
uniformly. By {\it collapsing to a point uniformly}, we mean that
there is an additional family of scaling constants $\{
\lambda_\alpha\} $ such that the sequence $\{ (M^3_\alpha,
\lambda_\alpha g_{ij}^\alpha)\}$ is convergent to a $3$-dimensional
(possibly singular) manifold $Y^3_\infty$ with non-negative
curvature. Professor Karsten Grove kindly pointed out that the
study of diameter-collapsing theory for $3$-manifolds might be
related to a weak version of the Poincar\'{e} conjecture. For this
reason, Shioya and Yamaguchi made the following conjecture.
\begin{conj}[Shioya-Yamaguchi \cite{SY2000} page 4]\label{conj0.2}
Suppose that $Y^3_\infty $ is a $3$-dimensional compact,
simply-connected, non-negatively curved Alexandrov space without
boundary and that $Y^3_\infty$ is a topological manifold. Then
$Y^3_\infty$ is homeomorphic to a sphere.
\end{conj}
Shioya and Yamaguchi (\cite{SY2000}, page 4) commented that {\it if
\autoref{conj0.2} is true then the study of collapsed $3$-manifolds
with curvature bounded from below would be completely understood}.
They also observed that \autoref{conj0.2} is true for a special case
when the closed (possibly singular) manifold $Y^3_\infty$ above is a
{\it smooth} Riemannian manifold with non-negative curvature; this
is due to Hamilton's work on $3$-manifolds with non-negative Ricci
curvature (cf. \cite{H1986}).
Coincidentally, Perelman added the extra smoothness assumption (3)
in his collapsing theorem.
\begin{perasum}[cf. \cite{Per2003a}]\label{asum0.3}
For every $w' > 0$ there exist $\bar r = \bar r(w') > 0$ and constants $K_m =
K_m(w') < \infty$ for $m = 0, 1, 2, \cdots$, such that for all
$\alpha$ sufficiently large, and any $0 < r \le \bar{ r}$, if the
ball $B_{g_\alpha}(x, r)$ has volume at least $w'r^3$ and sectional
curvatures at least $-r^{-2}$, then the curvature and its $m$-th
order covariant derivatives at $x$ are bounded by $K_0 r^{-2}$ and
$K_mr^{-m-2}$ for $m = 1, 2,\cdots,$ respectively.
\end{perasum}
Let us explain how Perelman's Smoothness \autoref{asum0.3} is
related to the smooth case of \autoref{conj0.2} and let $\{
((M^3_{\alpha}, \rho^{-2}_\alpha g_{ij}^\alpha), x_\alpha)\}_
{\alpha \in \mathbb Z} $ be as in Theorem 0.1'. If we choose the new
scaling factor $\lambda^2_{\alpha}$ such that $\lambda^2_{\alpha} /
\rho^{-2}_{\alpha} \to +\infty$ as $\alpha \to \infty$, then the
newly re-scaled metric $\lambda^2_{\alpha} g_{ij}^\alpha$ will have
sectional curvature $\ge -
\frac{\rho^{-2}_{\alpha}}{\lambda^2_{\alpha}} \to 0$ as $\alpha \to
\infty$. Suppose that $(Y_{\infty}, y_{\infty}) $ is a pointed
Gromov-Hausdorff limit of a subsequence of $\{ ((M^3_{\alpha},
\lambda^2_\alpha g_{ij}^\alpha), x_\alpha)\}_{\alpha \in \mathbb Z}
$. Then the limiting metric space $Y_{\infty}$ will have
non-negative curvature and a possibly singular metric. When
$\dim[Y_\infty] = 3$, by Perelman's Smoothness \autoref{asum0.3},
the limiting metric space $Y_\infty$ is indeed a {\it smooth}
Riemannian manifold of non-negative curvature. In this smooth case,
\autoref{conj0.2} is known to be true, (see \cite{H1986}).
For simplicity, we let $\hat g_{ij}^\alpha = \rho^{-2}_\alpha
g_{ij}^\alpha$ be as in Theorem 0.1'. By Gromov's compactness
theorem, there is a subsequence of a pointed Riemannian manifolds
$\{ ((M^3_\alpha, \hat{ g}_{ij}^\alpha), x_\alpha) \}$ convergent to
a lower dimensional pointed space $(X^k, x_\infty)$ of dimension
either $1$ or $2$, i.e.:
\begin{equation}\label{eq:0.2}
1 \le \dim [ X^k] \le 2
\end{equation}
using \eqref{eq:0.1}. To establish \autoref{thm0.1}, it is
important to establish that, for sufficiently large $\alpha$, the
collapsed manifold $M^3_\alpha$ has a decomposition $M^3_\alpha =
\cup_i U_{\alpha, i}$ such that each $U_{\alpha, i}$ admits an
almost-free circle action:
$$ S^1 \to U_{\alpha, i } \to X^2_{\alpha, i}$$
We also need to show that these almost-free circle actions are
compatible (almost commute) on possible overlaps.
Let us first recall how Perelman's collapsing theorem for
$3$-manifolds plays an important role in his solution to Thurston's
Geometrization Conjecture on the classification of $3$-manifolds.
\subsection{Applications of collapsing theory to the
classification of $3$-manifolds.} \
\medskip
In 2002-2003, Perelman posted online three important but densely
written preprints on Ricci flows with surgery on compact
$3$-manifolds (\cite{Per2002}, \cite{Per2003a} and \cite{Per2003b}),
in order to solve both the Poincar\'{e} conjecture and Thurston's
conjecture on Geometrization of $3$-dimensional manifolds.
Thurston's Geometrization Conjecture states that {\it ``for any
closed, oriented and connected $3$-manifold $M^3$, there is a
decomposition $[M^3 -\bigcup \Sigma^2_j] = N^3_1 \cup N^3_2 \cup \cdots
\cup N^3_{m}$ such that each $N^3_i$ admits a locally homogeneous
metric with possible incompressible boundaries $\Sigma^2_j$, where
$\Sigma^2_j$ is homeomorphic to a quotient of a $2$-sphere or a
$2$-torus".} There are exactly 8 homogeneous spaces in dimension 3.
The list of $3$-dimensional homogeneous spaces includes 8
geometries: $\mathbb{R}^3$, $\mathbb{H}^3$, $\mathbf{S}^3$,
$\mathbb{H}^2 \times \mathbb{R}$, $\mathbf{S}^2 \times \mathbb{R}$,
$\tilde{SL}(2, \mathbb{R})$, $Nil$ and $Sol$.
Thurston's Geometrization Conjecture suggests the existence of
especially nice metrics on $3$-manifolds and consequently, a more
analytic approach to the problem of classifying $3$-manifolds.
Richard Hamilton formalized one such approach by introducing the
Ricci flow equation on the space of Riemannian metrics:
\begin{equation}\label{eq:0.3}
\frac{\partial g(t)}{\partial t} = - 2{\rm Ric}(g(t))
\end{equation}
where ${\rm Ric}(g(t))$ is the Ricci curvature tensor of the metric
$g(t)$. Beginning with any Riemannian manifold $(M, g_0)$, there is
a solution $g(t)$ of this Ricci flow on $M$ for $t$ in some interval
such that $g(0) = g_0$. In dimension $3$, the fixed points (up to
re-scaling) of this equation include the Riemannian metrics of
constant Ricci curvature. For instance, they are quotients of
$\mathbb{R}^3$, $\mathbb{H}^3$ and $\mathbf{S}^3$ up to scaling
factors. It is easy to see that, on compact quotients of
$\mathbb{R}^3 $ or $\mathbb{H}^3$, the solution to Ricci flow
equation is either stable or expanding. Thus, on compact quotients
of $\mathbb{R}^3 $ and $\mathbb{H}^3$, the solution to Ricci flow
equation exists for all time $t \ge 0$.
However, on quotients of $\mathbf{S}^2 \times \mathbb{R}$ or
$\mathbf{S}^3$, the solution $\{g(t)\}$ to Ricci flow equation
exists only for finite time $t < T_0$. Hence, one knows that in
general the Ricci flow will develop singularities in finite time,
and thus a method for analyzing these singularities and continuing
the flow past them must be found.
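A concrete illustration of such a finite-time singularity (our own example, not part of the original discussion): on a round $3$-sphere of radius $r$ one has ${\rm Ric} = (2/r^2)\,g$, so \eqref{eq:0.3} reduces to the ODE $d(r^2)/dt = -4$, i.e. $r(t)=\sqrt{r_0^2-4t}$, and the sphere shrinks to a point at the finite time $T=r_0^2/4$.

```python
import math

def ricci_flow_radius(r0, t, steps=100000):
    """Forward-Euler integration of dr/dt = -2/r (round S^3 under
    Ricci flow) up to a time t < r0**2 / 4."""
    r, dt = r0, t / steps
    for _ in range(steps):
        r += dt * (-2.0 / r)
    return r

r0 = 1.0
extinction_time = r0 ** 2 / 4.0          # T = 0.25: finite-time singularity
t = 0.2                                  # a time shortly before extinction
r_numeric = ricci_flow_radius(r0, t)
r_exact = math.sqrt(r0 ** 2 - 4.0 * t)   # exact shrinking-sphere solution
```
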
These singularities may occur along proper subsets of the manifold,
not the entire manifold. Thus, Perelman introduced a more general
evolution process called Ricci flow with surgery (cf. \cite{Per2002}
and \cite{Per2003a}). In fact, a similar process was first
introduced by Hamilton in the context of four-manifolds. This
evolution process is still parameterized by an interval in time, so
that for each $t$ in the interval of definition there is a compact
Riemannian $3$-manifold $M_t$. However, there is a discrete set of
times at which the manifolds and metrics undergo topological and
metric discontinuities (surgeries). Perelman did surgery along
$2$-spheres rather than surfaces of higher genus, so that the change
in topology for $\{M^3_t\}$ turns out to be completely understood.
More precisely, Perelman's surgery on $3$-manifolds is {\it the
reverse process of taking connected sums:} cut a $3$-manifold along
a $2$-sphere and then cap-off by two disjoint $3$-balls. Perelman's
surgery processes produced exactly the topological operations needed
to cut the manifold into pieces on which the Ricci flow can produce
the metrics sufficiently controlled so that the topology can be
recognized. It was expected that each connected component of the
resulting new manifold $M^3_t$ is either a graph-manifold or a
quotient of one of the homogeneous spaces listed above. It is well-known
that any graph-manifold is a union of quotients of 7 (out of the 8
possible) homogeneous spaces described above. More precisely,
Perelman presented a very densely written proof of the following
result.
\begin{theorem}[Perelman \cite{Per2002},\cite{Per2003a}, \cite{Per2003b}]\label{thm0.4}
Let $(M^3, g_0)$ be a closed and oriented Riemannian $3$-manifold.
Then there is a Ricci flow with surgery, say $\{ (M_t, g(t))\}$,
defined for all $t \in [0, T)$ with initial metric $(M, g_0)$. The
set of discontinuity times for this Ricci flow with surgery is a
discrete subset of $[0,\infty)$. The topological change in the
$3$-manifold as one crosses a surgery time is a connected sum
decomposition together with removal of connected components, each of
which is diffeomorphic to one of $(S^2 \times S^1)/\Gamma_i$ or
$S^3/\Gamma_j$. Furthermore, there are two possibilities:
{\rm (1)} Either the Ricci flow with surgery terminates at a finite
time $T$. In this case, $M^3_0$ is diffeomorphic to a connected sum
of manifolds of the form $(S^2 \times S^1)/\Gamma_i$ and $S^3/\Gamma_j$. In particular, if
$M^3_0$ is simply-connected, then $M^3_0$ is diffeomorphic to $S^3$.
{\rm (2)} Or the Ricci flow with surgery exists for all time, i.e.,
$T= \infty$.
\end{theorem}
The detailed proof of Perelman's theorem above can be found in
\cite{CZ2006}, \cite{KL2008} and \cite{MT2007}.
In fact, if $M^3$ is a simply-connected closed manifold, then using
a theorem of Hurewicz, one finds that $ \pi_3(M^3) = H_3(M^3,
\mathbb Z)=\mathbb Z$. Hence, there is a map $F: S^3 \to M^3$ of
degree $1$. One can view $S^3$ as a two-point suspension of $S^2$,
i.e., $S^3 \sim S^2 \times [0, 1]/\{0, 1\}$. Thus, for such a
manifold $M^3$ with a metric $g$, one can define the $2$-dimensional
width $W_2(M, g) = \inf_{\deg(F) = 1} \max_{0\le s \le 1} \{ {\rm Area}_{g}
[F(S^2 \times \{s\}) ] \}$ (compare with [CalC92]). Colding and Minicozzi (cf.
\cite{CM2005},\cite{CM2008a},\cite{CM2008b}) established that
\begin{equation}\label{eq:0.4}
0< W_2(M^3_t, g(t)) \le (t + c_1)^{\frac 34} [c_2 - 16\pi
(t+c_3)^{\frac 14} ]
\end{equation}
for Perelman's solutions $\{M^3_t, g(t)\}$ to Ricci flow with
surgery, where $\{c_1, c_2, c_3\}$ are positive constants
independent of $t \in [0, T) $. Therefore, for a simply-connected
closed manifold $M^3_0$, it follows from \eqref{eq:0.4} that the
Ricci flow with surgery must end at a finite time $T$. Thus, by
\autoref{thm0.4}, the conclusion of the Poincar\'{e} conjecture
holds for such a simply-connected closed $3$-manifold $M^3_0$. Other
proofs of Perelman's finite time extinction theorem can be found in
\cite{Per2003b} and \cite{MT2007}.
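Once \eqref{eq:0.4} is granted, the extinction argument is elementary arithmetic: for any positive constants $c_1, c_2, c_3$, the right-hand side becomes negative as soon as $16\pi(t+c_3)^{1/4}$ exceeds $c_2$, i.e. at $t_* = (c_2/16\pi)^4 - c_3$, contradicting $W_2 > 0$. The sketch below uses illustrative (hypothetical) constants, since the actual $c_i$ are not specified.

```python
import math

# Right-hand side of the width bound (eq. (0.4)):
# (t + c1)^{3/4} * (c2 - 16*pi*(t + c3)^{1/4}).
# It vanishes at t* = (c2/(16*pi))^4 - c3 and is negative thereafter,
# forcing the flow with surgery to go extinct in finite time.
def width_bound(t, c1, c2, c3):
    return (t + c1) ** 0.75 * (c2 - 16.0 * math.pi * (t + c3) ** 0.25)

c1, c2, c3 = 1.0, 100.0, 1.0                 # illustrative constants only
t_star = (c2 / (16.0 * math.pi)) ** 4 - c3   # the bound crosses zero here
```
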
It remains to discuss the long time behavior of Ricci curvature flow
with surgery. We will perform the so-called Margulis thick-thin
decomposition for a complete Riemannian manifold $(M^n, g)$. The
thick part is the {\it non-collapsing part} of $(M^n, g)$, while the
thin part is the {\it collapsing } portion of $(M^n, g)$.
For each $x$ and $t$, we choose a radius $\rho = \rho(x, t)$ of the
metric ball $B_{g(t)}(x, \rho)$ so that
$$
\inf\{ \sec_{g(t)}|_y \mid y \in B_{g(t)}(x,\rho)\}\ge - \rho^{-2}
$$
and $ \frac 12 \rho(x,t) \le \rho(y, t)\le \rho(x, t) $ for all $y
\in B_{g(t)}(x, \rho/2)$. The re-scaled thin part of $M_t$ can be
defined as
\begin{equation}\label{eq:0.5}
M_-(\omega, t) = \{ x \in M^3_t \mid {\rm Vol}[ B_{g(t)}(x,
\rho(x, t) )] < \omega \rho^3(x, t) \}
\end{equation}
and its complement is denoted by $M_+(\omega, t)$, which is called
the thick part for a fixed positive number $\omega$.
When there is no surgery occurring for all time $t \ge 0$, Hamilton
successfully classified the thick part $M_+(\omega, t)$.
\begin{theorem}[Hamilton \cite{H1999},
non-collapsing part]\label{thm0.5} Let $\{ (M^3_t, g(t)) \}$,
$M_-(\omega, t)$ and $M_+(\omega, t)$ be as above. Suppose that
$M^3_t$ is diffeomorphic to $M^3_0$ for all $t \ge 0$. Then there
are only two possibilities:
\smallskip
\noindent {\rm (1)} If there is no thin part (i.e., $M_-(\omega, t)
= \varnothing$), then either $(M^3_t, g(t))$ is convergent to a flat
$3$-manifold or $(M^3_t, \frac{2}{t} g(t))$ is convergent to a compact
quotient of hyperbolic space $\mathbb H^3$;
\smallskip
\noindent {\rm (2)} If both $M_+(\omega, t)$ and $M_-(\omega, t)$
are non-empty, then the thick part $M_+(\omega, t)$ is diffeomorphic
to a disjoint union of quotients of real hyperbolic space $\mathbb
H^3$ with finite volume and with cuspidal ends removed.
\end{theorem}
\begin{figure*}[ht]
\includegraphics[width=250pt]{section0_1.pdf}\\
\caption{2-dimensional thick-thin decomposition }\label{2dimttdec}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=250pt]{section0_2.pdf}\\
\caption{3-dimensional thick-thin decomposition }\label{3dimttdec}
\end{figure*}
Perelman (cf. \cite{Per2003a}) asserted that the conclusion of
\autoref{thm0.5} holds if we replace the classical Ricci flow by
{\it the Ricci flow with surgeries}.
Detailed proof of this assertion of Perelman can be found in
\cite{CZ2006}, \cite{KL2008} and \cite{MT2010}.
Suppose that $\mathbb H^3/\Gamma$ is a complete but non-compact
hyperbolic $3$-manifold with finite volume. The cuspidal ends of
$\mathbb H^3/\Gamma$ are exactly the thin parts of $M^3_\infty
=\mathbb H^3/\Gamma$. Each cuspidal end of $\mathbb H^3/\Gamma$ is
diffeomorphic to the product of a torus and a half-line (i.e., $T^2 \times [0,
\infty)$). Hence, each cusp is a graph-manifold.
It should be pointed out that possibly infinitely many surgeries
took place {\it only} on thick parts of manifolds $\{M_t\}$ after
appropriate re-scalings, due to Perelman's celebrated
$\kappa$-non-collapsing theory, (see \cite{Per2002}).
Moreover, Perelman (cf. \cite{Per2003a}) pointed out that the study
of the thin part $M_-(\omega, t)$ has nothing to do with Ricci flow,
but is related to {\it his version} of critical point theory for
distance functions. We now outline our simple proof of
\autoref{thm0.1} using Perelman's version of critical point theory
in the next subsection.
\subsection{Outline of a proof of Perelman's
collapsing theorem.} \
\medskip
In order to illustrate the main strategy in the proof of Perelman's
collapsing theorem for $3$-manifolds, we make some general remarks.
Roughly speaking, Perelman's collapsing theorem can be viewed as a
generalization of the implicit function theorem. Suppose that
$\{M^3_\alpha\}$ is a sequence of collapsing $3$-manifolds with
curvature $\ge -1$ and that $\{M^3_\alpha\}$ is not a
diameter-collapsing sequence. To verify that $M^3_\alpha$ is a
graph-manifold for sufficiently large $\alpha$, it is sufficient to
construct a decomposition $M^3_\alpha = \cup^{m_\alpha}_{i=1}
U_{\alpha, i }$ and a collection of {\it regular} functions (or
maps) $F_{\alpha, i}: U_{\alpha, i } \to \mathbb R^{s_i}$, where
$s_i =1$ or $2$. We require that the collection of locally defined
functions (or maps) $\{( U_{\alpha, i }, F_{\alpha,
i})\}^{m_\alpha}_{i=1}$ satisfy two conditions:
\begin{enumerate}[{\rm (i)}]
\item Each function (or map) $F_{\alpha, i}$ is
{\it regular} enough so that Perelman's version of implicit function
theorem (cf. \autoref{thm1.2} below) is applicable;
\item The collection of locally defined {\it regular} functions
(or maps) are compatible on any possible overlaps in the sense of
Cheeger-Gromov (cf. \cite{CG1986} \cite{CG1990}). More precisely, if
$U_{\alpha, i } \cap U_{\alpha, j} \neq \varnothing$ and if $
[F_{\alpha, i}^{-1}(y) \cap F_{\alpha, j}^{-1}(z)] \neq \varnothing
$ with $\dim [ F_{\alpha, i}^{-1}(y) ] \le \dim [ F_{\alpha,
j}^{-1}(z)] $, then we require that either $F_{\alpha, i}^{-1}(y)
\subset F_{\alpha, j}^{-1}(z)$ or the union $[F_{\alpha, i}^{-1}(y)
\cup F_{\alpha, j}^{-1}(z)]$ is contained in a $2$-dimensional orbit
of an almost-free torus action.
\end{enumerate}
If the above two conditions are met, with additional efforts, we can
construct a {\it compatible} family of the locally defined Seifert
fibration structures (which is equivalent to an $F$-structure
$\mathcal F$ of positive rank in the sense of Cheeger-Gromov) on a
sufficiently collapsed $3$-manifold $M^3_\alpha$. It follows that
$M^3_\alpha$ is a graph-manifold for sufficiently large $\alpha$,
(cf. \cite{R1993}).
Perelman's choices of locally defined {\it regular} functions (or
maps) are related to distance functions $r_{A_{\alpha, i}}(x)
={\rm d}(x, A_{\alpha, i})$ from appropriate subsets $A_{\alpha,i}$. We
briefly illustrate the main strategy of our proof for the following
two cases.
\smallskip
\noindent
\textbf{Case 1.} {\it The metric balls $\{
B_{M^3_\alpha}(x_\alpha, r) \}$ collapse to an open interval}.
We will show that $B_{M^3_\alpha}(x_\alpha, r)$ is homeomorphic to a
slim cylinder $N^2_{\alpha}\times I$ with shrinking spherical or
toral factor $N^2_{\alpha}$ for sufficiently large $\alpha$. When a
sequence of the pointed $3$-manifolds $\{(B_{M^3_\alpha}(x_\alpha,
r), x_\alpha) \}$ with curvature $\ge -1$ is convergent to a
$1$-dimensional space $(X^1, x_\infty)$ and $x_\infty$ is an
interior point, Perelman-Yamaguchi fibration theory is applicable.
Thus, we are led to consider the fibration
$$
N^2_\alpha \to B_{M^3_\alpha}(x_\alpha, r)
\stackrel{F_{\alpha}}{\longrightarrow} (-\ell, \ell)
$$
\begin{figure*}[ht]
\includegraphics[width=250pt]{section4_1.pdf}\\
\caption{Slim cylinders with collapsing fibers.}\label{fig:4.1}
\end{figure*}
We now discuss the topological type of the fiber $N^2_\alpha$. Let
$x_\infty \in (-\ell, \ell) $ and $\varepsilon_\alpha$ be the
diameter of $F_{\alpha}^{-1}(x_\infty)$ in $M^3_\alpha$. We further
consider the limiting space $Y^s_\infty$ of re-scaled spaces
$\{(B_{\varepsilon^{-1}_\alpha
M^3_\alpha}(x_\alpha,\frac{r}{\varepsilon_\alpha}), x_\alpha) \},$
as $ \varepsilon_\alpha \to 0$. There are two sub-cases: $\dim
(Y^s_\infty) = 3$ or $\dim(Y^s_\infty) = 2$. Let us consider the
subcase of $\dim (Y^s_\infty) = 3$:
$$
N^2_\infty \to Y^3_\infty \to \mathbb R
$$
where both $N^2_\infty$ and $Y^3_\infty$ are manifolds with possibly
singular metrics of non-negative curvature. To classify singular
surfaces $N^2_\infty$ with non-negative curvature, we use a
splitting theorem and the distance non-increasing property of
Perelman-Sharafutdinov retraction on the universal cover $\tilde
N^2_\infty$, when $N^2_\infty$ has non-zero genus. With some extra
efforts, we will conclude that $N^2_\infty$ must be homeomorphic to
a quotient of the 2-sphere or the 2-torus, (see \autoref{section2} below).
It now follows from a version of Perelman's stability theorem that
the fiber $N^2_\alpha$ is homeomorphic to $N^2_\infty$, for
sufficiently large $\alpha$. Hence, $N^2_\alpha$ is a quotient of
the 2-sphere or the 2-torus as well, for sufficiently large $\alpha$. Our
new proof of Perelman's collapsing theorem for this subcase is much
simpler than the approach of Shioya-Yamaguchi presented in
\cite{SY2000}.
The sub-case of $\dim(Y^s_\infty) = 2$ is related to the following case:
\smallskip
\noindent \textbf{Case 2.} {\it The metric balls $\{
B_{M^3_\alpha}(x_\alpha, r)\} $ collapse to an open disk}.
We will show that $B_{M^3_\alpha}(x_\alpha, r)$ is homeomorphic to a
fat solid torus $D^2\times S^1_{\varepsilon_\alpha}$ with shrinking
core $S^1_{\varepsilon_\alpha}$ for sufficiently large $\alpha$.
\begin{figure*}[ht]
\includegraphics[width=300pt]{section1_1.pdf}\\
\caption{Fat solid tori with shrinking cores $S^1_\varepsilon$.}\label{fig:1.1}
\end{figure*}
In this case, our strategy can be illustrated in the following
diagram
\begin{diagram}
B_{M^3_{\alpha}}(x_{\alpha},\delta)&\rTo^{F_{\alpha}}&\mathbb{R}^2\\
\dTo_{\text{G-H}}&&\dCorresponds\\
B_{X^2}(x_{\infty},\delta)&\rTo^{F_{\infty}}&\mathbb{R}^2
\end{diagram}
where the sequence of metric balls $\{B_{(M^3_{\alpha},
\hat{g}^{\alpha})}(x_{\alpha}, \delta)\}$ is convergent to the
metric disk $B_{X^2}(x_{\infty}, \delta)$ for $\delta \le r$. We
will construct an admissible map $F_{\infty}$ which is regular at
the punctured disk, using Perelman's multiple conic singularity
theory, (see \autoref{thm1.17} below). Among other things, we will
use the following result of Perelman to construct the desired map
$F_\infty$.
\begin{theorem}[Conic Lemma in Perelman's critical point theory,
(cf. \cite{Per1994} page 211)]\label{thm0.6} Let $X$ be an
Alexandrov space of dimension $k$ with ${\rm curv} \ge -1$, and let $x\in X$
be an interior point of $X$. Then the distance function $r_x(y) = {\rm d}(x,
y)$ has {\it no} critical points on $[B_X(x,{\delta}) -\{ x\}]$ for
sufficiently small $\delta$ depending on $x$.
\end{theorem}
We will use \autoref{thm0.6} and Perelman's semi-flow orbit
stability theorem (cf. \autoref{prop1.14} below) to conclude that
the lifted maps $F_\alpha$ are regular on the annular region
$A_{M^3_{\alpha}} (x_{\alpha}, \varepsilon, \delta)=
[B_{M^3_{\alpha}}(x_{\alpha},\delta)- B_{M^3_{\alpha}}(x_{\alpha},
\varepsilon)]$. With extra efforts, one can construct a local
Seifert fibration structure:
$$
S^1 \to A_{M^3_{\alpha}} (x_{\alpha}, \varepsilon, \delta)
\stackrel{G_{\alpha}}{\longrightarrow} A_{X^2} (x_{\infty},
\varepsilon, \delta)
$$
In summary, Perelman's collapsing theorem for 3-manifolds can be
viewed as an extension of the implicit function theorem. Our proof of
Perelman's collapsing theorem benefited from {\it his version} of
critical point theory for distance functions, including his conic
singularity theory and fibration theory. Perelman's multiple conic
singularity theory and his fibration theory are designed for
possibly singular Alexandrov spaces $X^k$. Therefore, the smoothness
of metrics on $X^k$ does {\it not} play a big role in the
applications of Perelman's critical point theory, unless we run into
the so-called essential singularities (or extremal subsets). When
essential singularities do occur on surfaces, we use the MCS (multiple conic singularity) theory
(e.g. \autoref{thm0.6}) and the multiple-step Perelman-Sharafutdinov
flows to handle them, (see \autoref{section5.2} below).
Without using Perelman's version of critical point theory,
Shioya-Yamaguchi's proof of the collapsing theorem for $3$-manifolds
was lengthy and involved. For instance, they used their singular
version of the Gauss-Bonnet theorem to classify surfaces of non-negative
curvature, (see Chapter 14 of \cite{SY2000}). The proof of the singular
version of the Gauss-Bonnet theorem was non-trivial. In
addition, Shioya-Yamaguchi extended the Cheeger-Gromoll soul theory
to 3-dimensional singular spaces with non-negative curvature, which
was rather technical and occupied half of their first paper
\cite{SY2000}. Using Perelman's version of critical point theory, we
will provide alternative approaches to classify non-negatively
curved surfaces and open 3-manifolds with possibly singular metrics,
(e.g., the 3-dimensional soul theory). Our arguments inspired by
Perelman are considerably shorter than Shioya-Yamaguchi's proof for
the $3$-dimensional soul theory, (see \autoref{section2.2} below).
For the readers who prefer a traditional proof of the collapsing
theorem without using Perelman's version of critical point theory,
we recommend the important papers of Morgan-Tian \cite{MT2008} and
Shioya-Yamaguchi \cite{SY2000}, \cite{SY2005}. Finally, we should
also mention the recent related work of G\'erard Besson et al., (cf.
\cite{BBBMP2010}). Another proof of Perelman's collapsing theorem
for 3-manifolds has been announced by Kleiner and Lott (cf.
\cite{KL2010}).
For the organization of this paper, we refer to the table of contents
at the beginning. In \autoref{section1}-\ref{section4} below, we mostly
discuss interior points of Alexandrov spaces, unless otherwise
specified.
\section{Brief introduction to Perelman's MCS theory and applications to local Seifert fibration}
\label{section1}
In \S 1-2, we will discuss our proof of Theorem 0.1' for a special
case. In this special case, we assume that the sequence of metric
balls $\{ B_{(M^3_{\alpha},g^{\alpha})}(x_{\alpha},r) \} $ is
convergent to a metric ball $B_{X^2}(x_{\infty},r)$, where
$x_\infty$ is an interior point of $X^2$. Using several known
results of Perelman, we will show that there is a (possibly
singular) circle fibration:
\begin{equation}\label{eq1.01}
S^1\to B_{(M^3_{\alpha},g^{\alpha})}(x_{\alpha},\varepsilon)\to
B_{X^2}(x_{\infty},\varepsilon),
\end{equation}
for some $\varepsilon < r$. In other words, we shall show that
$\hat{B}_{M^3_{\alpha}} (x_{\alpha},\varepsilon)$ looks like a {\it
fat} solid torus with a shrinking core, i.e., $
\hat{B}_{M^3_{\alpha}} (x_{\alpha},\varepsilon) \sim [D^2 \times
S^1_{\varepsilon}] \sim [D^2 \times (\mathbb R/ \varepsilon \mathbb
Z)]$, (see \autoref{fig:1.1} above and \autoref{ex2.0} below).
In fact, using the Conic Lemma (\autoref{thm0.6} above), Kapovitch
\cite{Kap2005} already established a circle-fibration structure over
the annular region $A_{X^2}(x_{\infty},\delta,\varepsilon)$. Let
$\Sigma_x(X)$ denote the space of unit directions of an Alexandrov
space $X$ of curvature $\ge -1$ at point $x$. When $\dim(X)=2$, it
is known (cf. \cite{BGP1992}) that $X^2$ must be a $2$-dimensional
topological manifold. Thus, $\Sigma_x(X^2)$ is a circle, and hence
a $1$-dimensional manifold.
\begin{prop}[\cite{Kap2005}, page 533]\label{prop1.1}
Suppose that $M^n_{\alpha}\xrightarrow{G-H} X$, where $M^n_{\alpha}$
is a sequence of $n$-dimensional Riemannian manifolds with sectional
curvature $\ge -1$. Suppose that there exists $x_{\infty}\in X$ such
that $\Sigma=\Sigma_{x_{\infty}}(X)$ is a closed Riemannian
manifold. Then there exists $r_0=r_0(x_{\infty})$ such that for any
$M_{\alpha}\ni x_{\alpha}\to x_{\infty}$ we have: For any
sufficiently large $\alpha$, and $r \le r_0$, there exists a
topological fiber bundle
$$
S_{\alpha}\to \partial B_{M_{\alpha}} (x_{\alpha},r)\to
\Sigma_{x_{\infty}}(X)
$$
such that
\begin{enumerate}[{\rm (1)}]
\item $S_{\alpha}$ and $\partial B_{M_{\alpha}}(x_{\alpha},r)$ are
topological manifolds;
\item Both $S_{\alpha}$ and $\partial B_{M_{\alpha}}(x_{\alpha},r)$
are connected;
\item The fundamental group $\pi_1(S_{\alpha})$ of the fiber is
almost nilpotent.
\end{enumerate}
\end{prop}
We will use Perelman's fibration theorem and the multiple conic
singularity theory to establish the desired circle fibration over
the annular region $A_{X^2}(x_{\infty},\delta, \varepsilon)$ for
$\delta<\varepsilon$. Our strategy can be illustrated in the
following diagram
\begin{diagram}
B_{M^3_{\alpha}}(x_{\alpha},r)&\rTo^{F_{\alpha}}&\mathbb{R}^2\\
\dTo_{\text{G-H}}&&\dCorresponds\\
B_{X^2}(x_{\infty},r)&\rTo^{F_{\infty}}&\mathbb{R}^2
\end{diagram}
where the sequence of metric balls
$\{B_{(M^3_{\alpha},\hat{g}^{\alpha})}(x_{\alpha},r)\}$ is convergent
to the metric disk $B_{X^2}(x_{\infty},r)$.
If $F_{\alpha}$ were a ``\emph{topological submersion\/}" to its
image, then we would be able to obtain the desired topological
fibration. For this purpose, we will recall Perelman's Fibration
Theorem for non-smooth maps.
\subsection{Brief introduction to Perelman's critical point theory}\label{section1.1}\
\smallskip
We postpone the definition of admissible maps to
\autoref{section1.2}. In \autoref{section1.2}, we will also recall
the notion of {\it regular points} for a sufficiently wide class of
``\emph{admissible mappings\/}" from an Alexandrov space $X^n$ to
Euclidean space $\mathbb R^k$.
Let $F: X^n \to \mathbb R^k$ be an admissible map. The points of an
Alexandrov space $X$ that are not regular are said to be critical
points, and their images in $\mathbb R^k$ are said to be critical
values of $F: X \to \mathbb R^k$. All other points of $\mathbb R^k$
are called regular values.
\begin{theorem}[Perelman's Fibration Theorem \cite{Per1994} page
207]\label{thm1.2}\
\begin{enumerate}[{\rm (A)}]
\item An admissible mapping is open and admits a trivialization
in a neighborhood of each of its regular points.
\item If an admissible mapping has no critical points and is
proper in some domain, then its restriction to this domain is the
projection of a locally trivial fiber bundle.
\end{enumerate}
\end{theorem}
There are several equivalent definitions of Alexandrov spaces of
${\rm curv} \ge k$. Roughly speaking, a length space $X$ is said to have
curvature $\ge 0$ if and only if, for any geodesic triangle $\Delta$
in $X$, the corresponding triangle $\widetilde{\Delta}$ of the same
side-lengths in $\mathbb R^2$ is thinner than $\Delta$.
More precisely, let $M^2_k$ be a simply connected complete surface
of constant sectional curvature $k$. A triangle in a length space
$X$ consists of three vertices, say $\{a,b,c\}$ and three
length-minimizing geodesic segments $\{\overline{ab}, \overline{bc},
\overline{ac}\}$. Let $|ab|$ be the length of $\overline{ab}$. Given
a real number $k$, a comparison triangle $\widetilde{\Delta}^k_
{\tilde{a},\tilde{b},\tilde{c}}$ is a triangle in $M^2_k$ with the
same side lengths. Its angles are called the comparison angles and
denoted by $\widetilde{\measuredangle}^k_a(b,c)$, etc. A comparison
triangle exists and is unique whenever $k\le 0$ or $k>0$ and
$|ab|+|bc|+|ca|<\frac{2\pi}{\sqrt k}$.
\begin{definition}\label{def1.3}
A length space $X$ is called an Alexandrov space of curvature $\ge
k$ if any $x\in X$ has a neighborhood $U_x$ such that, for any
$\{a,b,c,d\}\subset U_{x}$, the following inequality holds:
$$
\widetilde{\measuredangle}^k_a(b,c)+\widetilde{\measuredangle} ^
k_a(c,d)+\widetilde{\measuredangle}^k_a(d,b)\le 2\pi.
$$
\end{definition}
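As a simple illustration of \autoref{def1.3} (ours, not taken from the literature cited above), consider the flat cone of total angle $\theta$, obtained by gluing the two boundary rays of a planar sector of angle $\theta$; we sketch the comparison inequality at the vertex $o$ under the simplifying assumption stated in the comments.

```latex
% Illustration (ours). Assume b, c, d lie at equal distance r on three
% rays from the vertex o, dividing the circle of directions into arcs
% of cone angles \phi_1 + \phi_2 + \phi_3 = \theta, each at most \pi.
% Then |bc| = 2r\sin(\phi_1/2), so each comparison angle at o equals
% the corresponding cone angle, and
\[
\widetilde{\measuredangle}^0_o(b,c)
 + \widetilde{\measuredangle}^0_o(c,d)
 + \widetilde{\measuredangle}^0_o(d,b)
 = \phi_1 + \phi_2 + \phi_3 = \theta .
\]
% Hence the inequality of the definition holds at o precisely when
% \theta \le 2\pi: cones of angle at most 2\pi have curvature \ge 0,
% while cones of larger angle do not.
```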
Alexandrov spaces with ${\rm curv} \ge k$ have several nice
properties, (cf. \cite{BGP1992}). For instance, the dimension of an
Alexandrov space $X$ is either an integer or infinite. Moreover, for
any $x\in X$, there is a well defined tangent cone $T_x^-(X)$ along
with an ``inner product" on $T_x^-(X)$.
In fact, if $X$ is an Alexandrov space with the metric $d$, then we
denote by $\lambda X$ the space $(X,\lambda d)$. Let
$i_{\lambda}:\lambda X \to X$ be the canonical map. The
Gromov-Hausdorff limit of pointed spaces $\{(\lambda X, x)\}$ for
$\lambda \to \infty$ is the tangent cone $T_x^-(X)$ at $x$, (see
$\S$7.8.1 of \cite{BGP1992}).
For any function $f: X\to \mathbb R$, the function $d_xf:
T_x^-(X)\to \mathbb R$ such that
$$
d_xf=\lim_{\lambda\to+\infty} \frac{f\circ
i_{\lambda}-f(x)}{1/\lambda}
$$
is called the differential of $f$ at $x$.
Let us now recall the notion of regular points for distance
functions.
\begin{definition}[\cite{GS1977}, \cite{Gro1981}]\label{def1.4}
Let $A\subset X$ be a closed subset of an Alexandrov space $X$ and
$f_A(x)={\rm d}(x,A)$ be the corresponding distance function from $A$.
A point $x\not\in A$ is said to be a regular point of $f_A$ if there
exists a non-zero direction $\vec{\xi}\in T_x^-(X)$ such that
\begin{equation}\label{eq1.1}
d_xf_A(\vec{\xi})>0.
\end{equation}
\end{definition}
It is well-known that if $X$ has ${\rm curv} \ge0$ then $f(x) =
\frac{1}{2} [{\rm d}(x,p)]^2$ has the property that $\text{Hess}(f)\le
I$, (see \cite{Petr2007}). To explain such an inequality, we recall
the notion of semi-concave functions.
\begin{definition}[\cite{Per1994} page 210]\label{def1.5}
A function $f: X\to \mathbb R$ is said to be $\lambda$-concave in an
open domain $U$ if for any length-minimizing geodesic segment
$\sigma: [a,b]\to U$ of unit speed, the function
$$
f\circ \sigma (t) - \lambda t^2 /2
$$
is concave.
\end{definition}
When $f$ is 1-concave, we say that $\text{Hess}(f)\le I$. It is
clear that if $f: U\to \mathbb R$ is a semi-concave function, then
$$d_xf: T_x^-(X)\to \mathbb R$$
is a concave function.
In order to introduce the notion of semi-gradient vector for a
semi-concave function $f$, we need to recall the notion of
``\emph{inner product\/}" on $T_x^-(X)$. For any pair of vectors
$\vec{u}$ and $\vec{v}$ in $T_x^-(X)$, we define
$$
\langle\vec{u},\vec{v}\rangle=\frac12 (|\vec{u}|^2 + |\vec{v}|^2 -
|\vec{u}\vec{v}|^2) = |\vec{u}||\vec{v}|\cos\theta
$$
where $\theta$ is the angle between $\vec{u}$ and $\vec{v}$,
$|\vec{u}\vec{v}|={\rm d}_{ T_x^-(X)}(\vec u, \vec v)$, $|\vec
u|={\rm d}_{ T_x^-(X)}(\vec u, o)$ and $o$ denotes the origin of the
tangent cone.
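As a consistency check (ours), when $T_x^-(X) = \mathbb R^n$ this bracket recovers the usual Euclidean inner product: since $|\vec u\vec v| = |\vec u - \vec v|$ in that case, the law of cosines gives

```latex
\[
|\vec u - \vec v|^2 = |\vec u|^2 + |\vec v|^2 - 2|\vec u||\vec v|\cos\theta
\quad\Longrightarrow\quad
\tfrac12\left(|\vec u|^2 + |\vec v|^2 - |\vec u\vec v|^2\right)
 = |\vec u||\vec v|\cos\theta = \vec u\cdot\vec v .
\]
```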
\begin{definition}[\cite{Petr2007}]\label{def1.6}
For any given semi-concave function $f$ on $X$, a vector
$\vec{\eta}\in T_x^-(X)$ is called a gradient of $f$ at $x$ (in
short $\vec{\eta}=\nabla f$) if
\begin{enumerate}[{\rm (i)}]
\item
$d_xf(\vec v) \le \langle\vec{\eta},\vec v\rangle$ for any $\vec v
\in T_x^-(X) $ ;
\item
$d_xf(\vec{\eta})=|\vec{\eta}|^2$.
\end{enumerate}
\end{definition}
It is easy to see that any semi-concave function has a uniquely
defined gradient vector field. Moreover, if $d_xf(\vec v)\le 0$ for
all $\vec v\in T_x^-(X)$, then $\nabla f|_x=0$. In this case, $x$ is
called a critical point of $f$. Otherwise, we set
$$
\nabla f= d_x f(\vec{\xi})\vec{\xi}
$$
where $\vec{\xi}$ is the (necessarily unique) unit vector for which
$d_xf$ attains its positive maximum on $\Sigma_x(X)$, the space of
directions of $X$ at $x$.
\begin{prop}[\cite{Per1994}, \cite{PP1994}]\label{prop1.7}
Let $X^n$ be a metric space with curvature $\ge -1$ and $\hat x $ be
an interior point of $X^n$. Then there exists a strictly concave
function $h: B(\hat x, r) \to (-\infty, 0]$ such that (1) $h(\hat x)
= 0$ and $B(\hat x, \frac{s}{\lambda}) \subset h^{-1}( (-s, 0])
\subset B(\hat x, \lambda s) $ for $s \le \frac{r}{4\lambda}$; (2)
the distance function $r_{\hat x}(y)$ has no critical point in the
punctured ball $[B_X(\hat x, \varepsilon) - \{\hat x\}]$, for some
$\{\varepsilon, r, \lambda\}$ depending on $\hat x$.
\end{prop}
\begin{proof}
(1) The construction of the strictly concave function $h$ described
above is available in the literature (see \cite{GW1997},
\cite{Kap2002}). In fact, let $f_{\delta'}: X \to \mathbb{R}$ be
defined as on page 129 of \cite{Kap2002} for $\delta' < \delta$. We
choose $h(x) = f_{\delta'}(x) - 1$. Kapovitch showed that the
inequality
$$
1 \le \frac{{\rm d}(x, \hat x) }{t} \le \frac{1}{\cos (3\delta)}
$$
holds for $x \in h^{-1}(-t)$ and $t \ll \delta' < \delta$, (see page
132 of \cite{Kap2002}). Thus, there exists a $\lambda$ such that
$B(\hat x, \frac{s}{\lambda}) \subset h^{-1}( (-s, 0])
\subset B(\hat x, \lambda s) $ for $s \le \frac{r}{4\lambda}$.
(2) For the convenience of readers, we add the following alternative
proof of the second assertion. Let us recall that the tangent cone
$(T^-_{\hat x}(X^n), O)$ is the Gromov-Hausdorff limit of the
pointed re-scaled spaces $\{(\lambda X, \hat x)\}_{\lambda\ge 0}$ as
$\lambda \to +\infty$, i.e.
$$(\lambda X, \hat x)\to (T^-_{\hat x}X, O)$$
as $\lambda \to +\infty$, where $O$ is the apex of the tangent cone
$T^-_{\hat x}X$. Let ${\rm d}_{O, T^-_{\hat x}X}(\eta)={\rm d}_{T^-_{\hat
x}X}(O, \eta)$ and ${\rm d}_{\hat x, \lambda X}(y)={\rm d}_{\lambda X}(y,
\hat x)=\lambda {\rm d}_{X}(y, \hat x)$. We consider
$f_{\lambda}(y)=\frac12 ({\rm d}_{\hat x, \lambda X}(y))^2$. By
an equivalent definition of curvature $\ge -1$,
$\{f_{\lambda}\}_{\lambda \ge 1}$ and $f_{\infty}$ are semi-concave
functions.
Lemma 1.3.4 of \cite{Petr2007} implies that if $p_\lambda \to
p_\infty$ as $\lambda \to \infty$ then $\liminf_{\lambda \to
\infty} |\nabla f_\lambda|(p_\lambda) \ge |\nabla f_{\infty}|(
p_\infty)$.
Let $A_M(x, r, R) = [\overline{B_M(x,R)}-B_M(x,r)]$ be an annular
region. Our energy function $f_{\infty}(\eta)=\frac12 |\eta|^2$ has
the property that $|\nabla f_{\infty}| \ge \frac 18 $ on
$A_{T^-_{\hat x}X}(0, \frac12, 1)$. It follows from Lemma 1.3.4 of
\cite{Petr2007} that, for sufficiently large $\lambda\ge \lambda_0
>1$, the function $f_{\lambda}$ has no critical point on the annular
region
$$
A_{\lambda X}(\hat x, \frac12, 1) =A_{X}(\hat x,
\frac{1}{2\lambda},\frac {1}{\lambda}).
$$
Since we have
$$
[B_{X}(\hat x, \frac {1}{\lambda_0})-\{\hat x\}] = \cup_{\lambda\ge
\lambda_0}A_{X} (\hat x, \frac{1}{2\lambda},\frac {1}{\lambda}),
$$
we conclude that the radial distance function has no critical point
on the punctured ball $[B_X(\hat x, \varepsilon) - \{\hat x\}]$.
\end{proof}
\subsection{Regular values of admissible maps}\label{section1.2}\
\smallskip
In this subsection, we recall explicit definitions of admissible
mappings and their regular points introduced by Perelman.
\begin{definition}[\cite{Per1994},\cite{Per1997} admissible maps]\label{def1.9}\ \,
{\rm (1)} Let $X^n$ be a complete Alexandrov space of dimension $n$
and ${\rm curv}_{X^n} \ge c$ and $U \subset X^n$. A function $f: U \to
\mathbb R$ is called admissible if $f(x)=\sum_{i=1}^m \phi_i
({\rm d}_{A_i}(x))$ where $A_i\subset X^n$ is a closed subset and
$\phi_i: \mathbb R \to \mathbb R$ is continuous.
{\rm (2)} A map $\hat F: X^n \to \mathbb R^k$ is said to be
admissible in a domain $U\subset X^n$ if it can be represented as
$\hat F=G\circ F$, where $G:\mathbb R^k\to \mathbb R^k$ is a
bi-Lipschitz homeomorphism and each component $f_i$ of
$F=(f_1,f_2,\dots,f_k)$ is admissible.
\end{definition}
The definition of regular points for admissible maps $\hat F: X^n
\to \mathbb R^k$ on general Alexandrov spaces is rather technical.
For the purpose of this paper, we only need to consider two lower
dimensional cases of $X^n$: either $X^3$ is a smooth Riemannian
$3$-manifold or $X^2$ is a surface with curvature $\ge c$.
\begin{definition}[Regular points of admissible maps for $\dim \le
3$]\label{def1.10} \
{\rm (1)} Suppose that $\hat F: M^3 \to \mathbb R^2$ is an
admissible map from a smooth Riemannian $3$-manifold $M^3$ to
$\mathbb R^2$ on a domain $U\subset M$ and $\hat F=G\circ F$, where
$G:\mathbb R^2\to \mathbb R^2$ is a bi-Lipschitz homeomorphism and
each component $f_i$ of $F=(f_1,f_2)$ is admissible. If $\{ \nabla
f_1, \nabla f_2 \}$ are linearly independent at $x \in U$, then $x$
is said to be a regular point of $\hat F$.
{\rm (2)} (\cite{Per1994} page 210). Suppose that $\hat F: X^2 \to
\mathbb R^2$ is an admissible map from an Alexandrov surface $X^2$
of curvature ${\rm curv} \ge c$ to $\mathbb R^2$ on a domain $U\subset
X^2$ and $\hat F=G\circ F$, where $G:\mathbb R^2\to \mathbb R^2$ is
a bi-Lipschitz homeomorphism and each component $f_i$ of $F=(f_1,f_2)$
is admissible. Suppose that $f_1$ and $f_2$ satisfy the following
conditions:
\begin{enumerate}[{\rm (2.a)}]
\item $\langle\nabla f_1,\nabla f_2\rangle _q<-\varepsilon<0$;
\item There exists $\vec{\xi}\in T^-_q(X^2)$ such that
$\min\{d_qf_1(\vec{\xi}),d_qf_2(\vec{\xi})\}>\varepsilon>0$.
\end{enumerate}
Then $q$ is called a regular point of $\hat F |_U$.
\end{definition}
\begin{remark}
It is clear that Perelman's condition (2.a) implies that
${\rm diam}(\Sigma_q(X^2))>\frac{\pi}{2}$. This together with (2.b)
implies that
$$
\frac{\pi}{2} < \measuredangle_q(\nabla f_1,\nabla f_2)<\pi.
$$
Conversely, we would like to point out that if
${\rm diam}(\Sigma_q(X^2))>\frac{\pi}{2}$, then there exists an
admissible map $F=(f_1,f_2): U_q \to\mathbb R^2$ satisfying
Perelman's condition (2.a) and (2.b) mentioned above, where $U_q$ is
a small neighborhood of $q$ in $X^2$. \qed
\end{remark}
We need to single out ``\emph{bad points\/}" (i.e., essential
singularities) for which the condition
$$
{\rm diam}(\Sigma_q(X^2))>\frac{\pi}{2}
$$
fails. These bad points are related to the so-called extremal
subsets (or essential singularities) of an Alexandrov space with
curvature $\ge c$.
\begin{definition}[Extremal points of an Alexandrov surface]\label{def1.12}
Let $X^2$ be an Alexandrov surface and $z $ be an interior point of
$X^2$. If the space of unit tangent directions
$\Sigma_z(X^2)$ has diameter less than or equal to $\frac{\pi}{2}$,
i.e.
\begin{equation}\label{eq1.10}
{\rm diam}(\Sigma_z(X^2))\le\frac{\pi}{2},
\end{equation}
then $z$ is called an extremal point of the Alexandrov surface
$X^2$. If ${\rm diam}(\Sigma_z(X^2))>\frac{\pi}{2}$, then we say that $z$
is a regular point of $X^2$.
\end{definition}
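A standard example (ours): the vertex of the flat cone $\mathbb R^2/\mathbb Z_k$ is an extremal point for every $k \ge 2$, since its space of directions is a short circle.

```latex
% For X^2 = \mathbb R^2/\mathbb Z_k (a flat cone of angle 2\pi/k),
% the space of directions at the vertex \bar 0 is a circle of length
% 2\pi/k, whose diameter is half its length:
\[
{\rm diam}\big(\Sigma_{\bar 0}(\mathbb R^2/\mathbb Z_k)\big)
 = \frac{\pi}{k} \le \frac{\pi}{2}
 \qquad (k \ge 2),
\]
% so \bar 0 is extremal for k \ge 2, while every smooth point (and the
% case k = 1, i.e. \mathbb R^2 itself) is regular.
```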
A direct consequence of \autoref{thm0.6} (i.e., \autoref{prop1.7})
is the regularity of a sufficiently small punctured disk in an
Alexandrov surface.
\begin{cor}\label{cor1.12}
Let $X^2$ be an Alexandrov space of curvature $\ge -1$ and $\delta$
be as in \autoref{thm0.6}. Then each point $y\in [B_{X^2}(\hat
x,\delta)-\{\hat x\}]$ in the punctured disk is regular.
\end{cor}
We recall the Perelman-Sharafutdinov gradient semi-flows for
semi-concave functions.
\begin{definition}\label{def1.13}
A curve $\phi:[a,b]\to X$ is called an $f$-gradient curve if for any
$t\in [a,b]$
\begin{equation}\label{eq1.10(2)}
\frac{d^+\phi}{dt}=\nabla f|_{\phi(t)}.
\end{equation}
\end{definition}
It is known that if $f:X\to \mathbb R$ is a semi-concave function
then there exists a unique $f$-gradient curve $\phi:[0,+\infty) \to
X$ with a given initial point $\phi(0)=p$, (cf. Prop 2.3.3 of
\cite{KPT2009}). We will frequently use the following result of
Perelman (cf. \cite{Per1994}) and Perelman-Petrunin (cf.
\cite{PP1994}).
\begin{prop}[Lemma 2.4.2 of \cite{KPT2009}, Lemma 1.3.4 of \cite{Petr2007}]\label{prop1.14}
Let $X_{\alpha}\to X_{\infty}$ be a sequence of Alexandrov spaces of
curvature $\ge -1$ which converges to an Alexandrov space
$X_{\infty}$. Suppose that $f_{\alpha}\to f_{\infty}$ where
$f_{\alpha}: X_{\alpha}\to \mathbb R$ is a sequence of
$\lambda$-concave functions and $f_{\infty}: X_{\infty}\to \mathbb
R$. Assume that $\psi_\alpha:[0,+\infty)\to X_{\alpha}$ is a
sequence of $f_{\alpha}$-gradient curves with
$\psi_{\alpha}(0)=p_{\alpha}\to p_{\infty}$ and
$\psi_{\infty}:[0,+\infty)\to X_{\infty}$ be the $f_{\infty}$-gradient curve
with $\psi_{\infty}(0)=p_{\infty}$. Then the following hold:
{\rm (1)} for each $t \ge 0$, we have
$$\psi_{\alpha}(t)\to \psi_{\infty}(t)$$
as $\alpha \to \infty$;
{\rm (2)} $\liminf_{\alpha \to \infty} |\nabla f_\alpha|(p_\alpha)
\ge |\nabla f_{\infty}|( p_\infty)$. Consequently, if $\{q_\alpha\}$
is a bounded sequence of critical points of $f_\alpha$, then
$\{q_\alpha\} $ has a subsequence converging to a critical point
$q_\infty$ of $f_\infty$.
\end{prop}
As we pointed out earlier, the pointed spaces $\{ (\frac{1}
{\varepsilon} X, x)\}$ converge to the tangent cone of $X$ at $x$,
i.e., $(\frac{1}{\varepsilon}X, x)\to(T_x^-(X), 0)$ as
$\varepsilon\to 0$, where $0$ is the origin of tangent cone.
When $X^2$ is an Alexandrov surface of curvature $\ge -1$, it is
known that $X^2$ is a $2$-dimensional manifold. Moreover we have the
following observation.
\begin{prop}[\cite{Per1994}]\label{prop1.15}
Let $X$ be an Alexandrov space of curvature $\ge -1$. Suppose that
$\hat x$ is an interior point of $X$. Then $B_{X}(\hat x,
\varepsilon)$ is homeomorphic to $B_{T^-_{\hat x}X}(0,\varepsilon)$,
where $\varepsilon$ is given by \autoref{prop1.7}. Furthermore,
there exists an admissible map
$$G: T^-_{\hat x}(X^2)\to \mathbb R^2$$
such that $G$ is a bi-Lipschitz homeomorphism and $G$ is regular at
$\vec v \ne 0$.
\end{prop}
\begin{proof}
This is an established result of Perelman, (cf. \cite{Per1994},
\cite{Kap2007}). We provide a short proof here only for the
convenience of readers. Let us first prove that $B_{X}(\hat x,
\varepsilon)$ is homeomorphic to $B_{T^-_{\hat x}X}(0,\varepsilon)$,
where $\varepsilon$ is given by \autoref{prop1.7}. Recall that
$\{(\frac{1}{\delta} X, \hat x)\}$ is convergent to $ (T^-_{\hat
x}X, O)$, as $\delta \to 0$. By Perelman's stability theorem (cf.
Theorem 7.11 of \cite{Kap2007}), $B_{\frac{1}{\delta}X}(\hat x, 1)$
is homeomorphic to $B_{T^-_{\hat x}X}(0, 1)$ for sufficiently small
$\delta$. Thus, $B_{X}(\hat x, \delta)$ is homeomorphic to
$B_{T^-_{\hat x}X}(0,\delta)$.
By \autoref{prop1.7}, the function $r_{\hat x}(y) = {\rm d}_X(y, \hat
x)$ has no critical point in the punctured ball $[B_{X}(\hat x,
\varepsilon) -\{ \hat x\}]$. Thus, we can apply Perelman's fibration
theorem to the following diagram:
$$
\Sigma_{\hat x}(X) \to A_{X^2}(\hat x, \delta/2, \varepsilon )
\stackrel{r_{\hat x}}{\longrightarrow} (\delta/2, \varepsilon).
$$
Consequently, we see that $A_{X^2}(\hat x, \frac{\delta}{2},
\varepsilon )$ is homeomorphic to a cylinder $C=(\partial
B)\times(\frac{\delta}{2},\varepsilon)$. Furthermore, the metric
sphere $(\partial B)$ is homeomorphic to $\Sigma_{\hat x}(X)$. It
follows that the metric ball $B_{X}(\hat x, \varepsilon) \sim [
B_{T^-_{\hat x}X}(0,\delta) \cup C]$ is homeomorphic to
$B_{T^-_{\hat x}X}(0,\varepsilon)$, where $\varepsilon$ is given by
\autoref{prop1.7}.
It remains to construct the desired map $G$. If
$\theta=\frac{1}{3}{\rm diam}(\Sigma_{\hat x}(X^2))$, then $\theta\le
\pi/3$. Let us choose six vectors $\{\vec{\xi}_1,\vec{\xi}_2,
\vec{\xi}_3, \vec{\xi}_4, \vec{\xi}_5, \vec{\xi}_6\}\subset
\Sigma_{\hat x}(X^2)$ such that $\measuredangle (\vec{\xi}_i,
\vec{\xi}_j)=\theta$ for $i=j+1$ or $\{i=1, j=6 \}$. It is easy to
construct an affine map $H$ from a Euclidean sector of angle $\theta$
to a Euclidean sector of angle $\pi/3$. In fact, we can first
isometrically embed a Euclidean sector as $\{(x,y)| x\ge 0, y\ge 0,
0 < \varepsilon_1 \le \arctan ( \frac{y}{x}) \le \theta +
\varepsilon_1 \} \subset \mathbb R^2 $. We can then choose $h_1(x,y)=x$,
$h_2(x,y)=\lambda y$, (see \autoref{lifted_regular_graph}).
Each affine map $H=(h_1,h_2)$ has height functions (distance
functions from axes) up to scaling factors as its components. We can
arrange the Euclidean sectors in an appropriate order so that $H$ is
admissible. For instance, we could interchange the roles of $h_1$ and
$h_2$ for adjacent sectors so that, by gluing the six Euclidean sectors
together, we can recover $T_{\hat x}(X^2)$ and construct an
admissible map $G: T_{\hat x}(X^2)\to \mathbb R^2$.
\end{proof}
\begin{figure*}[ht]
\includegraphics[width=200pt]{section1_2.pdf}\\
\caption{The construction of a regular map from $S_i\cap
A_{X^2}(x_{\infty},\delta,\varepsilon)$.}\label{lifted_regular_graph}
\end{figure*}
Recall that $\lim_{\lambda \to \infty} (\lambda X, x) = (T_x^-(X),
O)$. By lifting the admissible map $G: T_x(X^2) \to \mathbb{R}^2 $
to $F: \lambda X^2 \to \mathbb{R}^2$, we have the following result.
\begin{cor}\label{cor1.16}
Let $X^2$ be an Alexandrov surface of curvature $\ge -1$ and $\hat
x$ be an interior point. Then there exist sufficiently small
$\varepsilon>\delta>0$ and admissible maps
$$
F_{\infty, i}: X^2 \to \mathbb R^2
$$
such that $F_{\infty, i}$ is regular on $S_i\cap A_{X^2} (\hat
x,\delta, \varepsilon)$ for $i = 1, 2, \cdots, 6$, where $S_i
\subset X^2 $ is the lift of the Euclidean sector bounded by
$\{\vec\xi_i, \vec \xi_{i+1}\}$ from $T^-_{\hat x}X^2$ to $X^2$.
\end{cor}
\begin{proof}
Let $\{h_1, h_2\}$ and $\{\vec{\xi}_1,\vec{\xi}_2, \vec{\xi}_3,
\vec{\xi}_4, \vec{\xi}_5, \vec{\xi}_6\} \subset \Sigma_{\hat
x}(X^2)$ be as in the proof of \autoref{prop1.15}. We choose a
$1$-parameter family of length-minimizing geodesic segments $\hat
\gamma_{i, \lambda}: [0, 2] \to \lambda X^2$ such that $\{ \hat
\gamma_{i, \lambda} \}$ converge to $\vec\xi_i $ in $T_{\hat
x}(X^2)$, as $\lambda \to \infty$. We also choose another
length-minimizing geodesic segment $\tilde \gamma_{i, \lambda}: [0,
2\delta] \to \lambda X^2$ outside the geodesic hinge $\{\hat
\gamma_{i, \lambda}, \hat \gamma_{i+1, \lambda}\}$ such that $ 0 <
\varepsilon_1/2 \le \angle_{\hat x} ( \tilde \gamma_{i,
\lambda}'(0), \hat \gamma_{i, \lambda}'(0)) \le \varepsilon_1$ (see
\autoref{lifted_regular_graph} above). We consider the lifted distance
functions
$$r_{ \lambda X^2}(y ) ={\rm d}_{ \lambda X^2}(\hat x, y)$$
and
$$f_{\lambda,i}(y) ={\rm d}_{ \lambda X^2}(\tilde \gamma_{i, \lambda},y).$$
It follows from \autoref{prop1.14} that two functions $\{r_{\lambda
X^2}, f_{\lambda,i}\}$ are regular on $S_i\cap A_{\lambda X^2}(\hat
x, \frac{1}{2}, 1)$ for sufficiently large $\lambda$.
Choosing a sufficiently large $\lambda_0$, we consider the map
$$F_{\infty, i}(\cdot) =(\lambda_0 {\rm d}_{X^2}(\hat x, \cdot),
\lambda_0 {\rm d}_{X^2}(\tilde \gamma_i, \cdot)).$$ It follows from
\autoref{prop1.14} that $F_{\infty, i}$ is regular on the curved
trapezoid-like region $S_i\cap A_{X^2}(\hat x,\delta, \varepsilon)$,
(see \autoref{lifted_regular_graph}).
\end{proof}
Recall that there is a sequence $\{M^3_{\alpha}\}$ convergent to
$X_{\infty}^2$. We conclude \autoref{section1} by the following
circle-fibration theorem.
\begin{theorem}\label{thm1.17}
Suppose that a sequence of pointed $3$-manifolds
$\{(M^3_{\alpha},x_{\alpha})\}$ is convergent to a $2$-dimensional
Alexandrov surface $(X_{\infty}, x_{\infty})$ such that $x_{\infty}$
is an interior point of $X^2_{\infty}$. Suppose that
$$ F_{\infty}:
A_{X^2}(x_{\infty},\delta,\varepsilon) \to A_{\mathbb R^2}(0,\delta,
\varepsilon)
$$
is a regular map as above. Then there exist maps
$$F_{\alpha}: A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha},
\delta, \varepsilon) \to A_{\mathbb R^2}(0,\delta, \varepsilon)$$
such that $F_{\alpha}$ is regular for sufficiently large $\alpha$.
Moreover, the map $F_\alpha$ gives rise to a circle fibration:
$$S^1\hookrightarrow A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha},
\delta, \varepsilon) \to A_{X^2}(x_{\infty},\delta,\varepsilon)$$
for sufficiently large $\alpha$.
\end{theorem}
\begin{proof}
We will use the same notations as in proofs of \autoref{prop1.15}
and \autoref{cor1.16}. For each $F_{\infty, i}$ constructed in
\autoref{cor1.16}, we consider the map
$$
F_{\infty, i}=({\rm d}_{X^2}(\hat x, \cdot), {\rm d}_{X^2}(\tilde
\gamma_i, \cdot))
$$
up to re-scaling factors, which is regular on $S_i\cap
A_{X^2}(x_{\infty},\delta,\varepsilon)$. Since
$B_{M^3_\alpha}(x_\alpha, r) \to B_{X^2}(x_{\infty}, r)$ as $\alpha \to \infty$,
we can
choose geodesic segment $\tilde \gamma_{i,\alpha}$ in $M^3_{\alpha}$
with $\tilde \gamma_{i,\alpha}\to \tilde \gamma_i$. Then we can
define the map $F_{i,\alpha}$ by
$$F_{i,\alpha}=({\rm d}_{M^3_{\alpha}}(x_{\alpha},\cdot),
{\rm d}_{M^3_{\alpha}}(\tilde \gamma_{i,\alpha}, \cdot)).$$
We further choose $S_{i,\alpha} \sim F_{i,\alpha}^{-1}(S^*_i)
\subset M^3_{\alpha}$ such that $ S_{i,\alpha}\to S_i$ as $\alpha
\to \infty$, where $S^*_i $ is a Euclidean sector of $\mathbb R^2$
described in the proof of \autoref{prop1.15} and \autoref{cor1.16},
(see \autoref{lifted_regular_graph} and \autoref{fig1_3}).
\begin{figure*}[ht]
\includegraphics[width=180pt]{section1_3.pdf}\\
\caption{A regular map from $S_{i,\alpha}\cap
A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha}, \delta,
\varepsilon)$.}\label{fig1_3}
\end{figure*}
It follows from \autoref{prop1.14} that $F_{i,\alpha}$ is
also regular in $S_{i,\alpha}\cap A^3_{(M^3_{\alpha},\hat
g^{\alpha})}(x_{\alpha}, \delta, \varepsilon)$, for sufficiently
large $\alpha$. Using Perelman's fibration theorem (\autoref{thm1.2}
above), we obtain that $F_{\alpha, i}$ defines an $S^1$-fibration on
$S_{i,\alpha}\cap A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha},
\delta, \varepsilon)$. To get a global $S^1$-fibration on
$A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha}, \delta,
\varepsilon)$, we need to glue these local fibration structures
together. We will discuss the details of the gluing procedure in
\autoref{section6} below.
\end{proof}
\section{Exceptional orbits, Cheeger-Gromoll-Perelman soul
theory and Perelman-Sharafutdinov flows}\label{section2}
\setcounter{theorem}{-1}
We begin with an example of collapsing manifolds with exceptional
orbits of a circle action on a solid torus.
\begin{example}[\cite{CG72}]\label{ex2.0}
Let $\mathbb R\times D^2=\{(x,y,z)| y^2+z^2\le 1\}$ be an infinitely
long $3$-dimensional cylinder. We consider the isometry
$\psi_{\varepsilon,m_0}: \mathbb R^3\to \mathbb R^3$ given by
\begin{equation*}
\psi_{\varepsilon,m_0}: \quad \left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right) \to \left(
\begin{array}{c}
\varepsilon \\
0\\
0 \\
\end{array}
\right)+ \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos\frac{2\pi}{m_0} & -\sin\frac{2\pi}{m_0} \\
0 & \sin\frac{2\pi}{m_0} & \cos\frac{2\pi}{m_0}\\
\end{array}
\right) \left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right)
\end{equation*}
where $m_0$ is a fixed integer $\ge 2$. Let
$\Gamma_{\varepsilon,m_0}=\langle\psi_{\varepsilon,m_0}\rangle$ be
the subgroup generated by $\psi_{\varepsilon,m_0}$. It is clear that
the following equation
\begin{equation*}
\psi_{\varepsilon,m_0}^{m_0} \left(
\begin{array}{c}
x \\
y \\
z \\
\end{array}
\right)= \left(
\begin{array}{c}
x+m_0\varepsilon \\
y \\
z \\
\end{array}
\right)
\end{equation*}
holds. The quotient space $M^3_{\varepsilon,m_0}= (\mathbb R\times
D^2)/ \Gamma_{\varepsilon, m_0}$ is a solid torus. Let
$$
G_{\varepsilon,m_0}: \mathbb R\times D^2 \to M^3_{\varepsilon,m_0}
$$
be the corresponding quotient map. The orbit $\mathscr
{O}_{0,\varepsilon}=G_{\varepsilon,m_0}(\mathbb R\times\{0\})$ is an
exceptional orbit in $M^3_{\varepsilon,m_0}$. It is clear that such
an exceptional orbit $\mathscr{O}_{0,\varepsilon}$ has non-zero
Euler number. As $\varepsilon\to 0$, the solid tori
$M^3_{\varepsilon,m_0}$ converge to $X^2=D^2/\mathbb Z_{m_0}$,
where $\mathbb Z_{m_0}=\langle\hat {\psi}_{m_0}\rangle$ is the
subgroup generated by $\hat{\psi}_{m_0}: \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)\to \left(
\begin{array}{cc}
\cos\frac{2\pi}{m_0} & -\sin \frac{2\pi}{m_0} \\
\sin \frac{2\pi}{m_0} & \cos\frac{2\pi}{m_0} \\
\end{array}
\right) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right). $
\end{example}
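The screw-motion structure of this example can be checked numerically. The following sketch (the helper names \texttt{psi} and \texttt{iterate} are ours, not from the text) verifies that $\psi_{\varepsilon,m_0}^{m_0}$ is the pure translation by $m_0\varepsilon$ along the $x$-axis, while a point on the core axis returns to itself (mod translation) after a single application of $\psi_{\varepsilon,m_0}$; this is why the core orbit closes up $m_0$ times faster and is exceptional in the quotient.

```python
import math

def psi(p, eps=0.1, m0=3):
    """One screw motion: translate by eps along x, rotate by 2*pi/m0 in (y,z)."""
    x, y, z = p
    c, s = math.cos(2 * math.pi / m0), math.sin(2 * math.pi / m0)
    return (x + eps, c * y - s * z, s * y + c * z)

def iterate(p, n, **kw):
    """Apply psi n times."""
    for _ in range(n):
        p = psi(p, **kw)
    return p

eps, m0 = 0.1, 3
p = (0.0, 0.5, 0.2)
q = iterate(p, m0, eps=eps, m0=m0)
# psi^{m0} is the pure translation by m0*eps along the x-axis
assert all(abs(a - b) < 1e-12 for a, b in zip(q, (p[0] + m0 * eps, p[1], p[2])))
# a point on the core axis is moved only along the axis after ONE step
ax = iterate((0.0, 0.0, 0.0), 1, eps=eps, m0=m0)
assert all(abs(a - b) < 1e-12 for a, b in zip(ax, (eps, 0.0, 0.0)))
```

The generic orbit through $p$ has length $m_0\varepsilon$, while the core orbit has length $\varepsilon$, matching the quotient picture $D^2/\mathbb Z_{m_0}$ in the limit.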
Let us now return to the diagram constructed in the previous section:
\begin{diagram}
A^3_{(M^3_{\alpha},\hat g^{\alpha})}(x_{\alpha},
\delta, \varepsilon) &\rTo^{F_{\alpha}}&\mathbb{R}^2\\
\dTo_{\text{G-H}}&&\dCorresponds\\
A_{X^2}(x_{\infty},\delta,\varepsilon) &\rTo^{F_{\infty}}&\mathbb{R}^2
\end{diagram}
Among other things, we shall derive the following theorem.
\begin{theorem}[Solid tori around exceptional circle orbits]\label{thm2.1}
Let $F_\alpha$, $F_\infty$, $A^3_{(M^3_{\alpha},\hat
g^{\alpha})}(x_{\alpha}, \delta, \varepsilon)$ and $A_{X^2}
(x_{\infty}, \delta,\varepsilon)$ be as in \autoref{section1} and the
diagram above. Then there is a $\delta^* > 0$ such that $
B_{M^3_{\alpha}}(x_\alpha, \delta^*) $ is homeomorphic to a solid
torus for sufficiently large $ \alpha$. Moreover, a finite normal
cover of $ B_{M^3_{\alpha}}(x_\alpha, \delta^*)$ admits a free
circle action.
\end{theorem}
We will establish \autoref{thm2.1} by using
Cheeger-Gromoll-Perelman soul theory for {\it singular metrics} on
open $3$-dimensional manifolds with non-negative curvature
(compare \cite{SY2000}). The proof will take several steps.
\subsection{A scaling argument and critical points in collapsed
regions}\label{section2.1}\
\smallskip
We start with the following observation.
\begin{prop}[cf. \cite{SY2000}, \cite{Yama2009}]\label{prop2.2}
Let $\{(B_{(M^3_{\alpha},\hat
g^{\alpha})}(x_{\alpha},\varepsilon),x_{\alpha})\}$ be a sequence of
metric balls convergent to $(B_{X^2}(x_{\infty}, \varepsilon),
x_{\infty})$ as above. Then, there is another sequence of points
$\{x'_\alpha\}$ such that $ x'_\alpha \to x_\infty$ as $M^3_\alpha
\to X^2$ and, for sufficiently large $\alpha$, the following statements hold:
\begin{enumerate}[{\rm (1)}]
\item
$\partial B_{(M^3_{\alpha},\hat
g^{\alpha})}(x'_{\alpha},\varepsilon)$ is homeomorphic to a quotient of the
torus $T^2$;
\item
There exists $0<\delta_{\alpha}<\varepsilon$ such that there is no
critical point of $r_{x'_{\alpha}}(z)={\rm d}(z,x'_{\alpha})$ for $z\in
A_{(M^3_{\alpha},\hat g^{\alpha})}(x'_{\alpha},\delta_{\alpha},
\varepsilon)$;
\item There is the furthest critical point $z_{\alpha}$ of the
distance function $r_{x'_{\alpha}}$ in $B_{(M^3_{\alpha},\hat
g^{\alpha})}(x'_{\alpha},\delta_{\alpha})$ with
$\lambda_{\alpha}=r_{x'_{\alpha}}(z_{\alpha}) \to 0$ as $\alpha \to
\infty$.
\end{enumerate}
\end{prop}
\begin{proof}
This result can be found in Shioya-Yamaguchi's paper \cite{SY2000}.
For the convenience of readers, we reproduce a proof inspired by
Perelman and Yamaguchi (cf. \cite{Per1994}, \cite{Yama2009}) with appropriate modifications.
Our choices of the desired points $\{ x'_{\alpha}\}$
are related to certain averaging distance functions $\{h_\alpha\}$ described below.
Let us first construct the limit function $h_\infty$ of the sequence $\{h_\alpha\}$.
Similar constructions related to $h_\infty$ can be found in
the work of Perelman, Grove and others (see \cite{Per1994} page 211, \cite{PP1994} page 223, \cite{GW1997} page 210, \cite{Kap2002}, \cite{Yama2009}).
Let $\Uparrow_p^A$ denote the set of directions of geodesics from
$p$ to $A$ in $\Sigma_p$. It is clear that if we choose $Y^2 = T_{
x_\infty}(X^2 )$ and if $o= o_{ x_\infty}$ is the apex of $T_{
x_\infty}(X^2 )$, then $\Uparrow_{o}^{\partial B_{Y}(o, R)} =
\Sigma_o(Y^2)$ for any $R >0$. Recall that $(\lambda X^2, x_\infty)
\to (Y^2, o_{ x_\infty})$ as $\lambda \to \infty$. Suppose that
$X^2$ has curvature $\ge -1$. Applying \autoref{prop1.14}, we see
that, for any $\delta >0$, there is a sufficiently small $r$ such
that
\smallskip
\noindent (2.2.4) {\it The minimal set
$\Uparrow_{x_\infty}^{\partial B_{X^2}(x_\infty, 2r)}$ is
$\delta$-dense in $\Sigma_{x_\infty}(X^2)$.}
We now choose $\delta' = \frac{\delta}{800}$ and $\delta =
\frac{2\pi }{1000}$. Let $\{ q_i\}_{1 \le i \le m}$ be a maximal
$(\delta r)$-separated subset in $ \partial B_{X^2}(x_\infty, r)$
and $ \{ q_{i, j}\}_{1 \le j \le N_i}$ be a $(\delta' r) $-net in $
B_{X^2}(q_i, \delta r) \cap \partial B_{X^2}(x_\infty, r)$.
Yamaguchi (\cite{Yama2009}) considered
$$
f_i (x) = \frac{1}{N_i}\sum_{j= 1}^{N_i} {\rm d}(x, q_{i, j})
$$
for $i = 1, \cdots, m$. Our choice of $h_\infty$ is given by
$$
h_\infty(x) = \min_{1 \le i \le m} \{ f_i(x)\}.
$$
We now
verify that $h_\infty$ has a unique maximum point $ x_\infty$. It is
sufficient to establish
$
h_\infty(x) \le [r - \frac{1}{2} {\rm d}(x, x_\infty)]
$
for all $ x \in B_{X^2}(x_\infty, \frac r2)$. This can be done as
follows. For each $ x \in B_{X^2}(x_\infty, \frac r2) - \{x_\infty\}$, we
choose $i_0$ such that $\angle_{x_\infty}(x, q_{i_0,j }) <
5\delta$ for $j = 1, \cdots, N_{i_0}$. Suppose that $\sigma: [0, d]
\to X^2$ is a length-minimizing geodesic segment of unit speed from
$x_\infty$ to $x$. By triangle comparison theorems, one can show
that if $ 0 < t < {\rm d}(x_\infty, x)$ then $
\angle_{\sigma(t)}(q_{i_0,j}, \sigma'(t)) < \frac{\pi}{4}$ for $j = 1, \cdots, N_{i_0}$. It follows
from the first variational formula that
$$
{\rm d}(\sigma(t), q_{i_0,j }) \le r - (\cos \frac{\pi}{4}) t,
$$
for $j = 1, \cdots, N_{i_0}$. In fact, Kapovitch \cite{Kap2002} observed that ${\rm d}(\sigma(t),
q_{i_0,j }) \le [r - t \cos (3 \delta)] $, (see \cite{Kap2002} page
129 or \cite{GW1997}). It follows that $h_\infty|_{B_{X^2}(x_\infty, \frac r2)}$ has the unique maximum point
$x_\infty$ with $h_\infty(x_\infty) =r$, because $h_\infty(x) = \min_{1 \le i \le m} \{ f_i(x) \}
\le [r - \frac{1}{2} {\rm d}(x, x_\infty)]$ for $x \in B_{X^2}(x_\infty, \frac r2)$.
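The estimate $h_\infty(x) \le r - \frac{1}{2}{\rm d}(x, x_\infty)$ can be tested in the model case of the Euclidean plane. The sketch below (a simplified setting with one net point per cluster, i.e. $N_i = 1$; all names are ours) places a fine net $\{q_i\}$ on the circle $\partial B(0, r)$ and checks that $h(x) = \min_i {\rm d}(x, q_i)$ attains its maximum value $r$ exactly at the center and decays at least linearly, with slope $\frac12$, away from it.

```python
import math

r, m = 1.0, 200                      # radius and number of net points
net = [(r * math.cos(2 * math.pi * i / m), r * math.sin(2 * math.pi * i / m))
       for i in range(m)]

def h(x):
    """Simplified h_infty: minimum over the net of distances to q_i."""
    return min(math.hypot(x[0] - q[0], x[1] - q[1]) for q in net)

assert abs(h((0.0, 0.0)) - r) < 1e-12            # maximum value r at the center
for x in [(0.3, 0.17), (-0.2, 0.4), (0.05, -0.45)]:
    d = math.hypot(*x)
    assert h(x) <= r - 0.5 * d                   # linear decay away from center
```

The averaging over each cluster in the text serves only to make $h_\alpha$ well behaved under convergence; the maximum-point mechanism is already visible in this one-point-per-cluster model.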
Since $ B_{M^3_{\alpha}}(x_{\alpha}, r) \to B_{X^2}(x_{\infty}, r) $ as $\alpha \to \infty$,
we can construct a $ \mu'_\alpha$-approximation $\{ q_{i, j,
\alpha}\} \subset M^3_\alpha$ of $ \{ q_{i, j} \}\subset X^2 $ with $ \mu'_\alpha \to 0$. Let
$$
f_{i, \alpha}(y) = \frac{1}{N_i}\sum_{j= 1}^{N_i} {\rm d}(y, q_{i, j,
\alpha}) \quad \text{ and} \quad h_\alpha(y) = \min_{1 \le i \le m}
\{ f_{i, \alpha}(y)\}.
$$
Let $A_\alpha$ be a local maximum set of $h_\alpha|_{B_{M^3_{\alpha}}(x_{\alpha}, \frac r4)}$ and $x'_\alpha
\in A_\alpha$. Applying \autoref{prop1.14} to
the sequence $\{h_\alpha\} \to h_\infty$, we see that
${\rm diam}(A_\alpha) \to 0$ and $x'_\alpha \to x_\infty$, as
$B_{M^3_{\alpha}}(x_{\alpha}, r) \to B_{X^2}(x_{\infty}, r) $ and $\alpha \to \infty$.
Let $r_\alpha (y) = {\rm d}(y, x'_\alpha)$.
Using \autoref{prop1.14} again, we can show
that there exists a sequence $\delta_\alpha \to 0 $ such that
neither the function $h_\alpha$ nor $r_\alpha$ has any
critical points in the annular region $A_{(M^3_{\alpha},\hat g^{\alpha})}(x'_{\alpha},
\delta_{\alpha}, \varepsilon)$, as $B_{M^3_{\alpha}}(x'_{\alpha}, r) \to B_{X^2}(x_{\infty}, r) $. It follows
from \autoref{thm1.17} that the boundary $\partial B_{(M^3_{\alpha},\hat
g^{\alpha})} (x'_{\alpha},t)$ is homeomorphic to $T^2$ or Klein
bottle $T^2/ \mathbb Z^2$ for $t>\delta_{\alpha}$. However, since
$(M^3_{\alpha},\hat g^{\alpha})$ is a Riemannian $3$-manifold,
$\partial B_{(M^3_{\alpha},\hat g^{\alpha})} (x'_{\alpha}, s)$ is
homeomorphic to a $2$-sphere for $s$ less than the injectivity
radius $ \eta_\alpha$ at $x'_{\alpha}$. Thus, we let $\mu_{\alpha} = \min\{\mu'_\alpha, \eta_\alpha \} $ and
$$
\lambda_{\alpha}=\max\{{\rm d}(x'_{\alpha},z_{\alpha})|z_{\alpha}
\text{ is a critical point of } r_{x'_{\alpha}} \text{ in }
B_{(M^3_{\alpha}, \hat g^{\alpha})}(x'_{\alpha},\delta_{\alpha})\}.
$$
Clearly, we have $\lambda_{\alpha}\in
[\mu_{\alpha},\delta_{\alpha}]$. As $\alpha\to+\infty$, we have
$\lambda_{\alpha} \to 0$. This completes the proof of
\autoref{prop2.2}.
\end{proof}
In what follows, we re-choose $ x_\alpha = x'_\alpha$ as in the proof of
\autoref{prop2.2} for each $\alpha$. We now would like to study the
sequence of re-scaled metrics
\begin{equation}\label{eq:2.1}
\tilde{g}^{\alpha}=\frac{1}{\lambda^2_{\alpha}}\hat g^{\alpha}.
\end{equation}
Clearly, the curvature of $\tilde{g}^{\alpha}$ satisfies $
{\rm curv}_{\tilde{g}^{\alpha}} \ge -\lambda^2_{\alpha} \to 0$, as
$\alpha \to \infty$. By passing to a subsequence, we may assume that
the pointed Riemannian $3$-manifolds $\{((M^3_{\alpha},\tilde
g^{\alpha}),x'_{\alpha})\}$ converge to a pointed Alexandrov space
$(Y_{\infty}, y_{\infty})$ with non-negative curvature.
\begin{prop}[Lemma 3.6 of \cite{SY2000}, Theorem 3.2 of \cite{Yama2009}]\label{prop2.3}
Let $\{((M^3_{\alpha},\tilde g^{\alpha}),x'_{\alpha})\}$, $X^2$,
$\{\delta_{\alpha}\},\{\lambda_{\alpha}\}$ and $(Y_{\infty},
y_{\infty})$ be as above. Then $Y_{\infty}$ is a complete,
non-compact Alexandrov space of non-negative curvature. Furthermore,
we have
\begin{enumerate}[{\rm (1)}]
\item $\dim Y_{\infty}=3$;
\item $Y_{\infty}$ has no boundary.
\end{enumerate}
\end{prop}
\begin{proof}
This is an established result of \cite{SY2000} and \cite{Yama2009}.
We outline a proof here only for the convenience of readers, using our proofs of \autoref{prop2.2} above and \autoref{thm2.4} and \autoref{prop2.5} below. The metric $\tilde
g^{\alpha}=\frac{1} {\lambda^2_{\alpha}} \hat g^{\alpha}$ defined above has
curvature $\ge -\lambda^2_{\alpha} \to 0$, as $\alpha \to +\infty$.
By our construction,
the diameter of $Y_{\infty}$ is infinite. Moreover, our Alexandrov
space $Y_{\infty}$ has no boundary.
It remains to show that $\dim Y_{\infty}=3.$ Suppose, to the contrary, that
$\dim(Y_\infty) = 2$. Then, for each $\vec v^* \in \Sigma^1_{y_\infty}(
Y_\infty)$, the subset
$$
\Lambda^\perp_{\vec v^*} = \{ \vec w \in \Sigma^1_{y_\infty}(
Y_\infty) \, | \angle_{y_\infty}(\vec w, \vec v^*) = \frac{\pi}{2}
\}
$$
has at most two elements. We will find a vector $\vec
v^*$ such that $\Lambda^\perp_{\vec v^*}$ has at least $N_{i_0} \ge 100$
elements for some $i_0$, a contradiction to $\dim(Y_\infty) =2$.
Our choice of $\vec v^*$ will be related to a tangent vector to the minimum set $A$ of a convex function
$f: Y_\infty \to \mathbb R$, which we now describe. We will retain the same notations as in
the proof of \autoref{prop2.2} above. Let $r_{x'_\alpha}(z) = {\rm d}_{M^3_{\alpha}}(z, x'_\alpha)$, let $z_\alpha$ be the furthest critical point of
$r_{x'_\alpha}|_{B_{M^3_{\alpha}}(x'_{\alpha}, \frac r4)}$ as above, and let $\bar{z}_\alpha$ be its image in
the scaled manifold $\frac{1}{\lambda_\alpha} M^3_\alpha$. Suppose
that $q_\infty$ is the limit point of a subsequence of $\{
\bar{z}_\alpha \}$. It follows from \autoref{prop1.14} that the
limiting point $q_\infty$ must be a critical point of the distance
function $r_{y_\infty}$ with $r_{y_\infty}(q_\infty)= 1$, where
$r_{y_\infty} (z) = {\rm d}_{Y_\infty}(z, y_\infty)$.
In addition, there exist $(N_1 +\cdots+ N_m)$-many geodesic segments $\{\tilde
\gamma_{i, j, \alpha} \}_{1 \le j \le N_i, 1\le i \le m}$ in
$M^3_{\alpha}$ from $x'_\alpha$ to $q_{i,j, \alpha}$, where $\tilde \gamma_{i,j, \alpha}\to \tilde
\gamma_{i, j, \infty}$ as $B_{M^3_\alpha}(x'_\alpha, r) \to B_{X^2}(x_\infty, r)$ with $\alpha \to \infty$. Let $\bar\gamma_{i,
j, \alpha}: [0, \frac{r }{2\lambda_\alpha}] \to
\frac{1}{\lambda_\alpha} M^3_\alpha$ be the re-scaled geodesic with
starting point $\bar{x}'_\alpha$ in the re-scaled manifold, for $1
\le j \le N_i, 1\le i \le m.$ It can be shown that $\bar\gamma_{i, j, \alpha} \to \bar\gamma_{i,j,
\infty}$ as $\frac{1}{\lambda_\alpha} M^3_\alpha \to Y_\infty$,
after passing to appropriate subsequences of $\{ \alpha\}$.
Therefore, we have $(N_1 + \cdots + N_m)$-many distinct geodesic
rays starting from $y_\infty$ in $Y_\infty$. Let us now consider limiting
Busemann functions:
$$
\tilde{h}_{i, j}(y) = \lim_{t \to \infty}[{\rm d} (y, \bar\gamma_{i,j,
\infty}(t)) - t] \quad \text{and } \quad \hat {h}_i (y) =
\frac{1}{N_i}\sum_{j= 1}^{N_i}\tilde{h}_{i, j}(y).
$$
Since $Y_\infty$ has non-negative curvature, each Busemann function
$(-\tilde{h}_{i, j})$ is a convex function, (see \cite{CG72}, \cite{Wu79} or Theorem 2.4 below). If
$$
\hat h (y) = \min_{ 1\le i \le m}\{ \hat {h}_i \} \quad \text{ and }
\quad f(y) = - \hat h(y),
$$
then $f$ is convex.
Choose $\tilde{h}_{i, j, \alpha}(x) = [\bar{d}(x,
\bar{q}_{i, j, \alpha}) - \bar{d}( \bar{x}'_\alpha,
\bar{q}_{i, j, \alpha}) ] $, $\hat {h}_{i,
\alpha} (x) = \frac{1}{N_i}\sum_{j= 1}^{N_i}\tilde{h}_{i, j,
\alpha}(x)$ and $ \hat h_\alpha (x) = \min _{ 1\le i \le m}\{ \hat
{h}_{i, \alpha}(x)\}$ defined on $B_{\frac{1}{\lambda_\alpha} M^3_\alpha} (
\bar{x}'_\alpha, \frac{r}{4\lambda_\alpha} )$. It follows that
$\hat h = \lim_{\alpha \to \infty }
\hat h_\alpha $. Because $\bar{x}'_\alpha$ is a maximum point of $\hat h_\alpha$ with $ \hat h_\alpha (\bar{x}'_\alpha ) = 0 $ and $\bar{x}'_\alpha \to
y_\infty$ as $\alpha \to \infty$, the point
$y_\infty$ is a critical point of the limiting function $\hat h = \lim_{\alpha \to \infty }
\hat h_\alpha $ with $\hat h(y_\infty) = 0$.
Thus, $0 = \hat h (y_\infty)$ is a
critical value of the {\it convex } function $f = (- \hat h)$ with $\inf_{y \in Y_\infty}\{ f(y) \} = 0$.
There are two cases for $A = f^{-1} (0)$.
\smallskip
{\bf Case 1.} If $A = \{ y_\infty \}$, then it is known (cf.
\autoref{prop2.5} below) that the distance function $r_{y_\infty}$
does not have any critical point in $[Y_\infty - \{ y_\infty \}]$,
which contradicts the existence of the critical point $q_\infty$ of
$r_{y_\infty}$ mentioned above. Thus, this case cannot occur.
\smallskip
{\bf Case 2.} $\dim(A) \ge 1$. In this case, our proof becomes more
involved. If $q_\infty \notin A = f^{-1}(0)$, then
$f(q_\infty) = a > a_0 =0$. Using the proof of \autoref{prop2.5}
below, we see that $q_\infty$ can {\it not} be a critical
point of $r_{y_\infty}(z) = {\rm d}(z, y_\infty)$ either, a contradiction.
Thus, $q_\infty \in A$ holds.
If, for any quasi-geodesic segment $\sigma: [a, b] \to Y_\infty$
with endpoints $\{ \sigma(a), \sigma(b)\} \subset \Omega$, the
inclusion relation $\sigma([a, b]) \subset \Omega$ holds, then
$\Omega$ is called a totally convex subset of $Y_\infty$. It follows from
the proof of \autoref{prop2.5} below that the sub-level set
$f^{-1}((-\infty, a]) $ is totally convex. Let us choose $\vec{v}^* \in \Uparrow_{y_\infty}^{q_\infty} $, where
$\Uparrow_p^x$ denotes the set of directions of geodesics from $p$
to $x$ in $\Sigma_p$. Because $\{ y_\infty, q_\infty\}$
are contained in the totally convex minimal set $A = f^{-1}(0)$ of
a convex function $f$, the inequality
$$
\angle_{y_\infty}(\vec v^*, \bar \gamma_{i,j, \infty}'(0)) \ge
\frac{\pi}{2}
$$
holds for all $i, j$ by our construction of $f = - \hat h$, because $\bar
\gamma_{i,j, \infty}'(0) = \vec s_{i, j}$ is a support vector of
$d_{y_\infty}(-f)$, (see the proof of \autoref{prop2.5} below).
Let $\sigma: [0, \ell] \to Y$ be a geodesic segment from $y_\infty$
to $q_\infty$. Since $A$ is totally convex and $f$ is convex, we have $\sigma( [0,
\ell]) \subset A$ and $f( \sigma (t)) = 0$ for all $t \in [0,
\ell]$. Hence, $\hat h ( \sigma (t)) = 0$ for all $t \in [0,
\ell]$.
Recall that $\hat h (z) = \min_{ 1\le i \le m}\{ \hat {h}_i
(z)\} $. We choose $i_0 $ such that $ \hat h_{i_0} (\sigma(\frac{\ell}{2})) = \min_{ 1\le i \le m}\{ \hat {h}_i
(\sigma(\frac{\ell}{2}))\} = \hat h ( \sigma (\frac{\ell}{2})) =0 $. Because $\hat
h_{i_0}(\sigma(t))$ is a concave function of $t$ with $ 0 = \hat h (\sigma(0))= \hat h (\sigma(\ell)) \le \min\{ \hat
h_{i_0} (\sigma(0)), \hat
h_{i_0} (\sigma(\ell)) \} $ and $ \hat
h_{i_0} (\sigma(\frac{\ell}{2})) = 0$, one concludes that $\hat h_{i_0} ( \sigma (t)) = 0$ for all $t \in [0,
\ell]$. Choosing $\vec{v}^*= \sigma'(0) $, one has
$$
0 = (d_{y_\infty} \hat h_{i_0}) (\vec{v^*} ) = \frac{1}{N_{i_0}}
\sum^{N_{i_0}}_{j=1} [ - \cos ( \angle_{y_\infty}(\vec v^*,
\bar\gamma_{i_0, j, \infty}'(0)))].
$$
This, together with the inequalities
$\angle_{y_\infty}(\vec v^*, \bar\gamma_{i_0, j, \infty}'(0)) \ge
\frac{\pi}{2}$, implies that
$$
\angle_{y_\infty}(\vec v^*, \bar\gamma_{i_0, j, \infty}'(0)) =
\frac{\pi}{2}
$$
holds for $j = 1, 2,\cdots, {N_{i_0}}.$ Hence, we conclude that
$\bar\gamma_{i_0,j, \infty}'(0) \in \Lambda^\perp_{\vec v^*}$, for
$j= 1, 2, \cdots, {N_{i_0}}$, where ${N_{i_0}} \ge 100$. Therefore, we
have shown that $\# |\Lambda^\perp_{\vec v^*}| \ge N_{i_0} \ge
100$. This contradicts $\# |\Lambda^\perp_{\vec v^*}| \le 2$ when
$\dim(Y_\infty) = 2$. This completes the proof of the assertion $\dim ( Y_{\infty}) =3$.
\end{proof}
\subsection{The classification of non-negatively curved
surfaces and 3-dimensional soul theory}\label{section2.2}\
\smallskip
In what follows, if $X$ is an open Alexandrov space of non-negative
curvature, then we let $X(\infty)$ be the boundary (or called the
ideal boundary) of $X$ at infinity. For more information about the
ideal boundary $X(\infty)$, one may consult the work of Shioya (cf. \cite{Shio94}).
In this sub-section, we briefly review the soul theory for
non-negatively curved space $Y^k_{\infty}$ of dimension $\le 3$.
The soul theory and the splitting theorem are two important tools
in the study of low dimensional collapsing manifolds.
Let $X$ be an $n$-dimensional non-negatively curved Alexandrov
space. Suppose that $X$ is a non-compact complete space and that $X$
has no boundary. Fixing a point $x_0\in X$, we consider the
Cheeger-Gromoll type function
$$f(x)=\lim_{t\to+\infty}[t-{\rm d}(x,\partial B(x_0,t))].$$
Let us consider the sub-level sets $\Omega_c=f^{-1}((-\infty,c])$.
We will show that $\Omega_c$ is a totally convex subset for any $c$
in \autoref{thm2.4} below.
H. Wu \cite{Wu79} and Z. Shen \cite{Shen1996} further observed that
\begin{equation}\label{eq2.21}
f(x)=\sup_{\sigma\in \Lambda}\{h_{\sigma}(x)\}
\end{equation}
where $\Lambda=\{\sigma:[0,+\infty)\to X| \sigma(0)=x_0,
{\rm d}(\sigma(s), \sigma(t))=|s-t|\}$ and $h_{\sigma}$ is a Busemann
function associated with a ray $\sigma$ by
\begin{equation}\label{eq2.22}
h_{\sigma}(x)=\lim_{t\to \infty}[t-{\rm d}(x,\sigma(t))].
\end{equation}
Since $\Omega_c=f^{-1}((-\infty,c])$ is convex, by \eqref{eq2.21} we
see that $\Omega_c$ contains no geodesic ray starting from $x_0$.
Choose $\hat c = \max\{c, f(x_0) \}$. Since $\Omega_{\hat c}$ is
totally convex and contains no geodesic rays, $\Omega_{\hat c}$ must
be compact. It follows that $\Omega_c \subset \Omega_{\hat c}$ is
compact as well. Thus, the Cheeger-Gromoll function $f(x)$ is
bounded below:
$$a_0=\inf_{x\in X}\{f(x)\}=\inf_{x\in\Omega_0}\{f(x)\}>-\infty.$$
If $\Omega_{a_0}=f^{-1}(a_0)$ is a space without boundary,
$\Omega_{a_0}$ is called a soul of $X$. Otherwise, $\partial
\Omega_{a_0}\ne \varnothing$, and we further consider
$$
\Omega_{a_0-\varepsilon}=\{x\in\Omega_{a_0}|
{\rm d}(x,\partial\Omega_{a_0})\ge\varepsilon\}.
$$
When $X$ is a smooth Riemannian manifold of non-negative
curvature, Cheeger-Gromoll \cite{CG72} showed that
$\Omega_{a_0-\varepsilon}$ remains convex. For the more general
case when $X$ is an Alexandrov space of non-negative curvature,
Perelman \cite{Per1991} also showed that
$\Omega_{a_0-\varepsilon}$ remains convex, (see
\cite{Petr2007} and \cite{CMD2009} as well).
Let $l_1=\max_{x\in\Omega_{a_0}}\{{\rm d}(x,\partial\Omega_{a_0})\}$
and $a_1=a_0-l_1$. If $\Omega_{a_1}=\Omega_{a_0-l_1}$ has no
boundary, then we call $\Omega_{a_1}$ a soul of $X$. Otherwise, we
repeat above procedure by setting
$$
\Omega_{a_1-\varepsilon}=\{x \in\Omega_{a_1}|
{\rm d}(x,\partial\Omega_{a_1})\ge\varepsilon\}
$$
for $0 \le \varepsilon \le l_1$. Observe that
$$n=\dim(X)>\dim(\Omega_{a_0})>\dim(\Omega_{a_1})>\cdots$$
Because $X$ has finite dimension, after finitely many steps we will
eventually get a sequence
$$a_0>a_1>a_2>\cdots>a_m$$
such that
$\Omega_{a_i-s}=\{x\in\Omega_{a_i}|{\rm d}(x,\partial\Omega_{a_i})\ge
s\}$ for $0\le s\le a_i-a_{i+1}$ and $i=0,1,2,\cdots,m-1$. Moreover,
$\Omega_{a_m}$ is a convex subset without boundary, which is called
a soul of $X$.
A subset $\Omega$ is said to be {\it totally convex} in $X$ if for
any quasi-geodesic segment $\sigma:[a,b]\to X$ with endpoints
$\{\sigma(a),\sigma(b)\}\subset \Omega$, we must have
$\sigma([a,b])\subset \Omega$. The definition of quasi-geodesic can
be found in \cite{Petr2007}.
\begin{theorem}[Cheeger-Gromoll \cite{CG72}, Perelman \cite{Per1991}, \cite{SY2000}]\label{thm2.4}
Let $X$ be an $n$-dimensional open complete Alexandrov space of
curvature $\ge 0$, $f(x)=\lim_{t\to+\infty}[t-{\rm d}(x,\partial B(\hat
x,t))]$, $\{\Omega_s\}_{s\ge a_m}$ and $a_0>a_1>\cdots>a_m$ be as
above. Then the following statements hold.
{\rm (1)} For each $s\ge a_0$, $\Omega_s=f^{-1}((-\infty,s])$ is a
totally convex and compact subset of $X$;
{\rm (2)} If $a_i\le s<t\le a_{i-1}$, then
$$\Omega_s=\{x\in \Omega_t|{\rm d}(x,\partial \Omega_t)\ge t-s\}$$
remains totally convex;
{\rm (3)} The soul $N^k=\Omega_{a_m}$ is a deformation retract of
$X$ via multiple-step Perelman-Sharafutdinov semi-flows, which are
distance non-increasing.
\end{theorem}
\begin{proof}
(1) For $s\ge a_0$, we would like to show that
$\Omega_s=f^{-1}((-\infty,s])$ is totally convex. Suppose, to the contrary,
that there were a quasi-geodesic $\sigma:[a,b]\to X$ with
$$\max\{f(\sigma(a)),f(\sigma(b))\}\le s$$
and some $c$ with $a<c<b$ and
$$f(\sigma(c))>s\ge\max\{f(\sigma(a)),f(\sigma(b))\}.$$
\begin{figure*}[ht]
\includegraphics[width=180pt]{section2_3.pdf}\\
\end{figure*}
For each integer $i\gg 1$, we choose $y_i\in\partial B(x_0,i)$ such
that
$${\rm d}(y_i,\sigma(c))={\rm d}(\partial B(x_0,i),\sigma(c)).$$
Let $\alpha_i=\measuredangle_{\sigma(c)}(y_i,\sigma(a))$ and
$\beta_i=\measuredangle_{\sigma(c)}(y_i,\sigma(b))$. Since $X$ has
non-negative curvature and $\sigma:[a,b]\to X$ is a quasi-geodesic,
it is well-known (\cite{Petr2007}) that
$$\cos\alpha_i + \cos\beta_i\ge 0.$$
It follows that
$$\min\{\alpha_i,\beta_i\}\le\frac{\pi}{2}.$$
After passing to a subsequence and re-indexing, we may assume that
$$\beta_{i_j}\le\frac{\pi}{2}$$
for all $j\ge 1$. By the law of cosines, we have
$$[{\rm d}(y_{i_j},\sigma(b))]^2\le[{\rm d}(\sigma(c),y_{i_j})]^2+|b-c|^2.$$
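This comparison step can be seen explicitly. Assuming, as is standard, that
the quasi-geodesic $\sigma$ is parametrized by arc length, so that
${\rm d}(\sigma(c),\sigma(b))\le |b-c|$, the hinge version of the triangle
comparison theorem for curvature $\ge 0$, applied at $\sigma(c)$ with angle
$\beta_{i_j}\le\frac{\pi}{2}$, gives
$$[{\rm d}(y_{i_j},\sigma(b))]^2\le[{\rm d}(\sigma(c),y_{i_j})]^2+[{\rm d}(\sigma(c),\sigma(b))]^2
-2\,{\rm d}(\sigma(c),y_{i_j})\,{\rm d}(\sigma(c),\sigma(b))\cos\beta_{i_j}
\le[{\rm d}(\sigma(c),y_{i_j})]^2+|b-c|^2,$$
since $\cos\beta_{i_j}\ge 0$.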
Therefore, we have
\begin{align*}
f(\sigma(b))&\ge\lim_{j\to+\infty}[i_j-{\rm d}(\sigma(b),y_{i_j})]\\
&\ge \lim_{j\to+\infty}[i_j- \sqrt{[{\rm d}(\sigma(c),y_{i_j})]^2+|b-c|^2}]\\
&=\lim_{j\to+\infty}\frac{i_j^2-[{\rm d}(\sigma(c),y_{i_j})]^2
-|b-c|^2}{i_j+ \sqrt{[{\rm d}(\sigma(c),y_{i_j})]^2+|b-c|^2}}\\
& = \lim_{j\to+\infty} [i_j - {\rm d}( \sigma(c),y_{i_j} )]+ 0 \\
& = \lim_{j\to+\infty} [i_j - {\rm d}( \sigma(c), \partial B(\hat x, i_j))] \\
&=f(\sigma(c))
\end{align*}
which contradicts
$$f(\sigma(c))>f(\sigma(b)).$$
Hence, $\Omega_s$ is a totally convex subset of $X$.
(2) Perelman \cite{Per1991} showed that if $\Omega_t$ is a convex
subset of $X$ with non-empty boundary, then the distance function
$$r_{\partial\Omega_t}(x)={\rm d}(x,\partial\Omega_t)$$
is concave for $x\in \Omega_t$ (see also \cite{Petr2007} and
\cite{CMD2009}).
(3) Because our function $(-f(x))$ and $r_{\partial\Omega_{a_i}}$
are concave in $ \Omega_{a_i}$, the corresponding semi-flows are
distance non-increasing, (see Chapter 6 of \cite{Per1991}, section 2
of \cite{KPT2009} or \cite{Petr2007}). Using the
Perelman-Sharafutdinov flow $\frac{d^+ \psi}{dt} = \frac{\nabla
(-f)}{|\nabla (-f)|^2}|_{\psi (t)} $, Perelman (cf. \cite{Per1991})
showed that $X$ is contractible to $\Omega_{a_0}$. Let $r_i:
\Omega_{a_i} \to \mathbb R$ be the distance function $r_i(z) = {\rm d}(z,
\partial \Omega_{a_i})$ if $\partial \Omega_{a_i} \neq \emptyset$
for $i = 0, 1, \cdots , m-1$. For the same reasons, $\Omega_{a_0}$
is contractible to $\Omega_{a_1}$ via the Perelman-Sharafutdinov
flow $ \frac{d^+ \psi}{dt} = \frac{\nabla r_0}{|\nabla r_0
|^2}|_{\psi (t)} $. Using the $m$-step Perelman-Sharafutdinov flows,
we can see that the soul $N^k = \Omega_{a_m}$ is a deformation
retract of $X$.
\end{proof}
\begin{prop}\label{prop2.5}
Let $f(y) $ be a {\it convex} function on $Y$ with
$a_0=\inf_{w\in Y}\{ f(w)\} > - \infty $ and $A = f^{-1}( a_0)$ as
in the proofs of \autoref{prop2.3} and \autoref{thm2.4} above.
Suppose that $Y$ is an open and complete Alexandrov space with
non-negative curvature and $A' \subset A$ is a closed subset of $A$.
Then the distance function $r_{A'}(y) = {\rm d}(y, A')$ from $A'$ has
no critical points in the complement $[Y - A]$ of $A$.
\end{prop}
\begin{proof}
For each $y \notin A$
and $a = f(y) > a_0$, we observe that $A \subset {\rm int}
(\Omega_a) = f^{-1}( ( -\infty, a))$. Let $\sigma: [0, \ell] \to Y$ be a length-minimizing
geodesic segment of unit speed from $A'$ to $y$ with $\sigma(0) \in A'$ and $\sigma(\ell) =
y$. Since $Y$ has no boundary, any geodesic $\sigma$ can be extended
to a longer quasi-geodesic of unit speed $\tilde \sigma: [0, \ell +
\varepsilon] \to Y$, (see \cite{Petr2007}). Since $f$ is convex, the
composition $t \mapsto f( \tilde \sigma (t))$ remains
convex for any quasi-geodesic $\tilde \sigma$ (see \cite{Petr2007}). It follows that
$$
\frac{d^+ (f \circ \tilde \sigma)}{dt} (\ell)
\ge \frac{a - a_0}{\ell} > 0.
$$
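This slope estimate is just convexity: since $t\mapsto f(\tilde\sigma(t))$
is convex, $f(\tilde\sigma(0))=a_0$ (because $\sigma(0)\in A'\subset
A=f^{-1}(a_0)$) and $f(\tilde\sigma(\ell))=f(y)=a$, the right derivative at
$t=\ell$ dominates the average slope over $[0,\ell]$:
$$\frac{d^+ (f \circ \tilde \sigma)}{dt} (\ell)\;\ge\;
\frac{f(\tilde\sigma(\ell))-f(\tilde\sigma(0))}{\ell-0}\;=\;\frac{a-a_0}{\ell}.$$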
Let us consider a minimum direction $\vec \xi_{min} \in \Sigma_y(Y)$
of $d_y (-f)$ and $ \vec s = [ - d_y(-f)( \vec \xi_{min})] \vec
\xi_{min} $, where we used the fact that
$$
[ - d_y(-f)( \vec \xi_{min})] \ge \frac{d^+ (f \circ \tilde \sigma)}{dt} (\ell) > 0.
$$
Hence we have $ \vec s = [ - d_y(-f)( \vec \xi_{min})] \vec\xi_{min}
\neq 0 $. The vector $\vec s$ is called a support vector of $d_y(-f)$.
For any support vector $\vec s$,
one has (cf. \cite{Petr2007} page 143) that the inequality
$$
d_y( -f) ( \vec u) \le - \langle \vec s, \vec u \rangle
$$
holds for all $\vec u \in \Sigma_y(Y)$, where $(-f)$ is a semi-concave
function.
Let $\Omega_c = f^{-1}((-\infty, c])$. When $f$ is convex, one has
$$
f( \tilde \sigma(t) ) \le \max \{f( \tilde \sigma(a) ), f( \tilde
\sigma(b) ) \}
$$
for any quasi-geodesic $\tilde \sigma: [ a, b] \to Y$ and $t \in [a,
b]$. Thus, $\Omega_c$ is totally convex. It follows that, for any
direction $\vec w \in \Uparrow_y^{A'}$, we have $\vec w \in T_y(
\Omega_c)$. Moreover, we have
$$
d_y(-f) (\vec w) \ge \frac{ - a_0 - (-a) }{ \ell } > 0
$$
for all $\vec w \in \Uparrow_y^{A'}$, since $(-f)$ is a concave
function. Because $\vec s$ is a support vector of $d_y(-f)$, one also has
$$
0 < d_y(-f) (\vec w) \le - \langle \vec s, \vec w \rangle
$$
for all $\vec w \in \Uparrow_y^{A'}$ (cf. \cite{Petr2007} page 143).
Thus, we have
$$
\angle_y( \vec w, \vec s) > \frac{\pi}{2}
$$
for all $\vec w \in \Uparrow_y^{A'}$. It follows that $d_y(r_{A'})(
\vec s) > 0$ and $y $ is not a critical point of the distance
function $r_{A'}(\cdot)$, where $y \notin A = f^{-1}(a_0)$ (compare
with \cite{Grv1993}). This completes the proof of \autoref{prop2.5}.
\end{proof}
\begin{figure*}[ht]
\includegraphics[width=100pt]{section2_4.pdf}\\
\caption{The minimum set $A$ of a convex function.}\label{fig2_4}
\end{figure*}
Using soul theory and the splitting theorem, we can classify
non-negatively curved surfaces with possibly singular metrics.
\begin{theorem}\label{thm2.6}
Let $X^2$ be an oriented, complete and open surface of non-negative
curvature. Then $X^2$ is either homeomorphic to $\mathbb R^2$ or
isometric to a flat cylinder.
\end{theorem}
\begin{proof}
It is known that $X^2$ is a manifold. Let $N^s = \Omega_{a_m}$ be a
soul of $X^2$. If the soul $N^s = \Omega_{a_m}$ is a single point,
then $X^2$ is homeomorphic to $\mathbb R^2$. When $N^s =
\Omega_{a_m}$ has dimension $1$, then $N^1 = \Omega_{a_m}$ is
isometric to embedded closed geodesic $\sigma: S^1 \to X^2$, (i.e.,
$N^1=\sigma(S^1)$).
Let $\tilde X^2$ be the universal cover of $X^2$ with lifted metric
and $\tilde{\sigma}: \mathbb R\to \tilde X^2$ be a lift of
$N^1 = \Omega_{a_m}$ in $X^2$. We observe that
\begin{diagram}
\tilde X^2 &\rTo^{\tilde P}&\tilde \sigma\\
\dTo &&\dTo\\
X^2&\rTo^{P}& N^1
\end{diagram}
Let $P: X^2\to N^1$ be the Perelman-Sharafutdinov distance
non-increasing projection from the open surface $X^2$ to its soul $N^1$.
Such a distance non-increasing map $P: X^2\to N^1$ can be lifted to
a distance non-increasing map $\tilde P: \tilde X^2\to \tilde
\sigma$. Thus $\tilde \sigma: \mathbb R\to \tilde X^2$ is a line in
an open surface $\tilde X^2$ of non-negative curvature. Applying the
splitting theorem, we see that $\tilde X^2$ is isometric to $\mathbb
R^2$. It follows that $X^2$ is a flat cylinder.
\end{proof}
Let us now turn our attention to closed surfaces of non-negative curvature.
\begin{cor}\label{cor2.7}
Let $X^2$ be a closed $2$-dimensional Alexandrov space of
non-negative curvature. Then the following holds:
{\rm (1)} If the fundamental group $\pi_1(X^2)$ is finite, then
$X^2$ is homeomorphic to $S^2$ or $\mathbb{RP}^2$.
{\rm (2)} If the fundamental group $\pi_1(X^2)$ is an infinite
group, then $X^2$ is isometric to a flat torus or a flat Klein bottle.
\end{cor}
\begin{proof}
After passing to its double cover if needed, we may assume that
$X^2$ is oriented.
When $|\pi_1(X^2)|<\infty$, $X^2$ is covered by $S^2$.
When $|\pi_1(X^2)|=+\infty$ and $X^2$ is oriented, for a non-trivial
free homotopy class of a closed curve $[\hat{\sigma}]\ne 0$ in
$\pi_1(X^2)$ with $[\hat{\sigma}^n]\ne 0$ for all $n\ne 0$, we
choose a length minimizing closed geodesic $\sigma: S^1\to X^2$.
Suppose that $\tilde X^2$ is a universal cover of $X^2$ and $\tilde
\sigma: \mathbb R\to \tilde X^2$ is a lift of $\sigma$ in $X^2$.
Then we can check that $\tilde \sigma$ is a geodesic line of $\tilde
X^2$. Thus, $\tilde X^2$ is isometric to $\mathbb R^2$. It follows
that $X^2$ is isometric to a flat torus, whenever $X^2$ is oriented
with $|\pi_1(X^2)|=+\infty$.
\end{proof}
\begin{example}\label{ex2.9}
When $X^2$ is an open surface of non-negative curvature, it might
happen that $\Omega_{a_0}$ is an interval. For instance, let $\hat
Y^2=[0,1]\times [0,+\infty)$ be a flat half-strip in $\mathbb R^2$.
If we take two copies of $\hat Y^2$ and glue them along the
boundary, the resulting surface $X^2= {\rm dbl}(\hat Y^2)$ is
homeomorphic to $\mathbb R^2$. A result of Petrunin implies that
$X^2= {\rm dbl}(\hat Y^2)$ still has non-negative curvature (e.g.,
\cite{Petr2007} or \cite{BBI2001}). In this case,
$\Omega_{a_0}$ is an interval. Of course, the soul $N^s =
\Omega_{a_1}$ of $X^2$ is a single point.
\end{example}
We now say a few words about non-negatively curved surfaces $X^2$ with
non-empty convex boundary. By the definition of a surface $X^2$ with
curvature $\ge k$, its possibly non-empty boundary $\partial X^2$
must be convex.
\begin{cor}\label{cor2.9}
Let $X^2$ be a surface with non-negative curvature and non-empty
boundary. Then
{\rm (1)} If $X^2$ is compact, then $X^2$ is either homeomorphic to
$D^2$ or isometric to $S^1\times [0,l]$ or a flat M\"{o}bius band;
{\rm (2)} If $X^2$ is non-compact and oriented, then $X^2$ is either
homeomorphic to $[0,+\infty)\times \mathbb R$ or isometric to one of
three types: $S^1\times [0,+\infty)$, a flat half-strip, or
$[0,l]\times (-\infty,+\infty)$.
\end{cor}
\begin{proof}
If we take two copies of $X^2$ and glue them together along their
boundaries, the resulting surface ${\rm dbl}(X^2)$ still has
curvature $\ge 0$, due to a result of Petrunin \cite{Petr2007}.
Clearly, ${\rm dbl}(X^2)$ has no boundary.
(1) When ${\rm dbl}(X^2)$ is compact and oriented, ${\rm
dbl}(X^2)$ is homeomorphic to the unit $2$-sphere or is isometric to
a flat torus. Hence, $X^2$ is either homeomorphic to $D^2$ or
isometric to $S^1\times [0,l]$ or a flat M\"{o}bius band.
(2) When ${\rm dbl}(X^2)$ is non-compact, ${\rm dbl}(X^2)$ is
homeomorphic to $\mathbb R^2$ or isometric to $S^1\times \mathbb R$
or $X^2$ is isometric to $[0, \ell] \times [0, \infty)$.
To verify this assertion, we consider the soul $N^s$ of ${\rm
dbl}(X^2)$. If $N^s$ is a circle, then ${\rm dbl}(X^2)$ is isometric
to an infinite flat cylinder: $ S^1(r) \times \mathbb R$. If the
soul $N^s$ is a point, then ${\rm dbl}(X^2)$ is homeomorphic to
$\mathbb R^2$.
There is a special case which we need to single out: $X^2$ is
isometric to $[0, \ell] \times [0, \infty)$. We will elaborate on this
special case in \autoref{section5} below.
\end{proof}
\begin{remark}\label{remark2.11} In \autoref{section5} below, we will
estimate the number of extremal points, i.e. essential
singularities, on surfaces with non-negative curvature, using
multi-step Perelman-Sharafutdinov flows associated with the
Cheeger-Gromoll convex exhaustion.
\end{remark}
Finally, we would like to classify all non-negatively curved open
$3$-manifolds with possibly singular metrics.
\begin{theorem}[\cite{SY2000}]\label{thm2.11}
Let $Y^3_{\infty}$ be an open complete $3$-manifold with a possibly
singular metric of non-negative curvature. Suppose that
$Y^3_{\infty}$ is oriented and $N^s$ is a soul of $Y^3_\infty$. Then
the following is true.
\begin{enumerate}[{\rm (1)}]
\item
When $\dim(N^s)= 1$, then the soul of $Y^3_{\infty}$ is isometric to
a circle. Moreover, its universal cover $\tilde Y^3_{\infty}$ is
isometric to $\tilde X^2\times \mathbb R$, where $\tilde X^2$ is
homeomorphic to $\Bbb R^2$;
\item
When $\dim(N^s) = 2$, then the soul of $Y^3_{\infty}$ is
homeomorphic to $S^2/\Gamma$ or $T^2/\Gamma$. Furthermore,
$Y^3_{\infty}$ is isometric to one of four spaces: $S^2 \times
\mathbb R$, $\mathbb R P^2 \ltimes \mathbb R = (S^2 \times \mathbb
R)/ \mathbb Z_2$, $T^2 \times \mathbb R$ or $K^2 \ltimes \mathbb R =
(T^2 \times \mathbb R)/\mathbb Z_2$, where $K^2$ is the flat Klein
bottle and $\mathbb R P^2 \ltimes \mathbb R$ is homeomorphic to $
[\mathbb {RP}^3 - \bar{B}^3(x_0, \varepsilon)]$;
\item
When $\dim(N^s) = 0$, then the soul of $Y^3_{\infty}$ is a single
point and $Y^3_{\infty}$ must be homeomorphic to $\mathbb R^3$.
\end{enumerate}
\end{theorem}
\begin{proof}
This theorem is entirely due to Shioya-Yamaguchi \cite{SY2000}. A
special case of \autoref{thm2.11} for smooth open $3$-manifolds with
non-negative curvature was stated as Theorem 8.1 in Cheeger-Gromoll's
paper \cite{CG72}.
Shioya-Yamaguchi's proof is quite technical and occupies half
of their paper \cite{SY2000}. For the convenience of the reader, we present
an alternative, shorter proof of Shioya-Yamaguchi's soul theorem for
$3$-manifolds with possibly singular metrics.
\smallskip
\noindent {\bf Case 1.} When the soul $N^1$ of $Y^3_\infty$ is a
closed geodesic $\sigma^1$, there are distance non-increasing
multi-step Perelman-Sharafutdinov retractions from $Y^3_\infty$ to
$\sigma^1$. Thus, $\sigma^1$ is length-minimizing in its free
homotopy class. It follows that the lifting geodesic $\tilde
\sigma^1$ is a geodesic line in the universal covering space $\tilde
Y^3_\infty$ of $Y^3_\infty$. Using the splitting theorem (cf.
\cite{BBI2001}) for non-negatively curved space $\tilde Y^3_\infty$,
we see that $\tilde Y^3_\infty$ is isometric to $\tilde X^2\times
\mathbb R$, where $\tilde X^2$ is a contractible surface with
non-negative curvature. Hence, $\tilde{X}^2$ is homeomorphic to
$\Bbb R^2$.
\smallskip
\noindent {\bf Case 2.} When the soul $N^2$ of $Y^3_\infty$ is a
surface $X^2$, we observe that $X^2 = f^{-1}(a_0)$ is a convex
subspace of $Y^3_\infty$, where $f(x) = \lim_{t \to \infty} [ t -
{\rm d}(x, \partial B_{Y^3_\infty} (\hat x, t))]$ and $a_0 = \inf_{x
\in Y^3_\infty}\{f(x)\}$. Since $f$ is convex and $Y^3_\infty$ has
non-negative curvature, $X^2$ has non-negative curvature as well. By
\autoref{cor2.9}, we see that $X^2$ is either homeomorphic to a
quotient of $S^2$ or isometric to a quotient of a flat torus.
For this case, our strategy goes as follows. We will show that there
is a {\it ``normal line bundle"} over the soul $X^2$. After passing
its double cover if needed, we may assume that such a {\it ``normal
line bundle"} is topologically trivial in $Y^3_\infty$. In this
case, with some extra effort, one can show that there is a geodesic
line $\hat \sigma^1$ orthogonal to $X^2$ in $Y^3_\infty$. Thus, the
space $Y^3_\infty$ (or its double cover) splits isometrically to
$X^2\times \mathbb R$.
Here are the details of our {\it ``normal line bundle"} argument. For
each point $x$ in the soul $X^2$, its unit tangent space
$\Sigma_x^1(X^2)$ is homeomorphic to $S^1$. Recall that the space of
unit tangent directions $\Sigma^2_{x}(Y^3_\infty)$ of
$Y^3_\infty$ at $x$ is homeomorphic to the sphere $S^2$, because
$Y^3_\infty$ is a $3$-manifold. Observe that $\Sigma_x^1(X^2)$ is a
convex subset of $\Sigma^2_{x}(Y^3_\infty)$. Moreover, we see
that $\Sigma_x^1(X^2)$ divides $\Sigma^2_{x}(Y^3_\infty)$
into exactly two parts:
$$
[\Sigma^2_{x}(Y^3_\infty) - \Sigma_x^1(X^2)] = \Omega^2_{x,
+} \cup \Omega^2_{x, -}.
$$
Since the curvature of $\Sigma^2_{x}(Y^3_\infty)$ is greater
than or equal to $1$, using Theorem 6.1 of \cite{Per1991} (cf.
\cite{Petr2007}), we obtain that there is a unique unit vector $\xi_\pm
\in \Omega^2_{x, \pm}$ such that
$$
\ell_\pm = \angle_{x}(\xi_\pm, \Sigma_x^1(X^2)) = \max\{
\angle_{x}(w_\pm, \Sigma_x^1(X^2)) \, | \, w_\pm \in \Omega^2_{x,
\pm}\}.
$$
We claim that $\ell_\pm \le \frac{\pi}{2}$. Suppose, on the contrary,
that $\ell_\pm > \frac{\pi}{2}$. We derive a contradiction as
follows. Let $\psi: [0, \ell] \to \Sigma^2_{x}(Y^3_\infty)$
be a length-minimizing geodesic segment of unit speed from
$\Sigma_x^1(X^2)$ of length $\ell > \frac{\pi}{2}$ with $\psi(0) =
u \in \Sigma_x^1(X^2)$ and ${\rm d}(\psi(\ell), \Sigma_x^1(X^2)) =
\ell$. We now choose a geodesic segment $\eta: [0, \delta] \to
\Sigma_x^1(X^2)$ of unit speed with $\eta(0) =
u$. Since ${\rm curv}_{\Sigma^2} \ge 1$, applying the triangle comparison
theorem (cf. \cite{BBI2001}) to our geodesic hinge at $u$ in
$\Sigma^2_{x}(Y^3_\infty)$, we see that ${\rm d}_{\Sigma^2}
(\psi(\frac{\pi}{2}), \eta(\delta) ) \le \frac{\pi}{2}$. Thus, for
the point $w = \psi(\frac{\pi}{2})$, there are at least two points
$\{ u, \eta(\delta)\} \subset \Sigma_x^1(X^2)$ with angular distance
${\rm d}_{\Sigma^2}(w, u) = {\rm d}_{\Sigma^2}(w, \eta(\delta) ) =
\frac{\pi}{2}$. In other words, there are at least two distinct
length-minimizing geodesic segments from $w = \psi(\frac{\pi}{2})$
to $\Sigma_x^1(X^2)$. Hence, $\psi|_{[0, \frac{\pi}{2} +
\varepsilon] }$ is no longer length-minimizing for any $\varepsilon
>0$, a contradiction. It follows that $\ell_\pm \le \frac{\pi}{2}$.
Moreover, the equality $\ell_\pm = \frac{\pi}{2}$ holds if and only
if $\Sigma^2_{x}(Y^3_\infty)$ is isometric to the two-point
spherical suspension of $\Sigma_x^1(X^2)$. In this case,
$T_x(Y^3_\infty) $ is isometric to $T_x(X^2) \times \mathbb R$.
Recall that $X^2 = f^{-1}(a_0)$ is a level set of the Busemann
function. We can write $f(x) = [c - {\rm d}(x, f^{-1}(c))]$ for $x \in
f^{-1}((-\infty, c])$. By the first variation formula (cf.
\cite{BBI2001} page 125), we see that
$$
\ell_\pm \ge \frac{\pi}{2}.
$$
Combining our earlier inequality $\ell_\pm \le \frac{\pi}{2}$, we
see that $\ell_\pm = \frac{\pi}{2}$. Therefore, we conclude that
$T_x(Y^3_\infty)$ is isometric to $T_x(X^2) \times \mathbb R$.
Hence, there is a {\it ``normal line bundle"} over the soul $X^2$.
After passing its double cover if necessary, such a {\it ``normal
line bundle"} of $X^2$ in $Y^3_\infty$ is topologically trivial.
Thus, we assume that $[Y^3_\infty - X^2] = \Omega^3_+ \cup
\Omega^3_-$ has exactly two ends, where we replace $Y^3_\infty$ by
its double cover $\hat Y^3$ if needed. For each end and each $x \in
X^2$, there exists a ray $\sigma_{x, \pm}: (0, \infty) \to
\Omega^3_\pm$ with starting point $x$. One can verify that
$\sigma_{x, -} \cup \sigma_{x, +}$ is a geodesic line in
$Y^3_\infty$ (or in its double cover). By the splitting theorem, we
conclude that $Y^3_\infty$ (or its double cover) is isometric to
$X^2 \times \mathbb R$.
\smallskip
\noindent {\bf Case 3.} When the soul $N^k$ of $Y^3_\infty$ is a single point $\{
y_\infty\}$, our proof becomes more involved. Let $f(x) = \lim_{t
\to \infty}[ t - {\rm d}(x, \partial B_{Y^3_\infty}(\hat x, t))]$ and
$a_0 = \inf_{x \in Y^3_\infty}\{f(x)\}$ be as above. There are three
possibilities: $\dim[ f^{-1}(a_0)] = 0$, $1$, or $2$.
\smallskip
\noindent {\bf Subcase 3.0.} {\it $\dim[ f^{-1}(a_0)] = 0$ and $A =
f^{-1}(a_0) = \{ y_\infty\}$}.
In this subcase, the space of unit tangent directions
$\Sigma^2_{y_\infty} (Y^3_\infty)$ at $y_\infty$ is homeomorphic to the
sphere $S^2$ and its tangent cone $T_{y_\infty}(Y^3_\infty)$ is
homeomorphic to $\mathbb R^3$. Recall that the pointed spaces
$(\lambda Y^3_\infty, y_\infty)$ converge to the tangent cone
$(T_{y_\infty}(Y^3_\infty), O)$ as $\lambda \to \infty$, where $O$
is the origin of $T_{y_\infty}(Y^3_\infty)$.
By the pointed version of Perelman's stability theorem (cf. Theorem 7.11
of \cite{Kap2007}), we see that for sufficiently small
$\varepsilon$, $(B_{\frac{1}{\varepsilon} Y^3_\infty}(y_\infty, 1),
y_\infty) $ is homeomorphic to $(B_{T_{y_\infty}(Y^3_\infty)}(O, 1),
O)$. It follows that $B_{Y^3_\infty}(y_\infty, \varepsilon)$ is
homeomorphic to the unit ball $D^3$ for sufficiently small
$\varepsilon >0$, because $Y^3_\infty$ is a $3$-manifold.
We now use Perelman's fibration theorem to complete our proof for
this subcase. It follows from \autoref{prop2.5} that $r_A$ has no
critical value in $(0, \infty)$. Perelman's fibration theorem (our
\autoref{thm1.2} above) implies that there is a fibration structure
$$
(\partial D^3) \to [Y^3_\infty - U_{\frac{\varepsilon}{2}}(A) ]
\stackrel{r_A}{\longrightarrow} (\frac{\varepsilon}{2}, \infty)
$$
It follows that $ [Y^3_\infty - U_{\frac{\varepsilon}{2}}(A) ] $ is
homeomorphic to $S^2 \times (\frac{\varepsilon}{2}, \infty)$ and
that $Y^3_\infty $ is homeomorphic to $D^3 \cup [S^2 \times \mathbb
R] $. Thus, $Y^3_\infty $ is homeomorphic to $\mathbb R^3$, for this
subcase.
\smallskip
\noindent {\bf Subcase 3.1.} {\it $\dim[ f^{-1}(a_0)] = 1$ and $A =
f^{-1}(a_0) = \sigma$ is a geodesic segment.}
It follows from \autoref{prop2.5} that $r_A$ has no critical value
in $(0, \infty)$. For the same reasons as above, it remains
to verify that $U_\varepsilon(A)$ is homeomorphic to
$D^3$.
Let $\sigma: [0, \ell] \to Y^3_\infty$ be as above and $\sigma([0, \ell]) $
be the minimal set of $f$. We denote the $\varepsilon$-neighborhood of
$A$ by $U_\varepsilon(A)$. Let $A_s = \sigma([s, \ell-s])$ for some
$s > 0$. We observe that
$$
U_\varepsilon(A_0) = B_\varepsilon(\sigma(0)) \cup U_\varepsilon(A_{
s} ) \cup B_\varepsilon(\sigma( \ell))
$$
for $s >0$. For the same reason as in Subcase 3.0 above, both
$B_\varepsilon(\sigma(0))$ and $B_\varepsilon(\sigma( \ell))$ are
homeomorphic to $D^3$, because $Y^3_\infty$ is a $3$-manifold. It is
sufficient to show that $U_\varepsilon(A_{ s} )$ is homeomorphic to
a finite cylinder $C = [s, \ell-s] \times D^2$ for sufficiently small
$s$.
Let $p= \sigma(0)$. We consider the distance function $r_p(y) =
{\rm d}_{Y^3_\infty}(y, p)$. We observe that the distance function has
no critical point on the geodesic sub-segment $\sigma([s, \ell-s])$. A
result of Petrunin (cf. \cite{Petr2007} page 142) asserts that if
$x_n \to x$ as $n \to \infty$, then $\liminf_{n\to \infty} |
\nabla_{x_n} r_p | \ge | \nabla_{x} r_p|$. Hence, there exists a
sufficiently small $\varepsilon >0$ such that $r_p$ has no critical
point in $U_{\varepsilon}(A_{ s} )$. For the same reason as in
Subcase 3.0, we can apply Perelman's fibration theorem to our case:
$$
D^2 \to U_{\varepsilon}(A_{ s} ) \stackrel{r_p}{\longrightarrow} (s,
\ell-s),
$$
where we used the fact that $ [\partial B_\varepsilon(\sigma(0))]
\cap U_{\varepsilon}(A_{ s/2} )$ is homeomorphic to $D^2$. It
follows that $U_\varepsilon(A_{ s} )$ is homeomorphic to a finite
cylinder $C = (s, \ell-s) \times D^2$. Therefore, $U_\varepsilon(A_{
0})$ is homeomorphic to $D^3$. It follows that $Y^3_\infty \sim [
D^3 \cup (S^2 \times \mathbb R)]$ is homeomorphic to $\mathbb R^3$.
\smallskip
\noindent {\bf Subcase 3.2.} {\it $\dim[ f^{-1}(a_0)] = 2$ and $A =
f^{-1}(a_0) = \Omega^2_0 \sim D^2$ is a totally convex surface with
boundary.}
For the same reason as in the two subcases above, it is sufficient to
establish that $ U_\varepsilon(A)$ is homeomorphic to the unit
$3$-ball $D^3$:
$$
U_\varepsilon(A) \sim D^3.
$$
Let $A_s = \{ x \in \Omega^2_0 \, | \, {\rm d}(x, \partial \Omega^2_0)
\ge s \}$. By our discussion in Case 2 above, we see that for each
interior point $x \in \Omega^2_0$, there is a unique {\it normal
line} orthogonal to $\Omega^2_0$ at $x$. Thus, the interior
${\rm int}(\Omega^2_0)$ has a normal line bundle with fiber $\mathbb R$.
Because ${\rm int}(\Omega^2_0)$ is contractible to a soul point
$y_0$, any line bundle over ${\rm int}(\Omega^2_0)$ is topologically
trivial.
In this subcase, our technical goals are to show the following:
\smallskip
\noindent (3.2a) $U_\varepsilon(A_s)$ is homeomorphic to $D^2 \times
(-\varepsilon, \varepsilon)$;
\smallskip
\noindent (3.2b) $U_\varepsilon(\partial A_0)$ is homeomorphic to a
solid torus $(\partial D^2) \times D^2_\varepsilon = S^1 \times
D^2_\varepsilon$.
\smallskip
To establish (3.2a), we use a theorem of Perelman (cf. Theorem 6.1
of \cite{Per1991}) to show that there is a product metric on a
subset $U_\varepsilon(A_s)$ of $Y^3_\infty$. Inspired by Perelman,
we consider the distance function $r_{A_0} (y) = {\rm d}_{Y^3_\infty}
(y, A_0)$. Since $Y^3_\infty$ has non-negative curvature and
${\rm int}(A_s)$ is {\it weakly concave} towards its complement $[
U_{s/4}(A_s) - A_s]$, Perelman observed that $r_{A_0}$ is {\it
concave} on $[ U_{s/4}(A_s) - A_s]$, (see the proof of Theorem 6.1
in \cite{Per1991}, \cite{Petr2007} or \cite{CMD2009}). We already
showed that for each interior point $x \in A_0$, there is a unique
{\it normal line} orthogonal to $A_0$ at $x$. With extra efforts, we
can show that, for each interior point $x \in {\rm int}(A_0)$ and
each unit normal direction $\xi_\pm \perp_x ({\rm int} A_0)$, there
is a unique ray $\sigma_{x, \pm}: [0, \infty) \to Y^3_\infty$ with
$\sigma_{x, \pm}(0) = x$ and $\sigma_{x, \pm}'(0) = \xi_\pm$.
Moreover, we have
$$
f(\sigma_{x, \pm}(t)) = a_0 + t.
$$
Therefore, for each $y \in [ U_{s/4}(A_s) - A_s]$ with $s>0$, we have
$$
\nabla (-f) |_y = - \nabla r_{A_0} |_y.
$$
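In other words (a brief sketch of this identity): every point $y$ in this
region lies on a unique normal ray $\sigma_{x,\pm}$ with foot point $x\in
{\rm int}(A_0)$, so that
$$f(y)\;=\;f\bigl(\sigma_{x,\pm}(r_{A_0}(y))\bigr)\;=\;a_0+r_{A_0}(y);$$
that is, $f=a_0+r_{A_0}$ on $[\,U_{s/4}(A_s)-A_s\,]$, which yields the
gradient identity above.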
Hence, our Busemann function $f$ is both convex and concave on the
subset $[ U_{s/4}(A_s) - A_s]$. Thus, for any geodesic segment
$\varphi: [a, b] \to [ U_{s/4}(A_s) - A_s]$, the function
$f(\varphi(t))$ is a linear function in $t$. Using the fact that
$f(\varphi(t))$ is a linear function in $t$ and the sharp version of
triangle comparison theorem (cf. \cite{BGP1992}), we can show that
there is a sub-domain $V$ of $Y^3_\infty$ such that the metric of
$Y^3_\infty$ on $V$ splits isometrically as
$$
V = {\rm int}(A_0) \times \mathbb R.
$$
Since ${\rm int}(A_0)$ is homeomorphic to $D^2$, we conclude that
$U_\varepsilon(A_s)$ is homeomorphic to $D^2 \times (-\varepsilon,
\varepsilon)$ (compare with the proof of \autoref{thm5.4} below).
Hence, our Assertion (3.2a) holds.
It remains to verify (3.2b). We consider the doubling surface
${\rm dbl}(A_0) = A_0 \cup_{\partial A_0} A_0$. It follows from a result
of Petrunin that ${\rm dbl}(A_0)$ has non-negative curvature. By
\autoref{prop1.7}, we see that the essential singularities (extremal
points) in ${\rm dbl}(A_0)$ are isolated. Thus, there are only finitely
many points $\{ x_1, \cdots, x_k\}$ on $\partial A_0$ such that
$$
{\rm diam}[\Sigma_{x_j}(A_0) ] \le \frac{\pi}{2}
$$
for $j =1,\cdots, k$. We
can divide our boundary curve $\partial A_0$ into $k$ arcs, say
$[
\partial A_0 - \{ x_1, \cdots, x_k\}] = \cup \gamma_j$. Using a similar
argument as in Subcase 3.1, we can show that, for each $\gamma_j$,
its $\varepsilon$-neighborhood $U_\varepsilon(\gamma_j)$ is
homeomorphic to a finite cylinder $C_j \sim [D^2 \times (0,
\ell_j)]$. Since $Y^3_\infty$ is a $3$-manifold, by the proof of
\autoref{prop1.15}, we know that $B_{Y^3_\infty}(x_j, \varepsilon)$
is homeomorphic to $D^3$. Consequently, we have
$$
U_\varepsilon(\partial A_0) = [\cup C_j] \bigcup [\cup
B_{Y^3_\infty}(x_j, \varepsilon)],
$$
which is homeomorphic to a solid torus $D^2 \times S^1$. This
completes our proof of the assertion that $ U_\varepsilon(A_0) \sim
\{U_\varepsilon(\partial A_0) \cup [ A_0 \times ( - \varepsilon,
\varepsilon)] \} $ is homeomorphic to $D^3$. Therefore, $Y^3_\infty
\sim [D^3 \cup (S^2 \times \mathbb R)]$ is homeomorphic to $\mathbb
R^3$.
This finishes the proof of our soul theorem in all cases.
\end{proof}
\subsection{Applications of the soul theory to the proof of \autoref{thm2.1}.}\label{section2.3} \
\smallskip
\smallskip
Using \autoref{thm2.11}, we can complete the proof of
\autoref{thm2.1}.
\begin{proof}[Proof of \autoref{thm2.1}]
Let $\lambda_\alpha$ and $\tilde g^\alpha$ be defined by
\eqref{eq:2.1}. We may assume that $\{((M^3_{\alpha},\tilde
g^{\alpha}),x_{\alpha})\}$ is convergent to a pointed Alexandrov
space $(Y^s_{\infty}, y_{\infty})$ of non-negative curvature, by
replacing $x_\alpha$ with $x'_\alpha$ in \autoref{prop2.2} if
needed. By \autoref{prop2.3}, we see that the limiting space
$Y^s_\infty$ is non-compact and complete with $\dim Y^s_\infty
=3$; hence we write $Y^3_\infty$. Furthermore, $Y^3_\infty$ has no boundary. By Perelman's
stability theorem (cf. \cite{Kap2007}), we see that the limit space
$Y^3_\infty$ is a topological $3$-manifold.
By \autoref{thm1.17}, we see that $\partial B_{(M^3_\alpha, \tilde
g^{\alpha})} (x_\alpha, r)$ is homeomorphic to a quotient of the
$2$-torus $T^2$. The notion of ideal boundary $Y^3_\infty(\infty)$
of $Y^3_\infty$ can be found in \cite{Shio94}. In our case, the
ideal boundary $Y^3_\infty(\infty)$ at infinity of $Y^3_\infty$ is
homeomorphic to a circle $\partial B_{X^2}(x_\infty, r)$. We will
verify that $N^k \sim S^1$ as follows. Let $U_\varepsilon(N^k)$ be an
$\varepsilon$-tubular neighborhood of the soul $N^k$ in $Y^3_\infty
$. By Perelman's stability theorem and our assumption that
$M^3_\alpha $ is oriented, we observe that, for sufficiently large
$\alpha$, the boundary $\partial U_r(N^k)$ is homeomorphic to
$\partial B_{(M^3_\alpha, \tilde g^{\alpha} )}(x_\alpha, r') \sim
T^2$. Thus, the soul $N^k$ of $Y^3_\infty$ must be a circle $S^1$. In this
case, it follows from \autoref{thm2.11} that the metric on $Y^3_\infty$ (or
on its universal cover) splits. Therefore, it follows from
Perelman's stability theorem (cf. \cite{Kap2007}) that
$B_{(M^3_\alpha, \tilde g^{\alpha} )}(x_\alpha, r')$ is homeomorphic
to a solid torus, which is foliated by orbits of a free circle action
for sufficiently large $\alpha$. We discuss more about gluing and
perturbing our local circle actions in \autoref{section6}.
\end{proof}
We will use \autoref{thm2.1}-\ref{thm2.11} to derive more refined
results for collapsing $3$-manifolds with curvature $\ge -1$ in
upcoming sections.
\section{Admissible decompositions for collapsed
$3$-manifolds}\label{section3}
Let $S^2(\varepsilon)$ be a round sphere of constant curvature
$\frac{1} {\varepsilon^2}$. It is clear that $S^2(\varepsilon)\times
[a,b]$ is convergent to $[a,b]$ with non-negative curvature as
$\varepsilon\to 0$. The product space $W_{\varepsilon} =
S^2(\varepsilon) \times [a,b]$ is not a graph manifold. However, if
$W_{\varepsilon}$ is contained in the interior of a collapsed
$3$-manifold $M^3_{\alpha}$ with boundary, then for topological
reasons, $W_{\varepsilon}$ still has a chance to become a part of
the graph-manifold $M^3_{\alpha}$.
Let us now use the language of Cheeger-Gromov's $F$-structure theory
to describe $3$-dimensional graph-manifolds. It is known that a
$3$-manifold $M^3$ is a graph-manifold if and only if $M^3$ admits
an $F$-structure of positive rank, which we now describe.
An F-structure, $\mathscr{F}$, is a topological structure which
extends the notion of torus action on a manifold, (see \cite{CG1986}
and \cite{CG1990}). In fact, the more significant concept is that of
an atlas (of charts) for an F-structure.
An atlas for an F-structure on a manifold $M^n$ is defined by a
collection of triples $\{(U_i, V_i, T^{k_i})\}$, called charts,
where $\{U_i\}$ is an open cover of $M^n$ and the torus, $T^{k_i}$,
acts effectively on a finite normal covering, $\pi_i: V_i\to U_i$,
such that the following conditions hold:
\begin{enumerate}[(3.1)]
\item There is a homomorphism, $\rho_i: \Gamma_i=\pi_1(U_i)\to
Aut(T^{k_i})$, such that the action of $T^{k_i}$ extends to an
action of the semi-direct product $T^{k_i}\ltimes _{\rho_i}
\Gamma_i$, where $\pi_1(U_i)$ is the fundamental group of $U_i$;
\item If $U_{i_1}\cap U_{i_2}\ne\varnothing$, then $U_{i_1}\cap
U_{i_2}$ is connected. If $k_{i_1} \le k_{i_2}$, then on a suitable
finite covering of $U_{i_1}\cap U_{i_2}$, their lifted tori-actions
commute after appropriate re-parametrization.
\end{enumerate}
The compatibility condition (3.2) on lifted actions implies that
$M^n$ decomposes as a disjoint union of orbits, $\mathcal {O}$, each
of which carries a natural flat affine structure. The orbit
containing $x\in M^n$ is denoted by $\mathcal {O}_x$. The dimension
of an orbit of minimal dimension is called the rank of the
structure.
\begin{prop}[\cite{CG1986}, \cite{CG1990}]\label{prop3.1}
A $3$-dimensional manifold $M^3$ with possible non-empty boundary is
a graph-manifold if and only if $M^3$ admits an F-structure of
positive rank.
\end{prop}
For $3$-dimensional manifolds, we will see that 7 out of 8
geometries admit an F-structure. Therefore, these seven types of
locally homogeneous spaces are graph-manifolds.
\begin{example}\label{ex3.2}
Let $M^3$ be a closed locally homogeneous space of dimension 3, such
that its universal covering space $\tilde M^3$ is isometric to
one of the seven geometries: $\mathbb R^3, S^3, \mathbb H^2\times\mathbb R,
S^2\times \mathbb R, \tilde{SL}(2,\mathbb R), Nil$ and $Sol$. Then
$M^3$ admits an F-structure and hence it is a graph-manifold.
Let us elaborate on this in detail as follows.
\begin{enumerate}[{\rm (i)}]
\item
If $M^3=\mathbb R^3/\Gamma$ is a flat $3$-manifold, then it is
covered by a $3$-dimensional torus. Hence, it is a graph-manifold.
\item
If $M^3=S^3/\Gamma$ is a lens space, then its universal cover $S^3$
admits the classical Hopf fibration:
$$S^1\to S^3\to S^2.$$
It follows that $M^3$ is a graph-manifold.
\item If $M^3= (\mathbb H^2\times\mathbb R)/\Gamma$ is
a closed $3$-manifold, then a theorem of Eberlein implies that a
finite normal cover $\hat M^3$ of $M^3$ is diffeomorphic to
$N^2\times S^1$, where $N^2$ is a closed surface of genus $\ge 1$,
(see Proposition 5.11 of \cite{CCR2001}, \cite{CCR2004}). Hence
$M^3$ is a graph-manifold.
\item
If $M^3= (S^2\times\mathbb R)/\Gamma$, then a finite cover is
isometric to $S^2\times S^1$. Clearly, $M^3$ is a graph-manifold. We
should point out that a quotient space $(S^2\times S^1)/\mathbb Z_2$
may be homeomorphic to $\mathbb {RP}^3 \# \mathbb {RP}^3$.
\item
If $M^3= \tilde{SL}(2,\mathbb R)/\Gamma$, then a finite cover $\hat
M^3$ of $M^3$ is diffeomorphic to the unit tangent bundle of a
closed surface $N_k^2$ of genus $k\ge 2$. Thus, we may assume that
$\hat M^3=SN^2_k=\{(x,\vec v)| x\in N_k^2, \vec v\in T_x(N_k^2),
|\vec v|=1\}$. Clearly, there is a circle fibration
$$S^1\to \hat M^3\to N_k^2.$$
It follows that $M^3$ is a graph-manifold.
\item If $M^3= Nil/\Gamma$, then the universal cover
$$\tilde
M^3=Nil=\left\{\left(\left.
\begin{array}{ccc}
1 & x & z \\
0 & 1 & y \\
0 & 0 & 1 \\
\end{array}
\right) \right| x,y,z \in \mathbb R\right\}
$$
is a $3$-dimensional Heisenberg group. Let
$$\hat \Gamma=\left\{\left(\left.
\begin{array}{ccc}
1 & m & k \\
0 & 1 & n \\
0 & 0 & 1 \\
\end{array}
\right)
\right|k,m,n\in \mathbb Z\right\}
$$
be the integer lattice
group of $Nil$. A finite cover $\hat M^3$ of $M^3$ is a circle
bundle over a $2$-torus. Therefore $M^3$ is a graph-manifold, which
can be a diameter-collapsing manifold.
\item If $M^3= Sol/\Gamma$, then $M^3$ is foliated
by tori, M\"{o}bius bands or Klein bottles, which is a
graph-manifold.
\end{enumerate}
\end{example}
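To make case (vi) concrete, one can exhibit the circle fibration of the finite cover $\hat M^3=Nil/\hat\Gamma$ explicitly; the following is a standard sketch. Consider the projection which forgets the $z$-coordinate,
$$
\pi\left(
\begin{array}{ccc}
1 & x & z \\
0 & 1 & y \\
0 & 0 & 1 \\
\end{array}
\right)=(x,y) \mod (\mathbb Z\oplus\mathbb Z).
$$
Left multiplication by a lattice element of $\hat\Gamma$ changes $(x,y)$ only by integers, so $\pi$ descends to a well-defined map $\pi: Nil/\hat\Gamma\to T^2=\mathbb R^2/\mathbb Z^2$. Each fiber is the image of a coset of the center $\{x=y=0\}$, hence a circle. This exhibits the circle fibration
$$S^1\to Nil/\hat\Gamma\to T^2$$
over the $2$-torus.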
Let us consider a graph-manifold which is not a compact quotient of
a homogeneous space.
\begin{example}
Let $N^2_i$ be a surface of genus $\ge 2$ and with a boundary
circle, for $i=1,2$. Clearly, $\partial (N^2_i\times
S^1)=S^1\times S^1$. We glue $N^2_1\times S^1$ to $S^1\times
N^2_2$ along their boundaries with $S^1$-factor switched. The
resulting manifold $M^3=(N^2_1\times S^1)\cup(S^1\times
N^2_2)$ does not admit a global circle fibration, but $M^3$ is
a graph-manifold.
\end{example}
As we pointed out above, $S^2\times [a,b]$ can be collapsed to an
interval $[a,b]$ with non-negative curvature. Suppose that $W$ is a
portion of a collapsed $3$-manifold $M^3$ such that $W$ is
diffeomorphic to $S^2\times[a,b]$. We need to glue extra solid
handles to $W$ so that the collapsed $3$-manifold under
consideration becomes a graph-manifold. For this purpose, we divide
$S^2\times [a,b]$ into three parts. In fact, $S^2$ with two disks
removed, $S^2-(D^2_1\sqcup D^2_2)$, is diffeomorphic to an annulus
$A$. Thus, $S^2\times [a,b]$ has a decomposition
$$
S^2\times [a,b]=\big(D^2_1\times [a,b]\big) \sqcup (D^2_2\times
[a,b]) \sqcup (A\times [a,b]).
$$
The product space $A\times [a,b]$ clearly admits a free circle
action, and hence is a graph-manifold. For the solid cylinder parts
$D^2_i\times [a,b]$, if one can glue the two solid cylinders together,
then one might end up with a solid torus $D^2\times S^1$, which is
again a graph-manifold.
We will decompose a collapsed $3$-manifold $M^3_{\alpha}$ with
curvature $\ge -1$ into four major parts according to the dimension
$k$ of limiting set $X^k$:
$$
M^3_{\alpha}=V_{\alpha,X^0}\cup V_{\alpha, {\rm int} (X^1)}\cup
V_{\alpha,{\rm int} (X^2)}\cup W_{\alpha}
$$
where ${\rm int} (X^s)$ denotes the interior of the space $X^s$.
The portion $V_{\alpha,X^0}$ of $M^3_{\alpha}$ consists of the union
of closed, connected components of $M^3_{\alpha}$ that admit
Riemannian metrics of non-negative sectional curvature.
\begin{prop}\label{prop3.4}
Let $V_{\alpha,X^0}$ be a union of manifolds of one of the following types:
\begin{enumerate}[{\rm (1)}]
\item a spherical $3$-dimensional space form;
\item a manifold double covered by $S^2\times S^1$;
\item a closed flat $3$-manifold.
\end{enumerate}
Then $V_{\alpha,X^0}$ can be collapsed to a $0$-dimensional manifold
with non-negative curvature. Moreover, $V_{\alpha,X^0}$ is a
graph-manifold.
\end{prop}
We denote the regular part of $X^2$ by $X^2_{reg}$. Let us now
recall a reduction for the proof of Perelman's collapsing theorem
due to Morgan-Tian. Earlier related work on 3-dimensional collapsing
theory was done by Xiaochun Rong in his thesis, (cf. \cite{R1993}).
\begin{theorem}[Morgan-Tian \cite{MT2008}, Compare \cite{R1993}]\label{thm3.5}
Let $\{M^3_{\alpha}\}$ be a sequence of compact $3$-manifolds
satisfying the hypothesis of Theorem 0.1' and $V_{\alpha,X^0}$ be as
in \autoref{prop3.4} above. If, for sufficiently large $\alpha$,
there exist compact, co-dimension 0 submanifolds
$V_{\alpha,X^1}\subset M^3_{\alpha}$ and $V_{\alpha, X^2_{reg}}
\subset M^3_{\alpha}$ with $\partial M^3_{\alpha}\subset
V_{\alpha,X^1}$ satisfying six conditions listed below, then
\autoref{thm0.1} holds, where six conditions are:
\begin{enumerate}[{\rm (1)}]
\item Each connected component of $V_{\alpha,X^1}$ is diffeomorphic
to the following.
\begin{enumerate}[{\rm (a)}]
\item a $T^2$-bundle over $S^1$, or a union of two twisted
I-bundles over the Klein bottle along their common boundary;
\item $T^2\times I$ or $S^2\times I$, where $I=[a,b]$ is a closed
interval;
\item a compact $3$-ball or the complement of an open $3$-ball in
$\mathbb{RP}^3$ which is homeomorphic to $\mathbb {RP}^2 \ltimes
[0, \frac 12]$;
\item
a twisted I-bundle over the Klein bottle, or a solid torus.
\end{enumerate}
In particular, every boundary component of $V_{\alpha,X^1}$ is
either a $2$-sphere or a $2$-torus.
\item
$V_{\alpha,X^1} \cap V_{\alpha,X^2_{reg}}=(\partial
V_{\alpha,X^1})\cap(\partial V_{\alpha,X^2_{reg}})$;
\item If $N^2_0$ is a $2$-torus component of $\partial
V_{\alpha,X^1}$, then $N^2_0 \subset \partial
V_{\alpha,X^2_{reg}}$ if and only if $N_0^2$ is not a component of
$\partial M^3_{\alpha}$;
\item If $N^2_0$ is a $2$-sphere component of $\partial
V_{\alpha,X^1}$, then $N^2_0 \cap \partial
V_{\alpha,X^2_{reg}}$ is diffeomorphic to an annulus;
\item $V_{\alpha,X^2_{reg}}$ is the total space of a locally trivial
$S^1$-bundle and the intersection $V_{\alpha,X^1}\cap
V_{\alpha,X^2_{reg}}$ is saturated under this fibration;
\item The complement $W_{\alpha}=[M^3_{\alpha}
-(V_{\alpha,X^0}\cup V_{\alpha,X^1}\cup V_{\alpha,X^2_{reg}})]$
is a disjoint union of solid tori and solid cylinders. The boundary
of each solid torus is a boundary component of
$V_{\alpha,X^2_{reg}}$, and each solid cylinder $D^2\times I$ in
$W_\alpha$ meets $V_{\alpha,X^1}$ exactly in $D^2\times \partial I$.
\end{enumerate}
\end{theorem}
\begin{proof}
(\cite{MT2008}) The proof of \autoref{thm3.5} is purely topological
and has nothing to do with the collapsing theory.
Morgan and Tian \cite{MT2008} first verified \autoref{thm3.5} for
special cases under additional assumption on $V_{\alpha,X^1}$:
\begin{enumerate}[(i)]
\item $V_{\alpha,X^1}$ has no closed components;
\item Each $2$-sphere component of $\partial V_{\alpha,X^1}$ bounds
a $3$-ball component of $V_{\alpha,X^1}$;
\item Each $2$-torus component of $\partial V_{\alpha,X^1}$ that is
compressible in $M^3_{\alpha}$ bounds a solid torus component of
$V_{\alpha,X^1}$.
\end{enumerate}
The general case can be reduced to a special case by a purely
topological argument.
\end{proof}
\begin{definition}\label{def3.6}
If a collapsed $3$-manifold $M^3_{\alpha}$ has a decomposition
$$M^3_{\alpha}= V_{\alpha,X^0}\cup V_{\alpha,X^1}\cup
V_{\alpha,X^2_{reg}}\cup W_{\alpha}$$ satisfying six properties
listed in \autoref{thm3.5} and if $V_{\alpha,X^0}$ is a union of
closed $3$-manifolds which admit smooth Riemannian metrics of
non-negative sectional curvature, then such a decomposition is
called an admissible decomposition of $M^3_{\alpha}$.
\end{definition}
In \autoref{section1}-\ref{section2}, we already discussed the part
$V_{\alpha, {\rm int} (X^2)}$ and a portion of $W_{\alpha}$. In the
next section, we discuss the collapsing part $V_{\alpha, {\rm int}
(X^1)}$ with spherical or toral fibers for our $3$-manifold
$M^3_{\alpha}$, where ${\rm int} (X^s)$ is the interior of $X^s$.
\section{Collapsing with spherical and toral fibers }\label{section4}
In this section, we discuss the case when a sequence of metric balls
$\{(B_{(M^3_{\alpha}, \tilde g^{\alpha}_{ij})}(x_\alpha, r),
x_{\alpha})\}$ collapse to $1$-dimensional space $(X^1,x_{\infty})$.
There are only two possibilities for $X^1$: it is homeomorphic either
to a circle or to an interval $[0,l]$.
By Perelman's fibration theorem \cite{Per1994} or Yamaguchi's
fibration theorem, we can find an open neighborhood $U_{\alpha}$ of
$x_{\alpha}$ such that, for sufficiently large $\alpha$, there is a
fibration
$$ N^2_{\alpha}\to U_{\alpha}\to {\rm int} (X^1)$$
where ${\rm int} (X^1)$ is isometric to a circle $S^1$ or an open
interval $(0,l)$.
We will use the soul theory (e.g., \autoref{thm2.11}) to verify that
a finite cover of the collapsing fiber $N^2_\alpha$ must be
homeomorphic to either a $2$-sphere $S^2$ or a $2$-dimensional torus
$T^2$ (see \autoref{fig:4.1} in \S 0).
Let us begin with two examples of collapsing $3$-manifolds with toral
fibers.
\begin{example}\label{ex4.1}
Let $M^3=\mathbb H^3/\Gamma$ be an oriented, non-compact quotient
of $\mathbb H^3$ such that $M^3$ has finite volume and exactly one
end. Let $\sigma: [0,\infty)\to \mathbb H^3/\Gamma$ be a geodesic
ray. We consider the corresponding Busemann function
$h_{\sigma}(x)=\lim_{t\to +\infty}[t-{\rm d}(x,\sigma(t))]$. For
sufficiently large $c$, the super-level set
$V_{c, X^1}=h^{-1}_{\sigma}([c,+\infty))$ has special properties.
It is well known that, in this case, the cusp end $V_{c,X^1}$ is
diffeomorphic to $T^2\times [c,+\infty)$. Of course, the component
$V_{c, X^1}\cong T^2\times [c,+\infty)$ admits a collapsing family
of metrics $\{g_{\varepsilon}\}$ such that $(V_{c, X^1},
g_{\varepsilon})$ converges to the half-line $[c,+\infty)$. \qed
\end{example}
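One standard way to realize such a collapsing family is the following sketch, assuming the cusp metric is normalized in the usual warped-product form. On $T^2\times[c,+\infty)$ the hyperbolic cusp metric is
$$g = dr^2 + e^{-2r}g_{T^2},$$
where $g_{T^2}$ is a flat metric on the torus. Re-scaling the fiber,
$$g_\varepsilon = dr^2 + \varepsilon^2 e^{-2r}g_{T^2},$$
does not change the sectional curvature, since multiplying the warping function by a constant leaves $-f''/f$ and $-(f'/f)^2$ unchanged; each $(V_{c,X^1}, g_\varepsilon)$ still has curvature $\equiv -1$. Meanwhile the diameter of each cross-section $T^2\times\{r\}$ is at most $\varepsilon e^{-r}{\rm diam}(T^2, g_{T^2})\to 0$. Hence $(V_{c,X^1}, g_\varepsilon)$ converges to the half-line $[c,+\infty)$ as $\varepsilon\to 0$.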
We would like to point out that $2$-dimensional collapsing fibers
can be collapsed at two different speeds.
\begin{example}
Let $M^3_{\varepsilon}=(\mathbb R/\varepsilon\mathbb
Z)\times(\mathbb R/\varepsilon^2\mathbb Z)\times[0,1]$ be the
product of the rectangular torus $T_{\varepsilon,\varepsilon^2}$ and an
interval. Let us fix a parametrization of $M^3_{1}=\{(e^{2\pi
si},e^{2\pi t i}, u)|u\in [0,1], s,t\in \mathbb R\}$ and
$g_{\varepsilon}=\varepsilon^2ds^2+\varepsilon^4dt^2+du^2$. Then the
re-scaled pointed spaces $\{
((M^3_{1},\frac{1}{\varepsilon^2}g_{\varepsilon}),(1,1,\frac12))\}$
are convergent to the limiting space $(Y^2_{\infty}, y_{\infty})$,
where $Y^2_{\infty}$ is isometric to $S^1\times (-\infty,+\infty)$.
\end{example}
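The convergence claimed in this example can be checked directly. After re-scaling,
$$\frac{1}{\varepsilon^2}g_\varepsilon = ds^2 + \varepsilon^2\,dt^2 + \frac{1}{\varepsilon^2}\,du^2.$$
Hence, in the re-scaled metric, the $s$-circle has length $1$, the $t$-circle has length $\varepsilon\to 0$, and the interval $u\in[0,1]$ has length $\frac{1}{\varepsilon}\to +\infty$. Since the base point $u=\frac12$ lies in the middle of the interval, the pointed limit is $S^1\times(-\infty,+\infty)$, as asserted.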
Similarly, when the collapsing fiber is homeomorphic to a 2-sphere
$S^2$, the collapsing speeds could be different along longitudes and
latitudes. We may assume that latitudes shrink at speed
$\varepsilon^2$ and longitudes shrink at speed $\varepsilon$. For
the same reason, after re-scaling, the limit space $Y^2_\infty$
could be isometric to $ [0,1] \times (-\infty, +\infty)$. Thus, the
non-compact limiting space $ Y^2_\infty$ could have boundary. \qed
\medskip
Let us return to the proof of Perelman's collapsing theorem.
According to the second condition of \autoref{thm0.1}, we consider a
boundary component $N^2_{\alpha, X^1} \subset
\partial M^3_{\alpha}$, where the diameter of $N^2_{\alpha,
X^1}$ is at most $\omega_\alpha \to 0$ as $\alpha \to \infty$.
Moreover, there exists a topologically trivial collar $\hat
V_{\alpha, X^1}$ of length one, and the sectional curvatures of
$M^3_{\alpha}$ are between $(-\frac14-\varepsilon)$ and
$(-\frac14+\varepsilon)$. In this case, we have a trivial fibration:
\begin{equation}\label{eq4.1}
T^2_{\alpha}\to \hat V_{\alpha, X^1}\xrightarrow{\pi_{\alpha}}[0,1]
\end{equation}
such that the diameter of each fiber $\pi^{-1}_{\alpha}(t)$ is at
most $\frac{e+e^{-1}}{2}\omega_{\alpha}$ by the standard comparison
theorem. As $\alpha \to +\infty$, the sequence $\{\hat V_{\alpha,
X^1}\}$ converges to an interval $X^1=[0,1]$.
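The constant $\frac{e+e^{-1}}{2}=\cosh(1)$ arises from the Jacobi field comparison; the following is a rough sketch, in which we relax the curvature bound to ${\rm curv}\ge -1$ for the comparison. By the Rauch comparison theorem, a Jacobi field $J$ along a unit-speed geodesic with $J'(0)=0$ satisfies
$$|J(t)|\le \cosh(t)\,|J(0)|, \qquad 0\le t\le 1.$$
Thus, transporting the boundary fiber, of diameter at most $\omega_\alpha$, across the collar of length one stretches distances by a factor of at most $\cosh(1)=\frac{e+e^{-1}}{2}$.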
We are ready to work on the main result of this subsection.
\begin{theorem}\label{thm4.3}
Let
$\{((M^3_{\alpha},\rho^{-2}_{\alpha}g^{\alpha}_{ij}),x_{\alpha})\}$
be as in Theorem 0.1'. Suppose that a $1$-dimensional space $X^1$
is contained in the $1$-dimensional limiting space, $x_{\alpha}\to
x_{\infty}$, and $x_\infty$ is an interior point of $X^1$. Then,
for sufficiently large $\alpha$,
there exists a sequence of subsets $\hat V_{\alpha, X^1}\subset
M^3_{\alpha}$ such that $\hat V_{\alpha, X^1}$ is fibering over
${\rm int} (X^1)$ with spherical or toral fibers.
$$N^2_\alpha\to \hat V_{\alpha, {\rm int} (X^1)}\to {\rm int} (X^1)$$
where $N^2_\alpha$ is homeomorphic to a quotient of a $2$-sphere
$S^2$ or a $2$-torus $T^2$.
When $M^3_\alpha$ is oriented, $N^2_\alpha$ is either $S^2$ or
$T^2$.
\end{theorem}
\begin{proof}
As we pointed out above, since $ {\rm int} (X^1)$ is a
$1$-dimensional space, there exists a fibration
$$N^2_\alpha\to \hat V_{\alpha, {\rm int} (X^1)}\to {\rm int} (X^1)$$
for sufficiently large $\alpha$. It remains to verify that the fiber
is homeomorphic to $S^2, \mathbb{RP}^2, T^2$ or the Klein bottle
$T^2/\mathbb Z_2$. For this purpose, we use the soul theory for the
possibly singular space $Y^k_{\infty}$ with non-negative curvature.
By our discussion, $B_{\rho^{-2}_{\alpha}g^{\alpha}}(x_{\alpha},1)$
is homeomorphic to $N^2_\alpha\times (0,l)$, where $N^2_\alpha$ is a
closed $2$-dimensional manifold. Thus, the distance function
$r_{x_{\alpha}}(x)={\rm d}_{\rho^{-2}_{\alpha}g^{\alpha}}(x_{\alpha},x)$
has at least one critical point $y_{\alpha}\ne x_{\alpha}$ in
$B_{\rho^{-2}_ {\alpha}g^{\alpha}}(x_{\alpha},1)$, because
$B_{g^{\alpha}}(x_{\alpha},\rho_{\alpha})$ is not contractible. Let
$\lambda_\alpha$ be
$$
\max\{{\rm d}_{\rho^{-2}_{\alpha}g^{\alpha}}(x_{\alpha},y_{\alpha})|
y_{\alpha}\ne x_{\alpha} \text { is a critical point of }
r_{x_{\alpha}} \text { in } B_{\rho^{-2}_ {\alpha}
g^{\alpha}}(x_{\alpha},1)\}.
$$
We claim that $0<\lambda_{\alpha}<1$ and $\lambda_{\alpha}\to 0$ as
$\alpha\to +\infty$. To verify this assertion, we observe that the
distance functions $\{r_{x_\alpha}\}$ converge to
$r_{x_{\infty}} : X^1\to \mathbb R$. By Perelman's convergence
theorem, the trajectory of the gradient semi-flow
$$
\frac{d^+\varphi}{dt}= \frac{\nabla r_{x_\alpha}}{|\nabla
r_{x_\alpha}|^2}
$$
is convergent to the trajectory in the limit space $X^1$:
\begin{equation}\label{eq4.2}
\frac{d^+\varphi}{dt}= \nabla r_{x_\infty}
\end{equation}
(see \cite{Petr2007} or \cite{KPT2009}).
Clearly, $r_{x_\infty}: X^1\to \mathbb R$ has no critical value in
$(0,\delta_{\infty})$ for some $\delta_{\infty}>0$. Thus, for
sufficiently large $\alpha$, the distance function $r_{x_{\alpha}}$
has no critical value in $(\lambda_{\alpha},
\delta_{\infty}-\varepsilon_{\alpha})$, where $\lambda_{\alpha}\to
0$ and $\varepsilon_{\alpha}\to 0$ as $\alpha\to +\infty$.
Let us now re-scale our metrics
$\{\rho^{-2}_{\alpha}g^{\alpha}_{ij}\}$ by $\{\lambda^{-2}_ {\alpha}
\}$ again. Suppose $\tilde g^{\alpha}_{ij}= \frac{1}
{\lambda^2_{\alpha} \rho^2_{\alpha}}g^{\alpha}_{ij}$. Then a
subsequence of the sequence of the pointed spaces
$\{(M^3_{\alpha},\tilde g^{\alpha}_{ij} ), x_{\alpha}\}$ will
converge to $(Y^k_{\infty}, y_{\infty})$. The curvature of
$Y^k_{\infty}$ is greater than or equal to $0$, because
${\rm curv}_{\frac{1} {\lambda^2_{\alpha} \rho^2_{\alpha}}
g^{\alpha}_{ij}} \ge -{\lambda^2_{\alpha}}\to 0$ as $\alpha\to
+\infty$.
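The sign of the lower curvature bound follows from the standard scaling behavior of sectional curvature: if a metric is multiplied by a constant $c^2$, its sectional curvature is multiplied by $c^{-2}$. Assuming the normalization ${\rm curv}_{\rho^{-2}_{\alpha}g^{\alpha}_{ij}}\ge -1$ from Theorem 0.1', we get
$$
{\rm curv}_{\tilde g^{\alpha}_{ij}}
= {\rm curv}_{\lambda^{-2}_{\alpha}\left(\rho^{-2}_{\alpha} g^{\alpha}_{ij}\right)}
= \lambda^{2}_{\alpha}\,{\rm curv}_{\rho^{-2}_{\alpha} g^{\alpha}_{ij}}
\ge -\lambda^{2}_{\alpha}\to 0
$$
as $\alpha\to +\infty$.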
Since the distance function $r_{y_{\infty}}(y)={\rm d}(y, y_{\infty})$
has a critical point $z_{\infty}\ne y_{\infty}$ with
${\rm d}(z_{\infty}, y_{\infty})\le 1$, we have $\dim(Y^k_{\infty})>1$.
When $\dim(Y^k_{\infty})=3$, we observe that $Y^3_{\infty}$ has
exactly two ends in our case. Thus $Y^3_{\infty}$ admits a line and
hence its metric splits. A soul $N^2_\infty$ of $Y^3_{\infty}$ must
be $2$-dimensional. Thus $Y^3_{\infty}$ is isometric to
$N^2_\infty \times \mathbb R$. It follows that the soul $N^2_\infty$
of $Y^3_{\infty}$ has non-negative curvature. By Perelman's
stability theorem, $N^2_\alpha$ is homeomorphic to $N^2_\infty$ for
sufficiently large $\alpha$. A closed, possibly singular, surface
$N^2_\infty$ of non-negative curvature has been classified in
\autoref{section2}: it is $S^2, \mathbb{RP}^2, T^2$ or the Klein
bottle $T^2/\mathbb Z_2$.
Let us consider the case $\dim (Y^k_{\infty})=2$. We may assume
that the limiting space $Y^2_{\infty}$ has exactly two ends. When
$Y^2_\infty$ has no boundary, the limit space is isometric to
$S^1\times \mathbb R$, because $Y^2_{\infty}$ has exactly two ends.
By our discussion in \S 1-2, we have a fibration structure
\begin{equation}\label{eq4.3}
S^1\to M^3_{\alpha}\to Y^2_{\infty}
\end{equation}
for sufficiently large $\alpha$.
When $Y^2_\infty$ has non-empty boundary (i.e., $\partial
Y^2_{\infty} \neq \varnothing$), there are two subcases. If
$y_\infty \in {\rm int}(Y^2_\infty)$, by our discussion in
\autoref{section2}, we still have $ N^2_\alpha \sim T^2/\Gamma$. If
$y_\infty \in \partial Y^2_\infty$, using \autoref{thm5.4} below, we
see that $N^2_\alpha \sim [\partial B_{M^3_\alpha}(x_\alpha, r)]
\sim S^2$. This completes the proof of our theorem for all cases.
\end{proof}
In the next section, we will discuss the endpoints $\partial X^1$
when $X^1\cong [0, l]$ is an interval. In addition, we also discuss
the boundary $\partial X^2$ when $X^2$ is a surface with convex
boundary.
\section{Collapsing to boundary points or endpoints of the limiting
space}\label{section5}
Suppose that a sequence of $3$-manifolds is collapsing to a lower
dimensional space $X^s$. In previous sections we showed that there
is (possibly singular) fibration
$$
N^{3-s}_\alpha \to B_{M^3_{\alpha}} (x_\alpha, r) \xrightarrow{G-H}
{\rm int} (X^s).
$$
In this section we consider the points on the boundary of $X^s$.
We divide our discussion into two cases: namely (1) when $s=1$
and $X^1$ is a closed interval; and (2) when $s=2$ and $X^2$ is a
surface with boundary.
\subsection{Collapsing to a surface with boundary} \
\smallskip
Since $X^2$ is a topological manifold with boundary, without loss of
generality, we can assume that $\partial X^2=S^1$. First we provide
two examples to demonstrate that the collapsing could occur in many
different ways.
\begin{example}\label{ex 5.1}
Let $M^3_1 =D^2 \times S^1$ be a solid torus, where $D^2 = \{ (x, y)
| x^2 + y^2 < 1 \}$. We will construct a family of metrics
$g_\varepsilon$ on the disk so that $D^2_\varepsilon = (D^2,
g_\varepsilon) $ is converging to the interval $[0, 1)$. It follows
that if $M^3_{\frac{1}{\varepsilon}} = D^2_\varepsilon \times S^1$
then the sequence of $3$-spaces is converging to a finite cylinder:
i.e., $M^3_{\frac{1}{\varepsilon}} \to \big([0, 1) \times S^1 \big)$
as $\varepsilon \to 0$.
For this purpose, we let
$$
D^2_\varepsilon=\{(x, y, z)\in \mathbb R^3|z^2=\tan ( \frac{1}{
\varepsilon })[ x^2 + y^2],z\ge 0, x^2 + y^2 + z^2 < 1 \}
$$
We further make a smooth perturbation around the vertex $(0, 0,
0)$, while keeping curvature non-negative. Clearly, as $\varepsilon
\to 0$, the family of conic surfaces $\{ D^2_\varepsilon \}$
collapses to an interval $[0, 1)$. Let $X^2 = [0, 1) \times S^1$ be
the limiting space. As $\varepsilon \to 0$, our $3$-manifolds $
M^3_{\frac{1}{\varepsilon}}= D^2_\varepsilon \times S^1$ collapse
to $X^2$ with $\partial X^2 \neq \varnothing$.
In this example, for every point $p\in
\partial X^2$, the space of directions $N_p(X^2)$ is a closed
half circle with diameter $\pi$.
\end{example}
\begin{example}\label{ex5.2}
In this example, we will construct the limiting surface with
boundary corner points. Let us consider the unit circle $S^1 =
\mathbb R/\mathbb Z$ and let $\varphi: \mathbb R^1 \to \mathbb R^1$
be an involution given by $\varphi(s) = -s $. Then its quotient
$S^1/\langle \varphi \rangle$ is isometric to $[0, \frac 12]$. Our
target limiting surface will be $ X^2 = [0, 1) \times [0, \frac
12]$. Clearly, $X^2$ has a boundary corner point with total angle
$\frac{\pi}{2}$.
To see that $X^2$ is a limit of smooth Riemannian $3$-manifolds $\{
\bar{M}^3_{ \frac{1}{ \varepsilon } } \}$ while keeping curvatures
non-negative, we proceed as follows. We first view a $2$-sheeted
cover $M^3$ as an orbit space of $N^4$ under a circle action. Let
$N^4 = D^2 \times \big (\mathbb R^2/(\mathbb Z \oplus \mathbb Z ) \big) $ and
\times \big (\mathbb R^2/(\mathbb Z \oplus \mathbb Z ) \big) $ and
let $\{(re^{i\theta},s,t)|0\le\theta\le 2\pi,0\le s\le 1,0\le t\le 1
\}$ be a parametrization of $D^2 \times \mathbb R^2$ and hence for
its quotient $N^4$. There is a circle action $\psi_\lambda: N^4 \to
N^4$ given by $\psi_\lambda ( re^{i\theta}, s, t) = ( re^{i(\theta +
2\pi \lambda)}, s, t+ \lambda)$ for each $e^{i2\pi\lambda} \in S^1$.
We also define an involution $ \tau: N^4 \to N^4$ by
$\tau(re^{i\theta}, s, t) = (re^{i\theta}, -s, t + \frac 12)$. It
follows that $ \tau \circ \tau = id $. Let $S^1 \ltimes \mathbb Z_2$
be the subgroup generated by $\{\tau, \psi_\lambda\}_{\lambda \in S^1
}$. We introduce a family of metrics:
$$
g_\varepsilon = dr^2 + r^2 d\theta^2 + ds^2 + \varepsilon^2 dt^2.
$$
The transformations $\{\tau, \psi_\lambda\}_{\lambda \in S^1 }$
remain isometries for Riemannian manifolds $(N^4, g_\varepsilon)$.
Thus, $\bar{M}^3_{ \frac{1}{\varepsilon} } = (N^4, g_\varepsilon)/ (
S^1 \ltimes \mathbb Z_2)$ is a smooth Riemannian $3$-manifold with
non-negative curvature. It is easy to see that $\bar{M}^3_{
\frac{1}{\varepsilon} }$ is homeomorphic to a solid torus. As
$\varepsilon \to 0$, our Riemannian manifolds $\{ \bar{M}^3_{
\frac{1}{\varepsilon} } \}$ collapse to a lower dimensional space
$X^2 = [0, 1) \times [0, \frac 12] = \{ (r, s) | 0 \le r < 1, 0 \le
s \le \frac 12 \}$. The surface $X^2$ has a corner point with total
angle $\frac{\pi}{2}$.
\end{example}
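To see that $\{\tau, \psi_\lambda\}$ indeed act by isometries of $(N^4, g_\varepsilon)$, one can check directly that each coefficient of $g_\varepsilon$ depends only on $r$, while $\psi_\lambda$ translates $\theta$ and $t$, and $\tau$ reverses $s$ and translates $t$. Hence
$$
\psi_\lambda^{*}g_\varepsilon = g_\varepsilon
\quad\text{and}\quad
\tau^{*}g_\varepsilon = g_\varepsilon,
$$
since $d(\theta+2\pi\lambda)=d\theta$, $d(t+\lambda)=dt$, and $(d(-s))^2=ds^2$. Therefore the quotient metric on $\bar M^3_{\frac{1}{\varepsilon}}$ is well-defined.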
In the above two examples, if we set $\alpha = \frac{1}{\varepsilon}$,
then there exist an open subset $U_\alpha \subset M^3_\alpha$ and a
continuous map $G_\alpha: U_\alpha \to X^2$ such that $G_\alpha^{-1}(
A ) = U_\alpha $ is homeomorphic to a solid torus, where $A$ is an
annular neighborhood of $\partial X^2$ in $X^2$.
Let us recall an observation of Perelman on the distance function
$r_{\partial X^2} $ from the boundary $\partial X^2$.
\begin{lemma}\label{lem5.3}
Let $X^2$ be a compact Alexandrov surface of curvature $\ge c$ and
with non-empty boundary $\partial X^2$, $\Omega_{- \varepsilon}=\{ p
\in X^2 | {\rm d}(p,
\partial X^2) \ge \varepsilon\}$ and $A_\varepsilon = [X^2 -
\Omega_{- \varepsilon}]$. Then, for sufficiently small
$\varepsilon$, the distance function $r_{\partial X^2}
={\rm d}_{X^2}(\cdot, \partial X^2)$ from the boundary has no critical
point in $A_\varepsilon$.
\end{lemma}
\begin{proof}
We will use a calculation of Perelman and a result of Petrunin to
complete the proof. By the definition, if $X^2$ has curvature $\ge
c$, then $\partial X^2$ must be convex. For any $x \in \partial
X^2$, we let $\theta(x) = {\rm diam}[\Sigma_x(X^2)]$ be the total tangent
angle of the convex domain $X^2$. It follows from the convexity that
$0 < \theta(x) \le \pi$.
We consider the distance function $f(y) = {\rm d}_{X^2}(y, \partial
X^2)$ for all $y \in X^2$. Perelman (cf. \cite{Per1991} page 33,
line 1) calculated that
$$
|\nabla f(x)| = \sin \left(\frac{\theta(x)}{2}\right) > 0,
$$
for all $x \in \partial X^2$. (Perelman stated his formula for
spaces with non-negative curvature, but his proof using the first
variational formula is applicable to our surface $X^2$ with
curvature $\ge -1$).
Corollary 1.3.5 of \cite{Petr2007} asserts that if there is a
converging sequence $\{ x_n\} \subset X^2 $ with $x_n \to x \in
\partial X^2$ as $n \to \infty$ then
$$\liminf_{n \to \infty}|\nabla f (x_n) |\ge
|\nabla f (x) |.
$$
Since $X^2$ is compact, it follows from the above discussion that
there is an $\varepsilon >0$ such that $|\nabla f (y) | > 0 $ for $y
\in A_\varepsilon$.
\end{proof}
We now recall a theorem of Shioya-Yamaguchi with our own proof. The
proof of Shioya-Yamaguchi used a version of the Margulis Lemma; we
will instead use our \autoref{thm1.17} in \S1.
\begin{theorem}[\cite{SY2000} page 2]\label{thm5.4}
Let $\{(M^3_\alpha, p_\alpha) \}$ be a sequence of collapsing
$3$-manifolds as in Theorem 0.1'. Suppose that $\{ (B_{M^3_\alpha}(
p_\alpha, r), p_\alpha) \} \to (X^2, p_\infty) $ with $p_\infty \in
\partial X^2$ and $\partial X^2 $ is homeomorphic to $S^1$. Then
there is $\delta_1 > 0$ such that $B_{M^3_\alpha}(p_\alpha,
\delta_1) $ is homeomorphic to $D^2 \times I \cong D^3$ for all
sufficiently large $\alpha$.
Moreover, there exist an $\varepsilon > 0$ and a sequence of closed
curves $\varphi_\alpha: S^1 \to M^3_\alpha $ with $\{\varphi_\alpha
(S^1)\} \to \partial X^2$ as $ B_{M^3_\alpha}( p_\alpha, r)
\stackrel{G-H}{\longrightarrow} X^2$ such that an
$\varepsilon$-tubular neighborhood $ U_{\varepsilon}[\varphi_\alpha
(S^1)] $ of $\varphi_\alpha (S^1)$ in $M^3_\alpha$ is homeomorphic
to a solid torus $D^2 \times S^1$ for sufficiently large $\alpha$.
\end{theorem}
\begin{proof}
By the conic lemma (cf. \autoref{prop1.7}), the number of points $p\in
\partial X^2$ with ${\rm diam} (\Sigma_p)\le \frac{\pi}{2}$ is finite;
we denote them by $\{b_1, \cdots, b_s\}$. Since $\partial X^2 \sim S^1$
is compact, it follows from \autoref{prop1.15} that there is a common
$\ell >0$ such that (1) the distance function $r_p(y) = {\rm d}(p, y)$
has no critical point in $[B_{X^2}(p, \ell) -\{p\}]$; and (2)
$B_{X^2}(p, \ell)$ is homeomorphic to the upper half disk $D^2_+$,
for all $p \in \partial X^2$.
Since $\partial X^2 $ is homeomorphic to $S^1$, we can approximate
$\partial X^2$ by a sequence of closed broken geodesics $\{
\sigma_\infty^{(j)} \}$ with vertices $\{ q_1, \cdots, q_j\} \subset
\partial X^2$. We may assume that $\{b_1, \cdots, b_s\}$ is a
subset of $\{ q_1, \cdots, q_j\}$ for all $j \ge s$. We also require
that the distance between two consecutive vertices is less than
$c/j$ (i.e., ${\rm d}_{\partial X^2}(q_i, q_{i+1}) \le \frac{c}{j} <
\frac{\ell}{8}$), for sufficiently large $j$.
We now choose a sequence of finite sets $\{ p_1^\alpha, \cdots,
p_j^\alpha\}$ such that $p^\alpha_i \to q_i$ as $\alpha \to \infty$
and $\{ p_1^\alpha, \cdots, p_j^\alpha\}$ span an embedded broken
geodesic $\sigma_\alpha^{(j)}$ in $M^3_\alpha$. It follows that
\begin{equation}\label{eq5.1}
\sigma_\alpha^{(j)} \to \sigma_\infty^{(j)}
\end{equation}
as $\alpha \to \infty$. Therefore, we may assume that there is a
sequence of smooth embedded curves $\varphi_\alpha: S^1 \to
M^3_\alpha$ such that
\begin{equation}\label{eq5.2}
\varphi_\alpha(S^1) \to \partial X^2
\end{equation}
as $M^3_\alpha\to X^2$. Thus, when $\{(B_{M^3_\alpha}( p_\alpha,
r), p_\alpha) \} \to (X^2, p_\infty) $, the corresponding closed
curves satisfy $\varphi_\alpha(S^1) \stackrel{G-H}{\longrightarrow}
\partial X^2$ in the Gromov-Hausdorff topology.
We now choose a sufficiently large $j_0 $ and divide our closed
curve $\varphi_\alpha: S^1 \to M^3_\alpha$ into $j_0$-many arcs of
constant speed:
$$
\varphi_{\alpha, i}: [a_i, a_{i+1}] \to M^3_\alpha
$$
for $i = 0, 1,\cdots, j_0$, where $\varphi_\alpha(a_{j_0+1}) =
\varphi_\alpha(a_0)$.
For a fixed $i$, we let
$$
A^i_{\alpha, s} = \varphi_\alpha([a_i + s, a_{i+1} -s])
$$
and $q_{\alpha, i} = \varphi_\alpha(a_i)$. We will show that there
is an $\varepsilon_i $ such that
\begin{equation}\label{eq5.3}
U_{\varepsilon_i}( A^i_{\alpha, 0}) = B_{M^3_\alpha}(q_{\alpha, i},
\varepsilon_i ) \cup U_{\varepsilon_i}( A^i_{\alpha, s} ) \cup
B_{M^3_\alpha}(q_{\alpha, i+1}, \varepsilon_i )
\end{equation}
is homeomorphic to the unit $3$-ball $D^3$. Our theorem will follow
from \eqref{eq5.3}. It remains to establish \eqref{eq5.3}.
We will show that each of $\{ B_{M^3_\alpha}(q_{\alpha, i},
\varepsilon_i ), U_{\varepsilon_i}( A^i_{\alpha, s} ),
B_{M^3_\alpha}(q_{\alpha, i+1}, \varepsilon_i )\}$ is homeomorphic
to $D^3$.
For $U_{\varepsilon_i}( A^i_{\alpha, s} )$, we first observe that
$\partial X^2$ is convex. Thus, the boundary arc $\varphi_\infty
([a_i + s, a_{i+1} -s]) \subset \partial X^2$ is a
Perelman-Sharafutdinov semi-gradient curve of the distance function
$r_{q_{\infty, i}}(y) = {\rm d}_{X^2}(y, q_{\infty, i})$. We already
choose $\ell$ sufficiently small and $j_0$ sufficiently large so
that $r_{q_{\infty, i}}$ has no critical point on $[B_{X^2}(
q_{\infty, i}, \ell) -\{q_{\infty, i} \}]$. By our construction, we
have $r_{q_{\alpha, i}}(\cdot) \to r_{q_{\infty, i}}(\cdot)$, as
$\alpha \to \infty$. It follows from \autoref{prop1.14} that the
distance function has no critical points on $U_{\varepsilon_i}(
A^i_{\alpha, s} )$. Using Perelman's fibration theorem, we obtain a
fibration
$$
N^2_{\alpha, i} \to U_{\varepsilon_i}( A^i_{\alpha, s} )
\stackrel{r_{q_{\alpha, i}}}{\longrightarrow} (s, \ell_{\alpha,
i}-s).
$$
It follows that $ U_{\varepsilon_i}( A^i_{\alpha, s} ) \sim
[N^2_{\alpha, i} \times (s, \ell_{\alpha, i}-s)]$. We will first use
\autoref{thm1.17} and its proof to show that $\partial N^2_{\alpha,
i} \sim S^1$.
Let $\varepsilon_0$ be given by \autoref{lem5.3} and $r_{\alpha}( y)
={\rm d}_{M^3_\alpha}(y, \varphi_\alpha(S^1))$. By the proof of
\autoref{thm1.17}, the map $F_\alpha( z) = (r_{q_{\alpha, i}}(z),
r_{\alpha}(z)) $ is regular on $Z^3_{\alpha, i}(s/2, \varepsilon_0)
= [ U_{\varepsilon_0}( A^i_{\alpha, s} ) - U_{s/2}( A^i_{\alpha, s}
) - B_{M^3_\alpha}(q_{\alpha, i}, \varepsilon_i ) -
B_{M^3_\alpha}(q_{\alpha, i+1}, \varepsilon_i )]$. There is a circle
fibration $ S^1 \to Z^3_{\alpha, i}(\frac s2, \varepsilon_0)
\stackrel{F_\alpha}{\longrightarrow} \mathbb R^2$. This proves that
$\partial N^2_{\alpha, i} \sim S^1$. It follows that
$H^2_{\alpha, i} = \{\partial [ U_{\varepsilon_i}( A^i_{\alpha, s})]
- B_{M^3_\alpha}(q_{\alpha, i}, \varepsilon_i ) -
B_{M^3_\alpha}(q_{\alpha, i+1}, \varepsilon_i )\} $ is homeomorphic
to a cylinder $S^1 \times (s, \ell_{\alpha, i}-s)$.
Using the two-point suspension of the cylinder $H^2_{\alpha, i}$,
we can further show that
$$
\partial [ U_{\varepsilon_i}( A^i_{\alpha, s})] \sim S^2.
$$
It remains to show that $ U_{\varepsilon_i}( A^i_{\alpha, s} ) \sim
D^3$ for sufficiently small $\varepsilon_i >0$. Suppose the contrary;
we argue as follows. Using \autoref{prop1.14}, we see that $r_\alpha$
has no critical points in $ [U_{\varepsilon_0}( A^i_{\alpha, s} )-
U_{\varepsilon_0/2}( A^i_{\alpha, s} )]$. Let $\lambda_\alpha$ be
the largest critical value of $r_{\alpha}$ in $U_{\varepsilon_0}(
A^i_{\alpha, s} )$. By our assumption, $\lambda_\alpha \to 0$ as
$\alpha \to \infty$. We now consider a sequence of re-scaled spaces
$\{(\frac{1}{\lambda_\alpha}M^3_\alpha, q_{\alpha, i} ) \}$. A
subsequence converges to a limiting space $(Y^s_\infty, \bar
q_{\infty, i})$. Recall that $\dim(X^2)= 2$. For the same reason as
in the proof of \autoref{prop2.3}, we can show that
$\dim(Y^s_\infty) = 3$ and that $Y^3_\infty$ has no boundary. Let
$\bar N^s_\infty$ be the soul of $Y^3_\infty$. There are three
possibilities.
(1) If the soul $\bar N^s_\infty$ is a point, then by
\autoref{thm2.11} we obtain that $Y^3_\infty \sim \mathbb R^3$. It
follows from Perelman's stability theorem that $U_{\varepsilon_i}(
A^i_{\alpha, s} ) \sim D^3$, and we are done.
(2) If the soul $\bar N^s_\infty$ is a circle, then $Y^3_\infty$ (or
its double cover) is isometric to $S^1 \times Z^2$, where $Z^2 $ is
homeomorphic to $\mathbb R^2$. Let $\bar \varphi_\infty ( \mathbb R
) $ be the limit curve in the re-scaled limit space $Y^3_\infty$.
Since $Y^3_\infty$ (or its double cover) is isometric to $S^1 \times
Z^2$, the neighborhood $U_r(\bar \varphi_\infty ([-R, R]))$ is
homeomorphic to $ S^1 \times D^2$. By Perelman's stability theorem,
we would have $\partial [ U_{\varepsilon_i}(A^i_{\alpha, s})]\sim
\partial [ S^1\times D^2 ]\sim T^2$, which contradicts the assertion
$\partial [U_{\varepsilon_i}(A^i_{\alpha,s})]\sim S^2$.
(3) If the soul $\bar N^s_\infty$ of $Y^3_\infty$ has dimension $2$,
then it follows that the infinity of $Y^3_\infty$ has at most two
points. However, since $\dim(X^2) =2$, the infinity of $Y^3_\infty$
contains an arc, a contradiction. This completes the proof of
$U_{\varepsilon_i}( A^i_{\alpha, s} ) \sim D^3$ for sufficiently
large $\alpha$.
With extra effort, we can also show that $B_{M^3_\alpha}(q_{\alpha,
i}, \varepsilon) \sim D^3$ for sufficiently large $\alpha$. Hence, $
U_{\varepsilon}[\varphi_\alpha (S^1)] \sim \cup_i D^3_i \sim [D^2
\times S^1]$. This completes our proof. \end{proof}
\subsection{Collapsing to a closed interval}\label{section5.2} \
\smallskip
Since all our discussions in this sub-section are semi-local, we may
assume the following setup:
\begin{equation}\label{eq5.4}
\lim_{\alpha\to+\infty}(M^3_\alpha, p_\alpha) = (I, O)
\end{equation}
in the pointed Gromov-Hausdorff distance, where $I = [0, \ell]$ is an
interval and $O\in I$ is an endpoint of $I$. We will study the topology
of $B_{M^3_\alpha}(p_\alpha, r)$ for a given small $r$.
We begin with four examples to illustrate how smooth Riemannian
$3$-manifolds $M^3_\alpha$ collapse to an interval $[0, \ell]$ with
curvature bounded from below. The collapsing manifolds in these
examples are homeomorphic to one of the following: $\{ D^3,
[\mathbb {RP}^3 -D^3], S^1 \times D^2, K^2 \ltimes [0,\frac 12]\}$,
where $D^3 = \{ x \in \mathbb R^3 | |x | < 1\}$ and $K^2$ is the
Klein bottle.
\begin{example}\label{ex5.5}
($ M^3_\alpha$ is homeomorphic to $D^3$). For each $\varepsilon >
0$, we consider a convex hypersurface in $\mathbb R^4$ as follows.
We glue a lower half of the $3$-sphere
$$
S^{3}_{\varepsilon, -} = \{ (x_1, x_2, x_3, x_4) \in \mathbb R^4 | 0
\le x_4 \le \varepsilon, x_1^2 + x_2^2 + x_3^2 + (x_4 -
\varepsilon)^2 = \varepsilon^2\}
$$
to a finite cylinder
$$
S^2_\varepsilon \times [\varepsilon, 1] = \{(x_1, x_2, x_3, x_4) \in
\mathbb R^4 |\varepsilon \le x_4 \le 1, x_1^2 + x_2^2 + x_3^2 =
\varepsilon^2 \}.
$$
Our $3$-manifold $M^3_\varepsilon = S^{3}_{\varepsilon, -} \cup
\big( S^2_\varepsilon \times [\varepsilon, 1] \big)$ collapses to the
unit interval as $\varepsilon \to 0$.
\end{example}
For other cases, we consider the following example.
\begin{example}\label{ex5.6}
($ M^3_\alpha$ homeomorphic to $S^1 \times D^2 $). Let us glue the
lower half of a $2$-sphere
$$
S^{2}_{\varepsilon, -} = \{ (x_1, x_2, x_3) \in \mathbb R^3 | 0 \le
x_3 \le \varepsilon, x_1^2 + x_2^2 + (x_3 - \varepsilon)^2 =
\varepsilon^2\}
$$
to a finite cylinder $S^1_\varepsilon \times [\varepsilon, 1]$. The
resulting disk
$$
D^2_\varepsilon = S^{2}_{\varepsilon, -} \cup \big( S^1_\varepsilon
\times [\varepsilon, 1] \big)
$$
converges to the unit interval as $\varepsilon \to 0$. We could
choose $M^3_{\frac{1}{\varepsilon}} = S^1_{ \varepsilon^2 } \times
D^2_\varepsilon$. It is clear that $M^3_{\frac{1}{\varepsilon}} \to
[0, 1]$ as $\varepsilon \to 0$.
\end{example}
\medskip
We now would like to consider the remaining cases. Of course, the two
non-orientable surfaces $\mathbb {RP}^2_\varepsilon = S^2_\varepsilon
/\mathbb Z_2$ and $K^2 = T^2/\mathbb Z_2$ would converge to a point
as $\varepsilon \to 0$. However, the {\it twisted } $I$-bundle over
$\mathbb {RP}^2$ (or $K^2$) is homeomorphic to an orientable manifold
$ \mathbb {RP}^2 \ltimes [0, \frac 12] = [\mathbb {RP}^3 -D^3]$ (or
$K^2 \ltimes [0, \frac 12] = M$\"o$\ltimes S^1$), where $M$\"o is
the M\"obius band.
\begin{example}\label{ex5.7}($ M^3_\alpha$ homeomorphic to
$\mathbb {RP}^2 \ltimes [0, \frac 12] $ or $M$\"o$ \ltimes S^1$).
Let us first consider the round sphere $S^2_\varepsilon = \{ (x_1, x_2,
x_3) \in \mathbb R^3 | |x| = \varepsilon\}$. There is an orientation
preserving involution $\tau: \mathbb R^3 \times [-\frac 12, \frac
12] \to \mathbb R^3 \times [-\frac 12, \frac 12]$ given by $\tau (x,
t) = (-x, -t)$. Let $\langle\tau \rangle = \mathbb Z_2$ be the
subgroup generated by $\tau$. Then the quotient of $S^2_\varepsilon
\times [-\frac 12, \frac 12]$ by $\langle\tau\rangle$ is an
orientable manifold $\mathbb {RP}^2 \ltimes [0, \frac 12] = [\mathbb
{RP}^3 -D^3]$.
Similarly, we can consider the case of $K^2_\varepsilon \ltimes [0,
\frac 12] \to [0, \frac 12]$, where $K^2_\varepsilon$ is a Klein
bottle.
\end{example}
Shioya and Yamaguchi showed that the above examples exhaust all
cases up to homeomorphism.
\begin{theorem}[Theorem 0.5 of \cite{SY2000}]\label{thm5.8}
Suppose that $\lim_{\alpha\to+\infty}(M^3_\alpha, p_\alpha)= (I, O)$
with curvature $\ge -1$ and $I = [0, \ell]$. Then $M^3_\alpha$ is
homeomorphic to a gluing of $C_0$ and $C_\ell$ to $N^2 \times (0,
1)$, where $C_0$ and $C_\ell$ are homeomorphic to one of $\{ D^3,
[\mathbb {RP}^3 - D^3], S^1 \times D^2, K^2 \ltimes [0, \frac 12]\}$
and $N^2$ is a quotient of $T^2$ or $S^2$.
\end{theorem}
For the proof of \autoref{thm5.8}, we need to establish two
preliminary results (see \autoref{thm5.9} and \autoref{thm5.10} below).
Let us consider possible exceptional orbits in the Seifert fibration
\begin{equation}\label{eq5.5}
S^1 \to M^3_\alpha \to {\rm int} (Y^2)
\end{equation}
for sufficiently large $\alpha$. We emphasize that the topological
structure of $M^3_\alpha$ depends on the number of extremal points
(also called essential singularities) of $Y^2$ in this case. Moreover,
the topological structure of $M^3_\alpha$ also depends on the type
of essential singularity of $Y^2$ when we glue a pair of solid tori
together (see \eqref{eq5.14} below). Therefore, we need the
following theorem with a new proof.
\begin{theorem}[Compare with Corollary 14.4 of \cite{SY2000}]\label{thm5.9}
Let $Y^2$ be a connected, non-compact and complete surface with
non-negative curvature and with possibly non-empty boundary. Then the
following is true.
\begin{enumerate}[{\rm (i)}]
\item If $Y^2$ has no boundary, then $Y^2$ has at most two extremal points
(or essential singularities). When $Y^2$ has exactly two extremal
points, $Y^2$ is isometric to the double ${\rm dbl}( [0, \ell]
\times [0, \infty))$ of the flat half strip.
\item If $Y^2$ has non-empty boundary $\partial Y^2 \neq \varnothing$, then $Y^2$
has at most one interior essential singularity.
\end{enumerate}
\end{theorem}
\begin{proof}
For (i), we will use the multi-step Perelman-Sharafutdinov
semi-flows to carry out the proof. We
consider the Cheeger-Gromoll type Busemann function
\begin{equation}\label{eq5.6}
f(y) = \lim_{t \to \infty}[t - {\rm d}(y, \partial B_{Y^2}(y_0, t))].
\end{equation}
In \autoref{section2}, we already showed that $\Omega_c =
f^{-1}((-\infty, c])$ is compact for any finite $c$. Let $a_0 = \inf
\{f(y)| y \in Y^2\}$. Then the level set $\Omega_{a_0} = f^{-1}(
a_0)$ has dimension at most $1$. Recall that $\Omega_{a_0}$ is
convex by the soul theory. Thus, $\Omega_{a_0}$ is either a point or
isometric to a length-minimizing geodesic segment $\sigma_0: [0,
\ell] \to Y^2$.
{\it Case a.} If $\Omega_{a_0} = \{ z_0 \}$ is a point soul of
$Y^2$, then by an observation of Grove \cite{Grv1993} we see that the
distance function $r(y) = {\rm d}(y, z_0)$ has no critical point in
$[Y^2-\{ z_0\}]$. It follows that $z_0$ is the only possible extremal
point of $Y^2$.
{\it Case b.} If $\Omega_{a_0} = \sigma_0 ([0, \ell])$, then for
$s\in (0, \ell)$ Petrunin \cite{Petr2007} showed that
$T^-_{\sigma_0(s)}(Y^2) = \mathbb R^2$. Hence, the two endpoints
$\sigma_0(0)$ and $\sigma_0(\ell)$ are the only two possible extremal
points of $Y^2$.
Suppose that $Y^2$ has exactly two extremal points $\sigma_0(0)$ and
$\sigma_0(\ell)$. We choose two geodesic rays $\psi_0: [0, \infty)
\to Y^2$ and $\psi_\ell: [0, \infty) \to Y^2$ from two extremal
points $\sigma_0(0)$ and $\sigma_0(\ell)$ respectively. The broken
geodesic $\Gamma = \sigma_0 \cup \psi_0 \cup \psi_\ell$ divides our
space $Y^2$ into two connected components:
\begin{equation}\label{eq5.7}
[Y^2 - \Gamma] = \Omega_- \cup \Omega_+.
\end{equation}
We now consider the distance function
\begin{equation}\label{eq5.8}
w_\pm(u) = {\rm d}_{\Omega_\pm}(\psi_\ell(u), \psi_0(u)).
\end{equation}
Let us consider the Perelman-Sharafutdinov semi-gradient flow
$\frac{d^+w}{dt} = \nabla (-f)|_w$ for the Busemann function $-f$.
Recall that the semi-flow is distance non-increasing, since the
curvature is non-negative. Hence, we see that $t \to w_\pm(u-t)$ is
a non-increasing function of $t \in [0, u]$. It follows that
\begin{equation}\label{eq5.9}
w_\pm(u) \ge w_\pm(0) = \ell
\end{equation}
for $u \ge 0$. On the other hand, we could use the multi-step geodesic
triangle comparison theorem as in \cite{CMD2009} to verify that
\begin{equation}\label{eq5.10}
w_\pm(u) \le w_\pm(0) = \ell
\end{equation}
with equality if and only if the four points $\{ \sigma_0(0),
\sigma_0(\ell), \psi_\ell(u), \psi_0(u)\}$ span a flat rectangle in
$\Omega_\pm$. Therefore, $\Omega_\pm $ is isometric to $[0, \ell]
\times [0, \infty)$. It follows that $Y^2$ is isometric to the
double ${\rm dbl}( [0, \ell] \times [0, \infty))$.
The second assertion (ii) follows from (i) applied to the double
${\rm dbl}(Y^2)$.
\end{proof}
For compact surfaces $X^2$ of non-negative curvature, we use an
observation of Perelman (\cite{Per1991} page 31) together with
multi-step Perelman-Sharafutdinov semi-flows, in order to estimate
the number of extremal points in $X^2$.
\begin{theorem}\label{thm5.10}
{\rm (1)} Suppose that $X^2$ is a compact and oriented surface with
non-negative curvature and with non-empty boundary $\partial X^2
\neq \varnothing$. Then $X^2$ has at most two interior extremal
points. When $X^2$ has two interior extremal points, then $X^2$ is
isometric to a gluing of two copies of flat rectangle along their
three corresponding sides.
{\rm (2)} Suppose that $X^2$ is a closed and oriented surface with
non-negative curvature. Then $X^2$ has at most four extremal points.
\end{theorem}
\begin{proof}
(1) We consider the double ${\rm dbl}(X^2)$ of $X^2$. If ${\rm
dbl}(X^2)$ is not simply-connected and if it is an oriented surface
with non-negative curvature, then we already showed that ${\rm
dbl}(X^2)$ is a flat torus.
We may assume that $X^2$ is homeomorphic to a disk: $X^2 \cong D^2$
and $\partial X^2=S^1$. Let $X_{-\varepsilon}=\{x\in X^2 | {\rm d}(x,
\partial X^2)\ge\varepsilon\}$. Perelman \cite{Per1991} already
showed that $X_{-\varepsilon}$ remains convex (see also
\cite{CMD2009}). If $a_0=\max_{x \in X^2}\{{\rm d}(x, \partial X^2) \}$, then $X_{-
a_0}$ is either a geodesic segment or a single point. Thus,
$X^2$ has at most two interior extremal points. The rest of the
proof is the same as that of \autoref{thm5.9} with minor
modifications.
(2) Suppose that $X^2$ has two distinct extremal points $p$ and $q$.
Let $N$ be a geodesic segment connecting $p$ and $q$. We need to
show that the function $f(x)={\rm d}_N(x)={\rm d}(N, x)$ is concave for all
$x\in X^2\setminus N$. Clearly, for $x\in X^2\setminus N$ there
exists $x_N\in N$ such that $|xx_N|={\rm d}(x,N)$. There are two
possibilities:
\begin{enumerate}[(i)]
\item
If $x_N$ is in the interior of $N$, then the proof of the concavity of $f$
is exactly the same as that of Theorem 6.1 in \cite{Per1991} (see
also \cite{Petr2007} p156 and \cite{CMD2009}).
\item
If $x_N$ is one of the endpoints, say $p$, then by the first
variation formula we have
\begin{equation}\label{eq:5.10.1}
{\rm d}_{\Sigma_p}(\Uparrow_p^x, \Uparrow_p^q)\ge \frac{\pi}{2}
\end{equation}
where $\Uparrow_p^x$ denotes the set of directions of geodesics from
$p$ to $x$ in $\Sigma_p$. On the other hand, by our assumption $p$
is an extremal point, so
\begin{equation}\label{eq:5.10.2}
{\rm diam} \Sigma_p\le \frac{\pi}{2}
\end{equation}
\end{enumerate}
Combining \eqref{eq:5.10.1} and \eqref{eq:5.10.2}, we have
$${\rm d}_{\Sigma_p}(\Uparrow_p^x, \Uparrow_p^q) = \frac{\pi}{2}$$
The rest of the proof is the same as that of Theorem 6.1 in
\cite{Per1991} (or \cite{Petr2007}, \cite{CMD2009}).
\begin{figure*}[ht]
\includegraphics[width=250pt]{section5_0.pdf}\\
\caption{A $2$-sphere $S^2$ with $4$ essential singularities}\label{fig:5.0}
\end{figure*}
Hence, we have shown that $f$ is a concave function on $X^2\setminus
N$. Let $A$ be the set where $f$ attains its maximum. Then $A$ is
either a geodesic segment or one point. Thus, $X^2\setminus N$
contains at most 2 extremal points by our proof of (1). Therefore,
the total number of extremal points is at most 4.
\end{proof}
The case of exactly 4 extremal points on a topological $2$-sphere
can be illustrated in \autoref{fig:5.0}. Let us now complete the
proof of \autoref{thm5.8}.
\bigskip
\noindent \textbf{Proof of \autoref{thm5.8}:}
\bigskip
The proof is due to Shioya-Yamaguchi \cite{SY2000}. Our new
contribution is the simplified proof of \autoref{thm5.9}, which will
be used in the study of subcases (1.b) and (2.b) below. For the
convenience of readers, we provide a detailed argument here.
Recall that in \autoref{section4}, we already constructed a
fibration
$$
N^2_\alpha \to U_\alpha \to {\rm int} (X^1)
$$
with shrinking fiber $N^2_\alpha$ homeomorphic to either $S^2$ or
$T^2$ for sufficiently large $\alpha$.
When $\{ p_\alpha \} \to p_\infty \in \partial X^1$, since the
fibers $\{N^2_\alpha\}$ are shrinking, we may assume that
\begin{equation}\label{eq5.11}
\partial B_{M^3_\alpha}(p_{\alpha},\delta) = S^2 \quad \quad \text{or}\quad \quad
\partial B_{M^3_\alpha}(p_{\alpha},\delta) = T^2
\end{equation}
for sufficiently large $\alpha$, where $X^1 = [0, \ell]$ and $\delta
= \ell/2$.
If there exists an $\varepsilon_0$ such that $r_\alpha(x) =
{\rm d}_{M^3_\alpha}(x, p_\alpha)$ has no critical points on
the punctured ball $[B_{M^3_\alpha}(p_{\alpha},\varepsilon_0) -
\{p_{\alpha}\}]$ for sufficiently large $\alpha$, then, using the
proof of \autoref{prop1.7}, we can show that
$B_{M^3_\alpha}(p_{\alpha},\varepsilon_0) \sim D^3$ for sufficiently
large $\alpha$. Thus, the conclusion of \autoref{thm5.8} holds in this
case.
Otherwise, there exists a subsequence $\{ \varepsilon_\alpha \} \to
0$ such that $r_\alpha(x) = {\rm d}_{M^3_\alpha}(x, p_\alpha)$ has a
critical point $q_\alpha$ with ${\rm d}(p_\alpha, q_\alpha) =
\varepsilon_\alpha$. Passing to a subsequence if necessary, we have $
\left(\frac{1}{\varepsilon_{\alpha}}M^3_{\alpha},
\bar{p}_{\alpha}\right)\to (Y^k, \bar{p}_{\infty})$. Using a similar
argument as in the proof of \autoref{prop2.3}, we can show that the
limiting space $Y^k$ has non-negative curvature and $\dim(Y^k) = k
\ge 2$, where $\bar{p}_{\alpha}$ is the image of $p_\alpha$ under
the re-scaling. There are two cases described in \eqref{eq5.11}
above.
\medskip
{\bf Case 1.} $\partial B_{M^3_\alpha}(p_{\alpha},\delta) = S^2$.
If $B_{M^3_\alpha}(p_{\alpha},\delta)$ is homeomorphic to $ D^3$,
then no further verification is needed. Otherwise, we may assume
$B_{M^3_\alpha}(p_{\alpha},\delta)\ne D^3$ and $\partial
B_{M^3_\alpha}(p_{\alpha},\delta)=S^2$.
Under these two assumptions, we would like to verify that
$B_{M^3_\alpha}(p_{\alpha},\delta)$ is homeomorphic to $\mathbb
{RP}^2\ltimes I$. There are two sub-cases:
\medskip
{\it Subcase 1.a. } If $\dim Y=3$, then $Y^3$ is non-negatively
curved, open and complete. Let $N^k$ be a soul of $Y^3$.
Using Perelman's stability theorem, we claim that the soul $N^k$
cannot be $S^1$. Otherwise the boundary of
$B_{M^3_\alpha}(p_{\alpha},\delta)$ would be $T^2$, a contradiction.
For the same reason, the soul $N^k$ cannot be $T^2$ or $K^2$.
Otherwise the boundary $\partial B_{M^3_\alpha}(p_{\alpha},\delta)$
would be homeomorphic to $T^2$, contradicting our assumption.
Because the boundary of $B_{M^3_\alpha}(p_{\alpha},\delta)$ only
consists of one component, the soul $N^k$ of $Y$ cannot be $S^2$;
otherwise we would have $Y^3 \sim S^2 \times \mathbb R$ and $
\partial B_{M^3_\alpha}(p_{\alpha},\delta) \sim S^2 \cup S^2$, a
contradiction.
Therefore, we have demonstrated that the soul $N^k$ must be either a
point or $\mathbb {RP}^2$. By the soul theorem, it follows that
$B_Y(N^k,R)\cong D^3$ or $B_Y(N^k,R)\cong \mathbb {RP}^2\ltimes I$
for sufficiently large $R$. If $B_Y(N^k,R)\cong D^3$, then
Perelman's stability theorem would give
$B_{M^3_\alpha}(p_{\alpha},\delta) \cong D^3$, contrary to our
assumption. Hence $B_Y(N^k,R)\cong \mathbb {RP}^2\ltimes I$, and,
using Perelman's stability theorem again, we conclude that
$B_{M^3_\alpha} (p_{\alpha},\delta)$ is homeomorphic to $\mathbb
{RP}^2\ltimes I$ in this sub-case.
\medskip
{\it Subcase 1.b. } Suppose $\dim Y=2$, i.e., $Y$ is a surface with
possibly non-empty boundary. First we claim that $\partial Y\ne
\varnothing$. For any fixed $r>0$, we have
\begin{equation}\label{eq5.12}
\partial B_{M^3_\alpha}(p_\alpha, r)\cong S^2
\end{equation}
by our assumption that the regular fiber is $S^2$. Suppose, on the
contrary, that $\partial Y^2 = \varnothing$. We would have $Y\cong
\mathbb R^2$ (or a M\"obius band) by \autoref{thm2.6}, because $Y^2$
has one end.
Thus, for sufficiently large $R$ we have $\partial B_Y(\bar
p_\infty, R)= S^1$. Applying the fibration theorem for the collapsing to
the surface case, we would further have
$$
\partial B_{M^3_\alpha}(\bar{p}_{\alpha}, R)= T^2
$$
which contradicts our boundary condition \eqref{eq5.12}. Hence,
$Y$ has non-empty boundary: $\partial Y^2 \neq \varnothing$.
Since $Y^2$ has one end and $\partial Y^2 \neq \varnothing$, by
\autoref{cor2.9} we know that $Y^2$ is either homeomorphic to the
upper half-plane $[0, \infty) \times \mathbb R$ or isometric to a
half cylinder. By the previous argument, $Y^2$ cannot be isometric
to a half cylinder. Hence, we have
\begin{equation}\label{eq5.13}
Y^2 \cong [0, \infty) \times \mathbb R
\end{equation}
and $\partial Y$ is a non-compact set.
We further observe that if $\bar{p}_\infty$ is a boundary point of
$Y^2$, then, by \autoref{thm5.4}, we would have
$$
B_{M^3_\alpha}(p_\alpha, r) \cong [D^2_+ \times I] \cong D^3
$$
where $D^2_+$ is the closed half-disk, a contradiction.
Thus we may assume that $\bar{p}_\infty$ is an interior point of
$Y^2$. Let us consider the Seifert fiber projection
$B_{\frac{1}{\varepsilon_\alpha}M^3_{\alpha}}(\bar{p}_{\alpha},R)\to
B_Y(\bar{p}_\infty,R)$ for some large $R$. If
$B_Y(\bar{p}_\infty,R)$ contains no interior extremal point, then by
the proof of \autoref{thm5.4} one would have
$B_{\frac{1}{\varepsilon_\alpha}M^3_{\alpha}}(\bar{p}_{\alpha},R)\cong
D^2\times I\cong D^3$, a contradiction. Therefore, there exists an
extremal point inside $B(\bar{p}_\infty, R)$. Without loss of
generality, we may assume this extremal point is $O$.
By \autoref{thm5.9} and the fact that $\partial Y^2 \neq
\varnothing$, we observe that $Y^2$ has at most one interior
extremal point. In our case, $Y^2$ is isometric to the following
flat surface with a singularity. Let $\Omega = [0, h] \times [0,
\infty)$ be a half flat strip and $\Gamma = \{ (x, y) \in \Omega |
xy = 0\} \subset \partial \Omega $. Our singular flat surface $Y^2$
is isometric to a gluing of two copies of $\Omega$ along the curve
$\Gamma$.
The picture of $B_Y(O,R)$ for large $R$ will look like
\autoref{figure5.1}, where the bold line denotes $\partial Y\cap
B_Y(O,R)$.
\begin{figure}[ht]
\includegraphics[width=150pt]{section5_1.pdf}\\
\caption{A metric disk $B_{Y^2}(O,R)$ in $Y^2$ for large $R$}\label{figure5.1}
\end{figure}
By our assumption, we have $\partial B(\bar{p}_{\alpha}, R) \cong
S^2$. Thus we can glue a $3$-ball $D^3$ to $B(\bar{p}_{\alpha}, R)$
along $\partial B(\bar{p}_{\alpha}, R) \cong S^2$ to get a new
closed $3$-manifold $\hat M^3_\alpha$.
Recall that we have a (possibly singular) fibration
$$
S^1 \to B(\bar{p}_{\alpha}, R)
\stackrel{G_\alpha}{\longrightarrow}{\rm int} (Y^2).
$$
By \autoref{thm2.1}, we have $ G_\alpha^{-1}(B_{Y^2}(O, h/2)) \cong
D^2 \times S^1. $ With some effort, we can further show that $
[\hat M^3_\alpha - G_\alpha^{-1}(B_{Y^2}(O, h/4))] \cong S^1 \times
D^2. $
The exceptional orbit $G_\alpha^{-1}(O)$ corresponds to the
case of $m_0=2$ in \autoref{ex2.0}. Finally, we conclude that $ \hat
M^3_\alpha $ is homeomorphic to the real projective $3$-space:
\begin{equation}\label{eq5.14}
\hat M^3_\alpha \cong (D^2\times S^1)\cup_{\psi_{\mathbb Z_2}}
(S^1\times D^2)\cong \mathbb {RP}^3.
\end{equation}
Therefore, by our construction, we have
$$B_{M^3_\alpha}(p_{\alpha},\delta) = [\hat M^3_\alpha \setminus D^3 ]
\cong [\mathbb{RP}^3\setminus D^3] \cong \mathbb{RP}^2\ltimes I.$$
This completes the first part of our proof for the case of $\partial
B_{M^3_\alpha}(p_{\alpha},\delta)= S^2$.
\medskip
{\bf Case 2. } If $\partial B_{M^3_\alpha}(p_{\alpha},\delta)=T^2$,
our discussion will be similar to the previous case with some
modifications. In fact, there are still two sub-cases:
\medskip
{\it Subcase 2.a.} If $\dim Y=3$, then $Y$ has a soul $N^k$. We
assert that $N^k$ cannot be a point, $S^2$, or $\mathbb {RP}^2$,
because the boundary $\partial B_{M^3_\alpha}(p_{\alpha},\delta) =
T^2$. In addition, we observe that $N^k$ cannot be $T^2$, since the
boundary of $B_{M^3_\alpha}(p_{\alpha},\delta)$ consists of only one
component. Therefore, there are two remaining cases: $N^k$ is
homeomorphic to either $S^1$ or the Klein bottle $K^2$. If the soul
is $S^1$, then $B_{M^3_\alpha} (p_{\alpha},\delta)\cong D^2\times
S^1$. Similarly, if the soul is $K^2$, then
$B_{M^3_\alpha}(p_{\alpha},\delta)\cong K^2\ltimes I$.
\medskip
{\it Subcase 2.b.} $\dim Y=2$. It is clear that our limiting space
$Y^2$ is non-compact.
If $\partial Y^2 = \varnothing$, we proceed as follows. (i) When the
soul of $Y^2$ is $S^1$, by the connectedness of $\partial
B_{M^3_\alpha}(p_{\alpha}, \delta)$ we know that $Y$ is an open
M\"{o}bius band. Therefore, $B_{M^3_\alpha}(p_{\alpha},\delta)$ is
homeomorphic to the product of a M\"{o}bius band and $S^1$, i.e., a
twisted $I$-bundle over $K^2$. (ii) When the soul of $Y^2$ is a single
point, $Y=\mathbb R^2$, which is non-compact. By \autoref{thm5.9},
we see that the number $k$ of extremal points in $Y^2$ is at most
$2$. Recall that there is a (possibly singular) fibration: $S^1 \to
B_{M^3_\alpha}(p_{\alpha},\delta)
\stackrel{G_\alpha}{\longrightarrow} {\rm int} (Y^2)$. If $k \le 1$,
we have $B_{M^3_\alpha}(p_{\alpha},\delta)\cong S^1\times D^2$. If
$k =2$, by \autoref{thm5.9}, we can further show that
$B_{M^3_\alpha}(p_{\alpha},\delta)\cong K^2\ltimes I$.
Let us now handle the remaining subcase when $\partial Y^2 \neq
\varnothing$. By a similar argument as in subcase 1.b
above, we conclude that $B_{M^3_\alpha}(p_{\alpha},\delta)$ is
homeomorphic to either $\mathbb{RP}^2\ltimes I$ or $D^3$, which
contradicts our assumption $\partial
B_{M^3_\alpha}(p_{\alpha},\delta) \cong T^2$.
This completes the proof of \autoref{thm5.8} for all cases. \qed
\section{Gluing local fibration structures and Cheeger-Gromov's compatibility
condition}\label{section6}
\setcounter{theorem}{-1}
In this section, we complete the proof of Perelman's collapsing
theorem for $3$-manifolds (Theorem 0.1'). In the previous five
sections, we made progress in decomposing each collapsing
$3$-manifold $M^3_{\alpha}$ into several parts:
\begin{equation}\label{eq6.1}
M^3_{\alpha}= V_{\alpha,X^0}\cup V_{\alpha,X^1}\cup V_{\alpha,
{\rm int} (X^2)}\cup W_{\alpha}
\end{equation}
where $V_{\alpha,X^0}$ is a union of closed smooth $3$-manifolds of
non-negative sectional curvature, $V_{\alpha,X^1}$ is a union of
fibrations over $1$-dimensional spaces with spherical or toral
fibers and $V_{\alpha, {\rm int} (X^2)}$ admits locally defined
almost free circle actions.
Extra care is needed to specify the definition of $ V_{\alpha,
X^i}$ for each $\alpha$. For example, we need to choose specific
parameters for the collapsing $3$-manifolds $\{M^3_\alpha \}$ so that
the decomposition in \eqref{eq6.1} becomes well-defined. The choices
of parameters can be made in a similar way as in \autoref{section3}
of \cite{SY2005}.
\begin{theorem}\label{thm6.0}
Suppose that $\{ (M^3_\alpha, g^\alpha)\}$ is as in
\autoref{thm0.1} and that $M^3_\alpha$ has no connected components
which admit metrics of non-negative curvature. Then there are two
small constants $\{c_1, \varepsilon_1\}$ such that
$$
V_{\alpha,X^1} = \{x \in M^3_\alpha \quad | \quad {\rm d}_{GH}
(B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1) ,
[0,\ell])\le \varepsilon_1 \}
$$
and
$$
V_{\alpha, X^2} = \left\{x \in M^3_\alpha | {\rm Vol}(B_{g^\alpha} (x,
\rho_\alpha(x))) \le c_1 \varepsilon_1 [\rho_\alpha(x)]^3 \right\}
$$
which are described as in \eqref{eq6.1} above and $ \{
\rho_\alpha(x) \}$ satisfy inequality \eqref{eq6.3} below.
\end{theorem}
\begin{proof}
Recall that if the pointed Riemannian manifolds $\{((M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha), x_\alpha) \}$ converge to $(X^k,
x_\infty)$, then by our assumption we have ${\rm diam}(X^k) \ge 1$ and
hence
$$
1 \le \dim[X^k] \le 2.
$$
Therefore, there are two cases: (1) $\dim [X^k] = 1$ and (2)
$\dim[X^k] = 2$.
\medskip
\noindent {\it Case 1.} $\dim[X^k] = 1$. By the proof of
\autoref{thm5.8}, there exists $\varepsilon_1>0$ such that if
\begin{equation}\label{eq:6.0.1}
{\rm d}_{GH}(B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1) ,
[0, \ell])\le \varepsilon_1
\end{equation}
then $B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1) $
admits a possibly singular fibration over a closed interval, where
the curvature of the metric $ \rho^{-2}_\alpha g_{ij}^\alpha$ is
bounded below
by $-1$. Clearly, $\ell$ must satisfy $1 - \varepsilon_1 \le \ell
\le 2 + \varepsilon_1$.
If the inequality \eqref{eq:6.0.1} holds, then the unit metric ball
is very thin and can be covered by at most
$\frac{4\ell}{\varepsilon_1 }$ small metric balls $\{
B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(y_j,
2\varepsilon_1) \}$. By the Bishop volume comparison theorem, we have
the volume estimate:
\begin{equation}
{\rm Vol}[B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1)] \le
\frac{4\ell}{\varepsilon_1 } c_0 [\sinh( 2\varepsilon_1)]^3 \le
c_0^* \ell \varepsilon_1^2
\end{equation}
In this case, we set $x \in V_{\alpha, X^1}$.
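For the reader's convenience, we record the elementary computation
behind this estimate; the explicit constants below are for
illustration only and are not claimed in the original argument. By
the Bishop volume comparison theorem, each small ball satisfies
$$
{\rm Vol}[B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(y_j,
2\varepsilon_1)] \le 4\pi \int_0^{2\varepsilon_1} (\sinh u)^2 du
\le 4\pi [\sinh( 2\varepsilon_1)]^3,
$$
where the last inequality uses $u \le \sinh u$. Since
$\sinh(2\varepsilon_1) \le 2\varepsilon_1 \cosh 2$ for $\varepsilon_1
\le 1$, summing over the $\frac{4\ell}{\varepsilon_1}$ small balls
gives
$$
{\rm Vol}[B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1)]
\le \frac{4\ell}{\varepsilon_1} \cdot 4\pi (2\cosh 2)^3
\varepsilon_1^3 = c_0^* \ell \varepsilon_1^2
$$
with, for instance, $c_0^* = 16\pi(2\cosh 2)^3$.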
\noindent {\it Case 2.} $\dim[X^k] = 2$. There are two subcases under the assumption
\begin{equation}
{\rm d}_{GH}(B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, 1),
B_{X^2}(x_\infty, 1))\le \varepsilon_2
\end{equation}
\smallskip
\noindent
{\it Subcase 2a.} The metric disk $B_{X^2}(x_\infty, 1)$ has small area.
In this subcase, one can choose a constant $\varepsilon_2>0$ such that
if a metric disk in $X^2$ with radius $r$ satisfies $1/2\le r\le 1$
and has area $\le \varepsilon_2$, then by comparison theorems one can
prove
\begin{equation}
{\rm d}_{GH}(B_{X^2}(x_\infty, 1), [0,\ell])\le \varepsilon_1/3
\end{equation}
for some $\ell>0$. It follows that
\begin{equation*}
\begin{split}
{\rm d}_{GH}(B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)
}(x_\alpha, 1),[0,\ell])& \le {\rm d}_{GH}(B_{(M^3_\alpha,
\rho^{-2}_\alpha g_{ij}^\alpha)}(x_\alpha,
1),B_{X^2}(x_\infty, 1) )\\
&\quad+ {\rm d}_{GH}(B_{X^2}(x_\infty, 1), [0,\ell])\\
&\le\frac{\varepsilon_1}{3}+\frac{\varepsilon_1}{3}<\varepsilon_1
\end{split}
\end{equation*}
We can now view the ball $B_{\rho^{-2}_\alpha g_{ij}^\alpha
}(x_\alpha, 1)$ as fibered over $[0,\ell]$, instead of over
the disk $B_{X^2}(x_\infty, 1)$.
In this subcase, we let $x_\alpha$ belong to both $ V_{\alpha, X^1}$
and $V_{\alpha, X^2}$.
\smallskip
\noindent {\it Subcase 2b.} The metric disk $B_{X^2}(x_\infty, 1)$
is very fat (thick).
In this subcase, we may assume that the length of the metric circle
$\partial B_{X^2}(x_\infty, r)$ is 100 times greater than
$\varepsilon_1$ for $r \in [\frac 12, 1]$. Hence, the length of the
collapsing fiber $F^{-1}_\alpha(z)$ tends to zero for $ z \in \partial
B_{X^2}(x_\infty, r)$. Thus, the metric spheres $\{ \partial
B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha) }(x_\alpha, r)\}$
collapse in only one direction. Because $G_\alpha$ is an almost
metric submersion due to Perelman's semi-flow convergence theorem
(cf. \autoref{prop1.14} above), volumes of metric balls collapse at
an order $o( \varepsilon_1)$:
\begin{equation}
{\rm Vol}[ B_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha)}(x, r) ] \le
c_1 \varepsilon_1 r^3
\end{equation}
for $r \in [\frac 12, 1]$. Moreover, the free homotopy class of
collapsing fibers is {\it unique} in the annular region
$A_{(M^3_\alpha, \rho^{-2}_\alpha g_{ij}^\alpha) }(x_\alpha, \frac
14, 1)$.
By the discussion above, we obtain the following decomposition of the
manifold $M^3_\alpha$ for large $\alpha$. More precisely, for
sufficiently large $\alpha$, $M^3_\alpha$ has the decomposition
$$
M^3_\alpha = V_{\alpha,X^1}\cup V_{\alpha, {\rm int} (X^2)} =
V_{\alpha, {\rm int} (X^1) }\cup V_{\alpha, {\rm int} (X^2)}\cup
W_{\alpha}
$$
where $W_\alpha$ contains collapsing parts near $\partial X^1$ and
$\partial X^2$ described in \autoref{section5} above. This completes
the proof.
\end{proof}
Let us now recall an observation of Morgan-Tian about the choices of
functions $\{\rho_{\alpha}(x)\}$ and volume collapsing factors
$\{w_{\alpha}\}$.
\begin{prop}[Morgan-Tian \cite{MT2008}]\label{prop6.1}
Let $M^3_{\alpha}, w_{\alpha}$ and $\rho_{\alpha}(x)$ be as in
Theorem 0.1'. Suppose that none of the connected components of
$M^3_{\alpha}$ admits a smooth Riemannian metric of non-negative
sectional curvature. Then, by changing $w_{\alpha}$ by a factor
$\hat c$ independent of $\alpha$, we can choose $\rho_{\alpha}$ so
that
\begin{equation}\label{eq6.2}
\rho_{\alpha}(x)\le {\rm diam}(M^3_{\alpha})
\end{equation}
and
\begin{equation}\label{eq6.3}
\rho_{\alpha}(y)\le \rho_{\alpha}(x)\le 2\rho_{\alpha}(y)
\end{equation}
for all $y\in B_{g_{\alpha}}(x,\frac{\rho_{\alpha}(x)}{2})$.
\end{prop}
\begin{proof}
(cf. Proof of Lemma 1.5 of \cite{MT2008}) Since there is a typo in
Morgan-Tian's argument, for the convenience of readers, we reproduce
their argument with minor corrections.
Let us choose
$$C_0=\max_{0\le s\le 1}\left\{\frac{s^3}{4 \pi\int_0^s(\sinh
u)^2du}\right\}$$
For each connected component $N^3_{\alpha,j}$ of
$M^3_{\alpha}$, the Riemannian sectional curvature of
$N^3_{\alpha,j}$ cannot be everywhere non-negative. Thus, for each
$x\in N^3_{\alpha,j}$, there is a maximum $r=r_{\alpha}(x)\ge
\rho_{\alpha}(x)$ such that the sectional curvature of $g_{\alpha}$ on
$B_{g_{\alpha}}(x,r)$ is greater than or equal to $-\frac{1}{r^2}$.
Consequently, the curvature of $\frac{1}{r^2}g_{\alpha}$ on
$B_{\frac{1}{r^2}g_{\alpha}}(x,1)$ is $\ge -1$. By the Bishop-Gromov
relative comparison theorem and our assumption
${\rm Vol}(B_{g_{\alpha}}(x,\rho_{\alpha}))\le
w_{\alpha}\rho_{\alpha}^3$, we have
\begin{equation} \label{eq6.4}
\begin{split}
& {\rm Vol}(B_{g_{\alpha}}(x,r))=r^3{\rm Vol} [B_{\frac{1}{r^2}g_{\alpha}}(x,1)]
\le r^3\frac{V_{\rm hyp}(1)}{V_{\rm hyp}(\frac{\rho_\alpha}{r})}
{\rm Vol}(B_{\frac{1}{r^2}g_{\alpha}}(x,\frac{\rho_\alpha}{r})) \\
& = \frac{V_{\rm hyp}(1)}{V_{\rm hyp}(\frac{\rho_\alpha}{r})}
{\rm Vol}(B_{g_{\alpha}}(x, \rho_\alpha ))
\le \frac{V_{\rm hyp}(1)}{V_{\rm hyp}(\frac{\rho_\alpha}{r})} w_{\alpha}\rho_{\alpha}^3 \\
& = \frac{V_{\rm hyp}(1)}{V_{\rm hyp}(\frac{\rho_\alpha}{r})}
(\frac{\rho_{\alpha}}{r})^3\frac{1}{(\frac{\rho_{\alpha}}{r})^3} w_{\alpha}\rho_{\alpha}^3
\le C_0 V_{\rm hyp}(1) w_{\alpha}r^3
\end{split}
\end{equation}
where $V_{\rm hyp}(s)$ is the volume of the ball $B_{\mathbb
H^3}(p_0,s)$ of radius $s$ in $3$-dimensional hyperbolic space with
constant negative curvature $-1$.
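Although not needed in the sequel, the constant $C_0$ can be
evaluated explicitly; the following routine calculation is included
only for the reader's convenience. Using $\sinh^2 u = \frac 12
(\cosh 2u - 1)$, we have
$$
V_{\rm hyp}(s) = 4\pi\int_0^s (\sinh u)^2 du = \pi(\sinh 2s - 2s),
$$
so that $\frac{s^3}{V_{\rm hyp}(s)} \to \frac{3}{4\pi}$ as $s \to
0^+$. By the volume comparison between the space forms of curvature
$0$ and $-1$, the ratio $\frac{s^3}{V_{\rm hyp}(s)}$ is
non-increasing in $s$; hence $C_0 = \frac{3}{4\pi}$.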
Let
$\hat C=C_0 V_{\rm hyp}(1)$
be a constant independent of $\alpha$, and let $Rm_g$ denote the
sectional curvature of the metric $g$. We now replace
$\rho_\alpha(x)$ by
\begin{equation}\label{eq6.6}
\rho_\alpha^*(x)=\max\{r|
Rm_{g_{\alpha}}|_{B_{g_{\alpha}}(x,r)}\ge-\frac{1}{r^2}\}
\end{equation}
Our new choice $\rho_\alpha^*(x)$ clearly satisfies
\begin{equation} \label{eq6.7}
\left\{ \begin{aligned}
\rho_\alpha^*(x) &\le {\rm diam}(M^3_{\alpha}) \\
\frac{1}{2}\rho_\alpha^*(x)\le&\rho_\alpha^*(y)\le 2\rho_\alpha^*(x)
\end{aligned} \right.
\end{equation}
for all $y\in B_{g_{\alpha}}(x,\frac{\rho_{\alpha}^*}{2})$ and
$x\in M^3_{\alpha}.$
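For completeness, we sketch the verification of the second line of
\eqref{eq6.7}; this short standard argument is included only for the
reader's convenience. If $y\in B_{g_{\alpha}}(x,
\frac{\rho_\alpha^*(x)}{2})$, then $B_{g_{\alpha}}(y,
\frac{\rho_\alpha^*(x)}{2}) \subset B_{g_{\alpha}}(x,
\rho_\alpha^*(x))$, on which
$$
Rm_{g_{\alpha}} \ge -\frac{1}{[\rho_\alpha^*(x)]^2} \ge
-\frac{1}{[\rho_\alpha^*(x)/2]^2}.
$$
By the maximality in \eqref{eq6.6}, it follows that
$\rho_\alpha^*(y) \ge \frac 12 \rho_\alpha^*(x)$. Conversely, if we
had $\rho_\alpha^*(y) > 2\rho_\alpha^*(x)$, then ${\rm d}_{g_\alpha}(x,y)
< \frac{\rho_\alpha^*(x)}{2} < \frac{\rho_\alpha^*(y)}{2}$, and the
same argument with the roles of $x$ and $y$ exchanged would give
$\rho_\alpha^*(x) \ge \frac 12 \rho_\alpha^*(y) > \rho_\alpha^*(x)$,
a contradiction. Hence $\rho_\alpha^*(y) \le 2\rho_\alpha^*(x)$.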
\end{proof}
Our next goal in this section is to show that we can perturb our
decomposition above along their boundaries so that the new
decomposition admits an F-structure in the sense of Cheeger-Gromov,
and hence $M^3_{\alpha}$ is a graph manifold for sufficiently large
$\alpha$. Let us begin with a special case.
\subsection{Perelman's collapsing theorem for a special case} \
\medskip
\
In this subsection, we prove Theorem 0.1' for a special case when
Perelman's fibrations are assumed to be circle fibrations or toral
fibrations. In the next sub-section, we reduce the general case to the
special case (i.e. the case when no spherical fibration occurs).
As we pointed out earlier, for the proof of \autoref{thm0.1} it is
sufficient to verify that $M^3_{\alpha}$ admits an F-structure of
positive rank in the sense of Cheeger-Gromov for sufficiently large
$\alpha$, (cf. \cite{CG1986}, \cite{CG1990}, \cite{R1993}).
Recall that an F-structure on a $3$-manifold $M^3$ is a collection
of charts $\{(U_i,V_i,T^{k_i})\}$ such that $T^{k_i}$ acts on a
finite normal cover $V_i$ of $U_i$ and the $T^{k_i}$-action on $V_i$
commutes with the deck transformations on $V_i$. Moreover, the torus
actions satisfy a compatibility condition on any possible overlaps,
which we now state.
\begin{definition}[Cheeger-Gromov's compatibility condition]\label{def6.2}
$\quad$\\ Let $\{(U_i,V_i,T^{k_i})\}$ be a collection of charts as
above. If, for any two charts $(U_i,V_i,T^{k_i})$ and $ (U_j,V_j,
T^{k_j})$ with non-empty intersection $U_i\cap U_j\ne \varnothing$
and with $k_i\le k_j$, the $T^{k_i}$-action commutes with the
$T^{k_j}$-action on a finite normal cover of $U_i\cap U_j$ after
re-parametrization if needed, then the collection $\{(U_i,V_i,
T^{k_i}) \}$ is said to satisfy Cheeger-Gromov's compatibility
condition.
\end{definition}
For the purpose of this paper, since the manifolds under
consideration are $3$-dimensional, the free tori $T^{k_i}$
must act either as a circle action or as a $2$-dimensional torus
action. Thus we only have to consider the following three possibilities:
\begin{enumerate}[(i)]
\item Both overlapping charts $ (U_i,V_i, S^1)$ and $ (U_j,V_j, S^1)$
admit almost free circle actions;
\item
Both overlapping charts $ (U_i,V_i, T^2)$ and $ (U_j,V_j, T^2)$
admit almost free torus actions;
\item
There are a circle action $(U_i,V_i, S^1)$ and a torus-action $
(U_j,V_j, T^2)$ with non-empty intersection $U_i\cap U_j\ne
\varnothing$.
\end{enumerate}
Let us begin with the sub-case (ii).
\begin{prop}\label{prop6.3}
Let $U_{\alpha,i_1}$ and $U_{\alpha,i_2}$ be two overlapping open
subsets in $U_{\alpha, X^1}$ with toral fibers $F^2_{i_1}\cong F^2
_{i_2} \cong T^2$ described in \autoref{section4}. Suppose that $
U_{\alpha,i_1}\cap U_{\alpha,i_2}\ne\varnothing$. Then we can modify
the charts $(U_{\alpha,i_1},V_{\alpha,i_1},T^2)$ and
$(U_{\alpha,i_2},V_{\alpha,i_2},T^2)$ so that the perturbed
torus-actions on the modified charts satisfy the Cheeger-Gromov
compatibility condition.
\end{prop}
\begin{proof}
Recall that in \autoref{section4} we worked on the following
diagrams
\begin{diagram}
U_{\alpha,i_1}&\rTo^{f_{i_1}}&\mathbb{R}&&U_{\alpha,i_2}&\rTo^{f_{i_2}}&\mathbb{R}\\
\dTo_{\text{G-H}}&&\dCorresponds&{\quad {\rm and} \quad}&\dTo_{\text{G-H}}&&\dCorresponds\\
X^1_{i_1}&\rTo^{r_{\rho_{i_1}}}&\mathbb{R}&&X^1_{i_2}&\rTo^{r_{\rho_{i_2}}}&\mathbb{R}
\end{diagram}
Without loss of generality, we may assume that
$X_{i_1}=(a_{i_1},b_{i_1})$ and $X_{i_2}=(a_{i_2},b_{i_2})$ with
non-empty intersection $X_{i_1}\cap X_{i_2}=(a_{i_2},b_{i_1})\ne
\varnothing$, where $a_{i_1}< a_{i_2}< b_{i_1}< b_{i_2}$. Let
$\{\lambda_1,\lambda_2\}$ be a partition of unity of
$[a_{i_1},b_{i_2}]$ corresponding to the open cover
$\{(a_{i_1},b_{i_1}),(a_{i_2},b_{i_2})\}$. After choosing the $\pm$
sign and $\lambda_1, \lambda_2$ carefully, we may assume that
$$
\hat f(t)=\lambda_1(t)f_{i_1}(t)\pm \lambda_2(t)f_{i_2}(t)
$$
will not have any critical point $t\in [a_{i_1},b_{i_2}]$. We also
require that $\hat f: X_{i_1}\cup X_{i_2}\to \mathbb R$ is a {\it
regular} function in the sense of Perelman (i.e. $\hat f$ is a
composition of distance functions as given by
\autoref{def1.9}-\ref{def1.10}). Thus, we can lift the admissible
function $\hat f$ to a function
$$f_{\alpha}: U_{\alpha,i_1}\cup U_{\alpha,i_2}\to \mathbb R $$
such that
$$\nabla f_{\alpha}|_{y}\ne 0$$
for $y\in [U_{\alpha,i_1}\cup U_{\alpha,i_2}]$, $f_{\alpha}\cong
f_{\alpha,i_1}$ on $[U_{\alpha, i_1}^*-U_{\alpha,i_2}^*]$ and
$f_{\alpha}\cong f_{\alpha, i_2}$ on
$[U_{\alpha,i_2}^*-U_{\alpha,i_1}^*]$, where $U_{\alpha, i_1}^*$ is
a perturbation of $U_{\alpha, i_1}$ and $U_{\alpha, i_2}^*$ is a
perturbation of $U_{\alpha, i_2}$. Thus there is a new perturbed
torus fibration:
$$T^2\to U_{\alpha,i_1}\cup U_{\alpha,i_2}\to Y^1. $$
This completes the proof.
\end{proof}
Similarly, we can modify admissible maps to glue two circle actions
together.
\begin{prop}\label{prop6.4}
Let $U_{\alpha,i_1}$ and $U_{\alpha,i_2}$ be two open sets contained
in $U_{\alpha,X^2_{reg}}$ corresponding to a decomposition of
$M^3_\alpha$ described in \autoref{section1}. Suppose that two
charts $\{(U_{\alpha,i_1}, V_{\alpha,i_1}, S^1),(U_{\alpha,i_2},
V_{\alpha,i_2}, S^1)\}$ have non-empty overlap $U_{\alpha,i_1}\cap
U_{\alpha,i_2}\ne \varnothing$. Then the union $U_{\alpha,i_1}\cup
U_{\alpha,i_2}$ admits a global circle action after some
modifications when needed.
\end{prop}
\begin{proof}
We may assume that, in the limiting processes $U_{\alpha,i_1} \to
X^2_1 $ and $U_{\alpha,i_2} \to X^2_2 $, both limiting surfaces
$X^2_1$ and $X^2_2$ are fat (having relatively large area growth).
In the remaining cases, when either $X^2_1$ or $X^2_2$ is
relatively thin, by the proof of \autoref{thm6.0} we can view either
$ U_{\alpha,i_1} $ or $ U_{\alpha,i_2}$ as a portion of $V_{\alpha,
X^1}$ instead. These remaining situations can be handled by
\autoref{prop6.3} above or \autoref{prop6.5} below, respectively.
As we pointed out in \autoref{section1}-\ref{section2} the
exceptional orbits are isolated. Without loss of generality, we may
assume that no exceptional circle orbits occur in the
overlap $U_{\alpha,i_1}\cap U_{\alpha,i_2}$.
Since both limiting surfaces $X^2_1$ and $X^2_2$ are very {\it fat},
the length of each metric circle $\partial B_{X^2_i}(x_{\infty, i}, r)$
is at least 100 times larger than $\varepsilon_1$, which is in turn
greater than the length of the collapsing fibers, where $ r \in [\frac 12, 1] $.
Hence, on the overlapping region $U_{\alpha,i_1}\cap
U_{\alpha,i_2}$, the two collapsing fibres are freely homotopic to each
other in the shell-type region $ A_{(M^3_\alpha, \rho^{-2}_\alpha
g_{ij}^\alpha) }(x_\alpha,\frac 14, 1)$.
Let us consider the corresponding diagrams again.
\begin{diagram}
U_{\alpha,i_1} &\rTo^{\psi_{i_1}}&\mathbb{R}^2&&U_{\alpha,i_2} &\rTo^{\psi_{i_2}}&\mathbb{R}^2\\
\dTo_{\text{G-H}}&&\dCorresponds&{\quad {\rm and} \quad}&\dTo_{\text{G-H}}&&\dCorresponds\\
X^2_{i_1}&\rTo&\mathbb{R}^2&&X^2_{i_2}&\rTo&\mathbb{R}^2
\end{diagram}
We can use $2\times 2$ matrix-valued functions to construct a
new {\it regular} map
$$\psi_{\alpha}: U_{\alpha,i_1}\cup U_{\alpha,i_2} \to \mathbb R^2$$
from $\{\psi_{\alpha,i_1},\psi_{\alpha,i_2}\}$ as follows. Let
$$\psi_{\alpha}=\lambda_1 A_{i_1}\psi_{i_1}+\lambda_2 A_{i_2}\psi_{i_2}$$
where $\{\lambda_1,\lambda_2\}$ is a partition of unity for
$\{U_{i_1}, U_{i_2}\}$, and $A_{i_1}(x)$ and $A_{i_2}(x)$ are $2\times 2$
matrix-valued functions such that
\begin{enumerate}[(i)]
\item $A_{i_1}(x)\cong \left(
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right)$ is close to the identity on
$[U^*_{i_1}-U^*_{i_2}]$.
\item Similarly, $A_{i_2}(x)\cong \left(
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right)$ is close to the identity on
$[U^*_{i_2}-U^*_{i_1}]$.
\end{enumerate}
where $U^*_{i_1}$ is a perturbation of $U_{i_1}$ and $U^*_{i_2}$ is
a perturbation of $U_{i_2}$. We leave the details to readers.
\end{proof}
We now discuss the relation between circle actions and torus actions.
\begin{prop}\label{prop6.5}
Let $U_{\alpha,i_1}\subset U_{\alpha,X^1}$ and
$U_{\alpha,i_2}\subset U_{\alpha,X^2_{reg}}$, where
$\{U_{\alpha,X^1}$, $U_{\alpha,X^2_{reg}}, U_{\alpha,X^0},
W_{\alpha}\}$ is a decomposition of $M^3_{\alpha}$ as in
\autoref{section1}-\ref{section5}. Suppose that $(U_{\alpha,i_1},
V_{\alpha,i_1}, T^2)$ and $(U_{\alpha,i_2}, V_{\alpha,i_2}, S^1)$
have an interface $U_{\alpha,i_1}\cap U_{\alpha,i_2}\ne
\varnothing$. Then, after a perturbation if needed, the circle orbits
are contained in torus orbits on the overlap.
\end{prop}
\begin{proof}
Let $f_{\alpha,i_1}: U_{\alpha,i_1}\to \mathbb R$ be the regular
function which induces the chart $(U_{\alpha,i_1}, V_{\alpha,i_1},
T^2)$. Suppose that
$\psi_{\alpha,i_2}=(f_{\alpha,i_2},g_{\alpha,i_2}):
U_{\alpha,i_2}\to \mathbb R^2$ is the corresponding regular map.
Since any component of a regular map must be regular, after
necessary modifications, we may assume that $f^*_{\alpha,i_1}:
U^*_{\alpha,i_1}\to \mathbb R$ is equal to $f^*_{\alpha,i_2}$ in the
modified regular map
$\psi^*_{\alpha,i_2}=(f^*_{\alpha,i_2},g^*_{\alpha,i_2}):
U^*_{\alpha,i_2}\to \mathbb R^2$ on the overlap
$U^*_{\alpha,i_1}\cap U^*_{\alpha,i_2}$. It follows that the
modified $S^1$-orbits are contained in the newly perturbed
$T^2$-orbits. Thus, the modified charts $(U^*_{\alpha,i_1},
V^*_{\alpha,i_1}, T^2)$ and $(U^*_{\alpha,i_2}, V^*_{\alpha,i_2},
S^1)$ satisfy the Cheeger-Gromov compatibility condition.
\end{proof}
We now conclude this sub-section with a special case of Perelman's
collapsing theorem.
\begin{theorem}\label{thm6.6}
Suppose that $\{(M^3_{\alpha}, g_{\alpha})\}$ satisfies all
conditions stated in \autoref{thm0.1}. Suppose that all Perelman
fibrations are either (possibly singular) circle fibrations or toral
fibrations. Then $M^3_\alpha$ must admit an F-structure of positive
rank in the sense of Cheeger-Gromov for sufficiently large $\alpha$.
Consequently, $M^3_\alpha$ is a graph manifold for sufficiently
large $\alpha$.
\end{theorem}
\begin{proof}
By our assumption and \autoref{prop6.1}, there is a collection of
charts $\{(U_{\alpha,i}, V_{\alpha,i}, T^{k_i})\}_{i=1}^n$ such that
\begin{enumerate}[(i)]
\item $1\le k_i\le 2$;
\item $\{U_{\alpha,i}\}_{i=1}^n$ is an open cover of $M^3_\alpha$;
\item $(U_{\alpha,i}, V_{\alpha,i}, T^{k_i})$ satisfies condition
(3.1).
\end{enumerate}
It remains to verify that our collection
$$
\{(U_{\alpha,i}, V_{\alpha,i}, T^{k_i})\}_{i=1}^n
$$
satisfies the Cheeger-Gromov compatibility condition, after some
modifications. Since exceptional circle orbits with non-zero Euler
number are isolated, we may assume that on any possible overlap
$$U_{\alpha,i_1}\cap\cdots \cap U_{\alpha,i_j}\ne \varnothing$$
there are no exceptional circle orbits. Applying
\autoref{prop6.3}-\ref{prop6.5} when needed, we can perturb our
charts so that the modified collection of charts $\{(U^*_{\alpha,i},
V^*_{\alpha,i}, T^{k_i})\}_{i=1}^n$ satisfies the Cheeger-Gromov
compatibility condition. It follows that $M^3_\alpha$ admits an
F-structure $\mathscr{F}^*$ of positive rank. Therefore,
$M^3_\alpha$ is a graph manifold for sufficiently large $\alpha$.
\end{proof}
\subsection{Perelman's collapsing theorem for the general case} \
\medskip
\
Our main difficulty is to handle a Perelman fibration with possible
spherical fibers:
$$S^2\to U_{\alpha,i}\to {\rm int} (X^1) $$
because the Euler characteristic of $S^2$ is non-zero and $S^2$ does not
admit any free circle actions.
\begin{prop}\label{prop6.7}
Let $\{(M^3_{\alpha}, g_{\alpha})\}$ be a sequence of Riemannian
$3$-manifolds as in \autoref{thm0.1}. If there is a Perelman
fibration
$$N^2\to U_{\alpha,X^1}\to (a,b)$$
with spherical fiber $N^2\cong S^2$, then $U_{\alpha, X^1}$
must be contained in the interior of $M^3_\alpha$ when $M^3_\alpha$
has non-empty boundary $\partial M^3_\alpha\ne \varnothing$.
\end{prop}
\begin{proof}
According to condition (2) of \autoref{thm0.1}, for each boundary
component $N^2_{\alpha,j}\subset \partial M^3_{\alpha}$, there
is a topologically trivial collar $V_{\alpha,j}$ of length one such
that $V_{\alpha,j}$ is diffeomorphic to $T^2\times [0,1)$. Thus, we
have $U_{\alpha, X^1}\cap
\partial M^3_\alpha=\varnothing$.
\end{proof}
We need to make the following elementary but useful observation.
\begin{prop}\label{cor6.8}
(1) Let $A=\{(x_1,x_2,x_3)\in \mathbb R^3\big |\ \ |\vec x|=1,
|x_3|\le \frac12\}$ be an annulus. The product space $S^2\times
[0,1]$ has a decomposition
$$\big(D^2_+\times [0,1]\big) \cup \big( A\times [0,1] \big) \cup
\big( D^2_-\times [0,1]\big), $$
where $D^2_{\pm}=\{(x_1,x_2,x_3)\in S^2(1)|
\pm x_3\ge \frac12\}$.
(2) If $\{(M^3_{\alpha}, g_{\alpha})\}$ satisfies conditions of
\autoref{thm0.1}, then, for sufficiently large $\alpha$,
$M^3_\alpha$ has a decomposition $\{U_{\alpha,j}\}_{j=1}^{m_\alpha}$
such that
(2.a) either a finite normal cover $V_{\alpha,j}$ of
$U_{\alpha,j}$ admits a free $T^{k_i}$-action with $k_i=1,2$.
(2.b) or $U_{\alpha,j}$ is homeomorphic to a finite solid cylinder
$D^2\times [0,1]$ with $U_{\alpha,j}\cap \partial M^3_\alpha=
\varnothing$.
(3) If $U_{\alpha,j}$ is a finite cylinder as in (2.b), then it must
be contained in a chain $\{U_{\alpha,j_1},\cdots, U_{\alpha,j_m}\}$
of finite solid cylinders such that their union
$$\hat W_{\alpha,h_j}=\bigcup_{i=1}^m U_{\alpha,j_i}$$
is homeomorphic to a solid torus $D^2\times S^1$.
\end{prop}
\begin{proof}
The first two assertions are trivial. It remains to verify the third
assertion. By our construction, if $U_{\alpha,j}$ is a finite
cylinder homeomorphic to $D^2\times I$, then $U_{\alpha,j}$ meets
$V_{\alpha,X^1}$ exactly in $D^2\times \partial I$. Moreover, such a
finite solid cylinder $U_{\alpha,j}$ is contained in a chain
$\{U_{\alpha,j_1},\cdots,U_{\alpha,j_m}\}$ of solid cylinders. It is
easy to see that the union $\hat W_{\alpha, h_j}$ is homotopic to
its core $S^1$; i.e., $\hat W_{\alpha,h_j}$ is homeomorphic to a
solid torus $D^2\times S^1$.
\end{proof}
We are ready to complete the proof of Perelman's collapsing theorem.
\begin{proof}[{\bf Proof of \autoref{thm0.1}}]
Using \autoref{prop6.1} we can choose $\rho_{\alpha}$ so that
$$
\rho_{\alpha}(x)\le {\rm diam}(M^3_{\alpha})
$$
and
$$
\frac{1}{2}\rho_{\alpha}(y)\le \rho_{\alpha}(x)\le 2\rho_{\alpha}(y)
$$
for all $y\in B_{g_{\alpha}}(x,\frac{\rho_{\alpha}}{2})$.
Therefore, it is sufficient to verify Theorem 0.1' instead.
By our discussion above, for sufficiently large $\alpha$, our
$3$-manifold $M^3_\alpha$ admits a collection of (possibly singular)
Perelman fibrations. Thus $M^3_\alpha$ has a decomposition
$$
M^3_{\alpha}= V_{\alpha,X^0}\cup V_{\alpha,X^1}\cup V_{\alpha,
{\rm int} (X^2)} \cup W_\alpha .
$$
Each chart in $ V_{\alpha,X^0}\cup V_{\alpha, {\rm int} (X^2)}$
admits a Seifert fibration. However, the remaining charts could be
homeomorphic to $S^2\times I$, $[\mathbb {RP}^3-D^3]$ or a solid
cylinder $D^2\times I$. It follows from \autoref{cor6.8}(3) that
$M^3_\alpha$ has a more refined decomposition
$$M^3_\alpha=\bigcup_{j=1}^m U_{\alpha,j}$$
such that
\begin{enumerate}[(i)]
\item either a finite normal cover $V_{\alpha,j}$ of $U_{\alpha,j}$
admits a free $T^{k_j}$-action with $k_j=1,2$;
\item or $U_{\alpha,j}$ is homeomorphic to a solid torus $D^2\times
S^1$, which is obtained by a chain of solid cylinders.
\end{enumerate}
Observe that possible exceptional circle orbits are isolated.
Moreover, if $\{U_{\alpha,j}\}$ are of type (2.b) in
\autoref{cor6.8}, their cores $\{0\}\times S^1$ are isolated as
well.
It remains to verify that our collection of charts
$\{(U_{\alpha,j},V_{\alpha,j}, T^{k_j})\}$ satisfies the Cheeger-Gromov
compatibility condition. By the observations on exceptional orbits and
cores of solid cylinders, we may assume that on any possible overlap
$$U_{\alpha,j_1}\cap\cdots\cap U_{\alpha,j_k}\ne \varnothing$$
there are no exceptional orbits and no cores of solid cylinders.
We require that $V_{\alpha, {\rm int} (X^2)}$ meets the $S^2$-factors in
annular regions of type $A_\alpha$. Thus, if $U_{\alpha, j} \cong S^1 \times
D^2$, we can introduce $T^2$-actions on $\partial U_{\alpha, j}
\cong S^1 \times S^1$ which are compatible with $S^1$-actions on
$A_\alpha$. After modifying our charts as in proofs of
\autoref{prop6.3}-\ref{prop6.5}, we can obtain a new collection of
charts $\{(U^*_{\alpha,j},V^*_{\alpha,j}, T^{k_j})\}$ satisfying the
Cheeger-Gromov compatibility condition. Therefore, $M^3_\alpha$
admits an F-structure $\mathscr{F}_{\alpha}$ of positive rank. It
follows that $M^3_\alpha$ is a $3$-dimensional graph manifold, (cf.
\cite{R1993}).
\end{proof}
\noindent {\bf Acknowledgement:} Both authors are grateful to
Professor Karsten Grove for teaching us the modern version of
critical point theory for distance functions. Professor Takashi
Shioya carefully proofread every sub-step of our simple proof in the
entire paper, pointing out several inaccurate statements in an
earlier version. He also generously provided us a corrected proof of
\autoref{prop2.2}. We are very much indebted to Professor Xiaochun
Rong for supplying \autoref{thm6.0} and its proof. Hao Fang and
Christina Sormani made useful comments on an earlier version.
Finally, we also appreciate Professor Brian Smyth's generous help
with the exposition in our paper. We thank the referee for his (or her)
suggestions, which led to many improvements.
\bibliographystyle{amsalpha}
\section{Introduction}
The main goal of this paper is to provide a new
calculation of the value of the
gluino condensate
in four-dimensional ${\cal N}=1$ supersymmetric pure $SU(N)$ gauge theory.
Our approach incorporates recent results and ideas of
Refs.~\cite{HKLM,LY,KL,LL,KvBzer,KvBone,KvBtwo}.
Previous to this, two conceptually different approaches for calculating
$\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$ have been followed in the literature:
1. In the first methodology \cite{NSVZone,ARV,AKMRV}, the so-called
strong-coupling instanton (SCI) approach, the gluino condensate
$\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$
is determined directly in the strongly coupled theory
using an explicit one-instanton calculation of
$\Vev{{\mathop{\rm tr}\nolimits\lambda^2(x_1)\over16\pi^2}\cdots{\mathop{\rm tr}\nolimits\lambda^2(x_N)\over16\pi^2}}$.
Cluster decomposition arguments are then invoked in order to
extract $\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$.
2. In the second methodology \cite{NSVZtwo},
the so-called weak-coupling instanton (WCI) approach,
the calculation is
performed with additional matter fields whose presence ensures that the theory
is weakly coupled and a semi-classical `constrained instanton' calculation
is justified \cite{ADS}.
Holomorphicity of supersymmetric F-terms is then used to decouple
the matter fields and to flow to the original pure ${\cal N}=1$ gauge theory.
As reviewed in \cite{HKLM},
these two methods give two different
values for the gluino condensate \cite{NSVZtwo,FS,AKMRV,FP}:
\begin{subequations}\begin{align}
&\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\rm SCI}\ =\
{2 \over [(N-1)! \ (3N-1)]^{1/N}} \ \Lambda^3
\ ,\elabel{stwka} \\
&\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\rm WCI}\ =\ \Lambda^3
\ .\elabel{stwkb}
\end{align}\end{subequations}
These results are quoted in the Pauli-Villars scheme
with $\Lambda$ being the corresponding dimensional transmutation scale
of the theory. The reason for the discrepancy between the SCI versus WCI
calculations, as well as the question as to which is correct,
has been a long-standing controversy
\cite{NSVZtwo,AKMRV,KS,SVrev}.
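To gauge the size of the discrepancy, note that already at $N=2$ the SCI
value \eqref{stwka} gives (a simple numerical check)
\begin{equation*}
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\rm SCI}
\ =\ {2\over[\,1!\,(3\cdot2-1)\,]^{1/2}}\,\Lambda^3
\ =\ {2\over\sqrt5}\,\Lambda^3\ \approx\ 0.894\,\Lambda^3\ ,
\end{equation*}
while at large $N$ Stirling's formula, $[(N-1)!]^{1/N}\sim N/e$, shows
that the ratio of the SCI to the WCI value behaves as $2e/N$ and hence
tends to zero. The two expressions thus differ parametrically in $N$,
not merely by an overall scheme-dependent constant.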
The new ingredient in this old puzzle
is the fact that over the last few years {\it multi\/}-instanton
technology has been developed \cite{MO,measone,meastwo,KMS,DHKMV}
to the extent that
calculations
can be performed in supersymmetric (and in principle non-supersymmetric)
gauge theories,
both in the weak-coupling
\cite{MO} and in the strong-coupling regimes \cite{DHKMV}, providing us
with successful quantitative tests of, respectively, the Seiberg-Witten
solution of ${\cal N}=2$ theories \cite{SW} and the Maldacena duality
\cite{Maldacena} in the ${\cal N}=4$ theory.
In \cite{HKLM} we re-examined the gluino condensate controversy using
these recently developed
methods. In particular, we evaluated the large $N$ contribution of
$k$ instantons to gluino correlation functions
and demonstrated conclusively that an essential technical step in the SCI
calculation
of the gluino condensate, namely the use of cluster decomposition,
is actually invalid.
The central idea pursued in the present paper
is that there are additional configurations
which contribute to the gluino condensate in the strongly-coupled regime,
implying that the instanton-induced SCI expression \eqref{stwka} is
incomplete. The existence of other contributions necessarily affects
the application of cluster decomposition in the following sense.
Since each single instanton has $2N$
adjoint fermion zero-modes the $k$-instanton configuration
contributes to the correlation function
\begin{equation}
\VEV{{\mathop{\rm tr}\nolimits\lambda^2(x_1)\over16\pi^2}\cdots{\mathop{\rm tr}\nolimits\lambda^2(x_{kN})\over16\pi^2}}\
,
\elabel{cork}\end{equation}
rather than {\it directly\/} to
$\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$ itself. In the SCI approach
gluino condensate is obtained by applying cluster decomposition to
\eqref{cork} with the additional assumption that the instanton calculation
averages over the $N$ physically equivalent vacua of the ${\cal N}=1$ theory.
In the following, we will show that
the correlator \eqref{cork} is not dominated solely by
instantons and hence the clustering argument must be applied to the complete
expression and not just to the instanton contribution. This is of course
in complete agreement with our earlier observation \cite{HKLM}\ that
clustering fails when only multi-instantons are included in the SCI calculation.
Furthermore, when the theory is partially compactified on
${\Bbb R}^3\times S^1$,
we will identify the configurations contributing {\it directly\/} to
$\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$ with monopoles.
By considering the contribution of
the monopole configurations, we will be able to argue that the complete
strong-coupling expression for the gluino condensate
is different from the SCI expression \eqref{stwka} but agrees perfectly with
the WCI result \eqref{stwkb}.
It has been suspected for a long time \cite{BFST,FFS,BL,Osborn}
that in the strongly coupled theories, such as QCD or its
supersymmetric brethren,
instantons should be thought of as composite states of more basic
configurations, loosely referred to as `instanton partons'. These partons
would give important and possibly dominant contributions to
the non-perturbative dynamics at
strong coupling. Our intention is to make this idea more precise.
A necessary evil in our approach is to consider the theory with one of
its dimensions compactified.\footnote{Our approach is different from
the toron
calculations of Ref.~\cite{CG} where all four dimensions were compactified
on a torus.
The advantage of our method compared to that of \cite{CG} is that we
do not have to fine-tune the compactification parameters for the finite-action
configurations to exist.
We also note that the value gluino condensate extracted from the
toron approach of \cite{CG} in the finite-volume torus with the fine-tuned
periods is difficult to interpret in the infinite volume and its numerical value
agrees neither with the WCI \eqref{stwkb} nor with the SCI
\eqref{stwka} results. In the alternative toron set-up advocated in
\cite{Zh}, the fine-tuning problem was avoided at the cost of introducing
{\it singular} toron-like configurations with a branch cut and an IR regulator.}
To this end, we suppose that, say,
$x_4$, is periodic with period $\beta/2 \pi$.\footnote{The indices run
over $m=1,2,3,4$ and $\mu=1,2,3$. Our other
conventions in four
and three non-compact dimensions as well as instanton and monopole basics
follow closely Appendices A and C of Ref.~\cite{DKMTV}.} We must then impose
periodic boundary conditions for bosons and fermions
\begin{equation}
A_m (x_\mu,x_4=0) \ = \ A_m (x_\mu,x_4=\beta) \ , \quad
\lambda (x_\mu,x_4=0) \ = \ \lambda (x_\mu,x_4=\beta) \ ,
\elabel{ssbc}\end{equation}
to preserve supersymmetry. An important additional ingredient,
as explained in Sec.~II of \cite{GPY}, is that the local gauge group itself
must also be composed of `proper' gauge transformations, i.e.~those
that are periodic on $S^1$:
\begin{equation}
U(x_\mu,x_4=0) \ = \ U(x_\mu,x_4=\beta) \ .
\elabel{ssub}\end{equation}
We will refer to the aforementioned theory as the `theory
on the cylinder ${\Bbb R}^3 \times S^1$' to distinguish it
from the finite temperature
compactification where the fermions have {\it anti-periodic\/}
boundary conditions.
The situation we envisage is similar to that discussed in
\cite{SWthree}, since the theory on
${\Bbb R}^3 \times S^1$ interpolates between the four-dimensional ${\cal N}=1$
pure gauge theory, for $\beta=\infty$, and a three-dimensional
${\cal N}=2$ gauge theory, for $\beta=0$. {}From now on we will work at
finite $\beta$ and only at the end of the calculation
take the limit of $\beta \to \infty$ in order
to recover the genuinely four-dimensional theory.
Some time ago,
Gross, Pisarski and Yaffe \cite{GPY} gave a complete topological classification of the
smooth finite-action gauge fields on ${\Bbb R}^3 \times S^1$
which may contribute to the path integral in the semi-classical approximation.
The relevant configurations are characterized by three sets of invariants:
the instanton number $k$, the magnetic charge $q$, and the eigenvalues
of the asymptotics of the Wilson line:
\begin{equation}\Omega \ = \ {\Bbb P}\exp \ \int_0^\beta dx_4 \ A_4
(x_4,x_\mu\to \infty)\ .
\elabel{wil}\end{equation}
One consequence of this classification
is that at finite radius $\beta$ instanton
configurations do not exhaust the set of semi-classical contributions because
configurations with magnetic charges can---and in fact do---contribute to the
non-perturbative dynamics, including the value of the gluino condensate.
Ref.~\cite{GPY} further
argued that in the {\it finite temperature\/}
compactification---not the one under present consideration---the
non-trivial values of the asymptotic Wilson line \eqref{wil} are suppressed
in the infinite volume limit. Consequently, the classically flat directions
\begin{equation}
\langle A_4 \rangle = {\rm diag}(a_1,a_2,\ldots,a_N)
\elabel{ppr}\end{equation}
are lifted by thermal
quantum corrections and the true vacuum of the theory is
$\langle A_4 \rangle =0$.
In this case the configurations with magnetic
charges are not relevant and the semi-classical physics involves
instantons only.
Remarkably, for the theory on the cylinder, with periodic boundary
conditions on the fermions, the argument of \cite{GPY}
does not apply and, as we shall see, the opposite scenario
ensues:
(i) The semi-classical physics of the
${\Bbb R}^3 \times S^1$ $SU(N)$ theory is described by configurations of
monopoles of $N$ different types.
(ii) The classical moduli space of the
${\Bbb R}^3 \times S^1$ $SU(N)$ theory \eqref{ppr}
is lifted in a non-trivial way
\begin{equation}
\langle A_4 \rangle \ = \
{\rm diag}\Big({N-1 \over N}{\pi \over i \beta} \ ,
\ {N-3 \over N}{\pi \over i \beta} \ ,
\ldots \ , \
-{N-1 \over N}{\pi \over i \beta}
\Big) \ ,
\elabel{lgen}\end{equation}
leaving behind $N$ supersymmetry-preserving vacua labelled
by the $N$ discrete values of the $\theta$-angle\footnote{In general, the
$\theta F^{ \ *}F$ term in the microscopic Lagrangian
can be rotated away with an anomalous chiral transformation of gluinos.
However $\theta$ is an angular variable in the sense that $\theta =
2\pi n$, $n\in{\Bbb Z}$,
is indistinguishable from $\theta=0$. We will demonstrate in the following that the
topological charge $Q$ of the configurations contributing to the gluino
condensate is $Q=1/N$ and thus the net effect of the $\theta F^{ \ *}F$ term
in each vacuum is the phase factor $\theta_u /N$.}
\begin{equation}\theta_u \ = \
2 \pi (u-1) \ , \qquad
u=1,2,\ldots,N\ .
\elabel{ltht}\end{equation}
The $N$ vacua are related to each other by
the chiral subgroup ${\Bbb Z}_N$, which permutes $\theta_u$'s,
but leaves the Wilson line \eqref{lgen} unchanged.
Each such vacuum
contributes a factor of $1$ to the Witten index
${\rm tr}(-1)^{\rm F}=N$
\cite{Windx}.
The values of
gluino condensate in each of these vacua will be related to each other
by a trivial phase transformation $\exp[i\theta_u /N]$.
{}From now on we will concentrate
on just one of the vacua, with $\theta_u=0$.
The distinctive feature of \eqref{lgen} is the constant equal spacing
between the VEVs $a_j$:
\begin{equation} a_{j} - a_{j+1} \ = \ {2\pi \over iN\beta} \ {\rm
mod} \ {2\pi\over i\beta}\ ,
\qquad j=1,2,\ldots,N \ .
\elabel{lequ}\end{equation}
In general, one would think that
the field configurations of the
${\Bbb R}^3 \times S^1$ theory which are relevant in the semi-classical regime
are {\it both\/} instantons {\it and\/} monopoles. Remarkably,
however, the instanton configurations are themselves included as
specific multi-monopole configurations. This happens in the following way:
first of all, an instanton configuration
on the cylinder follows from a
standard instanton configuration
in ${\Bbb R}^4$ \cite{BPST} by imposing periodic
boundary conditions in $x_4$ \eqref{ssbc}. In addition we need
instanton solutions in the presence of a
non-vanishing VEV for the gauge field component
$A_4$, or equivalently a non-trivial expectation of the Wilson line,
as in Eqs.~\eqref{wil} and \eqref{ppr}. Such
periodic instantons in the presence of a Wilson line were recently analyzed in
Refs.~\cite{LY,KL,LL,KvBzer,KvBone,KvBtwo}.\footnote{For the simpler case of
$\langle A_4 \rangle =0$ periodic instantons were previously constructed
in Refs.~\cite{GPY,HS}.} It transpires that instantons on the
cylinder can be understood as composite configurations
of $N$ single monopoles, one of each of the $N$ different types
\cite{Nahmtwo,Garland,LY,KL,LL,KvBzer,KvBone,KvBtwo}. One expects in
an $SU(N)$ theory on ${\Bbb R}^4$ that the lowest charged, or
fundamental, monopoles come in
$N-1$ different varieties, carrying a unit of magnetic charge from
each of the $U(1)$ factors of the $U(1)^{N-1}$ gauge group left
unbroken by the VEV.
The additional monopole, needed to make up the $N$ types,
is specific to the compactification on the cylinder
\cite{LY,KL,LL,KvBzer} and will be called here a KK-monopole.
The new monopole carries specific magnetic charges
of the unbroken $U(1)^{N-1}$ gauge group as well as an instanton charge.
The magnetic charge of the KK-monopole is such that
when all $N$ types of monopoles are present, the magnetic charges
cancel and the resulting configuration only carries a unit instanton
charge.
The $N-1$ fundamental monopoles are the embeddings of the standard
$SU(2)$ BPS monopole \cite{thm,polm,Bog,PS}
on ${\Bbb R}^3$ spanned by $x_{1,2,3}$ (independent of the $S^1$ coordinate
$x_4$) in the gauge group $SU(N)$. At finite radius $\beta$, these
monopoles have finite action and hence contribute
to the path integral in the semi-classical regime as described in
Refs.~\cite{Pol,AHW} and \cite{DKMTV,PP,dkmthreed}.
The monopole solutions satisfy Bogomol'nyi equations that
are precisely the 4D self-duality equations\footnote{In the usual
interpretation of the self-duality equations for the monopole, the
time component of the gauge field is interpreted as the Higgs
field; in the present discussion this field {\it is\/} the component
of the gauge field along the compact direction.}
\begin{equation}F_{mn} \ ={ \ }^* F_{mn} \ ,\elabel{seld}\end{equation}
and each solution has two
adjoint fermion zero-modes as enumerated by the Callias index theorem
\cite{Cal}. The same consideration applies to the
KK-monopole
as well, since it is at least formally gauge equivalent to a standard fundamental
monopole via an improper (non-periodic) gauge transformation
\cite{LL}. Since there are two adjoint fermion zero-modes in the
background of each of the $N$ types of monopoles, these configurations
can contribute directly to $\Vev{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}$.
The remainder of the paper is organized as follows. Initially, we
focus on the case with $SU(2)$ gauge group; the generalization to
$SU(N)$ is then obvious. In general, the
theory on the cylinder can develop a VEV for the
gauge field component along the $S^1$ direction, Eq.~\eqref{ppr}, which for the case
of $SU(2)$ gauge group we parametrize as
\begin{equation}
\langle A_4 \rangle \ = \ v \ {\tau^3 \over 2i} \ \equiv \
{\rm diag}\big({v \over 2i} ,-{v \over 2i}\big) \ , \elabel{pptwo}\end{equation}
where $v$ is an arbitrary real parameter
which parametrizes the classical moduli
space. For every fixed $v$ there are actually two distinct vacua
corresponding to the choice of theta angle
Eq.~\eqref{ltht}.
In Section II, we will show, using field theory arguments backed up by
a D-brane analysis, that:
(i) The classical moduli space is a circle,
\begin{equation}
v \in S^1 \ : \quad 0 \le v \le {2 \pi \over \beta} \ . \elabel{clms}\end{equation}
Consequently, $v$ is an angular variable such that
for any fixed $v\neq 0,2\pi/\beta$, the gauge group is broken to $U(1)$.
(ii) There is a conventional 't Hooft-Polyakov BPS monopole
and an additional `compensating' KK-monopole,
each of which satisfies the self-duality equations \eqref{seld} and
admits two adjoint fermion
zero-modes. The singly-charged instanton solution is a composite
configuration of these two monopoles and as expected has
four adjoint fermion zero-modes.
Section III is devoted to an evaluation of the monopole-generated
superpotential
which has the effect of lifting the classical degeneracy parametrized
by $v$. We argue that the true quantum vacuum state is simply the point
\begin{equation}
v_{\rm vac}= {\pi \over \beta} \ .\elabel{vvac}\end{equation}
Furthermore, at $v=v_{\rm vac}$ the effective potential is zero and
supersymmetry remains unbroken.
Hence, there are two supersymmetry-preserving vacua with
\eqref{vvac} and labelled by $\theta_1=0$ and $\theta_2=2\pi$, as per \eqref{ltht},
in agreement with the calculation of the Witten index ${\rm tr}(-1)^{\rm F}=2$
\cite{Windx}.
Moreover, we discover that the superpotential not only lifts the classically
flat direction, but also gives a mass to the dual (magnetic) photon,
which implies confinement of the original electric photon and disappearance
of all the massless modes.
Section IV then goes on to consider the monopole contribution to the gluino condensate.
In the quantum vacua, the gluino condensate $\Vev{\mathop{\rm tr}\nolimits
\lambda^2\over16\pi^2}$ receives contributions
from both the BPS and KK-monopoles. After summing these contributions,
we then take the
decompactification limit $\beta=\infty$ to obtain the value of
gluino condensate in the strongly coupled ${\cal N}=1$ theory which agrees
with the WCI calculation \eqref{stwkb}. Section V concludes with a
brief discussion.
\setcounter{equation}{0}
\section{Semi-classical Configurations}
In this section, we consider in more detail the configurations which
contribute to the semi-classical physics for the theory on the
cylinder. We begin with a discussion of the $SU(2)$ case, follow with
an alternative description in terms of D-branes and then indicate how
the results generalize to the $SU(N)$ gauge group.
\subsection{Gauge group $SU(2)$}
To verify that the classical moduli space is $S^1$, consider a
non-periodic (hence `improper') gauge transformation \cite{LL}
\begin{equation}
U_{\rm special} \ = \ \exp \, \big({\pi x_4 \over i\beta} \tau^3 \big) \ .
\elabel{lgt}\end{equation}
Improper gauge transformations are treated differently from proper
ones in the path integral, in
the sense that two field configurations related by such a gauge
transformation do not belong to the same gauge orbit. The transformation
$U_{\rm special}$, however, has a special property: even though it is not itself
periodic,
$U_{\rm special}(x_4=0)=-U_{\rm special}(x_4=\beta)$, the corresponding
gauge transformed field configurations:
\begin{equation}\begin{split}
A'_m \ &= \ \exp \, \big({\pi x_4 \over i\beta} \tau^3 \big) \
\left(A_m + \partial_m \right) \
\exp \, \big(-{\pi x_4 \over i\beta} \tau^3\big )
\ , \\
\lambda' \ &= \ \exp \, \big({\pi x_4 \over i\beta} \tau^3\big) \
\lambda \ \exp\, \big (-{\pi x_4 \over i\beta} \tau^3 \big)
\ , \elabel{sgtc}\end{split}\end{equation}
remain strictly periodic, i.e.
\begin{equation}
A'_m (x_\mu,x_4=0) \ = \ A'_m (x_\mu,x_4=\beta) \ , \quad
\lambda' (x_\mu,x_4=0) \ = \ \lambda' (x_\mu,x_4=\beta) \ .
\elabel{prbc}\end{equation}
Applied to the third component of the gauge field \eqref{pptwo},
the transformation $U_{\rm special}$ shifts $v$ according to
\begin{equation}
\langle A'_4 \rangle \ = \ (v - {2\pi \over \beta})
\ {\tau^3 \over 2i} \ . \elabel{ppnew}\end{equation}
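As a quick worked step (a check we add here): since $\langle A_4\rangle\propto\tau^3$ commutes with $U_{\rm special}$, only the inhomogeneous term of the gauge transformation contributes,

```latex
\begin{equation}
U_{\rm special}\, \partial_4\, U_{\rm special}^{-1}
\ = \ -{\pi \over i\beta}\,\tau^3
\ = \ -{2\pi \over \beta}\,{\tau^3 \over 2i} \ ,
\qquad\Longrightarrow\qquad
\langle A'_4 \rangle \ = \ \Big(v - {2\pi \over \beta}\Big)\,{\tau^3 \over 2i} \ .
\end{equation}
```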
Thus, using a sequence of these transformations one
can ratchet down an arbitrary value of $v \in{\Bbb R}$ to the range
specified in \eqref{clms}. In fact the
sectors of the theory with $v=\tilde v$ and $v=\tilde v+ 2\pi/\beta$
are physically indistinguishable; one is obtained from the other by relabelling
the Kaluza-Klein modes of the compact direction, i.e.~relabelling
the Matsubara frequencies $\omega_n = 2 n \pi /\beta$ with $n\in{\Bbb Z}$
associated to the compact $x_4\in S^1$ variable.
The standard BPS monopole solution in hedgehog gauge \cite{Bog,PS} is
\begin{equation}\begin{split}
A^{\scriptscriptstyle\rm BPS}_4 (x_\nu)\ &= \ \big(v|x| \ {\rm coth}(v|x|) -1
\big)
{x_a \over |x|^2}{\tau^a \over 2i} \ , \\
A^{\scriptscriptstyle\rm BPS}_\mu (x_\nu)
\ &= \ \Big(1-{v|x| \over {\rm sinh}(v|x|)}
\Big)\epsilon_{\mu\nu a}{x_\nu \over |x|^2}{\tau^a \over 2i}
\ .
\elabel{bpscn}\end{split}\end{equation}
These expressions are obviously independent of the
$S^1$ variable $x_4$,
since the latter can be thought of as the time coordinate of
the static monopole.
The boundary values of \eqref{bpscn} as $|x| \to \infty$, when gauge
rotated to unitary (singular) gauge, agree with \eqref{pptwo}:
\begin{equation}A^{\scriptscriptstyle\rm BPS}_4 \ \to \ v{\tau^3 \over 2i} \ =
\ \langle A_4 \rangle \ . \elabel{bcsi}\end{equation}
The monopole solution \eqref{bpscn} satisfies the self-duality equations
\eqref{seld} and has topological charge
\begin{equation}Q \ \equiv \ {1 \over 16 \pi^2} \ \int_0^\beta dx_4 \int d^3 x
\ \mathop{\rm tr}\nolimits\,{}^*F_{mn}F^{mn} \ = \ {\beta v \over 2 \pi} \
. \elabel{topq}\end{equation}
There are precisely two adjoint fermion zero modes
\cite{Cal} in the monopole background \eqref{bpscn}.
These modes can be generated by supersymmetry transformations
of the bosonic monopole components in \eqref{bpscn}, yielding
\begin{equation}\lambda^{\scriptscriptstyle\rm BPS}_\alpha \ = \ {\textstyle{1\over2}} \xi_\beta
(\sigma^m \bar\sigma^n)_\alpha^{\ \beta} F^{\scriptscriptstyle\rm BPS}_{mn} \ . \elabel{lss}\end{equation}
Here $\sigma^m$ and $\bar\sigma^n$ are the four Pauli matrices and
$\xi_\beta$ is the two-component Grassmann collective coordinate;
see footnote 2.
Finally, the monopole has magnetic charge one,
instanton charge zero, and the action $S_{\scriptscriptstyle\rm BPS}$ is
\begin{equation}S_{\scriptscriptstyle\rm BPS} \ = \ {4\pi \over g^2} \beta v \ .\elabel{macn}\end{equation}
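As a consistency check (ours, not part of the original derivation): since the solution is self-dual, its action is fixed by the topological charge \eqref{topq} through the standard relation $S=(8\pi^2/g^2)\,Q$,

```latex
\begin{equation}
S_{\scriptscriptstyle\rm BPS} \ = \ {8\pi^2 \over g^2}\,Q
\ = \ {8\pi^2 \over g^2}\,{\beta v \over 2\pi}
\ = \ {4\pi \over g^2}\,\beta v \ ,
\end{equation}
```

in agreement with \eqref{macn}.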
The monopole of the second type---the KK
monopole---can be obtained \cite{LL} from the expressions \eqref{bpscn}
by, firstly, replacing the VEV $v$ on the right-hand side of \eqref{bpscn}
with $2\pi /\beta -v$, and then gauge transforming the resulting expression
with $U_{\rm special}$ as in \eqref{sgtc}. Finally to install the original
VEV $v$ one performs the reflection $v\to -v$ implemented by the discrete
transformation $U_{\rm refl}= \exp [i\pi \tau^2 /4]$.
The resulting configuration is the KK-monopole $A^{\scriptscriptstyle\rm KK}_m$
and, though gauge related to $A^{\scriptscriptstyle\rm BPS}_m$, it
must, as described earlier, be accounted for in the path integral as contributing
to a different topological sector.
The improper gauge transformation $U_{\rm special}$ changes \cite{GPY}
the instanton charge, $k \to k+q$, and reverses the sign of the magnetic
charge, $q \to - q$. Thus the KK-monopole
has instanton charge $k=1$ and monopole charge $q=-1$.
The KK-monopole is itself self-dual and its action and topological charge
are:
\begin{equation}\begin{split}S_{\scriptscriptstyle\rm KK} \ &= \ {4\pi \over g^2} \beta
\ ({2\pi \over \beta} -v) \ , \\
Q_{\scriptscriptstyle\rm KK} \ &= \ 1 - {\beta v \over 2 \pi}
\ .\elabel{makk}\end{split}\end{equation}
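Combining \eqref{macn} and \eqref{makk} provides a useful cross-check (a short worked sum we add here): the BPS/KK pair carries exactly the action and charge of a single instanton,

```latex
\begin{equation}
S_{\scriptscriptstyle\rm BPS} + S_{\scriptscriptstyle\rm KK}
\ = \ {4\pi \over g^2}\,\beta v + {4\pi \over g^2}\,(2\pi - \beta v)
\ = \ {8\pi^2 \over g^2} \ ,
\qquad
Q + Q_{\scriptscriptstyle\rm KK}
\ = \ {\beta v \over 2\pi} + 1 - {\beta v \over 2\pi} \ = \ 1 \ .
\end{equation}
```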
As for the original BPS monopole,
there are two adjoint fermion zero-modes (and no anti-fermion
zero-modes) in the KK-monopole background:\footnote{The KK-monopole is
self-dual and not anti-self-dual, and the fact that it has negative
rather than positive magnetic charge is irrelevant for the fermion zero
mode counting.}
\begin{equation}\lambda^{\scriptscriptstyle\rm KK}_\alpha \ = \ {\textstyle{1\over2}} \xi_\beta
(\sigma^m \bar\sigma^n)_\alpha^{\ \beta} F^{\scriptscriptstyle\rm KK}_{mn} \ .
\elabel{lskk}\end{equation}
As was mentioned earlier, Ref.~\cite{GPY} gave a complete topological
classification of the
smooth finite-action gauge fields on ${\Bbb R}^3 \times S^1$
in terms of three invariants:
the instanton number $k$, the magnetic charge $q$ and the VEV $v$; the
action and topological charge are then
\begin{equation}\begin{split}S_{\rm cl} \ &= \ {8\pi^2 \over g^2}
\ (k +q{\beta v \over 2 \pi} ) \ , \\
Q \ &= \ k + q {\beta v \over 2 \pi}\ .
\elabel{tcln}\end{split}\end{equation}
Comparing Eqs.~\eqref{tcln} for the BPS monopole
and the KK-monopole with the charges of a single instanton,
it is tempting to interpret
the latter as the mixed BPS-monopole/KK-monopole configuration.
This interpretation is made precise in Refs.~\cite{KL,LL,KvBzer,KvBone,KvBtwo},
based on earlier work of Refs.~\cite{Nahmtwo,Garland} and \cite{LY}.
We also note that the two gaugino zero-modes of the KK-monopole
combined with the two zero-modes of the BPS-monopole produce the requisite
four adjoint fermion zero-modes of the $SU(2)$ instanton.
\subsection{D-brane description}
Identical conclusions to those of Sec.~II.1 can be reached
in a more geometrical way using D-brane technology, and we describe
this approach here for the additional insight that it
provides.\footnote{For a discussion of ${\cal N}=1$ theories and instantons
in the context of branes see Ref.~\cite{Brodie:1998bv}.}
For the geometrical interpretation of the construction, the ${\cal N}=4$ case is
most straightforward; the modification
of this set-up relevant to describe the ${\cal N}=1$ theory will be considered subsequently.
Therefore we begin with two coincident D3-branes whose collective
dynamics is described by ${\cal N}=4$ supersymmetric $SU(2)$ Yang-Mills on
the four-dimensional world volume
\cite{Witpbr,Polch}.
We now proceed to wrap the world-volume of our D3-branes on the cylinder ${\Bbb R}^3 \times S^1$,
with the radius $R=\beta /2\pi$. With this accomplished, one
performs a T-duality transformation along the compact direction.
The T-duals of the D3-branes are D2-branes stretched along ${\Bbb
R}^3$ and lying orthogonal
to the dual circle $\tilde{S}^1$ with radius $\tilde{R}=\alpha'/R$.
In the presence of the non-trivial Wilson line Eq.~\eqref{pptwo}, the D2-branes become
separated by a distance $2\pi \alpha'v$ along the direction of the dual circle \cite{Polch}.
Due to the periodicity around the circle,
we may restrict $0 \le 2\pi \alpha'v \le (2\pi)^2 \alpha' /\beta$,
which is equivalent to Eq.~\eqref{clms}.
In the T-dual picture, a
BPS monopole can be represented by
a D0-brane of length $L_{\scriptscriptstyle\rm BPS}=2\pi \alpha' v$
stretched between the two D2-branes. The orientation of the D0-brane
(whether the D0-brane is stretched between the first and second
D2-brane, or vice-versa) corresponds
to positive or negative magnetic charge, i.e.~the monopole or anti-monopole.
The monopole mass is the product of the D0-brane tension
$\tau_0=2/(\alpha' g^2)$ and the D0-brane length $L_{\scriptscriptstyle\rm BPS}$:
\begin{equation}M_{\scriptscriptstyle\rm BPS}\ = \ \tau_0 \ L_{\scriptscriptstyle\rm BPS} \ = \ {4 \pi v \over g^2}
\ ,\ \elabel{mbps}\end{equation}
in agreement with Eq.~\eqref{macn}.
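Explicitly (our check): multiplying the monopole mass by the length of the Euclidean time circle reproduces the field-theory action \eqref{macn},

```latex
\begin{equation}
S_{\scriptscriptstyle\rm BPS} \ = \ \beta\, M_{\scriptscriptstyle\rm BPS}
\ = \ \beta\,\tau_0\, L_{\scriptscriptstyle\rm BPS}
\ = \ \beta\,{2 \over \alpha' g^2}\; 2\pi\alpha' v
\ = \ {4\pi \over g^2}\,\beta v \ .
\end{equation}
```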
Actually, as one might have guessed, there is an infinite tower of
monopoles of the same magnetic
charge coming from the Kaluza-Klein tower over $\tilde{S}^1$ formed by
wrapping the D0-brane an arbitrary number of times around the
circle. Another way to view the same phenomenon is to consider the freedom
to add to a monopole a closed
D0-loop starting and ending on the same D2-brane and winding around
$\tilde{S}^1$; the length of the loop being
$L_{\rm loop}= (2\pi)^2 \alpha' /\beta $.
This D0-loop over the D2-brane can be identified with the instanton. Indeed,
after the T-duality transformation along the $\tilde{S}^1$ direction,
the D2-brane becomes the D3-brane and the D0-loop becomes a
D$(-1)$-brane, or D-instanton.
The D3/D$(-1)$ bound-state is then identified with an instanton having charge equal to
the winding number of the D0-loop over $\tilde{S}^1$, and vanishing
magnetic charge. In a similar fashion to Eq.~\eqref{mbps}, the instanton action is
\begin{equation}S_{\rm inst} \ = \ \beta M_{\rm inst}\ = \
\beta \ \tau_0 \ L_{\rm loop} \ = \ {8 \pi^2 \over g^2}
\ ,\elabel{sins}\end{equation}
in agreement with Eq.~\eqref{tcln}. In summary, the
standard BPS-monopole is the lowest-lying Kaluza-Klein state with
magnetic charge one: the D0-brane between the first and the second D2-brane.
The monopole of the second type---the KK-monopole---appears as the D0-brane between the second D2-brane
and the first one (hence the magnetic charge $q=-1$) completing the dual circle
$\tilde{S}^1$ (hence carrying instanton number $k=1$). Furthermore,
the bound-state of the standard and the KK-monopole is the D0-loop,
i.e.~the instanton. Notice that the
standard BPS monopole and the KK-monopole (together with their respective
anti-monopoles) are the elemental configurations, out of which the whole
set of semi-classical configurations with arbitrary $k$ and $q$ can be
built.
Although we have described this picture in terms of the ${\cal N}=4$ theory,
the whole analysis applies to the ${\cal N}=1$ case as well with certain
modifications.
The ${\cal N}=1$ four-dimensional Yang-Mills theory is obtained in a
D-braney fashion from a configuration
of two coincident D4-branes suspended between two NS5-branes, in a
manner described in
Refs.~\cite{EGK,Witqcd}. The world-volume of the D4-branes is infinite
in four directions ${\Bbb R}^4$ and is finite in the fifth direction
$\Delta_5$, which is
the separation between the NS5-branes along the D4-branes.
Following the same line of reasoning as in the ${\cal N}=4$ case,
the {\it infinite\/} part of the world-volume of the D4-branes is put on the
cylinder and T-dualized. The T-duals of the D4-branes on
$\Delta_5\times {\Bbb R}^3 \times S^1$
are the D3-branes stretched along $\Delta_5\times {\Bbb R}^3$ and orthogonal
to the dual circle $\tilde S^1$ with the dual radius $\tilde{R}=\alpha'/R$.
In the
presence of the non-trivial Wilson line Eq.~\eqref{pptwo}, the D3-branes become
separated \cite{Polch} by the distance $2\pi \alpha'v$ along the dual circle,
with the periodicity restriction equivalent to Eq.~\eqref{clms}.
In the ${\cal N}=4$ theory the D2-branes are BPS configurations and
consequently, when at rest, there is no interaction
between them. Thus, their separation along the dual circle is
arbitrary, i.e.~$v$ is an arbitrary modulus.
In the ${\cal N}=1$ theory, the D3-branes are {\it not\/} BPS configurations.
In the next Section, via an explicit calculation of a
superpotential, we will prove that they
actually repel each other. Geometrically this implies that the two D3-branes
stay at the opposite ends of the dual circle and consequently $v=\pi/\beta$.
Hence, the classically flat direction is lifted precisely in the
manner predicted by Eq.~\eqref{vvac}.
The previous set-up described in the context of $SU(2)$ can be immediately
generalized to $SU(N)$.
We now have $N$ D3-branes positioned along a circle and repelling
each other; hence one expects
\begin{equation} a_{j} - a_{j+1} \ = \ {2\pi \over iN\beta} \ {\rm
mod} \ {2\pi\over i\beta}\ ,
\qquad j=1,2,\ldots,N \ ,
\elabel{lenw}\end{equation}
and hence \eqref{lgen}. The $N$ types of monopole-like
configurations are composed of $N-1$ standard BPS monopoles represented
by the D1-branes of minimal lengths stretched between the adjacent
$u^{\rm th}$ and $(u+1)^{\rm th}$ D3-branes, $u=1,\ldots,N-1$. The
KK-monopole is the D1-brane stretched
between the $N^{\rm th}$ and $1^{\rm st}$ D3-branes. The instanton, as
before, is the closed D1-loop around the $\tilde{S}^1$ direction.
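Evaluating the constituent actions at the symmetric point \eqref{lenw} gives a hedged consistency check (writing $S_u$, our notation, for the action of the $u^{\rm th}$ constituent, with $u=N$ the KK-monopole): each monopole sees an effective VEV difference $2\pi/(N\beta)$, so

```latex
\begin{equation}
S_u \ = \ {4\pi \over g^2}\,\beta\,{2\pi \over N\beta}
\ = \ {8\pi^2 \over N g^2} \ ,
\qquad
\sum_{u=1}^{N} S_u \ = \ {8\pi^2 \over g^2} \ ,
\end{equation}
```

as appropriate for the closed D1-loop, i.e.~the single instanton.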
\setcounter{equation}{0}
\section{Evaluation of the Superpotential}
In this section, we will determine the superpotential of the ${\cal N}=1$
supersymmetric
$SU(2)$ Yang-Mills theory on ${\Bbb R}^3 \times S^1$. The superpotential
is trivial
in perturbation theory, but receives non-perturbative contributions
as described in Ref.~\cite{AHW}. Contributions arise from both types
of monopole: BPS and KK. As advertised earlier, the classical moduli
space \eqref{clms} will be lifted by
the superpotential in accordance with Eqs.~\eqref{lgen} and \eqref{vvac}.
In the presence of a non-vanishing VEV $v$, fields with isospin components not
aligned with the scalar VEV Eq.~\eqref{pptwo}, acquire masses $m \propto v$
via the Higgs
mechanism. The massless fields are consequently the $U(1)$ projections
$A_m^3$ and $\lambda^3$ of the non-abelian fields $A_m^a$ and
$\lambda^a$, $a=1,2,3$.
Moreover, since $x_4$ is periodic, each field can
be Fourier analyzed as an expansion over the discrete Matsubara frequencies
$\omega_n=2\pi n/\beta$ with all the $n \neq 0$ modes being massive
Kaluza-Klein modes. The $n=0$ modes correspond to the fields independent
of the $x_4$ coordinate. Thus, the classically massless degrees-of-freedom
are the $x_4$-independent $U(1)$ fields $A_\mu (x_\nu)$, $\phi(x_\nu)$,
$\chi(x_\nu)$ and $\tilde{\chi}(x_\nu)$ defined\footnote{Note that
the calculations in Appendix A of Ref.~\cite{DKMTV} and the details of the
compactification to 3D were given in Minkowski space with $x_3$ and not $x_4$
being the compactified direction. Here we analytically continue the results
of \cite{DKMTV} to Euclidean space.} as in Appendix A
of Ref.~\cite{DKMTV}:
\begin{subequations}\begin{align}
A_\mu \ &= \ A_\mu^3 \ : \quad {\rm for} \ \mu=1,2,3 \ , \elabel{adef}\\
\phi \ &= \ A_4^3 \ , \elabel{phid}\\
\chi_\alpha \ &= \ {1 \over \sqrt{2}}(\lambda^3_\alpha
+\bar\lambda^3_{{\dot\alpha}}) \ , \qquad
\tilde{\chi}_\alpha \ = \ -{i \over \sqrt{2}}(\lambda^3_\alpha
-\bar\lambda^3_{{\dot\alpha}}) \ .
\elabel{chde}\end{align}\end{subequations}
Here $\chi$ and $\tilde{\chi}$ are the Majorana two-spinors in three dimensions.
The classical action for the massless fields $S_{\rm cl}^{U(1)}$
can be read-off from the four-dimensional
action of ${\cal N}=1$ supersymmetric Yang-Mills (cf.~\cite{DKMTV}):
\begin{equation}S_{\rm cl}^{U(1)} \ = \ {\beta \over g^2} \int d^3 x
\big(\tfrac14 F_{\mu \nu} F^{\mu \nu} + \tfrac12 \partial_\mu \phi
\partial^\mu \phi - \tfrac12 \chi \hat{\partial} \chi
- \tfrac12 \tilde{\chi} \hat{\partial} \tilde{\chi} \big)
\ , \elabel{smls}\end{equation}
where $\hat{\partial}=\gamma^\mu\partial_\mu$, and $\gamma^\mu$ are
the three-dimensional gamma-matrices.
The presence of the monopoles in the microscopic theory means we must also
include a surface term $S_{\rm sf}$ in the action \eqref{smls}:
\begin{equation}S_{\rm sf} \ = \ -{i\sigma \beta \over 8 \pi} \int d^3x
\ \epsilon^{\mu\nu\rho}\partial_\mu F_{\nu \rho} \ .\elabel{ssur}\end{equation}
Due to Dirac quantization of magnetic charge:
\begin{equation}
q \ = \ {1 \over 8 \pi} \int d^3x
\ \epsilon^{\mu\nu\rho}\partial_\mu F_{\nu \rho} \ \in \ {\Bbb Z} \ ,
\elabel{qdef}\end{equation}
in \eqref{ssur} $\sigma$ is a periodic Lagrange multiplier variable with period
$2\pi/\beta$.
Following Polyakov \cite{Pol}, an equivalent dual description of the low-energy
theory \eqref{smls} and \eqref{ssur} can be obtained by promoting $\sigma$ to be a dynamical
field $\sigma(x)$. This field serves as the Lagrange multiplier for the
Bianchi identity constraint. The classical action for massless fields
then contains the terms
\begin{equation}\beta \int d^3 x
\big({1 \over 4g^2} F_{\mu \nu} F^{\mu \nu} -i{\sigma \over 8 \pi}
\epsilon^{\mu\nu\rho}\partial_\mu F_{\nu \rho} + \cdots \big) \ .\elabel{phts}\end{equation}
At this stage the photon field-strength $F_{\mu\nu}(x)$ can be integrated out,
and the resulting classical massless action reads
\begin{equation}S_{\rm cl} \ = \ {\beta \over g^2} \int d^3 x
\big(\tfrac12\partial_\mu \gamma \partial^\mu \gamma
+ \tfrac12 \partial_\mu \phi
\partial^\mu \phi - \tfrac12 \chi \hat{\partial} \chi
- \tfrac12 \tilde{\chi} \hat{\partial}\tilde{\chi} \big)
\ , \elabel{scmp}\end{equation}
where we have introduced the dual photon scalar field $\gamma(x)$:
\begin{equation}\gamma(x)\ = \ {g^2 \over 4\pi} \ \sigma(x) \ .\elabel{dphd}\end{equation}
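For completeness, a sketch of the duality transformation (our summary; the factor of $i$ in \eqref{phts} ensures the Gaussian integral over $F_{\mu\nu}$ yields a positive kinetic term): completing the square and integrating out the field strength converts

```latex
\begin{equation}
{\beta \over 4 g^2}\,F_{\mu\nu}F^{\mu\nu}
\ - \ {i\beta\sigma \over 8\pi}\,\epsilon^{\mu\nu\rho}\partial_\mu F_{\nu\rho}
\quad\longrightarrow\quad
{\beta g^2 \over 32\pi^2}\,\partial_\mu\sigma\,\partial^\mu\sigma
\ = \ {\beta \over 2 g^2}\,\partial_\mu\gamma\,\partial^\mu\gamma \ ,
\end{equation}
```

which is precisely the $\gamma$ kinetic term of \eqref{scmp}.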
This action is invariant under infinitesimal ${\cal N}=2$ supersymmetry transformations in three dimensions:
\begin{equation}\begin{split}\delta \phi \ &= \ \sqrt{2}\xi_1^\alpha \chi_\alpha
- \sqrt{2}\xi_2^\alpha \tilde{\chi}_\alpha \ , \\
\delta \gamma \ &= \ \sqrt{2}\xi_1^\alpha \tilde{\chi}_\alpha
+ \sqrt{2}\xi_2^\alpha \chi_\alpha \ , \\
\delta \chi^\alpha \ &= \ \sqrt{2}\xi_1^\beta \hat{\partial}_\beta^{\ \alpha}
\phi + \sqrt{2}\xi_2^\beta \hat{\partial}_\beta^{\ \alpha}\gamma \ , \\
\delta \tilde{\chi}^\alpha \ &= \ \sqrt{2}\xi_1^\beta
\hat{\partial}_\beta^{\ \alpha}\gamma
- \sqrt{2}\xi_2^\beta \hat{\partial}_\beta^{\ \alpha}\phi \ .
\elabel{sctr}\end{split}\end{equation}
It is convenient for our purposes to use a more compact form
of Eqs.~\eqref{scmp} and \eqref{sctr} involving the complex
scalar $Z$ and fermion $\Psi$:
\begin{equation}\begin{split}
Z \ = \ \phi + i \gamma \ , \qquad
&\bar{Z} \ = \ \phi - i \gamma \ , \\
\Psi_\alpha \ = \ \chi_\alpha + i \tilde{\chi}_\alpha \ , \qquad
&\bar{\Psi}_\alpha \ = \ \chi_\alpha - i \tilde{\chi}_\alpha \ ,
\elabel{zps}\end{split}\end{equation}
in terms of which the action is
\begin{equation}S_{\rm cl} \ = \ {\beta \over g^2} \int d^3 x
\big( \tfrac12\partial_\mu \bar{Z}
\partial^\mu Z - \tfrac12\bar{\Psi} \hat{\partial}\Psi \big) \ ,
\elabel{scmm}\end{equation}
and the supersymmetry transformations are
\begin{equation}\begin{split}
\delta Z \ = \ \sqrt{2}\theta^\alpha \Psi_\alpha
\ , \qquad
&\delta\bar{Z} \ = \ \sqrt{2}\bar{\theta}^\alpha \bar{\Psi}_\alpha
\ , \\
\delta\Psi^\alpha \ = \ \sqrt{2}(\bar{\theta}\hat{\partial})^\alpha Z
\ , \qquad
&\delta\bar{\Psi}^\alpha \ = \ \sqrt{2}(\theta \hat{\partial})^\alpha \bar{Z}
\ ,\elabel{stzp}\end{split}\end{equation}
where we have introduced the infinitesimal supersymmetry transformation parameter
$\theta^\alpha=\xi_1^\alpha +i\xi_2^\alpha$.
Non-perturbative quantum effects will modify the classical action
for massless fields, Eq.~\eqref{scmm}, by generating a superpotential
${\cal W}(\Phi)$ and ${\bar{\cal W}}(\bar{\Phi})$ written in terms of the
chiral and anti-chiral ${\cal N}=1$ superfields:
\begin{equation}\begin{split}\Phi \ &= \ Z+\sqrt{2}\theta^\alpha \Psi_\alpha
+ \theta^\alpha \theta_\alpha {\cal F} \ , \\
\bar{\Phi} \ &= \ \bar{Z}+\sqrt{2}\bar{\theta}^\alpha \bar{\Psi}_\alpha
+ \bar{\theta}^\alpha \bar{\theta}_\alpha \bar{\cal F} \ .
\elabel{sfde}\end{split}\end{equation}
With the addition of this superpotential, the
resulting quantum low-energy effective action reads
\begin{equation}S_{\rm eff} \ = \ S_{\rm cl} \ + \
{\beta \over g^2} \int d^3 x\, \Big(\int d^2 \theta\, {\cal W}(\Phi)
\ + \ \int d^2 \bar{\theta}\, {\bar{\cal W}}(\bar{\Phi}) \Big) \ . \elabel{sptg}\end{equation}
As usual, the scalar potential $V_{\rm eff}$ is determined by the
derivatives of the superpotential
with respect to the scalar fields
\begin{equation}V_{\rm eff} \ = \ {\cal F}\bar{\cal F} \ = \
{\partial {\cal W} \over \partial Z}
{\partial \bar{\cal W} \over \partial\bar{Z}} \ .\elabel{vefd}\end{equation}
The true vacuum corresponds to the minimum of $V_{\rm eff}$. In general
$V_{\rm eff}\ge 0$, and supersymmetry is unbroken only if the vacuum
solution has $V_{\rm eff}= 0$.
We are now ready to calculate the superpotential ${\cal W}(\Phi)$
and hence the true ground state of the theory in the semi-classical approximation.
Since the standard BPS monopole and the KK-monopole have two fermion
zero-modes apiece, Eqs.~\eqref{lss} and \eqref{lskk}, they both
generate mass terms
for classically massless fermions $\bar{\Psi}$, while the corresponding
anti-monopoles will generate a mass for $\Psi$:
\begin{equation}{\cal L}_{\rm mass} \ = \
{m_{\bar{\Psi}} \over 2} \bar{\Psi}\bar{\Psi} \ + \
{m_{\Psi} \over 2} \Psi\Psi \ = \
m_{\bar{\Psi}} \bar\lambda^3\bar\lambda^3 \ + \
m_{\Psi} \lambda^3\lambda^3 \ .\elabel{sems}\end{equation}
The supersymmetric completion
of \eqref{sems} in the low-energy effective action will give
the superpotential in question.
The masses \eqref{sems} are determined by examining the large distance behaviour
of the correlators
\begin{subequations}\begin{align}
G^{(2)}_{\alpha\beta}(x,y) \ &= \
\langle\lambda^3_\alpha(x)\lambda^3_\beta(y)\rangle \ ,\elabel{lcor}\\
\bar{G}^{(2)}_{\alpha\beta}(x,y) \ &= \
\langle\bar\lambda^3_\alpha(x)\bar\lambda^3_\beta(y)\rangle \
. \elabel{lbco}
\end{align}\end{subequations}
Using the LSZ reduction formulae, somewhat along the lines of
Ref.~\cite{AHW}, we find
\begin{subequations}\begin{align}
G^{(2)}_{\alpha\beta}(x,y) \ &\to \ 2m_{\bar{\Psi}} \
\beta\int d^3 X \, {\cal S}_{\rm F}(x-X)_{\alpha \rho}\epsilon^{\rho \delta}
{\cal S}_{\rm F}(y-X)_{\beta \delta} \ , \elabel{mbar}\\
\bar{G}^{(2)}_{\alpha\beta}(x,y) \ &\to \ 2m_{\Psi} \
\beta\int d^3 X \, {\cal S}_{\rm F}(x-X)_{\alpha \rho}\epsilon^{\rho \delta}
{\cal S}_{\rm F}(y-X)_{\beta \delta} \ . \elabel{mmm}
\end{align}\end{subequations}
Here ${\cal S}_{\rm F}(x)$ is the massless fermion propagator in 3D, or,
equivalently, the Weyl-fermion propagator on ${\Bbb R}^3\times S^1$ with
zero Matsubara frequency: ${\cal S}_{\rm F}(x)=\gamma^\mu x_\mu/(4\pi|x|)^2$.
We first consider the contribution of a single standard BPS-monopole,
\eqref{bpscn} and \eqref{lss}, to $m_{\bar{\Psi}}$:
\begin{equation}\langle\lambda^3_\alpha(x)\lambda^3_\beta(y)\rangle_{\scriptscriptstyle\rm BPS}
\ = \
\int d\mu^{\scriptscriptstyle\rm BPS} \lambda^{\scriptscriptstyle\rm LD}_\alpha(x)\lambda^{\scriptscriptstyle\rm LD}_\beta(y)
\ ,\elabel{lccr}\end{equation}
where $\lambda^{\scriptscriptstyle\rm LD}_\alpha(x)$ is the large distance (LD) limit
of the fermion zero modes \eqref{lss} as computed in Appendix C of Ref.~\cite{DKMTV}:
\begin{equation}\lambda^{\scriptscriptstyle\rm LD}_\alpha(x)\ = \
8\pi {\cal S}_{\rm F}(x-X)_{\alpha}^{\ \rho}\xi_\rho \ ,\elabel{fzld}\end{equation}
and $d\mu^{\scriptscriptstyle\rm BPS}$ is the semiclassical integration measure
of the standard single-monopole on ${\Bbb R}^3\times S^1$:
\begin{equation}
\int d\mu^{\scriptscriptstyle\rm BPS} \ = \ M_{\scriptscriptstyle\rm PV}^3 \ e^{-S_{\scriptscriptstyle\rm BPS}} \
\int {d^3 X \over (2\pi)^{3/2}}[g^2 S_{\scriptscriptstyle\rm BPS}]^{3/2} \
\int_0^{2\pi} {d \Omega \over \sqrt{2\pi}}[g^2 S_{\scriptscriptstyle\rm BPS}/v^2]^{1/2} \
\int d^2 \xi {1 \over 2g^2 S_{\scriptscriptstyle\rm BPS}} \ .\elabel{msst}\end{equation}
This measure is obtained in the standard way by changing variables
in the path integral from field fluctuations around the monopole
to the monopole's collective coordinates: $X_{\mu}$ (position),
$\Omega$ ($U(1)$-angle) and $\xi_\alpha$ (Grassmann collective coordinates).
The relevant Jacobian factors in \eqref{msst} are taken from Ref.~\cite{DKMTV}.
In contradistinction to the 3D calculation of \cite{DKMTV}, our
present calculation is locally four-dimensional, i.e.~in the path integral
we have integrated over the fluctuations around the monopole configuration
in ${\Bbb R}^3\times S^1$. Thus, the UV-regularized
determinants over non-zero eigenvalues of the
quadratic fluctuation operators cancel between fermions and bosons
due to supersymmetry as in Ref.~\cite{Adda}.\footnote{In order to invoke the
result of \cite{Adda} one needs the self-duality of the solution, a covariant background gauge,
four dimensions and supersymmetry.}
The ultra-violet divergences are regularized in the Pauli-Villars scheme,
which explains the appearance of the Pauli-Villars mass scale
$M_{\scriptscriptstyle\rm PV}$ to a power given by $n_{\rm b} -n_{\rm f}/2=3$, where
$n_{\rm b}=4$ and $n_{\rm f}=2$ are, respectively, the numbers of
bosonic and fermionic zero-modes of the monopole.
Collecting together the expressions in
Eqs.~\eqref{macn}, \eqref{mbar}, \eqref{lccr}, \eqref{fzld} and
\eqref{msst}, we find the single-monopole
contribution to $m_{\bar{\Psi}}$ is
\begin{equation}m_{\bar{\Psi}}^{\scriptscriptstyle\rm BPS} \ = \ 16 \pi^2 \beta^2 M_{\scriptscriptstyle\rm PV}^3
\exp\big[-{4\pi \over g^2} \beta v\big] \ .\elabel{ssmm}\end{equation}
This expression ignores the contributions of
monopole--anti-monopole pairs in the
background of the single monopole configuration and since
monopole--anti-monopole interactions are long-range (Coulombic) their
effects are considerable and must be taken into account. This is
precisely the famous Polyakov effect \cite{Pol} and fortunately
there is a very elegant way to incorporate it. The interactions of a single
monopole with the monopole--anti-monopole medium can be taken into
account simply by coupling the monopole to the magnetic photon
$\gamma(x)$ (or $\sigma(x)$) introduced earlier, in Eqs.~\eqref{ssur} and \eqref{dphd},
and at the same time promoting the VEV $v$ to a dynamical scalar field
$\phi(x)$. The coupling of the dual photon to the monopole of magnetic
charge $q$ is dictated by the surface term in Eq.~\eqref{ssur}. Naturally enough, one
is instructed \cite{AHW,Pol} to change
the action \eqref{tcln} of the original semi-classical configuration as follows:
\begin{equation}
S_{\rm cl} \ = \ {8\pi^2 \over g^2}
\ \big(k +q{\beta v \over 2 \pi} \big) \ \rightarrow \
{8\pi^2 \over g^2}
\ \big(k +q{\beta \phi(x) \over 2 \pi} \big) \ + \
i q\beta \sigma(x) \ . \elabel{nact}\end{equation}
This means that the mass becomes a local coupling:
\begin{equation}m_{\bar{\Psi}}^{\scriptscriptstyle\rm BPS}(x) \ = \ 16 \pi^2 \beta^2 M_{\scriptscriptstyle\rm PV}^3
\exp\big[-{4\pi\over g^2}\beta \phi(x) +
i{4\pi\over g^2}\beta \gamma(x)\big] \ .\elabel{locc}\end{equation}
It is straightforward to derive
the single KK-monopole contribution
to $m_{\bar{\Psi}}$. It is obtained in the same way as the expression
on the right hand side of \eqref{ssmm}, but instead of $S_{\scriptscriptstyle\rm BPS}$ in \eqref{msst}
one has to use $S_{\scriptscriptstyle\rm KK}$ of \eqref{makk}:\footnote{This is because the KK-monopole is gauge equivalent
to the standard monopole with a `wrong' VEV, as explained in Sec.~II.1.}
\begin{equation}
m_{\bar{\Psi}}^{\scriptscriptstyle\rm KK} \ = \ 16 \pi^2 \beta^2 M_{\scriptscriptstyle\rm PV}^3
\exp\big[-{4\pi \over g^2} (2\pi -\beta v)\big] \ .\elabel{sskk}\end{equation}
The total mass coupling $m_{\bar{\Psi}}(x)$ is then given by the sum of the
standard BPS- and the KK-monopole contributions, each embedded into
the dual magnetic field theory as per Eq.~\eqref{nact}:
\begin{equation}\begin{split}
m_{\bar{\Psi}}(x) \ = &\ 16 \pi^2 \beta^2 M_{\scriptscriptstyle\rm PV}^3 \\
&\times\Big(\exp\big[-{4\pi\over g^2}\beta \phi(x) +
i{4\pi\over g^2}\beta \gamma(x)\big] \ + \
\exp\big[-{8\pi^2 \over g^2}+{4\pi\over g^2}\beta \phi(x) -
i{4\pi\over g^2}\beta \gamma(x)\big]
\Big) \ .\elabel{mtot}\end{split}\end{equation}
In the second term above, we used the fact that the KK-monopole has
$q_{\scriptscriptstyle\rm KK}=-1$ and $k_{\scriptscriptstyle\rm KK}=1$.
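As a check on the bookkeeping (a worked step we add): the second exponent in \eqref{mtot} follows from the first, Eq.~\eqref{locc}, under the replacement that defines the KK-monopole construction of Sec.~II.1,

```latex
\begin{equation}
\phi(x) \to {2\pi \over \beta} - \phi(x) \ , \quad
\gamma(x) \to -\gamma(x) \ : \qquad
-{4\pi\beta \over g^2}\,\phi + i\,{4\pi\beta \over g^2}\,\gamma
\ \longrightarrow \
-{8\pi^2 \over g^2} + {4\pi\beta \over g^2}\,\phi
- i\,{4\pi\beta \over g^2}\,\gamma \ .
\end{equation}
```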
Denoting the overall coefficient in \eqref{mtot} as $M$:
\begin{equation}M \ \equiv \ 16 \pi^2 \beta^2 M_{\scriptscriptstyle\rm PV}^3\
,\elabel{mdfn}
\end{equation}
and making use of the complex scalar field and fermion of \eqref{zps} we finally
get the following expression for the mass term:
\begin{equation}{\cal L}_{\rm mass} \ = \
{M \over 2} \ \bar{\Psi}(x)\bar{\Psi}(x) \ \Big(
\exp\big[-{4\pi\beta\over g^2}\bar{Z}(x)\big] \ + \
\exp\big[-{8\pi^2\over g^2}+{4\pi\beta\over g^2}\bar{Z}(x)\big]
\Big) \ .\elabel{mtps}\end{equation}
This coupling corresponds to
a superpotential ${\bar{\cal W}}(\bar{\Phi})$ term in \eqref{sptg}
of the form:
\begin{equation}{\bar{\cal W}}(\bar{\Phi}) \ = \
\Big({g^2 \over 4\pi\beta} \Big)^2 \ M \ \Big(
\exp\big[-{4\pi\beta\over g^2}\bar{\Phi}\big] \ + \
\exp\big[-{8\pi^2\over g^2}+{4\pi\beta\over g^2}\bar{\Phi}\big]
\Big) \ .\elabel{spdf}\end{equation}
Equivalently, the anti-monopoles generate the hermitian conjugate:
\begin{equation}{\cal W}(\Phi) \ = \
\Big({g^2 \over 4\pi\beta} \Big)^2 \ M \ \Big(
\exp\big[-{4\pi\beta\over g^2}\Phi\big] \ + \
\exp\big[-{8\pi^2\over g^2}+{4\pi\beta\over g^2}\Phi\big]
\Big) \ .\elabel{spdh}\end{equation}
With the expression for the superpotential in hand,
we can now calculate the scalar potential \eqref{vefd} and determine
the true vacuum state of the theory. Consider
\begin{equation}{\cal F} \ = \ {\partial {\cal W} \over \partial Z}
\ = \ -{ M g^2 \over 4\pi\beta} \ \Big(
\exp\big[-{4\pi\beta\over g^2} Z\big] \ - \
\exp\big[-{8\pi^2\over g^2}+{4\pi\beta\over g^2} Z\big]
\Big) \ .\elabel{fded}\end{equation}
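As a quick consistency check (not part of the original derivation), the differentiation leading to \eqref{fded} can be verified symbolically; the sketch below uses Python's sympy, with $g$, $\beta$ and $M$ treated as arbitrary positive parameters.

```python
import sympy as sp

g, beta, M, Z = sp.symbols('g beta M Z', positive=True)
a = 4*sp.pi*beta/g**2          # exponent coefficient 4*pi*beta/g^2
c = 8*sp.pi**2/g**2            # KK suppression 8*pi^2/g^2

# superpotential (spdh) and the claimed F-term (fded)
W = (g**2/(4*sp.pi*beta))**2 * M * (sp.exp(-a*Z) + sp.exp(-c + a*Z))
F_claimed = -(M*g**2/(4*sp.pi*beta)) * (sp.exp(-a*Z) - sp.exp(-c + a*Z))

# dW/dZ reproduces (fded) identically
assert sp.simplify(sp.diff(W, Z) - F_claimed) == 0
```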
The supersymmetry preserving vacuum
$\langle Z \rangle=\langle \phi \rangle+i\langle \gamma \rangle$
corresponds to
\begin{equation}
{\cal F}(\langle Z \rangle) \ = \ 0 \ \quad \implies\quad
\langle Z \rangle \ = \ {\pi \over \beta} \ , \elabel{spvc}\end{equation}
which corresponds to the scalar VEV
$\langle \phi \rangle\equiv v=\pi/\beta$ as predicted in \eqref{vvac}.
Note that since $\langle \gamma \rangle=0$ the dual photon
does not condense, as expected.
What is much more interesting is that the dual photon becomes massive,
\begin{equation}V_{\rm eff} (\phi={\pi \over \beta}, \gamma(x)) \ = \
2 \ \Big({M g^2 \over 4\pi \beta }\Big)^2
\exp\big[-{8\pi^2 \over g^2}\big] \
\big(1 - \cos {8\pi\beta \over g^2}\gamma(x) \big) \ , \elabel{vefg}\end{equation}
which implies confinement of the original electric photon
and the corresponding disappearance of all the massless modes.
\setcounter{equation}{0}
\section{Gluino Condensate from Monopoles}
In this section, we use our description of the quantum vacuum state of
the theory to evaluate the monopole contribution to the gluino condensate.
\subsection{Gauge group $SU(2)$}
We are now in a position to compute the gluino condensate directly in
the $SU(2)$ theory.
Firstly, we evaluate the standard BPS monopole contribution
to $\Vev{\mathop{\rm tr}\nolimits \lambda^2}$:
\begin{equation}\begin{split}
\VEV{\mathop{\rm tr}\nolimits \lambda^2}_{\scriptscriptstyle\rm BPS}
=\ M_{\scriptscriptstyle\rm PV}^3 \ e^{-S_{\scriptscriptstyle\rm BPS}} \
&\int {d^3 X \over (2\pi)^{3/2}}[g^2S_{\scriptscriptstyle\rm BPS}]^{3/2} \
\int_0^{2\pi} {d \Omega \over \sqrt{2\pi}}[g^2S_{\scriptscriptstyle\rm BPS}/v^2]^{1/2} \\
&\times\int d^2 \xi {1 \over 2g^2S_{\scriptscriptstyle\rm BPS}} \
{\rm tr}( \lambda^{{\scriptscriptstyle\rm BPS} \ \alpha}(x)
\lambda^{\scriptscriptstyle\rm BPS}_\alpha(x) ) \ ,
\elabel{glbp}\end{split}\end{equation}
where we have used the expression \eqref{msst} for the monopole measure.
To evaluate \eqref{glbp}, we use the normalization of fermion zero modes from
Ref.~\cite{DKMTV}:
\begin{equation}\int d^3 X \int d^2 \xi \
{\rm tr}\left( \lambda^{{\scriptscriptstyle\rm BPS} \ \alpha}(x)
\lambda^{\scriptscriptstyle\rm BPS}_\alpha(x) \right) \ = \ 2S_{\scriptscriptstyle\rm BPS} \ {g^2 \over \beta} \ .
\elabel{nfzm}\end{equation}
A straightforward calculation gives
\begin{equation}
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm BPS} \ = \ {1 \over 2} {\beta v \over \pi}
M_{\scriptscriptstyle\rm PV}^3 \exp\big[-{8\pi^2 \over g^2}{\beta v \over 2\pi}\big] \ . \elabel{lcst}\end{equation}
The KK-monopole contribution is obtained by changing
$S_{\scriptscriptstyle\rm BPS} \rightarrow S_{\scriptscriptstyle\rm KK}$ in the BPS-expressions
above to give
\begin{equation}
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm KK} \ = \ {1 \over 2} \ \big(2- {\beta v \over \pi}\big)
\ M_{\scriptscriptstyle\rm PV}^3 \exp\big[-{8\pi^2 \over g^2}
+{8\pi^2 \over g^2}{\beta v \over 2\pi}\big] \ . \elabel{llkk}\end{equation}
The expressions \eqref{lcst} and \eqref{llkk} explicitly depend on the UV-cutoff
$M_{\scriptscriptstyle\rm PV}$ and do not appear to be renormalization group invariant.
However, it is pleasing that in the true ground-state established in
the last section, this worrisome dependence disappears. At
$v=\pi/\beta$, we get
\begin{equation}
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm BPS} \ =\
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm KK}\ =\
{\textstyle{1\over2}} M_{\scriptscriptstyle\rm PV}^3 \exp\big[-{4\pi^2 \over g^2}\big] \ .\elabel{lctw}\end{equation}
Finally, introducing the renormalization group invariant scale
$\Lambda$ of the theory via
\begin{equation}
M_{\scriptscriptstyle\rm PV}^3 \exp\big[-{4\pi^2 \over g^2}\big]
\ = \ \Lambda^3 \ , \elabel{lamd}\end{equation}
and adding together both monopole contributions we obtain a
value for the gluino condensate:
\begin{equation}\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}\ =\ \Lambda^3 \ .
\elabel{restw}\end{equation}
This is precisely the value obtained in the WCI approach \eqref{stwkb}.
\subsection{Generalization to $SU(N)$}
The calculation of the superpotential and the gluino condensate can be
straightforwardly generalized to the
case of $SU(N)$ gauge group. The quantum vacuum has
\begin{equation} a_{j} - a_{j+1} \ = \ {2\pi \over iN\beta} \ {\rm
mod} \ {2\pi\over i\beta}\ ,
\qquad j=1,2,\ldots,N \ ,
\elabel{lnnn}\end{equation}
and so each of the $N$ types of monopoles ($N-1$ standard BPS and one
KK) have equal actions and equal topological charges:
\begin{equation}
S_{\rm mono} \ = \ {8 \pi^2 \over N g^2} \ , \qquad
Q_{\rm mono} \ = \ {1\over N }
\ .\elabel{smon}\end{equation}
The contribution of a single monopole to the gluino condensate will be,
in analogy with Eq.~\eqref{lctw},
\begin{equation}
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm BPS} \ = \
\VEV{\mathop{\rm tr}\nolimits \lambda^2\over16\pi^2}_{\scriptscriptstyle\rm KK}\ =
\ {1 \over N}
M_{\scriptscriptstyle\rm PV}^3 \exp\big[-{8\pi^2 \over N g^2}\big] \ . \elabel{lcth}\end{equation}
The first coefficient of the $\beta$-function is now
$b_0=3N$ and the analog of \eqref{lamd} reads
\begin{equation}
M_{\scriptscriptstyle\rm PV}^{3N} \exp\big[-{8\pi^2 \over g^2}\big]
\ = \ \Lambda^{3N} \ . \elabel{latwo}\end{equation}
Finally, the total contribution of the $N$ monopoles to the gluino
condensate, as in $SU(2)$, reproduces the WCI value \eqref{stwkb}.
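As a numerical sanity check (with arbitrary test values for $g$ and $M_{\scriptscriptstyle\rm PV}$), the sum of the $N$ equal monopole contributions \eqref{lcth} indeed reproduces $\Lambda^3$ as defined through \eqref{latwo}:

```python
import math

g, M_PV = 2.0, 1.5            # arbitrary positive test values
for N in (2, 3, 4, 5):
    per_monopole = M_PV**3 * math.exp(-8*math.pi**2/(N*g**2)) / N   # eq. (lcth)
    total = N * per_monopole                                        # N-1 BPS + 1 KK
    Lambda3 = (M_PV**(3*N) * math.exp(-8*math.pi**2/g**2))**(1.0/N) # from (latwo)
    assert abs(total - Lambda3) <= 1e-12 * Lambda3
```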
\setcounter{equation}{0}
\section{Discussion}
More than twenty years ago Polyakov \cite{Pol} famously observed that in
three-dimensional gauge-Higgs theory without fermions
the magnetic
photon $\gamma(x)$ gets a non-zero mass due to
Debye screening in the monopole--anti-monopole plasma. The mass term
for the dual photon then implies confinement of the original electric
photon.
A na\"\i ve attempt to generalize this mechanism to four
dimensions by simply substituting the three-dimensional instantons
(i.e.~monopoles) with four-dimensional instantons fails,
since in four dimensions instantons and anti-instantons
have a dipole--dipole interaction which is short-ranged
and hence the instanton--anti-instanton medium cannot
form a Coulomb plasma essential for Polyakov's Debye mechanism.
However, it has been suspected for a long time that instantons
and anti-instantons can be thought of as composite states of more basic
configurations---instanton partons---which would
have long-range interactions and lead to a Coulomb plasma and to the Debye screening.
In this paper, following earlier ideas of \cite{LY,KL,LL,KvBzer,KvBone,KvBtwo},
we have identified the instanton partons with monopoles in the
four-dimensional gauge theory compactified on ${\Bbb R}^3 \times S^1$.
The Debye screening in the monopole plasma induces a non-zero
mass for the dual photon.
Hence, we have successfully generalized Polyakov's mechanism of confinement
to the four-dimensional supersymmetric gauge theory compactified
on ${\Bbb R}^3 \times S^1$.
As the VEVs in Eq.~\eqref{lgen}
are inversely proportional to the radius $\beta$,
the theory becomes weakly coupled at small $\beta$ and can be analysed
semi-classically. To return to the strongly coupled theory in
Minkowski space, we need to consider the opposite limit of large $\beta$.
Since all the F-terms are holomorphic functions of the fields and since
the VEVs of the fields \eqref{lgen} are holomorphic functions of $\beta$,
the power of holomorphy \cite{Seiberg93} allows one to analytically continue
the semi-classical values of the F-terms to the strong-coupling regime.
As a useful practical application and a test of monopole physics in
${\Bbb R}^3 \times S^1$, we have calculated the value of gluino condensate
and taken the decompactification limit to reproduce the WCI result.
\medskip
\centerline{$\scriptscriptstyle**********************$}
\medskip
We thank Diego Bellisai, Pierre van Baal and Misha Shifman
for valuable discussions.
VVK and MPM acknowledge a NATO Collaborative Research Grant,
TJH and VVK acknowledge the TMR network grant FMRX-CT96-0012
and NMD acknowledges a PPARC studentship.
\section{Introduction}
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest optical telescope in China \citep{2012RAA....12.1197C}. Its main mirror is 6.67 meters in size, with an effective aperture of 4 meters, so LAMOST combines a large field of view with a large aperture.
After first light in 2008 and the pilot sky survey from 2010 to 2012, the regular sky survey has been in progress from 2012 to 2017. Over the last four years, the LAMOST telescope has operated for about 900 nights. The raw data volume is about 18 TB, and the product data comprise about 4 million spectra in roughly 5 TB of FITS files.
\section{Architecture}
The architecture of the data cycle management system is shown in Figure \ref{fig1}.
\articlefigure[width=.7\textwidth]{P051_f1.eps}{fig1}{The data cycle management system architecture}
\section{Application}
\subsection{Data Flow}
The data flow is shown in Figure \ref{fig1}:
\begin{itemize}
\checklistitemize
\item The site transfers raw data to the China-VO Data Center. \citep{adass2014_hebl}
\item The Data Center pushes raw data to the Pipeline Server.
\item The pipeline generates product data (catalog and spectra FITS files) and returns it to the Data Center.
\item Raw data and product data are backed up to Backup Storage (at a third site).
\item Product data are pushed to the Data Release Server. \citep{adass2014_fdw}
\end{itemize}
\subsection{Data Statistic}
In the last few years, the telescope has operated about 250 days per year. To date, LAMOST has accumulated about 900 observation nights. As shown in Figure \ref{fig2}, over these 900 nights the average raw data volume is about 20 GB per night.
\articlefigure[width=.7\textwidth]{P051_f2.eps}{fig2}{Raw data statistic.}
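The quoted figures are mutually consistent: about 900 nights at roughly 20 GB per night gives the $\sim$18 TB raw-data volume mentioned in the Introduction. A trivial arithmetic check:

```python
# Back-of-the-envelope consistency check of the quoted data volumes
nights = 900                # observation nights to date (approximate)
gb_per_night = 20           # average raw data per night, in GB (approximate)
total_tb = nights * gb_per_night / 1000.0   # using 1 TB = 1000 GB
assert total_tb == 18.0     # matches the ~18 TB quoted in the Introduction
```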
\subsection{Data Release}
After the LAMOST pipeline runs, the product data (catalog and spectra) are released to users:
\begin{itemize}
\checklistitemize
\item In 2012, before the IAU General Assembly in Beijing, LAMOST published the Pilot Data Release (PDR) \citep{2012RAA....12.1243L}.
\item In 2013, LAMOST published the first Data Release (DR1)\footnote{LAMOST DR1: \url{http://dr1.lamost.org}.}, and in spring 2015 all DR1 data were released to the public \citep{2015RAA....15.1095L}, as shown in Figure \ref{fig3}.
\item In 2014, LAMOST published the second Data Release (DR2)\footnote{LAMOST DR2: \url{http://dr2.lamost.org}.}, as shown in the left part of Figure \ref{fig4}.
\item In 2015, LAMOST published the third Data Release (DR3)\footnote{LAMOST DR3: \url{http://dr3.lamost.org}.}, as shown in the right part of Figure \ref{fig4}.
\end{itemize}
The data release software depends on:
\begin{itemize}
\item Java Web Framework: Spring Framework\footnote{Spring Framework: \url{http://spring.io}}.
\item Database: PostgreSQL\footnote{PostgreSQL: \url{http://www.postgresql.org}}, pgSphere \footnote{pgSphere: \url{http://pgsphere.projects.pgfoundry.org/}, China-VO branch: \url{https://github.com/china-vo/pgSphere}}.
\item Web Server: Nginx\footnote{Nginx: \url{https://www.nginx.org/}}.
\item User Management: CSTNET Passport\footnote{CSTNET Passport: \url{https://passport.escience.cn/}}.
\item Code Management: China-VO Code Repository Management\footnote{China-VO Code Repository Management: \url{http://code.china-vo.org}}, based on GitLab\footnote{GitLab: \url{https://www.gitlab.org/}}.
\end{itemize}
\section{Conclusion}
LAMOST is a highly productive spectroscopic telescope, so its data management is very challenging. Based on the practical experience of the past few years, we have set up a series of software tools, and we will continue to upgrade them in the future.
\acknowledgements The Guo Shou Jing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. Data resources are supported by the Chinese Astronomical Data Center (CAsDC, http://casdc.china-vo.org).
\section{Introduction}
Classical orthogonal polynomials $p_n(x)$ and their generalizations in
the Askey and $q$-Askey scheme have the property that they are eigenfunctions
of some second order operator $L$ with eigenvalues depending on $n$, which
therefore may be called the spectral variable. Moreover, being orthogonal
polynomials, the $p_n(x)$ satisfy a three-term recurrence relation and are
therefore, as functions of $n$, eigenfunctions of a so-called Jacobi operator
with eigenvalues $x$. This duality phenomenon also served as a guide for the
author in the companion paper \cite{33}, where he derived the dual
addition formula for continuous $q$-ultraspherical polynomials.
This paper gives a brief tutorial-type survey of duality, mainly for orthogonal
polynomials, but also a little bit for transcendental special functions.
This paper is based on the first part of the
\emph{R.~P. Agarwal Memorial Lecture}, which
the author delivered on November 2, 2017 during the conference
ICSFA-2017 held in Bikaner, Rajasthan, India. See \cite{33} for the paper
based on the second part.
With pleasure I
remember meeting Prof.\ Agarwal during
the workshop on Special Functions and Differential Equations
held at the Institute of Mathematical Sciences in Chennai,
January 1997, where he delivered the opening address \cite{14}.
I cannot resist quoting from it the following wise words, close
to the end of the article:
\mLP
``{\sl
I think that I have taken enough time and I close my discourse- with a word of caution and advice to the research workers in the area of special functions and also those who use them in physical problems. The corner stones of classical analysis are
`elegance, simplicity, beauty and perfection.' Let them not be lost in your work. Any analytical generalization of a special function, only for the sake of a generalization by adding a few terms or parameters here and there, leads us nowhere. All research work should be meaningful and aim at developing a quality technique or have a bearing in some allied discipline.}''
\paragraph{Note}
For definition and notation of ($q$-)shifted factorials and
($q$-)hypergeometric series see \cite[\S1.2]{3}.
In the $q=1$ case we will mostly meet terminating hypergeometric series
\begin{equation}
\hyp rs{-n,a_2,\ldots,a_r}{b_1,\ldots,b_s}z:=
\sum_{k=0}^n \frac{(-n)_k}{k!}\,\frac{(a_2,\ldots,a_r)_k}
{(b_1,\ldots,b_s)_k}\,z^k.
\label{76}
\end{equation}
Here $(b_1,\ldots,b_s)_k:=(b_1)_k\ldots(b_s)_k$ and
$(b)_k:=b(b+1)\ldots(b+k-1)$ is the \emph{Pochhammer symbol} or
\emph{shifted factorial}. In \eqref{76} we even allow that
$b_i=-N$ for some $i$ with $N$ integer $\ge n$. There is no problem
because the sum on the right terminates at $k=n\le N$.
In the $q$-case we will always assume that $0<q<1$.
We will only meet terminating $q$-hypergeometric series of the form
\begin{equation}
\qhyp{s+1}s{q^{-n},a_2,\ldots,a_{s+1}}{b_1,\ldots,b_s}{q,z}:=
\sum_{k=0}^n \frac{(q^{-n};q)_k}{(q;q)_k}\,
\frac{(a_2,\ldots,a_{s+1};q)_k}{(b_1,\ldots,b_s;q)_k}\,z^k.
\label{77}
\end{equation}
Here $(b_1,\ldots,b_s;q)_k:=(b_1;q)_k\ldots(b_s;q)_k$ and
$(b;q)_k:=(1-b)(1-qb)\ldots(1-q^{k-1}b)$ is the
\emph{$q$-Pochhammer symbol} or \emph{$q$-shifted factorial}.
In \eqref{77} we even allow that $b_i=q^{-N}$ for some $i$ with
$N$ integer $\ge n$.
For formulas on orthogonal polynomials in the ($q$-)Askey scheme we
will often refer to Chapters 9 and 14 in \cite{2}. Almost all
of these formulas, with different numbering, are available in open
access on \url{http://aw.twi.tudelft.nl/~koekoek/askey/} .
\section{The notion of duality in special functions}
\label{72}
With respect to a (positive) measure $\mu$ on $\RR$ with support
containing infinitely many
points we can define \emph{orthogonal polynomials} (OPs)
$p_n$ ($n=0,1,2,\ldots$), unique up to nonzero real constant factors,
as (real-valued) polynomials $p_n$ of degree $n$ such that
\begin{equation*}
\int_\RR p_m(x)\,p_n(x)\,d\mu(x)=0\qquad(m\ne n).
\end{equation*}
Then the polynomials $p_n$ satisfy a \emph{three-term recurrence relation}
\begin{equation}
x\,p_n(x)=A_n\,p_{n+1}(x)+B_n\,p_n(x)+C_n\,p_{n-1}(x)
\qquad(n=0,1,2,\ldots),
\label{52}
\end{equation}
where the term $C_n\,p_{n-1}(x)$ is omitted if $n=0$, and where
$A_n,B_n,C_n$ are real and
\begin{equation}
A_{n-1}C_n>0\qquad(n=1,2,\ldots).
\label{53}
\end{equation}
By \emph{Favard's theorem} \cite{15}
we can conversely say that if $p_0(x)$ is
a nonzero real constant, and the
$p_n(x)$ ($n=0,1,2,\ldots$) are generated
by \eqref{52}
for certain real $A_n,B_n,C_n$ which satisfy \eqref{53}, then the $p_n$
are OPs with respect to a certain measure $\mu$ on
$\RR$.
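As a concrete illustration of Favard's theorem (not taken from the text; the coefficients below are those of the Legendre case, with $B_n=0$), one can generate the $p_n$ directly from \eqref{52} and verify \eqref{53} together with orthogonality on $(-1,1)$:

```python
import sympy as sp

x = sp.Symbol('x')
A = lambda n: sp.Rational(n + 1, 2*n + 1)   # Legendre: A_n = (n+1)/(2n+1)
C = lambda n: sp.Rational(n, 2*n + 1)       # C_n = n/(2n+1), B_n = 0

p = [sp.Integer(1), x]                       # p_0 = 1; the n = 0 step of (52) gives p_1 = x
for n in range(1, 5):
    # p_{n+1} = (x p_n - C_n p_{n-1}) / A_n, the recurrence (52) solved for p_{n+1}
    p.append(sp.expand((x*p[n] - C(n)*p[n-1]) / A(n)))

assert all(A(n - 1)*C(n) > 0 for n in range(1, 5))        # positivity condition (53)
for m in range(6):
    for n in range(m + 1, 6):
        assert sp.integrate(p[m]*p[n], (x, -1, 1)) == 0   # orthogonality on (-1, 1)
```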
With $A_n,B_n,C_n$ as in \eqref{52} define a \emph{Jacobi operator} $M$,
acting on infinite sequences $\{g(n)\}_{n=0}^\iy$, by
\begin{equation*}
(Mg)(n)=M_n\big(g(n)\big):=A_n\,g(n+1)+B_n\,g(n)+C_n\,g(n-1)
\qquad(n=0,1,2,\ldots),
\end{equation*}
where the term $C_n\,g(n-1)$ is omitted if $n=0$. Then \eqref{52}
can be rewritten as the eigenvalue equation
\begin{equation}
M_n\big(p_n(x)\big)=x\,p_n(x)\qquad(n=0,1,2,\ldots).
\label{54}
\end{equation}
One might say that the study of a system of OPs $p_n$
comes down to the spectral theory and harmonic analysis associated
with the operator $M$. From this perspective one can wonder if
the polynomials $p_n$ satisfy some \emph{dual} eigenvalue equation
\begin{equation}
(Lp_n)(x)=\la_n\,p_n(x)
\label{55}
\end{equation}
for $n=0,1,2,\ldots$,
where $L$ is some linear operator acting on the space of polynomials.
We will consider various types of operators $L$ together with the
corresponding OPs, first in the Askey scheme and next in the
$q$-Askey scheme.
\subsection{The Askey scheme}
\label{71}
\paragraph{Classical OPs}
\emph{Bochner's theorem} \cite{16}
classifies the second order differential operators
$L$ together with the OPs $p_n$ such that
\eqref{55} holds for certain eigenvalues $\la_n$.
The resulting \emph{classical orthogonal polynomials} are essentially
the polynomials listed in the table below. Here $d\mu(x)=w(x)\,dx$
on $(a,b)$ and the closure of that interval is the support of $\mu$.
Furthermore, $w_1(x)$ occurs in the formula for $L$ to be given after
the table.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
name&$p_n(x)$&$w(x)$&$\frac{w_1(x)}{w(x)}$&$(a,b)$&constraints&$\la_n$\\
\hline
Jacobi&$P_n^{(\al,\be)}(x)$&{\small$(1-x)^\al(1+x)^\be$}&{\small$1-x^2$}&$(-1,1)$&$\al,\be>-1$&{\small$-n(n+\al+\be+1)$}\\
Laguerre&$L_n^{(\al)}(x)$&$x^\al e^{-x}$&$x$&$(0,\iy)$&$\al>-1$&$-n$\\
Hermite&$H_n(x)$&$e^{-x^2}$&$1$&$(-\iy,\iy)$&&$-2n$\\
\hline
\end{tabular}
\end{center}
Then
\begin{equation*}
(Lf)(x)=w(x)^{-1} \frac d{dx}\big( w_1(x)\,f'(x)\big).
\end{equation*}
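For a concrete check of the eigenvalue equation \eqref{55} in the Jacobi row of the table (sample parameter values below; not part of the text), one can expand $w^{-1}\big(w_1 f'\big)'$ into the familiar second-order form and test it on the first few Jacobi polynomials:

```python
import sympy as sp

x = sp.Symbol('x')
al, be = sp.Rational(1, 2), sp.Rational(3, 2)   # sample Jacobi parameters

# expanding w^{-1} (w1 f')' with w1/w = 1 - x^2 gives the classical Jacobi operator
L = lambda f: (1 - x**2)*sp.diff(f, x, 2) + (be - al - (al + be + 2)*x)*sp.diff(f, x)

for n in range(5):
    f = sp.jacobi(n, al, be, x)
    # eigenvalue lambda_n = -n(n + al + be + 1) from the table
    assert sp.expand(L(f) + n*(n + al + be + 1)*f) == 0
```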
For these classical OPs the duality goes much further than the two
dual eigenvalue equations \eqref{54} and \eqref{55}.
In particular for Jacobi polynomials it is true to a large extent
that every formula or property involving $n$ and $x$ has a dual
formula or property where the roles of $n$ and $x$ are interchanged.
We call this the \emph{duality principle}.
If the partner formula or property is not yet known then it is usually
a good open problem to find it (but one should be warned that there
are examples where the duality fails).
The Jacobi, Laguerre and Hermite families
are connected by limit transitions, as is already suggested by limit
transitions for their (rescaled) weight functions:
\begin{itemize}
\item
Jacobi $\to$ Laguerre:\quad $x^\al(1-\be^{-1}x)^\be\to x^\al e^{-x}$\quad
as\quad$\be\to\iy$;
\item
Jacobi $\to$ Hermite:\quad $\big(1-\al^{-1}x^2\big)^\al\to e^{-x^2}$\quad
as\quad$\al\to\iy$;
\item
Laguerre $\to$ Hermite:\quad
$e^{\al(1-\log\al)}\big((2\al)^\half x+\al\big)^\al
e^{-(2\al)^\half x-\al}\to e^{-x^2}$\quad as\quad$\al\to\iy$.
\end{itemize}
Formulas and properties of the three families can be expected to be
connected under these limits. Although this is not always the case,
this \emph{limit principle} is again a good source of open problems.
\paragraph{Discrete analogues of classical OPs}
Let $L$ be a second order difference operator:
\begin{equation}
(Lf)(x):=a(x)\,f(x+1)+b(x)\,f(x)+c(x)\,f(x-1).
\label{56}
\end{equation}
Here as solutions of \eqref{55} we will also allow OPs
$\{p_n\}_{n=0}^N$ for some \emph{finite} $N\ge0$, which will be orthogonal
with respect to positive weights $w_k$ ($k=0,1,\ldots,N$) on a finite
set of points $x_k$ ($k=0,1,\ldots,N$):
\begin{equation*}
\sum_{k=0}^N p_m(x_k)\,p_n(x_k)\,w_k=0\qquad(m,n=0,1,\ldots,N;\;m\ne n).
\end{equation*}
If such a finite system of OPs satisfies \eqref{55} for $n=0,1,\ldots,N$
with $L$ of the form \eqref{56} then the highest $n$ for which
the recurrence relation \eqref{52}
holds is $n=N$, where the zeros of $p_{N+1}$ are precisely the
points $x_0,x_1,\ldots,x_N$.
The classification of OPs satisfying \eqref{55}
with $L$ of the form \eqref{56} (first done by O. Lancaster, 1941,
see \cite{17})
yields the four families of
Hahn, Krawtchouk, Meixner and Charlier polynomials, of which
Hahn and Krawtchouk are finite systems, and Meixner and Charlier
infinite systems with respect to weights on countably infinite sets.
\emph{Krawtchouk polynomials} \cite[(9.11.1)]{2} are given by
\begin{equation}
K_n(x;p,N):=
\hyp21{-n,-x}{-N}{p^{-1}}\quad(n=0,1,2,\ldots,N).
\label{57}
\end{equation}
They satisfy the orthogonality relation
\begin{equation*}
\sum_{x=0}^N (K_m\,K_n\,w)(x;p,N)=\frac{(1-p)^N}{w(n;p,N)}\,\de_{m,n}
\end{equation*}
with weights
\begin{equation*}
w(x;p,N):=\binom Nx p^x(1-p)^{N-x}\quad(0<p<1).
\end{equation*}
By \eqref{57} they are \emph{self-dual}\,:
\begin{equation*}
K_n(x;p,N)=K_x(n;p,N)\qquad(n,x=0,1,\ldots,N).
\end{equation*}
The three-term recurrence relation \eqref{54} immediately
implies a dual equation \eqref{55} for such OPs.
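Both the self-duality and the orthogonality relation above are finite sums, so they can be verified exactly for a small case (the values of $N$ and $p$ below are arbitrary):

```python
import sympy as sp

def K(n, x, p, N):
    # terminating 2F1 in (57): sum_k (-n)_k (-x)_k / ((-N)_k k!) p^{-k}
    return sum(sp.rf(-n, k)*sp.rf(-x, k)
               / (sp.rf(-N, k)*sp.factorial(k)) * p**(-k)
               for k in range(n + 1))

N, p = 4, sp.Rational(1, 3)
w = lambda x: sp.binomial(N, x) * p**x * (1 - p)**(N - x)

for n in range(N + 1):
    for x in range(N + 1):
        assert K(n, x, p, N) == K(x, n, p, N)             # self-duality
for m in range(N + 1):
    for n in range(N + 1):
        s = sum(K(m, x, p, N)*K(n, x, p, N)*w(x) for x in range(N + 1))
        rhs = (1 - p)**N / w(n) if m == n else 0
        assert s == rhs                                   # orthogonality relation
```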
The four just mentioned families of discrete OPs
are also connected by limit relations. Moreover, the classical OPs
can be obtained as limit cases of them, but not conversely.
For instance, \emph{Hahn polynomials} \cite[(9.5.1)]{2}
are given by
\begin{equation}
Q_n(x;\al,\be,N):=\hyp32{-n,n+\al+\be+1,-x}{\al+1,-N}1\qquad
(n=0,1,\ldots,N)
\label{58}
\end{equation}
and they satisfy the orthogonality relation
\begin{equation*}
\sum_{x=0}^N (Q_mQ_n w)(x;\al,\be,N)=0\qquad(m,n=0,1,\ldots,N;\;m\ne n;
\;\al,\be>-1)
\end{equation*}
with weights
\begin{equation*}
w(x;\al,\be,N):=\frac{(\al+1)_x\,(\be+1)_{N-x}}{x!\,(N-x)!}\,.
\end{equation*}
Then by \eqref{58} (rescaled) Hahn polynomials tend to (shifted) Jacobi
polynomials:
\begin{equation}
\lim_{N\to\iy}Q_n(Nx;\al,\be,N)=
\hyp21{-n,n+\al+\be+1}{\al+1}x=
\frac{P_n^{(\al,\be)}(1-2x)}{P_n^{(\al,\be)}(1)}\,.
\label{66}
\end{equation}
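The limit \eqref{66} can also be observed numerically; in the sketch below (with sample values of $n$, $\al$, $\be$, $x$) the error decays like $O(1/N)$:

```python
import sympy as sp

def Q(n, x, a, b, N):
    # terminating 3F2 in (58)
    return sum(sp.rf(-n, k)*sp.rf(n + a + b + 1, k)*sp.rf(-x, k)
               / (sp.rf(a + 1, k)*sp.rf(-N, k)*sp.factorial(k))
               for k in range(n + 1))

n, a, b = 2, sp.Rational(1, 2), sp.Rational(3, 2)
xv = sp.Rational(3, 10)
# the terminating 2F1 on the right-hand side of (66)
limit = sum(sp.rf(-n, k)*sp.rf(n + a + b + 1, k)
            / (sp.rf(a + 1, k)*sp.factorial(k)) * xv**k
            for k in range(n + 1))

for N in (10**3, 10**6):
    err = abs(float(Q(n, N*xv, a, b, N) - limit))
    assert err < 10.0 / N          # O(1/N) convergence
```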
\paragraph{Continuous versions of Hahn and Meixner polynomials}\quad\\
A variant of the difference operator \eqref{56} is the operator
\begin{equation}
(Lf)(x):=A(x)\,f(x+i)+B(x)\,f(x)+\overline{A(x)}\,f(x-i)\qquad(x\in\RR),
\label{59}
\end{equation}
where $B(x)$ is real-valued.
Then further OPs satisfying \eqref{55} are
the continuous Hahn polynomials and the Meixner-Pollaczek polynomials
\cite[Ch.~9]{2}.
\paragraph{Insertion of a quadratic argument}\quad\\
For an operator $\wt L$ and some polynomial $\si$
of degree 2 we can define an operator $L$ by
\begin{equation}
(Lf)\big(\si(x)\big):=\wt{L}_x\Big(f\big(\si(x)\big)\Big),
\label{61}
\end{equation}
Now we look for OPs satisfying \eqref{55} where $\wt L$ is
of type \eqref{56} or \eqref{59}. So
\begin{equation}
\wt L_x\Big(p_n\big(\si(x)\big)\Big)=\la_n\,p_n\big(\si(x)\big).
\label{60}
\end{equation}
The resulting OPs are the
Racah polynomials and dual Hahn polynomials for \eqref{60} with
$\wt L$ of type \eqref{56}, and Wilson polynomials and continuous
dual Hahn polynomials for \eqref{60} with
$\wt L$ of type~\eqref{59}, see again \cite[Ch.~9]{2}.
\mPP
The OPs satisfying \eqref{55} in the cases discussed until now
form together the \emph{Askey scheme}, see Figure \ref{fig:1}.
The arrows denote limit transitions.
\begin{figure}[t]
\setlength{\unitlength}{3mm}
\begin{picture}(30,36)
\put(8.5,34.5) {\framebox(5,3) {Wilson}}
\put(9.5,34.5) {\vector(-1,-2){2.2}}
\put(12.5,34.5) {\vector(0,-1){4.5}}
\put(18.5,34.5) {\dashbox(5,3) {Racah}}
\put(19.5,34.5) {\vector(0,-1){5}}
\put(22.5,34.5) {\vector(1,-2){2.5}}
\put(3.4,26.5) {\framebox(7,3.5) {\shortstack {Cont.\\[1mm]dual Hahn}}}
\put(6.5,26.5) {\vector(0,-1){4.5}}
\put(10.5,26.5) {\framebox(5,3.5) {\shortstack {Cont.\\[1mm]Hahn}}}
\put(12.5,26.5) {\vector(-1,-1){4.5}}
\put(13.5,26.5) {\vector(0,-1){5}}
\put(16.5,26.5) {\dashbox(5,3) {Hahn}}
\put(17.5,26.5) {\vector(-1,-2){2.7}}
\put(18.5,26.5) {\vector(0,-1){5}}
\put(19.5,26.5) {\vector(1,-1){5}}
\put(21.5,26.5) {\dashbox(8.5,3) {Dual Hahn}}
\put(24.5,26.5) {\vector(-1,-1){5}}
\put(26.5,26.5) {\vector(0,-1){5}}
\put(3,18.5) {\framebox(6.5,3.5)
{\shortstack {Meixner-\\[1mm]Pollaczek}}}
\put(7,18.5) {\vector(1,-2){2.5}}
\put(12.9,19.9) {\oval(5,3)}
\put(10.5,18.5) {\makebox(5,3) {Jacobi}}
\put(12.0,18.5) {\vector(0,-1){5}}
\put(14.5,18.5) {\vector(0,-1){13}}
\put(16.5,18.5) {\dashbox(5.5,3) {Meixner}}
\put(17.5,18.5) {\vector(-1,-1){5}}
\put(19.5,18.5) {\vector(0,-1){5}}
\put(23,18.5) {\dashbox(8,3) {Krawtchouk}}
\put(25,18.5) {\vector(-1,-2){2.5}}
\put(10.8,11.9) {\oval(6,3)}
\put(8,10.5) {\makebox(6,3) {Laguerre}}
\put(10.5,10.5) {\vector(1,-2){2.5}}
\put(18.5,10.5) {\dashbox(5.5,3) {Charlier}}
\put(21.5,10.5) {\vector(-1,-1){5.2}}
\put(14.5,3.9) {\oval(5.5,3)}
\put(12.0,2.5) {\makebox(5.5,3) {Hermite}}
\put(32,14.5) {\framebox(10.5,3) {self-dual family}}
\put(32,11.5) {\framebox(3,2) {}}
\put(35,11.5) {\framebox(3,2) {}}
\put(39,11.5) {\makebox(8,2) {dual families}}
\put(32,7.5) {\dashbox(10,3) {discrete OPs}}
\put(36.9,4.9) {\oval(10,3)}
\put(32,3.5) {\makebox(10,3) {classical OPs}}
\end{picture}
\caption{The Askey scheme}
\label{fig:1}
\end{figure}
In the Askey scheme we emphasize the self-dual families:
Racah, Meixner, Krawtchouk and Charlier for the OPs with discrete
orthogonality measure,
and Wilson and Meixner-Pollaczek for the OPs with non-discrete
orthogonality measure. We already met perfect self-duality for
the Krawtchouk polynomials, which is also the case for Meixner and
Charlier polynomials. For the Racah polynomials the dual OPs are
still Racah polynomials, but with different values of the parameters:
\begin{multline*}
R_n\big(x(x+\de-N);\al,\be,-N-1,\de\big)
:=
\hyp43{-n,n+\al+\be+1,-x,x+\de-N}{\al+1,\be+\de+1,-N}1\\
=R_x(n(n+\al+\be+1);-N-1,\de,\al,\be)\qquad
(n,x=0,1,\ldots,N).
\end{multline*}
The orthogonality relation for these Racah polynomials involves a weighted
sum of terms $(R_mR_n)\big(x(x+\de-N);\al,\be,-N-1,\de\big)$ over
$x=0,1,\ldots,N$.
For Wilson polynomials we have also self-duality with a change of
parameters but the self-duality is not perfect, i.e., not related to the
orthogonality relation:
\begin{multline}
\const {W_n(x^2;a,b,c,d)}:=
\hyp43{-n,n+a+b+c+d-1,a+ix,a-ix}{a+b,a+c,a+d}1\\
=\const W_{-ix-a}\Big(\big(i(n+a')\big)^2;a',b',c',d'\Big),
\label{62}
\end{multline}
where $a'=\thalf(a+b+c+d-1)$, $a'+b'=a+b$, $a'+c'=a+c$, $a'+d'=a+d$.
The duality \eqref{62} holds for $-ix-a=0,1,2,\ldots$, while the orthogonality
relation for the Wilson polynomials involves a weighted integral
of $(W_mW_n)(x^2;a,b,c,d)$ over $x\in[0,\iy)$.
As indicated in Figure \ref{fig:1}, the dual Hahn polynomials
\begin{equation*}
R_n\big(x(x+\al+\be+1);\al,\be,N\big):=
\hyp32{-n,-x,x+\al+\be+1}{\al+1,-N}1\qquad(n=0,1,\ldots,N)
\end{equation*}
are dual to the Hahn polynomials \eqref{58}:
\begin{equation*}
Q_n(x;\al,\be,N)=R_x\big(n(n+\al+\be+1);\al,\be,N\big)\qquad
(n,x=0,1,\ldots,N).
\end{equation*}
The duality is perfect: the dual orthogonality relation for the Hahn
polynomials is the orthogonality relation for the dual Hahn polynomials,
and conversely.
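Since both sides of the Hahn/dual Hahn duality are terminating ${}_3F_2$ sums with the same numerator parameters, the identity can be checked exactly for a small case (sample parameter values below):

```python
import operator
from functools import reduce
import sympy as sp

def F32(num, den, kmax):
    # terminating 3F2(num; den; 1), cf. (58)
    rfp = lambda ps, k: reduce(operator.mul, [sp.rf(u, k) for u in ps], sp.Integer(1))
    return sum(rfp(num, k) / (rfp(den, k) * sp.factorial(k))
               for k in range(kmax + 1))

a, b, N = sp.Rational(1, 2), sp.Rational(3, 2), 5
for n in range(N + 1):
    for x in range(N + 1):
        Qn = F32((-n, n + a + b + 1, -x), (a + 1, -N), n)   # Hahn Q_n(x), (58)
        Rx = F32((-x, -n, n + a + b + 1), (a + 1, -N), x)   # dual Hahn at lambda(n)
        assert Qn == Rx
```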
There is a similar, but non-perfect duality between continuous Hahn
and continuous dual Hahn.
The classical OPs are in two senses exceptional within the
Askey scheme. First, they are the only families which are not
self-dual or dual to another family of OPs.
Second, they are the only
continuous families which are not related by analytic continuation
to a discrete family.
With the arrows in the Askey scheme given it can be taken as
a leading principle to link also the formulas and properties of
the various families in the Askey scheme by these arrows.
In particular, if one has some formula or property for a family lower
in the Askey scheme, say for Jacobi, then one may look for the
corresponding formula or property higher up, and try to find it if it
is not yet known. In particular, if one could find the result on the highest
Racah or Wilson level, which is self-dual, then, going down along the
arrows, one might also obtain two mutually dual results in the Jacobi case.
\subsection{The $q$-Askey scheme}
The families of OPs in the $q$-Askey scheme\footnote{See
\url{http://homepage.tudelft.nl/11r49/pictures/large/q-AskeyScheme.jpg}}
\cite[Ch.~14]{2}
result from the classification \cite{21}, \cite{18}, \cite{19},
\cite{20}
of OPs satisfying \eqref{55}, where $L$ is defined in terms of
the operator $\wt L$ and the function $\si$ by \eqref{61}, where
$\wt L$ is of type \eqref{56} or \eqref{59}, and where
$\si(x)=q^x$ or equal to a quadratic polynomial in $q^x$. This choice
of $\si(x)$ is the new feature deviating from what we discussed about the
Askey scheme. And here $q$ enters, with $0<q<1$ always assumed.
The $q$-Askey scheme is considerably larger than the Askey scheme,
but many features of the Askey scheme return here, in particular it has
arrows denoting limit relations. Moreover, the $q$-Askey scheme is
quite parallel to the Askey scheme in the sense that
OPs in the $q$-Askey scheme,
after suitable rescaling, tend to OPs in the Askey scheme as
$q\uparrow1$. Parallel to Wilson and Racah polynomials at the top
of the Askey scheme there are Askey--Wilson polynomials \cite{9}
and $q$-Racah polynomials at the top of the $q$-Askey scheme.
These are again self-dual families, with the self-duality for
$q$-Racah being perfect.
The guiding principles discussed before about formulas or properties
related by duality or limit transitions now extend to the $q$-Askey scheme:
both within the $q$-Askey scheme and in relation to the Askey scheme
by letting $q\uparrow1$. For instance, one can hope to find as many
dual pairs of significant formulas and properties of Askey--Wilson
polynomials as
possible which have mutually dual limit cases for Jacobi
polynomials. In fact, we realize this in \cite{33}
with the addition and dual addition formula by taking limits
from the continuous $q$-ultraspherical polynomials (a self-dual
one-parameter subclass of the four-parameter class of
Askey--Wilson polynomials) to the ultraspherical polynomials
(a one-parameter subclass of the two-parameter class of Jacobi
polynomials).
One remarkable aspect of duality in the two schemes concerns
the discrete OPs living there.
Leonard (1982)
classified all systems of OPs $p_n(x)$ with respect to weights
on a countable set $\{x(m)\}$
for which there is a system
of OPs $q_m(y)$ on a countable set $\{y(n)\}$
such that
\[
p_n\big(x(m)\big)=q_m\big(y(n)\big).
\]
His classification yields the OPs in the $q$-Askey scheme
which are orthogonal with respect to weights on a countable set
together with their limit cases for $q\uparrow1$ and $q\downarrow-1$
(where we allow $-1<q<1$ in the $q$-Askey scheme).
The $q\downarrow -1$ limit case yields the
Bannai--Ito polynomials~\cite{22}.
\subsection{Duality for non-polynomial special functions}
For Bessel functions $J_\al$ see \cite[Ch.~10]{11}
and references given there.
It is convenient to use a different standardization and notation:
\begin{equation*}
\FSJ_\al(x):=\Ga(\al+1)\,(2/x)^\al\,J_\al(x).
\end{equation*}
Then (see \cite[(10.16.9)]{11})
\[
\FSJ_\al(x)=
\sum_{k=0}^\iy\frac{(-\tfrac14 x^2)^k}{(\al+1)_k\,k!}
=\hyp01-{\al+1}{-\tfrac14x^2}\qquad(\al>-1).
\]
$\FSJ_\al$ is an even entire analytic function. Some special cases are
\begin{equation}
\FSJ_{-1/2}(x)=\cos x,\quad
\FSJ_{1/2}(x)=\frac{\sin x}x\,.
\label{63}
\end{equation}
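These special cases can be verified numerically; the following short check (our own sketch, using SciPy's \texttt{jv} and \texttt{gamma}; the helper name \texttt{curly\_J} is ad hoc) confirms them to machine precision:

```python
import numpy as np
from scipy.special import jv, gamma

def curly_J(al, x):
    # normalized Bessel function Gamma(al+1) * (2/x)^al * J_al(x)
    return gamma(al + 1.0) * (2.0 / x) ** al * jv(al, x)

x = np.linspace(0.1, 10.0, 50)
assert np.allclose(curly_J(-0.5, x), np.cos(x), atol=1e-10)
assert np.allclose(curly_J(0.5, x), np.sin(x) / x, atol=1e-10)
```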
The \emph{Hankel transform} pair \cite[\S10.22(v)]{11}, for $f$ in a suitable
function class, is given by
\begin{equation*}
\begin{cases}
&\dstyle\wh f(\la)=\int_0^\iy f(x)\FSJ_\al(\la x) x^{2\al+1}\,dx,\mLP
&\dstyle
f(x)=\frac1{2^{2\al+1}\Ga(\al+1)^2}\int_0^\iy \wh f(\la)
\FSJ_\al(\la x) \la^{2\al+1}\,d\la.
\end{cases}
\end{equation*}
In view of \eqref{63} the Hankel transform contains the
Fourier-cosine and Fourier-sine transform as special cases for
$\al=\pm\half$.
The functions $x\mapsto\FSJ_\al(\la x)$ satisfy the eigenvalue equation
\cite[(10.13.5)]{11}
\begin{equation}
\left(\frac{\pa^2}{\pa x^2}+\frac{2\al+1}x\,\frac \pa{\pa x}\right)
\FSJ_\al(\la x)=-\la^2\,\FSJ_\al(\la x).
\label{64}
\end{equation}
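This eigenvalue equation can likewise be spot-checked by central finite differences; the order $\al=1.3$, the value $\la=0.8$, and the step size in this sketch are arbitrary choices:

```python
import numpy as np
from scipy.special import jv, gamma

al, lam, h = 1.3, 0.8, 1e-4           # arbitrary order, spectral value, FD step

def g(t):
    # t -> FSJ_al(lam * t) in the normalization used above
    s = lam * t
    return gamma(al + 1.0) * (2.0 / s) ** al * jv(al, s)

x = np.linspace(1.0, 5.0, 9)
d1 = (g(x + h) - g(x - h)) / (2.0 * h)
d2 = (g(x + h) - 2.0 * g(x) + g(x - h)) / h**2
lhs = d2 + (2.0 * al + 1.0) / x * d1
assert np.allclose(lhs, -lam**2 * g(x), atol=1e-5)
```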
Obviously, then also
\begin{equation}
\left(\frac{\pa^2}{\pa \la^2}+\frac{2\al+1}\la\,\frac \pa{\pa\la}\right)
\FSJ_\al(\la x)=-x^2\,\FSJ_\al(\la x).
\label{65}
\end{equation}
The differential operator in \eqref{65} involves the
spectral variable $\la$
of \eqref{64}, while the eigenvalue in \eqref{65} involves the
$x$-variable in the differential operator in \eqref{64}.
The Bessel functions and the Hankel transform are
closely related to the Jacobi polynomials \eqref{66}
and their orthogonality relation. Indeed, we have the limit formulas
\begin{equation*}
\lim_{n\to\iy}\frac{P_n^{(\al,\be)}\big(\cos(n^{-1}x)\big)}
{P_n^{(\al,\be)}(1)}=\FSJ_\al(x),\qquad
\lim_{\bisub{\vphantom{|}\nu\to\iy}{\vphantom{|}\nu\la=1,2,\ldots}}
\frac{P_{\nu\la}^{(\al,\be)}\big(\cos(\nu^{-1}x)\big)}
{P_{\nu\la}^{(\al,\be)}(1)}=\FSJ_\al(\la x).
\end{equation*}
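These limits (of Mehler--Heine type) can be illustrated numerically; the degree $n=500$, the parameter values, and the tolerance below are arbitrary choices in this sketch, and the accuracy is only $O(1/n)$:

```python
import numpy as np
from scipy.special import eval_jacobi, jv, gamma

al, be, x, n = 0.5, 1.5, 2.0, 500     # arbitrary parameters, fixed argument, large degree

def curly_J(a, t):
    return gamma(a + 1.0) * (2.0 / t) ** a * jv(a, t)

ratio = eval_jacobi(n, al, be, np.cos(x / n)) / eval_jacobi(n, al, be, 1.0)
assert abs(ratio - curly_J(al, x)) < 5e-2   # O(1/n) accuracy
```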
There are many other examples of non-polynomial special functions
being limit cases of OPs in the ($q$-)Askey scheme, see for instance
\cite{23}, \cite{24}.
In 1986 Duistermaat \& Gr\"unbaum \cite{12} posed the question
whether the pair of eigenvalue equations \eqref{64}, \eqref{65}
could be generalized to a pair
\begin{equation}
\begin{split}
L_x\big(\phi_\la(x)\big)&=-\la^2\,\phi_\la(x),\\
M_\la\big(\phi_\la(x)\big)&=\tau(x)\,\phi_\la(x)
\end{split}
\label{67}
\end{equation}
for suitable differential operators $L_x$ in $x$ and $M_\la$ in $\la$
and suitable functions $\phi_\la(x)$ solving the two equations.
Here the functions $\phi_\la(x)$ occur as eigenfunctions in two ways:
for the operator $L_x$ with eigenvalue depending on $\la$ and for
the operator $M_\la$ with eigenvalue depending on $x$.
Since the occurring eigenvalues of an operator form its spectrum,
a phenomenon as in \eqref{67} is called \emph{bispectrality}.
For the case of a second order differential operator $L_x$
written in potential form $L_x=d^2/dx^2-V(x)$ they classified
all possibilities for \eqref{67}. Besides the mentioned Bessel cases
and a case with Airy functions (closely related to Bessel functions)
they obtained two other families where $M_\la$ is a higher than
second order differential operator. These could be obtained by
successive \emph{Darboux transformations} applied to $L_x$
in potential form.
A Darboux transformation produces
a new potential from a given potential $V(x)$
by a formula which involves an eigenfunction of $L_x$ with eigenvalue 0.
Their two new families get a start by the application of
a Darboux transformation
to the Bessel differential equation \eqref{64}, rewritten in
potential form
\begin{equation*}
\phi_\la''(x)-(\al^2-\tfrac14)x^{-2}\phi_\la(x)=-\la^2\phi_\la(x),\qquad
\phi_\la(x)=(\la x)^{\al+\half} \FSJ_\al(\la x).
\end{equation*}
Here $\al$ should be in $\ZZ+\thalf$ for a start of the first new family
or in $\ZZ$ for a start of the second new family. For other values
of $\al$ one would not obtain a dual eigenvalue equation with
$M_\la$ a finite order differential operator.
Just as higher order differential operators $M_\la$ occur in \eqref{67},
there has been a lot of work on studying OPs satisfying
\eqref{55} with $L$ a higher order differential operator.
See a classification in \cite{25}, \cite{26}. All occurring
OPs, the so-called \emph{Jacobi type} and
\emph{Laguerre type polynomials},
have a Jacobi or Laguerre orthogonality measure with integer
values of the parameters, supplemented by mass points at one or
both endpoints of the orthogonality interval. Some of the
Bessel type functions in the second new class in \cite{12} were
obtained in \cite{27} as limit cases of Laguerre type polynomials.
\subsection{Some further cases of duality}
The self-duality property of the family of Askey--Wilson polynomials
is reflected in Zhedanov's \emph{Askey--Wilson algebra} \cite{28}.
A larger algebraic structure is the \emph{double affine Hecke
algebra} (DAHA), introduced by Cherednik and extended by Sahi.
The related special functions are so-called
\emph{non-symmetric} special functions. They are functions in several
variables and associated with root systems. Again there is a duality,
both in the DAHA and for the related special functions.
For the (one-variable) case
of the non-symmetric Askey--Wilson polynomials this is treated in
\cite{29}. In \cite{30} limit cases in the $q$-Askey scheme are also
considered.
Finally we should mention the manuscript \cite{31}.
Here the author extended the duality \cite[(4.2)]{33} for
continuous $q$-ultraspherical polynomials to Macdonald polynomials
and thus obtained the so-called Pieri formula \cite[\S VI.6]{32}
for these polynomials.
\paragraph{Acknowledgement}
I thank Prof.\ M.~A. Pathan and Prof.\ S.~A. Ali for the invitation
to deliver the 2017 R.~P. Agarwal Memorial Lecture and for their
cordiality during my trip to India on this occasion.
\section{Introduction}
\label{sec:intro}
We have been developing a general algorithm framework for
one-step spectral CT image reconstruction (OSSCIR) that
we have applied to experimental data acquired employing a spectral CT system
with photon-counting detectors \cite{Schmidt2017}.
The OSSCIR algorithm framework involves direct one-step image reconstruction of basis
material maps from energy-windowed X-ray transmission data. The one-step approach
contrasts with standard two-step processing where the photon transmission data is converted
to material sinograms followed by image reconstruction to material maps \cite{schlomka2008experimental}.
The one-step
approach enables unconventional scan configurations where the transmission rays need not
be co-registered for all energy-windows \cite{chen2017image}, and the image reconstruction process
can be regularized by applying constraints directly to the material maps.
Implementing OSSCIR consists of: (1) specifying the material maps with an optimization problem
that includes a nonconvex data discrepancy term with convex constraints, and (2) solution of the
nonconvex
optimization problem by the \underline{m}irr\underline{o}red \underline{c}onvex/\underline{c}onc\underline{a}ve
(MOCCA) algorithm \cite{Barber2016a,Barber2016b}.
MOCCA is the heart of the OSSCIR framework.
It is an extension of the
Chambolle-Pock primal-dual (CPPD) algorithm for large-scale convex optimization \cite{chambolle2011first,sidky:CP:2012}.
The MOCCA extension applies to certain forms of large-scale nonconvex optimization composed
of a smooth nonconvex objective function and convex nonsmooth functions, such as convex constraints.
The design of MOCCA is based on the idea that for some classes of nonconvex smooth objective
functions
the difficulty for algorithm design results from local saddle points and not local minima.
Local saddle points have directions of negative curvature that can result in spurious
update steps. Accordingly, a MOCCA iteration consists of constructing a local
convex quadratic approximation to the objective function, removing
directions of negative curvature, and performing a CPPD step on this approximation.
An important aspect of MOCCA is the diagonal step-preconditioner (SPC) for CPPD proposed by
Pock and Chambolle \cite{pock2011diagonal}. Because the convex approximation to the objective
function is changing at every iteration, the CPPD step length parameters need to
be recomputed at every iteration. The
step lengths of diagonal-SPC CPPD from Ref. \cite{pock2011diagonal} can be computed at the cost of
two additional matrix-vector product operations, which is equivalent to an additional
forward- and back-projection per iteration for CT IIR.
In this contribution, we extend diagonal SPC to block-diagonal SPC
that effectively counteracts slow convergence due to the near linear dependence
from the basis material attenuation curves. In our original work on spectral CT IIR, we
had already encountered slow convergence rates with a two-material expansion of the attenuation
map, and in that work we
proposed $\mu$-preconditioning ($\mu$-PC), where the materials expansion set is transformed to
an orthogonal set of functions in X-ray energy. The $\mu$-PC transformation
was effective at improving convergence rates.
In attacking three-material expansion sets,
$\mu$-PC also improves convergence, but in this case the convergence
issue is more acute than in the two-material case. In our original application
of MOCCA to spectral CT in Ref. \cite{Barber2016b}, we successfully demonstrated
one-step reconstruction for three materials, but the simulation modeled five ideal
photon-counting spectral response windows with sharp boundaries and no window
overlap. The three-material simulation we consider here involves only four windows
with realistic spectral responses that have significant overlap with each other.
Accordingly, the worse conditioning of the realistic setup can impact convergence.
We propose
a block-diagonal SPC that has slightly more computational overhead
per iteration but dramatically improves convergence of MOCCA in the spectral CT
setting with three basis materials and realistic spectral responses.
We briefly summarize OSSCIR and MOCCA with $\mu$-preconditioning,
and introduce the new block-diagonal preconditioner in Sec. \ref{sec:methods}.
The improvement in convergence gained by the new preconditioner is demonstrated
in Sec. \ref{sec:results} on a challenging, idealized spectral CT simulation.
\section{Methods}
\label{sec:methods}
As in Ref. \cite{Barber2016b}, the spectral CT data model is written
\begin{equation}
\label{model}
I_{w,\ell} = \int S_{w,\ell}(E) \exp \left[
- \int_{\ell} \mu(E,\vec{r}(t)) dt \right] dE,
\end{equation}
where $I_{w,\ell}$ is the transmitted X-ray photon fluence along ray $\ell$ in energy window $w$;
$t$ is a parameter indicating location along $\ell$;
$S_{w,\ell}(E)$ is the spectral response; and
$\mu(E,\vec{r}(t))$ is the energy and spatially dependent linear X-ray attenuation coefficient.
We employ a standard material-expansion decomposition to model the attenuation map
\begin{equation}
\label{matdecomp}
\mu(E,\vec{r}(t)) = \sum_m \left(\frac{\mu_m(E)}{\rho_m} \right) \rho_m f_m(\vec{r}(t)),
\end{equation}
where $\rho_m$ is the density of material $m$; $\mu_m(E)/\rho_m$ is the mass attenuation
coefficient of material $m$; and $f_m(\vec{r})$ is the spatial map
for material $m$.
To obtain the final discrete data model, we combine
Eq. (\ref{model}) with Eq. (\ref{matdecomp}); normalize the spectral response;
and discretize all integrations.
The standard detected counts model becomes
\begin{multline}
\label{fullModel}
\hat{c}^{\text{(standard)}}_{w,\ell}(f) = \\
N_{w,\ell} \sum_i s_{w,\ell,i} \exp \left( - \sum_{m,k} \mu_{m,i} X_{\ell,k} f_{k,m} \right),
\end{multline}
where $N_{w,\ell}$ is the total number of incident photons along ray $\ell$ in energy window
$w$; $s_{w,\ell,i}$ is the normalized spectral response, i.e.
$ \sum_i s_{w,\ell,i}=1$;
$i$ indexes the energy $E_i$; $X_{\ell,k}$ represents X-ray projection
along the ray $\ell$; and $f_{k,m}$ is the pixelized material map with $k$ and
$m$ indexing pixel and expansion-material, respectively.
The spectral responses are assumed known, and the goal is to reconstruct the material
maps $f$ from measured counts data $c$.
The model in Eq. (\ref{fullModel}) can cause numerical problems for IIR, because at
early iterations it is possible for the sum, $\sum_{m,k} \mu_{m,i} X_{\ell,k}f_{k,m}$,
to take on large negative values which can lead to large positive arguments for the
exponential function. This issue can be remedied by imposing constraints on $f$, but
the approach we take here is to replace the exponential function for positive arguments
with a function that has slower growth; i.e. replace $\exp(\cdot)$ with $\text{softexp}(\cdot)$
where
\begin{equation*}
\text{softexp}(x) = \begin{cases}
\exp(x) & x \le 0 \\
x+1 & x>0
\end{cases}
\end{equation*}
replaces the exponential function for $x>0$ with a linear function that matches the value
and derivative at $x=0$. Other cut-off points besides $x=0$ and other
extrapolations of $\exp(x)$ are possible,
but this is the form that we employ for the presented results.
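A minimal NumPy sketch of this piecewise function (our own helper names; the \texttt{np.minimum} guard protects the unused exponential branch from overflow, since \texttt{np.where} evaluates both branches) also records the properties relied on below:

```python
import numpy as np

def softexp(x):
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, np.exp(np.minimum(x, 0)), x + 1.0)

def softexp_prime(x):
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, np.exp(np.minimum(x, 0)), 1.0)

# value and first derivative match at the cut-off x = 0 ...
assert abs(softexp(0.0) - 1.0) < 1e-12 and abs(softexp_prime(0.0) - 1.0) < 1e-12
# ... and softexp grows only linearly for positive arguments
assert softexp(50.0) == 51.0
# second derivative never exceeds the first (exp(x) for x < 0, zero for x > 0)
x = np.linspace(-5.0, 5.0, 101)
spp = np.where(x <= 0, np.exp(np.minimum(x, 0)), 0.0)
assert np.all(spp <= softexp_prime(x) + 1e-12)
```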
The rationale for use of $\text{softexp}(\cdot)$ is that
positive arguments of $\exp(\cdot)$ correspond to
the unphysical situation that the beam intensity increases through the object; thus
replacing $\exp(\cdot)$ with $\text{softexp}(\cdot)$ does not introduce further approximation.
At the same time we avoid the need to impose constraints on $f$.
Accordingly, the counts data model used here is
\begin{multline}
\label{finalModel}
\hat{c}_{w,\ell}(f) = \\
N_{w,\ell} \sum_i s_{w,\ell,i} \, \text{softexp} \left( - \sum_{m,k} \mu_{m,i} X_{\ell,k} f_{k,m} \right).
\end{multline}
This modification causes a small change in the MOCCA derivation and implementation for spectral
CT that was presented in Ref. \cite{Barber2016b}.
\noindent
\paragraph*{Transmission Poisson likelihood maximization}
Maximizing the transmission Poisson likelihood is equivalent to
minimizing the Kullback-Leibler distance between the counts data, $c$,
and counts model, $\hat{c}(f)$,
\begin{multline}
\label{DTPL}
D_\text{TPL}(c,\hat{c}(f)) = \\
\sum_{w,\ell}
\left[ \hat{c}_{w,\ell}(f)-c_{w,\ell} -c_{w,\ell}
\log \frac{\hat{c}_{w,\ell}(f)}{c_{w,\ell}} \right],
\end{multline}
where $c_{w,\ell}$ are the measured counts in energy window $w$ along ray
$\ell$. This objective function is nonconvex as can be verified by
computing the Hessian (the multivariable second derivative)
of $D_\text{TPL}(c,\hat{c}(f))$ with respect to $f$. The non-linearity of $\hat{c}(f)$ as
a function of $f$ gives rise to directions of negative curvature
in $D_\text{TPL}(c,\hat{c}(f))$.
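A small NumPy check (with toy random arrays standing in for real counts data) confirms that this distance vanishes exactly at a perfect fit and is positive otherwise:

```python
import numpy as np

def D_TPL(c, c_hat):
    # Kullback-Leibler distance between measured and modeled counts
    return np.sum(c_hat - c + c * np.log(c / c_hat))

rng = np.random.default_rng(0)
c = rng.uniform(10.0, 100.0, size=(4, 16))       # counts in 4 windows x 16 rays
c_hat = c * rng.uniform(0.5, 2.0, size=c.shape)  # a mismatched model
assert abs(D_TPL(c, c)) < 1e-12                  # zero at a perfect fit
assert D_TPL(c, c_hat) > 0.0                     # positive otherwise
```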
The MOCCA algorithm is designed to minimize
the nonconvex $D_\text{TPL}(c,\hat{c}(f))$ objective function and the pseudo-code
for doing so is given in Eqs. (47)-(52) in Ref. \cite{Barber2016b}. The algorithm
results from making a local convex quadratic approximation to Eq. (\ref{DTPL}).
In order to form the quadratic approximation, we need to compute the
first and second derivatives of
\begin{equation*}
L_\text{TPL}(f) =D_\text{TPL}(c,\hat{c}(f)).
\end{equation*}
These derivatives were computed in Ref. \cite{Barber2016b}, but they
must be modified to account for the use of $\text{softexp}(\cdot)$:
\begin{align*}
\nabla_f L_\text{TPL}(f) =& Z^\top A(f)^\top r(f), \\
\nabla^2_f L_\text{TPL}(f) =& -Z^\top \diag(B(f)^\top r(f)) Z+ \\
& Z^\top A(f)^\top \diag(\hat{c}(f)+r(f))A(f) Z,
\end{align*}
where the $w,\ell$ component of the residual $r(f)$ is
\begin{equation*}
r_{w,\ell}(f) = c_{w,\ell} -\hat{c}_{w,\ell}(f).
\end{equation*}
The component form of the matrices $Z$, $A(f)$ and $B(f)$ are
\begin{equation*}
Z_{\ell i,m k} = \mu_{m, i} X_{\ell, k},
\end{equation*}
\begin{equation}
\label{aeq}
A_{w \ell ,\ell^\prime i}(f) = \frac{s_{w \ell i} \text{softexp}^\prime[-(Zf)_{\ell i}]}
{\sum_{i^\prime}s_{w \ell i^\prime} \text{softexp}[-(Zf)_{\ell i^\prime}]}
\mathbf{I}_{\ell \ell^\prime},
\end{equation}
and
\begin{equation*}
B_{w \ell ,\ell^\prime i}(f) = \frac{s_{w \ell i} \text{softexp}^{\prime \prime}[-(Zf)_{\ell i}]}
{\sum_{i^\prime}s_{w \ell i^\prime} \text{softexp}[-(Zf)_{\ell i^\prime}]}
\mathbf{I}_{\ell \ell^\prime},
\end{equation*}
where
\begin{equation*}
\mathbf{I}_{\ell \ell^\prime} = \begin{cases}
1 & \ell = \ell^\prime \\
0 & \ell \neq \ell^\prime \\
\end{cases}.
\end{equation*}
The use of $\text{softexp}(\cdot)$ introduces a small complication because
\begin{equation*}
\text{softexp}^{\prime\prime}(x) \neq \text{softexp}^\prime(x),
\end{equation*}
while the original MOCCA derivation made use of the fact
that the first and second derivatives of $\exp(x)$ are equal.
Accordingly the first term of the Hessian $\nabla^2_f L_\text{TPL}(f)$
has the matrix $B(f)$ instead of $A(f)$.
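The analytic gradient $\nabla_f L_\text{TPL}$ can be validated against finite differences on a toy system; all dimensions, spectra, and the projection matrix below are random stand-ins, not our experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_w, n_l, n_i, n_m, n_k = 2, 3, 4, 2, 5            # windows, rays, energies, materials, pixels
N  = rng.uniform(1e3, 2e3, size=(n_w, n_l))         # incident photons N_{w,l}
s  = rng.uniform(0.1, 1.0, size=(n_w, n_l, n_i))
s /= s.sum(axis=2, keepdims=True)                   # normalized spectral response s_{w,l,i}
mu = rng.uniform(0.1, 1.0, size=(n_m, n_i))         # attenuation values mu_{m,i}
X  = rng.uniform(0.0, 0.5, size=(n_l, n_k))         # toy projection matrix X_{l,k}
f  = rng.uniform(0.0, 1.0, size=(n_k, n_m))         # material maps f_{k,m}

def softexp(x):
    return np.where(x <= 0, np.exp(np.minimum(x, 0)), x + 1.0)

def softexp_prime(x):
    return np.where(x <= 0, np.exp(np.minimum(x, 0)), 1.0)

def c_hat(f):
    Zf = np.einsum('mi,lk,km->li', mu, X, f)        # (Z f)_{l,i}
    return N * np.einsum('wli,li->wl', s, softexp(-Zf))

def L_TPL(f, c):
    ch = c_hat(f)
    return np.sum(ch - c + c * np.log(c / ch))

def grad_L(f, c):
    # component form of Z^T A(f)^T r(f): (r/c_hat) * N * s * softexp' reproduces A^T r
    Zf = np.einsum('mi,lk,km->li', mu, X, f)
    ch = N * np.einsum('wli,li->wl', s, softexp(-Zf))
    wgt = (1.0 - c / ch) * N
    return -np.einsum('wl,wli,li,mi,lk->km', wgt, s, softexp_prime(-Zf), mu, X)

c = c_hat(f) * rng.uniform(0.9, 1.1, size=(n_w, n_l))   # synthetic "measured" counts
g, eps = grad_L(f, c), 1e-6
g_fd = np.zeros_like(f)
for k in range(n_k):
    for m in range(n_m):
        fp, fm = f.copy(), f.copy()
        fp[k, m] += eps
        fm[k, m] -= eps
        g_fd[k, m] = (L_TPL(fp, c) - L_TPL(fm, c)) / (2.0 * eps)
assert np.allclose(g, g_fd, rtol=1e-5, atol=1e-7)
```

Such a check is mainly useful for pinning down the sign and index conventions in the component formulas above.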
The MOCCA derivation for spectral CT relies on splitting the Hessian
matrix $\nabla^2_f L_\text{TPL}(f)$ into the difference of two positive
semi-definite (PSD) matrices. To accomplish this, we need to use the fact
\begin{equation}
\label{cond1}
\text{softexp}^{\prime\prime}(x) \leq \text{softexp}^\prime(x),
\end{equation}
a condition which is satisfied in our definition of $\text{softexp}(\cdot)$.
This condition allows us to write
\begin{equation*}
B = A-(A-B)= A-C,
\end{equation*}
where
$A$ and $C$ are matrices with non-negative matrix elements.
That $C$ has non-negative matrix elements, is shown by using Eq. (\ref{cond1})
and the fact that the spectral sensitivities $s_{w,\ell,i}$ are non-negative.
Realizing that $B$ can be expressed as $A-C$, the algebra in MOCCA derivation from
Ref. \cite{Barber2016b} can be followed through carrying the extra term $-C$.
The extra term turns out to have no impact on the final pseudocode; thus the
MOCCA algorithm remains the same except for the adjustment to the matrix $A$
in Eq. (\ref{aeq}).
For the purposes here, the salient fact is that with the various
derivatives of $L_\text{TPL}(f)$ computed, a convex quadratic local upper bound
can be formed.
In the neighborhood of an expansion point $f_0$, we approximate
$L_\text{TPL}(f)$ with
\begin{equation*}
L_\text{TPL}(f) \approx Q(K(f_0) f),
\end{equation*}
where the precise form of the quadratic function $Q$ is specified
in Ref. \cite{Barber2016b}. The matrix $K(f)$
is
\begin{equation*}
K_{w \ell,m k}(f) = \sum_{\ell^\prime i} A_{w \ell,\ell^\prime i}(f) Z_{\ell^\prime i,m k}.
\end{equation*}
The rows of $K(f)$ index the data space consisting of energy windows, $w$,
and rays, $\ell$, and the columns index the image space consisting of materials, $m$,
and pixels, $k$.
\noindent
\paragraph*{Step lengths of MOCCA and $\mu$-PC}
The MOCCA algorithm is primal-dual as it is based on the diagonal-SPC CPPD.
Following Refs. \cite{pock2011diagonal,Barber2016b}, the step lengths for the dual
and primal updates are
\begin{align*}
\Sigma_{w \ell} = \frac{1/\lambda}{ \sum_{m, k} |K_{w \ell , m k}(f_0)| }, \\
T_{m k} = \frac{\lambda}{ \sum_{w, \ell} |K_{w \ell , m k}(f_0)| },
\end{align*}
respectively, and $\lambda$ is a step size ratio parameter
that must be tuned.
In our previous work (Ref. \cite{Barber2016b}), we found that faster convergence
can be obtained by applying $\mu$-PC to the materials basis, which transforms it to an
orthogonal basis; in this new formulation of the optimization problem,
the step lengths are computed the same way as before by substituting the new matrix
$K(f_0)$ calculated in this transformed basis.
\noindent
\paragraph*{A $m$-block diagonal SPC for MOCCA applied to spectral CT}
The condition on $\Sigma$ and $T$ that leads to convergence for SPC CPPD
is that the matrix
\begin{equation*}
M = \left( \begin{array}{cc}
T^{-1} & -K^\top \\
-K & \Sigma^{-1} \end{array}
\right)
\end{equation*}
is positive semi-definite, i.e.
$v^\top M v \ge 0$
for any vector $v$.
In designing step-matrices $\Sigma$ and $T$ for MOCCA, we respect the constraint
imposed by positive semi-definiteness of $M$ with $K(f_0)$ changing at each iteration.
We propose a $m$-block diagonal SPC for $\Sigma$ and
$T$ that is motivated by preserving invariance to rotations of the materials expansion set;
in other words, the output of the algorithm would be identical regardless of any rotation
applied to the selected basis of materials,
which is a natural property that is not satisfied by the $\mu$-PC method.
In the process of developing $\mu$-PC we had noticed sensitive convergence behavior simply
by performing such rotations. This sensitivity was traced to the diagonal PC strategy for $\Sigma$ and $T$.
The proposed step matrices are
\begin{equation*}
\left(\Sigma^{-1} \right)_{w \ell, w^\prime \ell^\prime} = \lambda
\sum_k \sqrt{ \sum_m K^2_{w \ell, m k}(f_0) } \; \mathbf{I}_{w \ell, w^\prime \ell^\prime}
\end{equation*}
for the dual step
and
\begin{equation*}
\left(T^{-1} \right)_{m k, m^\prime k^\prime} = \frac{1}{\lambda}
\sum_{w, \ell} \frac{ K_{w \ell, m k}(f_0) K_{w \ell, m^\prime k}(f_0) }
{\sqrt{ \sum_{m^{\prime\prime}}K^2_{w \ell, m^{\prime\prime} k}(f_0) }} \mathbf{I}_{k,k^\prime},
\end{equation*}
for the primal step.
As before, the $\Sigma^{-1}$ matrix is diagonal, and inverting to find $\Sigma$ only
involves computing the reciprocal of the diagonal elements.
The new definition of $T^{-1}$, however, is diagonal only in $k,k^\prime$
and each diagonal element indexed by $k$ consists of an $m \times m$ block.
Inversion to find $T$ thus involves inversion of an $m \times m$ matrix
where each entry is an $N_k$-length vector, where $N_k$ is the total number of pixels
in a single material map. The inversion of such an $m \times m$ matrix is feasible
because the number of expansion materials is low; in fact, in this work we use $N_m=3$.
The matrix inversion must be computed at every iteration because $K(f_0)$ is a function
of the expansion center, which changes at every iteration for our application of
MOCCA. The overhead in inverting the 3x3 blocks is negligible
in comparison with the computationally intensive X-ray forward- and back-projections.
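The proposed step matrices can be spot-checked against the convergence condition numerically: for a small random $K$ (a toy stand-in for the true system matrix, with the $(m,k)$ index structure made explicit), the assembled $M$ comes out positive semi-definite up to roundoff:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_m, n_k, lam = 6, 3, 4, 0.7        # data rows (w,l), materials, pixels, step ratio
K = rng.normal(size=(n_wl, n_m, n_k))     # toy K_{w ell, m k}

norm = np.sqrt((K**2).sum(axis=1))        # sqrt(sum_m K^2_{wl,mk}), shape (wl, k)
Sigma_inv = lam * norm.sum(axis=1)        # diagonal dual step matrix, shape (wl,)
T_inv = np.einsum('wmk,wnk,wk->kmn', K, K, 1.0 / norm) / lam  # m x m block per pixel k

# assemble M = [[T^{-1}, -K^T], [-K, Sigma^{-1}]] with columns ordered (k, m)
Kmat = K.transpose(0, 2, 1).reshape(n_wl, n_k * n_m)
T_full = np.zeros((n_k * n_m, n_k * n_m))
for k in range(n_k):
    T_full[k*n_m:(k+1)*n_m, k*n_m:(k+1)*n_m] = T_inv[k]
M = np.block([[T_full, -Kmat.T], [-Kmat, np.diag(Sigma_inv)]])
assert np.linalg.eigvalsh(M).min() > -1e-9   # positive semi-definite up to roundoff
```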
\section{Results}
\label{sec:results}
\begin{figure}[!h]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[width=0.99\linewidth]{figs/spectra.eps}}
\end{minipage}
\caption{Realistic X-ray normalized spectral response curves for 4-window spectral CT with a photon-counting
detector. Shown are the response curves for the first detector pixel; other pixels have slight
variations from these curves.
\label{fig:spectra}}
\end{figure}
Spectral CT counts data are generated based on a simulation of our
bench-top X-ray system including a
photon-counting detector with
192 pixels. Mean transmitted photon
counts acquired in four energy windows are computed based on spectra generated from
calibration of our system. The precise spectra vary as a function of detector pixel, and
example spectra are shown in Fig. \ref{fig:spectra}.
For the spectral CT data, 200 projections are generated from a phantom simulation
of one of our physical test objects: a 6.35\,cm-diameter
Poly(methyl methacrylate) (PMMA) cylinder with four inserted rods including PMMA,
Air (empty),
Teflon, and low-density polyethylene (LDPE) inserts.
In the empty insert, Gd contrast agent is included at a density fraction of 0.003
(note that this is only possible in simulation). An Aluminum/PMMA/Gd materials expansion
set is used for image reconstruction, and the corresponding material maps of the
phantom are shown in Fig. \ref{fig:rods}.
\begin{figure}[!h]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{Aluminum~~~~~~~~~~~PMMA~~~~~~~~~~~~Gadolinium}
\centerline{\includegraphics[width=0.95\linewidth]{figs/rodphantom.eps}}
\end{minipage}
\caption{Rods phantom decomposed into Aluminum, PMMA, and Gd maps. The structure of the
phantom is most easily visible in the PMMA map, where the PMMA background cylinder is
clearly visible. The rods, clockwise from the upper left, are: Gd at a density fraction of 0.003,
Teflon, PMMA, and LDPE. The Gd ``rod'' is only visible in the Gd map. The display windows
are [-0.1,0.2], [0.5,1.5], and [-0.003,0.006] for Aluminum, PMMA, and Gd maps, respectively.
\label{fig:rods}}
\end{figure}
The test data are the noiseless mean counts, and the goal of this ``inverse crime''
set up is to characterize MOCCA convergence for $\mu$-PC and $m$-block
diagonal SPC by observing the accurate recovery of the test object.
The difficulty of the problem lies in the fact that we employ realistic spectra that
include non-flux-dependent physical factors that blur the sharp energy-window borders.
The blurred spectra have realistic overlap with each other as opposed to ideal spectral
responses with no overlap.
\begin{figure}[!h]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[width=0.99\linewidth]{figs/convergence.eps}}
\end{minipage}
\caption{The log-log plot shows convergence of $D_\text{TPL}(c,\hat{c}(f^{(n)}))$, where
$f^{(n)}$ is the material map estimates at iteration $n$. The curves show results for MOCCA
with $\mu$-PC and with $m$-block diagonal SPC.
\label{fig:conv}}
\end{figure}
In Fig. \ref{fig:conv}, we display the $D_\text{TPL}$ data discrepancy as a function
of iteration number for both PC strategies.
In each case the $\lambda$ parameter
is tuned for most rapid convergence in this quantity. Both versions of MOCCA are run
for 2,000 iterations and in this example it is clear that $m$-block diagonal SPC
outperforms $\mu$-PC. Not shown is the result for MOCCA with diagonal SPC,
which exhibits divergent behavior for all tested $\lambda$ values.
Divergent behavior can occur with MOCCA,
when only a single ``inner loop'' is performed \cite{Barber2016a,Barber2016b}. Due
to efficiency constraints, we aim to operate MOCCA with parameter and preconditioning
choices that allow its operation without nested inner and outer loops.
\begin{figure}[!h]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{~$m$-block diag. SPC~~~~~~~~~~~$\mu$-PC~~~~~~~~~}
\centerline{\includegraphics[width=0.9\linewidth]{figs/gdims.eps}}
\end{minipage}
\caption{Gd material maps at various iteration numbers for MOCCA with the new
$m$-block diagonal SPC and with $\mu$-PC.
From top to bottom the iteration numbers are: 100, 200, 1000, and 2000.
The display window is [-0.003,0.006] for all panels.
\label{fig:gdconv}}
\end{figure}
Of particular interest for convergence studies, in this case, is the Gd material map.
It has such low density that lack of convergence is obvious in visualizing the corresponding
images. In Fig. \ref{fig:gdconv}, we display a series of intermediate estimates
of the Gd map for both pre-conditioning methods. Of particular interest is the
fact that at 100 iterations the proposed $m$-block method has little contamination
from the PMMA and aluminum maps, while $\mu$-PC shows significant bleed-through
from the other expansion materials at 100 and 200 iterations. From the image series
it is also clear that the $m$-block method achieves accurate Gd recovery much earlier
than $\mu$-PC. We also note that the artifact patterns are rather complex
at intermediate iterations; this results from the variations of spectral response
across detector pixels.
\section{Summary}
We propose a new $m$-block diagonal step-preconditioner for use with MOCCA applied to spectral CT.
In these preliminary convergence studies we have primarily been concerned with K-edge imaging with
the use of a three-material expansion set: a soft-tissue equivalent, a bone equivalent, and Gd contrast
agent. In this setting, the new preconditioner enables MOCCA to be applied effectively
for one-step reconstruction of three three material maps from four-window photon-counting data with
realistic spectral responses.
At the conference, we will also present experimental results on our K-edge imaging phantom
using MOCCA with $m$-block diagonal step-preconditioning.
\section{Acknowledgment}
RFB is supported by an Alfred P. Sloan Fellowship and by NSF award DMS-1654076.
This work is also supported in part by NIH
Grant Nos. R01-EB018102, and R01-CA182264.
The contents of this article are solely the responsibility of
the authors and do not necessarily represent the official
views of the National Institutes of Health.
\bibliographystyle{ieeebib}
\section{Introduction}
Symplectic field theory (SFT), introduced by H. Hofer, A. Givental and Y. Eliashberg in 2000 ([EGH]), is a very large
project and can be viewed as a topological quantum field theory approach to Gromov-Witten theory. Besides providing a
unified view on established pseudoholomorphic curve theories like symplectic Floer homology, contact homology and
Gromov-Witten theory, it leads to numerous new applications and opens new routes yet to be explored. \\
While symplectic field theory leads to algebraic invariants with very rich algebraic structures, it was pointed out by Eliashberg in his ICM 2006 plenary talk ([E]) that the integrable systems of rational Gromov-Witten theory very naturally appear in rational symplectic field theory by using the link between the rational symplectic field theory of prequantization spaces in the Morse-Bott version and the rational Gromov-Witten potential of the underlying symplectic manifold, see the recent papers \cite{R1}, \cite{R2} by the second author. Indeed, after introducing gravitational descendants as in Gromov-Witten theory, it is precisely the rich algebraic formalism of SFT with its Weyl and Poisson structures that provides a natural link between symplectic field theory and (quantum) integrable systems. \\
On the other hand, carefully defining a generalization of gravitational descendants and adding them to the picture, the first author has shown in \cite{F2} that one can assign to every contact manifold an infinite sequence of commuting Hamiltonian systems on
SFT homology and the question of their integrability arises. For this it is important to fully understand the algebraic structure of
gravitational descendants in SFT. \\
While it is well-known that in Gromov-Witten theory the topological meaning of gravitational descendants leads to new differential
equations for the Gromov-Witten potential, in this paper we want to proceed with our project of understanding how these rich algebraic structures carry over from Gromov-Witten theory to symplectic field theory. While we have already shown in \cite{FR} how the well-known string, dilaton and divisor equations translate from Gromov-Witten theory to SFT, as a next step we want to show how classical genus-zero topological recursion generalizes to symplectic field theory. \\
Although this is a first concrete step in the study of integrability of the Hamiltonian systems of SFT, notice that topological recursion relations in the forms we study here might not be enough to answer the question of integrability: the Hamiltonian systems arising from SFT are, a priori, much more general than those associated with Gromov-Witten invariants, involving in particular more than just local functionals (see \cite{R2}), and topological recursion relations, even together with string, dilaton and divisor equations, might not yet be restrictive enough to grant complete control over the algebra of commuting Hamiltonians. They seem however to give an affirmative answer to the fundamental question of the reconstructability of the gravitational descendants from the primary invariants (i.e. without descendants) in genus $0$.\\
From the computation of the SFT of a Reeb orbit with descendants in \cite{F2} it can be seen that the genus-zero topological recursion requires a non-equivariant version of SFT, which is generated by parametrized instead of unparametrized Reeb orbits. The definition of this non-equivariant version of SFT is currently a very active field of research and related to the work of Bourgeois and Oancea in \cite{BO}, where a Gysin-type spectral sequence relating linearized contact homology (a slight generalization of cylindrical contact homology depending on a symplectic filling) and symplectic homology of this filling is established by viewing the one as the
(non-)equivariant version of the other. \\
Since the topological recursion relation is already interesting in the case of cylindrical contact homology and the non-equivariant version of it is already understood, in this first paper on topological recursion we restrict ourselves to cylindrical contact homology, i.e. study the algebraic structure of gravitational descendants only for this special case. \\
This paper is organized as follows: While in section two we review the most important definitions and results about SFT with gravitational descendants and its relation with integrable systems in \cite{F2} and \cite{FR} (including short discussions of how to interpret the SFT homology algebra as a Poisson algebra of functions on a singular phase space and how to generalize the notion of local functionals from Gromov-Witten theory to general contact manifolds), in section three we first show, as a motivation for our main result, how the topological recursion relations in Gromov-Witten theory carry over to symplectic Floer theory. Since this example suggests that the localization theorem for gravitational descendants needs a non-equivariant version of cylindrical contact homology which, similar to symplectic Floer homology, is generated by parametrized instead of unparametrized closed Reeb orbits, we recall in section four the definition of non-equivariant cylindrical homology from \cite{BO} and prove the topological recursion relations in the non-equivariant situation. Finally, in section five we discuss two important applications of our main result. First we show how the topological recursion formulas carry over from the non-equivariant to the equivariant situation and use this result to show that, as in rational Gromov-Witten theory, all descendant invariants can be computed from primary invariants, that is, without descendants. After this we show that our results can be further used to define an action of the (quantum) cohomology on non-equivariant cylindrical homology similar to the corresponding action on symplectic Floer homology defined in \cite{PSS}. At the end we show that in the Floer case of SFT we just get back the topological recursion relations in Floer homology from section three and that the action of quantum cohomology on non-equivariant homology splits and agrees with the action on Floer homology as defined in \cite{PSS}.\\
While the work on this paper began when both authors were members of the Mathematical Sciences Research Institute (MSRI) in Berkeley, most of the work was conducted when the first author was a postdoc at the Max Planck Institute (MPI) for Mathematics in the Sciences in Germany and the second author was a FSMP postdoc at the Institut de Mathematiques de Jussieu, Paris VI. They want to thank the institutes for their hospitality and their great working environment. Further they would like to thank Y. Eliashberg and D. Zvonkine for useful discussions.
\vspace{0.5cm}
\section{Symplectic field theory with gravitational descendants}
\subsection{Symplectic Field Theory}
Symplectic field theory (SFT) is a very large project, initiated by Y. Eliashberg,
A. Givental and H. Hofer in their paper \cite{EGH}, designed to describe in a unified way
the theory of pseudoholomorphic curves in symplectic and contact topology.
Besides providing a unified view on well-known theories like symplectic Floer
homology and Gromov-Witten theory, it shows how to assign algebraic invariants
to closed manifolds with a stable Hamiltonian structure. \\
Following \cite{BEHWZ} a Hamiltonian structure on a closed $(2m-1)$-dimensional manifold $V$ is a closed two-form $\omega$
on $V$, which is maximally nondegenerate in the sense that $\ker\omega=\{v\in TV:\omega(v,\cdot)=0\}$
is a one-dimensional distribution. The Hamiltonian structure is required to be stable in the sense that there exists a
one-form $\lambda$ on $V$ such that $\ker\omega\subset\ker d\lambda$ and $\lambda(v)\neq 0$ for all $v\in\ker\omega-\{0\}$.
Any stable Hamiltonian structure $(\omega,\lambda)$ defines a symplectic hyperplane distribution $(\xi=\ker\lambda,\omega_{\xi})$,
where $\omega_{\xi}$ is the restriction of $\omega$, and
a vector field $R$ on $V$ by requiring $R\in\ker\omega$ and $\lambda(R)=1$, which is called the Reeb vector field of the
stable Hamiltonian structure. Examples for closed manifolds $V$ with a stable Hamiltonian structure $(\omega,\lambda)$
are contact manifolds, symplectic mapping tori and principal circle bundles over symplectic manifolds (\cite{BEHWZ}): \\
First observe that when $\lambda$ is a contact form on $V$, it is easy to check that $(\omega:=d\lambda,\lambda)$ is a stable
Hamiltonian structure and the symplectic hyperplane distribution agrees with the contact structure.
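To make this check explicit (we just spell out the definitions): the Reeb vector field $R$ of the contact form $\lambda$ is characterized by
\begin{equation*} \iota_R\, d\lambda = 0, \quad \lambda(R)=1, \end{equation*}
so that $\ker\omega=\ker d\lambda=\IR\cdot R$ is one-dimensional by the nondegeneracy of $d\lambda$ on $\xi=\ker\lambda$, the stability condition $\ker\omega\subset\ker d\lambda$ holds trivially, and $\lambda(cR)=c\neq 0$ for every $cR\in\ker\omega-\{0\}$.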
For the other two cases, let $(M,\omega_M)$ be a symplectic manifold. Then every principal circle bundle $S^1\to V\to M$ and
every symplectic mapping torus $M\to V\to S^1$, i.e. $V=\Mph=\IR\times M/\{(t,p)\sim(t+1,\phi(p))\}$ for
$\phi\in\Symp(M,\omega_M)$ also carries a stable Hamiltonian structure. For the circle bundle the Hamiltonian
structure is given by the pullback $\pi^*\omega$ under the bundle projection and we can choose as one-form $\lambda$ any $S^1$-connection form.
On the other hand, the stable
Hamiltonian structure on the mapping torus $V=\Mph$ is given by lifting the symplectic form to $\omega\in\Omega^2(\Mph)$ via the natural
flat connection $TV=TS^1\oplus TM$ and setting $\lambda=dt$ for the natural $S^1$-coordinate $t$ on $\Mph$.
While in the mapping torus case $\xi$ is always integrable, in the circle bundle case the hyperplane distribution $\xi$ may be integrable or
non-integrable, even contact. \\
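In the mapping torus case the stability check is again immediate (our own spelling-out of the definitions): since $\lambda=dt$ gives $d\lambda=0$, we have $\ker d\lambda=TV$, so $\ker\omega\subset\ker d\lambda$ is automatic; moreover
\begin{equation*} \ker\omega = \IR\cdot\del_t, \quad \lambda(\del_t)=1, \end{equation*}
so the Reeb vector field is $R=\del_t$ and its closed orbits of period $k$ correspond to fixed points of the iterate $\phi^k$. \\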
Symplectic field theory assigns algebraic invariants to closed manifolds $V$ with a stable Hamiltonian structure.
The invariants are defined by counting $\Ju$-holomorphic curves in $\IR\times V$ with finite energy,
where the underlying closed Riemann surfaces are explicitly allowed to have punctures, i.e. single points are removed.
The almost complex structure $\Ju$ on the cylindrical manifold $\IR\times V$ is required to be cylindrical in the sense that it is
$\IR$-independent, links the two natural vector fields on $\IR\times V$, namely the Reeb vector field
$R$ and the $\IR$-direction $\del_s$, by $\Ju\del_s=R$, and turns the symplectic hyperplane distribution on $V$ into a complex subbundle of $TV$, $\xi=TV\cap \Ju TV$. It follows that a cylindrical almost complex structure $\Ju$ on $\IR\times V$ is determined by its restriction $\Ju_{\xi}$ to $\xi\subset TV$, which is required to be $\omega_{\xi}$-compatible in the sense that $\omega_{\xi}(\cdot,\Ju_{\xi}\cdot)$ defines a metric on $\xi$. Note that such almost complex structures $\Ju$ are called compatible with the stable Hamiltonian structure and that the set of these almost complex structures is non-empty and contractible. \\
We assume that the stable Hamiltonian structure is Morse in the sense that all closed orbits of the
Reeb vector field are nondegenerate in the sense of \cite{BEHWZ}; in particular, the set
of closed Reeb orbits is discrete. Further it is shown in \cite{BEHWZ} that all $\Ju$-holomorphic curves in $\IR\times V$ with finite energy are asymptotically cylindrical over collections of Reeb orbits $\Gamma^{\pm}=\{\gamma^{\pm}_1,\ldots,
\gamma^{\pm}_{n^{\pm}}\}$ as the $\IR$-factor tends to $\pm\infty$. We denote by $\CM_{g,r,A}(\Gamma^+,\Gamma^-)/\IR$ the corresponding compactified moduli space of genus $g$ curves with $r$ additional marked points representing the absolute homology class $A\in H_2(V)$ using a choice of spanning surfaces (\cite{BEHWZ}, \cite{EGH}). After choosing abstract perturbations using polyfolds following \cite{HWZ}, we get that
$\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ is a (weighted branched) manifold with corners of dimension
equal to the Fredholm index of the Cauchy-Riemann operator for $\Ju$.
{\it Note that as in \cite{F2} and \cite{FR} we will not discuss transversality for the Cauchy-Riemann operator but just refer to the upcoming
papers on polyfolds by H. Hofer and his co-workers.} \\
Let us now briefly introduce the algebraic formalism of SFT as described in \cite{EGH}: \\
Recall that a multiply-covered Reeb orbit $\gamma^k$ is called bad if
$\CZ(\gamma^k)\neq\CZ(\gamma)\mod 2$, where $\CZ(\gamma)$ denotes the
Conley-Zehnder index of $\gamma$. Calling a Reeb orbit $\gamma$ {\it good} if it is not bad we assign to every
good Reeb orbit $\gamma$ two formal graded variables $p_{\gamma},q_{\gamma}$ with grading
\begin{equation*}
|p_{\gamma}|=m-3-\CZ(\gamma),|q_{\gamma}|=m-3+\CZ(\gamma)
\end{equation*}
when $\dim V = 2m-1$. Assuming we have chosen a basis $A_0,\ldots,A_N$ of $H_2(V)$, we assign to every $A_i$ a formal
variable $z_i$ with grading $|z_i|=-2c_1(A_i)$. In order to include higher-dimensional moduli spaces we further assume that a string
of closed (homogeneous) differential forms $\Theta=(\theta_1,\ldots,\theta_N)$ on $V$ is chosen and assign to
every $\theta_{\alpha}\in\Omega^*(V)$ a formal variable $t^{\alpha}$
with grading
\begin{equation*} |t^{\alpha}|=2 -\deg\theta_{\alpha}. \end{equation*}
Finally, let $\hbar$ be another formal variable of degree $|\hbar|=2(m-3)$. \\
Let $\WW$ be the graded Weyl algebra over $\IC$ of power series in the variables
$\hbar,p_{\gamma}$ and $t^{\alpha}$ with coefficients which are polynomials in the
variables $q_{\gamma}$ and Laurent series in the $z_i$, which is equipped with the associative product $\star$ in
which all variables super-commute according to their grading except for the
variables $p_{\gamma}$, $q_{\gamma}$ corresponding to the same Reeb orbit $\gamma$,
\begin{equation*} [p_{\gamma},q_{\gamma}] =
p_{\gamma}\star q_{\gamma} -(-1)^{|p_{\gamma}||q_{\gamma}|}
q_{\gamma}\star p_{\gamma} = \kappa_{\gamma}\hbar.
\end{equation*}
($\kappa_{\gamma}$ denotes the multiplicity of $\gamma$.) Since it is shown in \cite{EGH} that the bracket
of two elements in $\WW$ gives an element in $\hbar\WW$, it follows that we get a bracket on the module
$\hbar^{-1}\WW$. Following \cite{EGH} we further introduce
the Poisson algebra $\PP$ of formal power series in the variables $p_{\gamma}$ and $t^{\alpha}$ with
coefficients which are polynomials in the variables $q_{\gamma}$ and Laurent series in the $z_i$ with Poisson bracket given by
\begin{equation*}
\{f,g\} = \sum_{\gamma}\kappa_{\gamma}\Bigl(\frac{\del f}{\del p_{\gamma}}\frac{\del g}{\del q_{\gamma}} -
(-1)^{|f||g|}\frac{\del g}{\del p_{\gamma}}\frac{\del f}{\del q_{\gamma}}\Bigr).
\end{equation*}
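As an elementary consistency check of the above conventions (immediate from the definitions): the gradings satisfy
\begin{equation*} |p_{\gamma}|+|q_{\gamma}| = (m-3-\CZ(\gamma))+(m-3+\CZ(\gamma)) = 2(m-3) = |\hbar|, \end{equation*}
so the commutation relation $[p_{\gamma},q_{\gamma}]=\kappa_{\gamma}\hbar$ in $\WW$ is homogeneous; similarly, applying the formula for the Poisson bracket to $f=p_{\gamma}$, $g=q_{\gamma}$ gives $\{p_{\gamma},q_{\gamma}\}=\kappa_{\gamma}$, the classical counterpart of the Weyl relation.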
As in Gromov-Witten theory we want to organize all moduli spaces $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$
into a generating function $\IH\in\hbar^{-1}\WW$, called {\it Hamiltonian}. In order to include also higher-dimensional
moduli spaces, in \cite{EGH} the authors follow the approach in Gromov-Witten theory to integrate the chosen differential forms
$\theta_{\alpha}$ over the moduli spaces after pulling them back under the evaluation maps to the target manifold $V$.
The Hamiltonian $\IH$ is then defined by
\begin{equation*}
\IH = \sum_{\Gamma^+,\Gamma^-,I} \int_{\CM_{g,r,A}(\Gamma^+,\Gamma^-)/\IR}
\ev_1^*\theta_{\alpha_1}\wedge\ldots\wedge\ev_r^*\theta_{\alpha_r}\; \hbar^{g-1}t^I p^{\Gamma^+}q^{\Gamma^-}z^d
\end{equation*}
with $t^{I}=t^{\alpha_1}\ldots t^{\alpha_r}$, $p^{\Gamma^+}=p_{\gamma^+_1}\ldots p_{\gamma^+_{n^+}}$,
$q^{\Gamma^-}=q_{\gamma^-_1}\ldots q_{\gamma^-_{n^-}}$ and $z^d = z_0^{d_0}\cdots z_N^{d_N}$.
Expanding
\begin{equation*} \IH=\hbar^{-1}\sum_g \IH_g \hbar^g \end{equation*}
we further get a rational Hamiltonian $\Ih=\IH_0\in\PP$, which counts only curves with genus zero. \\
While the Hamiltonian $\IH$ explicitly depends on the chosen contact form, the cylindrical almost complex structure,
the differential forms and abstract polyfold perturbations making all moduli spaces regular, it is outlined in \cite{EGH}
how to construct algebraic invariants, which just depend on the contact structure and the cohomology classes of the
differential forms.
\vspace{0.5cm}
\subsection{Gravitational descendants}
In complete analogy to Gromov-Witten theory we can introduce $r$ tautological line
bundles $\LL_1,\ldots,\LL_r$ over each moduli space $\CM_r=\CM_{g,r,A}(\Gamma^+,\Gamma^-)/\IR$, where the fibre of $\LL_i$
over a punctured curve $(u,\Si)\in\CM_r$ is again given
by the cotangent line to the underlying, possibly unstable nodal Riemann surface (without ghost components) at the
$i$-th marked point; formally it can again be defined as the pull-back of the vertical cotangent line
bundle of $\pi: \CM_{r+1}\to\CM_r$ under the canonical section $\sigma_i: \CM_r\to\CM_{r+1}$ mapping to the $i$-th marked
point in the fibre. Note again that while the vertical cotangent line bundle is rather a sheaf (the dualizing sheaf) than a true bundle since
it becomes singular at the nodes in the fibres, the pull-backs under the canonical sections are still true line bundles
as the marked points are different from the nodes and hence these sections avoid the singular loci. \\
While in Gromov-Witten theory the gravitational descendants were defined by integrating powers of the first Chern class
of the tautological line bundle over the moduli space, which by Poincare duality corresponds to counting common zeroes of
sections in this bundle, in symplectic field theory, and more generally in every holomorphic curve theory where curves with
punctures and/or boundary are considered, we are faced with the problem that the moduli spaces generically have
codimension-one boundary, so that the count of zeroes of sections in general depends on the chosen sections in the
boundary. It follows that the integration of the first Chern class of the tautological line bundle over a single moduli
space has to be replaced by a construction involving all moduli spaces at once. Note that this is similar to the choice of
coherent abstract perturbations for the moduli spaces in symplectic field theory in order to achieve transversality for
the Cauchy-Riemann operator. \\
Keeping the interpretation of descendants as common zero sets of sections in powers of the
tautological line bundles, the first author defined in his paper \cite{F2} the notion of {\it coherent collections of sections}
$(s)$ in the tautological line bundles over all moduli spaces, which just formalizes how the sections chosen for the
lower-dimensional moduli spaces should affect the section chosen for a moduli space having them in its boundary. Based on this he then
defined {\it descendants of moduli spaces} $\CM^j\subset\CM$, which were obtained inductively as zero sets of these coherent
collections of sections $(s_j)$ in the tautological line bundles over the descendant moduli spaces $\CM^{j-1}\subset\CM$. \\
So far we have only considered the case with one additional marked point. On the other hand, as already outlined in \cite{F2},
the general case with $r$ additional marked points is just notationally more involved. Indeed, we can
easily define for every moduli space $\CM_r=\CM_{g,r,A}(\Gamma^+,\Gamma^-)/\IR$ with $r$ additional marked points and every
$r$-tuple of natural numbers $(j_1,\ldots,j_r)$ descendants $\CM^{(j_1,\ldots,j_r)}_r\subset\CM_r$ by setting
\begin{equation*} \CM^{(j_1,\ldots,j_r)}_r = \CM^{(j_1,0,\ldots,0)}_r\cap \ldots \cap \CM^{(0,\ldots,0,j_r)}_r, \end{equation*}
where the descendant moduli spaces $\CM^{(0,\ldots,0,j_i,0,\ldots,0)}_r\subset\CM_r$ are defined in the same way as the
one-point descendant moduli spaces $\CM^{j_i}_1\subset\CM_1$ by looking at the $r$ tautological line bundles $\LL_{i,r}$
over the moduli space $\CM_r = \CM_r(\Gamma^+,\Gamma^-)/\IR$ separately. In other words, we inductively choose generic
sections $s^j_{i,r}$ in the line bundles $\LL_{i,r}^{\otimes j}$ to define $\CM^{(0,\ldots,0,j,0,\ldots,0)}_r=
(s^j_{i,r})^{-1}(0)\subset\CM^{(0,\ldots,0,j-1,0,\ldots,0)}_r\subset\CM_r$. \\
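Note that, for generic choices of sections, each of these zero sets cuts down the real dimension by two, since the $\LL_{i,r}^{\otimes j}$ are complex line bundles; hence the expected dimension of the descendant moduli space is
\begin{equation*} \dim \CM^{(j_1,\ldots,j_r)}_r = \dim\CM_r - 2(j_1+\ldots+j_r), \end{equation*}
in accordance with the dimension shift familiar from capping with powers of the first Chern class $c_1(\LL_i)$ in Gromov-Witten theory. \\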
With this we can define the descendant Hamiltonian of SFT, which we will continue denoting by $\IH$, while the
Hamiltonian defined in \cite{EGH} will from now on be called {\it primary}. In order to keep track of the descendants we
will assign to every chosen differential form $\theta_\alpha$ now a sequence of formal variables $t^{\alpha,j}$ with grading
\begin{equation*} |t^{\alpha,j}|=2(1-j) -\deg\theta_\alpha. \end{equation*}
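For example (as a sanity check of the grading conventions): if $\theta_\alpha$ is a two-form, then $|t^{\alpha,j}|=2(1-j)-2=-2j$, so each additional descendant level lowers the degree by two, matching the codimension-two condition imposed by each power of the tautological line bundle; for $j=0$ we recover the grading $|t^{\alpha,0}|=2-\deg\theta_\alpha$ of the primary theory.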
Then the descendant Hamiltonian $\IH\in\hbar^{-1}\WW$ of SFT is defined by
\begin{equation*}
\IH = \sum_{\Gamma^+,\Gamma^-,I} \int_{\CM^{(j_1,\ldots,j_r)}_{g,r,A}(\Gamma^+,\Gamma^-)/\IR}
\ev_1^*\theta_{\alpha_1}\wedge\ldots\wedge\ev_r^*\theta_{\alpha_r}\; \hbar^{g-1}t^Ip^{\Gamma^+}q^{\Gamma^-},
\end{equation*}
where $p^{\Gamma^+}=p_{\gamma^+_1} \ldots p_{\gamma^+_{n^+}}$, $q^{\Gamma^-}=q_{\gamma^-_1} \ldots q_{\gamma^-_{n^-}}$ and
$t^I=t^{\alpha_1,j_1} \ldots t^{\alpha_r,j_r}$.\\
Expanding
\begin{equation*} \IH=\hbar^{-1}\sum_g \IH_g \hbar^g \end{equation*}
we further get a rational Hamiltonian $\Ih=\IH_0\in\PP$, which counts only curves with genus zero.
\vspace{0.5cm}
\subsection{Quantum Hamiltonian systems with symmetries}
We want to emphasize that the following statement is not yet a theorem in the strict mathematical sense as the analytical
foundations of symplectic field theory, in particular, the necessary transversality theorems for the Cauchy-Riemann
operator, are not yet fully established. Since it can be expected that the polyfold project by Hofer and his
collaborators sketched in \cite{HWZ} will provide the required transversality theorems, we follow other papers in the field in
proving everything up to transversality and state it nevertheless as a theorem.
\begin{theorem} Differentiating the Hamiltonian $\IH\in\hbar^{-1}\WW$ with respect to the formal variables $t^{\alpha,p}$
defines a sequence of quantum Hamiltonians
\begin{equation*} \IH_{\alpha,p}=\frac{\del\IH}{\del t^{\alpha,p}} \in H_*(\hbar^{-1}\WW,[\IH,\cdot]) \end{equation*}
{\it in the full SFT homology algebra with differential $D=[\IH,\cdot]: \hbar^{-1}\WW\to\hbar^{-1}\WW$, which commute with respect to the
bracket on $H_*(\hbar^{-1}\WW,[\IH,\cdot])$,}
\begin{equation*} [\IH_{\alpha,p},\IH_{\beta,q}] = 0,\; (\alpha,p),(\beta,q)\in\{1,\ldots,N\}\times\IN. \end{equation*}
\end{theorem}
Everything is an immediate consequence of the master equation $[\IH,\IH]=0$, which can be proven in the same
way as in the case without descendants using the results in \cite{F2}. While the boundary equation $D\circ D=0$ is well-known
to follow directly from the identity $[\IH,\IH]=0$, the fact that every $\IH_{\alpha,p}$, $(\alpha,p)\in
\{1,\ldots,N\}\times\IN$ defines an element in the homology $H_*(\hbar^{-1}\WW,[\IH,\cdot])$ follows from the identity
\begin{equation*} [\IH,\IH_{\alpha,p}] = 0,\end{equation*}
which can be shown by differentiating the master equation with respect to the $t^{\alpha,p}$-variable and using the
graded Leibniz rule,
\[ \frac{\del}{\del t^{\alpha,p}} [f,g] =
[\frac{\del f}{\del t^{\alpha,p}},g] + (-1)^{|t^{\alpha,p}||f|} [f,\frac{\del g}{\del t^{\alpha,p}}]. \]
On the other hand, in order to see that any two $\IH_{\alpha,p}$,
$\IH_{\beta,q}$ commute {\it after passing to homology} it suffices to see that by differentiating twice (and using that all
summands in $\IH$ have odd degree) we get the identity
\begin{equation*}
[\IH_{\alpha,p},\IH_{\beta,q}]+(-1)^{|t^{\alpha,p}|}[\IH,\frac{\del^2\IH}{\del t^{\alpha,p}\del t^{\beta,q}}] = 0.
\end{equation*}
By replacing the full Hamiltonian $\IH$ by the rational Hamiltonian $\Ih$ counting only curves with genus zero, we get a rational version of the above theorem as a corollary.\\
\begin{corollary} Differentiating the rational Hamiltonian $\Ih\in\PP$ with respect to the formal variables $t^{\alpha,p}$
defines a sequence of classical Hamiltonians
\begin{equation*} \Ih_{\alpha,p}=\frac{\del\Ih}{\del t^{\alpha,p}} \in H_*(\PP,\{\Ih,\cdot\}) \end{equation*}
{\it in the rational SFT homology algebra with differential $d=\{\Ih,\cdot\}: \PP\to\PP$, which commute with respect to the
bracket on $H_*(\PP,\{\Ih,\cdot\})$,}
\begin{equation*} \{\Ih_{\alpha,p},\Ih_{\beta,q}\} = 0,\; (\alpha,p),(\beta,q)\in\{1,\ldots,N\}\times\IN. \end{equation*}
\end{corollary}
We now turn to the question of independence of these nice algebraic structures from the choices like contact form,
cylindrical almost complex structure, abstract polyfold perturbations and, of course, the choice of the coherent
collection of sections. This is the content of the following theorem, where we however again want to emphasize that
the following statement is not yet a theorem in the strict mathematical sense as the analytical foundations of symplectic
field theory, in particular, the necessary transversality theorems for the Cauchy-Riemann operator, are not yet fully
established.
\begin{theorem} For different choices of contact form $\lambda^{\pm}$, cylindrical almost complex structure
$\Ju^{\pm}$, abstract polyfold perturbations and sequences of coherent collections of sections $(s^{\pm}_j)$ the
resulting systems of commuting operators $\IH^-_{\alpha,p}$ on $H_*(\hbar^{-1}\WW^-,D^-)$ and $\IH^+_{\alpha,p}$ on
$H_*(\hbar^{-1}\WW^+,D^+)$ are isomorphic, i.e. there exists an isomorphism of the Weyl algebras $H_*(\hbar^{-1}\WW^-,D^-)$
and $H_*(\hbar^{-1}\WW^+,D^+)$ which maps $\IH^-_{\alpha,p}\in H_*(\hbar^{-1}\WW^-,D^-)$ to
$\IH^+_{\alpha,p}\in H_*(\hbar^{-1}\WW^+,D^+)$.
\end{theorem}
For the proof observe that in \cite{F2} the first author introduced the notion of a collection of sections $(s_j)$ in the
tautological line bundles over all moduli spaces of holomorphic curves in the cylindrical cobordism interpolating between the
auxiliary structures which are {\it coherently connecting} the two coherent collections of sections $(s^{\pm}_j)$. \\
We again get a rational version of the above theorem as a corollary.
\begin{corollary} For different choices of contact form $\lambda^{\pm}$, cylindrical almost complex structure
$\Ju^{\pm}$, abstract polyfold perturbations and sequences of coherent collections of sections $(s^{\pm}_j)$ the
resulting systems of commuting functions $\Ih^-_{\alpha,p}$ on $H_*(\PP^-,d^-)$ and $\Ih^+_{\alpha,p}$ on
$H_*(\PP^+,d^+)$ are isomorphic, i.e. there exists an isomorphism of the Poisson algebras $H_*(\PP^-,d^-)$
and $H_*(\PP^+,d^+)$ which maps $\Ih^-_{\alpha,p}\in H_*(\PP^-,d^-)$ to
$\Ih^+_{\alpha,p}\in H_*(\PP^+,d^+)$.
\end{corollary}
These invariance results actually mean that SFT attaches a Weyl (or Poisson) algebra and a (quantum) Hamiltonian system therein (in a coordinate-free way, i.e. up to algebra isomorphisms) to a stable Hamiltonian structure. Focusing on the rational case for simplicity of language, we want to point out the fact that \emph{the Poisson SFT homology algebra can be thought of as the space of functions on some abstract infinite-dimensional Poisson space.} The nature of such a space can in fact be very exotic, and the description via its space of functions remains the only one with full meaning. \\
However a dynamical interpretation of the Poisson space where the dynamics of the SFT-Hamiltonian system takes place is possible. The kernel $\ker(\{\Ih,\cdot\})$ can be seen as the algebra of functions on the space $\mathcal{O}$ of orbits of the Hamiltonian $\IR$-action given by $\Ih$ (the flow lines of the Hamiltonian vector field $X_{\Ih}$ associated to $\Ih$). Even in a finite-dimensional setting the space $\mathcal{O}$ can be very wild. In any case, the image $\text{im}(\{\Ih,\cdot\})$ is an ideal of this algebra and hence identifies a subspace of $\mathcal{O}$ given by all of those orbits $o\in \mathcal{O}$ at which, for any $f\in\PP$, $\{\Ih,f\}|_{o}=0$ (notice that, since $\{\Ih,\Ih\}=0$, $\{\Ih,f\}$ descends to a function on $\mathcal{O}$). But such orbits are simply the constant ones, where $X_{\Ih}$ vanishes. \\
Hence the Poisson SFT-homology algebra $H_*(\PP,\{\Ih,\cdot\})$ can be regarded as the algebra of functions on $X_{\Ih}^{-1}(0)$, seen as a subspace of the space $\mathcal{O}$ of orbits of $\Ih$, endowed with a Poisson structure by singular, stationary reduction.
\vspace{0.5cm}
\subsection{Integrable hierarchies of Gromov-Witten theory and local SFT}
The above beautiful theory of Hamiltonian systems from Symplectic Field Theory completely contains the older one, coming from rational Gromov-Witten theory (see e.g. \cite{DZ} for a comprehensive treatment), and in some sense even clarifies its topological origin.\\
The special case of a stable Hamiltonian structure capturing the full rational Gromov-Witten theory of a closed symplectic manifold $M$ is the trivial circle bundle $V=S^1 \times M$. It was noted in \cite{EGH} for all circle bundles over $M$ that the rational SFT-potential is expressed in terms of the rational Gromov-Witten potential of $M$. In the special case of the trivial bundle, the full dispersionless integrable system structure from the Gromov-Witten theory of $M$ is reproduced as the above Hamiltonian system of SFT (which is hence integrable in this case).\\
In particular, further structures beyond the Hamiltonian nature are inherited by the SFT-system in the trivial bundle case (and partly in the circle bundle case). In \cite{R2} the second author explicitly described how the bihamiltonian structure and the tau-structure are expressed in SFT terms, giving a complete parallelism with Dubrovin's theory of Frobenius manifolds \cite{DZ}. It is a well-known fact that one can reconstruct the full descendant genus $0$ Gromov-Witten potential from the primary one by first finding the one-descendant potential, corresponding to the tau-symmetric set of commuting Hamiltonians for the relevant integrable system, and then using Witten's conjecture in genus $0$, in its generalized form by Dubrovin, which identifies the full descendant rational potential with the tau-function of a specific solution to such an integrable system. This approach has a far-reaching extension to higher genera, culminating in Dubrovin's reconstruction method for the full genus $g$ Gromov-Witten potential (\cite{DZ}) in the case of semisimple quantum cohomology, while in genus $0$ it is equivalent to the string equation, divisor equation, dilaton equation and topological recursion relations.\\
Topological recursion relations can be very naturally expressed in terms of the SFT of a circle bundle, thanks to the extra $S^1$-symmetry which is typical of that situation. It is hence very natural to wonder whether topological recursion relations have a counterpart in a more general SFT context than the circle bundle case. While string, dilaton and divisor equations were extended by the authors to the general SFT case in \cite{FR}, topological recursion relations appear far more subtle, and we will see in the next sections how a natural interpretation requires non-$S^1$-equivariant versions of holomorphic curve counting in stable Hamiltonian structures. Since the full picture for a non-equivariant version of SFT is not completely understood yet, we will restrict to the cylindrical case here.\\
Although this is a first concrete step in the study of integrability of the Hamiltonian systems of SFT, notice that topological recursion relations in the forms we study here might not be enough to answer the question of integrability: the Hamiltonian systems arising from SFT are, a priori, much more general than those associated with Gromov-Witten invariants, involving in particular more than just local functionals. \\
Here the locality of the Hamiltonians means that the Hamiltonian is given by integrating a function, the so-called Hamiltonian density, over the circle, which itself just reflects the fact that the multiplicities of the positive and negative orbits match. From the latter it follows that the locality condition can be naturally generalized to the case of general contact manifolds by restricting to the Poisson subalgebra $\PP_{\geo}\subset\PP$ of so-called \emph{geometric Hamiltonians}, which is generated by monomials $p^{\Gamma_+}q^{\Gamma_-}$ such that $[\Gamma_+]=[\Gamma_-]\in H_1(V)$. Note that it is easy to see from the definition of the Poisson bracket that $\{f,g\}\in \PP_{\geo}$ whenever $f,g\in\PP_{\geo}$. Moreover, all Hamiltonians defined by counting holomorphic curves are automatically geometric, i.e., $\Ih\in\PP_{\geo}$ and hence also $\Ih_{\alpha,p}\in\PP_{\geo}$; we can therefore only expect to be able to prove completeness of the set of commuting Hamiltonians when we restrict to this Poisson subalgebra. \\
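To spell out why $\PP_{\geo}$ is closed under the bracket (a one-line check): a summand of $\{f,g\}$ for $f=p^{\Gamma_+}q^{\Gamma_-}$, $g=p^{\Delta_+}q^{\Delta_-}$ is obtained by removing one variable $p_{\gamma}$ from one monomial and one variable $q_{\gamma}$, corresponding to the same Reeb orbit $\gamma$, from the other; hence
\begin{equation*} [\Gamma_+\cup\Delta_+\setminus\{\gamma\}] = [\Gamma_+]+[\Delta_+]-[\gamma] = [\Gamma_-]+[\Delta_-]-[\gamma] = [\Gamma_-\cup\Delta_-\setminus\{\gamma\}] \in H_1(V), \end{equation*}
so the resulting monomial is again geometric. \\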
While in this paper we still continue to work with the full Poisson algebra $\PP$, as all our results immediately restrict to the above Poisson subalgebra, the topological recursion relations, even together with string, dilaton and divisor equations, might not yet be restrictive enough to grant complete control over the algebra of commuting Hamiltonians. Nevertheless, they seem to give an affirmative answer to the fundamental question of the reconstructability of the gravitational descendants from the primaries in genus zero.\\
As a concrete example beyond the case of circle bundles discussed in \cite{EGH} we review the symplectic field theory of a closed geodesic discussed in \cite{F2}. For this recall that in \cite{F2} the first author has shown that every algebraic invariant of symplectic field theory has a natural analog defined by counting only orbit curves. In particular, in the same way as we define sequences of descendant Hamiltonians $\IH^1_{i,j}$ and $\Ih^1_{i,j}$ by counting general curves in the symplectization of a contact manifold, we can define sequences of descendant Hamiltonians $\IH^1_{\gamma,i,j}$ and $\Ih^1_{\gamma,i,j}$ by just counting branched covers of the orbit cylinder over $\gamma$ with signs (and weights), where the preservation of the contact area under splitting and gluing of curves proves that for every theorem from above we have a version for $\gamma$. \\
Let $\PP^0_{\gamma}$ be the graded Poisson subalgebra of the Poisson algebra $\PP^0$ obtained from the Poisson algebra $\PP$ by setting all $t$-variables to zero, which is generated only by those $p$- and $q$-variables $p_n=p_{\gamma^n}$, $q_n=q_{\gamma^n}$ corresponding to Reeb orbits which are multiple covers of the fixed orbit $\gamma$. In his paper \cite{F2} the first author computed the corresponding Poisson-commuting sequence in the special case where the contact manifold is the unit cotangent bundle $S^*Q$ of a ($m$-dimensional) Riemannian manifold $Q$, so that every closed Reeb orbit $\gamma$ on $V=S^*Q$ corresponds to a closed geodesic $\bar{\gamma}$ on $Q$ (and the string of differential forms just contains a one-form which integrates to one over the closed Reeb orbit).
\begin{theorem}
The system of Poisson-commuting functions $\Ih^1_{\gamma,j}$, $j\in\IN$ on $\PP^0_{\gamma}$ is isomorphic to a system of Poisson-commuting functions $\Ig^1_{\bar{\gamma},j}$, $j\in\IN$ on $\PP^0_{\bar{\gamma}}=\PP^0_{\gamma}$, where for every $j\in\IN$ the descendant Hamiltonian $\Ig^1_{\bar{\gamma},j}$ is given by
\begin{equation*}
\Ig^1_{\bar{\gamma},j} \;=\; \sum \epsilon(\vec{n})\frac{q_{n_1}\cdots q_{n_{j+2}}}{(j+2)!}
\end{equation*}
where the sum runs over all ordered monomials $q_{n_1}\cdots q_{n_{j+2}}$ with $n_1+\ldots+n_{j+2} = 0$ \textbf{and which are of degree $2(m+j-3)$}. Further $\epsilon(\vec{n})\in\{-1,0,+1\}$ is fixed by a choice of coherent orientations in symplectic field theory and is zero if and only if one of the orbits $\gamma^{n_1},\ldots,\gamma^{n_{j+2}}$ is bad in the sense of \cite{BEHWZ}.
\end{theorem}
For $\gamma=V=S^1$ this recovers the well-known result that the corresponding classical Hamiltonian system with symmetries is given by the dispersionless KdV hierarchy, see \cite{E}. Setting aside the sign issues, it follows that the sequence $\Ig^1_{\bar{\gamma},j}$ is obtained from the sequence for the circle by removing all summands of the wrong, that is, non-maximal degree, so that the system is completely determined by the KdV hierarchy and the Morse indices of the closed geodesic and its iterates. Apart from using the geometric interpretation of gravitational descendants for branched covers of orbit cylinders over a closed Reeb orbit in terms of branching conditions, the second main ingredient for the proof is the idea in \cite{CL} to compute the symplectic field theory of $V=S^*Q$ from the string topology of the underlying Riemannian manifold $Q$ by studying holomorphic curves in the cotangent bundle $T^*Q$.
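To make the relation with the dispersionless KdV hierarchy explicit, consider the circle case $\gamma=V=S^1$, drop the degree condition and ignore signs (the normalization of the circle average below is our own choice). Identifying the $q$-variables with the Fourier modes of a function $u(x)=\sum_n q_n e^{inx}$, the $j$-th Hamiltonian of the sequence becomes
\begin{equation*}
\Ig^1_{S^1,j} \;=\; \sum_{n_1+...+n_{j+2}=0}\frac{q_{n_1}\cdot ... \cdot q_{n_{j+2}}}{(j+2)!} \;=\; \oint \frac{u(x)^{j+2}}{(j+2)!}\,dx,
\end{equation*}
which, up to normalization conventions, are the standard Hamiltonians of the dispersionless KdV hierarchy; in particular, $j=0$ gives $\frac{1}{2}\sum_n q_n q_{-n} = \oint u^2/2\,dx$. \\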
\vspace{0.5cm}
\subsection{String, dilaton and divisor equations}
While it is well-known that in Gromov-Witten theory the topological meaning of gravitational descendants leads to new differential
equations for the Gromov-Witten potential, it is natural to ask how these rich algebraic structures carry over from Gromov-Witten theory to symplectic field theory. As a first step, the authors have shown in the paper \cite{FR} how the well-known string, dilaton and divisor equations generalize from Gromov-Witten theory to symplectic field theory. \\
Here the main problem is to deal with the fact that the SFT Hamiltonian
itself is not an invariant for the contact manifold. More precisely, it depends not only on choices like the contact form,
cylindrical almost complex structure and coherent abstract perturbations but also on the chosen differential forms
$\theta_i$ and coherent collections of sections $(s_j)$ used to define gravitational descendants. The main application
of these equations we have in mind is the computation of the sequence of commuting quantum Hamiltonians
$\IH_{\alpha,p}=\frac{\del\IH}{\del t^{\alpha,p}}$ on SFT homology $H_*(\hbar^{-1}\WW, [\IH,\cdot])$ introduced in the last section. \\
As is customary in Gromov-Witten theory, we will assume that the chosen string of differential forms on $V$ contains a
two-form $\theta_2$. Since by adding a marked point we increase the dimension of the moduli space by two, the integration
of a two-form over it leaves the dimension unchanged and we can expect, as in Gromov-Witten theory, to compute the
contributions to the SFT Hamiltonian involving integration of $\theta_2$ in terms of contributions without integration, where
the result should just depend on the homology class $A\in H_2(V)$ which can be assigned to the holomorphic curves in the
corresponding connected component of the moduli space. \\
It turns out that we obtain the same equations as in Gromov-Witten theory (up to contributions of constant curves); these, however, only hold after passing to SFT homology.
\begin{theorem}
For any choice of differential forms and coherent sections the following \emph{string, dilaton and divisor equations} hold \emph{after} passing to SFT homology:
\begin{eqnarray*}
\frac{\del}{\del t^{0,0}}\IH &=& \int_V t\wedge t + \sum_{k}t^{\alpha,k+1}\frac{\del}{\del t^{\alpha,k}}\IH
\;\in\; H_*(\hbar^{-1}\WW,[\IH,\cdot]), \\
\frac{\del}{\del t^{0,1}}\IH &=& \ID_{\mathrm{Euler}}\IH \;\in\; H_*(\hbar^{-1}\WW,[\IH,\cdot]), \\
\left(\frac{\del}{\del t^{2,0}} -z_0\frac{\del}{\del z_0}\right)\IH &=& \int_V t\wedge t\wedge \theta_2 + \sum_{k} t^{\alpha,k+1} c_{2\alpha}^\beta\frac{\del\IH}{\del t^{\beta, k}} \;\in\; H_*(\hbar^{-1}\WW,[\IH,\cdot]),
\end{eqnarray*}
with the first-order differential operator
$$\ID_{\mathrm{Euler}} := -2\hbar\frac{\del}{\del\hbar}-\sum_\gamma p_\gamma\frac{\del}{\del p_\gamma}
-\sum_\gamma q_\gamma\frac{\del}{\del q_\gamma}-\sum_{\alpha,p}t^{\alpha,p}\frac{\del}{\del t^{\alpha,p}}.$$
\end{theorem}
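Before turning to the proof strategy, it is worth recording the geometric meaning of $\ID_{\mathrm{Euler}}$ (a direct check from its definition above): on a monomial counting curves of genus $g$ with $s^+$ positive punctures, $s^-$ negative punctures and $r$ marked points we have
\begin{equation*}
\ID_{\mathrm{Euler}}\Bigl(\hbar^{g-1}\,p_{\gamma^+_1}\cdots p_{\gamma^+_{s^+}}\,q_{\gamma^-_1}\cdots q_{\gamma^-_{s^-}}\,t^{\alpha_1,p_1}\cdots t^{\alpha_r,p_r}\Bigr) \;=\; \bigl(2-2g-s^+-s^--r\bigr)\cdot\Bigl(\textrm{same monomial}\Bigr),
\end{equation*}
so that each monomial is rescaled by the Euler characteristic of the underlying punctured Riemann surface with its marked points removed, as one expects for a dilaton-type equation. \\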
In order to prove the desired equations we will start with special non-generic choices of coherent collections of
sections in the tautological bundles $\LL_{i,r}$ over all moduli spaces $\CM_r=\CM_{g,r,A}(\Gamma^+,\Gamma^-)/\IR$ and then prove that the resulting equations are covariant with respect to a change of auxiliary data.
\vspace{0.5cm}
\subsection{Cylindrical contact homology}
While the punctured curves in symplectic field theory may have arbitrary genus and arbitrary numbers of positive and negative punctures, it is shown in \cite{EGH} that there exist algebraic invariants counting only special types of curves. While in rational symplectic field theory one counts punctured curves with genus zero, contact homology is defined by further restricting to punctured spheres with only one positive puncture. Restricting further to spheres with just one positive and one negative puncture, i.e. cylinders, the resulting algebraic invariant is called cylindrical contact homology. \\
Note however that contact homology and cylindrical contact homology are not always defined. To prove the well-definedness of (cylindrical) contact homology it suffices, however, to show that there are no punctured holomorphic curves all of whose punctures are negative (or all of whose punctures are positive). To be more precise, for the well-definedness of cylindrical contact homology it actually suffices to assume that there are no holomorphic planes and that there are either no holomorphic cylinders with two positive or no holomorphic cylinders with two negative ends. \\
While the existence of holomorphic curves without positive punctures can be excluded for all contact manifolds using the maximum principle, so that contact homology is well-defined for all contact manifolds, for homological reasons there cannot exist holomorphic curves in $\IR\times M_{\phi}$ over a mapping torus $M_{\phi}$ carrying just one type of punctures, so that in this case both contact homology and cylindrical contact homology are defined. \\
Similarly to what happens in Floer homology, the chain space $C$ for cylindrical homology is defined to be the vector space generated by the formal variables $q_{\gamma}$ with coefficients which are formal power series in the $t^{\alpha,j}$-variables and Laurent series in the $z_n$-variables. Counting only holomorphic cylinders defines a differential $\del: C_*\to C_*$ by
\[ \del q_{\gamma^+} = \sum_{\gamma^-} \frac{\del^2 \Ih}{\del p_{\gamma^+}\del q_{\gamma^-}}|_{p=q=0} \cdot q_{\gamma^-}\]
with $\del\circ\del =0$ when there do not exist any holomorphic planes, so that one can define the cylindrical homology of the closed stable Hamiltonian manifold as the homology of the chain complex $(C,\del)$. The sequence of commuting Hamiltonians $\Ih_{\alpha,p}$ in rational symplectic field theory is now replaced by the linear maps
\[\del_{\alpha,p} = \frac{\del}{\del t^{\alpha,p}}\circ \del: C_*\to C_*,\;
\del_{\alpha,p}q_{\gamma^+}= \sum_{\gamma^-} \frac{\del^3 \Ih}{\del t^{\alpha,p}\del p_{\gamma^+}\del q_{\gamma^-}}|_{p=q=0} \cdot q_{\gamma^-},\]
which by the same arguments descend to maps on homology, $\del_{\alpha,p}: H_*(C,\del)\to H_*(C,\del)$, and commute on homology, $[\del_{\alpha,p},\del_{\beta,q}]_-=0$, with respect to the graded commutator $[f,g]_-=f\circ g - (-1)^{\deg(f)\deg(g)} g\circ f$.\\
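The algebraic skeleton of the preceding construction is easy to make concrete. The following toy sketch (with purely hypothetical signed cylinder counts, chosen by hand and not computed from any actual contact manifold) illustrates that the differential $\del$ is just a matrix on the span of the $q$-variables, and that $\del\circ\del=0$ forces the signed counts of broken cylinders to cancel in pairs:

```python
import numpy as np

# Toy model with four closed orbits gamma_1,...,gamma_4 (hypothetical signed
# cylinder counts, NOT computed from any contact manifold).
# Row a of D lists the coefficients of del q_a in the basis q_1,...,q_4:
#   del q_1 = q_2 + q_3,  del q_2 = q_4,  del q_3 = -q_4,  del q_4 = 0.
D = np.array([
    [0, 1,  1,  0],
    [0, 0,  0,  1],
    [0, 0,  0, -1],
    [0, 0,  0,  0],
])

# del(del q_1) = del q_2 + del q_3 = q_4 - q_4 = 0: the two broken
# configurations cancel with opposite signs.
assert np.all(D @ D == 0)

# The homology of (C, del) has dimension dim ker(del) - rank(del).
rank = np.linalg.matrix_rank(D)
print(4 - 2 * rank)  # here 4 - 2*2 = 0: everything cancels in homology
```

In an actual cylindrical contact homology computation the entries of such a matrix would be the signed counts $\frac{\del^2 \Ih}{\del p_{\gamma^+}\del q_{\gamma^-}}|_{p=q=0}$ from above.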
While we have already shown how the well-known string, dilaton and divisor equations translate from Gromov-Witten theory to SFT, in this paper we want to proceed with our project of understanding how the rich algebraic structures from Gromov-Witten theory carry over to symplectic field theory. As the next step we want to show how classical genus-zero topological recursion generalizes to symplectic field theory. As we will explain in a forthcoming paper, it follows from the computation of the SFT of a Reeb orbit with descendants outlined above that genus-zero topological recursion requires a non-equivariant version of SFT, which is generated by parametrized instead of unparametrized Reeb orbits. \\
The definition of this non-equivariant version of SFT is currently a very active field of research and related to the work of Bourgeois and Oancea in \cite{BO}, where a Gysin-type spectral sequence relating linearized contact homology (a slight generalization of cylindrical contact homology depending on a symplectic filling) and symplectic homology of this filling is established by viewing the one as the (non-)equivariant version of the other. On the other hand, since the topological recursion relations are already interesting in the case of cylindrical contact homology and the non-equivariant version of it is already understood, in this paper we will study the algebraic structure of gravitational descendants just for this special case first.
\vspace{0.5cm}
\section{Topological recursion in Floer homology from Gromov-Witten theory}
It is well-known that there is a very close relation between Gromov-Witten theory and symplectic Floer theory. Apart from the fact that the Floer (co)homology groups are isomorphic to quantum cohomology groups of the underlying symplectic manifold, which was used by Floer to prove the famous Arnold conjecture about closed orbits of the Hamiltonian vector field, it was outlined by Piunikhin-Salamon-Schwarz in \cite{PSS} how the higher structures carry over from one side to the other. As a motivation for the topological recursion relations in (non-equivariant) cylindrical homology, which we will prove in the next section, we show in this section how the topological recursion relations in Gromov-Witten theory carry over to symplectic Floer homology using a Morse-Bott correspondence.
\vspace{0.5cm}
\subsection{Topological recursion in Gromov-Witten theory}
As already mentioned in the last section it is customary in Gromov-Witten theory to introduce $r$ tautological line
bundles $\LL_1,\ldots,\LL_r$ over each moduli space $\CM_r=\CM_{g,r,A}(X)$ of closed $J$-holomorphic curves $u:(S_g,j)\to(X,J)$, where the fibre of $\LL_i$ over a curve $(u,\Si)\in\CM_r$ is again given
by the cotangent line to the underlying, possibly unstable nodal Riemann surface (without ghost components) at the $i$-th marked point, and which again can formally be defined as the pull-back of the vertical cotangent line bundle of $\pi: \CM_{r+1}\to\CM_r$ under the canonical section $\sigma_i: \CM_r\to\CM_{r+1}$ mapping to the $i$-th marked point in the fibre. \\
Assuming we have chosen a basis $A_0,\ldots,A_N$ of $H_2(X)$, we assign to every $A_i$ a formal
variable $z_i$ with grading $|z_i|=- 2 c_1(A_i)$. In order to include higher-dimensional moduli spaces we further assume that a string of closed (homogeneous) differential forms $\Theta=(\theta_1,\ldots,\theta_N)$ on $X$ is chosen and assign to
every $\theta_{\alpha}\in\Omega^*(X)$ a sequence of formal variables $t^{\alpha,j}$
with grading
\begin{equation*} |t^{\alpha,j}|=2 - 2j - \deg\theta_{\alpha}. \end{equation*}
Finally, let $\hbar$ be another formal variable of degree $|\hbar|=2(m-3)$ with $\dim X=2m$. \\
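As an illustration of these grading conventions, here is a toy example (our own, for orientation only): take the complex projective plane as target, so that $\dim X=4$ and $m=2$, let $\theta_{\alpha}$ be a two-form and let $A_1$ be the class of a line, so that $c_1(A_1)=3$. Then
\begin{eqnarray*}
|\hbar| &=& 2(2-3) \;=\; -2,\\
|t^{\alpha,j}| &=& 2-2j-2 \;=\; -2j,\\
|z_1| &=& -2\,c_1(A_1) \;=\; -6,
\end{eqnarray*}
so that in particular the primary variable $t^{\alpha,0}$ has degree zero. \\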
With this we can define the descendant potential of Gromov-Witten theory, which we will denote by $\IF$,
\begin{equation*}
\IF = \sum_{I} \int_{\CM^{(j_1,\ldots,j_r)}_{g,r,A}(X)}
\ev_1^*\theta_{\alpha_1}\wedge\ldots\wedge\ev_r^*\theta_{\alpha_r}\; \hbar^{g-1}t^I z^d,
\end{equation*}
where $t^{I}=t^{\alpha_1,j_1} \ldots t^{\alpha_r,j_r}$ and $z^d = z_0^{d_0} \cdot \ldots \cdot z_N^{d_N}$.
Expanding
\begin{equation*} \IF=\hbar^{-1}\sum_g \IF_g \hbar^g \end{equation*}
we further get a rational potential $\If=\IF_0\in\PP$, which counts only curves with genus zero. \\
Topological recursion relations are differential equations for the descendant potential which are proven using the geometric meaning of gravitational descendants, as for the string, dilaton and divisor equations. They follow from the fact that by making special non-generic choices for the sections in the tautological line bundles, the resulting zero divisors $\CM_{0,r}^{(j_1,\ldots,j_r)}(X)$ localize on the singular strata in the compactified moduli space $\CM_r(X)$. \\
We start with the following definition. An $r$-labelled tree is a triple $(T,E,\Lambda)$, where $(T,E)$ is a tree with the set of
vertices $T$ and the edge relation $E \subset T\times T$. The set $\Lambda = (\Lambda_{\alpha})$ is a decomposition of the index
set $I=\{1,\ldots,r\}=\bigcup \Lambda_{\alpha}$. We write $\alpha E\beta$ if $(\alpha,\beta)\in E$. A nodal holomorphic curve of genus zero modelled over $T=(T,E,\Lambda)$ is a tuple $(\underline{u}=(u_{\alpha}),\uz = ((z_{\alpha\beta})_{\alpha E\beta}, (z_k)))$ of holomorphic maps $u_{\alpha}: (\CP,i)\to (X,J)$ and special points $z_{\alpha\beta}, z_k \in \CP$ such that $u_{\alpha}(z_{\alpha\beta})=u_{\beta}(z_{\beta\alpha})$ and for each $\alpha\in T$ the special points in $Z_{\alpha} = \{z_{\alpha\beta}:\alpha E\beta\}\cup\{z_k: k\in\Lambda_{\alpha}\}$ are pairwise distinct. A nodal holomorphic curve $(\underline{u},\uz)$ is called stable if for every constant map $u_{\alpha}$ the underlying sphere carries at least three special points. Denote by $\IM_T=\IM_T(X)$ the space of all nodal holomorphic curves (of genus zero) modelled
over the tree $T=(T,E,\Lambda)$. Taking the union of all moduli spaces of stable nodal holomorphic curves modelled over $r$-labelled trees, we obtain the compactified space, $\CM_r = \coprod_T \IM_T$, which, equipped with the Gromov topology, provides the compactification of the moduli space $\IM_r$ of holomorphic spheres with $r$ marked points. \\
Fix $1\leq i\leq r$ and choose $1\leq j,k\leq r$ such that $i,j,k$ are pairwise different. While the string, dilaton and divisor equations are proven (see also \cite{FR}) by studying the behavior of the tautological line bundle under the natural map $\CM_{0,r}(X)\to\CM_{0,r-1}(X)$, the topological recursion relations follow by studying the behavior of the tautological line bundle under the natural map $\CM_{0,r}(X)\to\CM_{0,3}=\{\textrm{point}\}$, where the map and all marked points except $i,j,k$ are forgotten. Defining the divisor $\CM_{0,r}^{i,(j,k)}(X)$ as the union of moduli spaces of nodal holomorphic curves
\[\CM_{0,r}^{i,(j,k)}(X) = \bigcup_{T_0}\CM_{T_0}(X) = \bigcup_{T\leq T_0} \IM_T(X),\]
over all labelled trees $T_0=(T_0,E_0,\Lambda_0)$ with $T_0=\{1,2\}$, $1E_02$, $i\in\Lambda_{0,1}$, $j,k\in\Lambda_{0,2}$ or $T=(T,E,\Lambda)$ with $i\in\Lambda_{\alpha}\Rightarrow j,k\not\in \Lambda_{\alpha}$, respectively, the following theorem is a standard result in Gromov-Witten theory. \\
\begin{theorem} By studying the behavior of the tautological line bundle under the map $\CM_{0,r}(X)\to\CM_{0,3} =\{\textrm{point}\}$, where the map and all marked points except $i,j,k$ are forgotten, one can construct a special non-generic section in the tautological line bundle $\LL_{i,r}$ over $\CM_{0,r}(X)$ such that the zero divisor $\CM_{0,r}^{(0,\ldots,0,1,0,\ldots,0)}(X)$ agrees with the above divisor $\CM_{0,r}^{i,(j,k)}(X)$ of holomorphic spheres with one node, where the $i$-th marked point lies on one component and the $j$-th and the $k$-th fixed marked points lie on the other component. \end{theorem}
On the other hand, translating this into a differential equation for the descendant potential $\If$, we get
\begin{corollary} The descendant potential $\If$ of rational Gromov-Witten theory satisfies the following topological recursion relations:
\[ \frac{\del^3 \If}{\del t^{\alpha,i}\del t^{\beta,j}\del t^{\gamma,k}} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu,0}} \,\eta^{\mu\nu}
\frac{\del^3 \If}{\del t^{\nu,0}\del t^{\beta,j}\del t^{\gamma,k}}, \]
where $\eta$ is the metric given by the Poincar\'e pairing.
\end{corollary}
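For instance, in the simplest case $i=1$ with primary insertions $j=k=0$, the relation reads
\begin{equation*}
\frac{\del^3 \If}{\del t^{\alpha,1}\del t^{\beta,0}\del t^{\gamma,0}} \;=\; \frac{\del^2 \If}{\del t^{\alpha,0}\del t^{\mu,0}} \,\eta^{\mu\nu}\,
\frac{\del^3 \If}{\del t^{\nu,0}\del t^{\beta,0}\del t^{\gamma,0}},
\end{equation*}
and an induction on the descendant level $i$ shows that the rational descendant potential is determined by the primary Gromov-Witten invariants.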
Before we show how these topological recursion relations translate from Gromov-Witten theory to symplectic Floer homology, we want to introduce a slightly weaker version of the above relations. Note that for the above formula one needs to choose two auxiliary $t$-variables, so that the recursion is not symmetric with respect to permutations of the marked points. On the other hand, it will turn out that in symplectic Floer homology, and more generally in non-equivariant cylindrical contact homology, we need to treat all additional marked points in a symmetric way in order to obtain coherence for the chosen special collections of sections. Because of this, we emphasize that the above topological recursion relations immediately lead to the following averaged version.
\begin{corollary} The descendant potential $\If$ of rational Gromov-Witten theory satisfies the following averaged version of the topological recursion relations,
\[ N(N-1) \frac{\del\If}{\del t^{\alpha,i}} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu,0}} \,\eta^{\mu\nu}
N(N-1)\frac{\del\If}{\del t^{\nu,0}}, \]
where $N:=\displaystyle{\sum_{\beta,j}}t^{\beta,j}\frac{\del}{\del t^{\beta,j}}$ is the differential operator which counts the number of marked points.
\end{corollary}
For the proof observe that the commutator of $\displaystyle{\frac{\del}{\del t^{\beta_1,j_1}}}$ and $t^{\beta_2,j_2}$ is one when $(\beta_1,j_1)=(\beta_2,j_2)$ and vanishes in all other cases, so that $$N(N-1)=\sum_{\beta_1,j_1}\sum_{\beta_2,j_2}t^{\beta_1,j_1}t^{\beta_2,j_2}\frac{\del}{\del t^{\beta_1,j_1}}\frac{\del}{\del t^{\beta_2,j_2}}.$$
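This operator identity is elementary but easy to get wrong; the following symbolic sanity check works in a hypothetical truncation to three $t$-variables, treated as even (signs coming from odd variables are ignored here):

```python
import sympy as sp

# Check N(N-1) = sum_{b1,b2} t_{b1} t_{b2} d/dt_{b1} d/dt_{b2}
# in a toy truncation with three even t-variables.
t1, t2, t3 = sp.symbols('t1 t2 t3')
ts = (t1, t2, t3)

def N(f):
    # marked-point counting operator N = sum_b t_b d/dt_b
    return sum(tb * sp.diff(f, tb) for tb in ts)

def rhs(f):
    # sum over ordered pairs (b1, b2), including b1 = b2
    return sum(ta * tb * sp.diff(f, ta, tb) for ta in ts for tb in ts)

# On a monomial of total degree d both sides act as d(d-1).
f = t1**3 * t2 + 5 * t2 * t3**2 + 7
assert sp.expand(N(N(f)) - N(f)) == sp.expand(rhs(f))
```

On each monomial of total degree $d$ in the $t$-variables both sides simply multiply by $d(d-1)$, which is the content of the commutator argument above.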
\vspace{0.5cm}
\subsection{From Gromov-Witten theory to Floer homology}
Recall that the goal of this section is to translate the above topological recursion relations from Gromov-Witten theory to symplectic Floer theory. In order to do so, we recall in this subsection the well-known relation between Gromov-Witten theory and symplectic Floer theory. \\
First recall that the Floer cohomology $HF^*=HF^*(H)$ of a time-dependent Hamiltonian $H: S^1\times X\to \IR$ is defined as the homology of the chain complex $(CF^*,\del)$, where $CF^*$ is the vector space freely generated by the formal variables $q_{\gamma}$ assigned to all one-periodic orbits of $H$ with coefficients which are Laurent series in the variables $z_n$. On the other hand, the differential $\del: CF^*\to CF^*$ is given by counting elements in the moduli spaces $\CM(\gamma^+,\gamma^-)$ of cylinders $u:\IR\times S^1\to X$ satisfying the perturbed Cauchy-Riemann equation $\CR_{J,H}(u)=\del_s u+J_t(u)(\del_t u - X^H_t(u)) =0$ with a one-periodic family of almost complex structures $J_t$ and where $X^H_t$ denotes the symplectic gradient of $H_t$, and which converge to the one-periodic orbits $\gamma^{\pm}$ as $s\to\pm\infty$. \\
In the same way as the group of Moebius transforms acts on the solution space of Gromov-Witten theory and the moduli space is defined only after dividing out these obvious symmetries, $\IR$ acts on the above space of Floer cylinders by translations in the domain, so that the moduli space is again defined after dividing out this natural action. On the other hand, since the Hamiltonian (and the almost complex structure) depends on the $S^1$-coordinate, it will become important that we do not divide out the action of the circle. \\
In order to prove the Arnold conjecture about the number of one-periodic orbits of $H$ one shows that the Floer cohomology groups are isomorphic to the quantum cohomology groups $QH^*(X)$ of the underlying symplectic manifold, which are defined as the vector space freely generated by formal variables $t^{\alpha}=t^{\alpha,0}$, assigned as before to a chosen basis of the singular cohomology of $X$, with coefficients which are Laurent series in the $z_n$-variables. Note that as vector spaces the only difference to the usual cohomology groups $H^*(X)$ lies in the different choice of coefficients. \\
One way to prove the above isomorphism is by studying the behavior of the moduli spaces of Floer cylinders as the Hamiltonian $H$ converges to zero. Since in the limit $H=0$ every point on $X$ corresponds to a closed orbit, i.e. the orbits are no longer isolated but appear in finite-dimensional manifolds, we arrive at a Morse-Bott version of Floer cohomology in the sense of Bourgeois, see \cite{B}. \\
It follows from the removable singularity theorem for (unperturbed) holomorphic curves that in the limit the moduli spaces of Floer trajectories $\CM(\gamma^+,\gamma^-)$ are replaced by the moduli spaces of holomorphic spheres $\CM_{0,2}(X)$ with two marked points from Gromov-Witten theory, where at each of the two points cohomology classes $\alpha^+, \alpha^-\in H^*(X)$ from the target manifold $X$ are pulled back using the natural evaluation map. It follows that in the limit $H=0$ the chain space is given by the vector space freely generated by formal variables $t^{\alpha}$ with coefficients which are Laurent series in the $z_n$-variables.\\
While holomorphic spheres with two marked points contribute to the Gromov-Witten potential of $X$, it is very important to observe that in the Morse-Bott limit $H=0$ we no longer divide out all symmetries $\IR\times S^1$ of the domain, but only the $\IR$-shift as in Floer homology. It follows that the moduli space of (non-trivial) holomorphic spheres in the Morse-Bott limit always carries a non-trivial $S^1$-symmetry. In particular, the differential for $H=0$ is indeed zero, so that the quantum cohomology groups agree with the underlying chain groups, which proves that the Floer cohomology groups are indeed isomorphic to the quantum cohomology groups, $HF^*(H)\cong QH^*(X)$. \\
Next we want to compare the natural operations on Floer and quantum cohomology. First note that there is a natural product structure on quantum cohomology, the so-called quantum cup product, given by counting holomorphic spheres with three marked points,
\[ QH^*(X)\otimes QH^*(X)\to QH^*(X),\; (t^{\alpha},t^{\beta}) \mapsto \frac{\del^3 \If}{\del t^{\alpha} \del t^{\beta} \del t^{\mu}} \eta^{\mu\nu} t^{\nu},\]
which we can also view as an action of $QH^*(X)$ on itself. Note that, while the position of the first two marked points is kept fixed anyway, we assume that the position of the third marked point is also fixed in order to determine unique coordinates on $\CP$. \\
On the other hand, in \cite{PSS} it was already shown that one can define the corresponding action of $QH^*(X)$ on the Floer cohomology groups $HF^*(H)$ by counting Floer cylinders $u:\IR\times S^1\to X$, $\CR_{J,H}(u)=0$, with an additional marked point with fixed position $(0,0)\in\RS$, as in the description of the moduli spaces of the quantum product. By the same arguments as used for the differential, passing to the Morse-Bott limit $H=0$ just gives back the quantum cup product defined above. \\
Note that on the quantum cohomology side we can either assume that the third marked point is also fixed, or that it varies and we divide out the symmetry group $\RS$ of the two-punctured sphere afterwards. On the other hand, since in the Floer case we now only divide by $\IR$ and not by $\RS$ on the domain, it follows that in the second case with varying position the third marked point must still be constrained to lie on some ray $\IR\times\{t_0\} \subset \RS$, where without loss of generality we can assume that $t_0=0$. Keeping the picture of marked points with varying positions, as in Gromov-Witten theory and SFT, we hence have the following
\begin{proposition} With respect to the above Morse-Bott correspondence, counting holomorphic spheres with three marked points in Gromov-Witten theory corresponds in symplectic Floer theory to counting Floer cylinders with one additional marked point constrained to $\IR\times\{0\}\subset \RS$. \end{proposition}
More generally, adding an arbitrary number of additional marked points to include the full Gromov-Witten potential, it follows that we indeed have the following
\begin{proposition} With respect to the above Morse-Bott correspondence, counting holomorphic spheres with three or more marked points in Gromov-Witten theory corresponds on the Floer side to counting Floer cylinders with additional marked points, where only the first marked point is constrained to $\IR\times\{0\}\subset \RS$. \end{proposition}
\vspace{0.5cm}
\subsection{Topological recursion in Floer homology}
With the above translation scheme from Gromov-Witten theory to symplectic Floer theory in hand, we now will transfer the topological recursion relations from Gromov-Witten theory, outlined in the first subsection, to symplectic Floer theory. \\
But before we can give a precise algebraic statement, the propositions of the last subsection suggest that we first need to enrich the algebraic formalism as follows. As in Gromov-Witten theory and SFT we now enrich the Floer complex by requiring that the underlying chain space $CF^*$ is again the vector space freely generated by the formal variables $q_{\gamma}$ but with coefficients which are now not only Laurent series in the $z_n$-variables, but also formal power series in the $t^{\alpha,j}$-variables. \\
Denote by $\CM_r(\gamma^+,\gamma^-)$ the moduli space of Floer trajectories with $r$ marked points. Introducing $r$ tautological line bundles $\LL_{i,r}$ on $\CM_r(\gamma^+,\gamma^-)$ and coherent collections of sections as in cylindrical contact homology (and without distinguishing between constrained and unconstrained points), we again denote by $\CM_r^{(j_1,\ldots,j_r)}(\gamma^+,\gamma^-)$ the corresponding zero divisors. \\
As in cylindrical contact homology we can then enrich the differential $\del: CF^*\to CF^*$ by pulling back differential forms from the target $X$ and introducing descendants,
\[\del(q_{\gamma^+}) = \sum \int_{\CM^{(j_1,\ldots,j_r)}_r(\gamma^+,\gamma^-)} \ev_1^*\theta_{\alpha_1}\wedge \ldots \wedge \ev_r^*\theta_{\alpha_r} \; t^I q_{\gamma^-} z^d\]
with $t^I=t^{\alpha_1,j_1} \ldots t^{\alpha_r,j_r}$. As in cylindrical contact homology we then define
\[ \del_{(\alpha,i)} := \frac{\del}{\del t^{\alpha,i}}\circ \del.\]
As we have seen in the last subsection we further need to include cylinders with one marked point constrained to $\IR\times\{0\}$. In order to distinguish these new linear maps from the linear maps $\del_{\alpha,p}$ obtained by counting holomorphic cylinders with one \emph{un}constrained marked point, we denote them by $\del_{\check{\alpha},p}: CF^*\to CF^*$. In the same way as for $\del_{\alpha,p}$ it can be shown that $\del_{\check{\alpha},p}$ descends to a linear map on homology, and commutes on homology, $[\del_{\check{\alpha},p},\del_{\check{\beta},q}]_-=0$, with respect to the graded commutator $[f,g]_-=f\circ g - (-1)^{\deg(f)\deg(g)} g\circ f$.\\
With this we can formulate our proposition about topological recursion in symplectic Floer theory as follows. In contrast to Gromov-Witten theory, we now obtain three different equations, depending on whether we remember both punctures, one puncture and one additional marked point, or two additional marked points. On the other hand, within each class we want to treat the additional marked points in a symmetric way, as in the averaged version of the topological recursion relations in Gromov-Witten theory stated in subsection 3.1. Note that this is needed for coherence, as we will show in the proof of the corresponding relations for non-equivariant cylindrical contact homology in subsection 4.3.
\begin{proposition}\label{TRR-Floer} With respect to the above Morse-Bott correspondence, the topological recursion relations from Gromov-Witten theory have the following translation to symplectic Floer theory: For three \emph{different} non-generic special choices of coherent sections we have
\begin{itemize}
\item[(2,0):] $$ \del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} \del_{\check{\nu}}$$
\item[(1,1):] $$ N\,\del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N\, \del_{\check{\nu}} + \frac{1}{2}[\del_{(\check{\alpha},i-1)}, \check{N}\,\del]_+ $$
\item[(0,2):] $$N(N-1)\,\del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N(N-1)\, \del_{\check{\nu}} +[\del_{(\check{\alpha},i-1)},\check{N}(N-1)\,\del]_+$$
\end{itemize}
\vspace{0.5cm}
where $N:=\displaystyle{\sum_{\beta,j}}t^{\beta,j}\frac{\del}{\del t^{\beta,j}}$, $\check{N}:=\displaystyle{\sum_{\beta,j}} t^{\beta,j}\frac{\del}{\del \check{t}^{\beta,j}}$ and $[f,g]_+=f\circ g +(-1)^{\deg(f)\deg(g)} g\circ f$ denotes the graded anti-commutator with respect to operator composition. Notice that, since $\check{N}$ and $\del$ have odd degree, in the above formulas the anti-commutator always corresponds to a sum.
\end{proposition}
\begin{proof}
In order to translate the localization result of Gromov-Witten theory to symplectic Floer theory, we first replace the holomorphic sphere with three or more marked points by a Floer cylinder with one marked point constrained to $\IR\times\{0\}$ and possibly other unconstrained additional marked points, where we assume that the constrained marked point agrees with the $i$-th marked point carrying the descendant. In order to obtain the three different equations we have to decide whether the $j$-th and $k$-th marked points agree with the positive or negative puncture or some other additional marked point, and then use the localization theorem from the first subsection, which states that the zero divisor localizes on nodal spheres with two smooth components, where the $i$-th marked point lies on one component and the $j$-th and the $k$-th marked points lie on the other component. \\
In order to obtain equation (2,0) we remember both punctures ($j=+$, $k=-$). While for each holomorphic curve in the corresponding divisor $\CM^{i,(+,-)}_r(\gamma^+,\gamma^-)\subset \CM_r(\gamma^+,\gamma^-)$ the component with the $j$-th and the $k$-th marked points is a Floer cylinder, the other component with the $i$-th marked point is a sphere, since by the compactness theorem in Floer theory both components are still connected by a node (and not a puncture). \\
On the other hand, in order to obtain equation (1,1), we remember one of the two punctures, $j=+$ (or $j=-$), and another marked point. While for each holomorphic curve in $\CM^{i,(+,k)}_r(\gamma^+,\gamma^-)\subset\CM_r(\gamma^+,\gamma^-)$ the component carrying the $j$-th and the $k$-th marked points still needs to be a Floer cylinder and not a holomorphic sphere, as the $j$-th marked point is a puncture, the other component carrying the $i$-th marked point can either be a holomorphic sphere or a Floer cylinder, depending on whether both components are connected by a node or a puncture. Note that in the second case this connecting puncture is necessarily the negative puncture for the Floer cylinder with the $i$-th marked point and the positive puncture for the Floer cylinder with the $k$-th marked point. On the other hand, both Floer cylinders carry a special marked point, namely the $i$-th or the $k$-th marked point, respectively, which by the above Morse-Bott correspondence are constrained to $\IR\times\{0\}$. \\
Finally, in order to establish equation (0,2), we remember none of the two punctures. Since only Floer cylinders and holomorphic spheres appear in the compactification, it follows that for the above equation we must just sum over all choices for both components being either a cylinder or a sphere and again use the above Morse-Bott correspondence.
\end{proof}
\begin{remark} Note that in order to make the above proof precise, one needs to rigorously establish an isomorphism between Gromov-Witten theory and symplectic Floer theory \emph{beyond} the isomorphism between quantum cohomology and Floer cohomology groups together with the action of the quantum cohomology on them proven in \cite{PSS} (involving the full `infinity structures'). On the other hand, while we expect that the Morse-Bott picture from above should lead to such an isomorphism in an obvious way, we are satisfied with the level of rigor (and call our result `proposition' instead of `theorem'), not only since it should just serve as a motivation for our topological recursion result in non-equivariant cylindrical contact homology, but also since our rigorous proof for that case will in turn directly lead to a (rigorous) proof of this proposition. \end{remark}
\vspace{0.5cm}
\section{Topological recursion in non-equivariant cylindrical homology}
Motivated by the topological recursion result in symplectic Floer homology discussed in the last section, we want to prove the corresponding topological recursion result for cylindrical contact homology. Recall that for the well-definedness of cylindrical contact homology we will assume that there are no holomorphic planes and that there are either no holomorphic cylinders with two positive or no holomorphic cylinders with two negative ends. Recall further that this is satisfied in the contact case when there are no holomorphic planes or when the stable Hamiltonian manifold is a symplectic mapping torus. Since, in contrast to Floer homology, the closed orbits in cylindrical contact homology are not parametrized by $S^1$, it turns out that we need to work with a non-$S^1$-equivariant version of cylindrical contact homology, where we follow the ideas in Bourgeois-Oancea's paper \cite{BO}. Note that in the contact case our results indeed immediately generalize from cylindrical contact homology to linearized contact homology which depends on a symplectic filling and is defined for any fillable contact manifold, e.g. since it is still isomorphic to the positive symplectic homology which only counts true cylinders.
\vspace{0.5cm}
\subsection{Non-equivariant cylindrical homology}
Recall that in the last section about topological recursion in Gromov-Witten theory and symplectic Floer theory we assumed that the underlying symplectic manifold $X$ is closed, i.e. compact and without boundary. On the other hand, it is well-known that one can also define a version of Floer homology for compact manifolds $X$ with contact boundary $V=\del X$, which is called symplectic homology. Note that here and in what follows we make no distinction between $X$ and its completion $X\cup\IR^+\times \del X$. It is defined, see also \cite{BO} for details, as a direct limit of Floer homology groups $SH_*(X)=\overrightarrow{\lim}\; HF_*(H)$ over an ordered set of admissible time-dependent Hamiltonians $H: S^1\times X\to\IR$. \\
On the other hand, it follows from the definition of the set of admissible Hamiltonians that, in the case when the Hamiltonian is time-independent, the one-periodic orbits of $H$ are critical points of $H$ in the interior of $X$ or correspond to closed orbits of the Reeb vector field on the contact boundary $V=\del X$, where both sets of closed orbits can be distinguished by the symplectic action. While it is easily seen that the chain subcomplex generated by the critical points computes the singular cohomology of $X$, $H^*(X)\cong H_{\dim X-*}(X,\del X)$, it follows using the action filtration that one can define $SH^+_*(X)$, the positive part of symplectic homology, or positive symplectic homology in short, as the homology of the corresponding quotient complex. \\
While for time-independent Hamiltonians the generators of positive symplectic homology correspond to closed orbits of the Reeb vector field on $V=\del X$, note that in the definition of (positive) symplectic homology, in the same way as in Floer homology, it is explicitly assumed that the underlying Hamiltonians $H: S^1\times X\to \IR$ are $S^1$-dependent. In particular, it follows that, in contrast to the cylindrical homology $HC_*(V)$, the positive symplectic homology $SH^+_*(X)$ is generated by parametrized instead of unparametrized orbits. \\
In their paper \cite{BO} the authors have proven that positive symplectic homology and cylindrical contact homology are related via a Gysin-type sequence. To be more precise, in their paper they use linearized contact homology instead of cylindrical contact homology, since the former is defined for all contact manifolds with a strong symplectic filling, and agrees with cylindrical contact homology as long as there are no holomorphic planes in the filling. For the proof Bourgeois and Oancea have defined a non-equivariant version of linearized contact homology, which is generated by parametrized instead of unparametrized orbits, and shown that it is isomorphic to positive symplectic homology. \\
The definition of this non-equivariant version of cylindrical contact homology is motivated by the following Morse-Bott approach to positive symplectic homology starting from an admissible set of time-\emph{in}dependent Hamiltonians $H^0: X\to\IR$. Recall that in this case the set of closed parametrized orbits of the Hamiltonian vector field is given by the set of closed parametrized Reeb orbits, which in turn is obtained from the usual set of closed (unparametrized) Reeb orbits by assigning to every closed unparametrized orbit the corresponding $S^1$-family of closed parametrized orbits. While in this case the closed Hamiltonian orbits are no longer nondegenerate, in particular no longer isolated as required in the usual definition of symplectic (Floer) homology, they still come in $S^1$-families, and the authors of \cite{BO} resolved the problem by applying a Morse-Bott approach as in Bourgeois' thesis \cite{B}.\\
Indeed, in order to obtain from a time-independent Hamiltonian $H^0: X\to\IR$ a family of time-dependent Hamiltonians $H^{\epsilon}: S^1\times X\to\IR$ with only nondegenerate closed orbits, one can choose for each closed (unparametrized) orbit $\gamma$ of the Hamiltonian $H^0$ a self-indexing Morse function $f_{\gamma}: \gamma \cong S^1\to\IR$, i.e. with one maximum and one minimum. Further choosing for each closed orbit a tubular neighborhood $(r,t)\in (-1,+1)\times S^1 \hookrightarrow X$ such that $\{0\}\times S^1$ corresponds to $\gamma$, and a smooth cut-off function $\varphi: (-1,+1)\to\IR$, so that we can extend each chosen Morse function $f_{\gamma}: S^1\to\IR$ to a function on $X$ by $\tilde{f}_{\gamma}(r,t) = \varphi(r)\cdot f_{\gamma}(t)$ and zero-extension, we can define the family of perturbed time-dependent Hamiltonians $H^{\epsilon}: S^1\times X\to\IR$ by $H^{\epsilon}(t,\cdot) = H^0+\epsilon\cdot \sum_{\gamma} \tilde{f}_{\gamma}(\cdot,t)$. \\
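To have a concrete model in mind, note that one possible (of course non-unique) explicit choice is
$$f_{\gamma}(t) \,=\, \frac{1}{2}\bigl(1-\cos(2\pi t)\bigr), \qquad t\in S^1=\IR/\IZ,$$
which is indeed self-indexing: its minimum $\check{\gamma}$ at $t=0$ has critical value $0$ and its maximum $\hat{\gamma}$ at $t=\frac{1}{2}$ has critical value $1$, so that the critical values agree with the Morse indices. For the cut-off one may take any smooth function $\varphi: (-1,+1)\to[0,1]$ with $\varphi\equiv 1$ near $r=0$ and $\varphi\equiv 0$ near $r=\pm 1$. \\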
It can be shown that the closed (parametrized) orbits of each Hamiltonian $H^{\epsilon}$ are in one-to-one correspondence with the critical points of the Morse functions $f_{\gamma}$. Since all the Morse functions $f_{\gamma}: S^1\to\IR$ were chosen to have precisely one maximum $\hat{\gamma}$ and one minimum $\check{\gamma}$, it follows that each closed unparametrized orbit of the time-independent Hamiltonian $H^0:X\to\IR$ gives rise to \emph{two} closed parametrized orbits $\hat{\gamma}_{\epsilon}$ and $\check{\gamma}_{\epsilon}$ for each time-dependent Hamiltonian $H^{\epsilon}: S^1\times X\to\IR$. \\
Similarly to the approach in Bourgeois' thesis \cite{B}, in \cite{BO} the authors define a Morse-Bott complex for positive symplectic homology $SH^+_*(X)$ based on admissible time-independent Hamiltonians $H^0$ whose generators are the critical points $\hat{\gamma}$ and $\check{\gamma}$ on all closed unparametrized orbits $\gamma$ of $H^0$ and whose differential counts 'cascades' consisting of solutions to the Floer equation for $H^0$ connected by gradient trajectories of the Morse functions $f_{\gamma}$. On the other hand, it is clear that this immediately generalizes to cylindrical contact homology in the sense that one can define a non-equivariant version of cylindrical contact homology by choosing self-indexing Morse functions $f_{\gamma}$ for all closed (unparametrized) Reeb orbits $\gamma$. \\
Indeed, with this we can define non-equivariant cylindrical contact homology as the homology of the Morse-Bott complex whose generators are the critical points $\hat{\gamma}$ and $\check{\gamma}$ on all closed Reeb orbits $\gamma$ and whose differential counts 'cascades' consisting of holomorphic cylinders in $\IR\times V$ connected by gradient trajectories of the Morse functions $f_{\gamma}$. In other words, introducing two formal variables $\hat{q}_{\gamma}$ and $\check{q}_{\gamma}$ for each closed Reeb orbit $\gamma$ (corresponding to the two critical points $\hat{\gamma}$ and $\check{\gamma}$), the chain space for non-equivariant cylindrical contact homology $HC^{\textrm{non-}S^1}_*(V)$ is the vector space generated by the formal variables $\hat{q}_{\gamma}$ and $\check{q}_{\gamma}$ with coefficients which are formal power series in the $t^{\alpha,j}$-variables and Laurent series in the $z_n$-variables. Note that the chain space naturally splits, $$C^{\textrm{non-}S^1}_*=\hat{C}_*\oplus\check{C}_*,$$ where $\hat{C}_*$, $\check{C}_*$ is generated by the formal variables $\hat{q}_{\gamma}$, $\check{q}_{\gamma}$, respectively. Defining the differential $\del: \hat{C}_*\oplus\check{C}_*\to\hat{C}_*\oplus\check{C}_*$ as described above by counting Morse-Bott cascades, we furthermore
define as in Floer homology, \[ \del_{(\alpha,i)} := \frac{\del}{\del t^{\alpha,i}}\circ \del: \hat{C}_*\oplus\check{C}_*\to\hat{C}_*\oplus\check{C}_*.\]
Furthermore, as in Floer homology, we need to include cylinders with one marked point constrained to $\IR\times\{0\}$. In order to distinguish these new linear maps from the linear maps $\del_{\alpha,p}$ obtained by counting holomorphic cylinders with one \emph{un}constrained marked point, we again denote them by $\del_{\check{\alpha},p}: \hat{C}_*\oplus\check{C}_*\to\hat{C}_*\oplus\check{C}_*$. In the same way as for $\del_{\alpha,p}$, it can be shown that $\del_{\check{\alpha},p}$ descends to a linear map on non-equivariant cylindrical homology.
\vspace{0.5cm}
\subsection{Topological recursion in non-equivariant cylindrical homology}
With the reasonable assumption in mind that the topological recursion relations in Floer homology also hold true in (positive) symplectic homology and hence also carry over to non-equivariant cylindrical contact homology, we now formulate our main theorem. Since we assumed that there are no holomorphic planes in $\IR\times V$ and hence the usual Gromov compactness result holds, we define the Gromov-Witten potential $\If$ of a stable Hamiltonian manifold as the part of the rational SFT Hamiltonian $\Ih$ of $V$ counting holomorphic spheres without punctures, $\If = \Ih|_{p=0=q}$. Note that in the contact case this agrees with the Gromov-Witten potential of a point due to the maximum principle, and is determined by the Gromov-Witten potential of the symplectic fibre in the case when the stable Hamiltonian manifold is a symplectic mapping torus, as every holomorphic map $\CP\to\IR\times M_{\phi}\to \IR\times S^1\cong \IC^*$ is constant by Liouville's theorem.
\begin{theorem}\label{TRR-noneqCH} For three \emph{different} non-generic special choices of coherent sections the following three \emph{topological recursion relations} hold in \emph{non-equivariant} cylindrical contact homology
\begin{itemize}
\item[(2,0):] $$ \del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} \del_{\check{\nu}}$$
\item[(1,1):] $$ N\,\del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N\, \del_{\check{\nu}} + \frac{1}{2}[\del_{(\check{\alpha},i-1)}, \check{N}\,\del]_+ $$
\item[(0,2):] $$N(N-1)\,\del_{(\check{\alpha},i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N(N-1)\, \del_{\check{\nu}} +[\del_{(\check{\alpha},i-1)},\check{N}(N-1)\,\del]_+$$
\end{itemize}
\vspace{0.5cm}
where $N:=\displaystyle{\sum_{\beta,j}}t^{\beta,j}\frac{\del}{\del t^{\beta,j}}$, $\check{N}:=\displaystyle{\sum_{\beta,j}} t^{\beta,j}\frac{\del}{\del \check{t}^{\beta,j}}$ and $[f,g]_+=f\circ g +(-1)^{\deg(f)\deg(g)} g\circ f$ denotes a graded anti-commutator with respect to the operator composition. Notice that, $\check{N}$ and $\del$ having odd degree, in the above formulas the anti-commutator always corresponds to a sum.
\end{theorem}
\vspace{0.5cm}
\subsection{Proof of the Main Theorem}
In this section we establish the psi-class localization result needed to prove Theorem \ref{TRR-noneqCH}, but also Proposition \ref{TRR-Floer}. We will show how coherent sections of tautological bundles on the moduli space of SFT-curves can be chosen such that their zero locus localizes on nodal configurations and boundary strata (multi-level curves).\\
Such localization will be the analogue, in presence of coherence conditions, of the usual result in Gromov-Witten theory describing the divisor $\psi_{i,r}=c_1(\LL_{i,r})$ on $\CM_{0,r,A}(X)$ as the locus of nodal curves where the $i$-th marked point lies on a different component with respect to a pair of other reference marked points. We will divide the discussion into three parts, corresponding to the three kinds of topological recursion relations $(2,0)$, $(1,1)$ and $(0,2)$.\\
For the moment we stay general and consider the full SFT moduli space of curves with any number of punctures and marked points in a general manifold with stable Hamiltonian structure. In particular we will describe a special class of coherent collections of sections $s_{i,r}$ for the tautological bundles $\LL_{i,r}$ on the moduli spaces $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$. We will then consider a sequence inside such a class converging to a (no longer coherent) collection of sections whose zeros will be completely contained in the boundary strata (both nodal and multi-floor curves) of $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$.\\
In \cite{FR} we already explained how to choose a (non-generic) coherent collection of sections $s_{i,r}$ in such a way that, considering the projection $\pi_r:\CM_{g,r,A}(\Gamma^+,\Gamma^-)\to\CM_{g,r-1,A}(\Gamma^+,\Gamma^-)$ consisting in forgetting the $r$-th marked point, the following \emph{comparison formula} holds for their zero sets:
\begin{equation}\label{comparison}
s_{i,r}^{-1}(0) = \pi_{r}^{-1}(s_{i,r-1}^{-1}(0))+D^\mathrm{const}_{i,r}.
\end{equation}
The sum on the right-hand side denotes the union with the submanifold $D^\mathrm{const}_{i,r}$ of nodal curves with a constant sphere bubble carrying the $i$-th and $r$-th marked points, transversally intersecting
$\pi_{r}^{-1}(s_{i,r-1}^{-1}(0))$.\\
We wish to stress the fact that such a choice is possible because any codimension $1$ boundary of the moduli space $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ decomposes into a product of moduli spaces where the factor containing the $i$-th marked point carries the same well-defined projection map $\pi_{r}$. This is because codimension $1$ boundary strata are always formed by non-constant maps, which remain stable after forgetting a marked point.\\
In fact coherence also requires that our choice of coherent collection of sections is symmetric with respect to permutations of the marked points (other than the $i$-th, carrying the descendant). We can reiterate this procedure until we forget all the marked points but the $i$-th, easily getting
\begin{equation}\label{sdd}
s_{i,r}^{-1}(0)=(\pi_{1}^*\circ\ldots\circ\hat{\pi}_i^*\circ\ldots\circ\pi_{r}^* \, s_{i,1})^{-1}(0) + \sum_{\substack{I\sqcup J=\{1,\ldots,r\}\\ \{i\}\subsetneq I \subseteq \{1,\ldots,r\}}} D^\mathrm{const}_{(I|J)}
\end{equation}
where $D^\mathrm{const}_{(I|J)}$ is the submanifold of nodal curves with a constant sphere bubble carrying the marked points labeled by indices in $I$. Such choice of coherent sections is indeed symmetric with respect to permutation of the marked points.\\
However, forgetting all of the marked points is not what we want to do in general, so we may take another approach, which does not specify whether the points we are forgetting are marked points or punctures. Forgetting punctures only makes sense after forgetting the map too.\\
Indeed, consider the projection $\sigma:\CM_{g,r,A}(\Gamma^+,\Gamma^-)\to \CM_{g,r+|\Gamma^+|+|\Gamma^-|}$ to the Deligne-Mumford moduli space of stable curves consisting in forgetting the holomorphic map and asymptotic markers, consequently stabilizing the curve, and considering punctures just as marked points. For simplicity, denote $n=r+|\Gamma^+|+|\Gamma^-|$. The tautological bundle $\LL_{i,r}$ on $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ coincides, by definition, with the pull-back along $\sigma$ of the tautological bundle $\LL_{i,n}$ on $\CM_{g,n}$ away from the boundary stratum $D_i\subset \CM_{g,r,A}(\Gamma^+,\Gamma^-)$ of nodal curves with a (possibly non-constant) bubble carrying the $i$-th marked point alone and the boundary stratum $D'_i\subset \CM_{g,r,A}(\Gamma^+,\Gamma^-)$ of multi-level curves with a level consisting in a holomorphic disk bounded by a Reeb orbit and carrying the $i$-th marked point.\\
At this point we are going to make the following assumption, which will hold throughout the paper.\\
{\bf Assumption:} In $V\times \IR$ there is no holomorphic disk bounded by a Reeb orbit. This implies, in particular, $D'_i=\emptyset$.\\
We choose now a coherent collection of sections $\tilde{s}_{i,n}$ on the Deligne-Mumford moduli space of stable curves $\CM_{g,n}$. The definition of such coherent collection is the same as for the space of maps, but this time we impose coherence on each real-codimension $2$ divisor of nodal curves (as opposed to the case of maps, where we only imposed coherence at codimension $1$ boundary strata). Such a coherent collection pulls back to a coherent collection of sections on $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ away from the already considered boundary stratum $D_i$ (the only one still present after the above assumption). We then scale such sections to zero smoothly as they reach $D_i$ using coherent cut-off functions (as we did for the comparison formula in \cite{FR}) getting this way a coherent collection of sections on the full $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$. Precisely as in Gromov-Witten theory (see e.g. \cite{G}) we can see easily that the zero we added at $D_i$ has degree $1$.\\
Once more, such a construction is possible because any codimension $1$ boundary of the moduli space $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ decomposes into a product of moduli spaces where the factor containing the $i$-th marked point carries the same well-defined projection map $\sigma$. This is because codimension $1$ boundary strata are always formed by multi-level curves, each level carrying at least two punctures (by the above assumption) and the $i$-th marked point, hence remaining stable after forgetting the map.\\
We then get, on $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$,
\begin{equation}\label{loc1}
s_{i,r}^{-1}(0)=(\sigma^* \, \tilde{s}_{i,n})^{-1}(0) + D_i
\end{equation}
This construction of a coherent collection of sections for $\CM_{g,r,A}(\Gamma^+,\Gamma^-)$ moves the problem of explicitly describing their zero locus to the more tractable space of curves $\CM_{g,n}$. Notice first of all that, since $\CM_{g,n}$ has no (codimension $1$) boundary, any generic choice (coherent or not) of sections for the tautological bundles will give rise to a zero locus with the same homology class. However, when we pull back such a section via $\sigma$, we want to remember more than just the homology class of its zero (as required by coherence), so we need to make some specific choice.\\
Let us now restrict ourselves to genus $0$. In Gromov-Witten theory, where there is no need for coherence conditions, we are used to selecting two marked points besides the one carrying the psi-class and successively forgetting all the other ones until we drop down to $\CM_{0,3}=\mathrm{pt}$, where the tautological bundle $\LL_{i,3}$ is trivial. This approach is not possible in our SFT context where we require coherence on $\CM_{g,n}$. Indeed, if we select two punctures labeled by $j$ and $k$, we automatically lose the required symmetry with respect to permutation of the marked points. To overcome this problem we need to use coherent collections of multisections (whose image is a branched manifold) in order to average over all the possible choices of a pair of punctures.\\
Let us choose an averaged (over all the possible ways of choosing $2$ marked points out of $n$) multi-section for $\LL_{i,n}$ on $\CM_{0,n}$ such that its zero locus has the (averaged) form
\begin{equation}\label{average}
s_{i,n}^{-1}(0)=\frac{(n-3)!}{(n-1)!}\sum_{\substack{2\leq k\leq n-2\\ I\sqcup J=\{1,\ldots,n\}\\i\in I,|J|=k}} \frac{k!}{(k-2)!} D_{(I|J)}
\end{equation}
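As a quick sanity check of formula (\ref{average}), consider the smallest non-trivial case $n=4$: the prefactor equals $\frac{(n-3)!}{(n-1)!}=\frac{1}{6}$ and the only admissible value of $k$ is $k=2$ with $\frac{k!}{(k-2)!}=2$, so that
$$s_{i,4}^{-1}(0)\,=\,\frac{1}{3}\sum_{\substack{I\sqcup J=\{1,\ldots,4\}\\ i\in I,\ |J|=2}} D_{(I|J)},$$
i.e. each of the three boundary points of $\CM_{0,4}\cong\mathbb{P}^1$ appears with weight $\frac{1}{3}$, and the total degree $3\cdot\frac{1}{3}=1$ agrees with $\int_{\CM_{0,4}}\psi_i=1$.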
It is easy to see that such a (holomorphic, non-generic) section can be perturbed to a generic smooth section which is also coherent (this is in fact a statement about all of the sections $s_{i,j}$ together, for $3\leq j\leq n$). The zero locus of such a (multi)section will form a (branched) codimension $2$ locus in the tubular neighborhood of the unperturbed zero locus $s_{i,j}^{-1}(0)$, transversally intersecting such locus. For notational and visualization simplicity we will analyze the case of $\CM_{0,5}$ in the example below, leaving the general case as an exercise for the reader. Once such a coherent collection of sections $\tilde{s}_{i,j}$ is constructed, we consider a sequence of sections $\tilde{s}^{(k)}_{i,j}$ with $\tilde{s}^{(0)}_{i,j}=\tilde{s}_{i,j}$ and converging back to the old non-generic $s_{i,j}$ as $k\to\infty$. This limit construction determines a codimension $2$ locus in the moduli space $\CM_{0,r}(\Gamma^+,\Gamma^-)$ completely contained in the boundary strata formed by nodal and multilevel curves, corresponding to the first summand in the right hand side of equation (\ref{loc1}). The explicit form of the boundary components involved by such locus is described by formula (\ref{average}), where the divisor $D_{(I|J)}$ refers to the source nodal Riemann surfaces for the multilevel curves in the target $V\times \IR$.
\begin{example}\label{DMcoherence}
This example, and the understanding of the general phenomenon it describes, emerged in a discussion with Dimitri Zvonkine. Consider the moduli space $\CM_{0,5}$, whose boundary divisors, formed by nodal curves, we denote $D_{ij}$, $1\leq i < j \leq 5$, where $D_{ij}$ is the space of nodal curves with a bubble carrying the $i$-th and $j$-th points alone. The intersection structure of such divisors is represented by the following picture.
\begin{center}
\includegraphics[width=9cm]{m05-bare.eps}
\end{center}
When two different nodal divisors intersect, they do it with intersection index $+1$. The self-intersection index of any of them is, on the other hand, $-1$. Each of the $D_{ij}$ being a copy of $\mathbb{P}^1$ (representing a copy of the moduli space $\CM_{0,4}$ appearing at the boundary of $\CM_{0,5}$), this means that the normal bundle $N_{D_{ij}}$ of such $D_{ij}$ has Chern class $c_1(N_{D_{ij}})=-1$, hence the tubular neighborhood of $D_{ij}$ is a copy of $\tilde{\mathbb{C}}^2$, i.e. $\mathbb{C}^2$ blown up at $0$, $D_{ij}$ itself being the exceptional divisor. An intersecting $D_{kl}$ can then be seen as a line through the origin of $\tilde{\mathbb{C}}^2$.\\
Consider now the tautological line bundle $\LL_{1,5}$ on $\CM_{0,5}$. Using formula (\ref{average}), the corresponding averaged psi-class, i.e. the (dual to the) zero locus of a averaged multi-section of $\LL_{1,5}$, has the form
$$s_{1,5}^{-1}(0)=\frac{1}{2}(D_{12}+D_{13}+D_{14}+D_{15})+\frac{1}{6}(D_{23}+D_{24}+D_{25}+D_{34}+D_{35}+D_{45})$$
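Note that these weights are exactly the ones predicted by formula (\ref{average}) with $n=5$: the prefactor is
$$\frac{(5-3)!}{(5-1)!}=\frac{1}{12},$$
a divisor $D_{1j}$ corresponds to a splitting with $|J|=3$ and hence carries the weight $\frac{1}{12}\cdot\frac{3!}{1!}=\frac{1}{2}$, while a divisor $D_{jk}$ with $j,k\neq 1$ corresponds to $|J|=2$ and carries the weight $\frac{1}{12}\cdot\frac{2!}{0!}=\frac{1}{6}$.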
We now want to perturb such a multi-section $s$ to a coherent multi-section $\tilde{s}$ by a small perturbation in the neighborhood of the nodal divisors. In fact it will be sufficient to describe how the zero locus is perturbed.\\
First notice that, once the line bundle $\LL_{1,5}$ is chosen, the nodal divisors $D_{ij}$ are split into two different sets, namely the ones for which $i=1$ and the ones with $i\neq 1$ (appearing in the first and second summand in the above averaged formula). The perturbation will be symmetric with respect to permutations inside these two subsets separately. Looking at the picture above for visual help, let us start by perturbing $\frac{1}{6}D_{34}$ away from itself in such a way that it still intersects $D_{34}$ at $D_{34}\cap D_{15}$ with index $-\frac{1}{6}$, at $D_{34}\cap D_{25}$ with index $+\frac{1}{6}$ and at $D_{34}\cap D_{12}$ with index $-\frac{1}{6}$ (notice that the total self-intersection index is $-\frac{1}{6}$, as it should be for $\frac{1}{6}D_{34}$). The analogous choice is to be made for each of the divisors $D_{ij}$ with $i \neq 1$. Then we perturb $\frac{1}{2}D_{15}$ away from itself in such a way that it still intersects $D_{15}$ at $D_{15}\cap D_{34}$, $D_{15}\cap D_{24}$ and $D_{15}\cap D_{23}$, always with intersection index $-\frac{1}{6}$ (summing to a total self-intersection index of $-\frac{1}{2}$, as it should be for $\frac{1}{2}D_{15}$). This is in fact a multisection of the normal bundle $N_{D_{15}}$ formed by superimposing three sections of weight $\frac{1}{6}$ each, having a zero (of index $-\frac{1}{6}$) at $D_{15}\cap D_{34}$, $D_{15}\cap D_{24}$ and $D_{15}\cap D_{23}$ respectively. Notice that such a perturbation of $\frac{1}{2}D_{15}$ still intersects $D_{34}$ in a punctured neighborhood of $D_{34}\cap D_{15}$ with total intersection index $\frac{2}{6}$ and once more precisely at $D_{34}\cap D_{15}$ with intersection index $\frac{1}{6}$. The analogous choice is to be made for all of the divisors $D_{1j}$. See the left side of the next picture for some intuition.
\begin{center}
\includegraphics[width=10cm]{zoompert.eps}
\end{center}
At the point $D_{34}\cap D_{15}$ we will combine the two perturbed divisors to annihilate their intersection there (their intersection indices being $-\frac{1}{6}$ and $+\frac{1}{6}$). This gives rise to a hyperbolic (and smooth, as we want to get generic sections) behaviour of the zero locus that will now avoid $D_{15}$ completely (right side of the above picture). In a tubular neighborhood of $D_{34}$ the situation is described by the following picture, representing such neighborhood as $\tilde{\mathbb{C}}^2$.
\begin{center}
\includegraphics[width=10cm]{blowuppert.eps}
\end{center}
The circle at the origin is the exceptional divisor $D_{34}$ and points on it are identified along diameters. Notice that, in the second picture, we see the hyperbolic behaviour and the fact that the green zero locus does avoid $D_{15}$.\\
We are now ready to prove that such averaged perturbed section is coherent with the corresponding section on $\CM_{0,4}$, whose zero locus, using once more formula (\ref{average}), will be
$$s_{1,4}^{-1}(0)=\frac{1}{3}(D_{12}+D_{13}+D_{14}).$$
Indeed, the perturbed zero locus does not intersect $D_{1k}$ at all, coherently with the fact that $\LL_{1,5}$ pulls back to the trivial bundle at such divisors, while at each of the $D_{ij}$ with $i\neq 1$ the situation is the same as for $D_{34}$: two of the three branches of the multisection of $N_{D_{15}}$, each with weight $\frac{1}{6}$, intersect it close to $D_{34}\cap D_{15}$ (total index $\frac{2}{6}$), and similarly for $N_{D_{12}}$, while, close to $D_{34}\cap D_{25}$, both the perturbation of $\frac{1}{6}D_{34}$ and the perturbation of $\frac{1}{6}D_{25}$ intersect $D_{34}$ (total index $\frac{2}{6}$). This is coherent with the above averaged formula for $s_{1,4}^{-1}(0)$.\\
With the very same approach (only the combinatorics being more complicated) we can treat the general case of $\CM_{0,n}$, constructing the analogous perturbation to the averaged formula (\ref{average}) for the psi-class on the Deligne-Mumford space of curves.
\end{example}
In order to prove Theorem \ref{TRR-noneqCH} we will make three different choices of special coherent collections of sections. Indeed, for equation $(2,0)$, the idea is not remembering any marked point, but only averaging with respect to the possible choices of two punctures. In this case we can choose an averaged coherent collection of multisections on the Deligne-Mumford moduli space of curves with $|\Gamma^+|+|\Gamma^-|+1$ marked points (using the perturbation technique of example \ref{DMcoherence}, where we are keeping all the punctures and the $i$-th marked point, carrying the psi-class), and then use equation (\ref{sdd}) to go to the space of maps $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$. This coherent collection is evidently symmetric with respect to permutations of marked points and punctures separately. Its zero locus, in the moduli space $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$, has the form
$$\frac{P(P-1)}{2} s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) = \sum_{\substack{i\in I,\ I\sqcup J=\{1,\ldots,r\}\\ \Gamma^+_1 \sqcap \Gamma^+_2 = \Gamma^+ \\ \Gamma^-_1 \sqcap \Gamma^-_2 = \Gamma^- \\ P_2=|\Gamma^+_2|+|\Gamma^-_2|}} \frac{P_2(P_2-1)}{2}\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$$
where $P=|\Gamma^+|+|\Gamma^-|$ and $D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$ refers to a codimension $2$ locus in the tubular neighborhood of two-component curves joined at a node or puncture (hence nodal or two-level) where marked points and punctures split on each component as indicated by the subscript. The combinatorial factor on the left-hand side accounts for the possible ways of choosing two punctures to be remembered out of $P$, while the one on the right-hand side accounts for the number of ways a term of the form $D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$ appears in the described averaging construction.\\
As a second possible choice we will start from an averaged coherent collection of multisections on the Deligne-Mumford moduli space of curves with $|\Gamma^+|+|\Gamma^-|+2$ marked points (like in example \ref{DMcoherence} where, again, we keep all the punctures, the $i$-th marked point carrying the psi-class, but also another marked point). There are exactly $r-1$ different forgetful projections from the space $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$ to such $\CM_{0,|\Gamma^+|+|\Gamma^-|+2}$, corresponding to the numbering of the extra remembered marked point (excluding the $i$-th, which is also always remembered). In order to obtain a coherent collection on $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$ which is also symmetric with respect to permutations of the marked points we need to use the pull-back construction of equation (\ref{sdd}), but also average among these different forgetful projections. After some easy combinatorics, its zero locus has the form
\begin{equation*}
\begin{split}
&(r-1)\frac{P(P+1)}{2} s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) =\\
&\sum_{\substack{i\in I,\ I\sqcup J=\{1,\ldots,r\}\\ |I|=r_1,\ |J|=r_2\\ \Gamma^+_1 \sqcap \Gamma^+_2 = \Gamma^+ \\ \Gamma^-_1 \sqcap \Gamma^-_2 = \Gamma^- \\ P_2=|\Gamma^+_2|+|\Gamma^-_2|}} \left[r_2 \frac{P_2(P_2+1)}{2}+(r_1-1)\frac{P_2(P_2-1)}{2}\right]\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}
\end{split}
\end{equation*}
Finally, as a last choice, we start from the moduli space of curves with $|\Gamma^+|+|\Gamma^-|+3$ marked points, so that we are keeping, after forgetting the map, two extra marked points (besides the $i$-th). This time there will be exactly $\frac{(r-1)(r-2)}{2}$ forgetful maps from $\CM_{0,r,A}(\Gamma^+,\Gamma^-)$ to $\CM_{0,|\Gamma^+|+|\Gamma^-|+3}$, corresponding to the numbering of the two extra remembered marked points. The pull-back construction of equation (\ref{sdd}) needs then to be averaged among these possible choices in order to be symmetric with respect to permutations of marked points. This time the averaging combinatorics gives
\begin{equation*}
\begin{split}
&\frac{(r-1)(r-2)}{2}\frac{(P+2)(P+1)}{2} s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) =\\
&\sum_{\substack{i\in I,\ I\sqcup J=\{1,\ldots,r\}\\ |I|=r_1,\ |J|=r_2\\ \Gamma^+_1 \sqcap \Gamma^+_2 = \Gamma^+ \\ \Gamma^-_1 \sqcap \Gamma^-_2 = \Gamma^- \\ P_2=|\Gamma^+_2|+|\Gamma^-_2|}} \left[\frac{r_2(r_2-1)}{2} \frac{(P_2+2)(P_2+1)}{2}+ r_2(r_1-1)\frac{P_2(P_2+1)}{2}\right.\\
&\phantom{\sum_{\substack{i\in I,\ I\sqcup J=\{1,\ldots,r\}\\ |I|=r_1,\ |J|=r_2\\ \Gamma^+_1 \sqcap \Gamma^+_2 = \Gamma^+ \\ \Gamma^-_1 \sqcap \Gamma^-_2 = \Gamma^- \\ P_2=|\Gamma^+_2|+|\Gamma^-_2|}}}\left.+\frac{(r_1-1)(r_1-2)}{2}\frac{P_2(P_2-1)}{2}\right]\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}
\end{split}
\end{equation*}
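The combinatorial coefficients in the last two formulas can be checked by elementary counting. For the $(1,1)$-case, the left-hand side collects the $r-1$ choices of the extra remembered marked point times the $\binom{P+1}{2}=\frac{P(P+1)}{2}$ choices of a reference pair among the $P$ punctures and the extra marked point, while a divisor $D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$ appears with multiplicity
$$r_2\binom{P_2+1}{2}+(r_1-1)\binom{P_2}{2},$$
according to whether the extra marked point lies in $J$ ($r_2$ choices, reference pair chosen among $P_2+1$ points) or in $I\setminus\{i\}$ ($r_1-1$ choices, reference pair chosen among the $P_2$ punctures). For the $(0,2)$-case the same bookkeeping with two extra marked points gives $\binom{r-1}{2}\binom{P+2}{2}$ on the left-hand side and
$$\binom{r_2}{2}\binom{P_2+2}{2}+r_2(r_1-1)\binom{P_2+1}{2}+\binom{r_1-1}{2}\binom{P_2}{2}$$
on the right-hand side, according to whether two, one or none of the two extra marked points lie in $J$.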
These three different choices of multisections can also be superimposed to form further multisections which are, of course, still coherent. This corresponds to taking linear combinations of the equations for their zero loci. In particular, by taking respectively the first one, the second one minus $(r-1)$ times the first one, and the third one minus $(r-2)$ times the second one plus $\frac{(r-1)(r-2)}{2}$ times the first one, we get multisections whose zero loci have the form
$$\frac{P(P-1)}{2}\ s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) = \sum \frac{P_2(P_2-1)}{2}\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$$
$$(r-1)P\ s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) = \sum r_2P_2\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$$
$$\frac{(r-1)(r-2)}{2}\ s_{i,r,\Gamma^+,\Gamma^-}^{-1}(0) = \sum \frac{r_2(r_2-1)}{2}\ D_{(I,\Gamma^+_1,\Gamma^-_1|J,\Gamma^+_2,\Gamma^-_2)}$$
\\
To complete the proof of Theorem \ref{TRR-noneqCH} we just need to notice that the limit procedure taking the perturbed section $\tilde{s}_{i,n}$ of $\LL_{i,n}$ back to its original non-generic limit $s_{i,n}$ indeed amounts to selecting, in the space of maps relevant for cylindrical non-equivariant contact homology, via formula (\ref{loc1}), either nodal configurations (and this is obvious), or two-level ones, i.e. where the two smooth components are connected by a puncture instead of a node. While the two-level curves are of codimension one and not two in the space of maps (indeed this extra dimension remembers the information on the angular coordinate used for the gluing at the connecting puncture), we correct this error by fixing the decoration, i.e. the identification of the tangent planes at both points, \emph{a priori} as follows.\\
Since each of the two cylinders connected by the puncture carries one (or two in the case of (0,2)) of the remembered additional marked points, the position of these additional marked points can be used to fix unique $S^1$-coordinates and hence asymptotic markers on each of the two cylinders, which in turn defines a natural decoration by simply requiring that the two asymptotic markers are identified. Note that this turns the additional markers used for fixing the $S^1$-coordinates automatically into additional marked points constrained to $\IR\times \{0\}$. The following picture illustrates this phenomenon for the case relevant to equation $(1,1)$: the red puncture and marked point are those we are remembering, the black marked point is the one carrying the psi-class, whose power is specified by the index, the dashed line represents $\IR\times \{0\}$ and the green arrow indicates the matching condition between the two dashed lines (note that, for simplicity, we are not explicitly drawing any other marked points). The $(0,2)$ case is completely similar, only involving averaging between the two possible choices of marked point to be constrained to $\IR\times \{0\}$.\\
\begin{center}
\includegraphics[width=10cm]{splitting.eps}
\end{center}
We are only left with translating this geometric picture and the three above equations for the zero loci back into our generating functions language for the potential (recall the definition of the non-equivariant cylindrical homology differential). There we see that the averaging combinatorial coefficients in formula (\ref{average}) are absorbed in the right way to give rise to the statement of Theorem \ref{TRR-noneqCH}.
\vspace{0.5cm}
\section{Applications}
In this final section we want to apply the topological recursion result for non-equivariant cylindrical homology to (equivariant) cylindrical homology. As an important result we show that, as in rational Gromov-Witten theory, all descendant invariants can be computed from primary invariants, i.e. those without descendants. Furthermore we will prove that the topological recursion relations imply that one can define an action of the quantum cohomology ring $QH^*(V)$ of the target manifold (defined using the Gromov-Witten potential $\If$ of $V$ introduced above) on the non-equivariant cylindrical homology $HC^{\textrm{non-}S^1}_*(V)$ by counting holomorphic cylinders with one constrained marked point.
\vspace{0.5cm}
\subsection{Topological recursion in cylindrical homology}
Since the chain space for non-equivariant cylindrical homology splits, $C^{\textrm{non-}S^1}_*=\hat{C}_*\oplus\check{C}_*$, it follows that the linear maps on the chain space, obtained by differentiating the differential of non-equivariant cylindrical homology with respect to $t^{\alpha,p}$- or $\check{t}^{\alpha,p}$-variables, can be restricted to linear maps between $\hat{C}_*$ and $\check{C}_*$, respectively. On the other hand, since each of the spaces $\hat{C}_*$ and $\check{C}_*$ is just a copy of the chain space for (equivariant) cylindrical homology, with degree shifted by one for the second space, $\hat{C}_* = C_*$, $\check{C}_*=C_*[1]=C_{*+1}$, we can translate the linear maps from non-equivariant cylindrical homology to (equivariant) cylindrical homology as follows. \\
While the restricted linear maps $\del_{(\alpha,p)}: \hat{C}_*\to\hat{C}_*$ and $\del_{(\alpha,p)}: \check{C}_*\to\check{C}_*$ indeed agree with the linear maps $\del_{(\alpha,p)}: C_*\to C_*$ from cylindrical homology as defined in subsection 2.6, note that one can now introduce new linear maps $\del_{(\check{\alpha},p)}: C_*\to C_*$ on cylindrical homology by requiring that they agree with the linear maps $\del_{(\check{\alpha},p)}: \hat{C}_*\to \hat{C}_*$ (and hence $\del_{(\check{\alpha},p)}: \check{C}_*\to \check{C}_*$) from non-equivariant cylindrical homology. \\
On the other hand, while the topological recursion relations we proved for the non-equivariant case are useful to compute the linear maps $\del_{(\check{\alpha},p)}$ on $HC^{\textrm{non-}S^1}_*$, the goal of topological recursion in cylindrical contact homology (as in rational SFT) is to compute the linear maps $\del_{(\alpha,p)}: HC_*\to HC_*$. In order to apply our results of the non-equivariant case to the equivariant case, we make use of the fact that (apart from the mentioned equivalence with $\del_{(\alpha,p)}: \hat{C}_*\to\hat{C}_*$ and $\del_{(\alpha,p)}: \check{C}_*\to\check{C}_*$) the linear map $\del_{(\alpha,p)}: C_*\to C_*$ also agrees with the restricted linear map $\del_{(\check{\alpha},p)}: \hat{C}_*\to\check{C}_*$. \\
In order to see this, observe that, while in the case of $\del_{(\alpha,p)}: \hat{C}_*\to\hat{C}_*$ (or $\del_{(\alpha,p)}: \check{C}_*\to\check{C}_*$) the free $S^1$-coordinate on the cylinder is fixed by the critical point on the negative (or positive) closed Reeb orbit, in the case of $\del_{\check{\alpha},p}: \hat{C}_*\to\check{C}_*$ the free $S^1$-coordinate on the cylinder is fixed by the additional marked point (and thereby turning it into a constrained marked point). \\
With this we can prove the following corollary about topological recursion in (equivariant) cylindrical homology.
\begin{corollary}\label{TRR-eqCH} For three \emph{different} non-generic special choices of coherent sections the following three \emph{topological recursion relations} hold in (equivariant) cylindrical contact homology
\begin{itemize}
\item[(2,0):] $$ \del_{(\alpha,i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} \del_{\nu}$$
\item[(1,1):] $$ N\,\del_{(\alpha,i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N\, \del_{\nu} + \frac{1}{2}[\del_{(\alpha,i-1)}, \check{N}\,\del]_+ + \frac{1}{2}[\del_{(\check{\alpha},i-1)}, N\,\del]_+$$
\item[(0,2):] \begin{eqnarray*} N(N-1)\,\del_{(\alpha,i)} = \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} N(N-1)\, \del_{\nu} &+&[\del_{(\alpha,i-1)},\check{N}(N-1)\,\del]_+\\ &+&[\del_{(\check{\alpha},i-1)},N(N-1)\,\del]_+
\end{eqnarray*}
\end{itemize}
\end{corollary}
\begin{proof}
While the relation (2,0) follows immediately by identifying the linear map $\del_{(\alpha,p)}: C_*\to C_*$ with the restricted linear map $\del_{(\check{\alpha},p)}: \hat{C}_*\to\check{C}_*$, for the relations (1,1) and (0,2) it suffices to observe that
\begin{eqnarray*}
(\del_{(\check{\alpha},i-1)}\circ\check{N}\del:\hat{C}_*\to\check{C}_*)
&=& (\del_{(\check{\alpha},i-1)}: \hat{C}_*\to\check{C}_*) \circ (\check{N}\del: \hat{C}_*\to\hat{C}_*) \\
&+& (\del_{(\check{\alpha},i-1)}: \check{C}_*\to\check{C}_*) \circ (\check{N}\del: \hat{C}_*\to\check{C}_*) \\
&=& (\del_{(\alpha,i-1)}: C_*\to C_*) \circ (\check{N}\del: C_*\to C_*) \\
&+& (\del_{(\check{\alpha},i-1)}: C_*\to C_*) \circ (N\del: C_*\to C_*).
\end{eqnarray*}
\end{proof}
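The splitting of the composition in the proof above is just the component formula for operators on a direct sum. As a purely illustrative sanity check (with random block matrices standing in for the chain maps, an assumption made only for demonstration), the corresponding block identity can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3  # rank of each summand (hypothetical, for illustration only)

# Operators on C_hat (+) C_check written as 2x2 blocks:
# M[i][j] maps the j-th summand into the i-th (0 = hat, 1 = check).
Y = [[rng.normal(size=(k, k)) for _ in range(2)] for _ in range(2)]
Z = [[rng.normal(size=(k, k)) for _ in range(2)] for _ in range(2)]

comp = np.block(Y) @ np.block(Z)

# The (check, hat)-block of the composition splits into two contributions,
# one passing through C_hat and one through C_check, as in the proof:
block = Y[1][0] @ Z[0][0] + Y[1][1] @ Z[1][0]
assert np.allclose(comp[k:, :k], block)
```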
While it follows that the second and the third topological recursion relation involve the linear maps $\del_{\check{\alpha},p}: C_*\to C_*$ defined using non-equivariant contact homology and hence leave the frame of standard (equivariant) cylindrical homology, it is notable that the first topological recursion relation (2,0) indeed has the following important consequence.
\begin{corollary} All linear maps $\del_{(\alpha,p)}: HC_*(V)\to HC_*(V)$ on cylindrical homology involving gravitational descendants can be computed from the linear maps $\del_{\alpha}: HC_*(V)\to HC_*(V)$ with no gravitational descendants and the primary rational Gromov-Witten potential of the underlying stable Hamiltonian manifold, i.e. again involving no gravitational descendants. \end{corollary}
\begin{proof}
For the proof it suffices to observe that after applying the topological recursion relation (2,0) the marked point carrying the descendant sits on the attached sphere, so that the linear maps with descendants can indeed be computed from the linear maps without descendants and the rational Gromov-Witten potential of the target manifold with gravitational descendants. Combining this with the standard result of rational Gromov-Witten theory (generalized in the obvious way from symplectic manifolds to stable Hamiltonian manifolds without holomorphic planes) that the full descendant potential can be computed from the primary potential involving no descendants, using the above mentioned topological recursion relations together with the divisor equation (to add more marked points on non-constant spheres) and the string and dilaton equations (for the case of constant spheres), we obtain the remarkable result that also in cylindrical homology the descendant invariants are determined by the primary invariants, provided we additionally include the primary Gromov-Witten potential.
\end{proof}
\begin{remark} Note that the first topological relation actually descends to homology, i.e. it holds for $\del_{(\alpha,i)}$ and $\del_{\nu}$ \emph{viewed as linear maps on cylindrical homology} $HC_*(V)$. In particular, while on the chain level all topological recursion relations only hold true for (three different) \emph{special} choices of coherent sections, \emph{after} passing to homology the first relation (2,0) holds for \emph{all} coherent sections. \end{remark}
As we already remarked, it follows from the maximum principle that the Gromov-Witten potential of a contact manifold simply agrees with the Gromov-Witten potential of a point. Since in this case it follows from dimensional reasons that after setting all $t$-variables to zero we have $$\frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}}|_{t=0} = 0,\; i>0,$$ we have the following important vanishing result for contact manifolds. \\
For the rest of this subsection as well as the next one we will restrict ourselves to the case where all formal $t$-variables are set to zero. \\
Following the notation in \cite{F2} and \cite{FR} let us denote by $HC^0_*(V)=H(C^0_*,\del^0)$ the cylindrical homology without additional marked points and hence without $t$-variables, which is obtained from the big cylindrical homology complex $HC_*(V)=H(C_*,\del)$ by setting all $t$-variables to zero. In the same way let us introduce the corresponding linear map $\del^1_{(\alpha,p)}: HC^0_*(V)\to HC^0_*(V)$ obtained again by setting $t=0$ and which now counts holomorphic cylinders with just one additional marked point (and descendants).
\begin{corollary} In the case when $V$ is a contact manifold, after setting all $t$-variables to zero, the corresponding descendant linear maps $\del^1_{(\alpha,p)}: HC^0_*(V)\to HC^0_*(V)$, $p>0$ are zero. \end{corollary}
While this result shows that counting holomorphic cylinders with one additional marked point and gravitational descendants is not very interesting in the case of contact manifolds, it is clear from our ongoing work on topological recursion in full rational symplectic field theory that the arguments used above do \emph{not} apply to the sequence of commuting Hamiltonians $\Ih^1_{(\alpha,p)}$ of rational SFT, which in the Floer case lead to the integrable hierarchies of Gromov-Witten theory. More precisely, we expect that the corresponding recursive procedure involves primary invariants belonging to a non-equivariant version of rational SFT.
\vspace{0.5cm}
\subsection{Action of quantum cohomology on non-equivariant cylindrical homology}
As we already mentioned in subsection 3.2, in \cite{PSS} Piunikhin-Salamon-Schwarz defined an action of the quantum cohomology ring of the underlying symplectic manifold on the Floer (co)homology groups by counting Floer cylinders with one additional marked point constrained to $\IR\times\{0\}\subset\RS$. Note that for this the authors also needed to show that the concatenation of two maps on Floer cohomology corresponds to the ring multiplication in quantum cohomology. \\
While in \cite{PSS} this result was proven by establishing appropriate compactness and gluing theorems for all appearing moduli spaces, in this final subsection we want to show how our topological recursion relation (1,1) together with the relation (2,0) can be used to define a corresponding action of the quantum cohomology (defined using the Gromov-Witten potential introduced above) on the non-equivariant cylindrical contact homology of a stable Hamiltonian manifold after setting all $t$-variables to zero. \\
In the same way as for closed symplectic manifolds we define the quantum cohomology $QH^*(V)$ of the stable Hamiltonian manifold $V$ as the vector space freely generated by formal variables $t^{\alpha}=t^{\alpha,0}$, with coefficients which are Laurent series in the $z_n$-variables. Note that, as vector spaces, the only difference to the usual cohomology groups $H^*(V)$ again lies in the different choice of coefficients. On the other hand, while for general stable Hamiltonian manifolds the quantum product defined using the Gromov-Witten three-point invariants is different from the usual product structure of $H^*(V)$, note that for contact manifolds we have $QH^*(V)=H^*(V)$ (with the appropriate choice of coefficients) as in this case the Gromov-Witten potential of $V$ agrees with that of a point. Recalling that the linear maps $\del^1_{\check{\alpha}}=\del^1_{(\check{\alpha},0)}$ actually descend to maps on (non-equivariant) cylindrical homology $HC^{0,\textrm{non-}S^1}_*(V)$, we prove the following
\begin{corollary} The map
\begin{eqnarray*}
QH^*(V)\otimes HC^{0,\textrm{non-}S^1}_*(V)\to HC^{0,\textrm{non-}S^1}_*(V),
&& (t^{\alpha},\hat{q}_{\gamma}) \mapsto \del^1_{\check{\alpha}}(\hat{q}_{\gamma}),\\
&& (t^{\alpha},\check{q}_{\gamma}) \mapsto \del^1_{\check{\alpha}}(\check{q}_{\gamma}),\\
\end{eqnarray*}
defines an action of the quantum cohomology ring $QH^*(V)$ on the non-equivariant cylindrical homology $HC^{0,\textrm{non-}S^1}_*(V)$ (after setting all $t=0$). \end{corollary}
\begin{proof}
It follows from our topological recursion relations (2,0) and (1,1) for non-equivariant cylindrical contact homology that, after setting all $t$-variables to zero, we indeed have the following two non-averaged topological recursion relations,
\begin{eqnarray*}
\del^2_{(\check{\alpha},i),(\beta,j)} &=& \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} \del^2_{\check{\nu},(\beta,j)} + \frac{\del^3 \If}{\del t^{\alpha,i-1}\del t^{\beta,j}\del t^{\mu}} \eta^{\mu\nu} \del^1_{\check{\nu}},\\
\del^2_{(\check{\alpha},i),(\beta,j)} &=& \frac{\del^2 \If}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} \del^2_{\check{\nu},(\beta,j)} + \frac{1}{2}[\del^1_{(\check{\alpha},i-1)}, \del^1_{(\check{\beta},j)}]_+.
\end{eqnarray*}
While the first equation follows from differentiating the recursion relation (2,0) with respect to the formal variable $t^{\beta,j}$, the second equation follows from the recursion relation (1,1) by first setting all $t$-variables except $t^{\beta,j}$ to zero. \\
Ignoring invariance problems for the moment, the desired result follows by comparing both equations. Since the left side and the first summand on the right side of both equations agree, it follows that $$ \frac{1}{2}[\del^1_{(\check{\alpha},i-1)}, \del^1_{(\check{\beta},j)}]_+= \frac{\del^3 \If}{\del t^{\alpha,i-1}\del t^{\beta,j}\del t^{\mu}} \eta^{\mu\nu} \del^1_{\check{\nu}}.$$ On the other hand, since $[\del^1_{(\check{\alpha},i-1)}, \del^1_{(\check{\beta},j)}]_-=0$ on homology, and using the natural relation between the commutator and its corresponding anti-commutator, $\frac{1}{2}[\del^1_{(\check{\alpha},i-1)}, \del^1_{(\check{\beta},j)}]_+ + \frac{1}{2}[\del^1_{(\check{\alpha},i-1)}, \del^1_{(\check{\beta},j)}]_- = \del^1_{(\check{\alpha},i-1)}\circ\del^1_{(\check{\beta},j)}$, it follows that after passing to homology we have $$\del^1_{(\check{\alpha},i-1)}\circ\del^1_{(\check{\beta},j)}= \frac{\del^3 \If}{\del t^{\alpha,i-1}\del t^{\beta,j}\del t^{\mu}} \eta^{\mu\nu} \del^1_{\check{\nu}},$$ so that the desired result follows after setting $i=1$ and $j=0$. \\
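The relation between the commutator and anticommutator used above, $\frac{1}{2}[A,B]_+ + \frac{1}{2}[A,B]_- = A\circ B$, is elementary operator algebra; as an illustrative check, it can be verified with random matrices standing in (purely for demonstration) for the linear maps:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))  # random stand-ins for the linear maps
B = rng.normal(size=(5, 5))

anti = A @ B + B @ A  # anticommutator [A, B]_+
comm = A @ B - B @ A  # commutator     [A, B]_-

# (1/2)[A,B]_+ + (1/2)[A,B]_- = A o B
assert np.allclose(0.5 * anti + 0.5 * comm, A @ B)
```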
On the other hand, since the above recursion relations hold on the chain level only for special, and in particular two \emph{different}, coherent collections of sections, it remains to be checked that the above reasoning leads to a true statement; the desired result is, after all, a statement about invariants, i.e. it should be independent of the chosen auxiliary data. While the above identity should hold after passing to homology, note that the invariance problem cannot be resolved as for the topological recursion relation (2,0) by simply passing to homology, since the two equations with which we started involve linear maps counting holomorphic cylinders with more than one additional marked point. \\
In order to show that the desired composition rule still holds after passing to homology, we make use of the fact that we can choose nice coherent collections of sections interpolating between the two special coherent sections (in the sense of subsection 2.2) as follows. Since it follows from the proof of the main theorem in subsection 4.3 that our special coherent collections of sections are indeed pulled back from the moduli space of curves to the moduli space of maps (we can ignore the bubbles that we added afterwards here), we do not need to consider arbitrary interpolating coherent sections but only those which are again pull-backs of coherent sections on the underlying moduli space of curves. Since in the moduli space of curves (in contrast to the moduli space of maps) the strata of singular curves (maps) are of codimension at least two, we can further choose the homotopy such that it avoids all singular strata, so that all underlying curves in the homotopy are indeed smooth. Since we excluded holomorphic planes throughout the paper, it follows that the only singular maps that appear during the interpolation process are holomorphic maps where a cylinder without additional marked points splits off. But this implies that the difference between the two different special coherent collections of sections is indeed exact, so that the above equation indeed holds after passing to homology.
\end{proof}
While in the same way we can give an alternative proof of the result of Piunikhin-Salamon-Schwarz by using our topological recursion relations (1,1) and (2,0) in symplectic Floer theory of subsection 3.3, in contrast note that in (equivariant) cylindrical homology, due to the differences in the topological recursion formulas in this case, neither $\del^1_{\alpha}$ nor $\del^1_{\check{\alpha}}$ defines an action of quantum cohomology on (equivariant) cylindrical homology. \\
Finally, using the isomorphism of Bourgeois-Oancea in \cite{BO}, we show how the latter result also establishes an action of the cohomology ring on symplectic homology, and thereby generalizes the result of \cite{PSS} in the obvious way from closed manifolds to compact manifolds $X$ with contact boundary $\del X=V$. For the proof we assume that there are no holomorphic planes not only in $(\IR\times) V$, but also in the filling $X$. Furthermore we assume that all $t$-variables are set to zero without explicitly mentioning it again.
\begin{corollary} Using the isomorphism between non-equivariant cylindrical homology and positive symplectic homology in \cite{BO}, our result defines an action of the cohomology ring $H^*(X)$ on (full) symplectic homology $SH^0_*(X)$ (at $t=0$). \end{corollary}
\begin{proof}
Since we assume that there are no holomorphic planes in the filling $X$, it further follows from the computation of the differential in symplectic homology by Bourgeois and Oancea in \cite{BO} that the symplectic homology is given by the direct sum,
$SH^0_*(X)=SH^{0,+}_*(X)\oplus H^{\dim X-*}(X)$. The action of $H^*(V)$ on non-equivariant cylindrical contact homology $HC^{0,\textrm{non-}S^1}_*(V)$ established above defines an action of $H^*(X)$ on $SH^{0,+}_*(X)$ via the isomorphism $HC^{0,\textrm{non-}S^1}_*(V)\cong SH^{0,+}_*(X)$ and the natural map $H^*(X)\to H^*(V)$ induced by the inclusion $V\hookrightarrow X$; together with the natural action of $H^*(X)$ on itself, this gives the desired result. \end{proof}
Of course, we expect that this action agrees with the action of $H^*(X)$ on $SH^0_*(X)$ defined using the action on the Floer homology groups $FH^0_*(H)$ for admissible Hamiltonians and taking the direct limit, after generalizing the result in \cite{PSS} from closed symplectic manifolds to compact symplectic manifolds with contact boundary in the obvious way.
\begin{example}
Using the natural map $H^*(Q)\to H^*(T^*Q)$ given by the projection, note that in the cotangent bundle case $X=T^*Q$ this defines an action of $H^*(Q)$ on $SH^0_*(T^*Q)$, which by \cite{AS} and \cite{SW} is isomorphic to $H_*(\Lambda Q)$, where $\Lambda Q$ denotes the loop space of $Q$. Introducing additional marked points on the cylinders in the proofs of Abbondandolo-Schwarz and Salamon-Weber, we expect that it can be shown that this action agrees with the natural action of $H^*(Q)$ on $H_*(\Lambda Q)$ given by the cap product and the base point map $\Lambda Q\to Q$. \\
On the other hand, as mentioned above, in the equivariant setting we do \emph{not} expect to find a natural action of $H^*(Q)$ on (equivariant) cylindrical homology $HC^0_*(S^*Q)$, which by \cite{CL} is isomorphic to the $S^1$-equivariant singular homology $H^{S^1}_*(\Lambda Q,Q)$. But this fits with the well-known fact that there is \emph{no} natural action of the cohomology \hbox{$(H^*(Q)\to) H^*(\Lambda Q)$} on relative $S^1$-equivariant homology $H^{S^1}_*(\Lambda Q,Q)$.
\end{example}
\subsection{Example: Cylindrical homology in the Floer case}
We end this paper by briefly discussing the important \emph{Floer case} of SFT, which was worked out in the paper \cite{F1} of the first author, including the necessary transversality proof. Here $V=S^1\times M$ is equipped with a stable Hamiltonian structure $(\omega^H=\omega+dH\wedge dt,\lambda=dt)$ for some time-dependent Hamiltonian $H:S^1\times M\to\IR$ on a closed symplectic manifold $M=(M,\omega)$. It follows that the Reeb vector field is given by $R^H=\del_t+X^H_t$, so that in particular every one-periodic closed Reeb orbit is a periodic orbit of the time-dependent Hamiltonian $H$. More precisely, it can be shown, see \cite{F1}, that the chain complex of equivariant cylindrical contact homology naturally splits into subcomplexes generated by Reeb orbits of a fixed integer period and that the equivariant cylindrical homology generated by Reeb orbits of period one agrees with the standard symplectic Floer homology for the time-dependent Hamiltonian $H:S^1\times M\to\IR$. In order to see that the differentials indeed agree, one uses that the holomorphic map from the cylinder to $\IR\times V$ splits, $\tilde{u}=(h,u):\RS\to(\RS)\times M$, where the map $h:\RS\to\RS$ is just the identity (up to automorphisms of the domain). Note that in the Morse-Bott limit $H=0$ one arrives at the trivial circle bundle case and just gets back the relation between SFT and Gromov-Witten theory from \cite{EGH}. We now show the following important result.
\begin{proposition} In the Floer case the topological recursion relations for equivariant cylindrical homology reproduce the topological recursion relations for symplectic Floer homology from section three. In particular, by passing to the Morse-Bott limit $H=0$, they reproduce the standard topological recursion relations of Gromov-Witten theory. Furthermore, the action of the quantum cohomology on the non-equivariant cylindrical homology splits and agrees with the action of quantum cohomology on symplectic Floer homology defined in \cite{PSS}. \end{proposition}
\begin{proof} In the same way as it follows from the fact that the map $h:\RS\to\RS$ is just the identity (up to automorphisms of the domain) that the differentials $\del$ in equivariant cylindrical homology and symplectic Floer homology naturally agree, it follows that the linear maps $\del_{\check{\alpha}}$ for $\alpha\in H^*(M)$ introduced in symplectic Floer homology in section three and in equivariant cylindrical homology (using the corresponding map in non-equivariant cylindrical homology) agree. Furthermore it follows from the same result that for $\alpha_1=\alpha\wedge dt\in H^*(S^1\times M)$ we have
\[\del_{\alpha_1,p} = \del_{\check{\alpha},p}: C_*\to C_*,\;\; \del_{\check{\alpha_1},p} = 0.\]
Note that the last equation follows from the fact that the $S^1$-symmetry is divided out twice. Again only working out the second topological recursion relation, it then indeed follows that
\begin{eqnarray*}
(N \del_{(\check{\alpha},i)}: CF_*\to CF_*) &=& (N \del_{(\alpha_1,i)}: C_*\to C_*) \\&=& \frac{\del^2 \If_{S^1\times M}}{\del t^{\alpha_1,i-1}\del t^{\mu}} \eta^{\mu\nu} (N \del_{\nu}: C_*\to C_*) \\&+& (\frac{1}{2}[\del_{(\alpha_1,i-1)}, \check{N}\,\del]_+: C_*\to C_*) \\&+& (\frac{1}{2}[\del_{(\check{\alpha}_1,i-1)}, N\,\del]_+: C_*\to C_*)\\&=& \frac{\del^2 \If_M}{\del t^{\alpha,i-1}\del t^{\mu}} \eta^{\mu\nu} (N \del_{\check{\nu}}: CF_*\to CF_*) \\&+& (\frac{1}{2}[\del_{(\check{\alpha},i-1)}, \check{N}\,\del]_+: CF_*\to CF_*) ,
\end{eqnarray*}
where we further use that by similar arguments, namely that every holomorphic map $\CP\to\RS$ is constant, the Gromov-Witten potential of the stable Hamiltonian manifold $S^1\times M$ is given by the Gromov-Witten potential of the symplectic manifold $M$, see also the discussion at the beginning of subsection 4.2. \\
Apart from the fact that this proves the desired statement about the topological recursion relations, note that the same equation shows that in this special case the linear map $\del_{\check{\alpha}}: C_*\to C_*$ indeed leads to an action of the quantum cohomology of $S^1\times M$ on the \emph{equivariant} cylindrical homology, which agrees with the one defined by \cite{PSS} on symplectic Floer homology. For the action of quantum cohomology $QH^*(S^1\times M)$ on the non-equivariant cylindrical homology $HC^{\textrm{non-}S^1}_*(S^1\times M)$, observe that the differential of non-equivariant cylindrical homology is indeed of diagonal form \[\del = \diag(\del,\del): \hat{CF}_*\oplus \check{CF}_* \to \hat{CF}_*\oplus \check{CF}_*\] with the Floer homology differential $\del: CF_*\to CF_*$. This follows from the fact that in this case the only off-diagonal contribution $\delta: \hat{CF}_*\to\check{CF}_*$ is also zero, which as above again follows from the fact that the $S^1$-symmetry on the cylinder is divided out twice. Furthermore note that by the same argument the linear map $\del_{\check{\alpha}}$ is also of diagonal form. It follows that the non-equivariant cylindrical homology is given as a direct sum, $HC^{\textrm{non-}S^1}_*=\hat{HF}_*\oplus\check{HF}_*$, and that the quantum cohomology acts on both factors separately and agrees with the action defined in \cite{PSS}.
\end{proof}
\vspace{0.5cm}
\section{Introduction}\label{introduction_sec}
Metric-affine gravity \cite{Hehl:1995} is a natural extension of Einstein's general relativity theory. It is based on gauge-theoretic principles \cite{Blagojevic:2002,Hehl:2013}, and it takes into account microstructural properties of matter (spin, dilation current, proper hypercharge) as possible physical sources of the gravitational field, on an equal footing with macroscopic properties (energy and momentum) of matter.
In this work we derive the equations of motion of extended deformable test bodies in metric-affine gravity. In this theory, matter is characterized by three fundamental Noether currents -- the canonical energy-momentum current, the canonical hypermomentum current, and the metrical energy-momentum current. These objects satisfy a set of conservation laws (or, more exactly, balance equations). Following Mathisson, Papapetrou, and Dixon \cite{Mathisson:1937,Papapetrou:1951:3,Dixon:1964,Dixon:1974,Dixon:1979,Dixon:2008}, the equations of motion of extended test bodies are derived from the conservation laws. Our derivation is based on a covariant multipolar test body method, which utilizes Synge's world function formalism \cite{Synge:1960,DeWitt:Brehme:1960}.
In view of the multi-current characterization of matter in metric-affine gravity, we develop here a general approach which is applicable to an arbitrary set of conservation laws for any number of currents. The latter can include the gravitational, electromagnetic, and other physical currents if they are relevant to the model under consideration. The results presented here allow for the systematic study of test body motion in a very large class of gravitational theories (and not only gravitational), in particular they can also be applied to the case in which there is a general nonminimal coupling between gravity and matter. Models with nonminimal coupling have recently attracted a lot of attention in the literature \cite{Bertolami:etal:2007,Nojiri:2011}. Their physical interpretation and impact are still a subject of discussion \cite{Straumann:2008,Harko:2014:1}.
Here we explicitly show how the new geometrical structures in metric-affine gravity couple to matter, which in turn may underlie the design of experimental tests of gravity beyond the Einsteinian (purely Riemannian) geometrical picture. Our current work generalizes and unifies several previous works \cite{Stoeger:Yasskin:1979,Stoeger:Yasskin:1980,Puetzfeld:2007,Puetzfeld:Obukhov:2008:1,Puetzfeld:Obukhov:2008:2,Puetzfeld:Obukhov:2013:1,Hehl:Obukhov:Puetzfeld:2013,Puetzfeld:Obukhov:2013:3,Puetzfeld:Obukhov:2013:4,Roshan:2013,Puetzfeld:Obukhov:2014:1} on the equations of motion in gauge gravity theories.
The structure of the paper is as follows: In section \ref{MAG_sec} we briefly introduce the relevant geometrical notions and recall the dynamical structure of metric-affine gravity. Our discussion is different from \cite{Hehl:1995} in that we avoid the use of the anholonomic frame/coframe, and all considerations are based on the traditional (Einsteinian) holonomic coordinate tensor formalism. We pay special attention to the extension of metric-affine gravity to the case of nonminimal coupling of gravity and matter. In section \ref{master_sec} we develop a generalized framework for the analysis of the multi-current conservation laws, and derive general covariant {\sl master equations} of motion for test bodies characterized by an arbitrary set of Noether currents. On the basis of these general results, we then obtain in section \ref{eom_sec} the equations of motion of extended test bodies in metric-affine gravity. The infinite hierarchy of equations for multipole moments up to an arbitrary order is given, and we analyze the lowest orders of approximation in some more detail. In particular we derive the equations of motion of a pole-dipole test body, as well as of a monopolar particle, in section \ref{special_cases_sec}, and compare those to previous results in the literature. Our final conclusions are drawn in section \ref{conclusion_sec}. A brief summary of our conventions and frequently used formulas can be found in the appendices \ref{conventions_app} and \ref{expansion_app}. Appendix \ref{explicit_app} contains some supplementary material on the derivation of the general equations of motion.
Our notations and conventions are those of \cite{Hehl:1995}. In particular, the basic geometrical quantities such as the curvature, torsion, and nonmetricity are defined as in \cite{Hehl:1995}, and we use the Latin alphabet to label the spacetime coordinate indices. Furthermore, the metric has the signature $(+,-,-,-)$. It should be noted that our definition of the metrical energy-momentum tensor is different from the definition used in \cite{Bertolami:etal:2007,Nojiri:2011,Puetzfeld:Obukhov:2013:1}.
\section{Metric-affine gravity}\label{MAG_sec}
The geometrical arena of metric-affine gravity is as follows. The physical spacetime is identified with a four-dimensional smooth manifold $L_4$, which is endowed with a metric $g_{ij}$, and a linear connection $\Gamma_{ki}{}^j$. These structures introduce the physically important notions of lengths, angles, and parallel transport on the spacetime. In general, the geometry of such a manifold is exhaustively characterized by three tensors: the curvature, the torsion and the nonmetricity. They are defined as follows
\begin{eqnarray}
R_{kli}{}^j &:=& \partial_k\Gamma_{li}{}^j - \partial_l\Gamma_{ki}{}^j + \Gamma_{kn}{}^j \Gamma_{li}{}^n - \Gamma_{ln}{}^j\Gamma_{ki}{}^n,\label{curv}\\
T_{kl}{}^i &:=& \Gamma_{kl}{}^i - \Gamma_{lk}{}^i,\label{tors}\\ \label{nonmet}
Q_{kij} &:=& -\,\nabla_kg_{ij} = - \partial_kg_{ij} + \Gamma_{ki}{}^lg_{lj} + \Gamma_{kj}{}^lg_{il}.
\end{eqnarray}
The Riemannian connection $\widehat{\Gamma}_{kj}{}^i$ is uniquely determined by the conditions of vanishing torsion and nonmetricity which yield explicitly
\begin{equation}
\widehat{\Gamma}_{kj}{}^i = {\frac 12}g^{il}(\partial_jg_{kl} + \partial_kg_{lj} - \partial_lg_{kj}).\label{Chr}
\end{equation}
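As a quick symbolic sanity check (our addition, not part of the derivation), one can verify with SymPy that the connection (\ref{Chr}) is indeed torsion-free and metric-compatible. The two-dimensional metric with an arbitrary scale function $a(t)$ used below is a purely illustrative choice:

```python
import sympy as sp

t, x = sp.symbols('t x')
a = sp.Function('a')(t)               # arbitrary scale function (illustrative)
coords = (t, x)
g = sp.Matrix([[1, 0], [0, -a**2]])   # toy 2D metric of signature (+,-)
ginv = g.inv()
n = 2

# Christoffel symbols, eq. (Chr):  Gamma[k][j][i] = \hat\Gamma_{kj}{}^i
Gamma = [[[sum(sp.Rational(1, 2)*ginv[i, l]*(sp.diff(g[k, l], coords[j])
               + sp.diff(g[l, j], coords[k]) - sp.diff(g[k, j], coords[l]))
               for l in range(n))
           for i in range(n)] for j in range(n)] for k in range(n)]

# Vanishing torsion, eq. (tors): \hat\Gamma is symmetric in its lower indices.
assert all(sp.simplify(Gamma[k][j][i] - Gamma[j][k][i]) == 0
           for k in range(n) for j in range(n) for i in range(n))

# Vanishing nonmetricity, eq. (nonmet):  Q_{kij} = -\nabla_k g_{ij} = 0.
for k in range(n):
    for i in range(n):
        for j in range(n):
            Q = (-sp.diff(g[i, j], coords[k])
                 + sum(Gamma[k][i][l]*g[l, j] for l in range(n))
                 + sum(Gamma[k][j][l]*g[i, l] for l in range(n)))
            assert sp.simplify(Q) == 0
```

For this metric the only nonvanishing components are $\widehat{\Gamma}_{xx}{}^t = a\dot{a}$ and $\widehat{\Gamma}_{tx}{}^x = \widehat{\Gamma}_{xt}{}^x = \dot{a}/a$, as the check confirms.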
The deviation of the geometry from the Riemannian one is then conveniently described by the {\it distortion} tensor
\begin{equation}
N_{kj}{}^i := \widehat{\Gamma}_{kj}{}^i - \Gamma_{kj}{}^i.\label{dist}
\end{equation}
The system (\ref{tors}) and (\ref{nonmet}) allows us to find the distortion tensor in terms of the torsion and nonmetricity. Explicitly,
\begin{eqnarray}
N_{kj}{}^i &=& -\,{\frac 12}(T_{kj}{}^i + T^i{}_{kj} + T^i{}_{jk})\nonumber\\
&& +\,{\frac 12}(Q^i{}_{kj} - Q_{kj}{}^i - Q_{jk}{}^i).\label{NTQ}
\end{eqnarray}
Conversely, one can use this to express the torsion and nonmetricity tensors in terms of the distortion,
\begin{eqnarray}
T_{kj}{}^i &=& -\,2N_{[kj]}{}^i,\label{TN}\\
Q_{kij} &=& -\,2N_{k(ij)}.\label{QN}
\end{eqnarray}
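The pair of maps (\ref{NTQ}) and (\ref{TN})-(\ref{QN}) is an exact algebraic inverse. As a numerical sanity check (our addition), this round trip can be verified for a random distortion tensor; since the check is purely algebraic, we take the metric to be the identity so that raising and lowering indices does not permute components:

```python
import numpy as np

rng = np.random.default_rng(42)
# Arbitrary distortion N_{kj}{}^i, stored as N[k, j, i].
N = rng.standard_normal((4, 4, 4))

# Torsion, eq. (TN):  T_{kj}{}^i = -2 N_{[kj]}{}^i
T = np.swapaxes(N, 0, 1) - N                 # T[k, j, i]
# Nonmetricity, eq. (QN):  Q_{kij} = -2 N_{k(ij)}
Q = -(N + np.swapaxes(N, 1, 2))              # Q[k, i, j]

# Reconstruct the distortion from eq. (NTQ):
# N_{kj}{}^i = -1/2 (T_{kj}{}^i + T^i{}_{kj} + T^i{}_{jk})
#              + 1/2 (Q^i{}_{kj} - Q_{kj}{}^i - Q_{jk}{}^i)
N_rec = (-0.5*(T
               + np.einsum('ikj->kji', T)
               + np.einsum('ijk->kji', T))
         + 0.5*(np.einsum('ikj->kji', Q)
                - np.einsum('kji->kji', Q)
                - np.einsum('jki->kji', Q)))

assert np.allclose(N_rec, N)   # distortion recovered exactly
```

The assertion confirms that torsion and nonmetricity together carry exactly the same information as the distortion tensor.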
Substituting (\ref{dist}) into (\ref{curv}), we find the relation between the non-Riemannian and the Riemannian curvature tensors
\begin{equation}
R_{adc}{}^b = \widehat{R}_{adc}{}^b - \widehat{\nabla}_aN_{dc}{}^b + \widehat{\nabla}_dN_{ac}{}^b + N_{an}{}^bN_{dc}{}^n - N_{dn}{}^bN_{ac}{}^n.\label{RRN}
\end{equation}
The hat over a symbol denotes the Riemannian objects (such as the curvature tensor) and the Riemannian operators (such as the covariant derivative) constructed from the Christoffel symbols (\ref{Chr}).
\subsection{Dynamics in metric-affine theory}\label{fieldeqs_sec}
The gravitational effects in the metric-affine theory are described by the set of fundamental variables: the independent metric $g_{ij}$ and connection $\Gamma_{kj}{}^i$. Accordingly, there are two sets of field equations.
Assuming standard minimal coupling, the total Lagrangian of interacting gravitational and matter fields reads
\begin{equation}\label{Ltot}
L = V(g_{ij}, R_{ijk}{}^l, N_{ki}{}^j) + L_{\rm mat}(g_{ij}, \psi^A, \nabla_i\psi^A).
\end{equation}
In general, the gravitational Lagrangian $V$ is constructed as a diffeomorphism invariant function of the curvature, torsion, and nonmetricity. However, in view of the relations (\ref{TN}) and (\ref{QN}), we can limit ourselves to Lagrangian functions that depend arbitrarily on the curvature and the distortion tensors. The matter Lagrangian depends on the matter field $\psi^A$ and its covariant derivative $\nabla_k\psi^A = \partial_k\psi^A -\Gamma_{ki}{}^j\,(\sigma^A{}_B)_j{}^i\,\psi^B$. Here $(\sigma^A{}_B)_j{}^i$ are the generators of general coordinate transformations.
The field equations of metric-affine gravity can be written in several equivalent ways. The standard form is the set of the so-called ``first'' and ``second'' field equations (using the modified covariant derivative defined by ${\stackrel * \nabla}{}_i = \nabla_i + N_{ki}{}^k$):
\begin{eqnarray}
{\stackrel * \nabla}{}_nH^{in}{}_k + {\frac 12}T_{mn}{}^iH^{mn}{}_k - E_k{}^i &=& - \Sigma_k{}^i,\label{1st}\\
{\stackrel * \nabla}{}_lH^{kli}{}_j + {\frac 12}T_{mn}{}^kH^{mni}{}_j - E^{ki}{}_j &=& \Delta^i{}_j{}^k.\label{2nd}
\end{eqnarray}
Here the generalized gravitational field momenta are introduced by
\begin{eqnarray}
H^{kli}{}_j &:=& -\,2{\frac {\partial V}{\partial R_{kli}{}^j}},\label{HH}\\
H^{ki}{}_j &:=& -\,{\frac {\partial V}{\partial T_{ki}{}^j}},\label{HT}\\
M^{kij} &:=& -\,{\frac {\partial V}{\partial Q_{kij}}},\label{HM}
\end{eqnarray}
and the gravitational hypermomentum density is
\begin{equation}\label{EN}
E^{ki}{}_j = - H^{ki}{}_j - M^{ki}{}_j = -\,{\frac {\partial V}{\partial N_{ki}{}^j}}.
\end{equation}
Furthermore, the generalized energy-momentum tensor of the gravitational field is
\begin{equation}
E_k{}^i = \delta_k^i V + {\frac 12}Q_{kln} M^{iln} + T_{kl}{}^n H^{il}{}_n + R_{kln}{}^m H^{iln}{}_m.\label{Eg}
\end{equation}
The sources of the gravitational field are the canonical energy-momentum tensor and the canonical hypermomentum of matter, respectively:
\begin{eqnarray}
\Sigma_k{}^i &:=& {\frac {\partial L_{\rm mat}}{\partial\nabla_i\psi^A}}\,\nabla_k\psi^A - \delta^i_kL_{\rm mat},\label{canD}\\
\Delta^i{}_j{}^k &:=& {\frac {\partial L_{\rm mat}}{\partial \Gamma_{ki}{}^j}} = - {\frac {\partial L_{\rm mat}}{\partial\nabla_k\psi^A}} \,(\sigma^A{}_B)_j{}^i \psi^B.\label{tD}
\end{eqnarray}
It is straightforward to verify that instead of the first field equation (\ref{1st}), one can use the so-called zeroth field equation which reads
\begin{equation}
{\frac 2{\sqrt{-g}}}{\frac {\delta (\sqrt{-g}V)}{\delta g_{ij}}} = t^{ij}.\label{0th}
\end{equation}
On the right-hand side, the matter source is now represented by the metrical energy-momentum tensor which is defined by
\begin{equation}\label{tmet}
t_{ij} := {\frac 2{\sqrt{-g}}}{\frac {\partial (\sqrt{-g}L_{\rm mat})}{\partial g^{ij}}}.
\end{equation}
The system (\ref{1st}) and (\ref{2nd}) is completely equivalent to the system (\ref{0th}) and (\ref{2nd}), and it is a matter of convenience which one is solved.
In order to give an explicit example of physical matter with microstructure, we recall the hyperfluid model \cite{Obukhov:1993}. This is a direct generalization of the general relativistic ideal fluid variational theory \cite{Taub:1954,Schutz:1970} and of the spinning fluid model of Weyssenhoff and Raabe \cite{Weyssenhoff:1947,Obukhov:1987}. Using the variational principle for the hyperfluid \cite{Obukhov:1993}, one derives the canonical energy-momentum and hypermomentum tensors:
\begin{eqnarray}
\Sigma_k{}^i &=& \,v^iP_k - p\left(\delta_k^i - v^iv_k\right),\label{hypS}\\
\Delta^n{}_m{}^i &=& \,v^iJ_m{}^n,\label{hypD}
\end{eqnarray}
where $v^i$ is the 4-velocity of the fluid and $p$ is the pressure. Fluid elements are characterized by their microstructural properties: the momentum density $P_k$ and the intrinsic hypermomentum density $J_m{}^n$.
\subsection{Nonminimal coupling}\label{nonmin_sec}
Let us now consider an extension of the metric-affine theory by allowing the {\it nonminimal coupling} of matter and gravity via the modified Lagrangian
\begin{equation}\label{Lnon}
FL_{\rm mat}(g_{ij}, \psi^A, \nabla_i\psi^A),
\end{equation}
which replaces the second term in (\ref{Ltot}). The coupling function $F = F(g_{ij}, R_{ijk}{}^l, N_{ki}{}^j)$ can depend arbitrarily on its arguments. When $F = 1$, we recover the minimal coupling case.
In the previous paper \cite{Obukhov:Puetzfeld:2014} we derived the conservation laws in such a generalized theory. They read as follows:
\begin{eqnarray}\label{cons1f}
\widehat{\nabla}_j\Delta^i{}_k{}^j &=& -\,U_{jm}{}^{ni}{}_k\Delta^m{}_n{}^j + \Sigma_k{}^i - t_k{}^i,\\
\widehat{\nabla}_j\Sigma_k{}^j &=& -\,V_{j}{}^n{}_k\Sigma_n{}^j - R_{kjm}{}^n\Delta^m{}_n{}^j - {\frac 12}Q_{kj}{}^nt_n{}^j \nonumber \\
&&- A_k\,L_{\rm mat}.\label{cons2f}
\end{eqnarray}
Here we denote $A_k := \widehat{\nabla}_k\log F$, and
\begin{eqnarray}
U_{jmn}{}^{ik} &=& A_j\delta_m^i\delta_n^k - N_{jm}{}^i\delta_n^k + N_{j}{}^k{}_n\delta_m^i,\label{U}\\
V_{jn}{}^k &=& A_j\delta_n^k + N^k{}_{jn}.\label{V}
\end{eqnarray}
\section{General multipolar framework}\label{master_sec}
In this section we derive ``master equations of motion'' for a general extended test body, which is characterized by a set of currents
\begin{equation}
J^{Aj}.\label{JA}
\end{equation}
Normally, these are the so-called Noether currents that correspond to an invariance of the action under a certain symmetry group. However, this is not necessary, and any set of currents is formally allowed. We call $J^{Aj}$ dynamical currents. The generalized index (capital Latin letters $A,B,\dots$) labels different components of the currents.
As the starting point for derivation of the equations of motion for generalized multipole moments, we consider the following conservation law:
\begin{equation}
\widehat{\nabla}_jJ^{Aj} = -\,\Lambda_{jB}{}^A\,J^{Bj} - \Pi^A{}_{\dot{B}}\Xi^{\dot{B}}.\label{dJ}
\end{equation}
On the right-hand side, we introduce objects that can be called material currents
\begin{equation}
\Xi^{\dot{A}}\label{Xi}
\end{equation}
to distinguish them from the dynamical currents $J^{Aj}$. The number of components of the dynamical and material currents is different; hence, we use a different index with a dot, $\dot{A}, \dot{B}, \dots$, the range of which does not coincide with that of $A,B,\dots$. At this stage we do not specify the ranges of the two types of indices; this will be done for the particular examples which we analyze later. As usual, Einstein's summation rule over repeated indices is assumed for the generalized indices as well as for the coordinate indices.
Both sets of currents $J^{Aj}$ and $\Xi^{\dot{A}}$ are constructed from the variables that describe the structure and the properties of matter inside the body. In contrast, the objects
\begin{equation}
\Lambda_{jB}{}^A,\qquad \Pi^A{}_{\dot{B}},
\end{equation}
do not depend on the matter, but they are functions of the external classical fields which act on the body and thereby determine its motion. The list of such external fields includes the electromagnetic, gravitational, and scalar fields.
We will now derive the equations of motion of a test body by utilizing the covariant expansion method of Synge \cite{Synge:1960}. For this we need the following auxiliary formula for the absolute derivative of the integral of an arbitrary bitensor density $\widetilde{B}^{x_1 y_1}=\widetilde{B}^{x_1 y_1}(x,y)$ (the latter is a tensorial function of two spacetime points):
\begin{eqnarray}
{\frac{D}{ds}} \int\limits_{\Sigma(s)} \widetilde{B}^{x_1 y_1} d \Sigma_{x_1} &=& \int\limits_{\Sigma(s)} \widehat{\nabla}_{x_1} \widetilde{B}^{x_1 y_1} w^{x_2} d \Sigma_{x_2} \nonumber \\
&& + \int\limits_{\Sigma(s)} v^{y_2} \widehat{\nabla}_{y_2} \widetilde{B}^{x_1 y_1} d \Sigma_{x_1}.\label{int_aux}
\end{eqnarray}
Here $v^{y_1}:=dx^{y_1}/ds$, $s$ is the proper time, ${\frac{D}{ds}} = v^i\widehat{\nabla}_i$, and the integral is performed over a spatial hypersurface. Note that in our notation the point to which the index of a bitensor belongs can be directly read from the index itself; e.g., $y_{n}$ denotes indices at the point $y$. Furthermore, we will now associate the point $y$ with the world-line of the test body under consideration.
In the formulas below, a tilde marks densities, $\sigma$ denotes Synge's \cite{Synge:1960} world function, with $\sigma^y$ being its first covariant derivative, and $g^y{}_x$ is the parallel propagator for vectors. For objects with more complicated tensorial properties the parallel propagator is straightforwardly generalized to $G^Y{}_X$ and $G^{\dot{Y}}{}_{\dot{X}}$. We will need these generalized propagators to deal with the dynamical and material currents $J^{Aj}$ and $\Xi^{\dot{A}}$. More details are collected in appendix \ref{conventions_app}.
After these preliminaries, we introduce integrated moments for the two types of currents via (for $n = 0,1,\dots$)
\begin{eqnarray}
j^{y_1\cdots y_n Y_0} \!&=&\! (-1)^n\!\!\!\!\int\limits_{\Sigma(s)}\!\!\!\sigma^{y_1}\!\cdots\!\sigma^{y_n}G^{Y_0}{}_{X_0}\widetilde{J}^{X_0 x''}d\Sigma_{x''},\label{j1n}\\
i^{y_1\dots y_{n} Y_0 y'} \!&=&\! (-1)^n\!\!\!\!\int\limits_{\Sigma(s)}\!\!\!\sigma^{y_1}\!\cdots\!\sigma^{y_n}G^{Y_0}{}_{X_0}g^{y'}{}_{x'}\widetilde{J}^{X_0 x'}w^{x''}d\Sigma_{x''},\nonumber\\
&& \label{i1n}\\
m^{y_1\dots y_{n} \dot{Y}_0 } \!&=&\! (-1)^n\!\!\!\!\int\limits_{\Sigma(s)}\!\!\!\sigma^{y_1}\!\cdots\!\sigma^{y_n}G^{\dot{Y}_0}{}_{\dot{X}_0}\widetilde{\Xi}^{\dot{X}_0}w^{x''}d\Sigma_{x''}.\label{m1n}
\end{eqnarray}
Integrating (\ref{dJ}) and making use of (\ref{int_aux}), we find the following ``master equation of motion'' for the generalized multipole moments:
\begin{widetext}
\begin{eqnarray}
{\frac{D}{ds}} j^{y_1\cdots y_n Y_0} \!&=&\! - n\, v^{(y_1} j^{y_2 \dots y_n) Y_0} + n\, i^{(y_1 \dots y_{n-1}|Y_0|y_n)}- \gamma^{Y_0}{}_{Y'y''y_{n+1}}\left(i^{y_1 \dots y_{n}Y'y''} + j^{y_1 \dots y_{n}Y'}v^{y''}\right)\nonumber\\
&&- \Lambda_{y'Y''}{}^{Y_0}i^{y_1 \dots y_{n}Y''y'} - \Lambda_{y'Y''}{}^{Y_0}{}_{;y_{n+1}}i^{y_1 \dots y_{n+1}Y''y'}- \Pi^{Y_0}{}_{\dot{Y}'}m^{y_1 \dots y_{n}\dot{Y}'} - \Pi^{Y_0}{}_{\dot{Y}';y_{n+1}}m^{y_1 \dots y_{n+1}\dot{Y}'}\nonumber\\
&&+ \sum\limits^{\infty}_{k=2}{\frac 1{k!}}\Bigl[-(-1)^k n\, \alpha^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} i^{y_2 \dots y_n)y_{n+1} \dots y_{n+k} Y_0 y'}+ (-1)^k n\, v^{y'} \beta^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} j^{y_2 \dots y_n)y_{n+1} \dots y_{n+k} Y_0}\nonumber\\
&&+ (-1)^k\gamma^{Y_0}{}_{Y'y''y_{n+1}\dots y_{n+k}}\left(i^{y_1 \dots y_{n+k}Y'y''} + j^{y_1 \dots y_{n+k}Y'}v^{y''}\right)- \Lambda_{y'Y''}{}^{Y_0}{}_{;y_{n+1}\dots y_{n+k}}i^{y_1 \dots y_{n+k}Y''y'}\nonumber\\
&&- \Pi^{Y_0}{}_{\dot{Y}';y_{n+1}\dots y_{n+k}}m^{y_1 \dots y_{n+k}\dot{Y}'}\Bigr].\label{master}
\end{eqnarray}
\end{widetext}
\subsection{Electrodynamics in Minkowski spacetime}\label{eom_max}
To see how the general formalism works, let us consider the motion of electrically charged extended bodies under the influence of an electromagnetic field in flat Minkowski spacetime. This problem was analyzed earlier by means of a different approach in \cite{Dixon:1967}.
In this case, it is convenient to recast the set of dynamical currents into the form of a column
\begin{equation}
J^{Aj} = \left(\begin{array}{c}J^j \\ \Sigma^{kj}\end{array}\right),\label{Jmax}
\end{equation}
where $J^j$ is the electric current and $\Sigma^{kj}$ is the energy-momentum tensor. Physically, the structure of the dynamical current is transparent: the matter elements of an extended body are characterized by two types of ``charges'', the electric charge (the upper component) and the mass (the lower component).
The generalized conservation law comprises two components of different tensorial rank:
\begin{equation}
\widehat{\nabla}_j \left(\begin{array}{c}J^j \\ \Sigma^{kj}\end{array}\right) =
\left(\begin{array}{c}0 \\ - F^{kj}J_j\end{array}\right),\label{dJmax}
\end{equation}
where the lower component of the right-hand side describes the usual Lorentz force.
Accordingly, we indeed recover for the dynamical current (\ref{Jmax}) the conservation law in the form (\ref{dJ}) where $\Xi^{\dot{B}} = 0$ and
\begin{equation}
\Lambda_{jB}{}^A = \left(\begin{array}{c|c}0 & 0\\ \hline F_j{}^k & 0\end{array}\right).\label{LamMax}
\end{equation}
The generalized moments (\ref{j1n})-(\ref{m1n}) have the same column structure, reflecting the two physical charges of matter:
\begin{eqnarray}
j^{y_1\cdots y_n Y_0} \!&=&\! \left(\begin{array}{c}j^{y_1\cdots y_n} \\ p^{y_1\cdots y_ny_0} \end{array}\right),\label{jMmax}\\
i^{y_1\cdots y_n Y_0y'} \!&=&\! \left(\begin{array}{c}i^{y_1\cdots y_ny'} \\ k^{y_1\cdots y_ny_0y'} \end{array}\right),\label{iMmax}
\end{eqnarray}
whereas $m^{y_1\dots y_{n} \dot{Y}_0 } =0$.
As a result, the master equation (\ref{master}) reduces to the coupled system of the two
sets of equations for the moments:
\begin{eqnarray}
{\frac{D}{ds}} j^{y_1\cdots y_n} &=& - n\, v^{(y_1} j^{y_2 \dots y_n)} + n\,i^{(y_1 \dots y_{n})},\label{dj1nmax}\\
{\frac{D}{ds}} p^{y_1\cdots y_n y_0} &=& - n\, v^{(y_1} p^{y_2 \dots y_n)y_0} + n\,k^{(y_1 \dots y_{n-1}|y_0| y_{n})}\nonumber\\
&& - \sum\limits^{\infty}_{k=1}{\frac 1{k!}}F_{y'}{}^{y_0}{}_{;y_{n+1}\dots y_{n+k}}i^{y_1 \dots y_{n+k}y'} \nonumber \\
&&- F_{y'}{}^{y_0}i^{y_1 \dots y_{n}y'}.\label{dp1nmax}
\end{eqnarray}
These equations should be compared to the results of \cite{Dixon:1967}.
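As a consistency check (our addition, not part of the original derivation), one can verify that the lowest, single-pole order of this system reproduces the familiar Lorentz force law:

```latex
% n = 0 in (dj1nmax): conservation of the total charge j,
\frac{Dj}{ds} = 0, \qquad
% n = 1 in (dj1nmax), neglecting the dipole moment j^{y_1}:
i^{y_1} = j\, v^{y_1},
% so that n = 0 in (dp1nmax), truncated at monopole order, becomes
\frac{Dp^{y_0}}{ds} = -\,F_{y'}{}^{y_0}\, i^{y'} = -\,j\,F_{y'}{}^{y_0}\, v^{y'}.
```

That is, a structureless particle of total charge $j$ experiences the usual Lorentz force, with the overall sign fixed by the conventions of the signature $(+,-,-,-)$.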
\section{Equations of motion in metric-affine gravity}\label{eom_sec}
We are now in a position to derive the equations of motion for extended test bodies in metric-affine gravity. Introducing the dynamical current
\begin{equation}
J^{Aj} = \left(\begin{array}{c}\Delta^{ikj} \\ \Sigma^{kj}\end{array}\right),\label{JAM}
\end{equation}
and the material current
\begin{equation}
\Xi^{\dot{A}} = \left(\begin{array}{c} t^{ik} \\ L_{\rm mat}\end{array}\right),\label{XIM}
\end{equation}
we then recast the system (\ref{cons1f}) and (\ref{cons2f}) into the generic conservation law (\ref{dJ}), where we now have
\begin{eqnarray}
\Lambda_{jB}{}^A &=& \left(\begin{array}{c|c}U_{ji'k'}{}^{ik} & - \delta^i_j\delta^k_{k'}\\ \hline R^k{}_{ji'k'} & V_{jk'}{}^k\end{array}\right),\label{LamMAG}\\
\Pi^A{}_{\dot{B}} &=& \left(\begin{array}{c|c}\delta^i_{i'}\delta^k_{k'} & 0\\ \hline {\frac 12} Q^k{}_{i'k'} & A^k\end{array}\right).\label{PIMAG}
\end{eqnarray}
As in the previous example of an electrically charged body, the matter elements in metric-affine gravity are also characterized by two ``charges'': the canonical hypermomentum (upper component) and the canonical energy-momentum (lower component). This is reflected in the column structure of the dynamical current (\ref{JAM}). The material current (\ref{XIM}) takes into account the metrical energy-momentum and the matter Lagrangian related to the nonminimal coupling. The multi-index is $A = \{ik,k\}$, whereas $\dot{A} = \{ik,1\}$. Accordingly, the generalized propagator reads
\begin{equation}
G^Y{}_X = \left(\begin{array}{c|c} g^{y_1}{}_{x_1}g^{y_2}{}_{x_2} & 0\\ \hline 0 & g^{y_1}{}_{x_1}\end{array}\right),\label{propMAG}
\end{equation}
and we easily construct the expansion coefficients of its derivatives from the corresponding expansions of the derivatives of the vector propagator $g^{y}{}_{x}$:
\begin{eqnarray}
\gamma^{Y_0}{}_{Y_1y_2\dots y_{k+2}} = \left(\begin{array}{c|c} \gamma^{\{y_0\tilde{y}\}}{}_{\{y'y''\}y_2\dots y_{k+2}} & 0\\ \hline 0 & \gamma^{y_0}{}_{y'y_2\dots y_{k+2}}\end{array}\right),\nonumber\\ \label{propE}
\end{eqnarray}
where we denoted
\begin{equation}
\gamma^{\{y_0\tilde{y}\}}{}_{\{y'y''\}y_2\dots y_{k+2}} = \gamma^{y_0}{}_{y'y_2\dots y_{k+2}}\delta^{\tilde{y}}_{y''} + \gamma^{\tilde{y}}{}_{y''y_2\dots y_{k+2}}\delta^{y_0}_{y'}.\label{ggg}
\end{equation}
In particular, for the first expansion coefficient ($k = 1$), we find
\begin{eqnarray}
\gamma^{\{y_0\tilde{y}\}}{}_{\{y'y''\}y_2y_{3}} &=& {\frac 12}\left(\hat{R}^{y_0}{}_{y'y_2y_3}\delta^{\tilde{y}}_{y''} + \hat{R}^{\tilde{y}}{}_{y''y_2y_3}\delta^{y_0}_{y'}\right),\label{gR1}\\
\gamma^{y_0}{}_{y'y_2y_{3}} &=& {\frac 12}\hat{R}^{y_0}{}_{y'y_2y_3}.\label{gR2}
\end{eqnarray}
For completeness, let us also write down another generalized propagator
\begin{equation}
G^{\dot{Y}}{}_{\dot{X}} = \left(\begin{array}{c|c} g^{y_1}{}_{x_1}g^{y_2}{}_{x_2} & 0\\ \hline 0 & 1\end{array}\right).\label{propMAG2}
\end{equation}
The last step is to write the generalized moments (\ref{j1n})-(\ref{m1n}) in terms of their components:
\begin{eqnarray}
j^{y_1\cdots y_n Y} \!&=&\! \left(\begin{array}{c}h^{y_1\cdots y_ny'y''} \\ p^{y_1\cdots y_ny'} \end{array}\right),\label{jMAG}\\
i^{y_1\dots y_{n} Y y_0} \!&=&\! \left(\begin{array}{c}q^{y_1\cdots y_ny'y''y_0} \\ k^{y_1\cdots y_ny'y_0} \end{array}\right),\label{iMAG}\\
m^{y_1\dots y_{n} \dot{Y} } \!&=&\! \left(\begin{array}{c}\mu^{y_1\cdots y_ny'y''} \\ \xi^{y_1\cdots y_n} \end{array}\right).\label{mMAG}
\end{eqnarray}
For the two most important moments, ``$h$'' stands for the hypermomentum, whereas ``$p$'' stands for the momentum.\footnote{Note that in order to facilitate the comparison with our previous work \cite{Puetzfeld:Obukhov:2013:3}, we provide in appendix \ref{explicit_app} the explicit form of the integrated conservation laws (\ref{cons1f}) and (\ref{cons2f}), as well as the generalized integrated moments (\ref{jMAG}) -- (\ref{mMAG}) in the notation used in \cite{Puetzfeld:Obukhov:2013:3}.} Finally, substituting all of the above into the ``master equation'' (\ref{master}), we obtain the system of multipolar equations of motion for extended test bodies in metric-affine gravity:
\begin{widetext}
\begin{eqnarray}
\frac{D}{ds} h^{y_1 \dots y_n y_a y_b} &=& - n \, v^{(y_1} h^{y_2 \dots y_n) y_a y_b} + n \, q^{(y_1 \dots y_{n-1} | y_a y_b | y_n)} + k^{y_1 \dots y_n y_b y_a} - \mu^{y_1 \dots y_n y_a y_b} \nonumber \\
&& - \frac{1}{2}\widehat{R}^{y_a}{}_{y' y'' y_{n+1}} \left(q^{y_1 \dots y_{n+1} y' y_b y''} + v^{y''} h^{y_1 \dots y_{n+1} y' y_b}\right) \nonumber \\
&& - \frac{1}{2}\widehat{R}^{y_b}{}_{y' y'' y_{n+1}}\left( q^{y_1 \dots y_{n+1} y_a y' y''} + v^{y''} h^{y_1 \dots y_{n+1} y_a y'}\right) \nonumber \\
&& - U_{y_0y'y''}{}^{y_ay_b} q^{y_1 \dots y_n y' y'' y_0} - U_{y_0y'y''}{}^{y_ay_b}{}_{;y_{n+1}} q^{y_1 \dots y_{n+1} y' y'' y_0}\nonumber\\
&& + \sum^{\infty}_{k=2} {\frac{1}{k!}}\Bigg[ (-1)^k \gamma^{y_a}{}_{y' y'' y_{n+1} \dots y_{n+k}}\left( q^{y_1 \dots y_{n+k} y' y_b y''} + v^{y''} h^{y_1 \dots y_{n+k} y' y_b}\right) \nonumber \\
&& + (-1)^k \gamma^{y_b}{}_{y' y'' y_{n+1} \dots y_{n+k}}\left( q^{y_1 \dots y_{n+k} y_a y' y''} + v^{y''} h^{y_1 \dots y_{n+k} y_a y'}\right) \nonumber \\
&& - (-1)^k n \, \alpha^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} q^{y_2 \dots y_n)y_{n+1} \dots y_{n+k} y_a y_b y'} + (-1)^k v^{y'} n \, \beta^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} h^{y_2 \dots y_n)y_{n+1} \dots y_{n+k} y_a y_b} \nonumber \\
&& - U_{y_0y'y''}{}^{y_ay_b}{}_{;y_{n+1} \dots y_{n+k}} q^{y_1 \dots y_{n+k} y' y'' y_0} \Bigg], \label{int_eom_1_general}
\end{eqnarray}
\begin{eqnarray}
\frac{D}{ds} p^{y_1 \dots y_n y_a}&=& - n \, v^{(y_1} p^{y_2 \dots y_n) y_a} + n \, k^{(y_1 \dots y_{n-1} | y_a | y_n)} - A^{y_a} \xi^{y_1 \dots y_n} - A^{y_a}{}_{;y_{n+1}} \xi^{y_1 \dots y_{n+1}} \nonumber\\
&& - V_{y''y'}{}^{y_a} k^{y_1 \dots y_n y' y''} - V_{y''y'}{}^{y_a}{}_{;y_{n+1}} k^{y_1 \dots y_{n+1} y' y''} - \frac{1}{2} \widehat{R}^{y_a}{}_{y' y'' y_{n+1}}\left(k^{y_1 \dots y_{n+1} y' y''} + v^{y''} p^{y_1 \dots y_{n+1} y'} \right)\nonumber \\
&& - R^{y_a}{}_{y_0 y' y''} q^{y_1 \dots y_n y' y'' y_0} - R^{y_a}{}_{y_0 y' y'';y_{n+1}} q^{y_1 \dots y_{n+1} y' y'' y_0} \nonumber \\
&& - \frac{1}{2} Q^{y_a}{}_{y'' y'} \mu^{y_1 \dots y_n y' y''} - \frac{1}{2} Q^{y_a}{}_{y'' y';y_{n+1}} \mu^{y_1 \dots y_{n+1} y' y''} \nonumber \\
&& + \sum^{\infty}_{k=2} \frac{1}{k!}\Bigg[ (-1)^k \gamma^{y_a}{}_{y' y'' y_{n+1} \dots y_{n+k}}\left( k^{y_1 \dots y_{n+k} y' y''} + v^{y''} p^{y_1 \dots y_{n+k} y'}\right) \nonumber \\
&& - (-1)^k n \, \alpha^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} k^{y_2 \dots y_n)y_{n+1} \dots y_{n+k} y_a y' } + (-1)^k n \, v^{y'} \beta^{(y_1}{}_{y' y_{n+1} \dots y_{n+k}} p^{y_2 \dots y_n ) y_{n+1} \dots y_{n+k} y_a} \nonumber \\
&& - R^{y_a}{}_{y_0 y' y'';y_{n+1} \dots y_{n+k}} q^{y_1 \dots y_{n+k} y' y'' y_0} - V_{y''y'}{}^{y_a}{}_{;y_{n+1} \dots y_{n+k}} k^{y_1 \dots y_{n+k} y' y''} \nonumber \\
&& - \frac{1}{2} Q^{y_a}{}_{y'' y';y_{n+1} \dots y_{n+k}} \mu^{y_1 \dots y_{n+k} y' y''} - A^{y_a}{}_{;y_{n+1} \dots y_{n+k}} \xi^{y_1 \dots y_{n+k}} \Bigg].
\label{int_eom_2_general}
\end{eqnarray}
\end{widetext}
\section{Special cases} \label{special_cases_sec}
The general equations of motion (\ref{int_eom_1_general}) and (\ref{int_eom_2_general}) are valid to {\it any} multipolar order. In the following sections we focus on some special cases; in particular, we work out the two lowest multipolar orders of approximation and consider the explicit form of the equations of motion in special geometries.
\subsection{General pole-dipole equations of motion}
From (\ref{int_eom_1_general}) and (\ref{int_eom_2_general}), we can derive the general pole-dipole equations of motion. The relevant moments to be kept at this order of approximation are $p^a, p^{ab}, h^{ab}, q^{abc}, k^{ab}, k^{abc}, \mu^{ab}, \mu^{abc}, \xi^{a},$ and $\xi$. Since all objects are now evaluated on the world-line, we switch back to the usual tensor notation.
For $n=1$ and $n=0$, eq.\ (\ref{int_eom_1_general}) yields
\begin{eqnarray}
0 &=& k^{acb} - \mu^{abc} + q^{bca} - v^a h^{bc}, \label{eom_1_n_1} \\
\frac{D}{ds} h^{ab} &=& k^{ba} - \mu^{ab} - U_{cde}{}^{ab} q^{dec}. \label{eom_1_n_0}
\end{eqnarray}
Furthermore for $n=2,1,0$ equation (\ref{int_eom_2_general}) yields
\begin{eqnarray}
0 &=& k^{(a|c|b)} - v^{(a} p^{b)c}, \label{eom_2_n_2} \\
\frac{D}{ds} p^{ab} &=& k^{ba} - v^a p^b - A^b \xi^a - V_{dc}{}^b k^{acd} - \frac{1}{2} Q^b{}_{dc} \mu^{acd},\nonumber \\
\label{eom_2_n_1} \\
\frac{D}{ds} p^{a} &=& - V_{cb}{}^a k^{bc} - R^{a}{}_{dbc} q^{bcd} - \frac{1}{2} Q^a{}_{cb} \mu^{bc} \nonumber \\
&& - A^a \xi -\frac{1}{2} \widehat{R}^a{}_{cdb} \left(k^{bcd} + v^d p^{bc} \right) \nonumber \\
&&- V_{dc}{}^a{}_{;b} k^{bcd} - \frac{1}{2} Q^a{}_{dc;b} \mu^{bcd} - A^a{}_{;b} \xi^b. \label{eom_2_n_0}
\end{eqnarray}
\subsubsection{Rewriting the equations of motion}
Let us decompose (\ref{eom_1_n_1}) and (\ref{eom_1_n_0}) into symmetric and skew-symmetric parts:
\begin{eqnarray}
\mu^{abc} &=& k^{a(bc)} + q^{(bc)a} - v^a h^{(bc)}, \label{eom_1_n_1S} \\
0 &=& -\,k^{a[bc]} + q^{[bc]a} - v^a h^{[bc]}, \label{eom_1_n_1A} \\
\mu^{ab} &=& -\,\frac{D}{ds}h^{(ab)} + k^{(ab)} - U_{cde}{}^{(ab)} q^{dec}, \label{eom_1_n_0S}\\
\frac{D}{ds} h^{[ab]} &=& -\,k^{[ab]} - U_{cde}{}^{[ab]} q^{dec}. \label{eom_1_n_0A}
\end{eqnarray}
As a result, we can express the moments symmetric in the last two indices $\mu^{ab} = \mu^{(ab)}$ and $\mu^{cab} = \mu^{c(ab)}$ (in general, this is possible also for an arbitrary order $\mu^{c_1\dots c_nab} = \mu^{c_1\dots c_n(ab)}$) in terms of the other moments.
Let us denote the skew-symmetric part $s^{ab} := h^{[ab]}$, as this greatly simplifies the subsequent manipulations and the comparison with \cite{Puetzfeld:Obukhov:2013:3}.
The system of the two equations (\ref{eom_2_n_2}) and (\ref{eom_1_n_1A}) can be resolved in terms of the 3rd rank $k$-moment. The result reads explicitly
\begin{eqnarray}
k^{abc} &=& v^a p^{cb} + v^c\left(p^{[ab]} - s^{ab}\right)\nonumber\\
&& + v^b\left(p^{[ac]} - s^{ac}\right) + v^a\left(p^{[bc]} - s^{bc}\right)\nonumber\\
&& + q^{[ab]c} + q^{[ac]b} + q^{[bc]a}.\label{kabc}
\end{eqnarray}
This yields some useful relations:
\begin{eqnarray}
k^{a[bc]} &=& - v^as^{bc} + q^{[bc]a},\label{ka1}\\
k^{[ab]c} &=& v^{[a} p^{|c|b]} + v^c\left(p^{[ab]} - s^{ab}\right) + q^{[ab]c}.\label{ka2}
\end{eqnarray}
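As a purely algebraic sanity check (our addition), one can confirm numerically that the explicit solution (\ref{kabc}) indeed satisfies (\ref{eom_2_n_2}), (\ref{ka1}), and (\ref{ka2}). Index placement is immaterial for this check, so plain NumPy arrays with random components suffice:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(4)                         # 4-velocity (normalization irrelevant)
p = rng.standard_normal((4, 4))                    # p^{ab}
q = rng.standard_normal((4, 4, 4))                 # q^{abc}
s = rng.standard_normal((4, 4)); s = 0.5*(s - s.T)  # s^{ab} = h^{[ab]}

pA  = 0.5*(p - p.T)                        # p^{[ab]}
qab = 0.5*(q - np.swapaxes(q, 0, 1))       # q^{[ab]c}

# Explicit 3rd-rank k-moment, eq. (kabc):
k = (np.einsum('a,cb->abc', v, p)
     + np.einsum('c,ab->abc', v, pA - s)
     + np.einsum('b,ac->abc', v, pA - s)
     + np.einsum('a,bc->abc', v, pA - s)
     + qab                                 # q^{[ab]c}
     + np.swapaxes(qab, 1, 2)              # q^{[ac]b}
     + np.einsum('bca->abc', qab))         # q^{[bc]a}

# Eq. (eom_2_n_2):  k^{(a|c|b)} = v^{(a} p^{b)c}
lhs = 0.5*(np.einsum('acb->abc', k) + np.einsum('bca->abc', k))
rhs = 0.5*(np.einsum('a,bc->abc', v, p) + np.einsum('b,ac->abc', v, p))
assert np.allclose(lhs, rhs)

# Eq. (ka1):  k^{a[bc]} = -v^a s^{bc} + q^{[bc]a}
assert np.allclose(0.5*(k - np.swapaxes(k, 1, 2)),
                   -np.einsum('a,bc->abc', v, s) + np.einsum('bca->abc', qab))

# Eq. (ka2):  k^{[ab]c} = v^{[a} p^{|c|b]} + v^c (p^{[ab]} - s^{ab}) + q^{[ab]c}
assert np.allclose(0.5*(k - np.swapaxes(k, 0, 1)),
                   0.5*(np.einsum('a,cb->abc', v, p) - np.einsum('b,ca->abc', v, p))
                   + np.einsum('c,ab->abc', v, pA - s) + qab)
```

All three assertions hold for arbitrary inputs, since (\ref{kabc}) solves the linear system identically in $v$, $p$, $s$, and $q$.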
The next step is to use the equations (\ref{eom_1_n_0S}) and (\ref{eom_1_n_1S}) together with (\ref{kabc}) and substitute the $\mu$-moments and $k$-moments into (\ref{eom_1_n_0}) and (\ref{eom_2_n_1})-(\ref{eom_2_n_0}). This yields a system that depends only on the $p$, $h$, $q$, and $\xi$ moments.
Let us start with the analysis of (\ref{eom_2_n_0}). The latter contains the combination $k^{[b|c|d]} + v^{[d}p^{b]c}$ where the skew symmetry is imposed by the contraction with the Riemann curvature tensor which is antisymmetric in the last two indices. Making use of (\ref{kabc}), we derive
\begin{equation}
k^{[a|c|b]} + v^{[b}p^{a]c} = \kappa^{abc} + \kappa^{acb} - \kappa^{bca},\label{kka}
\end{equation}
where we introduced the abbreviation
\begin{equation}
\kappa^{abc} = v^c\left(p^{[ab]} - s^{ab}\right) + q^{[ab]c}.\label{kappa}
\end{equation}
Note that by construction $\kappa^{abc} = \kappa^{[ab]c}$.
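The identity (\ref{kka}) can likewise be verified numerically (our addition, with random components and trivial index placement):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(4)
p = rng.standard_normal((4, 4))
q = rng.standard_normal((4, 4, 4))
s = rng.standard_normal((4, 4)); s = 0.5*(s - s.T)

pA  = 0.5*(p - p.T)
qab = 0.5*(q - np.swapaxes(q, 0, 1))

# k^{abc} from eq. (kabc):
k = (np.einsum('a,cb->abc', v, p)
     + np.einsum('c,ab->abc', v, pA - s)
     + np.einsum('b,ac->abc', v, pA - s)
     + np.einsum('a,bc->abc', v, pA - s)
     + qab + np.swapaxes(qab, 1, 2) + np.einsum('bca->abc', qab))

# kappa^{abc} = v^c (p^{[ab]} - s^{ab}) + q^{[ab]c}, eq. (kappa):
kappa = np.einsum('c,ab->abc', v, pA - s) + qab
assert np.allclose(kappa, -np.swapaxes(kappa, 0, 1))   # kappa^{abc} = kappa^{[ab]c}

# Eq. (kka):  k^{[a|c|b]} + v^{[b} p^{a]c} = kappa^{abc} + kappa^{acb} - kappa^{bca}
lhs = (0.5*(np.einsum('acb->abc', k) - np.einsum('bca->abc', k))
       + 0.5*(np.einsum('b,ac->abc', v, p) - np.einsum('a,bc->abc', v, p)))
rhs = kappa + np.swapaxes(kappa, 1, 2) - np.einsum('bca->abc', kappa)
assert np.allclose(lhs, rhs)
```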
Then by making use of the Ricci identity we find
\begin{eqnarray}
-\,{\frac 12} \widehat{R}^a{}_{cdb} \left(k^{bcd} + v^d p^{bc} \right) &=&
\widehat{R}^a{}_{bcd}\left[q^{[cd]b} \right.\nonumber\\
&&\left. + v^b\left(p^{[cd]} - s^{cd}\right)\right].\label{Rq}
\end{eqnarray}
Substituting $k^{bc}$ from (\ref{eom_2_n_1}) and $\mu^{bc}$ from (\ref{eom_1_n_0S}), we find after some algebra
\begin{eqnarray}
&&-\,V_{cb}{}^ak^{bc} - {\frac 12}Q^a{}_{cb}\mu^{bc} = - \,A_b{\frac {Dp^{ba}}{ds}} - N^a{}_{cd}{\frac {Dh^{cd}}{ds}}\nonumber\\
&&- \left(p^a + N^a{}_{cd}h^{cd}\right)v^bA_b - A^aA^b\xi_b - k^{bac}A_bA_c\nonumber\\
&&+ \left(N^a{}_{nb}N_{dc}{}^n - N^a{}_{cn}N_d{}^n{}_b\right)q^{cbd}.\label{VkQmu1}
\end{eqnarray}
Further simplification is achieved by noticing that
\begin{eqnarray}
v^bA_b &=& {\frac {DA}{ds}},\label{vA}\\
k^{bac}A_bA_c &=& p^{ca}A_c{\frac {DA}{ds}},\label{kAA}
\end{eqnarray}
where we used (\ref{eom_2_n_2}) and recalled that $A_b = A_{;b}$.
Analogously, taking $k^{b[cd]}$ from (\ref{ka1}) and $\mu^{bcd}$ from (\ref{eom_1_n_1S}), we derive
\begin{eqnarray}
&&-\,V_{dc}{}^a{}_{;b}k^{bcd} - {\frac 12}Q^a{}_{dc;b}\mu^{bcd} = - \,A_{b;c}k^{cab} \nonumber\\
&& + N^a{}_{cd;b}q^{cdb} - N^a{}_{cd;b}v^bh^{cd}.\label{VkQmu2}
\end{eqnarray}
We can again use $A_b = A_{;b}$ and (\ref{eom_2_n_2}) to simplify
\begin{equation}
- \,A_{b;c}k^{cab} = - p^{ba}{\frac {DA_b}{ds}}.\label{kAd}
\end{equation}
After these preliminary calculations, we substitute (\ref{Rq})-(\ref{kAd}) into (\ref{eom_2_n_0}) to recast the latter into
\begin{eqnarray}
&& {\frac {D}{ds}}\left(Fp^a + FN^a{}_{cd}h^{cd} + p^{ba}\widehat{\nabla}_bF\right) \nonumber\\
&& = F \widehat{R}^a{}_{bcd}v^b\left(p^{[cd]} - s^{cd}\right)\nonumber\\
&& - Fq^{cbd}\left[R^a{}_{dcb} - \widehat{R}^a{}_{dcb} - N^a{}_{cb;d}\right.\nonumber\\
&& \left. - N^a{}_{nb}N_{dc}{}^n + N^a{}_{cn}N_d{}^n{}_b\right]\nonumber\\
&& - FA^a\left(\xi + \xi^bA_b\right) - F\xi^bA^a{}_{;b}.\label{DPa}
\end{eqnarray}
Finally, combining (\ref{eom_2_n_1}) and (\ref{eom_1_n_0}) to eliminate $k^{ba}$ we derive the equation
\begin{eqnarray}
{\frac {D}{ds}}\left(p^{ab} - h^{ab}\right) &=& \mu^{ab} - v^a\left(p^b + N^b{}_{cd}h^{cd}\right)\nonumber\\
&& +\,q^{cda}N^b{}_{cd} - q^{cbd}N_{dc}{}^a + q^{acd}N_d{}^b{}_c\nonumber\\
&& -\,\xi^aA^b + (q^{abc} - k^{abc})A_c.\label{Dpab}
\end{eqnarray}
Following \cite{Puetzfeld:Obukhov:2013:3}, we introduce the total orbital and the total spin angular momenta
\begin{equation}
L^{ab} := 2p^{[ab]},\qquad S^{ab} := -2h^{[ab]},\label{LS}
\end{equation}
and define the generalized total energy-momentum 4-vector and the generalized total angular momentum by
\begin{eqnarray}
{\cal P}^a &:=& F(p^a + N^a{}_{cd}h^{cd}) + p^{ba}\widehat{\nabla}_bF,\label{Ptot}\\
{\cal J}^{ab} &:=& F(L^{ab} + S^{ab}).\label{Jtot}
\end{eqnarray}
Then, taking into account the identity (\ref{RRN}) which with the help of the raising and lowering of indices can be recast into
\begin{eqnarray}
\widehat{\nabla}^aN_{dcb} &=& - R^a{}_{dcb} + \widehat{R}^a{}_{dcb} + N^a{}_{cb;d}\nonumber\\
&& + N^a{}_{nb}N_{dc}{}^n - N_d{}^n{}_bN^a{}_{cn},\label{DNRR}
\end{eqnarray}
we rewrite the pole-dipole equations of motion (\ref{DPa}) and (\ref{Dpab}) in the final form
\begin{eqnarray}
{\frac {D{\cal P}^a}{ds}} &=& {\frac 12}\widehat{R}^a{}_{bcd}v^b{\cal J}^{cd} + Fq^{cbd}\widehat{\nabla}^aN_{dcb} \nonumber\\
&& -\, \xi\widehat{\nabla}^aF - \xi^b\widehat{\nabla}_b\widehat{\nabla}^aF,\label{DPtot}\\
{\frac {D{\cal J}^{ab}}{ds}} &=& -\,2v^{[a}{\cal P}^{b]} + 2F(q^{cd[a}N^{b]}{}_{cd} + q^{c[a|d|}N_{dc}{}^{b]}\nonumber\\
&& + q^{[a|cd|}N_d{}^{b]}{}_c) - 2\xi^{[a}\widehat{\nabla}^{b]}F.\label{DJtot}
\end{eqnarray}
The last equation arises as the skew-symmetric part of (\ref{Dpab}), whereas the symmetric part of the latter is a non-dynamical relation that determines the $\mu^{ab}$ moment
\begin{eqnarray}
\mu^{ab} &=& {\frac {D\Upsilon^{ab}}{ds}} + {\frac 1F} v^{(a} \left( {\cal P}^{b)} + {\cal J}^{b)c}A_c \right) + \xi^{(a}A^{b)}\nonumber\\
&& -\,q^{cd(a}N^{b)}{}_{cd} + q^{c(a|d|}N_{dc}{}^{b)} - q^{(a|cd|}N_d{}^{b)}{}_c\nonumber\\
&& +\,(q^{[ac]b} + q^{[bc]a} - q^{(ab)c})A_c.\label{muabPD}
\end{eqnarray}
Here the symmetric moment of the total hypermomentum is introduced via
\begin{equation}
\Upsilon^{ab} := p^{(ab)} - h^{(ab)}.\label{hypermom}
\end{equation}
\subsection{Coupling to the post-Riemannian geometry: Fine structure}
Let us look more carefully at how the post-Riemannian pieces of the gravitational field couple to extended test bodies. At first, we notice that the generalized energy-momentum vector (\ref{Ptot}) contains the term $N^a{}_{cd}h^{cd}$ that describes the direct interaction of the distortion (torsion plus nonmetricity) with the intrinsic dipole moment of the hypermomentum. Decomposing the latter into the skew-symmetric (spin) part and the symmetric (proper hypermomentum + dilation) part, we find
\begin{equation}
N^a{}_{cd}h^{cd} = -\,{\frac 12}N^a{}_{[cd]}S^{cd} - {\frac 12}Q^a{}_{cd}h^{(cd)}.\label{Nh}
\end{equation}
Here we made use of (\ref{QN}). This is quite consistent with the gauge-theoretic structure of metric-affine gravity. The second term shows that the intrinsic proper hypermomentum and the dilation moment couple to the nonmetricity, whereas the first term displays the typical spin-torsion coupling.
Similar observations can be made for the coupling of the higher moments which appear on the right-hand sides of (\ref{DPtot}) and (\ref{DJtot}), and which thus determine the force and torque acting on an extended body due to the post-Riemannian gravitational field. In order to see this, let us introduce the decomposition
\begin{equation}
-\,{\frac 12}q^{abc} = {\stackrel {d}{q}}{}^{abc} + {\stackrel {s}{q}}{}^{cab}\label{qqq}
\end{equation}
into the two pieces
\begin{eqnarray}
{\stackrel {d}{q}}{}^{abc} &:=& {\frac 12}\left(q^{[ac]b} + q^{[bc]a} - q^{(ab)c}\right),\label{qd}\\
{\stackrel {s}{q}}{}^{abc} &:=& {\frac 12}\left(q^{[ab]c} + q^{[ac]b} - q^{[bc]a}\right).\label{qs}
\end{eqnarray}
The overscript ``$d$'' and ``$s$'' notation shows the relevance of these objects to the dilation plus proper hypermomentum and to the spin, respectively. By construction, we have the following algebraic properties
\begin{equation}
{\stackrel {d}{q}}{}^{[ab]c} \equiv 0,\qquad {\stackrel {s}{q}}{}^{(ab)c} \equiv 0.\label{qq0}
\end{equation}
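The properties (\ref{qq0}), as well as the decomposition (\ref{qqq}) itself, are straightforward to verify numerically. The following sketch uses a randomly generated array as a placeholder for the physical moment $q^{abc}$ (the entries carry no physical meaning; only the index symmetries are being tested):

```python
import numpy as np

rng = np.random.default_rng(42)
q = rng.normal(size=(4, 4, 4))              # placeholder moment q^{abc}

# Relabel the index slots of q, e.g. T('acb')[a, b, c] = q^{acb}
T = lambda spec: np.einsum(spec + '->abc', q)

# Definitions (qd) and (qs)
qd = 0.5 * (0.5 * (T('acb') - T('cab'))     #  q^{[ac]b}
            + 0.5 * (T('bca') - T('cba'))   #  q^{[bc]a}
            - 0.5 * (T('abc') + T('bac')))  # -q^{(ab)c}
qs = 0.5 * (0.5 * (T('abc') - T('bac'))     #  q^{[ab]c}
            + 0.5 * (T('acb') - T('cab'))   #  q^{[ac]b}
            - 0.5 * (T('bca') - T('cba')))  # -q^{[bc]a}

# Algebraic properties (qq0): qd^{[ab]c} = 0 and qs^{(ab)c} = 0
assert np.allclose(qd, np.einsum('bac->abc', qd))
assert np.allclose(qs, -np.einsum('bac->abc', qs))

# Decomposition (qqq): -q^{abc}/2 = qd^{abc} + qs^{cab}
assert np.allclose(-0.5 * q, qd + np.einsum('cab->abc', qs))
```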
Making use of the decomposition (\ref{qqq}) and of the explicit structure of the distortion (\ref{NTQ}), we then recast the equations of motion (\ref{DPtot}) and (\ref{DJtot}) into
\begin{eqnarray}
{\frac {D{\cal P}^a}{ds}} &=& {\frac 12}\widehat{R}^a{}_{bcd}v^b{\cal J}^{cd}\nonumber\\
&& +\,F{\stackrel {s}{q}}{}^{cbd}\widehat{\nabla}^a T_{cbd} + F{\stackrel {d}{q}}{}^{cbd}\widehat{\nabla}^a Q_{dcb}\nonumber\\
&& -\,\xi\widehat{\nabla}^aF - \xi^b\widehat{\nabla}_b\widehat{\nabla}^aF,\label{DPdec}\\
{\frac {D{\cal J}^{ab}}{ds}} &=& -\,2v^{[a}{\cal P}^{b]}\nonumber\\
&& +\,2F({\stackrel {s}{q}}{}^{cd[a}T_{cd}{}^{b]} + 2{\stackrel {s}{q}}{}^{[a|cd|}T^{b]}{}_{cd})\nonumber\\
&& +\, 2F({\stackrel {d}{q}}{}^{cd[a}Q^{b]}{}_{cd} + 2{\stackrel {d}{q}}{}^{[a|dc|}Q_{cd}{}^{b]})\nonumber\\
&& -\,2\xi^{[a}\widehat{\nabla}^{b]}F.\label{DJdec}
\end{eqnarray}
Now we clearly see the fine structure of the coupling of extended bodies to the post-Riemannian geometry. The first lines in the equations of motion describe the usual Mathisson-Papapetrou force and torque. They depend on the Riemannian geometry only. A body with the nontrivial moment (\ref{qs}) is affected by the torsion field, whereas the nontrivial moment (\ref{qd}) feels the nonmetricity. This explains the different physical meaning of the higher moments (\ref{qd}) and (\ref{qs}). In addition, the last lines in (\ref{DPdec}) and (\ref{DJdec}) describe contributions due to the nonminimal coupling.
\subsection{General monopolar equations of motion}
At the monopolar order we have the nontrivial moments $p^a, k^{ab}, \mu^{ab}$ and $\xi$. The nontrivial equations of motion then arise from eq.\ (\ref{int_eom_1_general}) for $n = 0$ and from eq.\ (\ref{int_eom_2_general}) for $n = 1$ and $n = 0$:
\begin{eqnarray}
0 &=& k^{ba} - \mu^{ab},\label{Mono1}\\
0 &=& k^{ba} - v^a p^b,\label{Mono2}\\
{\frac {Dp^a}{ds}} &=& -\,V_{cb}{}^ak^{bc} - {\frac 12}Q^a{}_{cb}\mu^{bc} - A^a\xi.\label{Mono3}
\end{eqnarray}
The first two equations (\ref{Mono1}) and (\ref{Mono2}) yield
\begin{equation}
k^{[ab]} = 0,\qquad v^{[a} p^{b]} = 0,\label{vp}
\end{equation}
and substituting (\ref{Mono1}), (\ref{Mono2}) and (\ref{vp}) into (\ref{Mono3}) we find
\begin{equation}
{\frac {D(Fp^a)}{ds}} = -\,\xi\widehat{\nabla}^aF.\label{Mono4}
\end{equation}
From (\ref{vp}) we have $p^a = Mv^a$ with the mass $M := v^ap_a$, and this allows us to recast (\ref{Mono4}) into the final form
\begin{equation}
M{\frac {Dv^a}{ds}} = -\,\xi(g^{ab} - v^av^b){\frac {\widehat{\nabla}_bF}F}.\label{Mono5}
\end{equation}
Hence, in general the motion of nonminimally coupled monopole test bodies is nongeodetic. Furthermore, the general monopole equation of motion (\ref{Mono5}) reveals an interesting feature of theories with nonminimal coupling. There is an ``indirect'' coupling, i.e.\ through the coupling function $F(g_{ij}, R_{ijk}{}^l,T_{ij}{}^k,Q_{kij})$, of post-Riemannian spacetime features to structureless test bodies.
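The projector structure of (\ref{Mono5}) guarantees that the force is orthogonal to the 4-velocity, so the normalization $v^av_a = 1$ is preserved along the worldline. A minimal numerical illustration in flat spacetime (the signature convention and the gradient values below are illustrative assumptions, not taken from the text):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric (assumed signature)
g_inv = np.linalg.inv(g)

# A normalised 4-velocity, v^a v_a = 1 (boost with speed beta along x)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
v = np.array([gamma, gamma * beta, 0.0, 0.0])
v_low = g @ v                            # v_a
assert np.isclose(v_low @ v, 1.0)

# The projector g^{ab} - v^a v^b annihilates v_b ...
P = g_inv - np.outer(v, v)
assert np.allclose(P @ v_low, 0.0)

# ... so the right-hand side of (Mono5) is orthogonal to the 4-velocity
grad_lnF = np.array([0.3, -1.2, 0.7, 0.1])   # placeholder for (grad F)/F
force = -P @ grad_lnF                        # direction of the force in (Mono5)
assert np.isclose(v_low @ force, 0.0)
```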
\subsection{Weyl-Cartan spacetime}
In Weyl-Cartan spacetime the nonmetricity reads $Q_{kij} = Q_kg_{ij}$, where $Q_k$ is the Weyl covector. Hence the distortion is given by
\begin{eqnarray}\label{wc_distortion}
N_{kj}{}^i = K_{kj}{}^i + {\frac{1}{2}}\left(Q^i g_{kj} - Q_k\delta^i_j - Q_j\delta^i_k\right).
\end{eqnarray}
The contortion tensor is constructed from the torsion,
\begin{equation}
K_{kj}{}^i = -\,{\frac 12}(T_{kj}{}^i + T^i{}_{kj} + T^i{}_{jk}).\label{KT}
\end{equation}
As a result, the generalized momentum (\ref{Ptot}) in Weyl-Cartan spacetime takes the form
\begin{eqnarray}
{\cal P}^a = Fp^a - {\frac F2}\left(K^a{}_{cd} S^{cd} - Q_bS^{ba} + Q^aD\right) + p^{ba}\widehat{\nabla}_bF.\nonumber\\
\label{wc_ptot}
\end{eqnarray}
Here we introduced the {\it intrinsic dilation} moment $D := g_{ab}h^{ab}$.
Substituting the distortion (\ref{wc_distortion}) into (\ref{DPtot}) and (\ref{DJtot}), we find the pole-dipole equations of motion in the Weyl-Cartan spacetime:
\begin{eqnarray}
{\frac {D{\cal P}^a}{ds}} &=& {\frac 12}\widehat{R}^a{}_{bcd}v^b{\cal J}^{cd} + F{\stackrel {s}{q}}{}^{cbd}\widehat{\nabla}^a T_{cbd} \nonumber\\
&& +\,Z^b\widehat{\nabla}^aQ_b -\xi\widehat{\nabla}^aF - \xi^b\widehat{\nabla}_b\widehat{\nabla}^aF,\label{DPWC}\\
{\frac {D{\cal J}^{ab}}{ds}} &=& -\,2v^{[a}{\cal P}^{b]} + 2F({\stackrel {s}{q}}{}^{cd[a}T_{cd}{}^{b]} + 2{\stackrel {s}{q}}{}^{[a|cd|}T^{b]}{}_{cd})\nonumber\\
&& + 2FZ^{[a}Q^{b]} - 2\xi^{[a}\widehat{\nabla}^{b]}F.\label{DJWC}
\end{eqnarray}
Here we introduced the trace of the modified moment (\ref{qd}),
\begin{eqnarray}
Z^a := g_{bc}{\stackrel {d}{q}}{}^{bca} = {\frac 12}g_{bc}\left(q^{bac} - q^{bca} - q^{abc}\right).\label{Za}
\end{eqnarray}
It is coupled to the Weyl nonmetricity.
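The equality of the two expressions for $Z^a$ in (\ref{Za}), i.e.\ the trace of the modified moment (\ref{qd}) and its expanded form, is easily checked numerically (the moment entries below are random placeholders; any symmetric metric works):

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.normal(size=(4, 4, 4))            # placeholder moment q^{abc}
g = np.diag([1.0, -1.0, -1.0, -1.0])      # metric (any symmetric g works here)

# Modified moment (qd): qd^{abc} = (q^{[ac]b} + q^{[bc]a} - q^{(ab)c}) / 2
T = lambda spec: np.einsum(spec + '->abc', q)
qd = 0.5 * (0.5 * (T('acb') - T('cab'))
            + 0.5 * (T('bca') - T('cba'))
            - 0.5 * (T('abc') + T('bac')))

# Z^a as the trace g_bc qd^{bca}, and as the expanded form in (Za)
Z_trace = np.einsum('bc,bca->a', g, qd)
Z_expanded = 0.5 * (np.einsum('bc,bac->a', g, q)
                    - np.einsum('bc,bca->a', g, q)
                    - np.einsum('bc,abc->a', g, q))
assert np.allclose(Z_trace, Z_expanded)
```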
\subsection{Weyl spacetime}
Weyl spacetime \cite{Weyl:1923} is obtained as a special case of the results above for vanishing torsion. Hence the contortion is trivial,
\begin{eqnarray}
K_{abc} = 0. \label{contortion}
\end{eqnarray}
Taking this into account, the generalized momentum (\ref{wc_ptot}) and the equations of motion (\ref{DPWC}) and (\ref{DJWC}) are simplified even further.
It is interesting to note that besides a direct coupling of the dilation moment to the Weyl nonmetricity on the right-hand sides of (\ref{DPWC}) and (\ref{DJWC}), there is also a nontrivial coupling of the spin to the nonmetricity in (\ref{wc_ptot}).
\subsection{Riemann-Cartan spacetime}
Another special case is obtained when the Weyl vector vanishes, $Q_a = 0$. Equations (\ref{wc_ptot})--(\ref{DJWC}) then reproduce {\it in a covariant way} the findings of Yasskin and Stoeger \cite{Stoeger:Yasskin:1980} when the coupling is minimal ($F=1$). For nonminimal coupling we recover our earlier results in \cite{Puetzfeld:Obukhov:2013:3}.
\section{Conclusions}\label{conclusion_sec}
We have worked out covariant test body equations of motion for standard metric-affine gravity, as well as its extensions with nonminimal coupling. Our results cover a very large class of gravitational theories, and one can use them as a theoretical basis for systematic tests of gravity by means of extended deformable bodies.
Furthermore, our work generalizes a whole set of works \cite{Bailey:Israel:1975,Stoeger:Yasskin:1979,Stoeger:Yasskin:1980,Puetzfeld:2007,Puetzfeld:Obukhov:2008:1,Puetzfeld:Obukhov:2008:2,Puetzfeld:Obukhov:2013:1,Hehl:Obukhov:Puetzfeld:2013,Puetzfeld:Obukhov:2013:3,Puetzfeld:Obukhov:2013:4,Puetzfeld:Obukhov:2014:1}.
In particular, it can be viewed as a completion of the program initiated in \cite{Puetzfeld:2007}, in which a noncovariant Papapetrou-type \cite{Papapetrou:1951:3} approach was used. The general equations of motion (\ref{int_eom_1_general}) and (\ref{int_eom_2_general}) cover all of the previously reported cases. As demonstrated explicitly, the master equation (\ref{master}) allows for a quick adaptation to any physical theory, as soon as the conservation laws and (multi-)current structure are fixed.
It is satisfying to see that in the context of nonminimal metric-affine gravity, one is able to recover the same indirect coupling -- as previously reported in \cite{Puetzfeld:Obukhov:2013:3} in the case of torsion -- of new geometrical quantities to regular matter via the coupling function $F$. This may be exploited to devise new strategies to detect post-Riemannian spacetime features in future experiments. We hope that our covariant unified framework sheds more light on the systematic test of theories which exhibit nonminimal coupling.
\section*{Acknowledgements}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the grant LA-905/8-1/2 (D.P.).
\section{Introduction}
High resolution observations of the solar photosphere have greatly
enhanced our understanding of the complex interactions between
magnetic fields and turbulent convective motions. Modern ground-based
telescopes, such as the SST on La Palma, and space-borne instruments,
such as the Solar Optical Telescope (SOT) on the recently-launched
Hinode satellite, can now resolve the fine structure of photospheric
magnetic fields. Such high resolution observations have already
provided new insights into the magnetic field structures that can be
found in sunspot penumbrae (Scharmer et al. 2002), plage regions
(Berger et al. 2004) and the quiet Sun (see, for example, Centeno et
al. 2007; Rezaei et al. 2007). In the quiet Sun, localised
concentrations of intense vertical magnetic flux often accumulate in
the convective downflows (Lin \& Rimmele 1999; Dom\'inguez Cerde\~na,
Kneer \& S\'anchez Almeida 2003; Centeno et al. 2007). These localised
flux concentrations typically show up as bright points in G-band
images of the photospheric granulation (see, for example, Berger \&
Title 1996, 2001). Direct measurements indicate that the peak field
strength within these small-scale magnetic features is often well in
excess of a kilogauss (see, for example, Grossmann-Doerth, Keller \&
Sch\"ussler 1996; Dom\'inguez Cerde\~na et al. 2003). Although
kilogauss-strength magnetic features occupy only a small fraction of
the quiet solar surface, these localised flux concentrations contain a
significant proportion of the total (unsigned) quiet Sun magnetic flux
(Dom\'inguez Cerde\~na, S\'anchez Almeida \& Kneer 2006; S\'anchez
Almeida 2007).
\par The origin of quiet Sun magnetic fields remains an open
question. Either these magnetic fields are generated locally at the
photosphere, as the result of small-scale convectively-driven dynamo
action (Cattaneo 1999; Cattaneo \& Hughes 2006; V\"ogler \& Sch\"ussler
2007), or they were originally generated elsewhere in the solar
interior and are simply being re-processed and amplified by the local convective
motions. What is clear, however, is that the granular convective
upwellings at the solar photosphere will tend to expel magnetic flux,
causing it to accumulate in the convective downflows (Proctor \& Weiss
1982). This explains the observed association between quiet Sun
magnetic fields and the intergranular lanes. However, the peak
field strength that is measured in these regions is more difficult to
explain. Estimates of the kinetic energy density of the granular
convection in the solar photosphere suggest that magnetic field
strengths greater than approximately $400$G would exceed the
``equipartition'' value, $B_e$, at which the magnetic energy density
of these localised fields is comparable to the granular convective
kinetic energy density (see, for example, Galloway, Proctor \& Weiss
1977). Since the magnetic energy density is proportional to the square
of the magnetic field strength, the observed kilogauss-strength fields
in the quiet Sun have a magnetic energy density that is an order of
magnitude larger than equipartition. In fact, a better estimate for
the observed field strengths is given by $B_p$, which corresponds to
the field strength at which the local magnetic pressure balances the
ambient granular gas pressure.
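Both thresholds are simple to estimate. With rough photospheric values (the density, convective speed and gas pressure adopted below are order-of-magnitude assumptions rather than values from any particular atmosphere model), the equipartition and pressure-balance field strengths come out close to the figures quoted above:

```python
import numpy as np

mu0 = 4.0e-7 * np.pi      # vacuum permeability [SI units]
rho = 3.0e-4              # photospheric density       [kg m^-3] (assumed)
u   = 2.0e3               # granular convective speed  [m s^-1]  (assumed)
p   = 1.3e4               # photospheric gas pressure  [Pa]      (assumed)

# Equipartition: B_e^2 / (2 mu0) = rho u^2 / 2
B_e = np.sqrt(mu0 * rho) * u
# Pressure balance: B_p^2 / (2 mu0) = p
B_p = np.sqrt(2.0 * mu0 * p)

print(B_e * 1.0e4, B_p * 1.0e4)   # in gauss: roughly 400 G and 1800 G
```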
\par In theoretical studies of Boussinesq magnetoconvection (see e.g. Galloway
et al. 1977; Galloway, Proctor \& Weiss 1978; Galloway \& Moore 1979),
the process of magnetic flux expulsion leads naturally to the formation of
localised flux concentrations at the edges of convective cells. For
axisymmetric converging flows, the {\it kinematic} flux concentration of
weak magnetic fields produces peak fields that scale linearly with the
magnetic Reynolds number, $Rm$, whilst the width of the resulting
magnetic feature is proportional to $Rm^{-1/2}$. However, for stronger
fields this amplification process is limited by {\it dynamical}
effects. Since the magnetic pressure does not play a role in Boussinesq
magnetoconvection, any magnetic feedback is due to the non-conservative
component of the Lorentz force, which results from the magnetic
curvature. In the dynamical regime, the peak field can only exceed
$B_e$ for large values of the magnetic Prandtl number (the ratio of
the kinematic viscosity of the fluid to its magnetic
diffusivity). Formally, this scaling can only be verified for small
Reynolds numbers, $Re$, although it appears not to be restricted to this
parameter regime (Cattaneo 1999). Kerswell \& Childress (1992) also
found a similar magnetic Prandtl number dependence in their idealised
boundary-layer study of the equilibrium of a thin flux tube in steady
compressible convection (see also Cameron \& Galloway
2005). Unfortunately, estimates of the magnetic Prandtl number in the
solar photosphere indicate that this diffusivity ratio is extremely
small (see, for example, Ossendrijver 2003). So although these
comparatively idealised models can explain the formation of highly
localised magnetic features at large $Rm$, they are unable to account
for the appearance of super-equipartition, kilogauss-strength magnetic
fields in the quiet Sun.
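Note that the two kinematic scalings quoted above are mutually consistent: for an axisymmetric feature the trapped flux scales as the peak field times the square of the feature width, and is therefore independent of $Rm$. A short sketch (with arbitrary illustrative prefactors):

```python
import numpy as np

Rm = np.logspace(1, 4, 7)          # range of magnetic Reynolds numbers
B_peak = 3.0 * Rm                  # peak field    ~ Rm       (arbitrary prefactor)
width  = 0.5 * Rm**-0.5            # feature width ~ Rm^{-1/2} (arbitrary prefactor)

flux = B_peak * width**2           # axisymmetric feature: flux ~ B * width^2
assert np.allclose(flux, flux[0])  # independent of Rm, as flux conservation requires
```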
\par Since the peak magnetic field in kilogauss-strength
magnetic features is comparable to $B_p$, the magnetic pressure is
bound to play a significant role in the local dynamics. If a magnetic
feature is assumed to be in pressure balance with its non-magnetic
surroundings, the internal gas pressure must be smaller than that of
the surrounding fluid in order to compensate for the large
magnetic pressure. Unless these features are much cooler than their
surroundings, this reduction in gas pressure is most easily achieved
if these magnetic regions are at least partially evacuated (see, for
example, Proctor 1983; Proctor \& Weiss 1984). This effect is absent
from Boussinesq models, which neglect the effects of compressibility,
so it is inevitable that such models underestimate the peak fields
that can be generated by the process of flux concentration at the
solar photosphere. When seeking to explain the formation of
kilogauss-strength magnetic fields in the quiet Sun, it is therefore
important to consider the effects of compressibility (see e.g. Hughes
\& Proctor 1988).
\par The most widely studied compressible model of magnetic field
amplification in the quiet Sun is the ``convective collapse''
instability of thin magnetic flux tubes (Webb \& Roberts
1978; Spruit 1979; Spruit \& Zweibel 1979; Unno \& Ando
1979). Convective collapse models consider the linear stability of a
thin vertical magnetic flux tube embedded in a non-magnetic,
superadiabatically stratified atmosphere. There are initially no
convective motions along the tube, which is assumed to be in
pressure balance with its surroundings. Provided that the initial
magnetic field is not too strong, the flux tube is subject to a
convective instability that drains plasma vertically out of the
tube. As the plasma drains out of the tube, the local pressure balance
implies that the tube must ``collapse'' to form a narrower (and therefore more
intense) magnetic flux concentration. This instability will continue
to operate until the magnetic field is strong enough to suppress
convective motions. This model certainly represents a plausible
mechanism for the formation of kilogauss-strength magnetic
fields (the only restriction being that the peak field, $B_{max} \le
B_p$). Having said that, this model has its limitations
(Hughes \& Proctor 1988; Thomas \& Weiss 1992, 2008). In
particular, the thin flux tube approximation does not appear to be
consistent with photospheric observations (see, e.g. Berger \& Title
1996; Berger et al. 2004): even in the quiet Sun, the observed
magnetic regions seem to form a constantly-evolving ``magnetic fluid''
(see, for example, Thomas \& Weiss 2008), and so cannot simply be regarded
as a collection of discrete collapsed flux tubes. In addition, the
static equilibrium for the instability is rarely achievable in photospheric
magnetoconvection. Finally, by reducing the horizontal component of
the momentum equation to a simple pressure balance, the model not only
ignores the possible dynamical influence of the surrounding convective
motions, but also neglects the possible effects of magnetic curvature,
which may play an important role in ensuring the equilibrium of
more extensive magnetic features (see, for example, V\"ogler et
al. 2005, where curvature effects help to support a peak field,
$B_{max}$, in excess of $B_p$).
\par More realistic models of this process can only be studied by
carrying out large-scale numerical simulations of convective magnetic
flux intensification. In order to make detailed comparisons between numerical
simulations and spectral observations, it is necessary to take account
of effects such as partial ionisation and radiative transfer (see,
e.g., Nordlund 1982; V\"ogler et al. 2005). This approach has
successfully simulated magnetic fields and spectral features that are
similar to those observed in plages (Grossmann-Doerth, Sch\"ussler \&
Steiner 1998; Keller et al. 2004; Carlsson et al. 2004; V\"ogler et
al. 2005), in sunspot umbrae (Sch\"ussler \& V\"ogler 2006), and in
the quiet Sun (Khomenko et al. 2005; Stein \& Nordlund 2006). More
idealised models of photospheric magnetoconvection focus entirely upon
the interactions between compressible convection and magnetic fields
(e.g. Hurlburt \& Toomre 1988; Matthews, Proctor \& Weiss 1995; Weiss
et al. 1996; Rucklidge et al. 2000; Weiss, Proctor \& Brownjohn 2002;
Bushby \& Houghton 2005). This complementary approach lends itself
more easily to systematic surveys of parameter space, and has had
considerable success in qualitatively reproducing solar-like
behaviour.
\par In this paper, we investigate the formation of
super-equipartition, localised magnetic features in the quiet Sun. We
carry out numerical simulations of an idealised model of photospheric
magnetoconvection. Clearly it is not possible to achieve solar-like
values of the viscous and magnetic Reynolds numbers in these
simulations (which are both very large at the solar photosphere, see
e.g. Ossendrijver 2003). However the Reynolds numbers that are used
here are large enough that these simulations (at least qualitatively)
reproduce most of the key physical features of flux intensification in
photospheric magnetoconvection. The set-up of this model and the
numerical results that are obtained from it are described in the next
two Sections of the paper. In the final part, we discuss the relevance
of these results to photospheric magnetic fields and compare our
findings with predictions of the simplified ``convective collapse''
models.
\section{Governing equations and numerical methods}
The model that is considered in this paper is an idealised, local
representation of magnetoconvection in the quiet Sun. We
solve the equations of three-dimensional compressible
magnetohydrodynamics for a plane layer of electrically-conducting,
perfect monatomic gas. The layer of gas is heated from below and
cooled from above. We assume constant values for the gravitational
acceleration $g$, the shear viscosity $\mu$, the magnetic diffusivity
$\eta$, the thermal conductivity $K$, the magnetic permeability
$\mu_0$ and the specific heat capacities at constant density and
pressure ($c_v$ and $c_p$ respectively). The axes of the chosen
Cartesian frame of reference are oriented so that the $z$-axis
points vertically downwards (parallel to the gravitational
acceleration). Defining $d$ to be the depth of the convective layer,
the gas occupies the region $0 \le x, y \le 4d$ and $0 \le z \le d$,
which gives a wide Cartesian domain with a square horizontal
cross-sectional area. Periodic boundary conditions are imposed in each
of the horizontal directions. Idealised boundary conditions are
imposed at $z=0$ (the upper surface) and $z=d$ (the lower
surface). These bounding surfaces are held at fixed temperature, and
are assumed to be impermeable and stress-free. In addition, any
magnetic field that is present is constrained to be vertical at the
upper and lower boundaries. Similar models have been considered in
several previous studies (see, for example, Matthews et al. 1995;
Rucklidge et al. 2000; Weiss et al. 2002; Bushby \& Houghton 2005).
\par It is convenient to formulate the governing equations for
magnetoconvection in terms of non-dimensional variables. We adopt
non-dimensionalising scalings that are similar to those described by
Matthews et al. (1995). All lengths are scaled in terms of the depth
of the Cartesian domain, $d$, whilst the temperature, $T$, and
density, $\rho$, are both scaled in terms of their values at the upper
surface of the domain (which are denoted by $T_0$ and $\rho_0$
respectively). Defining $R_*$ to be the gas constant, the velocity,
$\mathbf{u}$, is scaled in terms of the isothermal sound speed at the top of
the layer, $\left(R_*T_0\right)^{1/2}$; a natural scaling for time is
therefore $d/\left(R_*T_0\right)^{1/2}$, which corresponds to an
acoustic timescale. The parameter $\theta$ denotes the
(dimensionless) temperature difference between the upper and lower
boundaries. In the absence of any motion the gas is a polytrope with
a polytropic index $m = gd/\left(R_*T_0\theta\right) - 1$.
Rather than relating the magnetic field strength
to the Chandrasekhar number (see, for example, Weiss et al. 2002;
Bushby \& Houghton 2005), we also scale the Alfv\'en speed at the top
of the layer in terms of the sound speed. This implies that the
appropriate non-dimensionalising scaling for the magnetic field,
$\mathbf{B}$, is given by
$\left(\mu_0\rho_0R_*T_0\right)^{1/2}$. Having made these scalings,
the governing equations for the density, the momentum density ($\rho
\mathbf{u}$), the magnetic field and the temperature are given by
\begin{eqnarray}
&&\frac{\partial \rho}{\partial t}=- \nabla \cdot \left(\rho
\bmath{u}\right),\\ \nonumber \\
&&\frac{\partial}{\partial t}\left(\rho \bmath{u}\right)=- \nabla
\left(P + |\bmath{B}|^2/2\right) +\theta(m+1)\rho\bmath{\hat{z}}\\
\nonumber&& \hspace{0.65in} + \nabla \cdot \left( \bmath{BB} - \rho \bmath{uu} +
\kappa \sigma \bmath{\tau}\right), \hspace{0.4in} \\ \nonumber \\
&&\frac{\partial \bmath{B}}{\partial t}=\nabla \times \left( \bmath{u}
\times \bmath{B} - \kappa \zeta_0 \nabla \times \bmath{B} \right),
~~\nabla \cdot \bmath{B} = 0,\\ \nonumber \\
&&\frac{\partial T}{\partial t}= -\bmath{u}\cdot\nabla T -
\left(\gamma -1\right)T\nabla \cdot \bmath{u} +
\frac{\kappa\gamma}{\rho}\nabla^2 T \\ \nonumber &&\hspace{0.35in}+
\frac{\kappa(\gamma-1)}{\rho}\left(\sigma \tau^2/2 + \zeta_0|\nabla
\times \bmath{B}|^2\right).
\end{eqnarray}
\noindent The components of the stress tensor, $\bmath{\tau}$ are
given by
\begin{equation}
\tau_{ij}= \frac{\partial u_i}{\partial x_j}+\frac{\partial
u_j}{\partial x_i} - \frac{2}{3}\frac{\partial u_k}{\partial
x_k}\delta_{ij},
\end{equation}
\noindent whilst the pressure, $P$ is determined by the equation of
state for a perfect gas
\begin{equation}
P=\rho T.
\end{equation}
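For any velocity field, the deviatoric form of the stress tensor above makes $\bmath{\tau}$ symmetric and trace-free; a quick numerical check with a placeholder velocity-gradient matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
grad_u = rng.normal(size=(3, 3))    # placeholder for du_i/dx_j

# tau_ij = du_i/dx_j + du_j/dx_i - (2/3) (du_k/dx_k) delta_ij
tau = grad_u + grad_u.T - (2.0 / 3.0) * np.trace(grad_u) * np.eye(3)

assert np.allclose(tau, tau.T)           # symmetric
assert np.isclose(np.trace(tau), 0.0)    # trace-free (deviatoric)
```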
\par These governing equations are characterised by several
non-dimensional constants, including the Prandtl
number, $\sigma=\mu c_p/K$, the (non-dimensional) thermal diffusivity,
$\kappa=K/\rho_0 d c_p \left(R_*T_0\right)^{1/2}$, and the ratio of
specific heats, $\gamma=c_p/c_v$. The ratio of the magnetic
diffusivity to the thermal diffusivity, $\zeta$, is proportional to
the fluid density $\rho$ (and therefore increases with depth). At
the top of the layer $\zeta=\zeta_0\equiv\eta
\rho_0 c_p/K$. If all the other parameters are fixed, varying $\kappa$
is equivalent to varying the mid-layer Rayleigh number,
\begin{equation}
Ra=\left(m+1-m\gamma\right)\left(1+\theta/2\right)^{2m-1}\frac{(m+1)\theta^2}{\kappa^2\gamma\sigma}.
\end{equation}
\noindent This Rayleigh number measures the destabilising effects of a
superadiabatic temperature gradient relative to the stabilising
effects of diffusion. The parameters are all described in greater
detail by Matthews et al. (1995).
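For the parameter choices adopted below ($\theta = 10$, $m = 1$, $\gamma = 5/3$, $\sigma = 1$), the prefactor in the definition of $Ra$ evaluates to $240$, so that $Ra = 240/\kappa^2$; the run at $Ra = 4\times10^5$ therefore corresponds to $\kappa \simeq 0.0245$. This arithmetic is sketched below:

```python
def rayleigh(kappa, theta=10.0, m=1.0, gamma=5.0/3.0, sigma=1.0):
    """Mid-layer Rayleigh number of the polytropic layer."""
    return ((m + 1.0 - m * gamma) * (1.0 + theta / 2.0)**(2.0 * m - 1.0)
            * (m + 1.0) * theta**2 / (kappa**2 * gamma * sigma))

# Ra = 240 / kappa^2, so Ra = 4e5 requires kappa = sqrt(240 / 4e5)
kappa = (240.0 / 4.0e5)**0.5          # ~ 0.0245
assert abs(rayleigh(kappa) / 4.0e5 - 1.0) < 1.0e-9
```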
\par The simplest non-trivial equilibrium solution of these governing
equations corresponds to a static polytrope with a uniform magnetic
field. Initially we restrict attention to the case in which
$\mathbf{B}=0$. In this equilibrium solution, $\mathbf{u}=0$,
$T(z)=1+\theta z$ and $\rho(z)=\left(1+\theta z\right)^m$. Fixing
$\theta=10$ and $m=1$ gives a highly stratified atmosphere in which
the temperature and density both vary by an order of magnitude across
the layer. We consider a monatomic gas, therefore $\gamma=5/3$. This
implies that the $m=1$ polytrope is superadiabatically stratified. For
simplicity (and for ease of comparison with previous studies) we set
the Prandtl number equal to unity, i.e. $\sigma=1$. A range of values
for $\zeta_0$ is considered in this paper. Finally, the Rayleigh
number is chosen to be $Ra=4.0 \times 10^5$. This value for $Ra$ is
more than two orders of magnitude larger than the critical value for
the onset of convective instabilities.
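The quoted contrasts follow directly from these profiles: with $\theta = 10$ and $m = 1$ both $T$ and $\rho$ increase by a factor of $11$ across the layer, and $m = 1 < 1/(\gamma - 1) = 3/2$ confirms that the polytrope is superadiabatically stratified. A short check:

```python
theta, m, gamma = 10.0, 1.0, 5.0 / 3.0

T = lambda z: 1.0 + theta * z              # static temperature profile
rho = lambda z: (1.0 + theta * z)**m       # static density profile

# Both quantities vary by an order of magnitude (a factor of 11) in depth
assert T(1.0) / T(0.0) == 11.0
assert rho(1.0) / rho(0.0) == 11.0

# Superadiabatic (convectively unstable) polytrope: m < 1/(gamma - 1)
assert m < 1.0 / (gamma - 1.0)
```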
\par The model described above is investigated by carrying out large-scale
numerical simulations. For this $4 \times 4 \times 1$ Cartesian
domain, we adopt a computational mesh of $256 \times 256 \times 160$
grid points. The horizontal extent of this computational
domain is smaller than in some of our previous calculations (see,
e.g. Bushby \& Houghton 2005). This reduction in box size does
influence the global structure of the convective motions by
eliminating mesoscale structures (Rucklidge et al. 2000). However, the
horizontal extent of the computational domain probably has little
influence upon the localised process of flux concentration. The
advantage of considering a smaller domain is that it allows us to
carry out high resolution numerical simulations without requiring an
excessive number of grid points. This enables us to model
magnetoconvective behaviour more accurately at high Reynolds numbers,
so this reduction in box size is a reasonable compromise. We use a
well tested code (see, e.g. Matthews et al. 1995) in which horizontal
derivatives are evaluated in Fourier space, whilst vertical
derivatives are evaluated using fourth order finite differences. The
time-stepping is carried out via an explicit 3rd order Adams-Bashforth
scheme, with a variable time-step. The code is efficiently
parallelised using MPI, and these simulations have made use of the
Cambridge-Cranfield High Performance Computing Facility and the UKMHD
Consortium machine based in St Andrews.
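Two key ingredients of the scheme, high-order spatial differencing and third-order Adams-Bashforth time-stepping, are illustrated below on a toy one-dimensional advection problem. This is only a stand-in for the full MHD system: it uses fourth-order centred differences on a periodic grid throughout, in place of the mixed Fourier/finite-difference treatment, and a fixed rather than variable time-step:

```python
import numpy as np

N = 64
x = 2.0 * np.pi * np.arange(N) / N
dx = x[1] - x[0]

def ddx(f):
    """Fourth-order centred first derivative on a periodic grid."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

rhs = lambda u: -ddx(u)        # toy advection equation u_t = -u_x

u = np.sin(x)
dt = 0.2 * dx
hist = [rhs(u)]
u = u + dt * hist[-1]          # two forward-Euler start-up steps,
hist.append(rhs(u))            # then third-order Adams-Bashforth
u = u + dt * hist[-1]
steps = 200
for n in range(steps):
    f0 = rhs(u)
    u = u + dt * (23.0 * f0 - 16.0 * hist[-1] + 5.0 * hist[-2]) / 12.0
    hist.append(f0)

t = (2 + steps) * dt
err = np.max(np.abs(u - np.sin(x - t)))
assert err < 5.0e-3            # solution tracks the exact advected wave
```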
\begin{figure}
\begin{center}
\epsfxsize\hsize\epsffile{fig1.eps}
\caption{The temperature distribution for non-magnetic convection, in
a horizontal plane near the upper surface of the Cartesian
domain, at $(z/d)=0.05$. Temperature contours are evenly
spaced between $T=1.4$ and $T=2.6$ (the same contour spacings are
used for all temperature plots in this paper). Brighter regions correspond to
warmer fluid, cooler areas of fluid are represented by darker
regions.\label{fig:1}}
\end{center}
\end{figure}
\section{Results}
\subsection{The initial state}
The starting point for these idealised simulations is fully developed
non-magnetic convection. In order to generate such a convective state,
we consider the equilibrium solution corresponding to a static
polytrope with no applied magnetic field, and introduce a small
amplitude random temperature perturbation. This convectively-unstable
configuration is allowed to evolve until it has relaxed to a
statistically steady hydrodynamic state, as illustrated in
Fig.~\ref{fig:1}, which shows a snapshot of the temperature
distribution in a horizontal plane near the upper surface of the
computational domain. In this granular pattern, bright regions
correspond to broad warm upflows, whilst darker regions correspond to
cool narrow downflows. The strength of the convection can be
characterised by the mid-layer Reynolds number $Re=\rho_{\rm
mid}U_{\rm rms}d/\mu$, where $U_{\rm rms}$ corresponds to the
rms-velocity of the convection and $\rho_{\rm mid}$ is the density at
the mid-layer of the original static polytrope. The Reynolds number
here is approximately $Re=150$. As noted in the Introduction,
realistic Reynolds numbers for photospheric convection are numerically
unobtainable, but the Reynolds number is large enough that
instructive results can be obtained in these idealised
calculations.
\par Having established this (purely hydrodynamic) convective state,
we then introduce a weak uniform vertical magnetic field,
$B_0\bmath{\hat z}$, with $B_0$ chosen so that the initial magnetic
energy is approximately 0.1\% of the kinetic energy. A weak imposed
field of this form tends to favour the formation of highly localised
magnetic features (see, for example, Bushby \& Houghton 2005). In what
follows, the time at which this magnetic field is introduced is
denoted by $t=0$. The subsequent evolution of this magnetic field
depends crucially upon the magnetic Reynolds number of the flow,
$Rm=U_{\rm rms}d/\eta$. In non-dimensional terms, $Rm \propto
\zeta_0^{-1}$; thus it is possible to investigate a range of
values of $Rm$ simply by repeating the numerical experiment with
different values of $\zeta_0$. The range investigated is
$0.2\leq\zeta_0\leq 2.4$ ($120 \gtrsim Rm \gtrsim 10$). Larger values of
$Rm$ correspond to less diffusive plasmas, and are therefore more
relevant to photospheric magnetoconvection, although (as with $Re$) it
is not yet possible to carry out fully resolved simulations with
realistic values of $Rm$ for photospheric magnetoconvection. We
therefore focus initially upon the case of $\zeta_0=0.2$
($Rm\simeq 120$). The effects of varying $\zeta_0$ are discussed later in
this Section.
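As a consistency check on the quoted range, the two round-number endpoints satisfy the $Rm \propto \zeta_0^{-1}$ scaling with the same constant of proportionality:

```python
# Rm ~ 1/zeta_0: both quoted endpoints give Rm * zeta_0 = 24
pairs = [(0.2, 120.0), (2.4, 10.0)]        # (zeta_0, approximate Rm)
products = [z * rm for z, rm in pairs]
assert abs(products[0] - products[1]) < 1.0e-12
```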
\begin{figure}
\begin{center}
\epsfxsize\hsize\epsffile{fig2.eps}
\caption{The magnetic field distribution during the flux expulsion
phase, at $t=0.24$. The shaded contours show the temperature distribution
in a horizontal plane at $(z/d)=0.05$ (as described in
Fig.~\ref{fig:1}). The white contours denote constant values of the
vertical component of the magnetic field.\label{fig:2}}
\end{center}
\end{figure}
\subsection{Results for $\zeta_0=0.2$}
Since the initial magnetic field is comparatively weak, the Lorentz
forces play a minor role in the very early stages of evolution in
these numerical simulations. During this brief ``kinematic'' phase,
diverging convective motions at the upper surface of the computational
domain rapidly expel magnetic flux from the granular interiors. This
process causes magnetic flux to accumulate in the convective downflows
in the intergranular lanes. The strongest field concentrations tend to
occur at the vertices between neighbouring granules, because any flows
along the intergranular lanes tend to converge upon these
vertices. The process of flux expulsion and accumulation
is illustrated in Fig.~\ref{fig:2}, which shows the distribution of
the vertical component of the magnetic field during this flux
expulsion phase. In this (relatively) high $Rm$ regime, the width of
the localised magnetic features is comparable to the width of the
narrow intergranular lanes.
\par As these localised concentrations of magnetic flux form, the
principle of flux conservation implies that the magnetic energy
density in these regions rapidly increases to the point where the
Lorentz forces become dynamically significant. In the upper layers of
the computational domain, most of the magnetic energy resides in the
vertical component of the magnetic field, and the horizontal gradients
in this field component are usually much larger than any variations in
the vertical direction. This implies that the horizontal magnetic
pressure gradient (i.e. the horizontal component of
$\nabla\left[\mathbf{B}^2/2\mu_0\right]$) is the dominant component of
the Lorentz force. The effects of the magnetic pressure are most
apparent near the surface, where the gas pressure is relatively
small. The magnetic pressure gradient tends to inhibit the converging
convective motions that are responsible for driving the flux
concentration process. However, the flux amplification process is not
immediately suppressed, because the total (i.e. gas plus magnetic)
pressure increases more gradually than the magnetic pressure
alone. This is because the local convective downflows
rapidly carry fluid away from the surface regions of the magnetic
feature. These downflows are responsible for partially evacuating the
upper regions of the magnetic features, which in turn leads to a
reduction in the local gas pressure. This reduction in gas pressure
(at least partially) compensates for the increased magnetic
pressure. In the most intense magnetic flux concentrations, the gas
pressure drops to as little as a few percent of its initial value at
the surface before the magnetic field becomes locally strong enough to
suppress the vertical convective motions. A combination of the
suppression of these vertical motions plus the effects of diffusion
eventually halts this flux concentration process.
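The statement that the horizontal magnetic pressure gradient dominates can be made explicit via the standard decomposition of the Lorentz force into pressure and tension contributions (written here in the $\mathbf{B}^2/2\mu_0$ notation used above):

```latex
\mathbf{j}\times\mathbf{B}
  \;=\; -\nabla\!\left(\frac{|\mathbf{B}|^{2}}{2\mu_{0}}\right)
        + \frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_{0}} .
```

Near the surface $\mathbf{B}\simeq B_z\bmath{\hat z}$, and horizontal gradients of $B_z$ far exceed vertical ones, so the tension term is small and the horizontal force reduces to $-\nabla_{\rm h}\left(B_z^2/2\mu_0\right)$.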
\begin{figure*}
\begin{center}
\epsfxsize\hsize\epsffile{fig3.eps}
\caption{The $x$-dependence of the pressure distributions (each at
fixed $y$) at three different time intervals along a horizontal cut
through the strongest magnetic feature at the upper surface. Solid
lines correspond to the magnetic pressure, $P_{\rm mag}$; dashed lines
correspond to the gas pressure, $P_{\rm gas}$; the dotted lines
represent the dynamic pressure, $P_{\rm dyn}$ (see text). These
snapshots are taken at (a) $t=0.12$, (b) $t=0.61$ and (c)
$t=1.61$.\label{fig:3}}
\end{center}
\end{figure*}
\par To describe this process in a more quantitative fashion, it
is useful not only to consider the variations of the gas pressure
($P_{\rm gas}$) and the magnetic pressure ($P_{\rm mag}$), but also the
dynamic pressure ($P_{\rm dyn}$), which represents the
dynamical influence of the convective motions. Following Hurlburt \&
Toomre (1988), we define $P_{\rm dyn}=\rho|\mathbf{u}|^2$ (although note
that other definitions have been used in some other studies, e.g.,
Weiss et al. 1996). Whilst this expression does not correspond
directly to a pressure term in the momentum equation $(2)$, it does
usefully quantify the vigour of the convective motions. Since the peak
flow speeds at the surface are comparable to the local sound speed
(any shocks being smoothed out by the viscosity of the fluid), we
would expect $P_{\rm dyn}$ to play a significant dynamical
role. Fig.~\ref{fig:3} illustrates the time-evolution of these pressure
distributions at the surface (where the most significant variations
are observed) immediately after the magnetic field is introduced. To
generate a one-dimensional pressure map, we fix the value of $y$ so
that a horizontal cut (in the $x$ direction) along the upper surface
of the computational domain passes through the strongest magnetic
feature. The three plots in Fig.~\ref{fig:3} show the surface pressure
distributions along such a cut at three different times. These plots
clearly illustrate the scenario that was described in the previous
paragraph. As the magnetic pressure grows, there is a corresponding
decrease in the gas pressure as fluid rapidly drains out of the
surface regions of the magnetic feature. Once the feature has formed
(lower plot of Fig.~\ref{fig:3}), convective motions are strongly suppressed and
the gas pressure in the magnetic region is very much smaller than that
of the surrounding field-free fluid. Although partial evacuation,
with accompanying field intensification, has
already been observed in several previous studies (e.g. Hurlburt \&
Toomre 1988; Weiss et al. 1996; V\"ogler et al. 2005), the level of
evacuation is much more dramatic in these simulations.
\par The most surprising aspect of Fig.~\ref{fig:3} is that the
magnetic pressure within the upper layers of the resulting magnetic flux
concentration is much larger than the gas pressure of the surrounding
non-magnetic fluid. Since the ambient gas pressure increases rapidly with
depth ($P_{\rm gas} \propto (1+10z)^2$ in the unperturbed polytropic
atmosphere), the difference between the surrounding gas pressure and the
internal magnetic pressure decreases as we move away from the
surface. For this magnetic feature, the magnetic pressure is
comparable to the external gas pressure at a depth of approximately
$z=0.05$. However, even down to depths of approximately $z=0.1$, the sum
of the internal gas and magnetic pressures is still larger than the
external gas pressure. So, even below the surface, this magnetic
feature is not simply in pressure balance with its non-magnetic
surroundings. V\"ogler et al. (2005) have shown that magnetic
curvature effects can play a key role in the confinement of magnetic
flux concentrations. However, in this case the field lines are
predominantly vertical near the upper surface, so magnetic curvature
effects are negligible. Note that the lack of field-line curvature
also implies that these magnetic features are not being confined in
deeper layers (where the gas pressure is higher), since this would
require the flux concentrations to spread out in the upper layers.
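Schematically (a rough surface balance, neglecting tension and viscous stresses, rather than a diagnostic taken directly from the simulations), the confinement condition can be written

```latex
P_{\rm gas}^{\rm ext} + P_{\rm dyn}^{\rm ext}
  \;\sim\; P_{\rm gas}^{\rm int} + P_{\rm mag}^{\rm int},
\qquad
P_{\rm gas}^{\rm int}\ \ll\ P_{\rm gas}^{\rm ext}\ <\ P_{\rm mag}^{\rm int},
```

so the excess $P_{\rm mag}^{\rm int}-P_{\rm gas}^{\rm ext}$ must be balanced by the stagnation pressure of the converging external flow.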
\par In the absence of a confining influence due to magnetic
curvature, the observed pressure imbalance implies
that the surrounding convective motions must be playing an active role
in the confinement of this magnetic feature. Put differently, where
the flow converges on an intense flux concentration there has to be an
excess stagnation pressure. Here this excess pressure is provided by
the magnetic field in a region where the gas pressure is reduced. This
is in total contrast with the simulation of umbral convection by
Sch\"ussler and V\"ogler (2006), where a divergent rising plume is
associated with a weaker field, and the excess pressure results
instead from an enhancement of density (associated with buoyancy
braking). The significance of the dynamic pressure in these
calculations is illustrated in Fig.~\ref{fig:3} -- it is clear that the dynamic
pressure around the magnetic feature can often be comparable to the
local gas pressure. This shows that the dynamical influence of the
surrounding convective motions cannot be ignored when considering
models of photospheric magnetic field intensification. It should also be
stressed that the magnetic energy density of the flux concentration
that is illustrated in Fig.~\ref{fig:3} is much larger than the mean kinetic
energy density (i.e. $P_{\rm dyn}/2$) of the surrounding granular
convection. These simulations provide confirmation of the fact that
the process of partial evacuation is important in the production of
such super-equipartition fields. Without such a reduction in the local
gas pressure, it is difficult to see how convective motions alone
could produce such a strong magnetic feature.
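For reference, the equipartition criterion used here can be stated explicitly: a feature is super-equipartition when its magnetic energy density exceeds the mean kinetic energy density of the surrounding granular convection,

```latex
\frac{B^{2}}{2\mu_{0}}
  \;>\; \left\langle \tfrac{1}{2}\rho|\mathbf{u}|^{2}\right\rangle
  \;=\; \tfrac{1}{2}\left\langle P_{\rm dyn}\right\rangle,
\qquad\mbox{i.e.}\qquad
B \;>\; B_{\rm eq} \equiv \left(\mu_{0}\,\langle\rho|\mathbf{u}|^{2}\rangle\right)^{1/2},
```

where the average is taken over the surrounding field-free convection.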
\begin{figure}
\epsfxsize\hsize\epsffile{fig4a.eps}
\vspace{0.2cm}
\epsfxsize\hsize\epsffile{fig4b.eps}
\caption{The magnetic field distribution at $t=26.26$ (after several
convective turnover times). Top:
Like Fig.~\ref{fig:2}, this shows contours of the vertical magnetic field
component superimposed upon the temperature profile in a horizontal
plane at $(z/d)=0.05$. Bottom: The pressure
distributions along a horizontal cut through the strongest magnetic
feature at the surface. As in Fig.~\ref{fig:3}, the solid line corresponds to
the magnetic pressure, the dashed line corresponds to the gas
pressure, whilst the dynamic pressure is represented by a dotted
line.\label{fig4}}
\end{figure}
\par All the discussion so far has focused upon the early stages of
these numerical simulations. The process of magnetic field
intensification is arguably the most interesting phase of the
dynamics, although the evolution of the resulting magnetic features
also raises important issues. In the present idealised model, the net imposed
vertical magnetic flux is a conserved quantity, so there will always
be some accumulations of vertical magnetic flux somewhere within the
computational domain. However, what is less clear is whether or not
the occurrence of super-equipartition magnetic fields is a transient
feature of the model. Fig.~\ref{fig4} shows the magnetic field distribution
several convective turnover times after the magnetic field was first
introduced. It is clear that the most intense flux concentrations are
still partially evacuated, and it is also clear that the magnetic
energy density of these regions still exceeds the mean kinetic energy
density of the surrounding convection. In addition, the fact that the
internal magnetic pressure (in the upper layers) is still much larger
than the external gas pressure indicates that the surrounding
convective motions are still playing a key role in the confinement of
the magnetic feature. During their evolution, these features
continuously interact with the convective motions, which implies that
they are deformed, shredded and advected around the domain in a
time-dependent fashion. However, despite these complex interactions,
ultra-intense super-equipartition magnetic features seem to be a
robust feature of the simulation and are not simply a transient
phenomenon.
\subsection{The effects of varying $\zeta_0$}
\begin{figure}
\begin{center}
\epsfxsize\hsize\epsffile{fig5.eps}
\caption{The magnetic flux distribution during the kinematic phase for
$\zeta_0=1.2$. Like Fig.~\ref{fig:2}, this plot is for $t=0.24$ and shows
the spatial variation of the temperature and the vertical magnetic
field component in a horizontal plane at $(z/d)=0.05$ (i.e. near the
upper surface). For ease of comparison, the same scales have been
used in Fig.~\ref{fig:2} and Fig.~\ref{fig:5} for both the magnetic field and the
temperature.\label{fig:5}}
\end{center}
\end{figure}
One of the key parameters in these simulations is the magnetic
Reynolds number, $Rm$. Computational restrictions limit the range of
values of $Rm$ that can be considered, and all values that can be
simulated will be very much smaller than real photospheric
values. However, the simulation that has already been described
qualitatively illustrates some of the main physical processes that
occur during photospheric magnetic flux amplification. In this
Section, we assess the effects of repeating this simulation for
different values of $\zeta_0$. This is equivalent to varying the
magnetic Reynolds number of the flow, which is inversely proportional
to $\zeta_0$: smaller values of $Rm$ correspond to larger values of
$\zeta_0$ and vice-versa. In order that comparisons can easily be made
between the different cases, all magnetohydrodynamical simulations are
started from exactly the same hydrodynamic initial conditions.
\begin{table}
\begin{center}
\caption{The $\zeta_0$-dependence of the magnitude of the vertical
magnetic field component at a fixed position at the upper surface of the
computational domain. The time is fixed at $t=0.12$, and the centre
of the magnetic feature is at
$\left(x,y\right)=\left(3.16,2.39\right)$ in all cases. Magnetic
field strengths are normalised in terms of the strength of the
imposed magnetic field. Also shown (in the middle column) is the
corresponding value of $Rm$ for each value of $\zeta_0$.\label{table:1}}
\begin{tabular}{@{}lcc}
\hline
$\zeta_0$ & $Rm$ & $B_z({\rm max})/B_z({\rm initial})$ \\
\hline
0.2 & 117 & 3.61 \\
0.3 & 83 & 3.50 \\
0.4 & 62 & 3.40 \\
0.6 & 42 & 3.23 \\
1.2 & 22 & 2.85 \\
2.4 & 11 & 2.42 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\epsfxsize\hsize\epsffile{fig6.eps}
\caption{Probability density functions for the vertical component of
the magnetic field at the upper surface of the computational
domain. These functions correspond to different values of
$\zeta_0$. These values are $\zeta_0=0.2$ (top), $\zeta_0=0.6$ (middle) and
$\zeta_0=1.2$ (bottom). For smaller values of $\zeta_0$ (or,
equivalently, larger values of $Rm$) there is a higher probability
of stronger fields and there is a greater proportion of reversed
polarity magnetic flux.\label{fig:6}}
\end{center}
\end{figure*}
\par Even during the brief kinematic phase, clear trends are observed
as the parameter $\zeta_0$ is varied. The effects of some of these
trends are illustrated in Fig.~\ref{fig:5}, for $\zeta_0=1.2$. Like
Fig.~\ref{fig:2}, this shows the spatial distribution of the
temperature and the vertical component of the magnetic field near the
upper surface of the computational domain. In order to allow direct
comparison between Fig.~\ref{fig:2} and Fig.~\ref{fig:5}, this
snapshot of the simulation is also taken at $t=0.24$, and the contours
are plotted at the same levels -- this implies that the scales for
temperature and magnetic field correspond directly to those used in
Fig.~\ref{fig:2}. In this kinematic phase, the temperature field is
the same in both plots; however it is immediately apparent that the
magnetic field structures in Fig.~\ref{fig:5} are weaker and less
localised than those seen in the $\zeta_0=0.2$ case. This is a
consequence of the increased importance of diffusion for larger values
of $\zeta_0$ (equivalently, lower values of $Rm$). The influence that
$\zeta_0$ has upon the magnetic field in this kinematic phase is quantified in
Table~\ref{table:1}. Here, we measure the strength of the vertical
component of the magnetic field at
$\left(x,y,z\right)=\left(3.16,2.39,0\right)$ and $t=0.12$, as a
function of $\zeta_0$. This spatial location corresponds to the centre
of the strongest surface magnetic feature at $t=0.12$, which does not
depend upon $\zeta_0$ in the kinematic regime (although the location
of this peak field is obviously a function of time). The table confirms
that, at a fixed time, there are weaker fields at lower values of
$Rm$. For simple flows, it is possible to construct a similarity
solution for this kind of kinematic flux concentration, which leads to
a power-law relationship between the peak magnetic field and the
magnetic Reynolds number (see, e.g., Proctor \& Weiss,
1982). Unfortunately, due to the complexity of the flow patterns in
this case, there appears to be no analogous scaling here.
\par For $\zeta_0=0.2$, it was found that the high magnetic pressure in the
strongest magnetic field concentrations quickly led to the partial
evacuation of these magnetic features. Since these magnetic flux
concentrations grow more gradually for larger values of $\zeta_0$,
this partial evacuation process is much slower in these
cases and the effects of diffusion tend to play a more dominant role
in limiting the flux concentration. Therefore, the amount of evacuation
that occurs decreases with increasing $\zeta_0$. This demonstrates that the
rapid intensification that occurred in the $\zeta_0=0.2$ case is very
much a high magnetic Reynolds number phenomenon. However, some
evacuation is observed in all cases that were investigated except
$\zeta_0=2.4$, and wherever features do become partially evacuated,
super-equipartition magnetic features are observed. Fig.~\ref{fig:6}
shows time-averaged probability density functions (pdfs) for the
vertical component of the surface magnetic field, for three different
values of $\zeta_0$. In all cases, the pdfs peak at $B_z=0$, which
implies that the majority of the domain is field-free. In addition,
there is a significant component of reversed magnetic flux at lower
values of $\zeta_0$. At higher $Rm$ there is a greater tendency for
flux to be advected with any fluid flow, and these convective motions
certainly have the potential to reverse magnetic flux at the granular
boundaries at the surface of the domain. Most interestingly, there is
not a large difference between the peak fields in the pdfs for
$\zeta_0=0.6$ and $\zeta_0=0.2$. Equivalently, the magnitude of the
peak field appears to be only weakly-dependent upon $Rm$ in this
particular parameter regime. This suggests that rather than being
controlled by diffusive effects, the peak attainable magnetic field in
this parameter regime is determined by the combined effects of the
external gas and dynamic pressures. So, although ultra-intense magnetic
features form more rapidly at higher $Rm$, these features can also be
produced at lower values of $Rm$, provided that magnetic diffusion
does not completely inhibit the flux concentration process before the
magnetic field can become dynamically active.
\section{Conclusions}
\par This paper describes results from a series of numerical
experiments that were designed to investigate the formation of
localised magnetic flux concentrations at the solar photosphere. By
adopting an idealised model, it was possible to assess the effects
that varying the magnetic Reynolds number might have upon this flux
intensification process. As expected, magnetic flux tends to
accumulate preferentially in convective downflows, where it forms
localised features -- the horizontal scale of these features decreases
with increasing magnetic Reynolds number. High magnetic pressures lead
to the partial evacuation of these features as they form. At high
values of $Rm$, the resulting field strengths are typically much
larger than the equipartition value at which the local magnetic energy
density balances the mean kinetic energy density of the surrounding
granular convection. In addition, the strongest magnetic fields that
are formed in the upper layers of the domain exert a magnetic pressure
that is significantly larger than the external gas pressure. The
appearance of these ultra-intense magnetic fields shows that the
dynamic pressure that is associated with the surrounding convection
must be playing a key role in the confinement of these magnetic
features to localised regions. Some super-equipartition magnetic
fields are found in all cases except the most diffusive case
(corresponding to $\zeta_0=2.4$).
\par It is interesting to relate these results to convective
collapse models (see, e.g., Webb \& Roberts 1978; Spruit \& Zweibel
1979). Although those models are certainly an
idealised representation of photospheric magnetic field
intensification, our simulations suggest that they correctly
identify the most important process in the formation of
super-equipartition magnetic fields, namely the partial evacuation of
magnetic regions by convective downflows. Our simulations do, however,
raise some important issues relating to convective
collapse. First, convective collapse models typically adopt an initial
condition that corresponds to a thin flux tube embedded in a static
atmosphere. Since such magnetic features form in well-established
convective downflows, this static equilibrium never occurs in our
simulations. Additionally, the evacuation process seems to begin before the
flux accumulation process finishes. So our simulations could be
seen as describing an ``adjustment'' rather than an instability. The
second point that is raised by these simulations concerns the pressure
balance. Convective collapse models assume a constant balance between
the external gas pressure and the gas and magnetic pressures within
the flux tube. Our simulations indicate that the dynamical effects of
the surrounding convection are playing a significant role in the
confinement of these magnetic features. The neglect of this dynamic
pressure underestimates the strength of the strongest magnetic
fields that can be generated. Finally, Cameron \&
Galloway (2005) have modelled aspects of convective collapse by
conducting numerical simulations of laminar magnetoconvection in a
simplified geometry. From these simulations they argue that
super-equipartition magnetic features must be structured on a
kinematic scale at the solar photosphere. Although only a limited
range of values for $Rm$ has been considered here, our simulations
do produce some super-equipartition fields on larger scales, and so
contradict their view.
\par When comparing results from these simulations with photospheric
observations, it is important to remember some of the simplifying
assumptions in this model. Processes such as radiative
transfer have been neglected, so it is not possible to make detailed spectral
comparisons between these simulations and observations. Nevertheless, this
idealised approach has had a great deal of success in reproducing
qualitative features of photospheric magnetoconvection. In qualitative
terms, our simulations appear to be consistent with solar
observations and provide a plausible explanation for the appearance
of super-equipartition magnetic features in the intergranular lanes in
the quiet Sun. However, this model does have other limitations, notably
the fact that many of the parameters that are used are not closely related
to realistic solar values. We have also used highly idealised boundary
conditions in these simulations: for example we impose a rigid
boundary at the upper surface. It should be stressed that, although
the transition to a subadiabatic stratification in the photosphere
provides a ``softer'' boundary condition, this will not prevent the
formation of extremely strong magnetic fields. Another
limitation that should be noted is the fact that (due to the periodic
boundary conditions) the imposed magnetic flux is independent of both
$z$ and $t$. This implies that net vertical magnetic flux cannot enter
or leave the domain, so the initial non-zero magnetic flux is
an invariant quantity in the model. In the quiet Sun, there is a
continuous emergence of mixed polarity magnetic flux, which will
interact with existing magnetic features. Such interactions will tend
to limit the lifetimes of these magnetic features. This is something
that cannot be represented in our idealised simulations, though it may
be possible to make progress by careful choice of initial conditions.
\par In the Introduction, we noted that it is still not clear
whether quiet Sun magnetic features are simply fragments of reprocessed
magnetic flux or whether they are generated locally as the result of
small-scale dynamo action. In fact, this model of compressible
convection can drive a small-scale dynamo once the magnetic Reynolds number
exceeds a threshold of about $Rm=250$. However, even for a
marginally-excited dynamo, the magnetic Prandtl number is of order $2$,
which is very much larger than the magnetic Prandtl number in the solar
photosphere. Whether or not such a dynamo could operate in the low
magnetic Prandtl number regime has been the subject of
considerable debate (see, for example, Boldyrev \& Cattaneo 2004;
Schekochihin et al. 2005). Although our idealised simulations do assume
some pre-existing magnetic field (i.e. they do not generate this field
self-consistently), the processes of magnetic flux expulsion and
intensification that are illustrated by these simulations are generic
and are therefore likely to be of relevance to the solar photosphere
whether or not a local dynamo is operating.
\par This work is motivated by high resolution observations of
the solar photosphere. Although current ground-based instruments, such
as the Swedish 1-metre Solar Telescope, are already providing very
detailed images of the photosphere, it is likely that newer
instruments (including those carried on the recently launched Hinode
satellite) will reveal many new features of photospheric
magnetoconvection. Over the next few years, these new observations
will enable us to refine our current theoretical models, but will also
inevitably present new theoretical challenges.
\section*{Acknowledgements}
This work was supported by PPARC/STFC while PJB held a postdoctoral
appointment at DAMTP in Cambridge. The numerical simulations that
were described in this paper made use of computing facilities belonging to the
UKMHD Consortium (based at the University of St Andrews) and the
Cambridge-Cranfield High Performance Computing Facility. We would also
like to thank Fran\c{c}ois Rincon and the referee for their helpful comments.
\section{Introduction}
Let $\mathfrak{g}$ be the Lie algebra of a connected simple algebraic group $G$ of adjoint type over an algebraically closed field $k$.
A {\it grading} on $\mathfrak{g}$ is a decomposition
$$\mathfrak{g}=\bigoplus_{i\in \mathbb{Z}/m}\mathfrak{g}_i$$
where $m$ is an integer $\geq 0$ and $[\mathfrak{g}_i,\mathfrak{g}_j]\subset \mathfrak{g}_{i+j}$ for all $i,j$.
The summand $\mathfrak{g}_0$ is a Lie subalgebra of $\mathfrak{g}$ and we let $G_0$ denote the corresponding connected subgroup of $G$.
The adjoint action of $G$ on $\mathfrak{g}$ restricts to an action of $G_0$ on each summand $\mathfrak{g}_i$.
We are interested in the invariant theory of this action, for which there is no loss of generality if we assume that $i=1$.
If $m=1$ this is the invariant theory of the adjoint representation, first developed by Chevalley,
who showed that the restriction $k[\mathfrak{g}]^G\to k[\mathfrak{t}]^W$ of $G$-invariant polynomials on $\mathfrak{g}$
to polynomials on a Cartan subalgebra $\mathfrak{t}$ invariant under the Weyl group $W$ is an isomorphism.
This and other aspects of Chevalley's theory were generalized to the case $m=2$ by Kostant and Rallis \cite{kostant-rallis}.
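A minimal illustration of the case $m=2$ (our example, not drawn from the references above): for $\mathfrak{g}=\mathfrak{sl}_2$ with standard basis $(e,h,f)$, the inner involution $\theta=\mathrm{Ad}\,\mathrm{diag}(1,-1)$ produces the $\mathbb{Z}/2$-grading

```latex
\mathfrak{g}_0 = k\,h,\qquad
\mathfrak{g}_1 = k\,e \oplus k\,f,\qquad
[\mathfrak{g}_1,\mathfrak{g}_1]\subset\mathfrak{g}_0
\quad\mbox{since}\quad [e,f]=h .
```

Here $G_0$ is the diagonal torus, $\mathfrak{c}=k(e+f)$ is a Cartan subspace (the matrix $e+f$ is semisimple), the rank is $1$, the little Weyl group is $\{\pm1\}$, and $k[\mathfrak{g}_1]^{G_0}$ is generated by the quadratic invariant $ef$, matching $k[\mathfrak{c}]^{W_\mathfrak{c}}$ with single degree $d_1=2$.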
Soon after, Vinberg \cite{vinberg:graded} showed that for any $m\geq 0$ the invariant theory of the $G_0$-action on $\mathfrak{g}_1$ has similar parallels with the adjoint representation of $G$ on $\mathfrak{g}$. Vinberg worked over $\mathbb{C}$, but in
\cite{levy:thetap}, Vinberg's theory was extended to fields of good odd positive characteristic
not dividing $m$.
Some highlights of Vinberg theory are as follows.
A {\it Cartan subspace} is a linear subspace $\mathfrak{c}\subset\mathfrak{g}_1$
which is abelian as a Lie algebra, consists of semisimple elements, and is maximal with these two properties. All Cartan subspaces are conjugate under $G_0$.
Hence the dimension of $\mathfrak{c}$ is an invariant of the grading, called the {\it rank}, which we denote in this introduction by $r$.
The {\it little Weyl group} is the subgroup $W_\mathfrak{c}$ of $\GL(\mathfrak{c})$ arising from the action of the normalizer of $\mathfrak{c}$ in $G_0$. The group $W_\mathfrak{c}$ is finite and is generated by semisimple transformations of $\mathfrak{c}$ fixing a hyperplane and we have an isomorphism of invariant polynomial rings
$$k[\mathfrak{g}_1]^{G_0}\overset\sim\longrightarrow k[\mathfrak{c}]^{W_\mathfrak{c}},$$
given by restriction. Finally $k[\mathfrak{g}_1]^{G_0}\simeq k[f_1,\dots, f_r]$ is a polynomial algebra generated by $r$ algebraically independent polynomials $f_1,\dots,f_r$ whose degrees $d_1,\dots, d_r$ are determined by the grading. In particular the product of these degrees is the order of $W_\mathfrak{c}$.
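For orientation, the familiar special case $m=1$ (the adjoint representation) recovers Chevalley's picture: $\mathfrak{c}$ is a Cartan subalgebra, $W_\mathfrak{c}=W$, and the $d_i$ are the fundamental degrees. For type $A_r$, for instance,

```latex
k[\mathfrak{g}]^{G}\simeq k[f_1,\dots,f_r],\qquad
(d_1,\dots,d_r)=(2,3,\dots,r+1),\qquad
\prod_{i=1}^{r} d_i=(r+1)!=|S_{r+1}|=|W|.
```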
We have a dichotomy: either the rank $r=0$, in which case $\mathfrak{g}_1$ consists entirely of nilpotent elements of $\mathfrak{g}$, or $r>0$, in which case $m>0$ and $\mathfrak{g}_1$ contains semisimple elements of $\mathfrak{g}$. A basic problem is to classify all gradings of rank $r>0$ and to compute the little Weyl groups $W_\mathfrak{c}$ in each case.
Another open question is {\it Popov's conjecture:\ } $\mathfrak{g}_1$ should contain a
{\it Kostant section}: an affine subspace $\mathfrak{v}$ of $\mathfrak{g}_1$ with $\dim\mathfrak{v}=r$, such that the restriction map
$k[\mathfrak{g}_1]^{G_0}\longrightarrow k[\mathfrak{v}]$
is an isomorphism.
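The prototype, again in the adjoint case, is Kostant's section through a principal $\mathfrak{sl}_2$-triple $(e,h,f)$:

```latex
\mathfrak{v}\;=\;e+\mathfrak{z}_{\mathfrak{g}}(f),\qquad
k[\mathfrak{g}]^{G}\ \overset\sim\longrightarrow\ k[\mathfrak{v}] ;
```

for $\mathfrak{sl}_{r+1}$ this is a space of companion-type matrices, on which the invariants restrict to the coefficients of the characteristic polynomial.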
The classification of positive-rank gradings and their little Weyl groups, along with verification of Popov's conjecture was given in \cite{levy:thetap} and \cite{levy:exceptional} for gradings of Lie algebras of classical type and those of types $G_2$ and $F_4$. In this paper we complete this work by proving analogous results for types $E_6,\ E_7$ and $E_8$, using new methods which apply
to the Lie algebras of general simple algebraic groups $G$.
The main idea is to compute Kac coordinates of lifts of automorphisms of the root system $R$ of $\mathfrak{g}$, as we shall now explain.
Choosing a base in $R$ and a pinning in $\mathfrak{g}$ (defined in section \ref{mum}), we may write
the automorphism groups $\Aut(R)$ and $\Aut(\mathfrak{g})$ as semidirect products:
$$\Aut(R)=W\rtimes \Theta,\qquad\Aut(\mathfrak{g})=G\rtimes\Theta,$$
where $W$ is the Weyl group of $R$ and $\Theta$, the symmetry group of the Dynkin graph $D(R)$ of $R$, is identified with the group of automorphisms of $\mathfrak{g}$ fixing the chosen pinning.
To each
$\vartheta\in \Theta$ one can associate an affine root system $\Psi=\Psi(R,\vartheta)$ consisting of affine functions on an affine space $\mathcal{A}$ of dimension
equal to the number of $\vartheta$-orbits on the nodes of the diagram $D(R)$.
Kac's original construction of $\Psi$ uses infinite-dimensional Lie algebras and works over $\mathbb{C}$; our approach constructs $\Psi$ directly from the pair $(R,\vartheta)$ and works over any algebraically closed field in which the order $e$ of $\vartheta$ is nonzero.
The choice of pinning on $\mathfrak{g}$ determines a rational structure on $\mathcal{A}$ and a basepoint $x_0\in\mathcal{A}$.
Following an idea of Serre \cite{serre:kac}, we associate to each rational point $x\in\mathcal{A}_\mathbb{Q}$ an embedding $\varrho_x:\boldsymbol{\mu}_m\hookrightarrow G$ of group schemes over $k$, where $m$ is the denominator of $x$.
If $m$ is nonzero in $k$ and we choose a root of unity $\zeta\in k^\times$ of order $m$,
then $x$ determines an actual automorphism $\theta_x\in G\vartheta$ of order $m$. If $x$ lies in the closure $\overline C$ of the fundamental alcove of $\mathcal{A}$ then the affine coordinates of $x$ are those defined by Kac (when $k=\mathbb{C}$ and $\zeta=e^{2\pi i/m}$); we call these {\it normalized Kac coordinates}, since we also consider points $x$ outside $\overline C$ having some affine coordinates negative. Any $x\in\mathcal{A}_\mathbb{Q}$ can be moved into $\overline C$ via operations of the affine Weyl group $W(\Psi)$, and this can be done effectively, using a simple algorithm.
See also \cite{levy:exceptional}, which gives a different way of extending Kac coordinates to positive characteristic.
The half-sum of the positive co-roots is a vector $\check\rho$ belonging to the translation subgroup of $\mathcal{A}$. In the {\it principal segment} $[x_0,x_0+\check\rho]\subset \mathcal{A}$ we are especially interested in the points
$$x_m:=x_0+\tfrac{1}{m}\check\rho\in\mathcal{A}_\mathbb{Q},$$
where $m$ is the order of an elliptic $\mathbb{Z}$-regular automorphism $\sigma\in\Aut(R)$. Here $\sigma$ is {\it elliptic} if $\sigma$ has no nonzero fixed points in the reflection representation, and we say $\sigma$ is {\it $\mathbb{Z}$-regular} if the group generated by $\sigma$ acts freely on $R$.
(This is almost equivalent to Springer's notion of regularity, and for our purposes it is the correct one. See section \ref{Zregular}.)
Now assume that the characteristic of $k$ is not a torsion prime for $\mathfrak{g}$.
Choose a Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$, let $T$ be the maximal torus of $G$ centralizing $\mathfrak{t}$ with normalizer $N$ in $G$ and let $\Aut(\mathfrak{g},\mathfrak{t})$ be the subgroup of $\Aut(\mathfrak{g})$ preserving $\mathfrak{t}$.
The groups $\Aut(R)$ and $\Aut(\mathfrak{g},\mathfrak{t})/T$ are isomorphic and we may canonically identify $W$-conjugacy classes in $\Aut(R)$ with $N/T$-conjugacy classes in $\Aut(\mathfrak{g},\mathfrak{t})/T$. Let
$\sigma\in\Aut(R)$ be an elliptic $\mathbb{Z}$-regular automorphism whose order $m$ is nonzero in $k$. Write $\sigma=w\cdot\vartheta$ with $w\in W$ and $\vartheta\in\Theta$.
Then there is a unique $G$-conjugacy class $C_\sigma\subset G\vartheta$ such that
$C_\sigma\cap \Aut(\mathfrak{g},\mathfrak{t})$ projects to the class of $\sigma$ in $\Aut(R)$.
Using results of Panyushev in \cite{panyushev:theta}, we show that $C_\sigma$ contains the automorphism $\theta_{x_m}$, where $x_m$ is the point
on the principal segment defined above.
The (un-normalized) Kac coordinates of $x_m$ are all equal to $1$, except for one coordinate, which equals $1+(m-h_\vartheta)/e$, where $h_\vartheta$ is the twisted Coxeter number of $(R,\vartheta)$. Translating by the affine Weyl group we obtain the normalized Kac coordinates of the class $C_\sigma\subset G\vartheta$. The automorphisms in $C_\sigma$ have positive rank equal to the multiplicity of the cyclotomic polynomial $\Phi_m$ in the characteristic polynomial of $\sigma$. They are exactly the semisimple automorphisms of $\mathfrak{g}$ for which $G_0$ has stable orbits in $\mathfrak{g}_1$, in the sense of Geometric Invariant Theory.
Every $G$-conjugacy class of positive-rank automorphisms $\theta\in\Aut(\mathfrak{g})$ whose order is nonzero in $k$ contains a lift of a $W$-conjugacy class in $\Aut(R)$. For any particular group $G$ we can tabulate the Kac coordinates of such lifts; these are exactly the Kac coordinates of positive rank gradings.
For this purpose it is enough to consider only the lifts of certain classes in $\Aut(R)$, almost all of which are elliptic and $\mathbb{Z}$-regular in $\Aut(R')$ for some root subsystem $R'$ of $R$, whose Kac coordinates are easily found, as above.
These tables are only preliminary because they contain some Kac diagrams more than once, reflecting the fact that a given class in $\Aut(\mathfrak{g})$ may contain lifts of several classes of
$\sigma\in \Aut(R)$. However, each class in $\Aut(\mathfrak{g})$ has a ``best'' $\sigma$ whose properties tell us about other aspects of the grading, for example the little Weyl group $W(\mathfrak{c})$. Our final tables for $E_6, E_7$ and $E_8$ list each positive rank Kac diagram once and contain this additional data.
Besides its contributions to Vinberg theory {\it per se}, this paper was motivated by connections between Vinberg theory and the structure and representation theory of a reductive group $\mathbf{G}$ over a $p$-adic field $F$.
The base field $k$ above is then the residue field of a maximal unramified extension
$L$ of $F$. We assume $\mathbf{G}$ splits over a tame extension $E$ of $L$.
Then the Galois group $\Gal(E/L)$ is cyclic and acts on the root datum of $\mathbf{G}$ via a pinned automorphism $\vartheta$.
The grading corresponds to a point $x$ in the Bruhat-Tits building of $\mathbf{G}(L)$,
the group $G_0$ turns out to be the reductive quotient of
the parahoric subgroup $\mathbf{G}(L)_x$ fixing $x$,
and the summands $\mathfrak{g}_i$ are quotients in the Moy-Prasad filtration of $\mathbf{G}(L)_x$.
As we will show elsewhere, the classification of positive rank gradings leads to a classification of non-degenerate $K$-types, a long outstanding problem in the representation theory of $\mathbf{G}(F)$,
and stable $G_0$-orbits in the dual of $\mathfrak{g}_1$ give rise to supercuspidal representations of $\mathbf{G}(F)$ attached to elliptic $\mathbb{Z}$-regular elements of the Weyl group. These generalize the ``simple supercuspidal representations'' constructed in \cite{gross-reeder}, which correspond to the Coxeter element.
After the first version of this paper was written, we learned from A. Elashvili that $25$ years ago he, D. Panyushev and E. Vinberg had also calculated, by completely different methods, all the positive rank gradings and little Weyl groups in types $E_{6,7,8}$ (for $k=\mathbb{C}$) but they had never published their results. We thank them for comparing their tables with ours. For other aspects of positive-rank gradings on exceptional Lie algebras, see \cite{degraaf-yakimova}.
\section{Kac coordinates}
Kac \cite[chap. 8]{kac:bluebook} showed how conjugacy classes of torsion automorphisms of simple Lie algebras $\mathfrak{g}$ (over $\mathbb{C}$)
can be parametrized by certain labelled affine Dynkin diagrams, called {\bf Kac coordinates}.
If we choose a root of unity $\zeta\in\mathbb{C}^\times$ of order $m$, then any automorphism $\theta\in\Aut(\mathfrak{g})$ of order $m$ gives a grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$, where $\mathfrak{g}_i$ is the $\zeta^i$-eigenspace of $\theta$.
This grading depends on the choice of $\zeta$ and if we replace $\mathbb{C}$ by another ground field $k$, we are forced to assume that $m$ is invertible in $k$. As in \cite{levy:thetap}, this assumption will be required for our classification of positive-rank automorphisms.
However, at the level of classifying {\it all} torsion automorphisms, Serre has remarked (see \cite{serre:kac}) that, at least in the inner case, one can avoid the choice of $\zeta$ and restrictions on $k$ by replacing an automorphism $\theta$ of order $m$ with an embedding
$\boldsymbol{\mu}_m\hookrightarrow \Aut(\mathfrak{g})^\circ$ of group schemes over $k$, where $\boldsymbol{\mu}_m$ is the group scheme of $m^{th}$ roots of unity.
In this section we give an elementary treatment of Kac coordinates in Serre's more general setting, and we extend his approach to embeddings $\boldsymbol{\mu}_m\hookrightarrow\Aut(\mathfrak{g})$.
In the outer case, where the image of $\boldsymbol{\mu}_m$ does not lie in $\Aut(\mathfrak{g})^\circ$, we still find it necessary to assume the characteristic $p$ of $k$ does not divide the order of the projection of $\boldsymbol{\mu}_m$ to
the component group of $\Aut(\mathfrak{g})$.
Our approach differs from \cite{kac:bluebook} in that we avoid infinite dimensional Lie algebras (cf. \cite{reeder:torsion}).
We then discuss a family of examples, the principal embeddings of $\boldsymbol{\mu}_m$, which play an important role in gradings of positive rank.
\subsection{Based automorphisms and affine root systems}\label{affine}
For background on finite and affine root systems see \cite{bour456} and \cite{macdonald:affine}.
Let $R$ be an irreducible reduced finite root system spanning a real vector space $V$.
The automorphism group of $R$ is the subgroup of $\GL(V)$ preserving $R$:
$$\Aut(R)=\{\sigma\in\GL(V):\ \sigma(R)=R\}.$$
We say an automorphism $\sigma\in\Aut(R)$ is {\bf based} if $\sigma$ preserves a base of $R$.
If we choose a base $\Delta$ of $R$ then we have a splitting
$$\Aut(R)=W\rtimes\Theta,$$
where $W$ is the Weyl group of $R$ and $\Theta=\{\sigma\in\Aut(R):\ \sigma(\Delta)=\Delta\}$.
Since $R$ is irreducible, the group $\Theta$ is isomorphic to a symmetric group $S_n$ for $n=1,2$ or $3$.
In this section we will associate to any based automorphism $\vartheta\in\Aut(R)$ an affine root system $\Psi(R,\vartheta)$ whose isomorphism class will depend only on the order $e$ of $\vartheta$.
We first establish more notation to be used throughout the paper.
Let $X=\mathbb{Z} R$ be the lattice in $V$ spanned by $R$ and let $\check X=\Hom(X,\mathbb{Z})$ be the dual lattice. We denote the canonical pairing between $X$ and $\check X$ by
$\langle \lambda,\check \omega\rangle$, for $\lambda\in X$ and $\check\omega\in\check X$.
Fix a base $\Delta=\{\alpha_1,\dots,\alpha_\ell\}$ of $R$, where $\ell$ is the rank of $R$,
and let $\check R\subset\check X$ be the co-root system with base
$\check\Delta=\{\check\alpha_1,\dots,\check\alpha_\ell\}$, where $\check\alpha_i$ is the co-root corresponding to $\alpha_i$.
The pairing $\langle\ ,\ \rangle$ extends linearly to the real vector spaces $V=\mathbb{R}\otimes X $ and $\check V:=\mathbb{R}\otimes \check X $. Thus, a root $\alpha\in R$ can be regarded as the linear functional $\check v\mapsto \langle \alpha,\check v\rangle$ on $\check V$, and by duality $\Aut(R)$ can be regarded as a subgroup of $\GL(\check V)$. In this viewpoint the Weyl group $W$ is the subgroup of $\GL(\check V)$ generated by the reflections
$s_\alpha:\check v\mapsto \check v-\langle \alpha,\check v\rangle\check\alpha$ for $\alpha\in R$.
Let $\check\rho$ be one-half the sum of those co-roots $\check\alpha\in\check R$ which are non-negative integral combinations of elements of $\check\Delta$. We also have
$$\check\rho=\check\omega_1+\check\omega_2+\cdots+\check\omega_\ell,$$
where $\{\check\omega_i\}$ are the fundamental co-weights dual to $\Delta$, that is, $\langle\alpha_i,\check\omega_i\rangle=1$ and $\langle\alpha_i,\check\omega_j\rangle=0$ if $i\neq j$.
Let $\check V^\vartheta=\{\check v\in \check V:\ \vartheta(\check v)=\check v\}$ be the subspace of $\vartheta$-fixed vectors in $\check V$ and let
$R_\vartheta=\{\alpha\vert_{\check V^\vartheta}:\ \alpha\in R\}$ be the set of restrictions to
$\check V^\vartheta$ of roots in $R$. By duality $\Theta$ permutes the fundamental co-weights $\{\check\omega_i\}$, so the vector $\check\rho$ lies in $\check V^\vartheta$.
And since $\langle\alpha,\check\rho\rangle=1$ for all $\alpha\in \Delta$, it follows that no root vanishes on $\check V^\vartheta$. Moreover two roots $\alpha,\alpha'\in R$ have the same restriction to $\check V^\vartheta$ if and only if they lie in the same $\langle\vartheta\rangle$-orbit in $R$. Hence we have
$$R_\vartheta=\{\beta_a:\ a\in R/\vartheta\},$$
where $R/\vartheta$ is the set of $\langle\vartheta\rangle$-orbits in $R$ and
$\beta_a=\alpha\vert_{\check V^\vartheta}$ for any $\alpha\in a$.
For $a\in R/\vartheta$, we define $\check \beta_a\in \check V^\vartheta$ by
\begin{equation}\label{check}
\check \beta_a=
\begin{cases}
\ \ \ \sum_{\alpha\in a}\check \alpha & \quad\text{if}\quad 2\beta_a\notin R_\vartheta\\
2\sum_{\alpha\in a}\check \alpha & \quad\text{if}\quad 2\beta_a\in R_\vartheta,
\end{cases}
\end{equation}
and we set $\check R_\vartheta=\{\check\beta_a:\ a\in R/\vartheta\}$. Then
$\langle \beta_a,\check\beta_a\rangle=2$ and $\langle \beta_a,\check\beta_b\rangle\in\mathbb{Z}$
for all $a,b\in R/\vartheta$.
Note that $2\beta_a\notin R_\vartheta$ precisely when $a$ consists of ``orthogonal'' roots;
that is, when $a=\{\gamma_1,\dots, \gamma_k\}$ with $\langle\gamma_i,\check \gamma_j\rangle=0$ for $i\neq j$.
In this case, the element
$$s_a:=s_{\gamma_1}s_{\gamma_2}\cdots s_{\gamma_k}\in W$$
has order two, is independent of the order of the product and is centralized by $\vartheta$.
If $2\beta_a\in R_\vartheta$ we have $a=\{\gamma_1,\gamma_2\}$ where $\gamma_1+\gamma_2\in R$.
In this case we define $s_a=s_{\gamma_1+\gamma_2}$, noting this $s_a$ is also centralized by $\vartheta$.
A short calculation shows that
$$s_a(\beta_b)=\beta_b-\langle \beta_b,\check\beta_a\rangle\beta_a,$$
in all cases.
On the other hand, if $\beta\in b$, then $s_a(\beta_b)=s_a(\beta)\vert_{\check V^\vartheta}$,
since $s_a$ is centralized by $\vartheta$. It follows that
$\beta_b-\langle \beta_b,\check\beta_a\rangle\beta_a\in R_\vartheta$.
These involutions $s_a$, for $a\in R/\vartheta$,
generate the centralizer $W^\vartheta=\{w\in W:\ \vartheta w=w\vartheta\}$
\cite[2.3]{steinberg:varchev}.
Thus, $R_\vartheta$ is a root system (possibly non-reduced) whose Weyl group is $W^\vartheta$.
The rank $\ell_\vartheta$ of $R_\vartheta$ equals the number of $\vartheta$-orbits in $\Delta$.
Let $\mathcal{A}^\vartheta$ be an affine space for the vector space $\check V^\vartheta$.
We denote the action by $(v,x)\mapsto v+x$ for $v\in\check V^\vartheta$ and $x\in \mathcal{A}^{\vartheta}$ and
for $x,y\in\mathcal{A}^{\vartheta}$ we let $y-x\in \check V^\vartheta$ be the unique vector such that $(y-x)+x=y$.
For any affine function $\psi:\mathcal{A}^\vartheta\to \mathbb{R}$ we let $\dot\psi:\check V^\vartheta\to \mathbb{R}$ be the unique linear functional such that $\psi(x+v)=\psi(x)+\langle\dot\psi,v\rangle$ for all
$v\in\check V^\vartheta$.
Choose a basepoint $x_0\in\mathcal{A}^{\vartheta}$. For each linear functional $\lambda:\check V^\vartheta\to\mathbb{R}$ define an affine function
$\widetilde\lambda:\mathcal{A}^{\vartheta}\to \mathbb{R}$ by $\widetilde\lambda(x)=\langle \lambda,x-x_0\rangle$. In particular, each root
$\beta_a\in R_\vartheta$ gives an affine function $\widetilde\beta_a$ on $\mathcal{A}^\vartheta$.
For each orbit
$a\in R/\vartheta$, set $u_a=1/|a|$.
If $ \beta_a\notin 2R_\vartheta$, define
$$\Psi_a=\{\widetilde\beta_a+nu_a:\ n\in \mathbb{Z}\}.$$
If $\beta_a\in 2R_\vartheta$, define
$$\Psi_a=\{\widetilde\beta_a+(n+{\textstyle{ \frac{1}{2} } })u_a:\ n\in \mathbb{Z}\}.$$
The resulting collection
$$\Psi(R,\vartheta):=\bigcup_{a\in R/\vartheta}\Psi_a$$
of affine functions on $\mathcal{A}^\vartheta$ is a reduced, irreducible affine root system
(in the sense of \cite[1.2]{macdonald:affine}) and $x_0\in\mathcal{A}^\vartheta$ is a special point for $\Psi(R,\vartheta)$.
An {\bf alcove} in $\mathcal{A}^\vartheta$ is a connected component of the open subset of points in $\mathcal{A}^\vartheta$ on which no affine function in $\Psi(R,\vartheta)$ vanishes.
There is a unique alcove $C\subset\mathcal{A}^{\vartheta}$ containing $x_0$ in its closure and on which $\widetilde\beta_a>0$ for every $\vartheta$-orbit $a\subset\Delta$.
The walls of $C$ are hyperplanes $\psi_i=0$,
$i=0,1,\dots,\ell_\vartheta=\dim \mathcal{A}^\vartheta$, and $\{\psi_0,\psi_1,\dots,\psi_{\ell_\vartheta}\}$ is a base of the affine root system $\Psi(R,\vartheta)$.
The point $x_0$ lies in all but one of these walls; we choose the numbering so that
$\psi_0(x_0)\neq 0$.
There are unique relatively prime positive integers $b_i$ such that $\sum b_i\dot\psi_i=0$.
We have $b_0=1$ and the affine function
$\sum_{i=0}^{\ell_\vartheta}b_i\psi_i$ is constant, equal to $1/e$, where $e=|\vartheta|$.
The reflections $r_i$ about the hyperplanes $\psi_i=0$ for $i=0,1,\dots,\ell_\vartheta$ generate an irreducible affine Coxeter group $W_\aff(R,\vartheta)$ which acts simply-transitively on alcoves in $\mathcal{A}^\vartheta$.
If $\vartheta=1$ we recover the affine root system attached to $R$ as in \cite{bour456} and
$W_\aff(R):=W_\aff(R,1)$ is the affine Weyl group of $R$.
For an example with nontrivial $\vartheta$, take $R$ of type $A_2$ and $\vartheta$ of order two.
We have $\check V=\{(x,y,z)\in\mathbb{R}^3:\ x+y+z=0\}$, and
$$\alpha_1=x-y,\quad\alpha_2=y-z,\quad \check\alpha_1=(1,-1,0),\quad \check\alpha_2=(0,1,-1),\quad\check\rho=(1,0,-1).$$
The nontrivial automorphism $\vartheta\in\Aut(R)$ permuting $\{\alpha_1,\alpha_2\}$ acts on $\check V$ by $\vartheta(x,y,z)=(-z,-y,-x)$. We identify
$\check V^\vartheta=\{(x,0,-x):\ x\in\mathbb{R}\}$ with $\mathbb{R}$ via projection onto the first component.
The $\langle\vartheta\rangle$-orbits in the positive roots are $a=\{\alpha_1,\alpha_2\}$ and $b=\{\alpha_1+\alpha_2\}$,
so $\beta_a=x$ and $\beta_{b}=2x$.
If we identify $\mathcal{A}^\vartheta=\mathbb{R}$ and take $x_0=0$, then
$$\Psi_a=\{x+\tfrac{n}{2}:\ n\in \mathbb{Z}\},\qquad \Psi_b=\{2x+n+\tfrac{1}{2}:\ n\in \mathbb{Z}\}.$$
The alcove $C$ is the open interval $(0,\tfrac{1}{4})$ in $\mathbb{R}$. The walls of $C$ are defined by the vanishing of the affine roots
$$\psi_0=\tfrac{1}{2}-2x,\qquad \psi_1=x$$
which satisfy the relation $\psi_0+2\psi_1=\frac{1}{2}$, so $b_0=1$ and $b_1=2$.
The group $W_\aff(R,\vartheta)$ is infinite dihedral, generated by the reflections of $\mathbb{R}$ about $0$ and $\tfrac{1}{4}$.
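The relations in this example are easy to check numerically; the following spot-check is our illustration, not part of the paper.

```python
# Numerical spot-check (ours) of the 2A2 example above: the affine roots
# psi_0 = 1/2 - 2x and psi_1 = x satisfy psi_0 + 2*psi_1 = 1/2, and both
# are positive exactly on the alcove C = (0, 1/4).
def psi0(x):
    return 0.5 - 2 * x

def psi1(x):
    return x

for x in [0.01, 0.1, 0.2, 0.24]:
    assert abs(psi0(x) + 2 * psi1(x) - 0.5) < 1e-12  # relation psi_0 + 2 psi_1 = 1/2
    assert psi0(x) > 0 and psi1(x) > 0               # interior of the alcove
assert psi1(0) == 0 and psi0(0.25) == 0.0            # walls at the endpoints
```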
We list the affine root systems for nontrivial $\vartheta$ in Table 1.
As the structure of $\Psi(R,\vartheta)$ depends only on $R$ and the order $e$ of $\vartheta$, the pair $(R,\vartheta)$ is indicated by the symbol ${^eR}$, called the {\it type} of $(R,\vartheta)$.
Information about $\Psi(R,\vartheta)$ is encoded in a {\it twisted affine diagram} $D({^eR})$ which is a graph with vertices indexed by $i\in\{0,1,\dots,\ell_\vartheta\}$, labelled by the integers $b_i$.
The number $m_{ij}$ of bonds between vertices $i$ and $j$ is determined as follows.
Choose a $W^\vartheta$-invariant inner product $(\ ,\ )$ on $V^\vartheta$ and suppose that
$(\dot\psi_j,\dot\psi_j)\geq (\dot\psi_i,\dot\psi_i)$. Then
$$m_{ij}=\frac{(\dot\psi_j,\dot\psi_j)}{(\dot\psi_i,\dot\psi_i)}.$$
If $m_{ij}>1$ we put an arrow pointing from vertex $j$ to vertex $i$.
Removing the labels and arrows from the twisted affine diagram $D({^eR})$ gives the
Coxeter diagram $D({^eR})_\cox$ of $W_\aff(R,\vartheta)$ (except in type ${^2A_2}$ the four bonds should be interpreted as $r_0r_1$ having infinite order). Table 1 gives the twisted affine diagrams for $e>1$ (their analogues for $e=1$ being well-known).
For each type we also give the {\it twisted Coxeter number},
which is the sum
\begin{equation}\label{twistedcoxeter}
h_\vartheta=e\cdot(b_0+b_1+\cdots+b_{\ell_\vartheta}),
\end{equation}
whose importance will be seen later.
The node $i=0$ is indicated by $\bullet$.
\begin{center}
{\small Table 1: Twisted Affine diagrams and twisted Coxeter numbers}
\begin{equation}\label{coxnumber}
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccc}
\hline
{^eR} &D({^eR})& \ell_\vartheta & h_\vartheta\\
\hline
{^2\!A_{2}} & \twoAtwo & 1 & 6\\
{^2\!A_{2n}} &
\overset{1}\bullet\!\!\Longrightarrow\!\!\overset{2}\circ \text{----}\!\! \overset{2}\circ\!\text{--} \cdots\text{--}
\overset{2}\circ\!\!\Longrightarrow\!\!\overset{2}\circ & n & 4n+2\\
{^2\!A_{2n-1}}&\begin{matrix}
\overset{1}\circ \text{----}\!\!\!\!\! &\overset{2}\circ&
\!\!\!\!\!\text{--} \cdots\text{--}
\overset{2}\circ\!\!\Longleftarrow\!\overset{1}\circ\\
&\underset{1}{\text{\rotatebox{270}{\!\!\!\!\!\!\!\!\!----$\bullet$}}}&
\end{matrix}&n&4n-2\\
{^2\!D_{n}} &\overset{1}\bullet\!\!\Longleftarrow\!\!
\overset{1}\circ \text{----}\!\! \overset{1}\circ\!\text{--} \cdots\text{--}
\overset{1}\circ\!\!\Longrightarrow\!\!\overset{1}\circ&n-1&2n\\
{^3\!D_{4}}
&\overset{1}\bullet\text{----}\overset{2}\circ\!\Lleftarrow\!\overset{1}\circ &2&12\\
{^2\!E_{6}} &\overset{1}\bullet\!\text{----}\!
\overset{2}\circ\! \text{----}\!
\overset{3}\circ\!\!\Longleftarrow\!\!
\overset{2}\circ\text{----}\!
\overset{1}\circ&4&18\\
\hline
\end{array}}
\end{equation}
\end{center}
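As a quick sanity check of Table 1 (our illustration, not part of the paper), the values $h_\vartheta$ can be recomputed from the diagram labels $b_i$ via \eqref{twistedcoxeter}:

```python
# Sanity check (ours): recompute the twisted Coxeter numbers of Table 1
# from the diagram labels b_i via h_theta = e * (b_0 + ... + b_l).
def twisted_coxeter_number(e, labels):
    return e * sum(labels)

rows = [                          # (type, e, labels b_i, h_theta from Table 1)
    ("2A2", 2, [1, 2], 6),
    ("3D4", 3, [1, 2, 1], 12),
    ("2E6", 2, [1, 2, 3, 2, 1], 18),
]
for name, e, labels, h in rows:
    assert twisted_coxeter_number(e, labels) == h
```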
{\bf Remark:\ }
Let $\mathcal{R}$ be the set of pairs $(R,e)$, where $R$ is an irreducible reduced finite root system and $e$ is a divisor of $|\Theta|$. Let $\mathcal{R}_{\aff}$ be the set of irreducible reduced affine root systems, as in \cite{macdonald:affine}, up to isomorphism.
Let $\mathcal{D}$ be the set of pairs
$(D,o)$, where $D$ is the Coxeter diagram of an irreducible affine Coxeter group and $o$ is a choice of orientation of each multiple edge of $D$. The classification of reduced irreducible affine root systems \cite[1.3]{macdonald:affine} shows that the assignments
$(R,e)\mapsto {^eR}\mapsto D({^eR})$ give bijections
$$\mathcal{R}\overset\sim\longrightarrow\mathcal{R}_{\aff}\overset\sim\longrightarrow\mathcal{D}.$$
\subsection{Torsion points, Kac coordinates and the normalization algorithm}\label{kac-coord}
Retain the notation of the previous section.
Let $\mathcal{A}^{\vartheta}_{\mathbb{Q}}$ be the set of points in $\mathcal{A}^\vartheta$ on which the affine roots in $\Psi(R,\vartheta)$ take rational values.
The {\it order} of a point $x\in \mathcal{A}^{\vartheta}_{\mathbb{Q}}$ is the smallest positive integer $m$ such that $\psi(x)\in\frac{1}{m}\mathbb{Z}$ for every $\psi\in\Psi(R,\vartheta)$. In this case there are integers $s_i$ such that $\psi_i(x)=s_i/m$, and
$\gcd(s_0,\dots,s_{\ell_\vartheta})=1$.
Moreover, since $b_0\psi_0+\cdots+b_{\ell_\vartheta}\psi_{\ell_\vartheta}$ is constant, equal to $1/e$ (recall that $e$ is the order of $\vartheta$), it follows that
$$e\cdot \sum_{i=0}^{\ell_\vartheta} b_i s_i=m.$$
In particular, the order $m$ is divisible by $e$.
We call the integer vector $(s_0,s_1,\dots, s_{\ell_\vartheta})$ the (un-normalized) {\bf Kac coordinates} of $x$.
The point $x$ lies in $\overline C$ precisely when all $s_i$ are non-negative; in this case we refer to the vector $(s_i)$ as {\bf normalized Kac coordinates}. The action of the affine Weyl group
$W_{\aff}(R,\vartheta)$ on $\mathcal{A}_\mathbb{Q}^\vartheta$ can be visualized as an action on Kac coordinates, as follows.
The reflection $r_j$ about the wall $\psi_j=0$ sends the Kac coordinates $(s_i)$ to $(s_i')$, where
$$s_i'= s_i-\langle \beta_i,\check \beta_j\rangle s_j.$$
Un-normalized Kac coordinates may have some $s_j<0$. Applying $r_j$ for such a $j$, and repeating this process by selecting negative nodes and applying the corresponding reflections, we eventually obtain normalized Kac coordinates $(s_i')$. Geometrically, this {\bf normalization algorithm} amounts to moving a given point $x\in \mathcal{A}_\mathbb{Q}^\vartheta$ into the fundamental alcove $\overline C$ by a sequence of reflections about walls; see \cite[Sec. 3.2]{reeder:torsion}.
We have implemented the normalization algorithm on a computer and used it extensively to construct the tables in sections \ref{E678} and \ref{2E6}.
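As a minimal sketch of how such an implementation might look (ours, not the authors' code), the loop below applies the reflection formula $s_i'=s_i-\langle \beta_i,\check\beta_j\rangle s_j$ at negative nodes; the pairing matrix shown is for the untwisted affine diagram of $A_2$ (so $e=1$ and all labels $b_i=1$), chosen purely for illustration.

```python
# Toy implementation (ours) of the normalization algorithm: reflect at a node
# with negative coordinate, s_i -> s_i - a_ij * s_j, until all coordinates are
# non-negative.  The pairing matrix a_ij = <beta_i, beta_j-check> below is for
# the untwisted affine diagram of A_2 (nodes 0, 1, 2 forming a triangle).
A2_AFFINE = [[ 2, -1, -1],
             [-1,  2, -1],
             [-1, -1,  2]]

def normalize(s, pairing):
    """Repeatedly apply the reflection r_j at the first node j with s_j < 0.

    For coordinates coming from a rational point of the affine space the loop
    terminates, since the affine Weyl group orbit meets the closed alcove."""
    s = list(s)
    while any(v < 0 for v in s):
        j = next(i for i, v in enumerate(s) if v < 0)
        sj = s[j]
        for i in range(len(s)):
            s[i] -= pairing[i][j] * sj
    return s

# Each reflection preserves the invariant sum(b_i * s_i) = m/e (here all b_i = 1):
print(normalize([-1, 2, 2], A2_AFFINE))  # -> [1, 1, 1], both summing to 3
```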
The image of the projection $e^{-1}\sum_{i=0}^{e-1}\vartheta^i:\check X\to \check V^\vartheta$ is a lattice $Y_\vartheta$ in $\check V^\vartheta$ which is preserved by $W^\vartheta$.
The extended affine Weyl group
$$\widetilde{W}_{\aff}(R,\vartheta):=W^\vartheta\rtimes Y_\vartheta$$
contains $W_{\aff}(R,\vartheta)$ as a normal subgroup of finite index and
the quotient may be identified with a group of symmetries of
the oriented diagram $D({^eR})$.
We regard two normalized Kac diagrams as equivalent if one is obtained from the other by a symmetry of the oriented diagram $D({^eR})$ coming from $\widetilde{W}_{\aff}(R,\vartheta)$.
For $R=E_6, E_7, E_8$ and $e=1$ these diagram symmetries are: rotation of order three, reflection of order two and trivial, respectively. In type ${^2E_6}$ these diagram symmetries are trivial (see table \eqref{coxnumber}).
\subsection{$\mu_m$-actions on Lie algebras}\label{mum}
Let $k$ be an algebraically closed field. All $k$-algebras are understood to be commutative with $1$,
and in this section all group schemes are affine over $k$, and are regarded as representable functors from the category of finitely generated $k$-algebras to the category of groups. We refer to \cite{waterhouse} for more details on affine group schemes.
Every finitely generated $k$-algebra $A$ is a direct product of $k$-algebras
$A=\prod_{\iota\in I(A)}A_\iota,$
where $I(A)$ indexes the connected components $\Spec(A_\iota)$ of $\Spec(A)$ and each $A_\iota$ is a $k$-algebra with no non-trivial idempotents. This decomposition is to be understood when we describe the $A$-valued points in various group schemes below.
Each finite (abstract) group $\varGamma$ is regarded as a constant group scheme, given by
$\varGamma(A)=\prod_{\iota\in I(A)}\varGamma(A_\iota),$
where $\varGamma(A_\iota)=\varGamma$. In other words, an element $\gamma\in\varGamma(A)$ is a function $(\iota\mapsto\gamma_\iota)$ from $I(A)$ to $\varGamma$.
Let $\boldsymbol{\mu}_m$ denote the group scheme of $m^{th}$ roots of unity,
whose $A$-valued points are given by
$$\boldsymbol{\mu}_m(A)=\{a\in A:\ a^m=1\}=\prod_{\iota\in I(A)}\boldsymbol{\mu}_m(A_\iota).$$
If $m$ is nonzero in $k$ then $\boldsymbol{\mu}_m(A_\iota)=\boldsymbol{\mu}_m(k)$ for every $\iota\in I(A)$,
so $\boldsymbol{\mu}_m$ is a constant group scheme and we have
$$\boldsymbol{\mu}_m(A)=\prod_{\iota\in I(A)}\boldsymbol{\mu}_m(k).$$
If $m$ is zero in $k$ then $\boldsymbol{\mu}_m$ is not a constant group scheme.
A $k$-vector space $V$ can be regarded as a $k$-scheme such that $V(A)=A\otimes_kV$.
To give a grading $V=\oplus_{i\in \mathbb{Z}/m}V_i$ as $k$-schemes
is to give a morphism $\varrho:\boldsymbol{\mu}_m\to\GL(V)$, where
$\GL(V)(A)$ is the automorphism group of the free $A$-module $V(A)$.
Indeed, $\mathbb{Z}/m$ is canonically isomorphic to the Cartier dual $\Hom(\boldsymbol{\mu}_m,\mathbf{G}_m)$,
so a morphism $\varrho:\boldsymbol{\mu}_m\to\GL(V)$ gives a grading $V(A)=\oplus_{i\in \mathbb{Z}/m}V_i(A)$ where $V_i(A)=\{v\in V(A):\ \varrho(\zeta)v=\zeta^i v\quad\forall \zeta\in \boldsymbol{\mu}_m(A)\}$.
Now let $R$ be an irreducible root system as before, with base $\Delta$
and group of based automorphisms $\Theta$.
Set $X=\mathbb{Z} R$ and $\check X=\Hom(X,\mathbb{Z})$.
Then $(X,R,\check X,\check R)$ is the root datum of a connected simple algebraic group scheme $G$ over $k$ of adjoint type.
Let $\mathfrak{g}$ be the Lie algebra of $G$ and let $T\subset B$ be a maximal torus contained in a Borel subgroup of $G$. We identify $R$ with the set of roots of $T$ in $\mathfrak{g}$, and $\Delta$ with the set of simple roots of $T$ in the Lie algebra of $B$. Choose a root vector $E_i$ for each simple root
$\alpha_i\in\Delta$. The data $(X,R,\check X,\check R,\{E_i\})$ is called a {\bf pinning} of $G$.
Fix an element $\vartheta\in\Theta$. Assume the order $e$ of $\vartheta$ is nonzero in $k$,
so that $\boldsymbol{\mu}_e$ and $\langle \vartheta\rangle$ are isomorphic constant group schemes over $k$,
and choose an isomorphism $\tau:\boldsymbol{\mu}_e\to\langle\vartheta\rangle$.
By our choice of pinning $(X,R,\check X,\check R,\{E_i\})$, the group $\langle\vartheta\rangle$ may also be regarded as a subgroup of $\Aut(\mathfrak{g})$ permuting the root vectors $E_i$ in the same way $\vartheta$ permutes the roots $\alpha_i$, and we have a semidirect product
$$G\rtimes\langle\vartheta\rangle\subset\Aut(\mathfrak{g}),$$
where the cyclic group $\langle\vartheta\rangle$ is now viewed as a constant subgroup scheme of automorphisms of $\mathfrak{g}$, whose points in each $k$-algebra $A$ consist of vectors $(\vartheta^{n_\iota})$ acting on $\mathfrak{g}(A)=\prod_\iota\mathfrak{g}(A_\iota)$, with
$\vartheta^{n_\iota}$ acting on the factor $\mathfrak{g}(A_\iota)$.
Now let $m$ be a positive integer divisible by $e$ (but $m$ could be zero in $k$).
Let $m/e:\boldsymbol{\mu}_m\to \boldsymbol{\mu}_e$ be the morphism sending $\zeta\in\boldsymbol{\mu}_m(A)$ to
$\zeta^{m/e}\in\boldsymbol{\mu}_e(A)$ for every $k$-algebra $A$.
Finally, for each rational point $x\in\mathcal{A}^\vartheta_\mathbb{Q}$ of order $m$ we shall now
define a morphism
$$\varrho_x:\boldsymbol{\mu}_m\to T^\vartheta\times\langle\vartheta\rangle,$$
where $T^\vartheta$ is the subscheme of $\vartheta$-fixed points in $T$.
We have $x=\tfrac{1}{m}\check \lambda+ x_0$, for some
$\check \lambda\in\check X^\vartheta$.
The co-character $\check \lambda$ restricts to a morphism
$\check \lambda_m:\boldsymbol{\mu}_m\to T^\vartheta$ and
we define $\varrho_x$
on $A$-valued points by
$$\varrho_x(\zeta)=\check \lambda_m(\zeta)\times\tau(\zeta^{m/e}),\qquad\text{for}\qquad
\zeta\in\boldsymbol{\mu}_m(A).
$$
Since
$$\Hom(\boldsymbol{\mu}_m,T^\vartheta)=\check X^\vartheta/m\check X^\vartheta\simeq
\tfrac{1}{m}\check X^\vartheta/\check X^\vartheta,
$$
we see that $\check \lambda_m$ corresponds precisely
to an orbit of $x$ under translation by $\check X^\vartheta$ on
$\mathcal{A}^\vartheta_\mathbb{Q}$.
The condition that $x$ has order $m$ means that $\check \lambda_m$ does not factor through $\boldsymbol{\mu}_d$ for any proper divisor
$d\mid m$.
Let $\widetilde w\in \widetilde{W}_{\aff}(R,\vartheta)$ have projection $w\in W^\vartheta$ and denote the canonical action of
$W^\vartheta$ on $T^\vartheta$ by $w\cdot t$, for $t\in T^\vartheta(A)$.
Then we have
$$\varrho_{\widetilde w\cdot x}(\zeta)=w\cdot \varrho_x(\zeta) $$
for all $\zeta\in \boldsymbol{\mu}_m(A)$.
One can check (cf. \cite[section 3]{reeder:torsion}) that two points $x,y\in \mathcal{A}_\mathbb{Q}^\vartheta$ of order $m$ give $G$-conjugate embeddings $\varrho_x, \varrho_y:\boldsymbol{\mu}_m\hookrightarrow T^\vartheta\times \langle\vartheta\rangle$ if and only if $x$ and $y$ are conjugate under $\widetilde{W}_{\aff}(R,\vartheta)$.
The morphism $\varrho_x$ is thus determined by the Kac coordinates $(s_0,s_1,\dots, s_{\ell_\vartheta})$ of $x$ and the $G$-conjugacy class of $\varrho_x$ is determined by the normalized Kac coordinates of the $\widetilde{W}_{\aff}(R,\vartheta)$-orbit of $x$.
\subsection{Principal $\mu_m$-actions}\label{principalmu}
We continue with the notation of section \ref{mum}. Recall that $\check\rho\in \check X^\vartheta$ is the sum of the fundamental co-weights $\check\omega_i$.
For every positive integer $m$ divisible by $e$, we have a {\bf principal} point
$$x_m:=x_0+\tfrac{1}{m}\check\rho\in\mathcal{A}^\vartheta_\mathbb{Q}$$
of order $m$. It corresponds to the {\bf principal embedding}
$$\varrho_m=\varrho_{x_m}:\ \boldsymbol{\mu}_m\longrightarrow T^\vartheta\times\langle\vartheta\rangle,\qquad\text{given by}\qquad
\varrho_m(\zeta)=\check\rho(\zeta)\times\tau(\zeta^{m/e}).
$$
The Kac coordinates of $x_m$ and $\varrho_m$ are given as follows.
If $1\leq i\leq\ell_\vartheta$ we have $\psi_i=\widetilde\beta_i$ for some $\beta_i\in R_\vartheta$ which is the restriction to $\check V^\vartheta$ of a simple root $\alpha_i\in \Delta$. Since $\langle \alpha_i,\check\rho\rangle=1$, it follows that $\psi_i(x_m)=1/m$, so $s_i=1$, and we have
$$m=e\cdot \sum_{i=0}^{\ell_\vartheta}b_is_i=es_0+e\cdot\sum_{i=1}^{\ell_\vartheta}b_i=es_0+h_\vartheta-e,$$
where $h_\vartheta=e\cdot\sum_{i=0}^{\ell_\vartheta}b_i$ is the twisted Coxeter number of $(R,\vartheta)$ (see \eqref{twistedcoxeter}). Hence the remaining Kac coordinate of the principal point $x_m$ is
$$s_0=1+\frac{m-h_\vartheta}{e}.$$
This is negative if $m<h_\vartheta-e$, in which case we can apply the normalization algorithm of section \ref{kac-coord} to obtain the normalized Kac coordinates of $x_m$. Examples are found in the tables of section \ref{exceptional}.
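For instance (a worked illustration of ours, using the data of Table 1), in type ${^2E_6}$ we have $e=2$ and $h_\vartheta=18$:

```python
# Worked illustration (ours) of the formula s_0 = 1 + (m - h_theta)/e for the
# un-normalized zeroth Kac coordinate of the principal point x_m.
def s0_principal(m, h_theta, e):
    assert m % e == 0              # the order m is always divisible by e
    return 1 + (m - h_theta) // e  # exact division: h_theta = e * sum(b_i)

# Type 2E6: e = 2, twisted Coxeter number h_theta = 18 (Table 1).
assert s0_principal(18, 18, 2) == 1    # m = h_theta gives s_0 = 1
assert s0_principal(12, 18, 2) == -2   # m < h_theta - e: normalization needed
```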
We will be especially interested in the points $x_m$ where $m$ is the order of an elliptic $\mathbb{Z}$-regular automorphism in $W\vartheta$ (defined in the next section). The twisted Coxeter number $h_\vartheta$ is one of these special values of $m$, corresponding to $s_0=1$ (cf. section \ref{stableclassification} below).
\section{$\mathbb{Z}$-regular automorphisms of root systems}\label{Zregular}
We continue with the notation of section \ref{affine}: $R$ is an irreducible finite reduced root system with a chosen base $\Delta$ and automorphism group $\Aut(R)=W\rtimes\Theta$,
where $W$ is the Weyl group of $R$ and $\Theta$ is the subgroup of $\Aut(R)$ preserving $\Delta$.
\begin{definition}\label{def:regular} An automorphism $\sigma\in \Aut(R)$ is
{$\boldmath\mathbb{Z}$}-{\bf regular} if the group generated by $\sigma$ acts freely on $R$.
\end{definition}
This is nearly equivalent to Springer's notion of regularity (over $\mathbb{C}$) \cite{springer:regular}.
In this section we will reconcile our definition with that of Springer.
Let $X=\mathbb{Z} R$ be the root lattice of $R$
and let $\check X=\Hom(X,\mathbb{Z})$ be the co-weight lattice.
We say that a vector $\check v\in k\otimes \check X$ is {\boldmath $k$}-{\bf regular} if $\langle \alpha,\check v\rangle\neq 0$ for every $\alpha\in R$. We say also that an automorphism
$\sigma\in\Aut(R)$ is {\boldmath $k$}-{\bf regular} if $\sigma$ has a $k$-regular eigenvector in $k\otimes\check X$.
Taking $k=\mathbb{C}$ we recover Springer's definition of regularity \cite{springer:regular}.
At first glance it appears that $\sigma$ could be $k$-regular for some fields $k$ but not others. This is why we have defined regularity over $\mathbb{Z}$, as in Def. \ref{def:regular}. Of course the definition of $\mathbb{Z}$-regularity seems quite different from that of $k$-regularity. An argument due to Kostant for the Coxeter element (cf. \cite[Cor. 8.2]{kostant:betti}) shows that a $k$-regular automorphism is $\mathbb{Z}$-regular (see
\cite[Prop. 4.10]{springer:regular}).
The converse is almost true but requires an additional condition.
We will prove:
\begin{prop}\label{prop:regular} An automorphism $\sigma\in \Aut(R)$
is $\mathbb{Z}$-regular if and only if for every algebraically closed field $k$ in which the order $m$ of $\sigma$ is nonzero there is a $k$-regular eigenvector for $\sigma$ in $k\otimes\check X$ whose eigenvalue has order $m$.
\end{prop}
Suppose $\sigma=w\vartheta$ where $w\in W$ and $\vartheta\in\Theta$ is a based
automorphism of order $e$.
If $\sigma$ has order $m$ and has a $k$-regular eigenvalue $\lambda$ of order $d$,
then $m=\lcm(d,e)$. Indeed, it is clear that $m$ is divisible by
$n:=\lcm(d,e)$. Conversely, we have $\lambda^n=1$ so
$\sigma^n$ fixes a regular vector, but $\sigma^n\in W$, so in fact $\sigma^n=1$ and $m\mid n$.
Hence the notions of
$\mathbb{Z}$-regularity and $k$-regularity coincide precisely when $e\mid d$.
In particular they coincide if $\vartheta=1$, that is, if $\sigma\in W$.
However, if $\vartheta$ has order $e>1$ and we take $\sigma=\vartheta$, then $\sigma$ fixes the $k$-regular vector $\check\rho$, so $\sigma$ is $k$-regular (if $e\neq 0$ in $k$). On the other hand,
$\sigma$ fixes the highest root, so $\sigma$ is not $\mathbb{Z}$-regular. And if $\zeta\in k^\times$ has order $e$, there are no $k$-regular vectors in the $\zeta$-eigenspace of $\sigma$.
The proof of Prop. \ref{prop:regular} will be given after some preliminary lemmas.
\begin{lemma}\label{based} An automorphism $\sigma\in\Aut(R)$ is based if and only if no root of $R$ vanishes on $\check X^\sigma$.
\end{lemma}
\proof Assume that $\sigma\in \Aut(R)$ preserves a base $\Delta'\subset R$. Then $\sigma$ preserves the set $R^+$ of roots in $R$ which are non-negative integral linear combinations of roots in $\Delta'$. The vector $\sum_{\beta\in R^+}\check\beta$
belongs to $\check X^\sigma$ and no root vanishes on it.
Conversely, let $\check v\in \check X^\sigma$ be a vector on which no root in $R$ vanishes.
Then $\check v$ defines a chamber $\mathcal{C}$ in the real vector space $\mathbb{R}\otimes X$, namely,
$$\mathcal{C}=\{\lambda\in \mathbb{R}\otimes X:\ \langle\lambda,\check v\rangle>0\}.$$
As $\sigma$ fixes $\check v$, the chamber $\mathcal{C}$ is preserved by $\sigma$,
so $\sigma$ permutes the walls of $\mathcal{C}$.
The set of roots $\alpha$ for which $\ker\check \alpha$ is a wall of $\mathcal{C}$ is therefore a base of $R$ preserved by $\sigma$.
\qed
Next, we say that $\sigma\in \Aut(R)$ is {\bf primitive} if $\sigma$ preserves no proper root subsystem of $R$.
\begin{lemma}\label{primitive}
If $\sigma\in \Aut(R)$ is primitive, then its characteristic polynomial
on $V$ is irreducible over $\mathbb{Q}$. That is, we have
$\det(tI_V-\sigma\vert_V)=\Phi_m(t)$,
where $m$ is the order of $\sigma$ and $\Phi_m(t)\in\mathbb{Z}[t]$ is the cyclotomic polynomial whose roots are the primitive $m^{th}$ roots of unity.
\end{lemma}
\proof
In this proof we change notation slightly and let $V=\mathbb{Q}\otimes X$ denote the {\it rational} span of $X$ and let $\overline\mathbb{Q}$ be an algebraic closure of $\mathbb{Q}$.
For $\alpha\in R$, let $V_\alpha\subset V$ be the rational span of the $\sigma$-orbit of $\alpha$. Since $V_\alpha$ is spanned by roots, it follows from \cite[VI.1]{bour456} that
$R\cap V_\alpha$ is a root subsystem of $R$. As it is preserved by the primitive automorphism
$\sigma$, we must have $R\subset V_\alpha$, so $V_\alpha=V$. Hence the map $\mathbb{Q}[t]\to V$ given by sending $f(t)\mapsto f(\sigma)\alpha$ is surjective, and its kernel is the ideal in $\mathbb{Q}[t]$ generated by the minimal polynomial $M(t)$ of $\sigma$ on $V$. Hence $\deg M(t)=\dim V$ so we have $M(t)=\det(tI_V-\sigma\vert_V)$.
We must show that $M(t)$ is irreducible over $\mathbb{Q}$. If not, then $M(t)$ is divisible by $\Phi_d(t)$ for some proper divisor $d\mid m$. This means $\sigma$ has an eigenvalue of order $d$ on $\overline\mathbb{Q}\otimes V$, implying that $\sigma^d$ has nonzero fixed-point space $\check X^{\sigma^d}$. The set of roots vanishing on $\check X^{\sigma^d}$ is a root subsystem not equal to the whole of $R$, and therefore is empty, again using the primitivity of $\sigma$.
By Lemma \ref{based}, $\sigma^d$ is a nontrivial automorphism preserving a base $\Delta'$ of $R$. As in the proof of that lemma, the sum of the positive roots for $\Delta'$ is a nonzero $\overline\mathbb{Q}$-regular vector in $V$ fixed by $\sigma^d$.
Hence the nontrivial subgroup $\langle\sigma^d\rangle$ has trivial intersection with $W$.
If $\sigma\in W$ this is a contradiction and the lemma is proved in this case.
Assume that $\sigma\notin W$.
Since $R$ is irreducible and we have shown that the projection
$\Aut(R)\to \Theta$ is injective on $\langle\sigma^d\rangle$,
it follows that $\sigma^d$ has order $e\in\{2,3\}$.
We must also have $(e,d)=1$ and $m=ed$.
As $e$ is determined by the projection of $\sigma$ to $\Theta$, it follows that $d$ is the {\it unique} proper divisor of $m$ such that $\Phi_d(t)$ divides $M(t)$.
Since the roots of $M(t)$ are $m^{th}$ roots of unity (because $\sigma^m=1$)
and are distinct (since $\sigma$ is diagonalizable on $\overline\mathbb{Q}\otimes V$)
and $M(t)\neq \Phi_d(t)$ by assumption, it follows that $M(t)=\Phi_m(t)\cdot \Phi_d(t)$.
If $e=2$ then $-\sigma\in W$ is also primitive, with reducible minimal polynomial $M(-t)=
\Phi_m(-t)\cdot \Phi_d(-t)$, contradicting the case of the lemma previously proved.
If $e=3$, then $R$ has type $D_4$, so $m=3d$ and
$$4=\deg M=\phi(3d)+\phi(d)=\phi(d)[\phi(3)+1]=3\phi(d),$$
which is also impossible. The lemma is now proved in all cases.
\qed
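To illustrate the lemma in type $A$: a Coxeter element $c$ of $W(A_\ell)$ is an $(\ell+1)$-cycle, with characteristic polynomial
$$\det(tI_V-c\vert_V)=\frac{t^{\ell+1}-1}{t-1}=\prod_{d\mid \ell+1,\ d>1}\Phi_d(t),$$
which is irreducible precisely when $\ell+1$ is prime. Accordingly, when $\ell+1=ab$ is composite, $c$ is not primitive: the roots $e_i-e_j$ with $i\equiv j\bmod a$ form a proper root subsystem preserved by $c$.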
Now let $\sigma\in \Aut(R)$ be a $\mathbb{Z}$-regular automorphism of order $m$.
Recall from Def. \ref{def:regular} that this means the group $\langle\sigma\rangle$ generated by $\sigma$ acts freely on $R$.
For each $\alpha\in R$, let $V_\alpha\subset \mathbb{Q}\otimes X$ denote the
$\mathbb{Q}$-span of the $\langle \sigma\rangle$-orbit of $\alpha$
and let $M_\alpha(t)$ be the minimal polynomial of $\sigma$ on $V_\alpha$.
\begin{lemma}\label{free}
If $\sigma$ is $\mathbb{Z}$-regular of order $m$ then $\Phi_m(t)$ divides $M_\alpha(t)$ in $\mathbb{Z}[t]$, for all $\alpha\in R$.
\end{lemma}
\proof Let $\zeta\in \overline\mathbb{Q}^\times$ be a root of unity of order $m$ and let $\alpha\in R$.
It suffices to show that $\zeta$ is an eigenvalue of $\sigma$ in $\overline\mathbb{Q}\otimes V_\alpha$.
Let $R'$ be a minimal (nonempty) $\sigma$-stable root subsystem of $R\cap V_\alpha$,
and let
$$R'=R'_0\cup R'_1\cup\cdots\cup R'_{k-1}$$
be the decomposition of $R'$ into irreducible components.
These are permuted transitively by $\sigma$; we index them so that $\sigma^iR'_0=R'_i$ for $i\in \mathbb{Z}/k$. The stabilizer of
$R'_0$ in $\langle \sigma\rangle$ is generated by $\sigma^k$.
Correspondingly, the rational span $V'$ of $R'$ is a direct sum
$$V'=V'_0\oplus V'_1\oplus\cdots\oplus V'_{k-1}\subset V_\alpha$$
where $V'_i$ is the rational span of $R'_i$.
Suppose that $\eta:=\zeta^k$ is an eigenvalue of $\tau:=\sigma^k$ in $\overline\mathbb{Q}\otimes V'_0$,
afforded by the vector $v\in \overline\mathbb{Q}\otimes V'_0$.
Let $S$ and $T$ denote the group algebras over $\overline\mathbb{Q}$
of $\langle\sigma\rangle$ and $\langle \tau\rangle$, respectively,
and let $\overline\mathbb{Q}_\eta$ be the $T$-module with underlying vector space $\overline\mathbb{Q}$
on which $\tau$ acts as multiplication by $\eta$.
There is a unique map of $S$-modules
$$f:S\otimes_T \overline\mathbb{Q}_\eta\longrightarrow V'$$
such that $f(1\otimes 1)=v\in V'_0$.
As $f(\sigma^i\otimes 1)=\sigma^iv\in \overline\mathbb{Q}\otimes V'_i$,
and the spaces
$V'_0, V'_1, \dots, V'_{k-1}$ are linearly independent,
it follows that $f$ is injective. Frobenius reciprocity implies that $\zeta$ appears as an eigenvalue of $\sigma$ in $\overline\mathbb{Q}\otimes V'$, hence also in
$\overline\mathbb{Q}\otimes V_\alpha$.
It therefore suffices to prove that $\eta$ appears as an eigenvalue of $\tau$ on $\overline\mathbb{Q}\otimes V'_0$.
Since $\sigma$ acts freely on $R$, it follows that $\tau$ acts freely on $R'_0$ and has order $n:=m/k$ on $R'_0$. We claim that $\tau$ is primitive on $R'_0$.
For if $R''\subset R'_0$ is a root subsystem preserved by $\tau$ then
$R''\cup \sigma R''\cup\cdots\cup \sigma^{k-1} R''$ is a root subsystem preserved by $\sigma$ which must equal $R'$ (by minimality), so that $R''=R'_0$. Hence $\tau$ is indeed primitive on $R'_0$.
By Lemma \ref{primitive} the characteristic polynomial of $\tau$ on $V_0'$ is the cyclotomic polynomial $\Phi_n(t)$,
which has the root $\zeta^{m/n}=\zeta^k=\eta$. Therefore $\eta$ appears as an eigenvalue of $\tau$ on $\overline\mathbb{Q}\otimes V'_0$, as desired.
\qed
We are now ready to prove Prop. \ref{prop:regular}.
Let $k$ be an algebraically closed field and set $V_k:=k\otimes X$, $\check V_k:=k\otimes \check X$. Recall that a $k$-regular vector $\check v\in \check V_k$ is one for which
$\langle\alpha,\check v\rangle\neq 0$ for all $\alpha\in R$.
For completeness we recall the proof of the easy direction of Prop. \ref{prop:regular} (cf. \cite[4.10]{springer:regular}).
Assume that $\sigma\in\Aut(R)$ is $k$-regular, and let $\check v\in \check V_k$ be a $k$-regular eigenvector of $\sigma$ with eigenvalue $\zeta\in k^\times$ of order $m$ equal to the order of $\sigma$.
Suppose $\sigma^d\alpha=\alpha$ for some $\alpha\in R$. Then
$$
0\neq \langle \alpha,\check v\rangle=\langle \sigma^d\alpha,\check v\rangle=\langle \alpha,\sigma^{-d}\check v\rangle
=\zeta^{-d}\langle\alpha,\check v\rangle.
$$
It follows that $\zeta^d=1$. Since $\sigma$ and $\zeta$ have the same order, it follows that
$\sigma^d=1$. Hence $\langle \sigma\rangle$ acts freely on $R$, so $\sigma$ is $\mathbb{Z}$-regular.
Assume now that $\sigma$ is $\mathbb{Z}$-regular, so that $\langle\sigma\rangle$ acts freely on $R$.
Let $\bar\Phi_m(t)$ denote the image, under the map $\mathbb{Z}[t]\to k[t]$ induced by the canonical map $\mathbb{Z}\to k$, of the cyclotomic polynomial $\Phi_m(t)$. Since $m$ is nonzero in $k$, it follows that
all roots of $\bar \Phi_m(t)$ in $k$ have order $m$.
Let $\zeta\in k^\times$ be one of them.
Let $\alpha\in R$ and let $X_\alpha$ be the subgroup of $X$ generated by the
$\langle\sigma\rangle$-orbit of $\alpha$.
Then $X_\alpha$ is a lattice in $V_\alpha=\mathbb{Q}\otimes X_\alpha$ and
$\Phi_m(t)$ divides the characteristic polynomial $\det(tI-\sigma\vert_{X_\alpha})$
in $\mathbb{Z}[t]$, by Lemma \ref{free}.
Hence $\bar\Phi_m(t)$ divides $\det(tI-\sigma\vert_{k\otimes X_\alpha})$ in $k[t]$.
In particular $\zeta^{-1}$ is an eigenvalue of $\sigma$ on $k\otimes X_\alpha$.
The operator $ P_\zeta\in \End(V_k)$ given by
$$P_\zeta
=1+\zeta \sigma+\zeta^{2} \sigma^2+\cdots+\zeta^{m-1} \sigma^{m-1}
$$
preserves $k\otimes X_\alpha$, and $P_\zeta(k\otimes X_\alpha)$
is the $\zeta^{-1}$-eigenspace of $\sigma$ in $k\otimes X_\alpha$: reindexing the sum gives $\sigma P_\zeta=\zeta^{-1}P_\zeta$, and $P_\zeta$ acts on that eigenspace as multiplication by $m$, which is invertible in $k$.
As $X_\alpha$ is spanned by the roots $\sigma^i\alpha$ and
$P_\zeta(\sigma^i\alpha)=\zeta^{-i}P_\zeta(\alpha)$, it follows that $P_\zeta(\alpha)\neq 0$, since the nonzero space $P_\zeta(k\otimes X_\alpha)$ is spanned by scalar multiples of $P_\zeta(\alpha)$.
As $\alpha\in R$ was arbitrary, we have that $P_\zeta(\alpha)\neq 0$ for all $\alpha\in R$. Since $k$ is infinite, there exists $\check v\in \check V_k$ such that $\langle P_\zeta(\alpha),\check v\rangle\neq 0$ for all $\alpha\in R$.
The dual projection
$$\check P_\zeta
=1+\zeta^{-1}\sigma+\zeta^{-2}\sigma^2+\cdots+\zeta^{1-m} \sigma^{m-1}\in \End(\check V_k)$$
satisfies
$$\langle \alpha, \check P_\zeta (\check v)\rangle=\langle P_\zeta(\alpha), \check v\rangle\neq 0, $$
for all $\alpha\in R$. Therefore $\check P_\zeta (\check v)$ is a $k$-regular eigenvector of $\sigma$ in $\check V_k$ whose eigenvalue $\zeta$ has order $m$.
This completes the proof of Prop. \ref{prop:regular}.
\qed
\section{Positive rank gradings}\label{sec:posrank}
Let $\mathfrak{g}$ be the Lie algebra of a connected simple algebraic group $G$ of adjoint type over an algebraically closed field $k$ whose characteristic is not a torsion prime for $G$.
Then $G=\Aut(\mathfrak{g})^\circ$ is the identity component of $\Aut(\mathfrak{g})$.
We fix a Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ with corresponding maximal torus $T=C_G(\mathfrak{t})$ and let $R$ be the set of roots of $\mathfrak{t}$ in $\mathfrak{g}$. Let $N=N_G(T)$ be the normalizer of $T$, so that $W=N/T$ is the Weyl group of $R$.
From now on we only consider gradings $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$ whose
period $m$ is nonzero in $k$. By choosing a root of unity $\zeta\in k^\times$ of order $m$,
we get an automorphism $\theta\in\Aut(\mathfrak{g})$ of order $m$
such that $\theta$ acts on $\mathfrak{g}_i$ by the scalar $\zeta^i$.
In this section we show how all such gradings of positive rank may be effectively found by computing lifts to $\Aut(\mathfrak{g})$ of
automorphisms $\sigma\in\Aut(R)$.
\subsection{A canonical Cartan subalgebra}\label{csa}
Given any Cartan subalgebra $\mathfrak{s}$ of $\mathfrak{g}$ with centralizer $S=C_G(\mathfrak{s})$,
let
$$\Aut(\mathfrak{g},\mathfrak{s})=\{\theta\in\Aut(\mathfrak{g}):\ \theta(\mathfrak{s})=\mathfrak{s}\}.$$
We have an isomorphism (obtained by conjugating $\mathfrak{s}$ to our fixed Cartan subalgebra $\mathfrak{t}$)
$$\Aut(\mathfrak{g},\mathfrak{s})/S\simeq \Aut(R)$$
which is unique up to conjugacy in $\Aut(R)$. Thus any element of $\Aut(\mathfrak{g},\mathfrak{s})$ gives a well-defined conjugacy class in $\Aut(R)$.
However, an automorphism $\theta\in\Aut(\mathfrak{g})$ may normalize various Cartan subalgebras $\mathfrak{s}$, giving rise to various classes in $\Aut(R)$.
We will define a canonical $\theta$-stable Cartan subalgebra, which will allow us to associate to $\theta$ a well-defined conjugacy class in $\Aut(R)$.
For each $\theta\in\Aut(\mathfrak{g})$ whose order is nonzero in $k$ we define a canonical $\theta$-stable Cartan subalgebra $\mathfrak{s}$ of $\mathfrak{g}$ as follows.
Let $\mathfrak{c}\subset\mathfrak{g}_1$ be a Cartan subspace.
The centralizer $\mathfrak{m}=\mathfrak{z}_\mathfrak{g}(\mathfrak{c})$ is a $\theta$-stable Levi subalgebra of $\mathfrak{g}$
and we have $\mathfrak{m}=\oplus\mathfrak{m}_i$ where $\mathfrak{m}_i=\mathfrak{m}\cap\mathfrak{g}_i$.
Choose a Cartan subalgebra $\mathfrak{s}_0$ of $\mathfrak{m}_0$.
Then $\mathfrak{s}_0$ contains regular elements of $\mathfrak{m}$ \cite[Lemma 1.3]{levy:thetap}, so the centralizer
$$\mathfrak{s}:=\mathfrak{z}_\mathfrak{m}(\mathfrak{s}_0)$$
is a $\theta$-stable Cartan subalgebra of $\mathfrak{m}$, and $\mathfrak{s}$ is also a Cartan subalgebra of $\mathfrak{g}$.
We have $\mathfrak{s}\cap \mathfrak{g}_0=\mathfrak{s}_0$ (so our notation is consistent) and $\mathfrak{s}\cap \mathfrak{g}_1=\mathfrak{c}$.
Since $G_0$ is transitive on Cartan subspaces in $\mathfrak{g}_1$ \cite[Thm. 2.5]{levy:thetap} and
$C_{G_0}(\mathfrak{c})^\circ$ is transitive on Cartan subalgebras of its Lie algebra $\mathfrak{m}_0$,
the Cartan subalgebra $\mathfrak{s}$ is unique up to $G_0$-conjugacy.
\subsection{A relation between $\Aut(\mathfrak{g})$ and $\Aut(R)$}\label{vdash}
For $\theta\in\Aut(\mathfrak{g})$ and $\sigma\in \Aut(R)$ we write
$$\theta\vdash \sigma$$
if the following two conditions are fulfilled:
\begin{itemize}
\item $\theta$ and $\sigma$ have the same order;
\item $\theta$ is $G$-conjugate to an automorphism $\theta'\in\Aut(\mathfrak{g},\mathfrak{t})$ such that $\theta'\vert_\mathfrak{t}=\sigma$.
\end{itemize}
Assume that $\theta\vdash \sigma$ and that the common order $m$ of $\theta$ and $\sigma$ is nonzero in $k$. Choose a root of unity $\zeta\in k^\times$ of order $m$, giving a grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$.
Recall that $\rank(\theta)$ is the dimension of a Cartan subspace $\mathfrak{c}\subset\mathfrak{g}_1$ for $\theta$.
Likewise, for $\sigma\in \Aut(R)$, let $\rank(\sigma)$ be the multiplicity of $\zeta$ as a root of the characteristic polynomial of $\sigma$ on $V$. Since $\mathfrak{t}$ consists of semisimple elements, it follows that
$\rank(\theta)\geq \rank(\sigma)$.
\begin{prop}\label{posrank} Let $\theta\in\Aut(\mathfrak{g})$ be an automorphism of positive rank whose order $m$ is nonzero in $k$. Then
$$\rank(\theta)=\max\{\rank(\sigma):\ \theta\vdash \sigma\}.$$
\end{prop}
\proof
It suffices to show that there exists $\sigma\in \Aut(\mathfrak{g},\mathfrak{t})$ such that $\theta\vdash \sigma$ and $\rank(\theta)=\rank(\sigma)$.
Replacing $\theta$ by a $G$-conjugate, we may assume that $\mathfrak{t}$ is the canonical Cartan subalgebra for $\theta$ (section \ref{csa}) so that $\theta\in\Aut(\mathfrak{g},\mathfrak{t})$,
and $\mathfrak{c}=\mathfrak{t}_1$ is a Cartan subspace contained in $\mathfrak{t}$.
Then $\mathfrak{c}$ is the $\zeta$-eigenspace of $\sigma:=\theta\vert_\mathfrak{t}\in\Aut(R)$.
Since $\theta$ has order $m$, it follows that the order of $\sigma$ divides $m$.
But $\sigma$ has an eigenvalue of order $m$, so the order of $\sigma$ is exactly $m$. We therefore have $\theta\vdash \sigma$ and
$\rank(\theta)=\dim\mathfrak{c}=\rank(\sigma)$.
\qed
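For example, if $\sigma$ is a Coxeter element of $W$, of order $m=h$, then the eigenvalues of $\sigma$ on $V$ are $\zeta^{m_1},\dots,\zeta^{m_\ell}$, where $m_1\leq\cdots\leq m_\ell$ are the exponents of $W$. Since the exponent $1$ occurs exactly once, $\zeta$ has multiplicity one as an eigenvalue, so $\rank(\sigma)=1$.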
Given $\sigma\in \Aut(R)$ let $\Kac(\sigma)$ denote the set of normalized Kac diagrams of automorphisms $\theta\in \Aut(\mathfrak{g},\mathfrak{t})$ for which $\theta\vdash \sigma$. Since there are only finitely many Kac diagrams of a given order, each set $\Kac(\sigma)$ is finite.
From Prop. \ref{posrank} it follows that
the Kac coordinates of all positive rank automorphisms of $\mathfrak{g}$
are contained in the union
\begin{equation}\label{positiverank}
\bigcup_{\sigma\in\Aut(R)/\sim}\Kac(\sigma),
\end{equation}
taken over representatives of the $W$-conjugacy classes in $\Aut(R)$.
Moreover $\rank(\theta)$ is the maximal $\rank(\sigma)$ for which the Kac coordinates of $\theta$ appear in $\Kac(\sigma)$.
\subsection{Inner automorphisms}
If $\theta\in G=\Aut(\mathfrak{g})^\circ$ is inner then its Kac diagram
will belong to $\Kac(w)$ for some $w\in W$.
In this section we refine the union \eqref{positiverank} to reduce the number of classes of $w$ to consider, and we show how to compute $\Kac(w)$ directly from $w$, for these classes.
A subset $J\subset\{1,\dots,\ell\}$ is {\bf irreducible} if the root system $R_J$ spanned by $\{\alpha_j:\ j\in J\}$ is irreducible. Two subsets $J,J'$ are {\bf orthogonal} if $R_J$ and $R_{J'}$ are orthogonal.
An element $w\in W$ is {\bf $m$-admissible} if $w$ has order $m$ and $w$ can be expressed as a product
\begin{equation}\label{admissible}
w=w_1 w_2\cdots w_d,
\end{equation} where
each $w_i$ is contained in $W_{J_i}$ for irreducible mutually orthogonal subsets $J_1,\dots, J_d$ of $\{1,2,\dots,\ell\}$ and on the reflection representation of $W_{J_i}$ each $w_i$ has
an eigenvalue of order $m$ but no eigenvalue equal to $1$ (so $w_i$ is elliptic in $W_{J_i}$). We call \eqref{admissible} an {\bf admissible factorization} of $w$. Note that each $w_i$ also has order
$m$, that $\rank(w)=\sum_i\rank(w_i)$,
and $\rank(w_i)>0$ for $1\leq i\leq d$.
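As a small example, take $R$ of type $A_3$ and $m=2$: the element $w=s_1s_3$ has order $2$, with admissible factorization $w_1=s_1$, $w_2=s_3$ and $J_1=\{1\}$, $J_2=\{3\}$. Each $w_i$ is elliptic in $W_{J_i}\simeq W(A_1)$ with eigenvalue $-1$, and $\rank(w)=2$ since $-1$ occurs with multiplicity two on the reflection representation.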
Let $G_i$ be the Levi subgroup of $G$ containing $T$ and the roots from $J_i$,
and let $G_i'$ be the derived group of $G_i$.
Each $w_i\in W_{J_i}$ has a lift $\dot w_i\in G_i'\cap N$ and all such lifts are conjugate by $T\cap G_i$, hence the normalized Kac-coordinates of $\Ad(\dot w_i)$ in $\Ad(G_i')$ are well-defined.
Given an $m$-admissible element $w=w_1\cdots w_d$ as in \eqref{admissible},
let $\Kac(w)_{\un}$ be the set of
un-normalized Kac coordinates $(s_0,s_1,\dots,s_\ell)$ such that
\begin{itemize}
\item For $j\in J_i$ the coordinate $s_j$ is the corresponding normalized Kac coordinate of $w_i$ in $G_i'$.
\item For $i\in\{0,1,\dots,\ell\}-J$, the coordinate $s_i$ ranges over a set of representatives for $\mathbb{Z}/m$.
\item $\sum_{i=0}^\ell a_i s_i=m$.
\end{itemize}
If $w$ is any automorphism of $T$ we set $(1-w)T:=\{t\cdot w(t)^{-1}:\ t\in T\}$.
\begin{lemma}\label{lem:Kac(w)} If $w$ is $m$-admissible, then $\Kac(w)$ is the set of Kac diagrams obtained by applying the normalization algorithm of section \ref{kac-coord} to the elements of
$\Kac(w)_{\un}$.
\end{lemma}
\proof Each Kac diagram in $\Kac(w)_{\un}$ is that of a lift of $w$ in $N$ of order $m$.
Hence the normalization of this diagram lies in $\Kac(w)$. Conversely, suppose
$(s_i)$ are normalized Kac coordinates lying in $\Kac(w)$.
By definition, there is an inner automorphism $\theta\vdash w$ (notation of section \ref{vdash}) of order $m$ with these normalized Kac-coordinates, and we may assume that $\theta=\Ad(n)$ for some $n\in N$,
a lift of $w$. Then
$$n=\dot w_1\dot w_2\cdots \dot w_d\cdot t$$
where each $\dot w_i$ is a lift of $w_i$ and $t\in T$.
Let $Z$ be the maximal torus in the center of $G_1'\cdot G_2'\cdots G_d'\cdot T$.
Then $T=Z\cdot (1-w)T$, so we may conjugate $n$ by $T$ to arrange that $t\in Z$.
Next, we conjugate each $\dot w_i$ in $G_i'$ to an element $t_i\in T\cap G_i'$,
thus conjugating $n$ to
$$n'=t_1\cdot t_2\cdots t_d\cdot t\in T.$$
Since $n'$ has order $m$ there exists $\check\lambda\in\check X$ such that
$n'=\check\lambda(\zeta)$. As in section \ref{kac-coord}, the point $x=x_0+\frac{1}{m}\check\lambda\in\mathcal{A}_\mathbb{Q}$ has order $m$ and the simple affine roots $\psi_i$ take values $\psi_i(x)=s_i'/m$, where $s_i'$ are the Kac coordinates of $n'$ and $\sum_{i=0}^\ell a_i s'_i=m$.
If $j\in J_i$ then $s'_j$ is a Kac coordinate of the $G'_i$-conjugate
$\dot w_i$ of $t_i$, and if
$i\in\{0,1,\dots,\ell\}-J$ we have $\alpha_i(n')=\zeta^{s'_i}$, so the class of $s_i'$ in $\mathbb{Z}/m$ is determined.
Hence the Kac coordinates $(s'_i)$ lie in $\Kac(w)_{\un}$ and their normalization is $(s_i)$.
\qed
\begin{prop}\label{Kac(w)}
Let $\theta\in\Aut(\mathfrak{g})^\circ$ be an inner automorphism of order $m$ nonzero in $k$ with
$\rank(\theta)>0$. Then there exists an $m$-admissible element $w\in W$ such that $\theta\vdash w$, and the rank of $\theta$ is given by
$$\rank(\theta)=\max\{\rank(w):\ \theta\vdash w\},$$
where the maximum is taken over all $W$-conjugacy classes of $m$-admissible elements $w\in W$ such that $\theta\vdash w$.
\end{prop}
\proof We may assume that $\mathfrak{t}$ is the canonical Cartan subalgebra for $\theta$, so that
$\theta=\Ad(n)$ for some $n\in N$. The element $w=nT\in N/T=W$ has order $m$ and $\theta\vdash w$. Recall that the canonical Cartan subalgebra has the property that $\mathfrak{t}_1$ is a
Cartan subspace for $\theta$. Hence $\rank(\theta)=\rank(w)>0$.
Assume first that $\mathfrak{t}_0=0$, that is, $w$ is elliptic.
Then $w$ is $m$-admissible and its admissible factorization
\eqref{admissible} is $w=w_1$, with $d=1$,
so the proposition is proved in this case.
Assume now that $\mathfrak{t}_0\neq 0$.
Let $R_0$ be the set of roots in $R$ vanishing on $\mathfrak{t}_0$. Since $R_0$ is the root system of a Levi subgroup of $G$,
there is a basis $\Delta=\{\alpha_1,\alpha_2,\dots,\alpha_\ell\}$ of $R$ such that
$\Delta_0:=\Delta\cap R_0$ is a basis of $R_0$.
We have $\Delta_0=\{\alpha_j:\ j\in J\}$ for some subset $J\subset\{1,2,\cdots,\ell\}$.
Decomposing $R_0$ into irreducible root systems $R_0^{i}$, we have
corresponding decompositions
\begin{displaymath}
\begin{split}
R_0&=R_0^1\cup R_0^2\cup\cdots\cup R_0^n,\\
\Delta_0&=\Delta_0^1\cup \Delta_0^2\cup\cdots\cup \Delta_0^n,\\
J&= J_1\cup J_2\cup\cdots\cup J_n,\\
W_J&=W_{J_1}\times W_{J_2}\times\cdots\times W_{J_n},\\
w&=w_1\cdot w_2\cdots w_n.
\end{split}
\end{displaymath}
By construction, $w$ is elliptic in $W_J$ and has an eigenvalue of order $m$ on the reflection representation of $W_J$.
Therefore, each $w_i$ is elliptic in $W_{J_i}$ and has eigenvalues of order dividing $m$.
And since $\rank(w)>0$ there is some number $d\geq 1$ of $w_i$'s having an eigenvalue of order exactly $m$.
Let the factors be numbered so that $w_i$ has an eigenvalue of order $m$ for $i\leq d$,
and $w_i$ has no eigenvalue of order $m$ for $i>d$. The element
$$w'=w_1w_2\cdots w_d$$
is $m$-admissible.
As before, let $G_i$ be the Levi subgroup of $G$ containing $T$ and the root subgroups from $J_i$,
and let $G_i'$ be the derived subgroup of $G_i$. The derived group of $C_G(\mathfrak{t}_0)$ is a
commuting product
$G_1'\cdot G_2'\cdots G_n'$.
Each $w_i$ has a lift
$\dot w_i\in N\cap G_i'$; such a lift is unique up to conjugacy by $T\cap G_i'$
and we have
$$\theta=\dot w_1\dot w_2\cdots \dot w_n \cdot t$$
for some $t\in T$. For $i>d$ we conjugate $\dot w_i$ in $G'_i$ to an element $t_i\in T$,
obtaining a conjugate $\theta'$ of $\theta$ having the form
$$\theta'=\dot w_1\dot w_2\cdots \dot w_d \cdot t'.$$
Therefore $\theta\vdash w'$ and $w'$ is $m$-admissible of the same rank as $\theta$.
The proposition is proved.
\qed
\section{Principal and stable gradings}\label{principalstable}
Retain the set-up of section \ref{sec:posrank}.
Let $B$ be a Borel subgroup of $G=\Aut(\mathfrak{g})^\circ$ containing our fixed maximal torus $T$. The algebraic group $G$ has root datum
$(X,R,\check X, \check R)$, where $X=X^\ast(T)$ (resp. $\check X=X_\ast(T)$) are the lattices of weights (resp. co-weights) of $T$, and $R$ (resp. $\check R$) are the sets of roots (resp. co-roots) of $T$ in $G$.
The base $\Delta$ of $R$ is the set of simple roots of $T$ in $B$.
As before, we choose a pinning
$(X,R,\check X, \check R,\{E_i\})$, where $E_i\in\mathfrak{g}$ is a root vector for the simple root $\alpha_i\in\Delta$. This choice
gives an isomorphism from $\Aut(R,\Delta)$ to the group
$\Theta=\{\vartheta\in\Aut(\mathfrak{g},\mathfrak{t}):\ \vartheta\{E_i\}=\{E_i\}\ \}$ of pinned automorphisms, and we have a splitting
$$\Aut(\mathfrak{g})=G\rtimes\Theta.$$
\subsection{Principal gradings}\label{principalgradings}
For each positive integer $m$ and pinned automorphism $\vartheta\in\Aut(R,\Delta)$, we have a principal grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$ given (as in section \ref{principalmu}) by the point $x_m:=\frac{1}{m}\check\rho+x_0$.
(Recall that $\check \rho$ is the sum of the fundamental co-weights dual to the simple roots $\alpha_i\in\Delta$.)
The normalized Kac diagram of $x_m$ may be obtained via the algorithm described in section \ref{principalmu}. (These Kac diagrams may also be found in \cite{degraaf:niltheta}.)
Note that $\mathfrak{g}_1$ contains the regular nilpotent element
$E:=E_1+E_2+\cdots+E_\ell$ associated to our pinning.
If $m$ is nonzero in $k$ and we choose a root of unity $\zeta\in k^\times$ of order $m$, then $\mathfrak{g}_i$ is the $\zeta^i$-eigenspace for the automorphism
$$\theta_m:=\check\rho(\zeta)\vartheta.$$
Conversely, if
$\theta=\check\lambda(\zeta)\vartheta$ is an automorphism of order $m$ whose $\zeta$-eigenspace $\mathfrak{g}_1$ contains a regular nilpotent element, then $\theta$ is principal.
If the characteristic $p$ of $k$ is zero or sufficiently large, the element
$\check\rho(\zeta)$ is the image of $\begin{bmatrix}\zeta&0\\0&1\end{bmatrix}$ under the principal embedding $\PGL_2\hookrightarrow G$ associated by the Jacobson-Morozov theorem to $E$. Elsewhere in the literature a principal automorphism is called ``$N$-regular''.
The first aim of this section is to show that lifts to $\Aut(\mathfrak{g})$ of $\mathbb{Z}$-regular elliptic automorphisms
$\sigma\in\Aut(R)$ are principal. (Recall that an automorphism $\sigma\in\Aut(R)$ is called {\bf elliptic} if $X^\sigma=0$.)
More precisely, let $\sigma=w\vartheta\in W\vartheta$ be an elliptic $\mathbb{Z}$-regular automorphism of $R$ (Def. \ref{def:regular}).
Let $n\in N$ be a lift of $w$. Since $\sigma$ is elliptic the fixed-point group $T^\sigma$ is finite, so the coset $nT\vartheta\subset G\vartheta$ consists of a single $T$-orbit under conjugation. It follows that the $G$-conjugacy class $C_\sigma$ of $n\vartheta$ in $G\vartheta$ depends only on $\sigma$. In this section we will prove the following.
\begin{prop}\label{clift} Assume $\sigma\in W\vartheta$ is elliptic and $\mathbb{Z}$-regular and that the order $m$ of $\sigma$ is nonzero in $k$.
Then the conjugacy class $C_\sigma$ contains $\check\rho(\zeta)\vartheta$ for every $\zeta\in k^\times$ of order $m$.
\end{prop}
The second aim of this section is to characterize the principal gradings which arise from
elliptic $\mathbb{Z}$-regular automorphisms of $R$ in terms of stability (see section \ref{sec:stable}).
\subsection{Conjugacy results}
If $\sigma$ is an automorphism of an abelian group $A$, we set
\[
(1-\sigma)A:=\{a\cdot\sigma(a)^{-1}:\ a\in A\}.
\]
Let $N^\vartheta, W^\vartheta$ denote the fixed-point subgroups of $\vartheta$ in $N, W$ respectively,
and let $N_\vartheta=\{n\in N:\ \vartheta(n)\equiv n\mod T\}$.
It is known (see \cite{steinberg:endo}) that $N_\vartheta=N^\vartheta\cdot T$. This group acts on the coset $T\vartheta$ by conjugation.
Meanwhile the fixed-point group $W^\vartheta$ acts on the quotient torus
$$T_\vartheta=T/(1-\vartheta)T$$
whose character and cocharacter groups
$X^\ast(T_\vartheta)=X^\vartheta$ and $ X_\ast(T_\vartheta)=\check X/(1-\vartheta)\check X$ are the invariants and coinvariants of $\vartheta$ in $X$ and $\check X$, respectively.
We now recall some conjugacy results from \cite{borel:corvallis} and \cite{reeder:torsion} which are stated over $\mathbb{C}$ but whose proofs are unchanged if $\mathbb{C}$ is replaced by any algebraically closed field $k$.
First, we have \cite[6.4]{borel:corvallis}:
\begin{lemma}\label{nu} The natural projection $\nu:T\to T_\vartheta$ induces a bijection
$$T\vartheta/N_\vartheta \longrightarrow T_\vartheta/W^\vartheta,$$
sending $t\vartheta\mod N_\vartheta\mapsto \nu(t)\mod W^{\vartheta}$.
\end{lemma}
From \cite[Lemma 3.2]{reeder:torsion} each semisimple element $g\vartheta\in G\vartheta$ is $G$-conjugate to an element of $t\vartheta$ with $t\in T^\vartheta$. Now \cite[6.5]{borel:corvallis} shows that sending $g\vartheta$ to the class of $\nu(t)$ modulo $W^\vartheta$ gives a bijection between the set of semisimple $G$-conjugacy classes in $G\vartheta$ and the orbit space $T_\vartheta/W^\vartheta$.
Now the affine variety $T_\vartheta/W^\vartheta$ has a canonical $\mathbb{Z}$-form,
namely the ring $\mathbb{Z}[X^\vartheta]^{W^\vartheta}$ of $W^{\vartheta}$-invariants in the integral group ring of the character group $X^\vartheta$ of $T_\vartheta$.
Indeed, let $X^{\vartheta}_+$ be the set of dominant weights in $X^\vartheta$ and for each $\lambda\in X^\vartheta_+$,
let $\eta_\lambda$ be the sum in $\mathbb{Z}[X^\vartheta]$ over the $W^\vartheta$-orbit of $\lambda$, and let
$\eta^k_\lambda$ be the same sum in the group ring $k[X^\vartheta]$.
Then $\{\eta_\lambda:\ \lambda\in X_+^{\vartheta}\}$ and $\{\eta_\lambda^k:\ \lambda\in X_+^{\vartheta}\}$
are bases of $\mathbb{Z}[X^\vartheta]^{W^\vartheta}$ and $k[X^\vartheta]^{W^\vartheta}$ respectively,
and $\{1\otimes\eta_\lambda:\ \lambda\in X^\vartheta_+\}$ is a $k$-basis of
$k\otimes_\mathbb{Z}(\mathbb{Z}[X^\vartheta]^{W^\vartheta})$. It follows that the canonical mapping
$\mathbb{Z}[X^\vartheta]^{W^\vartheta}\longrightarrow k[X^\vartheta]^{W^\vartheta}$ induces an isomorphism
\begin{equation}\label{XZform}
k\otimes_\mathbb{Z}(\mathbb{Z}[X^\vartheta]^{W^\vartheta})\overset\sim\longrightarrow k[X^\vartheta]^{W^\vartheta}.
\end{equation}
The torus $T_\vartheta$ is a maximal torus in a connected reductive group $G_\vartheta$ with Weyl group $W^\vartheta$,
so $\mathbb{Z}[X^\vartheta]^{W^\vartheta}$ has another $\mathbb{Z}$-basis, $\{\chi_\lambda:\ \lambda\in X^\vartheta_+\}$, where
$$\chi_\lambda=\sum_{\mu\in X^\vartheta} m_\lambda^\mu\mu,$$
and $m_\lambda^\mu$ is the multiplicity of the weight $\mu$ in the irreducible representation
of highest weight $\lambda$ of the complex group with the same root datum as $G_\vartheta$. Therefore
$k[X^\vartheta]^{W^\vartheta}$ has another $k$-basis, $\{\chi_\lambda^k:\ \lambda\in X^\vartheta_+\}$,
where $\chi_\lambda^k\in k[X^\vartheta]^{W^\vartheta}$ is the image of $1\otimes \chi_\lambda$ under the isomorphism \eqref{XZform}.
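As a toy illustration of the two bases, take $G=\SL_2$ with $\vartheta=1$, so that $X^\vartheta=\mathbb{Z}$ and $W^\vartheta=\{\pm 1\}$ acts by negation. Writing $x$ for the generator of the group ring, we have $\eta_0=1$ and $\eta_n=x^n+x^{-n}$ for $n>0$, while
$$\chi_n=x^n+x^{n-2}+\cdots+x^{-n}=\eta_n+\eta_{n-2}+\eta_{n-4}+\cdots,$$
so the change of basis from $\{\eta_\lambda\}$ to $\{\chi_\lambda\}$ is unitriangular.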
We now regard $G$ as a Chevalley group scheme over $\mathbb{Z}$, writing $G(A)$ for the group of $A$-valued points in a commutative ring $A$.
The group heretofore denoted by $G$ is now $G(k)$. Likewise $T$ and $N$ are now group schemes over $\mathbb{Z}$.
Let $\lambda\in X^\vartheta_+$ and let $V$ be the irreducible representation of $G(\mathbb{C})$ of highest weight $\lambda$. Since $\vartheta\lambda=\lambda$ it follows that
$V$ extends uniquely to a representation of
$G(\mathbb{C})\cdot\langle\vartheta\rangle$ such that $\vartheta$ acts trivially on the highest weight space $V(\lambda)$.
Choose a $G(\mathbb{Z})$-stable lattice $M$ in $V$ such that $M\cap V(\mu)$
spans each weight space $V(\mu)$ in $V$ and $\vartheta M=M$.
For example, we could take $M$ to be the admissible $\mathbb{Z}$-form of $V$ constructed by Kostant in \cite{kostant:zform}.
We get a representation of
$G(k)\cdot\langle\vartheta\rangle$ on $V_k:=k\otimes M$ which may be reducible and which depends on $M$. However, since $M$ contains a basis of $V$, the traces on $V_k$ of elements of $G(k)\cdot\langle\vartheta\rangle$ are independent of the choice of $M$.
Let $A=\mathbb{Z}[\zeta]\subset\mathbb{C}$ be the cyclotomic ring generated by a root of unity
$\zeta\in \mathbb{C}^\times$ of order $m$. Assume that $k$ is algebraically closed and $m$ is nonzero in $k$. Choose $\zeta_k\in k^\times$ a root of unity of order $m$.
We have ring homomorphisms
$$\mathbb{C}\overset{\iota}\hookleftarrow A\overset{\pi}\longrightarrow k,$$
where $\iota$ is the inclusion and $\pi(\zeta)=\zeta_k$. We use the same letters to denote maps on groups of points, e.g.,
$$G(\mathbb{C})\overset{\iota}\hookleftarrow G(A)\overset{\pi}\longrightarrow G(k),$$
and similarly for $T$ and $N$.
\begin{lemma}\label{kconj1} Let $s,t\in T(k)^\vartheta$ be elements of order $m$ such that
$\tr(s\vartheta, V_k)=\tr(t\vartheta,V_k)$ for all irreducible representations $V$ of $G(\mathbb{C})$ whose highest weight belongs to $X_+^\vartheta$. Then $s\vartheta$ and $t\vartheta$ are $G(k)$-conjugate.
\end{lemma}
\proof
Let $V'$ be the representation of $G_\vartheta(\mathbb{C})$ with the same highest weight as $V$, and choose a lattice $M'\subset V'$ analogous to $M$ above. Since $s$ has order $m$ there is a co-weight $\check\omega\in \check X$ such that
$$s=\check\omega(\zeta_k)=\pi\check\omega(\zeta).$$
For each $\mu\in X^\vartheta$ let $M(\mu)=M\cap V(\mu)$ and likewise set
$M'(\mu)=M'\cap V'(\mu)$. We have
\begin{equation*}
\begin{split}
\tr(s\vartheta,V_k)
&=\sum_{\mu\in X^\vartheta}\mu(s)\cdot \tr(\vartheta,k\otimes M(\mu))
=\sum_{\mu\in X^\vartheta}\zeta_k^{\langle\mu,\check\omega\rangle}\cdot
\pi\left( \tr(\vartheta,M(\mu))\right)\\
&=\pi\left(\sum_{\mu\in X^\vartheta}\zeta^{\langle\mu,\check\omega\rangle}\cdot
\tr(\vartheta,M(\mu))\right).
\end{split}
\end{equation*}
By a result of Jantzen (see for example \cite{kumar-lusztig-prasad}) we have
$$
\sum_{\mu\in X^\vartheta}\zeta^{\langle\mu,\check\omega\rangle}\cdot \tr(\vartheta,M(\mu))
=
\sum_{\mu\in X^\vartheta}\zeta^{\langle\mu,\check\omega\rangle}\cdot \dim M'(\mu).
$$
It follows that
\begin{equation*}
\tr(s\vartheta,V_k)=\pi\left(\sum_{\mu\in X^\vartheta}\zeta^{\langle\mu,\check\omega\rangle}\cdot \dim M'(\mu)\right)=\tr(\nu(s),V'_k).
\end{equation*}
Applying this identity to $t\vartheta$ as well, we find that
$$\tr(\nu(s),V'_k)=\tr(\nu(t),V'_k).$$
Therefore $\chi_\lambda^k(\nu(s))=\chi_\lambda^k(\nu(t))$ for every $\lambda\in X_+^\vartheta$.
Since these $\chi_\lambda^k$ are a basis of $k[X^\vartheta]^{W^{\vartheta}}$, it follows from \cite[Cor. 6.6]{steinberg:regular}
that $\nu(s)\equiv\nu(t)\mod W^\vartheta$.
By Lemma \ref{nu} we have that $s\vartheta$ and $t\vartheta$ are $G(k)$-conjugate, as claimed.
\qed
Now suppose $g\in G(\mathbb{Z})$ and $g\vartheta$ is semisimple of order $m$.
Let $s\in T(\mathbb{C})^\vartheta$ and $t\in T(k)^\vartheta$ be such that $\iota(g)\vartheta$ is $G(\mathbb{C})$-conjugate to $s\vartheta$ and $\pi(g)\vartheta$ is $G(k)$-conjugate to $t\vartheta$.
\begin{lemma}\label{k-conj} In the situation just described, we have $s\in T(A)$ and $\pi(s)\vartheta$ is $G(k)$-conjugate to $t\vartheta$.
\end{lemma}
\proof
As above we have $s=\check\omega(\zeta)$ for some co-weight $\check\omega\in\check X$.
It follows that $s\in T(A)$. Moreover, $g\vartheta$ preserves the lattice $M$, so we have
$$\tr(\iota(g)\vartheta,M)=\tr(s\vartheta, V)=\sum_{\mu\in X^\vartheta}\zeta^{\langle \mu,\check\omega\rangle}\cdot\tr(\vartheta,M(\mu)).
$$
Applying $\pi$ to both sides we get
\begin{equation}\label{s}
\pi\left(\tr(\iota(g)\vartheta,M)\right)
=\sum_{\mu\in X^\vartheta}\zeta_k^{\langle \mu,\check\omega\rangle}\cdot\tr(\vartheta,V_k(\mu))
=\tr(\pi(s)\vartheta,V_k).
\end{equation}
On the other hand, we can first apply $\pi:G(A)\to G(k)$ and then take traces.
This gives
\begin{equation}\label{t}
\pi\left(\tr(\iota(g)\vartheta,M)\right)=\tr(\pi(g)\vartheta,V_k)=\tr(t\vartheta,V_k).
\end{equation}
Comparing the expressions \eqref{s} and \eqref{t} and using Lemma \ref{kconj1} we see that
$\pi(s)\vartheta$ and $t\vartheta$ are $G(k)$-conjugate as claimed.
\qed
We are ready to prove Prop. \ref{clift}.
Recall that $w\vartheta\in W\vartheta$ is an elliptic $\mathbb{Z}$-regular automorphism of $R$ whose order $m$ is nonzero in the algebraically closed field $k$. Let $\zeta\in k^\times$ be a root of unity of order $m$.
Recall that $\check\rho$ is the sum of the fundamental co-weights arising from our chosen pinning.
We have $\check\rho\in\check X^\vartheta$ and $\check\rho(\zeta)\in T(k)^\vartheta$.
We now prove Prop. \ref{clift} in the following form.
\begin{prop}\label{clift2}
For any lift $n\in N(k)$ of $w$, the element
$n\vartheta\in G(k)\vartheta$ is $G(k)$-conjugate to $\check\rho(\zeta)\vartheta$.
\end{prop}
\proof
Assume first that $k$ has characteristic zero. In this case the proof relies on
\cite[Thm. 3.3]{panyushev:theta} and is similar to the proof of
\cite[Thm. 4.2 (iii)]{panyushev:theta}.
The automorphism $\tau:=\check\rho(\zeta)\vartheta\in\Aut(\mathfrak{g})$ has order $m$ and gives a grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}'_i$, where $\mathfrak{g}'_i$ is the $\zeta^i$-eigenspace of $\tau$.
The sum
$E=\sum_{i=1}^\ell E_i$ of the simple root vectors in our pinning belongs to $\mathfrak{g}_1'$.
By \cite[Thm. 3.3(v)]{panyushev:theta}, the dimension of a Cartan subspace $\mathfrak{c}\subset\mathfrak{g}_1'$ may be computed as follows.
Let $f_1,\dots,f_\ell\in k[\mathfrak{t}]$ be homogeneous generators for the algebra of $W$-invariant polynomials on $\mathfrak{t}$. Assume, as we may, that each $f_i$ is an eigenvector for $\vartheta$, with eigenvalue denoted $\varepsilon_i$, and set $d_i=\deg f_i$.
The integer
$$a(m,\vartheta):=|\{i:\ 1\leq i\leq\ell,\ \varepsilon_i\zeta^{d_i}=1\}|$$
depends only on $m$ and $\vartheta$, and we have
$$\dim\mathfrak{c}=a(m,\vartheta).$$
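For example, in type $E_6$ with $\vartheta=1$ (so each $\varepsilon_i=1$) the degrees are $d_i\in\{2,5,6,8,9,12\}$. For $m=12$ the condition $\varepsilon_i\zeta^{d_i}=1$ reads $12\mid d_i$, which holds only for $d_i=12$, so
$$a(12,1)=1$$
and any Cartan subspace $\mathfrak{c}\subset\mathfrak{g}_1'$ is a line.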
Let $\mathfrak{s}$ be a canonical Cartan subalgebra for $\tau$ (section \ref{csa}).
There exists $g\in G$ such that $\mathfrak{t}=\Ad(g)\mathfrak{s}$, and we set $\theta'=g\tau g^{-1}$.
Since $\theta'$ normalizes $\mathfrak{t}$ and belongs to $G\vartheta$ we have $\theta'\in N\vartheta$.
Let $w'\vartheta\in W\vartheta$ be the projection of $\theta'$. Then $\Ad(g)\mathfrak{c}$ is the
$\zeta$-eigenspace $\mathfrak{t}(w'\vartheta,\zeta)$ of $w'\vartheta$ in $\mathfrak{t}$, so
$$\dim \mathfrak{t}(w'\vartheta,\zeta)=a(m,\vartheta).$$
Since $w\vartheta$ is $\mathbb{Z}$-regular and therefore $k$-regular (by Prop. \ref{prop:regular}),
it follows from \cite[Prop. 3.6]{springer:regular} that we also have
$\dim \mathfrak{t}(w\vartheta,\zeta)=a(m,\vartheta)$, and therefore
$$\dim\mathfrak{t}(w\vartheta,\zeta)=\dim\mathfrak{t}(w'\vartheta,\zeta).$$
By \cite[Thm. 6.4 (iv)]{springer:regular} the elements $w\vartheta, w'\vartheta\in W\vartheta$ are conjugate under $W$.
It follows that $n\vartheta$ is $N$-conjugate to an element of $T\theta'$.
Since $w'\vartheta$ is also elliptic, $n\vartheta$ is in fact conjugate to $\theta'$ itself,
and hence to $\tau=\check\rho(\zeta)\vartheta$, as claimed.
Now assume that $k$ has positive characteristic not dividing $m$.
Let $A$ be the cyclotomic subring of $\mathbb{C}$ generated by $z=e^{2\pi i/m}$ and
let $\pi:A\to k$ be the ring homomorphism mapping $z\mapsto \zeta$.
By ellipticity, all lifts of $w\vartheta$ to $N(k)\vartheta$ are $T(k)$-conjugate, so we may choose our lift to be of the form $\pi(n)$ with $n\in N(\mathbb{Z})$. From the characteristic zero case just proved, we have that $\iota(n)\vartheta$ is $G(\mathbb{C})$-conjugate to $\check\rho(z)\vartheta$. By Lemma \ref{k-conj} it follows that
$\pi(n)\vartheta$ is $G(k)$-conjugate to $\check\rho(\zeta)\vartheta$, as claimed.
\qed
\subsection{Stable gradings} \label{sec:stable}
Let $H$ be a connected reductive $k$-group acting on a $k$-vector space $V$.
A vector $v\in V$ is called {\bf $H$-stable} (in the sense of Geometric Invariant Theory, see
\cite{mumford:stable}) if the $H$-orbit of $v$ is closed and the stabilizer of $v$ in $H$ is finite.
The second condition means that the stabilizer $H_v$ is a finite algebraic group:
it has only finitely many points over the algebraically closed field $k$.
Recall we are assuming the characteristic of $k$ is not a torsion prime for $G$
and that the period $m$ of the grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\mathfrak{g}_i$ is nonzero in $k$.
We have chosen a root of unity $\zeta\in k^\times$ of order $m$, and $\theta\in\Aut(\mathfrak{g})$ is the automorphism of order $m$ whose $\zeta^i$-eigenspace is $\mathfrak{g}_i$.
We say the grading $\mathfrak{g}=\oplus_{i\in \mathbb{Z}/m}\ \mathfrak{g}_i$ (or the automorphism $\theta$)
is {\bf stable} if there are $G_0$-stable vectors in $\mathfrak{g}_1$. In this section we will show that stable gradings are closely related to elliptic $\mathbb{Z}$-regular automorphisms of the root system $R$.
\begin{lemma}\label{stabless} A vector $v\in\mathfrak{g}_1$ is stable if and only if $v$ is a regular semisimple element of $\mathfrak{g}$ and the action of $\theta$ on the Cartan subalgebra centralizing $v$ is elliptic.
\end{lemma}
\proof Vinberg showed (\cite[Prop. 3]{vinberg:graded}) that the $G_0$-orbit of $v$ is closed in $\mathfrak{g}_1$ if and only if $v$ is semisimple in $\mathfrak{g}$. His proof works also in positive characteristic (see \cite[2.12-3]{levy:thetap}). If $v$ is semisimple its centralizer $C_G(v)$ is connected (since $p$ is not a torsion prime, by \cite[Thm. 3.14]{steinberg:torsion}) and reductive with semisimple derived subgroup $H$ \cite[13.19, 14.2]{borel:linear}. As $v$ is an eigenvector for $\theta$ we have $\theta(H)=H$.
If $v$ is stable then $H^\theta$ is finite.
Let $\pi:H_{sc}\to H$ be the simply-connected covering of $H$.
We lift $\theta$ to an automorphism of $H_{sc}$, denoting it again by $\theta$.
Now $H_{sc}^\theta$ is connected \cite[chap. 8]{steinberg:endo}
so $\pi(H_{sc}^\theta)\subset\left(H^\theta\right)^\circ$ is trivial.
Since $\ker\pi$ is finite, we must have $H_{sc}^\theta=1$.
This implies that $H_{sc}=1$: otherwise, by \cite[chap. 8]{steinberg:endo},
there would be a maximal torus $T'$ contained in a Borel subgroup $B'$ of $H_{sc}$ with $\theta(T')=T'$ and $\theta(B')=B'$, and $H_{sc}^\theta$ would have rank equal to the number of $\theta$-orbits on the set of simple roots of $T'$ in $B'$, which is positive. Hence $H=1$, so $C_G(v)$ is a torus and $v$ is regular in $\mathfrak{g}$.
The reverse implication is clear.
\qed
Prop. \ref{clift} and Lemma \ref{stabless} have the following corollaries.
\begin{cor} \label{stable} Let $\theta\in G\vartheta$ have order $m$ nonzero in $k$. The following are equivalent.
\begin{enumerate}
\item The grading on $\mathfrak{g}$ given by $\theta$ is stable;
\item The action of $\theta$ on its canonical Cartan subalgebra induces
an elliptic $\mathbb{Z}$-regular automorphism of $R$;
\item
$\theta$ is principal and $m$ is the order of an elliptic $\mathbb{Z}$-regular element of $W\vartheta$.
\end{enumerate}
\end{cor}
\begin{cor}\label{classifiction} The map sending a stable automorphism $\theta\in \Aut(\mathfrak{g})$ to the automorphism of $R$ induced by the action of $\theta$ on its canonical Cartan subalgebra gives a bijection between the $G$-conjugacy classes of stable automorphisms of $\mathfrak{g}$ and the $W$-conjugacy classes of elliptic $\mathbb{Z}$-regular automorphisms of $R$.
\end{cor}
\section{Affine-pinned automorphisms}\label{affine-pinned}
In this section we construct certain automorphisms of $\mathfrak{g}$ arising from symmetries of the affine Dynkin diagram. These will be used to study outer automorphisms of $E_6$.
Assume $\mathfrak{g}$ is a simple Lie algebra over $\mathbb{C}$ with adjoint group $G=\Aut(\mathfrak{g})^\circ$.
Let $N, T$ be the normalizer and centralizer of a Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ and let
$W=N/T$.
Let $R$ be the set of roots of $T$ in $\mathfrak{g}$ and choose a base
$\Delta=\{\alpha_1,\dots,\alpha_\ell\}$ of $R$.
Let $\alpha_0$ be the lowest root of $R$ with respect to $\Delta$ and set $\Pi=\{\alpha_i:\ i\in I\}$, where $I=\{0,1,\dots,\ell\}$. The subgroup of $W$ preserving $\Pi$,
$$W_\Pi=\{w\in W:\ w\Pi=\Pi\}$$
is isomorphic to the fundamental group of $G$.
Each element $w\in W_\Pi$ determines a permutation $\sigma$ of $I$ such that
$$w\cdot \alpha_i=\alpha_{\sigma (i)}.$$
Choose a Chevalley lattice $\mathfrak{g}_\mathbb{Z}\subset\mathfrak{g}$ spanned by a lattice in $\mathfrak{t}$ and root vectors for $T$.
An {\it affine pinning} is a set
$\widetilde\Pi=\{E_0,E_1,\cdots,E_\ell\}$
consisting of nonzero root vectors $E_i\in \mathfrak{g}_{\alpha_i}\cap \mathfrak{g}_\mathbb{Z}$ for each $i\in I$.
Let $N(\mathbb{Z})$ be the stabilizer of $\mathfrak{g}_\mathbb{Z}$ in $N$,
and consider the subgroup
$$N_{\widetilde\Pi}=\{n\in N(\mathbb{Z}):\ n\widetilde\Pi=\widetilde\Pi\}.$$
\begin{lemma} Let $\widetilde\Pi$ be an affine pinning. Then the projection $N\to W$ restricts to an isomorphism $f:N_{\widetilde\Pi}\overset\sim\longrightarrow W_\Pi$.
\end{lemma}
\proof It is clear that $f(N_{\widetilde\Pi})\subset W_\Pi$.
An element in $\ker f$ lies in $T$ and fixes each root vector $E_i$, hence lies in the center of $G$, which is trivial since $G$ is adjoint. Hence $f$ is injective.
Let $w\in W_\Pi$. Since the projection $N\to W$ restricts to a surjection $N(\mathbb{Z})\to W$
\cite[Lemma 22]{steinberg:yale},
there is a lift $n'\in N(\mathbb{Z})$ of $w$.
For each $i\in I$ we have
$n'\cdot E_i=c_i E_{\sigma(i)}$, for some $c_i=\pm 1$.
Let $\check\omega_1,\dots,\check\omega_\ell\in X_\ast(T)$ be the fundamental coweights of $T$ dual to
$\alpha_1,\dots, \alpha_\ell$.
The element $t=\prod_{i=1}^\ell \check\omega_i(c_i)$ lies in $T(\mathbb{Z})$ and the new lift $n=n't$ of $w$ satisfies $n\cdot E_i=E_{\sigma( i)}$ for $1\leq i\leq \ell$.
Let $d$ be the order of $w$. Then $\sigma^d=1$ so $n^d$ fixes $E_i$ for each $1\leq i\leq \ell$. Hence $n^d\in T$ and belongs to the kernel of each simple root $\alpha_i$.
Since $G$ is adjoint, it follows that $n^d=1$.
Let $i=\sigma(0)$. It follows from \cite[VI.2.2]{bour456} that $\sigma^j(0)\neq 0$ for $1\leq j<d$.
By what has been proved, we have
$$n^{-1}\cdot E_i=n^{d-1}\cdot E_i=E_{\sigma^{d-1}(i)}=E_{\sigma^{-1}(i)}=E_0.$$
It follows that $n\cdot E_0=E_i$, so $n$ is a lift of $w$ in $N_{\widetilde\Pi}$.
\qed
Now let $k$ be an algebraically closed field of characteristic not equal to two,
and view $G$ as a group scheme over $\mathbb{Z}$, via the lattice $\mathfrak{g}_\mathbb{Z}$.
Take $w\in W_\Pi$ of order two. Again from \cite[VI.2.2]{bour456} there exists a unique minuscule coweight $\check\omega_j$ such that $w\check\omega_j=-\check\omega_j$.
Since $2\neq 0$ in $k$, the natural map $T(\mathbb{Z})\to T(k)$ is injective, which implies that
the map $N(\mathbb{Z})\to N(k)$ is injective. We now let $n$ be the image in $N(k)$ of
the unique lift of $w$ in $N_{\widetilde\Pi}$.
\begin{prop}\label{kacminuscule}
There exists an affine pinning $\widetilde\Pi$ such that $n$ is $G(k)$-conjugate to $\check \omega_j(-1)$. The Kac coordinates of $\Ad(n)$ are given by:
$$
s_i=
\begin{cases}
1&\quad\text{for}\quad i\in\{0,j\}\\
0&\quad\text{for}\quad i\notin\{0,j\}.
\end{cases}
$$
These labels give the unique $w$-invariant Kac-diagram of order two having $s_0\neq 0$.
\end{prop}
\proof
By \cite[Lemma 5]{carter:weyl} there are mutually orthogonal roots $\gamma_1,\dots,\gamma_m\in R$ with corresponding reflections $r_1,\dots, r_m\in W$, such that
\begin{equation}\label{product}
w=r_1r_2\cdots r_m.
\end{equation}
Since $\check \omega_j$ is minuscule we have $\langle \alpha,\check\omega_j\rangle\in\{-1,0,1\}$
for each $\alpha\in R$. The positive roots made negative by $w$ are those for which
$\langle \alpha,\check\omega_j\rangle\neq 0$. Since $w\gamma_i=-\gamma_i$ for each $i$,
we may choose the sign of each $\gamma_i$ so that $\langle \gamma_i,\check \omega_j\rangle=1$. And since
$$-\check \omega_j=w\cdot\check \omega_j=\check \omega_j-\sum_{i=1}^m\langle \gamma_i,\check \omega_j\rangle\check \gamma_i,$$
it then follows that
\begin{equation}\label{sum}
\check \gamma_1+\check \gamma_2+\cdots +\check \gamma_m=2\check \omega_j.
\end{equation}
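The simplest instance of \eqref{sum} is type $A_1$: there $m=1$, $w$ is the reflection in the unique positive root $\alpha$, and since $\langle\alpha,\check\omega_1\rangle=1$ we have
$$\check\gamma_1=\check\alpha=2\check\omega_1,$$
as predicted.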
For each $i=1,\dots,m$ there exists a morphism
$\varphi_i: SL_2\to G$
over $\mathbb{Z}$ whose restriction to the diagonal subgroup is given by
$$\varphi_i\left(\begin{bmatrix}t&0\\0&t^{-1}\end{bmatrix}\right)=\check \gamma_i(t)$$
and such that $\varphi_i\left(\begin{bmatrix}0&-1\\1&0\end{bmatrix}\right)\in N(\mathbb{Z})$ and is a representative of $r_i$.
Since the roots $\gamma_i$ are mutually orthogonal, the images of these homomorphisms $\varphi_i$ commute with one another. Hence we have a $\mathbb{Z}$-morphism
$$\varphi:SL_2\to G,\qquad \text{given by}\qquad
\varphi\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=\prod_{i=1}^m\varphi_i\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right).
$$
By equation \eqref{product} the element
\begin{equation}\label{npinned}
n:=\varphi\left(\begin{bmatrix}0&-1\\1&0\end{bmatrix}\right)
\end{equation}
belongs to $N(\mathbb{Z})$ and represents $w$.
Equation \eqref{sum} implies that
$$\varphi\left(\begin{bmatrix}t&0\\0&t^{-1}\end{bmatrix}\right)=\check \omega_j(t)^2,$$
which in turn implies that $n$ has order two.
Since the matrices
$\begin{bmatrix}0&-1\\1&0\end{bmatrix}$ and
$\begin{bmatrix}\sqrt{-1}&0\\0&-\sqrt{-1}\end{bmatrix}$
are conjugate in $\SL_2$, it follows that $n$ is conjugate to $\check \omega_j(-1)$ in $G$,
and that $\Ad(n)$ has the asserted Kac-coordinates.
We construct an affine pinning stable under $n$ as follows.
Choose representatives $\alpha_i$ of the $w$-orbits in $\Pi$, and choose arbitrary nonzero root vectors $E_i\in \mathfrak{g}_\mathbb{Z}$ for these roots. Let $\sigma$ be the permutation of $I$ induced by $w$. If $w\cdot\alpha_i\neq\alpha_i$, let $E_{\sigma(i)}=n\cdot E_i$. Since $n$ has order two, we have $n\cdot E_{\sigma(i)}=E_i$. If $w\cdot\alpha_i=\alpha_i$ then $\alpha_i$ is orthogonal to each of the roots $\gamma_1,\dots,\gamma_m$, since the latter are negated by $w$. It follows that the image of each homomorphism $\varphi_1,\dots,\varphi_m$ centralizes the root space $\mathfrak{g}_{\alpha_i}$, so any nonzero vector $E_i\in\mathfrak{g}_{\alpha_i}\cap\mathfrak{g}_\mathbb{Z}$ is fixed by $n$.
The collection $\widetilde\Pi=\{E_i\}$ of vectors thus defined is an affine pinning stable under $n$. \qed
The following lemma will also be useful.
\begin{lemma}\label{nfixed} Let $S=(T^n)^\circ$ be the identity component of the subgroup of $T$ centralized by $n$. Then $S$ is centralized by the entire group $\varphi(\SL_2)$.
\end{lemma}
\proof
Since $2\check\omega_j$ is the simple co-root of $\varphi(\SL_2)$ and $\check\omega_j$ is minuscule, we have that $\langle \alpha,2\check\omega_j\rangle\in \{-2,0,2\}$ for every root $\alpha\in R$.
Hence $\varphi(\SL_2)$ acts on $\mathfrak{g}$ as a sum of copies of the trivial and adjoint representations. Applying the element
\[\begin{bmatrix} 1&0\\-t&1\end{bmatrix}\cdot
\begin{bmatrix} 0&-1/t\\t&0\end{bmatrix}=
\begin{bmatrix} 1&-1/t\\0&1\end{bmatrix}\cdot
\begin{bmatrix} 1&0\\t&1\end{bmatrix}
\]
to a vector in the zero weight space and comparing components in the $-2$ weight space, we find (since the characteristic of $k$ is not two) that any vector in $\mathfrak{g}$ invariant under the normalizer of
$2\check\omega_j(k^\times)$ in $\varphi(\SL_2)$ is invariant under all of $\varphi(\SL_2)$.
Since the Lie algebra of $S$ consists of such vectors, the lemma is proved.
\qed
\section{Little Weyl groups}\label{littleweyl}
Let $\theta$ be an automorphism of $\mathfrak{g}$ whose order $m$ is invertible in $k$.
Choose a root of unity $\zeta\in k^\times$ of order $m$ and let
$\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$ be the grading of $\mathfrak{g}$ into $\zeta^i$-eigenspaces of $\theta$.
Choose a Cartan subspace $\mathfrak{c}$ in $\mathfrak{g}_1$ and assume the rank
$r=\dim \mathfrak{c}$ is positive.
The little Weyl group is defined as
$$W(\mathfrak{c},\theta)=N_{G_0}(\mathfrak{c})/Z_{G_0}(\mathfrak{c}),$$
where $G_0=(G^\theta)^\circ$ is the connected subgroup of $G$ with Lie algebra $\mathfrak{g}_0$. When it is necessary to specify $G$ in the little Weyl group we will write $W_G(\mathfrak{c},\theta)$.
It is clear from the definition that $W(\mathfrak{c},\theta)$ acts faithfully on $\mathfrak{c}$.
From \cite{vinberg:graded} and \cite{levy:thetap},
it is known that the action of $W(\mathfrak{c},\theta)$ on $\mathfrak{c}$ is generated by transformations fixing a hyperplane in $\mathfrak{c}$, that
the restriction map $k[\mathfrak{g}_1]^{G_0}\to k[\mathfrak{c}]^{W(\mathfrak{c},\theta)}$ is an isomorphism,
and that this ring is a polynomial ring with homogeneous generators $f_1,\dots,f_r$,
such that
$$|W(\mathfrak{c},\theta)|=\prod_{i=1}^r\deg(f_i).$$
\subsection{Upper bounds on the little Weyl group}
Recall we have fixed a Cartan subalgebra $\mathfrak{t}$ in $\mathfrak{g}$, with normalizer and centralizer $N$ and $T$ in $G$ and we have identified $W=N/T$.
Replacing $\theta$ by a $G$-conjugate if necessary, we may assume $\mathfrak{t}$ is the canonical Cartan subalgebra for $\theta$ (see \ref{csa}).
In particular $\mathfrak{c}$ is the $\zeta$-eigenspace of $\theta$ in $\mathfrak{t}$.
Then $\theta$ normalizes $N$ and $T$ in $\Aut(\mathfrak{g})$, giving an action of $\theta$ on $W$;
let $W^\theta=\{y\in W:\ \theta(y)=y\}$ be the fixed point subgroup of $\theta$ in $W$.
Elements in $W^\theta$ commute with the action of $\theta$ on $\mathfrak{t}$, so $W^\theta$ acts on the eigenspace $\mathfrak{c}$.
Let
\begin{equation}\label{W1}
W_1^\theta:=W^\theta/C_W(\mathfrak{c})^\theta
\end{equation}
be the quotient acting faithfully on $\mathfrak{c}$.
Since $\mathfrak{t}$ is a Cartan subalgebra in the Levi subalgebra $\mathfrak{m}=\mathfrak{z}_\mathfrak{g}(\mathfrak{c})$,
every element of $W(\mathfrak{c},\theta)$ has a representative in $N$; that is,
we have an embedding
$$W(\mathfrak{c},\theta)\hookrightarrow W_1^\theta.$$
Note that $W(\mathfrak{c},\theta)$ is more subtle than $W_1^\theta$. For it can happen that two automorphisms $\theta$ and $\theta'$ of the same order agree on $\mathfrak{t}$ and $W$, so they have the same Cartan subspace $\mathfrak{c}$ and $W_1^{\theta}=W_1^{\theta'}$, but nevertheless $W(\mathfrak{c},\theta)\neq W(\mathfrak{c},\theta')$ (e.g. cases $4_a$ and $4_b$ in $E_6$; these examples are also used in
\cite[4.5]{panyushev:theta} to illustrate other subtleties).
A still coarser group, depending only on $\mathfrak{c}$ and not on $\theta$, is
$$W(\mathfrak{c}):= N_{W}(\mathfrak{c})/C_{W}(\mathfrak{c}).
$$
As subgroups of $\GL(\mathfrak{c})$, we have containments
$$W(\mathfrak{c},\theta)\subset W_1^\theta\subset W(\mathfrak{c}).$$
Under certain circumstances one or both of these containments is an equality.
\begin{lemma}\label{littleweylregular} Suppose $\mathfrak{c}$ contains a regular element of $\mathfrak{g}$. Then
$$W_1^\theta=W^\theta=W(\mathfrak{c}).$$
\end{lemma}
\proof By regularity it is clear that $W_1^\theta=W^\theta$ and that $W(\mathfrak{c})=N_W(\mathfrak{c})$.
Moreover, any $y\in N_W(\mathfrak{c})$ commutes with the scalar action of $\theta$ on $\mathfrak{c}$, so the commutator $[y,\theta]$ acts trivially on $\mathfrak{c}$ and is therefore trivial in $W$, again by regularity.
\qed
Panyushev \cite[Thm. 4.7]{panyushev:theta} has shown that both containments above are equalities if $\theta$ is principal:
\begin{prop}[Panyushev]\label{pan} If $\theta$ is principal then
$W(\mathfrak{c},\theta)=W_1^\theta=W(\mathfrak{c}).$
\end{prop}
We note that Panyushev works in characteristic zero, but his geometric proof works equally well in good characteristic $p\nmid m$, using the invariant theoretic results of \cite{levy:thetap}.
\begin{cor}\label{regprinc} If $\theta$ is principal and the restriction of $\theta$ to $\mathfrak{t}$ induces a $\mathbb{Z}$-regular automorphism of $R$ then $W(\mathfrak{c},\theta)=W^\theta$.
\end{cor}
\proof By Prop. \ref{prop:regular}, $\mathbb{Z}$-regularity implies $k$-regularity, so $W(\mathfrak{c})=W^\theta_1$ is just $W^\theta$. \qed
This sharpens the first result in this direction, which was proved in Vinberg's original work
\cite[Prop. 19]{vinberg:graded}:
\begin{cor}[Vinberg]\label{vinberg:stable} If $\theta$ gives a stable grading on $\mathfrak{g}$ then
$W(\mathfrak{c},\theta)=W^\theta$.
\end{cor}
\subsection{Little Weyl groups for inner gradings}
Assume now that $\theta$ is inner, and let the restriction
of $\theta$ to $\mathfrak{t}$ be given by the element $w\in W$.
In this section we give upper and lower bounds for $W(\mathfrak{c},\theta)$ depending only on $w$,
under certain conditions; these will suffice to compute almost all little Weyl groups in type $E_n$.
The fixed-point group
$$W^\theta=C_W(w)$$
is now the centralizer of $w$ in $W$, which acts on the $\zeta$-eigenspace $\mathfrak{c}$
of $w$ in $\mathfrak{t}$.
The quotient by the kernel of this action is the group $W_1^\theta$.
Simple upper and lower bounds for $W(\mathfrak{c},\theta)$ can be obtained as follows.
\begin{lemma}\label{simplebound} If $U$ is any subgroup of $C_W(w)$ acting trivially on $\mathfrak{c}$ then we have the inequalities
$$m\leq |W(\mathfrak{c},\theta)|\leq \frac{|C_W(w)|}{|U|}.$$
\end{lemma}
\proof
Since $\theta$ is semisimple it lies in the identity component $G_0$ of its centralizer in $G$. Hence the cyclic group $\langle\theta\rangle$ embeds in $W(\mathfrak{c},\theta)$, whence the lower bound.
The upper bound follows from \eqref{W1}.
\qed
Information about $C_W(w)$, including its order, is given in \cite{carter:weyl}.
Using the tables therein, one can often find a fairly large subgroup $U\subset C_W(w)$
as in Lemma \ref{simplebound}.
{\bf Example 1:\ } In type $E_8$ there are eight cases (namely $12_b$ through $12_i$ in the tables below) where $w$ is a Coxeter element in
$W(E_6)$.
From \cite{carter:weyl} we have $|C_W(w)|=144$; in fact, the centralizer is given by
$$C_W(w)=\langle w\rangle\times \langle -w^6\rangle\times W(A_2),$$
where $A_2$ is orthogonal to the $E_6$.
Since $\mathfrak{c}$ lives in the $E_6$ Levi subalgebra and $w^6$ acts by $-1$ on $\mathfrak{c}$,
the inequalities of Lemma \ref{simplebound} become equalities for
$U=\langle -w^6\rangle\times W(A_2)$.
Hence $W(\mathfrak{c},\theta)\simeq \mu_{12}$ in these eight cases.
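Explicitly, $|U|=2\cdot|W(A_2)|=12$, so the bounds of Lemma \ref{simplebound} pinch:
$$12=m\leq |W(\mathfrak{c},\theta)|\leq \frac{|C_W(w)|}{|U|}=\frac{144}{12}=12.$$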
{\bf Example 2:\ } In type $E_8$ there are four cases ($6_h$ through $6_k$) where $w$ is a Coxeter element in $W(D_4)$. Let $\Delta_4=\{\beta_1,\dots,\beta_4\}$ be a base of the corresponding root subsystem of type $D_4$. The subgroup of $W(E_8)$ permuting $\Delta_4$ is a symmetric group $S_3$. We may choose the Coxeter element $w$ to be centralized by this $S_3$, and $\mathfrak{c}$ is a line in the span of the co-root vectors $\{d\check\beta_i(1)\}$.
The roots of $E_8$ orthogonal to $\Delta_4$ form another system of type $D_4$,
hence there is a subgroup $W_2\simeq W(D_4)$ fixing each root in $\Delta_4$ and therefore
acting trivially on $\mathfrak{c}$. Since $S_3$ normalizes $\Delta_4$ it also normalizes $W_2$.
From \cite{carter:weyl} we have $|C_W(w)|=6\cdot 6\cdot 192$,
so the inequalities of Lemma \ref{simplebound} become equalities for $U\simeq S_3\ltimes W(D_4)$.
Hence $W(\mathfrak{c},\theta)\simeq \mu_{6}$ in these four cases.
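Here too the arithmetic is immediate: $|U|=|S_3|\cdot|W(D_4)|=6\cdot 192=1152$, so
$$6=m\leq |W(\mathfrak{c},\theta)|\leq \frac{|C_W(w)|}{|U|}=\frac{6\cdot 6\cdot 192}{6\cdot 192}=6.$$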
{\bf Example 3:\ } In type $E_7$ there are two cases ($9_a$ and $9_b$)
where $w$ is the square of a Coxeter element
and we have $C_W(w)=\langle -w\rangle\simeq\mu_{18}$.
Since $w$ is $\mathbb{Z}$-regular, Lemma \ref{simplebound} only gives the inequalities
$$9\leq |W(\mathfrak{c},\theta)|\leq 18.$$
In fact, we have $W(\mathfrak{c},\theta)\simeq\mu_{18}$ and $\mu_9$ in cases $9_a$ and $9_b$, respectively. This shows that, in general,
$W(\mathfrak{c},\theta)$ depends on $\theta$, and not just on $w$.
We will return to this example after sharpening our lower bound, as follows.
For any subset $J\subset\{1,\dots,\ell\}$ let $R_J$ be the root subsystem generated by
$\{\alpha_j:\ j\in J\}$, let $W_J$ be the Weyl group of $R_J$ and let $\mathfrak{g}_J$ be the subalgebra of $\mathfrak{g}$ generated by the root spaces $\mathfrak{g}_\alpha$ for $\alpha\in R_J$. If the action of $\theta$ on $\mathfrak{t}$ is given by an element $w\in W_J$ then $\theta$ induces an automorphism $\theta_J$ of $\mathfrak{g}_J$.
\begin{lemma}\label{Jlowerbound} Suppose $\theta$ normalizes the Cartan subalgebra $\mathfrak{t}$ and has image
$w\in W_J$ for some subset $J\subset\{1,\dots,\ell\}$
such that the following conditions hold.
\begin{enumerate}
\item $\theta$ is conjugate to an automorphism $\theta'=\Ad(t)$ where $t\in T$ satisfies $\alpha_j(t)=\zeta$ for all $j\in J$;
\item The rank of $w$ on $\mathfrak{t}$ is equal to the rank of $\theta$;
\item The principal automorphisms of $\mathfrak{g}_J$ of order $m$ have rank equal to the rank of $\theta$;
\item $w$ is $\mathbb{Z}$-regular in $W_J$.
\end{enumerate}
Then there is an embedding $C_{W_J}(w)\hookrightarrow W(\mathfrak{c},\theta)$.
\end{lemma}
\proof
Condition 1 means there is $g\in G$ such that the automorphism
$$\theta'=g\theta g^{-1}=\Ad(t),$$
where $t\in T$ satisfies $\alpha_j(t)=\zeta$ for all $j\in J$.
We have $t=\check\rho_J(\zeta)z$ where $\check\rho_J$ is half the sum of the positive co-roots of $R_J$ (with respect to $\Delta_J$) and $z\in \ker\alpha_j$ for all $j\in J$.
Condition 2 means that the eigenspace $\mathfrak{c}:=\mathfrak{t}(w,\zeta)$ is a Cartan subspace for $\theta$.
Note that $\mathfrak{c}\subset \mathfrak{g}_J$.
Let $\mathfrak{c}_J$ be a Cartan subspace for the automorphism
$$\theta'_J:=\theta'\vert_{\mathfrak{g}_J}=\Ad(\check\rho_J(\zeta))\in G_J,$$
where $G_J=\Aut(\mathfrak{g}_J)^\circ$.
As $\theta'_J$ is principal of order $m$, we have $\dim\mathfrak{c}_J=\dim\mathfrak{c}$, by condition 3.
Now $\mathfrak{c}':=\Ad(g)\mathfrak{c}$ is a Cartan subspace for $\theta'$ in
$\mathfrak{g}(\theta',\zeta)$, and the latter subspace contains
$\mathfrak{g}_J(\theta'_J,\zeta)$, which in turn contains $\mathfrak{c}_J$.
Thus $\mathfrak{c}'$ and $\mathfrak{c}_J$ are two Cartan subspaces in $\mathfrak{g}(\theta',\zeta)$,
so there is $h\in G^{\theta'}$ such that $\Ad(hg)\mathfrak{c}=\Ad(h)\mathfrak{c}'=\mathfrak{c}_J$
\cite[Thm. 2.5]{levy:thetap}.
Conjugation by $hg$ gives an isomorphism
$$W_G(\mathfrak{c},\theta)\overset\sim\longrightarrow W_G(\mathfrak{c}_J,\theta').$$
Since the latter group contains $W_{G_J}(\mathfrak{c}_J,\theta'_J)$, we have an embedding
$$W_{G_J}(\mathfrak{c}_J,\theta'_J)\hookrightarrow W_G(\mathfrak{c},\theta).$$
Let $\mathfrak{t}_J=\mathfrak{t}\cap\mathfrak{g}_J$ and let $\mathfrak{t}_J'$ be a $\theta'_J$-stable Cartan subalgebra of $\mathfrak{g}_J$ containing $\mathfrak{c}_J$.
Then there is $b\in G_J$ such that $\Ad(b)\mathfrak{t}_J'\subset \mathfrak{t}_J$,
so $b\theta'_Jb^{-1}$ normalizes $\mathfrak{t}_J$ and $\mathfrak{c}_J':=\Ad(b)\mathfrak{c}_J$ is a Cartan subspace for
$b\theta'_Jb^{-1}$ contained in $\mathfrak{t}_J$.
Let $w'\in W_J$ be the element induced by $b\theta'_Jb^{-1}$.
We now have two elements $w,w'\in W_J$ having equidimensional $\zeta$-eigenspaces $\mathfrak{c}$ and $\mathfrak{c}_J'$ in $\mathfrak{t}_J$.
The one-parameter subgroups of $G_J$ which centralize $\mathfrak{t}_J$ form a lattice giving a
$\mathbb{Z}$-form $\check X_J$ of $\mathfrak{t}_J$. Let $A$ be the cyclotomic subring of $\mathbb{C}$ generated by $z=e^{2\pi i/m}$ and let $\pi:A\to k$ be the ring homomorphism sending $z\mapsto \zeta$.
Since the map $\pi:\mu_m(\mathbb{C}^\times)\to \mu_m(k^\times)$ is an isomorphism,
it follows that the $z$-eigenspaces of $w$ and $w'$ in $\check X_J\otimes\mathbb{C}$ have the same dimension.
Now $w$ is $k$-regular on $\mathfrak{t}_J=k\otimes\check X_J$, by condition 4.
Hence $w$ is $\mathbb{C}$-regular on $\mathbb{C}\otimes\check X_J$, by Prop. \ref{prop:regular}.
By \cite[6.4]{springer:regular}, the elements $w$ and $w'$ are conjugate in $W_J$, so $w'$ is $k$-regular on
$\mathfrak{t}_J$. Hence the principal automorphism $b\theta'_Jb^{-1}$ of $\mathfrak{g}_J$ has regular vectors in
$\Ad(b)\mathfrak{c}_J$, so the principal automorphism $\theta'_J$ has regular vectors in $\mathfrak{c}_J$.
It now follows from Cor. \ref{regprinc} that
$W_{G_J}(\mathfrak{c}_J,\theta'_J)\simeq C_{W_J}(w')\simeq C_{W_J}(w)$.
\qed
{\bf Remarks:\ } 1.\ In practice, condition 1 means the normalized Kac diagram of $\theta$ can be conjugated under the affine Weyl group $W_{\aff}(R)$ to a (usually un-normalized) Kac diagram with $1$ on each node for $j\in J$. We will see that condition 1 is verified as a byproduct of the normalization algorithm.
2.\ The element $w$ is usually elliptic in $W_J$. When this holds, condition 3 is implied by conditions 2 and 4, as follows from Prop. \ref{clift2}.
3.\ Recall that the order of $C_{W_J}(w)$ is the product of those degrees of $W_J$ which are divisible by the order $m$ of $w$. Thus the lower bound in Prop. \ref{Jlowerbound} is completely explicit.
{\bf Example 3 revisited:\ } Recall that $G$ has type $E_7$ and $w$ is the square of a Coxeter element. We give the normalized Kac diagram for each $\theta$ and the un-normalized diagram for each $\theta'$, whose subdiagram of $1$'s determines $J$.
$$
\begin{array}{cccc}
\text{No.} & \theta& \theta' & J\\
\hline
9_a&\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1} &\EVII{-8}{1}{1}{1}{1}{1}{1}{1} & E_7\\
9_b&\EVII{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1} & \EVII{-7}{1}{1}{1}{1}{1}{1}{0}& E_6\\
\hline
\end{array}
$$
Lemma \ref{Jlowerbound} shows that $9_a$ has little Weyl group $W(\mathfrak{c},\theta)\simeq\mu_{18}$, but does not decide case $9_b$, which we treat using invariant theory (see section \ref{littleweylE}).
\subsection{Stable isotropy groups}\label{stableisotropy}
Assume that $\theta\in\Aut(\mathfrak{g})$ gives a stable grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$. By definition there is a regular semisimple element $v\in\mathfrak{g}_1$ whose isotropy subgroup in $G_0$ is finite. Fix a Cartan subspace $\mathfrak{c}\subset\mathfrak{g}_1$ and let $S$ be the unique maximal torus in $G$ centralizing $\mathfrak{c}$. In the proof of Lemma \ref{stabless} we saw that $C_G(v)$ is a torus, so we must have $C_G(v)=S$. It follows that all stable vectors in $\mathfrak{c}$ have the same isotropy group in $G_0$, equal to
$$S_0:=S\cap G_0.$$
We now give a more explicit description of $S_0$.
First, $S_0$ is contained in the fixed-point subgroup $S^\theta$, which is finite of order
$$|S^\theta|=\det(1-\theta\vert_{X^\ast(S)}).$$
Let $N(S)$ be the normalizer of $S$ in $G$.
Then $N(S)^\theta$ meets all components of $G^\theta$, and it follows from Cor. \ref{vinberg:stable} that
the inclusion $S^\theta\hookrightarrow G^\theta$ induces an isomorphism
$$S^\theta/S_0\simeq G^\theta/G_0.$$
This quotient depends only on the image $\vartheta$ of $\theta$ in the component group of $\Aut(\mathfrak{g})$. To see this, let
$$G_{sc}\overset\pi\longrightarrow G$$
be the simply-connected covering of $G$ and set $Z=\ker\pi$.
Then $\theta$ and $\vartheta$ lift to automorphisms of $G_{sc}$
which we again denote by $\theta$ and $\vartheta$.
Since $G_{sc}^\theta$ is connected and $\theta=\vartheta$ on $Z$,
we have an exact sequence
$$1\longrightarrow Z^\vartheta\longrightarrow G_{sc}^\theta\longrightarrow G_0\longrightarrow 1,$$
which restricts to an exact sequence
$$1\longrightarrow Z^\vartheta\longrightarrow S_{sc}^\theta\longrightarrow S_0\longrightarrow 1,$$
where $S_{sc}=\pi^{-1}(S)$. Since
$$|S^\theta|=|S_{sc}^\theta|,$$
it follows that we have another exact sequence
$$1\longrightarrow S_0\longrightarrow S^\theta\longrightarrow Z/(1-\vartheta)Z\longrightarrow 1.$$
On the other hand, $Z/(1-\vartheta)Z$ is isomorphic to the subgroup $\Omega_\vartheta\subset \widetilde{W}_{\aff}(R,\vartheta)$ stabilizing the alcove $C$.
The group $\Omega_\vartheta$ acts as symmetries of the twisted affine Dynkin diagram
$D({^eR})$. These groups are well-known if $e=1$; for $e>1$, $\Omega_\vartheta$ is the full symmetry group of $D({^eR})$ and has order $1$ or $2$.
It follows that if $\theta$ is stable then
the isotropy group $S_0$ fits into an exact sequence
\begin{equation}\label{isotropy2}
1\longrightarrow S_0\longrightarrow S^\theta\longrightarrow \Omega_\vartheta\longrightarrow 1.
\end{equation}
The groups $S_0$ are tabulated for exceptional groups in Sect. \ref{exceptional}.
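To illustrate \eqref{isotropy2}, take the stable inner grading of $E_8$ with $m=2$, for which $w=-1$ on $\mathfrak{t}$ (see Table 5). Since the center of the simply-connected group of type $E_8$ is trivial, $\Omega_\vartheta=1$, so
$$S_0=S^\theta=S[2]\simeq\boldsymbol{\mu}_2^8,$$
of order $\det(1-\theta\vert_{X^\ast(S)})=\det(2\cdot I_8)=2^8$, in agreement with the table.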
\subsection{Stable orbits and elliptic curves}
Certain remarkable stable gradings have appeared in recent work of Bhargava and Shankar on the average rank of elliptic curves
(\cite{bhargava-shankar1}, \cite{bhargava-shankar2}). These gradings have periods $m=2,3,4,5$ and are of types ${^2\!A_2},\ {^3\!D_4},\ {^2\!E_6},\ E_8$ respectively, as tabulated below. Here $\mathbf d$ stands for the natural representation of $\SL_d$.
\begin{center}
$$
{\renewcommand{\arraystretch}{1.2}
\begin{array}{cccccc}
\hline
m&\text{Kac coord.}& W(\mathfrak{c},\theta)&\text{degrees}&G_0&\mathfrak{g}_1\\
\hline
&&&&&\\
2&\twoAtwoo &\SL_2(\mathbb{Z}/2)&2,3
&\SL_2/\boldsymbol{\mu}_2&\Sym^4(\mathbf 2)\\
&&&&&\\
3&0\ 0\Lleftarrow 1&\SL_2(\mathbb{Z}/3)&4,6
&\SL_3/\boldsymbol{\mu}_3&\Sym^3(\mathbf 3)\\
&&&&&\\
4&0\ 0\ 0\Leftarrow 1\ 0 &\boldsymbol{\mu}_2\times\SL_2(\mathbb{Z}/4)&8,12
&(\SL_2\times\SL_4)/\boldsymbol{\mu}_4&\mathbf 2\boxtimes\Sym^2(\mathbf 4)\\
&&&&&\\
5&\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0} &\boldsymbol{\mu}_5\times\SL_2(\mathbb{Z}/5)&20,30
&(\SL_5\times\SL_5)/\boldsymbol{\mu}_5&\mathbf 5\boxtimes\Lambda^2\mathbf 5\\
&&&&&\\
\hline
\end{array}}
$$
\end{center}
For each $m=2,3,4,5$ the isotropy subgroup $S_0$ is isomorphic to $\boldsymbol{\mu}_m\times\boldsymbol{\mu}_m$ and the little Weyl group $W(\mathfrak{c},\theta)$ is isomorphic to the group $W_m$ with presentation
$$W_m=\langle s,t:\ s^m=t^m=1, \quad sts=tst\rangle.$$
(Note that $W_m$ is infinite for $m> 5$.) The exact sequence
$$
1\longrightarrow S_0\longrightarrow N_{G_0}(\mathfrak{c})\longrightarrow W(\mathfrak{c},\theta)\longrightarrow 1
$$
gives a homomorphism $W(\mathfrak{c},\theta)\to \Aut(S_0)=\GL_2(\mathbb{Z}/m\mathbb{Z})$
with image $\SL_2(\mathbb{Z}/m\mathbb{Z})$ and split kernel $\langle\theta^e\rangle\simeq \boldsymbol{\mu}_{m/e}$,
as tabulated above (see also \cite{reeder:cyclotomic}).
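As a consistency check (not needed for the arguments below), the order of the little Weyl group is
$$|W(\mathfrak{c},\theta)|=\frac{m}{e}\cdot|\SL_2(\mathbb{Z}/m\mathbb{Z})|=6,\ 24,\ 96,\ 600\qquad\text{for } m=2,3,4,5,$$
which in each case equals the product $d_1d_2$ of the degrees listed in the table.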
In each case the number $|R|$ of roots is equal to $m\cdot (m-1)\cdot (12/b),$
where $b=4,3,2,1$ is the maximal number of bonds between two nodes in the twisted affine diagram $D({^eR})$.
We have $\dim G_0=|R|/m$ and
the degrees $d_1<d_2$ have the property that $3d_1=2d_2=|R|/(m-1)$.
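For example, when $m=5$ we have $b=1$ and $|R|=5\cdot 4\cdot 12=240$, the number of roots of $E_8$; then
$$\dim G_0=240/5=48=\dim(\mathfrak{sl}_5\times\mathfrak{sl}_5),$$
and the degrees $d_1=20$, $d_2=30$ satisfy $3d_1=2d_2=60=240/(m-1)$.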
Let $I, J\in k[\mathfrak{c}]^{W(\mathfrak{c},\theta)}$ be homogeneous generators of degrees $d_1, d_2$.
The discriminant on $\mathfrak{t}$ (product of all the roots in $R$) has restriction to $\mathfrak{c}$ given by
$D^{m-1}$ (up to nonzero scalar), where $D=-4I^3-27J^2$. The stable vectors $v\in \mathfrak{c}$ are those where $D(v)\neq 0$, and each stable vector $v$ corresponds to an elliptic curve
$E_v$ with equation
$$y^2=x^3+I(v)\cdot x+J(v)$$
whose $m$-torsion group $E_v[m]$ is isomorphic (as an algebraic group over $k$) to $S_0$.
For more information, along with some generalizations to hyperelliptic curves, see \cite{gross:manjulrep}.
\section{Classification of stable gradings}\label{stableclassification}
Let $\theta\in G\vartheta$ be an automorphism of $\mathfrak{g}$ whose order $m$ is invertible in $k$,
associated to the grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$.
After conjugating $\theta$ by an element of $G$ we may assume that $\mathfrak{t}$ is the canonical Cartan subalgebra of $\theta$. Then $\theta\vert_\mathfrak{t}=w\vartheta$, for some $w\in W$.
In section \ref{principalstable} we have seen that $\theta$ is stable if and only if $w\vartheta$ is an elliptic $\mathbb{Z}$-regular automorphism of $R$, in which case $\theta$ is $G$-conjugate to $\check\rho(\zeta)\vartheta$ for some/any root of unity $\zeta\in k^\times$ of order $m$. Moreover, the $G$-conjugacy class of $\theta$ is completely determined by its order $m$.
The values of $m$ which can arise are the orders of elliptic $\mathbb{Z}$-regular automorphisms of $R$ in $W\vartheta$; these are classified in \cite{springer:regular}.
For example, the elliptic $\mathbb{Z}$-regular elements in $W\vartheta$ of maximal order are the
{\bf $\vartheta$-Coxeter elements}, whose order is the
$\vartheta$-Coxeter number
$$h_\vartheta=e\cdot (b_1+b_2+\cdots+b_{\ell_\vartheta})$$
(see \eqref{twistedcoxeter}).
These form a single $W$-conjugacy class in $W\vartheta$,
representatives of which include elements of the form $w\vartheta$, where
$w$ is the product, in any order, of one reflection $r_i$ taken from each of the $\vartheta$-orbits on simple reflections.
For any algebraically closed field $k$ in which $h_\vartheta$ is invertible
and any $\zeta\in k^\times$ of order $h_\vartheta$,
the automorphism
$$\theta_{\cox}=\Ad(\check\rho(\zeta))\vartheta\in\Aut(\mathfrak{g})$$
is stable of order $h_\vartheta$ and acts on its canonical Cartan subalgebra via a $\vartheta$-Coxeter element. The Kac coordinates of $\theta_\cox$ have
$s_i=1$ for all $i\in\{0,\dots,\ell_\vartheta\}$ and are already normalized.
For $m<h_\vartheta$ the automorphism $\check\rho(\zeta)\vartheta$ corresponds to a point in $\mathcal{A}_\mathbb{Q}^\vartheta$ with un-normalized coordinates $s_i=1$ for $i\neq 0$ and $s_0=1+(m-h_\vartheta)/e$ (see \ref{principalmu}). Here we must apply the normalization algorithm to obtain normalized Kac coordinates. By \eqref{isotropy2} these normalized Kac diagrams will be invariant under the symmetry group of the diagram
$D({^eR})$.
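As a check, this formula recovers the un-normalized diagrams in the tables below: in type $E_6$ ($e=1$, $h_\vartheta=12$) we get
$$s_0=1+(m-12)=-2,\,-5,\,-8\qquad\text{for } m=9,6,3,$$
while in type ${^2\!E_6}$ ($e=2$, $h_\vartheta=18$) we get $s_0=1+(m-18)/2=-2,\,-5,\,-6,\,-7$ for $m=12,6,4,2$, matching Tables 2 and 3.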
The resulting classification of the stable gradings in all types is tabulated for exceptional Lie algebras in section \ref{exceptional} and for classical Lie algebras in section
\ref{classical}.
\subsection{Stable gradings of exceptional Lie algebras}
\label{exceptional}
Here we tabulate the stable gradings for exceptional Lie algebras,
along with the corresponding elliptic $\mathbb{Z}$-regular element $w\vartheta\in W\vartheta$
and the isotropy group $S_0$ (see section \ref{stableisotropy}).
The column labelled $A$ will be explained in section \ref{distinguished}.
\begin{center}
{\small Table 2: The stable gradings for ${E_6}$}
$$
{\renewcommand{\arraystretch}{1.2}
\begin{array}{cccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w&S_0&A\\
\hline
12=h_\vartheta&\EVI{1}{1}{1}{1}{1}{1}{1} &\EVI{1}{1}{1}{1}{1}{1}{1}&E_6
&1&E_6\\
&&&&&\\
9&\EVI{-2}{1}{1}{1}{1}{1}{1}&\EVI{1}{1}{1}{1}{0}{1}{1} &E_6(a_1)
&1&E_6(a_1)\\
&&&&&\\
6&\EVI{-5}{1}{1}{1}{1}{1}{1}&\EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1} &E_6(a_2)
&1&E_6(a_3)\\
&&&&&\\
3&\EVI{-8}{1}{1}{1}{1}{1}{1}&\EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0} &3A_2
&\boldsymbol{\mu}_3\times\boldsymbol{\mu}_3&-\\
\hline
\end{array}}
$$
\end{center}
\vskip25pt
\begin{center}
{\small Table 3: The stable gradings for ${^2\!E_6}$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{ccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w\vartheta &S_0\\
\hline
18=h_\vartheta&\outEVI{1}{1}{1}{1}{1} &\outEVI{1}{1}{1}{1}{1}&-E_6(a_1)
&1 \\
12&\outEVI{-\!2}{1}{1}{1}{1} &\outEVI{1}{1}{0}{1}{1}&-E_6
&1 \\
6&\outEVI{-\!5}{1}{1}{1}{1}&\outEVI{1}{0}{0}{1}{0}&-(3A_2)
&1 \\
4&\outEVI{-\!6}{1}{1}{1}{1}&\outEVI{0}{0}{0}{1}{0}&-D_4(a_1)
&\boldsymbol{\mu}_4\times\boldsymbol{\mu}_4\\
2&\outEVI{-\!7}{1}{1}{1}{1} &\outEVI{0}{0}{0}{0}{1}&-1
&\boldsymbol{\mu}_2^6 \\
\hline
\end{array}}
$$
\end{center}
\vskip25pt
\begin{center}
{\small Table 4: The stable gradings for ${E_7}$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w&S_0&A\\
\hline
18=h_\vartheta&\EVII{1}{1}{1}{1}{1}{1}{1}{1} &\EVII{1}{1}{1}{1}{1}{1}{1}{1} &E_7
&1&E_7\\
14&\EVII{-3}{1}{1}{1}{1}{1}{1}{1} &\EVII{1}{1}{1}{1}{0}{1}{1}{1}&E_7(a_1)
&1&E_7(a_1)\\
6&\EVII{-11}{1}{1}{1}{1}{1}{1}{1}&\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} &E_7(a_4)
&1 &E_7(a_5)\\
2&\EVII{-15}{1}{1}{1}{1}{1}{1}{1} &\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}&7A_1
&\boldsymbol{\mu}_2^6&-\\
\hline
\end{array}}
$$
\end{center}
\begin{center}
{\small Table 5: The stable gradings for ${E_8}$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w&S_0&A\\
\hline
30=h_\vartheta&\E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad &\E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8
&1&E_8\\
24&\E{-5}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} \quad &\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}&E_8(a_1) &1&E_8(a_1)\\
20&\E{-9}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}\quad &\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}&E_8(a_2) &1&E_8(a_2)\\
15&\E{-14}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}&E_8(a_5)
&1&E_8(a_4)\\
12&\E{-17}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_8(a_3)
&1&E_8(a_5)\\
10&\E{-19}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad&\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_8(a_6)=-2A_4
&1&E_8(a_6)\\
8&\E{-21}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad &\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}&D_8(a_3) &\boldsymbol{\mu}_2\times\boldsymbol{\mu}_2&-\\
6&\E{-23}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad &\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}&E_8(a_8)=-4A_2
&1&E_8(a_7)\\
5&\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad &\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}&2A_4
&\boldsymbol{\mu}_5\times\boldsymbol{\mu}_5&-\\
4&\E{-25}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad&\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&2D_4(a_1)
&\boldsymbol{\mu}_2^4&-\\
3&\E{-26}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad&\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&4A_2
&\boldsymbol{\mu}_3^4&-\\
2&\E{-27}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\quad &\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{0}&8A_1=-1
&\boldsymbol{\mu}_2^8&-\\
\hline
\end{array}}
$$
\end{center}
\vskip15pt
\begin{center}
{\small Table 6: The stable gradings for ${F_4}$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w&S_0&A \\
\hline
12=h_\vartheta&\ \ \FIV{1}{1}{1}{1}{1} &\FIV{1}{1}{1}{1}{1}&F_4 &1&F_4\\
8&\FIV{-3}{1}{1}{1}{1} &\FIV{1}{1}{1}{0}{1}&B_4
&\boldsymbol{\mu}_2&F_4(a_1)\\
6&\FIV{-5}{1}{1}{1}{1}&\FIV{1}{0}{1}{0}{1}&F_4(a_1)
&1&F_4(a_2)\\
4&\FIV{-7}{1}{1}{1}{1}&\FIV{1}{0}{1}{0}{0}&D_4(a_1)
&\boldsymbol{\mu}_2\times\boldsymbol{\mu}_2&F_4(a_3)\\
3&\FIV{-8}{1}{1}{1}{1}&\FIV{0}{0}{1}{0}{0}&A_2+\tilde A_2
&\boldsymbol{\mu}_3\times\boldsymbol{\mu}_3&-\\
2&\FIV{-9}{1}{1}{1}{1} &\FIV{0}{1}{0}{0}{0}&4A_1
&\boldsymbol{\mu}_2^4&-\\
\hline
\end{array}}
$$
\end{center}
\vskip15pt
\begin{center}
{\small Table 7: The stable gradings for ${G_2}$ }
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccc}
\hline
m&\text{un-normalized}&\text{normalized}& w &S_0&A\\
\hline
6=h_\vartheta&\ \ 1\ 1\Rrightarrow 1 &1\ 1\Rrightarrow 1&G_2
&1&G_2\\
3&-2\ 1\Rrightarrow 1 &1\ 1\Rrightarrow 0&A_2
&\boldsymbol{\mu}_3&G_2(a_1)\\
2&-3\ 1\Rrightarrow 1&0\ 1\Rrightarrow 0&A_1+\tilde A_1
&\boldsymbol{\mu}_2\times\boldsymbol{\mu}_2&-\\
\hline
\end{array}}
$$
\end{center}
\vskip25pt
\begin{center}
{\small Table 8: The stable gradings for ${^3D_4}$ }
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{c c c c c }
\hline
m&\text{un-normalized}&\text{normalized}& w\vartheta\in W(F_4)&S_0\\
\hline
12=h_\vartheta&1\ 1\Lleftarrow 1 &1\ 1\Lleftarrow 1&F_4
&1\\
6&-1\ 1\Lleftarrow 1&1\ 0\Lleftarrow 1&F_4(a_1)
&1\\
3&-2\ 1\Lleftarrow 1&0\ 0\Lleftarrow 1&A_2+\tilde A_2
&\boldsymbol{\mu}_3\times\boldsymbol{\mu}_3\\
\hline
\end{array}}
$$
\end{center}
\subsection{Stable gradings of classical Lie algebras}
\label{classical}
Here we tabulate the stable gradings of classical Lie algebras.
For inner type $A_n$ the only stable grading is the Coxeter one, so we omit this case.
\subsubsection{Type ${^2\!A_\ell}$}
The stable gradings in type ${^2\!A_\ell}$ correspond to divisors of $\ell$ and $\ell+1$, each having odd quotient $d=m/2$. Conjugacy classes in the symmetric group are denoted by their partitions. For example, $[d^{2k+1}]$ consists of the products of
$2k+1$ disjoint $d$-cycles.
\begin{center}
{\small Table 9: The stable gradings for ${^2A_2}$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{ c c c c}
\hline
m=2d&\text{Kac diagram}& w\vartheta&S_0\\
\hline
6=h_\vartheta
&
\twoAtwooo
&-1\times [3]
&1\\
2 &\twoAtwoo &-[1^3]
&\boldsymbol{\mu}_2\times\boldsymbol{\mu}_2\\
\hline
\end{array}}
$$
\end{center}
\begin{center}
{\small Table 10: The stable gradings for ${^2A_{2n}}$, $n\geq 2$}
$$
{\renewcommand{\arraystretch}{1.5}
\begin{array}{c c c c}
\hline
m=2d&\text{Kac diagram}& w\vartheta&S_0\\
\hline
2(2n+1)=h_\vartheta
&1 \Rightarrow 1\ \ 1\ \cdots\ 1\ \ 1\ \Rightarrow 1 &-1\times[2n+1]
&1\\
2&1 \Rightarrow 0\ \ 0\ \ 0\cdots\ 0\ \ 0\ \Rightarrow 0 &-1\times[1^{2n+1}]
&\boldsymbol{\mu}_2^{2n}\\
\frac{2(2n+1)}{2{k}+1},\quad {k}>0 &
1 \Rightarrow\underset{A_{2{k}} } { \underbrace{ 0\cdots0} }\ \ 1\ \
\underset{A_{2{k}} } { \underbrace{ 0\cdots0} }\ \ 1\ \ \cdots\ \ 1\ \
\underset{A_{2{k}} } { \underbrace{ 0\cdots0} }\Rightarrow 1
&-1\times [d^{2{k}+1}]
&\boldsymbol{\mu}_2^{2{k}}\\
\frac{2n}{k},\quad 1<\frac{n}{k}\ \text{odd}\quad &
1 \Rightarrow\underset{A_{2{k}-1} } { \underbrace{ 0\cdots0} }\ \ 1\ \
\underset{A_{2{k}-1} } { \underbrace{ 0\cdots0} }\ \ 1\ \ \cdots\ \ 1\ \
\underset{B_{k} } { \underbrace{ 0\cdots0 \Rightarrow 0}}
&-1\times [d^{2{k}},1]
&\boldsymbol{\mu}_2^{2{k}}\\
\hline
\end{array}}
$$
\end{center}
\begin{center}
{\small Table 11: The stable gradings for ${^2A_{2n-1}}$, $n\geq 3$}
$$
{\renewcommand{\arraystretch}{1.5}
\begin{array}{ cc c c}
\hline
m&\text{Kac diagram}& w\vartheta &S_0\\
\hline
&&&\\
2(2n-1)=h_\vartheta&
\begin{split}
&1\\
1\ \ \ & 1\ \ \ 1\ \ \ 1\ \ \ 1\cdots 1\ \ \ 1\Leftarrow1\\
\end{split}\qquad
&
-1\times [2n-1]
&1\\
&&&\\
2n\quad (\text{$n$ odd})&
\begin{split}
&1\\
1\ \ \ & 0\ \ \ 1\ \ \ 0\ \ \ 1\cdots 1\ \ \ 0\Leftarrow1\\
\end{split}\qquad
&
-1\times [n^2]
&1\\
&&&\\
\frac{2(2n-1)}{2{k}+1},\quad {k}>0 &
\begin{split}
&\quad \ \ 0\\
&\underset{ D_{{k}+1} } {\underbrace{0\ \ \ \ 0\ \cdots \ 0}}\ \ 1\ \
\underset{A_{2{k}} } {\underbrace{ 0\ \cdots \ 0}}\ \ 1\cdots
\ 1\ \ \underset{A_{2{k}} } {\underbrace{ 0\ \cdots\ 0}}
\Leftarrow 1
\end{split}
&-1\times [d^{2{k}+1},1]&\boldsymbol{\mu}_2^{2{k}}\\
&&&\\
\frac{2n}{k},\quad 1<\frac{n}{k}\ \text{odd}\quad
&
\begin{split}
&\quad \ \ 0\\
&\underset{ D_{{k}} } {\underbrace{0\ \ \ \ 0\ \cdots \ 0}}\ \ 1\ \
\underset{A_{2{k}-1} } {\underbrace{ 0\ \cdots \ 0}}\ \ 1\cdots
\ 1\ \ \underset{A_{2{k}-1} } {\underbrace{ 0\ \cdots\ 0}}
\Leftarrow 1
\end{split}
&-1\times [d^{2{k}}]
&\boldsymbol{\mu}_2^{2{k}-2}\\
\hline
\end{array}}
$$
\end{center}
\newpage
\subsubsection{Types $B_n, C_n$}
The stable gradings for type $B_n$ and $C_n$ correspond to divisors $k$ of $n$, with period $m=2n/k$. The corresponding class in $W(B_n)=W(C_n)$,
denoted $kB_{n/k}$, consists of the $k^{th}$ powers of a Coxeter element.
\begin{center}
{\small Table 12: The stable gradings for type $B_n$}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccc}
\hline
k=\frac{2n}{m}& \text{Kac diagram}& w& S_0\\
\hline
&&&\\
1
&
\begin{split}
&1\\
1\ \ \ & 1\ \ \ 1\ \ \ 1\ \ \ 1\cdots 1\ \ \ 1\Rightarrow1\\
\end{split}
&
B_{n}
&1\\
&&&\\
\underset{n\ \text{even}}{2}
&
\begin{split}
&1\\
1\ \ \ & 0\ \ \ 1\ \ \ 0\ \ \ 1\cdots 0\ \ \ 1\Rightarrow0\\
\end{split}
&
2 B_{n/2}
&\boldsymbol{\mu}_2\\
&&&\\
\underset{{k\ \text{even}}}{k>2}
&\begin{split}
&\quad \ 0\\
&\underset{ D_{{k}/2} } {\underbrace{0\ \ \ 0\ \cdots \ 0}}\ \ \ 1\ \ \
\underset{A_{{k}-1} } {\underbrace{ 0\ \cdots \ 0}}\ \ 1\cdots
1\ \ \ \underset{A_{{k}-1} } {\underbrace{ 0\ \cdots\ 0}}\ \ \ 1\ \ \
\underset{B_{{k}/2} } {\underbrace{ 0\cdots 0\ \Rightarrow 0}}
\end{split}
& {k} B_{n/{k}}
&\boldsymbol{\mu}_2^{{k}-1}\\
&&&\\
\underset{k\ \text{odd}}{k>1}
&
\begin{split}
&\quad \ 0\\
&\underset{ D_{({k}+1)/2} } {\underbrace{0\ \ \ 0\ \cdots \ 0}}\ \ \ 1\ \ \
\underset{A_{{k}-1} } {\underbrace{ 0\ \cdots \ \ 0}}\ \ 1\cdots
1\ \ \ \underset{A_{{k}-1} } {\underbrace{ 0\ \cdots\ \ 0}}\ \ \ 1\ \ \
\underset{B_{({k}-1)/2} } {\underbrace{ 0\cdots 0\ \Rightarrow 0}}
\end{split}
&
{k} B_{n/{k}}
&\boldsymbol{\mu}_2^{{k}-1}\\
\hline
\end{array}}
$$
\end{center}
\vskip50pt
\begin{center}
{\small Table 13: The stable gradings for type $C_n$}
$$
{\renewcommand{\arraystretch}{1.5}
\begin{array}{cccc}
\hline
k=\frac{2n}{m}&\text{Kac diagram}& w&S_0\\
\hline
1
&
1 \Rightarrow 1\ \ 1 \cdots 1\ \ 1\Leftarrow 1
& B_{n}
&1 \\
k>1\quad
&
1 \Rightarrow\underset{A_{{k}-1} } { \underbrace{ 0\cdots0} }\ \ 1\ \
\underset{A_{{k}-1} } { \underbrace{ 0\cdots0} }\ \ 1\ \ \cdots\ \ 1\ \
\underset{A_{{k}-1} } { \underbrace{ 0\cdots0} }\Leftarrow 1
&{k}B_{n/{k}}
&\boldsymbol{\mu}_2^{{k}-1} \\
\hline
\end{array}}
$$
\end{center}
\newpage
\subsubsection{Types $D_n$ and ${^2\!D_n}$ \quad ($n\geq 4$)}
The stable gradings for type $D_n$ correspond to even divisors $k$ of $n$ and odd divisors $\ell$ of $n-1$. The stable gradings for type ${^2\!D_n}$ correspond to odd divisors $\ell$ of $n$ and even divisors $k$ of $n-1$.
\begin{center}
{\small Table 14: The stable gradings for type $D_n$, $n\geq 4$}
$$
{\renewcommand{\arraystretch}{1.5}
\begin{array}{cccc}
\hline
m&\text{Kac diagram}& w&S_0\\
\hline
2n-2=h_\vartheta\quad&
\begin{split}
&\ 1\qquad\qquad\ 1\\
1\ \ \ \!& \ 1\ \ \ 1\cdots\ 1\ \ \ 1\ \ \ 1\\
\end{split}
& B_1+B_{n-1}
&1\\
&&&\\
n\quad (\text{if $n$ is even})&
\begin{split}
&1\qquad\qquad\qquad\ \ 1\\
1\ \ & 0\ \ 1\ \ 0\ \ 1\cdots0\ \ 1\ \ 0\ \ 1\\
\end{split}
& 2 B_{n/2}
&1\\
&&&\\
\frac{2n}{k}\quad 2<{k}\ \text{even}&
\begin{split}
&\quad 0\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\ \ \!\ \ \!0\\
&\underset{ D_{{k}/2} } {\underbrace{0\ \ 0\cdots \ 0}}\ \ 1\ \
\underset{A_{{k}-1} } {\underbrace{ 0\ \cdots \ 0}}\ \ 1\cdots
1\ \ \underset{A_{{k}-1} } {\underbrace{ 0\ \cdots\ 0}}\ \ 1\ \
\underset{D_{{k}/2} } {\underbrace{ 0\cdots 0\ \ \ 0}}
\end{split}
& {k} B_{n/{k}}
&\boldsymbol{\mu}_2^{{k}-2}\\
&&&\\
\frac{2n-2}{\ell}\quad 1< {\ell}\ \text{odd}&
\begin{split}
&\quad 0\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ \ \ \!0\\
&\underset{ D_{({\ell}+1)/2} } {\underbrace{0\ \ 0\cdots \ 0}}\ \ 1\ \
\underset{A_{{\ell}-1} } {\underbrace{ 0\ \cdots \ 0}}\ \ 1\cdots
1\ \ \underset{A_{{\ell}-1} } {\underbrace{ 0\ \cdots\ 0}}\ \ 1\ \
\underset{D_{({\ell}+1)/2} } {\underbrace{ 0\cdots 0\ \ \ 0}}
\end{split}
&B_1+{\ell} B_{(n-1)/{\ell}}
&\boldsymbol{\mu}_2^{{\ell}-1}\\
\hline
\end{array}}
$$
\end{center}
\vskip25pt
\begin{center}
{\small Table 15: The stable gradings for type ${^2\!D_n}$, $n\geq 3$}
$$
{\renewcommand{\arraystretch}{1.5}
\begin{array}{cccc}
\hline
m&\text{Kac diagram}& w&S_0\\
\hline
2n=h_\vartheta
&
1\Leftarrow 1\ \ 1\cdots1\ \ \ 1 \Rightarrow 1
& B_n
&1\\
&&&\\
n-1\quad
(\text{if $n$ is odd})&
0\Leftarrow 1\ \ 0\ \ 1\ \ 0\cdots1\ \ 0\ \ 1 \Rightarrow 0
& B_1+2 B_{(n-1)/2}
&\boldsymbol{\mu}_2\times\boldsymbol{\mu}_2\\
&&&\\
\frac{2n}{\ell}\quad 2<{\ell}\ \text{odd}
&\underset{ B_{({\ell}-1)/2} } {\underbrace{0\Leftarrow 0\cdots 0}}
\ \ 1\ \
\underset{A_{{\ell}-1} } {\underbrace{ 0\cdots 0}}\ \ 1\cdots
\ 1\ \underset{A_{{\ell}-1} } {\underbrace{ 0\cdots 0}}\ \ 1\ \
\underset{B_{({\ell}-1)/2} } {\underbrace{ 0\cdots 0 \Rightarrow 0}}
& {\ell} B_{n/{\ell}}
&\boldsymbol{\mu}_2^{{\ell}-1}\\
&&&\\
\frac{2n-2}{k}\quad 1< {k}\ \text{even}&
\underset{ B_{{k}/2} } {\underbrace{0\Leftarrow 0\cdots 0}}
\ \ 1\ \
\underset{A_{{k}-1} } {\underbrace{ 0\cdots 0}}\ \ 1\cdots
\ 1\ \underset{A_{{k}-1} } {\underbrace{ 0 \cdots 0}}\ \ 1\ \
\underset{B_{k/2} } {\underbrace{ 0\cdots 0 \Rightarrow 0}}
&B_1+{k} B_{(n-1)/{k}}
&\boldsymbol{\mu}_2^{{k}}\\
\hline
\end{array}}
$$
\end{center}
\subsection{Distinguished nilpotent elements and stable gradings}
\label{distinguished}
Kac coordinates of stable gradings are of two kinds, according as $s_0=0$ or $s_0=1$.
Expanding on section 9 of \cite{springer:regular}, we show here that all stable gradings with $s_0=1$ in exceptional Lie algebras are related to distinguished nilpotent elements.
For simplicity, we assume in this section only that $k$ has characteristic zero.
Let $A$ be a distinguished nilpotent element in $\mathfrak{g}$.
That is, the connected centralizer $C_G(A)^\circ$ is unipotent.
There is a homomorphism $\check\lambda:k^\times\to G$,
such that $\Ad(\check\lambda(t))A=tA$ for all
$t\in k^\ast$.
This gives a grading
$$\mathfrak{g}=\bigoplus_{j=-a}^{a}\mathfrak{g}(j),$$
where $\mathfrak{g}(j)=\{x\in \mathfrak{g}:\ \Ad(\check\lambda(t))x=t^j\cdot x\ \ \forall t\in k^\times\}$ and
$a=\max\{j:\ \mathfrak{g}(j)\neq 0\}.$
Since $A$ is distinguished the linear map $\ad(A):\mathfrak{g}(0)\to\mathfrak{g}(1)$ is a bijection.
Set $m=a+1$, assume this is nonzero in $k$, and choose a root of unity $\zeta\in k^\times$ of order $m$.
The inner automorphism $\theta_A:=\Ad(\check\lambda(\zeta))\in \Aut(\mathfrak{g})^\circ$ has order $m$,
giving rise to a $\mathbb{Z}/m$-grading
$$\mathfrak{g}=\bigoplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i,$$
where $\mathfrak{g}_i$ is the $\zeta^i$-eigenspace of $\theta_A$ in $\mathfrak{g}$. We have
$$\mathfrak{g}_i=\sum_{\substack{-a\leq j\leq a\\ j\equiv i\mod m}}\mathfrak{g}(j),$$
so that
$$\mathfrak{g}_0=\mathfrak{g}(0)\qquad\text{and}\qquad \mathfrak{g}_1=\mathfrak{g}(-a)\oplus\mathfrak{g}(1).$$
Choose a maximal torus $T$ in a Borel subgroup $B$ of $G$ such that $\check\lambda\in X_\ast(T)$ and $\langle \alpha,\check\lambda\rangle\geq 0$ for all roots $\alpha$ of $T$ in $B$. For each of the simple roots
$\alpha_1,\dots,\alpha_\ell$ we have $\langle \alpha_i,\check\lambda\rangle\in\{0,1\}$. We set
$s_i=\langle \alpha_i,\check\lambda\rangle$, and also put $s_0=1$.
Since $\mathfrak{g}(-a)$ contains the lowest root space, it follows that $(s_0,s_1,\dots, s_\ell)$ are the normalized Kac-coordinates of $\theta_A$.
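For example, if $A$ is a regular nilpotent element then $\check\lambda=\check\rho$, so $\langle\alpha_i,\check\lambda\rangle=1$ for all $i$ and $a=\langle\tilde\alpha,\check\rho\rangle=h-1$, where $\tilde\alpha$ is the highest root. Hence $m=h$ and
$$\theta_A=\Ad(\check\rho(\zeta))=\theta_\cox$$
is the Coxeter automorphism, with all Kac coordinates $s_i=1$.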
\begin{prop}\label{9.5} The following are equivalent.
\begin{enumerate}
\item There exists $M\in \mathfrak{g}(-a)$ such that $M+A$ is regular semisimple.
\item There exists $M\in \mathfrak{g}(-a)$ such that $M+A$ is semisimple.
\item The automorphism $\theta_A$ is stable.
\end{enumerate}
\end{prop}
\proof Implication $1\Rightarrow 2$ is obvious.
We prove $2\Rightarrow 3$.
Since $A$ is distinguished, the centralizer $C_{G_0}(A)$ is finite.
Since $G_0$ preserves each summand $\mathfrak{g}(j)$, we have
$C_{G_0}(M+A)\subset C_{G_0}(A)$. Hence $C_{G_0}(M+A)$ is also finite, so the $G_0$-orbit of $M+A$ in $\mathfrak{g}_1$ is stable.
The implication $3\Rightarrow 1$ is proved in \cite[9.5]{springer:regular}.
We give Springer's argument here for completeness.
Let $F$ be a $G$-invariant polynomial on $\mathfrak{g}$ such that $F(x)\neq 0$ if and only if $x$ is regular semisimple. For example, we can choose $F$ corresponding, under the Chevalley isomorphism $k[\mathfrak{t}]^G\overset\sim\to k[\mathfrak{t}]^W$, to the product of the roots. Now assuming that $3$ holds, there are vectors $Z\in \mathfrak{g}(-a)$ and $Y_0\in\mathfrak{g}(1)$ such that $Z+Y_0$ is semisimple and has finite stabilizer in $G_0$. The centralizer $\mathfrak{m}=\mathfrak{z}(Z+Y_0)$ is then reductive, with $\mathfrak{m}^\theta=0$, so $\mathfrak{m}$ is a Cartan subalgebra of $\mathfrak{g}$ and $Z+Y_0$ is in fact regular semisimple. Hence the polynomial $F_Z$ on $\mathfrak{g}(1)$ given by $F_Z(Y):=F(Z+Y)$ does not vanish identically.
Since $A$ is distinguished, the orbit $\Ad(G_0)A$ is dense in $\mathfrak{g}(1)$, so there is $g\in G_0$ such that $F_Z(\Ad(g)A)=F(\Ad(g)^{-1}Z+A)\neq 0$. It follows that $\Ad(g)^{-1}Z+A$ is regular semisimple so $1$ holds. \qed
We say that a distinguished nilpotent element $A\in\mathfrak{g}$ is {\bf $S$-distinguished} if the equivalent conditions of Prop. \ref{9.5} hold.
{\bf A non-example:\ } It can happen that $\mathfrak{g}(-a)+\mathfrak{g}(1)$ contains semisimple elements, but none have the form $M+A$ with $M\in\mathfrak{g}(-a)$. For example, suppose $\mathfrak{g}=\mathfrak{sp}_6$ and $A$ has Jordan blocks $(4,2)$. The automorphism $\theta_A$ has Kac coordinates
$$1\Rightarrow 1\ \ \ \!0\Leftarrow 1$$
and has rank equal to $1$.
It corresponds to $w\in W(C_3)$ of type
$C_2\times C_1$, which is not $\mathbb{Z}$-regular, so $A$ is not $S$-distinguished.
\begin{prop}\label{s0} Assume that $\mathfrak{g}$ is of exceptional type and that
$\theta\in\Aut(\mathfrak{g})^\circ$ is a stable inner automorphism whose Kac coordinates satisfy $s_0=1$. Then $\theta=\theta_A$ where $A$ is an $S$-distinguished nilpotent element in $\mathfrak{g}$.
\end{prop}
\proof In the tables of section \ref{exceptional} we have listed, for each $\theta$ with $s_0=1$, the conjugacy class of a nilpotent element $A$ such that $\theta_A$ has the normalized Kac coordinates of $\theta$.
\qed
{\bf Remark 1:\ } For $n$ even there is a unique $S$-distinguished non-regular nilpotent class
in $\mathfrak{so}_{2n}$ which is also $S$-distinguished in $\mathfrak{so}_{2n+1}$, having Jordan partitions $[2n+1,2n-1]$ and $[2n+1, 2n-1,1]$, respectively. For $A$ in these classes $\theta_A$ has order $n$. In these and the exceptional cases, the map $A\mapsto\theta_A$ is a bijection from the set of $S$-distinguished nilpotent $G$-orbits in $\mathfrak{g}$ to the set of inner gradings on $\mathfrak{g}$ with $s_0=1$.
However, Prop. \ref{9.5} is false for $C_n$, $n\geq 2$.
{\bf Remark 2:\ } If $A$ is $S$-distinguished then $\mathfrak{z}(M+A)$ is a canonical Cartan subalgebra for $\theta_A$ on which $\theta_A$ acts by an element of the conjugacy class in $W$ associated to $A$ via the Kazhdan-Lusztig map \cite{kazhdan-lusztig:affinefixedpoints}. This follows from the argument in \cite[9.11]{kazhdan-lusztig:affinefixedpoints},
and confirms two entries in \cite[Table 1]{spaltenstein:kl} (for $A=E_8(a_6), E_8(a_7)$), listed there as conjectural.
{\bf Remark 3:\ } There are exactly three cases where $\mathfrak{g}_0$ is a maximal proper Levi subalgebra in $\mathfrak{g}$. These occur in $G_2, F_4$ and $E_8$, for $a=2,3,5$ respectively, where $C_{G_0}(A)$ is a symmetric group $S_3, S_4, S_5$. These groups act irreducibly on the subspaces $\mathfrak{g}(-a)$ of dimensions $1,2,4$, in which the stabilizers of a vector in general position are the isotropy groups $S_0=\boldsymbol{\mu}_3, \boldsymbol{\mu}_2\times\boldsymbol{\mu}_2$, $1$. These are the maximal abelian normal subgroups of $C_{G_0}(A)$.
\section{Positive rank gradings for type $E_{6,7,8}$ (inner case)}\label{E678}
Assume now that $\mathfrak{g}$ has type $E_n$, for $n=6,7,8$.
From Prop. \ref{Kac(w)} we have the following algorithm to find all inner automorphisms of
$\mathfrak{g}$ having positive rank. For each $m\geq 1$ list
the $W$-conjugacy classes of $m$-admissible elements in $W$. For a representative $w$ of each class, form the list $\Kac(w)_{\un}$ and apply the normalization algorithm to each element of
$\Kac(w)_{\un}$, discarding duplicate results, to obtain the list $\Kac(w)$ of normalized Kac coordinates. Then by Prop. \ref{Kac(w)},
the union of the lists $\Kac(w)$ over all conjugacy classes of $m$-admissible $w$ gives all positive rank inner automorphisms of order $m$.
To find the $\Kac(w)_{\un}$ when each $w_i$ is $\mathbb{Z}$-regular, we can use Prop. \ref{clift} to find the Kac coordinates of each $w_i$, which lead to those of $w$ via the normalization algorithm.
It turns out that we obtain all positive rank gradings from those $m$-admissible $w$ for which each factor $w_i$ is not only elliptic but also $\mathbb{Z}$-regular in $W_{J_i}$. However, we do not have an {\it a priori} proof of this fact,
so we must also compute Kac coordinates of lifts in the small number of cases
where not all $w_i$ are $\mathbb{Z}$-regular.
These non-regular cases are handled as follows.
By induction, we assume $w=w_i$ lies in no proper reflection subgroup and we consider the powers of $w$.
To illustrate the method, take the nonregular element $w=E_8(a_7)=-A_2E_6$ of order $12$.
First list the $32$ normalized Kac coordinates $(s_i)$ with $s_i\in\{0,1\}$
and $s_0 + 2 s_1 + 3 s_2 + 4 s_3 + 6 s_4 + 5 s_5 + 4 s_6 + 3 s_7 + 2 s_8 =12$.
We have $w^2$ and $w^3$ in the classes $A_2E_6(a_2)$ and $2A_2+2A_1$
whose lifts have Kac coordinates
$\E{0}{0}{1}{0}{0}{0}{0}{1}{0}$ and $\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}$, respectively.
Only one of the 32 elements on the list satisfies these two conditions,
namely $\E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}$. Therefore this is the Kac diagram
for the lift of $w$ in the class
$E_8(a_7)$.
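The counting claim in the first step of this example can be checked mechanically. The following sketch (in Python; the names \texttt{MARKS} and \texttt{binary\_kac\_vectors} are ours, not part of the text) enumerates the $\{0,1\}$-vectors $(s_0,\dots,s_8)$ on the affine $E_8$ diagram satisfying $s_0 + 2 s_1 + 3 s_2 + 4 s_3 + 6 s_4 + 5 s_5 + 4 s_6 + 3 s_7 + 2 s_8 = 12$, confirming that there are $32$ of them and that the diagram named above is among them:

```python
# Sanity check for the enumeration step in the E_8(a_7) example:
# count the Kac coordinate vectors (s_0,...,s_8) with each s_i in {0,1}
# and weighted sum  s_0 + 2s_1 + 3s_2 + 4s_3 + 6s_4 + 5s_5 + 4s_6 + 3s_7 + 2s_8 = 12.
from itertools import product

# Marks a_i of the affine E_8 diagram, in the order used by the \E macro.
MARKS = (1, 2, 3, 4, 6, 5, 4, 3, 2)

def binary_kac_vectors(order):
    """All {0,1}-vectors s with sum(a_i * s_i) == order."""
    return [s for s in product((0, 1), repeat=len(MARKS))
            if sum(a * x for a, x in zip(MARKS, s)) == order]

vectors = binary_kac_vectors(12)
print(len(vectors))                            # 32, as stated in the text
print((1, 0, 0, 1, 0, 1, 0, 0, 1) in vectors)  # the lift of E_8(a_7)
```

Selecting the unique vector compatible with the Kac coordinates of $w^2$ and $w^3$ still requires the normalization algorithm, so only the enumeration itself is checked here.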
\subsection{A preliminary list of Kac coordinates for positive rank gradings of inner type}
\label{preliminary}
For each possible order $m$ we list the $m$-admissible elements in $W(E_{6,7,8})$,
the rank $r=\rank(w)$, and the form of the un-normalized Kac coordinates of the lifts of $w$. In the column $\Kac(w)_{\un}$, each $\ast$ is an independent integer variable ranging over a set of representatives of $\mathbb{Z}/m$ such that the order is always $m$. For each vector of $\ast$-values we apply the normalization algorithm to obtain the normalized Kac coordinates $\Kac(w)$ in the last column. The sets $\Kac(w)$ are not disjoint. In a second set of tables (section \ref{Eposrank}), we will select, for each $\theta$ appearing in $\cup_w\Kac(w)$, a $w$ of maximal rank for which $\Kac(w)$ contains $\theta$.
We use Carter's notation for conjugacy classes in $W$, augmented as follows.
If $X$ is a conjugacy class and $-1\in W$ then $-X=\{-w:\ w\in X\}$. This makes some classes easier to understand; for example,
$E_8(a_7)=-A_2E_6$.
\begin{center}
{\small Table 16: $\Kac(w)_{\un}$ and $\Kac(w)$ for $m$-admissible $w$ in $W(E_6)$ }
$$
\begin{array}{|c|c|c|l|l|}
\hline
m& w & r&\Kac(w)_{\un}& \Kac(w)\\
\hline\hline
12& E_6 & 1& \EVI{1}{1}{1}{1}{1}{1}{1}& \EVI{1}{1}{1}{1}{1}{1}{1}\\
\hline
9& E_6(a_1) & 1& \EVI{1}{1}{1}{1}{0}{1}{1}& \EVI{1}{1}{1}{1}{0}{1}{1}\\
\hline
8& D_5& 1& \EVI{1}{ \ast}{ 1}{ 1}{ 1}{ 1}{ \ast}&
\EVI{0}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}\qquad \EVI{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}\\
\hline
6& E_6(a_2) & 2& \EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}& \EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\\
\hline
6& A_5& 1&\EVI{\ast}{ 1}{ \ast}{ 1}{ 1}{ 1}{ 1} &\EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad\EVI{0}{0}{1}{1}{0}{1}{0}\\
\hline
6& D_4& 1& \EVI{\ast}{ \ast}{ 1}{ 1}{ 1}{ 1}{ \ast}&\EVI{1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0} \qquad
\EVI{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1}
\qquad \EVI{0}{ 1}{ 1}{ 1}{ 0}{ 0}{ 1}\qquad \EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\\
\hline
5& A_4& 1&\EVI{\ast}{ 1}{ \ast}{ 1}{ 1}{ 1}{ \ast} &\EVI{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}
\qquad \EVI{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad \EVI{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}\\
\hline
4& D_4(a_1)& 2&\EVI{\ast}{ \ast}{ 1}{ 1}{ 0}{ 1}{ \ast} &
\EVI{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad \EVI{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}\\
\hline
4& A_3& 1&\EVI{\ast}{ \ast}{ \ast}{ 1}{ 1}{ 1}{ \ast} &
\EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad \EVI{0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}
\qquad \EVI{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1}\qquad \EVI{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}
\qquad \EVI{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\\
\hline
3& 3A_2& 3&\EVI{1}{ 1}{ 1}{ 1}{ \ast}{ 1}{ 1} &\EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
\hline
3& 2A_2& 2& \EVI{\ast}{ 1}{ \ast}{ 1}{ \ast}{ 1}{ 1} &\EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad \EVI{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
3& A_2& 1&\EVI{\ast}{\ast}{1}{\ast}{1}{\ast}{\ast} &
\EVI{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}\qquad \EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad \EVI{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad \EVI{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad \EVI{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
2& 4A_1& 4&\EVI{1}{ 1}{ \ast}{ \ast}{ 1}{ \ast}{ 1} &\EVI{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 3A_1& 3&\EVI{\ast}{ 1}{ \ast}{ \ast}{ 1}{ \ast}{ 1} &\EVI{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 2A_1& 2&\EVI{1}{ \ast}{ \ast}{ \ast}{ 1}{ \ast}{ \ast} & \EVI{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}
\qquad \EVI{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
2& A_1& 1&\EVI{\ast}{ \ast}{ \ast}{ \ast}{ 1}{ \ast}{ \ast} &
\EVI{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}\qquad \EVI{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
\end{array}
$$
\end{center}
\begin{center}
{\small Table 17: $\Kac(w)_{\un}$ and $\Kac(w)$ for $m$-admissible $w$ in $W(E_7)$ }
$$
\begin{array}{|c|c|c|l|l|}
\hline
m& w & r&\Kac(w)_{\un}& \Kac(w)\\
\hline\hline
18& E_7 & 1& \EVII{1}{1}{1}{1}{1}{1}{1}{1}& \EVII{1}{1}{1}{1}{1}{1}{1}{1}\\
\hline
14& E_7(a_1)& 1& \EVII{1}{1}{1}{1}{0}{1}{1}{1}& \EVII{1}{1}{1}{1}{0}{1}{1}{1}\\
\hline
12& E_7(a_2) & 1& \EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}& \EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
12& E_6& 1& \EVII{\ast}{1}{1}{1}{1}{1}{1}{\ast}&
\EVII{1}{0}{1}{1}{0}{1}{1}{1}\qquad \EVII{0}{ 1}{ 0}{ 0}{ 1}{ 1}{ 1}{ 1}
\qquad\EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}
\\
\hline
10& D_6 & 1& \EVII{\ast}{\ast}{1}{1}{1}{1}{1}{1}&
\EVII{0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}\qquad\EVII {1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}
\qquad\EVII {1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
9& E_6(a_1) & 1& \EVII{\ast}{1}{1}{1}{0}{1}{1}{\ast}&
\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\qquad\EVII {1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1} \\
\hline
8& D_5& 1& \EVII{\ast}{\ast}{1}{1}{1}{1}{1}{\ast}&
\EVII{0}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1}\qquad\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1}
\qquad\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}
\qquad\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\\
&&&&\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\qquad\EVII{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}
\qquad\EVII{1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\EVII{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1}\\
\hline
8& D_6(a_1) & 1& \EVII{\ast}{\ast}{1}{1}{0}{1}{1}{1}&
\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\EVII {1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
8& A_7& 1& \EVII{1}{1}{\ast}{1}{1}{1}{1}{1}&
\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0} \\
\hline
7& A_6 & 1& \EVII{\ast}{1}{\ast}{1}{1}{1}{1}{1}&
\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} \\
\hline
6& E_7(a_4) & 3& \EVII{1}{0}{0}{0}{1}{0}{0}{1}& \EVII{1}{0}{0}{0}{1}{0}{0}{1}\\
\hline
6& D_6(a_2) & 2& \EVII{\ast}{\ast}{1}{1}{0}{1}{0}{1}&
\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
6& E_6(a_2) & 2& \EVII{\ast}{1}{0}{0}{1}{0}{1}{\ast}&
\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\EVII {1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
6& D_4& 1& \EVII{\ast}{\ast}{1}{1}{1}{1}{\ast}{\ast}&
\EVII{1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1}
\qquad\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1}\qquad\EVII{0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\EVII{0}{ 0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}\\
&&&&\EVII {0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad\EVII {0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVII {1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}
\qquad\EVII {1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1}\\
\hline
6& A_5'' & 1& \EVII{\ast}{1}{\ast}{1}{1}{1}{1}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}\qquad\EVII {0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
6& A_5' & 1& \EVII{1}{1}{1}{1}{1}{\ast}{\ast}{\ast}&
\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}\qquad\EVII {0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\EVII{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}\\
\hline
5& A_4& 1& \EVII{\ast}{\ast}{\ast}{1}{1}{1}{1}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\EVII {0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}
\qquad\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}\qquad\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}
\qquad\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\\
\hline
4& 2A_3& 2& \EVII{1}{1}{\ast}{1}{\ast}{1}{1}{1}&
\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0} \qquad\EVII {0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0} \\
\hline
4& D_4(a_1)& 2& \EVII{\ast}{\ast}{1}{1}{0}{1}{\ast}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\EVII{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
4& A_3& 1& \EVII{\ast}{\ast}{\ast}{1}{1}{1}{\ast}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}
\qquad\EVII{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
3& 3A_2& 3& \EVII{1}{1}{1}{\ast}{1}{\ast}{1}{1}& \EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
\hline
3& 2A_2& 2& \EVII{1}{1}{1}{\ast}{1}{\ast}{\ast}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\EVII {0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
3& A_2& 1& \EVII{\ast}{\ast}{1}{\ast}{1}{\ast}{\ast}{\ast}&
\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}\qquad\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
2& 7A_1& 7& \EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 6A_1& 6& \EVII{\ast}{\ast}{0}{0}{0}{1}{0}{0}&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 5A_1& 5& \EVII{1}{\ast}{1}{1}{\ast}{1}{\ast}{1}&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 4A_1'& 4& \EVII{1}{\ast}{1}{1}{\ast}{1}{\ast}{\ast}&
\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 4A_1''& 4& \EVII{\ast}{\ast}{0}{0}{1}{0}{\ast}{\ast}&
\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
\hline
2& 3A_1'& 3& \EVII{1}{\ast}{1}{1}{\ast}{\ast}{\ast}{\ast}&
\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\EVII {1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
2& 3A_1''& 3& \EVII{1}{\ast}{1}{\ast}{\ast}{1}{\ast}{\ast}&
\EVII {0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
\hline
2& 2A_1& 2& \EVII{\ast}{\ast}{\ast}{1}{\ast}{1}{\ast}{\ast}&
\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\EVII {1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
2& A_1& 1& \EVII{\ast}{\ast}{\ast}{\ast}{\ast}{1}{\ast}{\ast}&
\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\EVII {0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\EVII {1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
\end{array}
$$
\end{center}
\begin{center}
{\small Table 18: $\Kac(w)_{\un}$ and $\Kac(w)$ for $m$-admissible $w$ in $W(E_8)$ }
$$
\begin{array}{|c|c|c|l|l|}
\hline
m& w & r&\Kac(w)_{\un}& \Kac(w)\\
\hline\hline
30& E_8 & 1& \E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}& \E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}\\
\hline
24& E_8(a_1)& 1& \E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}& \E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}\\
\hline
20& E_8(a_2) & 1& \E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}& \E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
18& E_8(a_4) & 1& \E{1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}& \E{1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
18& E_7 & 1& \E{\ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ \ast}&
\E{0}{ 1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}\qquad \E{1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}
\qquad \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}{ 1}\qquad \E{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}\\
&&&&\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}\\
\hline
15& E_8(a_5) & 1& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}\\
\hline
14& E_7(a_1) & 1& \E{\ast}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ \ast}&
\E{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}\qquad \E{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1}\qquad \E{1}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}
\\
\hline
14& D_8 & 1& \E{1}{ \ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}& \E{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}\\
\hline
12& E_8(a_3) & 2& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
\hline
12& E_8(a_7) & 1& \E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}& \E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
12& E_7(a_2) & 1& \E{\ast}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}{ \ast}&
\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad \E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}
\qquad \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\qquad \E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}
\\
\hline
12& D_8(a_1) & 1& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}& \E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
\hline
12& D_7 & 1& \E{\ast}{ \ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}& \E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}
\qquad
\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
\hline
12& E_6 & 1& \E{\ast}{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ \ast}{ \ast}&
\E{0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\qquad\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}
\qquad\E{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1}\\
&&&&\E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1}\qquad\E {1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\\
\hline
10& E_8(a_6) & 2& \E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}& \E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
\hline
10& D_6 & 1& \E{\ast}{\ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ \ast}&
\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}
\qquad\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
\hline
9& E_6(a_1) & 1& \E{\ast}{1}{ 1}{ 1}{ 0}{ 1}{ 1}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}
\qquad\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}\\
&&&&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\\
\hline
9& A_8& 1& \E{1}{ 1}{ \ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}& \E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}\\
\hline
8& D_8(a_3) & 2& \E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}& \E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\\
\hline
8& D_6(a_1) & 1& \E{\ast}{\ast}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\\
\hline
8& D_5 & 1& \E{\ast}{\ast}{ 1}{ 1}{ 1}{ 1}{ 1}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad\E{0}{ 1}{ 0}{0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
&&&&\E{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}
\qquad\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{1}{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
&&&&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}
\\
\hline
8& A_7' & 1& \E{1}{\ast}{ 1}{ \ast}{ 1}{1}{1}{1}{1}&
\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}\qquad \E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}
\\
\hline
8& A_7'' & 1& \E{\ast}{ 1}{\ast}{1}{1}{1}{1}{1}{1}&
\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}
\\
\hline
7& A_6 & 1& \E{\ast}{ 1}{\ast}{1}{1}{1}{1}{1}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}
\\
\hline
\end{array}
$$
\end{center}
\begin{center}
{\small Table 18 continued: $\Kac(w)_{\un}$ and $\Kac(w)$ for $m$-admissible $w$ in $W(E_8)$ }
$$
\begin{array}{|c|c|c|l|l|}
\hline
m& w & r&\Kac(w)_{\un}& \Kac(w)\\
\hline\hline
6& E_8(a_8) & 4& \E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}& \E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
6& E_7(a_4) & 3& \E{\ast}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ \ast}&
\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad \E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
6& E_6(a_2) & 2& \E{\ast}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
\hline
6& D_6(a_2) & 2& \E{\ast}{ \ast}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ \ast}&
\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}
\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
6& D_4 & 1& \E{\ast}{ \ast}{ 1}{1}{ 1}{ 1}{\ast}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\\
&&&&\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
6& A_5 & 1& \E{\ast}{1}{ \ast}{1}{ 1}{ 1}{1}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}\qquad\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}
\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}
\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
\hline
5& 2A_4 & 2& \E{1}{1}{1}{1}{1}{\ast}{1}{1}{1}& \E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
5& A_4 & 1& \E{\ast}{1}{ \ast}{1}{ 1}{ 1}{\ast}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
&&&&\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
4& 2D_4(a_1) & 4& \E{1}{ \ast}{ 1}{1}{ 0}{ 1}{\ast}{ 1}{ 0}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
4& D_4(a_1) & 2& \E{\ast}{ \ast}{ 1}{1}{ 0}{ 1}{\ast}{ \ast}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
4& 2A_3' & 2& \E{\ast}{ 1}{ \ast}{1}{ \ast}{ 1}{1}{ 1}{ \ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1} \qquad\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
4& 2A_3''& 2& \E{\ast}{ 1}{ \ast}{1}{ 1}{ \ast}{1}{ 1}{1}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\\
\hline
4& A_3& 1& \E{\ast}{ 1}{ \ast}{1}{ 1}{ \ast}{\ast}{ \ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}\qquad\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}
\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
3& 4A_2& 4& \E{1}{ 1}{ 1}{1}{ \ast}{ 1}{1}{ \ast}{1}&
\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
3& 3A_2& 3& \E{1}{ 1}{ \ast}{1}{ \ast}{ 1}{1}{ \ast}{1}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
3& 2A_2& 2& \E{\ast}{ 1}{ \ast}{1}{ \ast}{ 1}{1}{ \ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}
\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
3& A_2& 1&\E{\ast}{ 1}{ \ast}{1}{ \ast}{ \ast}{\ast}{ \ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}
\qquad\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 8A_1& 7& \E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{0}&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{0}\\
\hline
2& 7A_1& 7& \E{\ast}{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{\ast}&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{0}{0}\\
\hline
2& 6A_1& 6& \E{\ast}{\ast}{0}{0}{0}{1}{0}{0}{\ast}&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{0}{0}\\
\hline
2& 5A_1& 5& \E{1}{1}{1}{\ast}{\ast}{1}{\ast}{1}{\ast}&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{0}{0}\\
\hline
2& 4A_1'& 4& \E{\ast}{\ast}{1}{1}{\ast}{1}{\ast}{1}{\ast}&
\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 4A_1''& 4& \E{\ast}{\ast}{0}{0}{1}{0}{\ast}{\ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 3A_1& 3& \E{\ast}{\ast}{1}{1}{\ast}{1}{\ast}{\ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& 2A_1& 2& \E{\ast}{\ast}{1}{1}{\ast}{\ast}{\ast}{\ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
2& A_1& 1& \E{\ast}{\ast}{1}{\ast}{\ast}{\ast}{\ast}{\ast}{\ast}&
\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}\qquad\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}\\
\hline
\end{array}
$$
\end{center}
\subsection{Tables of positive rank gradings for $E_6, E_7$ and $E_8$}\label{Eposrank}
The previous lists contain the Kac coordinates of all positive rank gradings, usually with multiple occurrences. We now discard those in each $\Kac(w)$ which appear in some $\Kac(w')$ with
$\rank(w')>\rank(w)$. The remaining elements of $\Kac(w)$ are then the Kac coordinates
of automorphisms $\theta$ of order $m$ with $\rank(\theta)=\rank(w)$. For each grading $\theta$ there may still be more than one $w$ with $\rank(\theta)=\rank(w)$. It turns out that every $\theta$ of positive rank is contained in $\Kac(w)_{\un}$ for some $m$-admissible $w$ which is a $\mathbb{Z}$-regular element in the Weyl group of a Levi subgroup $L_\theta$, and $\theta$ is a principal inner automorphism of the Lie algebra of $L_\theta$. This Levi subgroup corresponds to the subset $J$ of Lemma \ref{Jlowerbound} and is indicated in the rightmost column of the tables below.
For example, in $E_7$ the Kac diagrams
$$8_a:\ \EVII {1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\qquad\text{and}\qquad
8_b:\ \EVII{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}
$$
occur in $\Kac(w)$ for $w$ of types $D_6(a_1)$ and $D_5$.
Since $D_6(a_1)$ is not regular in any Levi subgroup of $W(E_7)$ and $w=D_5$ is regular in the $D_5$ Levi subgroup, we choose $w=D_5$, discard $w=D_6(a_1)$, and set $L_\theta=D_5$.
Since $\theta$ is principal on the Lie algebra of $L_\theta$, there is a conjugate $\theta'$ of $\theta$ whose un-normalized Kac diagram has a $1$ on each node of $J$ (cf. Lemma \ref{Jlowerbound}).
There may be more than one such $J$, corresponding to various conjugates $\theta'$, and we just pick one of them.
In the tables we try to write $w$ in a form which exhibits its regularity in the Weyl group
$W_J$. For example, in $E_6$ the gradings $4_a, 4_b$ have $w=D_4(a_1)$.\footnote{cf. Panyushev, Example 4.5.}
In case $4_a$, which is stable,
we give the alternate expression $w=E_6^3$ to make it clear that $w$ is $\mathbb{Z}$-regular in $W(E_6)$. In case $4_b$ there is no $W_{\aff}(R)$-conjugate of $\theta$ with $1$'s on the $E_6$ subdiagram. However, $w=D_5^2$ is the square of a Coxeter element in $W_{D_5}$, hence is $\mathbb{Z}$-regular in $W_{D_5}$.
The rows in our tables are ordered by decreasing $m$.
The positive rank inner gradings of a given order $m$ are named
$m_a, m_b, m_c,\dots,$ where $m_a$ is the unique principal grading of order $m$.
The principal grading $m_a$ has maximal rank and minimal dimension of $\mathfrak{g}_0$ among all gradings of order $m$.
The remaining rows of order $m$ are grouped according to $w$ and $L_\theta$,
ordered in each group by increasing dimension of $\mathfrak{g}_0$.
The little Weyl groups $W(\mathfrak{c},\theta)$ are also given, along with their degrees.
These are either cyclic or given by their notation in \cite{shephard-todd}.
We explain their computation in section \ref{littleweylE}.
\begin{center}
{\small Table 19: The gradings of positive rank in type $E_6$ (inner case)}
$$
\begin{array}{c c c c c c c c}
\hline
\text{No.}&\text{Kac diagram}& w& W(\mathfrak{c},\theta)&\text{degrees}&\theta'&L_\theta\\
\hline
12_a &\EVI{1}{1}{1}{1}{1}{1}{1} &E_6 &\boldsymbol{\mu}_{12}&12&\EVI{1}{1}{1}{1}{1}{1}{1} &E_6\\
9_a& \EVI{1}{1}{1}{1}{0}{1}{1} &E_6(a_1) &\boldsymbol{\mu}_{9}&9&\EVI{-2}{1}{1}{1}{1}{1}{1} &E_6\\
8_a&\EVI{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&\EVI{-3}{1}{1}{1}{1}{1}{1}&E_6\\
8_b&\EVI{0}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}&D_5 &\boldsymbol{\mu}_8&8&\EVI{-6}{1}{1}{1}{1}{1}{0}&D_5\\
6_a&\EVI{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1} &E_6(a_2) &G_5&6,12&\EVI{-5}{1}{1}{1}{1}{1}{1}&E_6\\
6_b,{6_b}'&\EVI{0}{ 1}{ 1}{ 1}{ 0}{ 0}{ 1} \qquad\EVI{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1}
&D_4 &\boldsymbol{\mu}_6&6&\EVI{-4}{0}{1}{1}{1}{1}{1}&D_4\\
6_c&\EVI{1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0} &D_4 &\boldsymbol{\mu}_{6}&6&\EVI{-3}{0}{1}{1}{1}{1}{0}&D_4\\
6_d&\EVI{0}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0} &A_5 &\boldsymbol{\mu}_6&6&\EVI{-3}{1}{0}{1}{1}{1}{1}&A_5\\
5_a&\EVI{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1} &A_4 &\boldsymbol{\mu}_5&5&\EVI{-6}{1}{1}{1}{1}{1}{1}&A_5\\
5_b&\EVI{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}&A_4 &\boldsymbol{\mu}_5&5&\EVI{-8}{1}{2}{1}{1}{1}{1}&A_5\\
5_c&\EVI{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1} &A_4 &\boldsymbol{\mu}_5&5&\EVI{-10}{1}{3}{1}{1}{1}{1}&A_5\\
4_a&\EVI{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0} &D_4(a_1)=E_6^3 &G_8&8,12&\EVI{-7}{1}{1}{1}{1}{1}{1}
&E_6\\
4_b&\EVI{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1} &D_4(a_1)=D_5^2 &G(4,1,2)&4,8&\EVI{-6}{0}{1}{1}{1}{1}{1}&D_5\\
4_c&\EVI{0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0} &A_3 &\boldsymbol{\mu}_4&4&\EVI{-6}{2}{1}{0}{1}{1}{1}&A_4\\
4_d,{4_d}'&\EVI{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}\qquad\EVI{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1} &A_3&
\boldsymbol{\mu}_{4}&4&\EVI{-4}{0}{0}{1}{1}{1}{1}&A_4\\
3_a&\EVI{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0} &3A_2&G_{25}&6,9,12&\EVI{-8}{1}{1}{1}{1}{1}{1}&E_6\\
3_b&\EVI{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}&2A_2=A_5^2&G(3,1,2)&3,6&\EVI{-6}{1}{0}{1}{1}{1}{1}&A_5\\
3_c&\EVI{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0} &A_2=D_4^2&\boldsymbol{\mu}_6&6&\EVI{-6}{0}{1}{1}{1}{1}{0}&D_4\\
3_d,{3_d}' &\EVI{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVI{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} &A_2=D_4^2&\boldsymbol{\mu}_6&6&
\EVI{-7}{0}{1}{1}{1}{1}{1}&D_4\\
2_a&\EVI{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0} &4A_1=E_6^6&W(F_4)&2,6,8,12&\EVI{-9}{1}{1}{1}{1}{1}{1}&E_6\\
2_b&\EVI{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1} &2A_1=A_3^2&W(B_2)&2,4&\EVI{-6}{0}{0}{1}{1}{1}{1}&A_3\\
1_a&\EVI{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0} &e&W(E_6)&2,5,6,8,9,12&\EVI{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&\varnothing\\
\hline
\end{array}
$$
\end{center}
\begin{center}
{\small Table 20: The gradings of positive rank in type $E_7$ }
$$
\begin{array}{cccccccc}
\hline
\text{No.}&\text{Kac diagram }& w& W(\mathfrak{c},\theta)&\text{degrees}&\theta'&L_\theta\\
\hline
18_a&\EVII{1}{1}{1}{1}{1}{1}{1}{1} &E_7 &\boldsymbol{\mu}_{18}&18&\EVII{1}{1}{1}{1}{1}{1}{1}{1} &E_7\\
14_a&\EVII{1}{1}{1}{1}{0}{1}{1}{1} &E_7(a_1) =-A_6&\boldsymbol{\mu}_{14}&14&\EVII{-3}{1}{1}{1}{1}{1}{1}{1} &E_7\\
12_a&\EVII{1}{1}{1}{1}{0}{1}{0}{1} &E_6 &\boldsymbol{\mu}_{12}&12&\EVII{-5}{1}{1}{1}{1}{1}{1}{1} &E_7\\
12_b&\EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1} &E_7(a_2)=-E_6 &\boldsymbol{\mu}_{12}&12&\EVII{-6}{1}{1}{1}{1}{1}{1}{2} &E_6\\
12_c&\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 1}{ 1}{ 1} &E_6 &\boldsymbol{\mu}_{12}&12&\EVII{-4}{1}{1}{1}{1}{1}{1}{0} &E_6\\
10_a&\EVII{1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1} &D_6 &\boldsymbol{\mu}_{10}&10&\EVII{-7}{1}{1}{1}{1}{1}{1}{1} &
D_6\\
10_b&\EVII {1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1} &D_6 &\boldsymbol{\mu}_{10}&10&\EVII{-9}{2}{1}{1}{1}{1}{1}{1} &D_6\\
10_c&\EVII{0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0} &D_6 &\boldsymbol{\mu}_{10}&10&\EVII{-5}{0}{1}{1}{1}{1}{1}{1} &D_6\\
9_a&\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1} &E_6(a_1)=E_7^2 &\boldsymbol{\mu}_{18}&18&
\EVII{-8}{1}{1}{1}{1}{1}{1}{1} &E_7\\
9_b&\EVII{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1} &E_6(a_1) &\boldsymbol{\mu}_{9}&9&
\EVII{-7}{1}{1}{1}{1}{1}{1}{0} & E_6\\
8_a&\EVII {1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-9}{1}{1}{1}{1}{1}{1}{1} &D_5\\
8_b&\EVII {0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-11}{2}{1}{1}{1}{1}{1}{1} &D_5\\
8_c&\EVII {0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-12}{0}{1}{1}{1}{1}{1}{6} &D_5\\
8_d&\EVII {1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-10}{1}{1}{1}{1}{1}{1}{2} &D_5\\
8_e&\EVII {1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-8}{0}{1}{1}{1}{1}{1}{2} &D_5\\
8_f&\EVII{1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-12}{1}{1}{1}{1}{1}{1}{4} &D_5\\
8_g&\EVII {0}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-6}{0}{1}{1}{1}{1}{1}{0} &D_5\\
8_h&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1} &D_5 &\boldsymbol{\mu}_{8}&8&
\EVII{-8}{1}{1}{1}{1}{1}{1}{0} &D_5\\
7_a&\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} &A_6=E_7(a_1)^2 &\boldsymbol{\mu}_{14}&14&
\EVII{-10}{1}{1}{1}{1}{1}{1}{1} &E_7\\
6_a& \EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} &E_7(a_4)=E_7^3=-3A_2&G_{26}&6,12,18&
\EVII{-11}{1}{1}{1}{1}{1}{1}{1} &E_7\\
6_b&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1} &E_6(a_2)=E_6^2&G_5&6,12&
\EVII{-10}{1}{1}{1}{1}{1}{1}{0} &E_6\\
6_c& \EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0} &D_6(a_2)&G(6,2,2)&6,6&
\EVII{-9}{0}{1}{1}{1}{1}{1}{1} &D_6\\
6_d& \EVII{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 1} &D_4&\boldsymbol{\mu}_{6}&6&
\EVII{-12}{0}{1}{1}{1}{1}{1}{2} &D_4\\
6_e&\EVII {0}{ 0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1} &D_4&\boldsymbol{\mu}_{6}&6&
\EVII{-10}{0}{1}{1}{1}{1}{1}{2} &D_4\\
6_f&\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1} &D_4&\boldsymbol{\mu}_{6}&6&
\EVII{-13}{0}{1}{1}{1}{1}{1}{5} &D_4\\
6_g&\EVII{0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0} &D_4&\boldsymbol{\mu}_{6}&6&
\EVII{-9}{0}{1}{1}{1}{1}{0}{3} &D_4\\
6_h&\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1} &D_4&\boldsymbol{\mu}_{6}&6&
\EVII{-6}{0}{1}{1}{1}{1}{0}{0} &D_4\\
6_i&\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0} &A_5' &\boldsymbol{\mu}_{6}&6&
\EVII{-10}{2}{1}{0}{1}{1}{1}{1} &A_5' \\
6_j&\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0} &A_5''&\boldsymbol{\mu}_{6}&6&
\EVII{-9}{1}{0}{1}{1}{1}{1}{1} &A_5''\\
6_k&\EVII{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1} &A_5' &\boldsymbol{\mu}_{6}&6&
\EVII{-6}{0}{1}{0}{1}{1}{1}{1} &A_5' \\
\end{array}
$$
\end{center}
\begin{center}
{\small Table 20 continued: The gradings of positive rank in type $E_7$}
$$
\begin{array}{cccccccc}
\hline
\text{No.}&\text{Kac diagram}& w& W(\mathfrak{c},\theta)
&\text{degrees}&\theta'&L_\theta\\
\hline
5_a&\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1} &A_4=D_6^2&\boldsymbol{\mu}_{10}&10&
\EVII{-12}{1}{1}{1}{1}{1}{1}{1} &D_6\\
5_b&\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0} &A_4=D_6^2&\boldsymbol{\mu}_{10}&10&
\EVII{-14}{2}{1}{1}{1}{1}{1}{1} &D_6\\
5_c&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1} &A_4=D_6^2&\boldsymbol{\mu}_{10}&10&
\EVII{-10}{0}{1}{1}{1}{1}{1}{1} &D_6\\
5_d&\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1} &A_4&\boldsymbol{\mu}_{5}&5&
\EVII{-11}{1}{0}{1}{1}{1}{1}{0} & A_4\\
5_e&\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1} &A_4&\boldsymbol{\mu}_{5}&5&
\EVII{-11}{1}{1}{1}{1}{1}{0}{2} &A_4\\
4_a&\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1} &D_4(a_1)=E_6^3&G_8&8,12&
\EVII{-13}{1}{1}{1}{1}{1}{1}{1} &E_6\\
4_b&\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0} &D_4(a_1)=E_6^3&G_8&8,12&
\EVII{-14}{1}{1}{1}{1}{1}{1}{2} &E_6\\
4_c&\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1} &D_4(a_1)=E_6^3&G_8&8,12&
\EVII{-12}{1}{1}{1}{1}{1}{1}{0} &E_6\\
4_d&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0} &D_4(a_1)&G(4,1,2)&4,8&
\EVII{-12}{0}{1}{1}{1}{1}{1}{2} &D_5\\
4_e&\EVII{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1} &D_4(a_1)&G(4,1,2)&4,8&
\EVII{-12}{1}{1}{1}{1}{1}{0}{2} &D_5\\
4_f&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0} &A_3&\boldsymbol{\mu}_{4}&4&
\EVII{-9}{1}{1}{1}{1}{0}{0}{2} &A_4\\
4_g&\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1} &A_3&\boldsymbol{\mu}_{4}&4&
\EVII{-7}{1}{1}{1}{1}{0}{0}{0} &A_4\\
3_a&\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0} &3A_2=E_7^6&G_{26}&6,12,18&
\EVII{-14}{1}{1}{1}{1}{1}{1}{1} &E_7\\
3_b&\EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1} &2A_2&G(6,2,2)&6,6&
\EVII{-12}{0}{1}{1}{1}{1}{1}{1} &D_6\\
3_c&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1} &A_2=D_4^2&\boldsymbol{\mu}_{6}&6&
\EVII{-13}{0}{1}{1}{1}{1}{1}{2} &D_4\\
3_d&\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1} &A_2=D_4^2&\boldsymbol{\mu}_{6}&6&
\EVII{-9}{0}{1}{1}{1}{1}{0}{0} &D_4\\
2_a&\EVII{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0} &7A_1&W(E_7)&2,6,8,10,12,14,18&
\EVII{-15}{1}{1}{1}{1}{1}{1}{1} &E_7\\
2_b&\EVII{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0} &4A_1''&W(F_4)&2,6,8,12&
\EVII{-14}{1}{1}{1}{1}{1}{1}{0} &E_6\\
2_c&\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1} &3A_1'&W(B_3)&2,4,6&
\EVII{-10}{0}{0}{1}{1}{1}{1}{1} &A_5'\\
1_a&\EVII{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0} &e&W(E_7)&2,6,8,10,12,14,18&
\EVII{1}{0}{0}{0}{0}{0}{0}{0} &E_7\\
\hline
\end{array}
$$
\end{center}
\begin{center}
{\small Table 21: The gradings of positive rank in type $E_8$}
$$
\begin{array}{ccccccc}
\hline
\text{No.}&\text{Kac diagram}& w& W(\mathfrak{c},\theta)&\text{degrees}&\theta'&L_\theta\\
\hline
30_a&\E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}&E_8 & \boldsymbol{\mu}_{30}&30&
\E{1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
24_a&\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}&E_8(a_1) & \boldsymbol{\mu}_{24}&24&
\E{-5}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
20_a&\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}&E_8(a_2) & \boldsymbol{\mu}_{20}&20&
\E{-9}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1} &E_8\\
18_a&\E{1}{ 1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}&E_7 & \boldsymbol{\mu}_{18}&18&
\E{-11}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}&E_7\\
18_b&\E{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}&E_7 & \boldsymbol{\mu}_{18}&18&
\E{-13}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 2}&E_7\\
18_c&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}{ 1}&E_7 & \boldsymbol{\mu}_{18}&18&
\E{-17}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 4}&E_7\\
18_d&\E{1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}&E_7& \boldsymbol{\mu}_{18}&18&
\E{-15}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 3} &E_7\\
18_e&\E{0}{ 1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}&E_7 & \boldsymbol{\mu}_{18}&18&
\E{-9}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 0} &E_7\\
15_a&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}&E_8(a_5) & \boldsymbol{\mu}_{30}&30&
\E{-14}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
14_a&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1}&E_7(a_1) & \boldsymbol{\mu}_{14}&14&
\E{-15}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_7\\
14_b&\E{1}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}&E_7(a_1) & \boldsymbol{\mu}_{14}&14&
\E{-19}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 3} &E_7\\
14_c&\E{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}&E_7(a_1) & \boldsymbol{\mu}_{14}&14&
\E{-17}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 2} &E_7\\
14_d&\E{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}&E_7(a_1)
& \boldsymbol{\mu}_{14}&14&
\E{-21}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 4} &E_7\\
12_a&\E{1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_8(a_3) & G_{10}&12,24&
\E{-17}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
12_b&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-15}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{1}{ 0} &E_6\\
12_c&\E{1}{ 0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-21}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 3} &E_6\\
12_d&\E{1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}&E_6 &\boldsymbol{\mu}_{12}&12&
\E{-16}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 2} &E_6\\
12_e&\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 1}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-14}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 1} &E_6\\
12_f&\E{0}{ 1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-19}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &E_6\\
12_g&\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 6} &E_6\\
12_h&\E{0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-20}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 4} &E_6\\
12_i&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}{ 1}&E_6 & \boldsymbol{\mu}_{12}&12&
\E{-12}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &E_6\\
12_j&\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 1}&D_7 & \boldsymbol{\mu}_{12}&12&
\E{-15}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_7\\
10_a&\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_8(a_6)=-2A_4 & G_{16}&20,30&
\E{-19}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
10_b&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&D_6 & \boldsymbol{\mu}_{10}&10&
\E{-17}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &D_6\\
10_c&\E{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}&D_6 & \boldsymbol{\mu}_{10}&10&
\E{-21}{ 2}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_6\\
10_d&\E{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}&D_6 & \boldsymbol{\mu}_{10}&10&
\E{-17}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_6\\
10_e&\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}&D_6 & \boldsymbol{\mu}_{10}&10&
\E{-19}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{2} &D_6\\
10_f&\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}&D_6 & \boldsymbol{\mu}_{10}&10&
\E{-15}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &D_6\\
\end{array}
$$
\end{center}
\begin{center}
{\small Table 21 continued: The gradings of positive rank in type $E_8$ }
$$
\begin{array}{ccccccc}
\hline
\text{No.}&\text{Kac diagram}& w& W(\mathfrak{c},\theta)&\text{degrees}&\theta'&L_\theta\\
\hline
9_a&\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}&E_6(a_1)=E_7^2 &\boldsymbol{\mu}_{18}&18&
\E{-20}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_7\\
9_b&\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}&E_6(a_1)=E_7^2 &\boldsymbol{\mu}_{18}&18&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &E_7\\
9_c&\E{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}&E_6(a_1)=E_7^2 &\boldsymbol{\mu}_{18}&18&
\E{-26}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 4} &E_7\\
9_d&\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}&E_6(a_1)=E_7^2 &\boldsymbol{\mu}_{18}&18&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 3} &E_7\\
9_e&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}&E_6(a_1) =E_7^2 &\boldsymbol{\mu}_{18}&18&
\E{-18}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &E_7\\
9_f&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&E_6(a_1) &\boldsymbol{\mu}_{9}&9&
\E{-15}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &E_6\\
8_a&\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}&D_8(a_3) &G_9&8,24&
\E{-21}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
8_b&\E{1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-23}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &D_5\\
8_c&\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-19}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &D_5\\
8_d&\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 0}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 2}{ 2} &D_5\\
8_e&\E{1}{0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-20}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 2} &D_5\\
8_f&\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-31}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 6} &D_5\\
8_g&\E{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 3} &D_5\\
8_h&\E{0}{ 1}{ 0}{0}{ 0}{ 0}{ 1}{ 0}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-12}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 0} &D_5\\
8_i&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-14}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &D_5\\
8_j&\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 4} &D_5\\
8_k&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}&D_5 &\boldsymbol{\mu}_{8}&8&
\E{-16}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &D_5\\
7_a&\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}&A_6=E_7(a_1)^2 &\boldsymbol{\mu}_{14}&14&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_7\\
7_b&\E{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}&A_6=E_7(a_1)^2 &\boldsymbol{\mu}_{14}&14&
\E{-26}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 3} &E_7\\
7_c&\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}&A_6=E_7(a_1)^2 &\boldsymbol{\mu}_{14}&14&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &E_7\\
7_d&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&A_6=E_7(a_1)^2 &\boldsymbol{\mu}_{14}&14&
\E{-28}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 4} &E_7\\
6_a&\E{1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}&E_8(a_8)=-4A_2 &G_{32}&12,18,24,30&
\E{-23}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
6_b&\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}&E_7(a_4)=E_7^3 &G_{26}&6,12,18&
\E{-21}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &E_7\\
6_c&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&D_6(a_2) &G(6,1,2)&6,12&
\E{-21}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_7\\
6_d&\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&E_6(a_2) &G_5&6,12&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 2} &E_6\\
6_e&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&E_6(a_2) &G_5&6,12&
\E{-18}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &E_6\\
6_f&\E{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}&A_5 &\boldsymbol{\mu}_{6}&6&
\E{-22}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &A_5\\
6_g&\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 1}&A_5 &\boldsymbol{\mu}_{6}&6&
\E{-18}{ 1}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &A_5\\
6_h&\E{1}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&D_4 &\boldsymbol{\mu}_{6}&6&
\E{-18}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 2} &D_4\\
6_i&\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&D_4 &\boldsymbol{\mu}_{6}&6&
\E{-25}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2}{ 1}{ 0} &D_4\\
6_j&\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}&D_4 &\boldsymbol{\mu}_{6}&6&
\E{-12}{ 0}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 0} &D_4\\
6_k&\E{0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}&D_4 &\boldsymbol{\mu}_{6}&6&
\E{-18}{ 0}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 3} &D_4\\
\end{array}
$$
\end{center}
\begin{center}
{\small Table 21 continued: The gradings of positive rank in type $E_8$}
$$
\begin{array}{ccccccc}
\hline
\text{No.}&\text{Kac diagram}& w& W(\mathfrak{c},\theta)&\text{degrees}&\theta'&L_\theta\\
\hline
5_a&\E{0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}&2A_4=E_8^6 &G_{16}&20,30&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
5_b&\E{1}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}&A_4 &\boldsymbol{\mu}_{10}&10&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &D_6\\
5_c&\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&A_4 &\boldsymbol{\mu}_{10}&10&
\E{-26}{ 2}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_6\\
5_d&\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&A_4 &\boldsymbol{\mu}_{10}&10&
\E{-30}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 4} &D_6\\
5_e&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&A_4 &\boldsymbol{\mu}_{10}&10&
\E{-24}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &D_6\\
5_f&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&A_4 &\boldsymbol{\mu}_{10}&10&
\E{-20}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &D_6\\
4_a&\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}&2D_4(a_1) &G_{31}&8,12,20,24&
\E{-25}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
4_b&\E{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&D_4(a_1)=E_6^3 &G_8&8,12&
\E{-27}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 2} &E_6\\
4_c&\E{0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}&D_4(a_1)=E_6^3 &G_8&8,12&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 2} &E_6\\
4_d&\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&D_4(a_1)=E_6^3 &G_8&8,12&
\E{-20}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &E_6\\
4_e&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&D_4(a_1)=D_5^2 &G(4,1,2)&4,8&
\E{-16}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 0} &D_5\\
3_a&\E{0}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&4A_2=E_8^{10} &G_{32}&12,18,24,30&
\E{-26}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
3_b&\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}&3A_2=E_7^6 &G_{26}&6,12,18&
\E{-24}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0} &E_7\\
3_c&\E{1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}&2A_2 =D_7^4 &G(6,1,2)&6,12&
\E{-24}{ 0}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &D_7\\
3_d&\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&A_2=D_4^2 &\boldsymbol{\mu}_{6}&6&
\E{-15}{ 0}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0}{ 0} &D_4\\
2_a&\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{0}&8A_1=-1 &W(E_8)&2,8,12,14,18,20,24,30&
\E{-27}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1} &E_8\\
2_b&\E{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}&4A_1'' =E_6^6 &W(F_4)&2,6,8,12&
\E{-22}{ 1}{ 1}{ 1}{ 1}{ 1}{ 1}{ 0}{ 0} &E_6\\
1_a&\E{1}{0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0} &1 &W(E_8)&2,8,12,14,18,20,24,30&
\E{1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0} &E_8\\
\hline
\end{array}
$$
\end{center}
\section{Little Weyl groups for inner type $E$ and Kostant sections}\label{littleweylE}
In this section we compute the little Weyl groups $W(\mathfrak{c},\theta)$ and their degrees when
$\theta$ is inner of positive rank in type $E$. As a byproduct we show that every positive-rank inner automorphism is principal in a Levi subgroup. This leads to a verification of Popov's conjecture on the existence of Kostant sections, and gives a characterization of the orders of positive-rank automorphisms.
\subsection{The Levi subgroup $L_\theta$}
In Tables 19--21 above we have indicated a Levi subgroup
$L_\theta$ whose corresponding subset $J\subset\{1,\dots,\ell\}$ satisfies the conditions of Lemma \ref{Jlowerbound}, giving an embedding
\begin{equation}\label{embedding}
C_{W_J}(w)\hookrightarrow W(\mathfrak{c},\theta).
\end{equation}
In each case, the embedding \eqref{embedding} turns out to be an isomorphism.
It follows that the degrees of $W(\mathfrak{c},\theta)$ are
those degrees of $W_J$ which are divisible by $m$.
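For example, for grading $12_a$ in Table 21 we have $m=12$, $w=E_8(a_3)$, and $L_\theta=E_8$, so the degrees of $W_J=W(E_8)$ are $2,8,12,14,18,20,24,30$; those divisible by $12$ are $12$ and $24$, which are precisely the degrees of $W(\mathfrak{c},\theta)\simeq G_{10}$ listed in the table.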
We verify that \eqref{embedding} is an isomorphism as follows.
Let $U_J\subset W$ be the subgroup acting trivially on the span of the roots $\alpha_j$ for $j\in J$ and set $c_J(w)=|C_W(w)|/|U_J|$.
Lemma \ref{simplebound}
shows that $|W(\mathfrak{c},\theta)|$ divides $ c_J(w)$.
The subgroup $U_J$ can be found in the tables of
\cite{carter:weyl} (it is denoted there by $W_2$).
In all but eight cases we find that
$$|C_{W_J}(w)|=c_J(w),$$
showing that $C_{W_J}(w)=W(\mathfrak{c},\theta)$.
We list the exceptional cases for which $|C_{W_J}(w)|<c_J(w)$.
We write $|C_{W_J}(w)|$ as the product of degrees divisible by $m$.
\begin{center}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccc}
\hline
G& \text{no.} & w& J & |C_{W_J}(w)| & c_J(w)\\
\hline
E_6& 4_b & D_4(a_1) & D_5 & 4\cdot 8 & 8\cdot 12\\
E_7 & 9_b & E_6(a_1)& E_6 & 9 & 18\\
E_7 & 5_d,5_e & A_4 & A_4& 5 & 10\\
E_7 & 4_d,4_e & D_4(a_1) & D_5 & 4\cdot 8 & 8\cdot 12\\
E_8 & 9_f& E_6(a_1) & E_6 & 9 & 18\\
E_8 & 4_e& D_4(a_1) & D_5 & 4\cdot 8 & 8\cdot 12\\
\hline
\end{array}}
$$
\end{center}
To show that $W(\mathfrak{c},\theta)=C_{W_J}(w)$ in all of these cases as well,
it suffices to show that $G_0$ has an invariant polynomial of degree $d=4,9,5,4,9,4$
for the respective rows. If $k$ has characteristic zero this can be done using the computer algebra system LiE to find the dimension of the $G_0$-invariants in $\Sym^d(\mathfrak{g}_1^\ast)$. In fact we did this for all of the positive rank cases in exceptional groups, as a confirmation of our tables. If $k$ has positive characteristic $p$ (not dividing $m$) the desired invariant is provided by the following result which is apparently standard, but we could not find a reference.
\begin{lemma}\label{invarianttheory} Let $\rho:H\to \GL(V)$ be a rational representation of a reductive algebraic group $H$ over the ring $\mathbb{Z}[\zeta]$, where $\zeta\in\overline\mathbb{Q}$ is a primitive $m^{th}$-root of unity. Assume that $H(\overline\mathbb{Q})$ has a nonzero invariant vector in $V(\overline\mathbb{Q})$ with multiplicity one. Then $H(k)$ has a nonzero invariant in $V(k)$ for any algebraically-closed field $k$ of characteristic $p$ not dividing $m$.
\end{lemma}
\proof
Let $W(k)$ be the ring of Witt vectors of $k$, let $K$ be the quotient field of $W(k)$ and let $L$ be an algebraic closure of $K$. Our assumption implies, via complete reducibility, that $\dim_LV(L)^{H(L)}=\dim_{\overline\mathbb{Q}}V(\overline\mathbb{Q})^{H(\overline\mathbb{Q})}=1$. Let $f\in V(L)$ be a generator of $V(L)^{H(L)}$.
The line $L\cdot f$ is preserved by $\Gal(L/K)$,
so Hilbert's theorem 90 implies that $L\cdot f\cap V(K)$ is nonzero.
We may therefore assume that $f\in V(K)$.
Clearing denominators, we may further assume that $f\in V(W(k))$ and
is nonzero modulo the maximal ideal $M$ of $W(k)$. The reduction of $f$ modulo $M$ gives a nonzero invariant of $H(k)$ in $V(k)$.
\qed
As illustrated in the following examples, we can often compute the desired invariant by hand.
\subsubsection{Example: $E_6$ no. $4_b$}
We label the affine diagram of $E_6$ and write the Kac diagram of $\theta$, respectively, as shown:
$$\EVI{0}{1}{6}{2}{3}{4}{5}\qquad\qquad \EVI{1}{0}{0}{1}{0}{0}{1}.$$
We view $\mathfrak{g}_1$ as a representation of $\SL_2\times\SL_4\times T_2$,
where $T_2$ is the two dimensional torus whose cocharacter group has basis
$\{\check\omega_2,\check\omega_5\}$, where $\check\omega_i$ are the fundamental co-weights of $E_6$. Each node $i$ labelled $1$ in the Kac diagram gives a summand $V_i$ of $\mathfrak{g}_1^\ast$
whose highest weight is the fundamental weight on each node adjacent to $i$ and
with central character $\alpha_i$ restricted to $T_2$. Thus, we have
$$
\begin{array}{cccccc}
\mathfrak{g}_1^\ast\simeq
&(\mathbf{2}\boxtimes\mathbf{6})
&\oplus&(\mathbf{1}\boxtimes\check{\mathbf{4}})
&\oplus&(\mathbf{1}\boxtimes\mathbf{4})\\
\check\omega_2=
&1&&-2&&0\\
\check\omega_5=
&0&&-1&&1
\end{array}
$$
Here $\mathbf{2}$ and $\mathbf{4}$ are the standard representations of $\SL_2$ and $\SL_4$,
$\check {\mathbf{4}}$ is the dual of $\mathbf{4}$ and
$\mathbf{6}=\Lambda^2\mathbf{4}$.
It follows that the symmetric algebra of $\mathfrak{g}_1^\ast$ can have
$G_0$-invariants only in tri-degrees $(2k,k,k)$. To find the expected invariant of degree four,
we must find an $\SL_2\times\SL_4$-invariant in the summand for $k=1$:
$$\Sym^2(\mathbf{2}\boxtimes\mathbf{6})
\otimes (\mathbf{1}\boxtimes\check{\mathbf{4}})
\otimes(\mathbf{1}\boxtimes\mathbf{4})=
\Sym^2(\mathbf{2}\boxtimes\mathbf{6})
\otimes (\mathbf{1}\boxtimes\End(\mathbf {4})).
$$
Since $m=4$ we have $p\neq 2$, so $\End(\mathbf {4})=\mathbf{1}\oplus\mathfrak{sl}_4$.
Since $\mathbf{2}$ and $\mathbf{6}$ are self-dual,
affording alternating and symmetric forms, respectively,
our invariant must be given by an $\SL_2\times\SL_4$-equivariant mapping
$\Sym^2(\mathbf{2}\boxtimes\mathbf{6})\to \mathbf{1}\otimes\mathfrak{sl}_4.$
Indeed, wedging in both factors gives a map
$$\Sym^2(\mathbf{2}\boxtimes\mathbf{6})\longrightarrow
\Lambda^2\mathbf{2}\boxtimes\Lambda^2\mathbf{6}
=\mathbf{1}\boxtimes\mathfrak{so}_6\simeq\mathbf{1}\boxtimes\mathfrak{sl}_4,
$$
exhibiting the desired invariant of degree four.
\subsubsection{Example: $E_7$ no. $5_d$}
The Kac diagram is
$$\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}$$
with $G_0^{sc}=\SL_2\times\SL_5$ and
$$\mathfrak{g}_1^\ast=
(\mathbf{2}\boxtimes\mathbf{5})\oplus
(\mathbf{1}\boxtimes\check{\mathbf{5}})\oplus
(\mathbf{1}\boxtimes\Lambda^2\mathbf{5}).
$$
The center of $G_0$ has invariants in tri-degrees $(2k,k,2k)$, leading us to seek an
$\SL_5$-equivariant mapping
$$\mathbf{5}\otimes\Sym^{2}(\Lambda^2\check{\mathbf{5}})\longrightarrow
\Sym^2(\mathbf{2}\boxtimes\mathbf{5})^{\SL_2}.
$$
Let $U$ and $V$ be $k$-vector spaces of dimensions $2$ and $n$, respectively, with $n$ finite but otherwise arbitrary.
Let $P_2(\Hom(V,U))$ be the space of degree-two polynomials on $\Hom(V,U)$, with the natural $\SL(V)\times\SL(U)$-action. Then we have a nonzero (hence injective) mapping
$$
\varphi:\Lambda^2(V)\longrightarrow P_2(\Hom(V,U))^{\SL(U)}, \quad \omega\mapsto \varphi_\omega,
$$
given by $\varphi_\omega(f)=f_\ast(\omega)$, where $f_\ast:\Lambda^2(V)\to \Lambda^2(U)\simeq k$
is the map induced by $f$. One checks that
$\dim P_2(\Hom(V,U))^{\SL(U)}=\binom{n}{2}$, so that $\varphi$ is an isomorphism of $\SL(V)$-modules
\begin{equation}\label{SLV}
\Lambda^2(V)\simeq P_2(\Hom(V,U))^{\SL(U)}.
\end{equation}
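As a check on the dimension count, note that when the characteristic of $k$ is not $2$ we have
$$P_2(\Hom(V,U))\simeq\Sym^2(V\otimes\check U)\simeq
(\Sym^2V\otimes\Sym^2\check U)\oplus(\Lambda^2V\otimes\Lambda^2\check U),$$
and taking $\SL(U)$-invariants kills the first summand, since $\Sym^2\check U$ contains no trivial submodule, while $\Lambda^2\check U$ is the trivial module; hence the invariants form a copy of $\Lambda^2V$, of dimension $\binom{n}{2}$, in agreement with \eqref{SLV}.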
Returning to our task, we now must find an $\SL_5$-equivariant mapping
$$\mathbf{5}\otimes\Sym^{2}(\Lambda^2\check{\mathbf{5}})\longrightarrow\Lambda^2\mathbf{5}.$$
The contraction mapping
$$\mathbf{5}\otimes\Lambda^2\check{\mathbf{5}}\longrightarrow \check{\mathbf{5}},\quad v\otimes\omega\mapsto c_v(\omega),
$$
where $c_v(\lambda\wedge\mu)=\langle \lambda,v\rangle\mu-\langle\mu,v\rangle\lambda$, extends to a mapping
$$
\mathbf{5}\otimes\Sym^{2}(\Lambda^2\check{\mathbf{5}})
\longrightarrow\Lambda^3\check{\mathbf{5}}, \qquad
v\otimes(\omega\cdot\eta)\mapsto c_v(\omega)\wedge\eta+c_v(\eta)\wedge\omega.
$$
Since $\Lambda^3\check{\mathbf{5}}\simeq\Lambda^2\mathbf{5}$ as $\SL_5$-modules,
we have the desired invariant.
\subsubsection{Example: $E_7$ no. $4_d$} The Kac diagram is
$$\EVII{1}{0}{1}{0}{0}{0}{0}{1}$$
and $G_0^{sc}=\SL_6$ with
$\mathfrak{g}_1=\mathbf{6}\oplus\check{\mathbf{6}}\oplus\Lambda^3\mathbf 6.$
The action of the center leads us to seek an $\SL_6$-invariant in
$$\mathbf{6}\otimes\check{\mathbf{6}}\otimes\Sym^2(\Lambda^3\mathbf 6).$$
If $V$ is a $k$-vector space of even dimension $2m$, we have a nonzero $\SL(V)$-equivariant mapping
$$\varphi:\End(V)\longrightarrow P_2(\Lambda^m V),\qquad A\mapsto \varphi_A,$$
given by $\varphi_A(\omega)=\omega\wedge A_\ast\omega$. Since the $\SL(V)$-module
$\Lambda^mV$ is self-dual this may be viewed as a nonzero mapping $\End(V)\to \Sym^2(\Lambda^mV)$.
Taking $m=3$ gives the desired invariant.
\subsubsection{Example: $E_7$ no. $4_e$} The Kac diagram is
$$\EVII{0}{1}{0}{0}{0}{0}{1}{0}$$
with $G_0^{sc}= H_1\times \Spin_8\times H_2$, where $H_1\simeq H_2\simeq\SL_2$, and
$\mathfrak{g}_1^\ast=
(\mathbf{2}\boxtimes \mathbf{8}\boxtimes\mathbf{1})\oplus
(\mathbf{1}\boxtimes \mathbf{8'}\boxtimes\mathbf{2}),
$
where $\mathbf{8}$ and $\mathbf{8'}$ are non-isomorphic eight dimensional irreducible representations of $\Spin_8$. The action of the center leads us to seek an invariant in
$$\Sym^2(\mathbf{2}\boxtimes \mathbf{8}\boxtimes\mathbf{1})\otimes
\Sym^2(\mathbf{1}\boxtimes \mathbf{8'}\boxtimes\mathbf{2}).
$$
Since every representation in sight is self-dual, we require a $\Spin_8$-equivariant mapping
from the $H_1$-coinvariants to the $H_2$-invariants:
$$\Sym^2(\mathbf{2}\boxtimes \mathbf{8}\boxtimes\mathbf{1})_{H_1}\longrightarrow
\Sym^2(\mathbf{1}\boxtimes \mathbf{8'}\boxtimes\mathbf{2})^{H_2}.
$$
Since $m=4$, the characteristic of $k$ is not two, so for a $k$-vector space $V$ of arbitrary finite dimension $n$ the symmetric square
$$\Sym^2(V\otimes\mathbf{2})=
\Sym^2(\mathbf{2}^{\oplus n})=
n\cdot\Sym^2(\mathbf{2})\oplus\binom{n}{2}(\mathbf{2}\otimes\mathbf{2})
$$ is completely reducible as an $\SL_2$-module. Hence the canonical map
\begin{equation}\label{coinvar}
\Sym^2(V\otimes\mathbf{2})^{\SL_2}\longrightarrow
\Sym^2(V\otimes\mathbf{2})_{\SL_2},
\end{equation}
from the invariants to the coinvariants, is an isomorphism of $\SL(V)$-modules.
From \eqref{SLV}, both modules are isomorphic to $\Lambda^2V$.
Returning to our task, we now require a $\Spin_8$-equivariant mapping
$$\Lambda^2 \mathbf{8}\to \Lambda^2\mathbf{8'}.$$
But both of these exterior squares are isomorphic to the adjoint representation of $\Spin_8$,
whence the desired invariant.
\subsubsection{Example: $E_8$ no. $4_e$} The Kac diagram is
$$\E{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 0}{ 0}{ 1}$$
with $G_0^{sc}=\Spin_{12}\times \SL_2$ and
$$\mathfrak{g}_1^\ast=(\mathbf{32}\boxtimes\mathbf{1})\oplus (\mathbf{12}\boxtimes\mathbf{2}),$$
where $\mathbf{32}$ is one of the half-spin representations of $\Spin_{12}$, which is symplectic since $6\equiv 2\mod 4$. The action of the center of $G_0$ leads us to seek an invariant in
$$\Sym^2(\mathbf{32}\boxtimes\mathbf{1})\otimes\Sym^2(\mathbf{12}\boxtimes\mathbf{2}).
$$
We must therefore find a $\Spin_{12}\times\SL_2$-equivariant mapping
$$\Sym^2(\mathbf{12}\boxtimes\mathbf{2})\longrightarrow \Sym^2(\mathbf{32}\boxtimes\mathbf{1}).$$
From \eqref{SLV} and \eqref{coinvar} this is equivalent to a $\Spin_{12}$-equivariant mapping
$$\Lambda^2\mathbf{12}\longrightarrow \Sym^2(\mathbf{32}).$$
But $\Lambda^2\mathbf{12}\simeq \mathfrak{so}_{12}$ and $\Sym^2(\mathbf{32})\simeq \mathfrak{sp}_{32}$.
The desired mapping $\mathfrak{so}_{12}\to\mathfrak{sp}_{32}$ is simply the representation of
$\mathfrak{so}_{12}$ on the symplectic half-spin representation $\mathbf{32}$.
\subsection{A remark on saturation} Let
$$W^*({\mathfrak c},\theta):=N_{G^\theta}({\mathfrak c})/C_{G^\theta}({\mathfrak c}).$$
Clearly $W({\mathfrak c},\theta)\subset W^*({\mathfrak c},\theta)\subset W_1^\theta$
(see \eqref{W1}).
We say that $\theta$ is {\bf saturated} if $W({\mathfrak c},\theta)=W^*({\mathfrak c},\theta)$. (For the adjoint group $G$ this is equivalent to the definition given in
section 5 of \cite{vinberg:graded}.) Clearly $\theta$ is saturated if $G^\theta=G_0$.
As remarked in section \ref{stableisotropy} this holds whenever the group $\Omega_\vartheta(x)$ is trivial.
In particular, saturation holds in types $G_2, {^3D_4}, F_4, E_8, {^2E_6}$, where $\Omega_\vartheta$ itself is trivial.
It is known (\cite{vinberg:graded}, \cite{levy:thetap}) that all gradings on classical Lie algebras are saturated except for certain outer automorphisms of order divisible by 4 in type $D_n$.
It remains to consider only those inner automorphisms of $E_6$ and $E_7$ where the Kac diagram is invariant under the symmetries of the affine Dynkin diagram and we have $W(\mathfrak{c},\theta)\neq W^\theta_1$. The latter implies that $|C_{W_J}(w)|<c_J(w)$. The only cases not thus eliminated are $4_d$ and $4_e$ in type $E_7$. But in these two cases we have $c_J(w)/|W(\mathfrak{c},\theta)|=3$, while $[G^\theta:G_0]=2$, so saturation holds in these cases as well. We conclude that all gradings on exceptional Lie algebras are saturated.
\subsection{Kostant sections and the Levi subgroup $L_\theta$}
A {\bf Kostant section}
\footnote{In the literature, this is also called a ``Kostant-Weierstrass'' or ``KW'' section because in the case of the non-pinned outer triality automorphism of $\mathfrak{so}_8$ such a section is equivalent to the Weierstrass normal form of a nonsingular homogeneous cubic polynomial in three variables.}
for the grading $\mathfrak{g}=\oplus_{i\in\mathbb{Z}/m}\ \mathfrak{g}_i$ is an affine subspace $\mathfrak{v}\subset \mathfrak{g}_1$ such that the embedding $\mathfrak{v}\hookrightarrow\mathfrak{g}_1$ induces an isomorphism of affine varieties
$\mathfrak{v}\overset\sim\longrightarrow \mathfrak{g}_1//G_0$, or equivalently, such that the restriction map
$k[\mathfrak{g}_1]^{G_0}\longrightarrow k[\mathfrak{v}]$ is bijective.
Recall that we have fixed a pinning $(X,R,\check X,\check R,\{E_i\})$ in $G$, which determines the co-character $\check\rho\in X_\ast(T)$ and principal nilpotent element $E=\sum E_i$, such that
$\check\rho(t)\cdot E=tE$.
From \cite[Thm.3.5]{panyushev:theta} and
\cite[Prop.5.2]{levy:thetap} we have the following existence result for Kostant sections.
\begin{thm}\label{principalkostant} \label{kostant} Assume that the characteristic of $k$ is not a torsion prime for $G$ and that $m$ is nonzero in $k$. Then the grading $\mathfrak{g}=\oplus_{i\in \mathbb{Z}/m}\ \mathfrak{g}_i$ associated to the principal automorphism
$\theta_m=\check\rho(\zeta)\vartheta$ has a Kostant section $E+\mathfrak{u}$, where $\mathfrak{u}$ is any vector space complement to $[\mathfrak{g}_0,E]$ in $\mathfrak{g}_1$ such that $\mathfrak{u}$ is stable under
$\check\rho(k^\times)$.
\end{thm}
We have seen that for each positive-rank torsion inner automorphism in type $E_{6,7,8}$ there exists a subset
$J\subseteq\{1,2,\dots,\ell\}$ such that $W(\mathfrak{c},\theta)=C_{W_J}(w)$.
This can also be checked for the classical groups and types $F_4, G_2$. Thus, we have a case-by-case proof of the following theorem.
\begin{thm} Let $\theta$ be an inner automorphism of $\mathfrak{g}$ whose order $m$ is nonzero in $k$ and let $\mathfrak{c}$ be a Cartan subspace of $\mathfrak{g}_1$. Then there exists a $\theta$-stable Levi subgroup $L=L_\theta$ whose Lie algebra $\mathfrak{l}$ contains $\mathfrak{c}$ in its derived subalgebra, such that the following hold:
\begin{enumerate}
\item $\theta\vert_\mathfrak{l}=\Ad(\check\rho_L(\zeta))$.
\item The inclusion of little Weyl groups $W_L(\mathfrak{c},\theta)\hookrightarrow W(\mathfrak{c},\theta)$ is a bijection.
In particular, the degrees of $W(\mathfrak{c},\theta)$ are precisely the degrees of
$W_L$ which are divisible by $m$.
\item The restriction map
$k[\mathfrak{g}_1]^{G_0}\longrightarrow k[\mathfrak{l}_1]^{L_0}$
is a bijection.
\end{enumerate}
\end{thm}
In view of Thm. \ref{kostant}, we conclude:
\begin{cor}\label{cor:kostantE}
Every positive-rank torsion inner automorphism in type $E_{6,7,8}$ has a Kostant section contained in the Levi subalgebra $\mathfrak{l}$ of the previous theorem.
\end{cor}
We also observe:
\begin{cor}\label{cor:m}
A positive integer $m$ is the order of a torsion inner automorphism of positive rank precisely if $m$ is the order of a $\mathbb{Z}$-regular element in the Weyl group of a Levi subgroup of $G$.
\end{cor}
\section{Outer gradings of positive rank in type $E_6$}\label{2E6}
We realize the outer pinned automorphism of $E_6$ as the restriction of an affine pinned automorphism of $E_7$, as in section \ref{affine-pinned}.
\subsection{Root systems of type $E_7$ and ${^2E_6}$}\label{E7to2E6}
Let $(Y,R,\check Y, \check R)$ be a root datum of adjoint type $E_7$
and fix a base $\Delta=\{\alpha_1,\dots,\alpha_7\}\subset R$ with lowest root $\alpha_0$,
according to the numbering
\begin{equation}\label{E7numbering}
\EVII{0}{1}{4}{2}{3}{5}{6}{7}.
\end{equation}
The set $\Pi:=\{\alpha_0\}\cup\Delta$ has stabilizer $W_\Pi=\{1,\vartheta\}$ of order two, where
$\vartheta=r_1r_2r_3$ is a product of reflections about mutually orthogonal roots
$\gamma_1,\gamma_2,\gamma_3$ in which the coefficients of simple roots $\{\alpha_1,\dots,\alpha_7\}$ are given by
$$
\gamma_1=\EVII{}{0}{1}{1}{2}{2}{2}{1},\qquad
\gamma_2=\EVII{}{1}{1}{1}{2}{2}{1}{1},\qquad
\gamma_3=\EVII{}{1}{1}{2}{2}{1}{1}{1}.
$$
The sum
$$\check \gamma_1+\check \gamma_2+\check \gamma_3=2\check \mu,$$
where $\check \mu=\check \omega_7$ is the nontrivial minuscule co-weight.
Regard the vector space $V=\mathbb{R}\otimes\check Y$ as an affine space with $0$ as basepoint. Each linear functional $\lambda:V\to \mathbb{R}$
is then regarded as an affine function on $V$ vanishing at $0$, and we have the affine root system
$$\Phi=\{\alpha+n:\ \alpha\in R,\ n\in\mathbb{Z}\}$$
with basis $\{\phi_0,\phi_1,\dots,\phi_7\}$ where
$\phi_0=1+ \alpha_0,\phi_1= \alpha_1,\dots,\phi_7= \alpha_7$ satisfy the relation
\begin{equation}\label{E7relation}
\phi_0+2\phi_1+3\phi_2+4\phi_3+2\phi_4+3\phi_5+2\phi_6+\phi_7=1.
\end{equation}
A point $x\in V_\mathbb{Q}$ of order $m$ has Kac diagram
\begin{equation}\label{E7coord}
\eVII{s_0}{s_1}{s_2}{s_3}{s_5}{s_6}{s_7}{s_4},
\end{equation}
where $s_i/m=\phi_i(x)$.
The affine transformation $\widetilde\vartheta:V\to V$ given by
$$\widetilde\vartheta(x)= \check \mu+\vartheta\cdot x$$
permutes the simple affine roots $\{\phi_0,\dots,\phi_7\}$ according to the nontrivial symmetry of the affine diagram of $E_7$.
The fixed-point space of $\widetilde\vartheta$ in $V$ is given by
$$\mathcal{A}^{\vartheta}:=V^\vartheta+\tfrac{1}{2}\check \mu,$$
which is an affine space under the vector space $V^\vartheta=\mathbb{R}\otimes \check Y_\vartheta$,
with basepoint $\tfrac{1}{2}\check\mu$.
The rational points in $\mathcal{A}^{\vartheta}$ are precisely those points $x\in V_\mathbb{Q}$ whose Kac diagram has the symmetric form
\begin{equation}\label{si}
\eVII{s_0}{s_1}{s_2}{s_3}{s_2}{s_1}{s_0}{s_4},
\end{equation}
in which case equation \eqref{E7relation} implies that
\begin{equation}\label{s0}
s_0+2s_1+3s_2+2s_3+s_4=m/2,
\end{equation}
where $m$ is the order of $x$.
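Indeed, substituting $\phi_i(x)=s_i/m$ together with the symmetric coordinates \eqref{si} into \eqref{E7relation} gives
$$s_0+2s_1+3s_2+4s_3+2s_4+3s_2+2s_1+s_0=2(s_0+2s_1+3s_2+2s_3+s_4)=m,$$
and halving yields \eqref{s0}.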
The automorphism $\vartheta$ permutes the roots $\alpha_1,\dots,\alpha_6$ which generate a root subsystem $R'$ of type $E_6$. The co-weight lattice $\check X=\Hom(\mathbb{Z} R',\mathbb{Z})$ has dual basis $\{\check \omega_1,\dots,\check \omega_6\}$ and we have
$$\check X^\vartheta=\check Y^\vartheta.$$
Hence $\mathcal{A}^{\vartheta}$ is also an affine space under $\mathbb{R}\otimes \check X^\vartheta$
and we may construct the affine root system $\Psi(R',\vartheta)$ as in section \ref{affine},
using the point $x_0=\tfrac{1}{2}\check \mu$. We have $\ell_\vartheta=4$ and
$\Psi(R',\vartheta)$ has basis $\psi_0,\dots,\psi_4$, where
$\psi_i=\alpha_i\vert_{\mathcal{A}^{\vartheta}}$ for $1\leq i\leq 4$ and
\begin{equation}\label{2E6relation}
\psi_0+2\psi_1+3\psi_2+2\psi_3+\psi_4=1/2.
\end{equation}
A rational point $x\in \mathcal{A}^{\vartheta}_\mathbb{Q}$ with $E_7$ Kac-diagram \eqref{si}
has ${^2E_6}$ Kac-diagram
$$s_0\ s_1\ s_2\Leftarrow s_3\ s_4.$$
This is clear for $s_1,\dots, s_4$ since $\psi_i$ is the restriction of $\phi_i$, and follows for $s_0$ by comparing the relations \eqref{E7relation} and \eqref{2E6relation}.
\subsection{Lie algebras of type $E_7$ and ${^2E_6}$}
Let $k$ be an algebraically closed field of characteristic $\neq 2,3$
and let $\mathfrak{g}$ be a simple Lie algebra over $k$ of type $E_7$ with automorphism group
$G=\Aut(\mathfrak{g})$.
We fix a maximal torus $T\subset G$ with Lie algebra $\mathfrak{t}$ and we choose an affine pinning $\widetilde\Pi=\{E_0,\dots,E_7\}$ for $T$ in $\mathfrak{g}$, numbered as in
\eqref{E7numbering}.
As above we let $\vartheta=r_1r_2r_3\in W_\Pi$ be the unique involution acting on $\Pi$ via the permutation $(0\,7)(1\,6)(2\,5)$. Recall from section \ref{affine-pinned} that $\vartheta$ has a lift $n\in N$ of order two defined via the homomorphism $\varphi:\SL_2\to G$ as in equation \eqref{npinned}.
Let $S=(T^\vartheta)^\circ$ be the identity component of the group of fixed-points of $\vartheta$ in $T$. The co-weight group of $S$ is $\check X^\vartheta$ and we have
$$T^\vartheta=S\times\langle \check \mu(-1)\rangle,$$
where $\check \mu=\check \omega_7$ is the nontrivial minuscule co-weight.
The automorphism $\varepsilon:=\Ad(\check \mu(-1))$ has order two; its fixed-point subalgebra $\mathfrak{g}^{\varepsilon}$ decomposes as
$$\mathfrak{g}^{\varepsilon}=\mathfrak{h}\oplus \mathfrak{z}$$
where $\mathfrak{z}=d\check \mu(k)$ and $\mathfrak{h}=[\mathfrak{g}^\varepsilon,\mathfrak{g}^\varepsilon]$, the derived subalgebra of $\mathfrak{g}^{\varepsilon}$,
has type $E_6$ and is generated by the root spaces $\mathfrak{g}_\alpha$ for
$\alpha\in\pm\{\alpha_1,\dots,\alpha_6\}$. Note that $\varepsilon$ and $n$ both lie in the subgroup $\varphi(\SL_2)$ and are conjugate therein.
The centralizer $C_G(\varepsilon)$ is the normalizer in $G$ of $\mathfrak{h}$, surjecting onto
$\Aut(\mathfrak{h})$, and is also the normalizer in $G$ of $\mathfrak{z}$.
The centralizer $C_G(\mathfrak{z})$ of $\mathfrak{z}$ is the identity component of $C_G(\varepsilon)$,
and the image of $C_G(\mathfrak{z})$ in $\Aut(\mathfrak{h})$ is the group
$$H:=\Aut(\mathfrak{h})^\circ$$
of inner automorphisms of $\mathfrak{h}$.
It follows that
we have an exact sequence
\begin{equation}\label{Hexact}
1\longrightarrow \mu(k^\times)\longrightarrow C_G(\mathfrak{z})\longrightarrow H\longrightarrow 1.
\end{equation}
\begin{prop}\label{symmetric} Let $\theta\in\Aut(\mathfrak{g})$ be a torsion automorphism whose order $m$ is nonzero in $k$. Then the centralizer $G^\theta$ has at most two components, and the following are equivalent.
\begin{enumerate}
\item The normalized Kac diagram of $\theta$ has the symmetric form
$\ \EVII{a}{b}{e}{c}{d}{c}{b}{a}.$
\item The $G$-conjugacy class of $\theta$ meets $Sn$.
\item The centralizer $G^\theta$ has two components and $n$ lies in the non-identity component.
\end{enumerate}
\end{prop}
\proof After conjugating by $G$, we may assume $\theta=\Ad(t)$, where $t=\check \lambda(\zeta)$, for some $\check \lambda\in \check X$ and $\zeta\in k^\times$ of order $m$.
We set $x=\frac{1}{m}\check \lambda$.
Over $\mathbb{C}$, the equivalence $1\Leftrightarrow 3$ follows from \cite[Prop. 2.1]{reeder:torsion}, whose proof, once we replace $\exp(x)$ by $\check\lambda(\zeta)$, is also valid over $k$.
We prove $1\Leftrightarrow 2$.
From the previous section the Kac coordinates of $\theta$ are symmetric precisely if
$$x=\check \mu+\vartheta\cdot x.$$
This is equivalent to having $\check \lambda-\frac{m}{2}\check \mu\in \check X^\vartheta$.
Evaluating at $\zeta$ this is in turn equivalent to having $t\varepsilon\in S$, or $t\in S\varepsilon$.
Since $n$ and $\varepsilon$ are conjugate in $\varphi(\SL_2)$ which centralizes $S$
(see Lemma \ref{nfixed}) we can replace $\varepsilon$ by $n$.
\qed
\begin{prop}\label{E7vsE6} Let $s\in S$, suppose $sn$ has order $m$ invertible in $k$, and let
$\theta=\Ad(sn)$ have symmetric normalized Kac diagram
$\EVII{a}{b}{e}{c}{d}{c}{b}{a}.$
Then
\begin{enumerate}
\item
$\theta$ normalizes $\mathfrak{h}$ and $\theta\vert_\mathfrak{h}$ is an outer automorphism of
$\mathfrak{h}$ with Kac diagram
$$a\ b\ c\Leftarrow d\ e.$$
\item
Every torsion outer automorphism of $\mathfrak{h}$ is conjugate to $\theta\vert_\mathfrak{h}$,
where $\theta=\Ad(sn)$ for some $s\in S$.
\item We have $\rank(\theta\vert_\mathfrak{h})\leq\rank(\theta)$.
\end{enumerate}
\end{prop}
\proof
Since $\Ad(n)=\vartheta$ normalizes $\mathfrak{h}$, acting there via a pinned automorphism,
and $s\in S\subset H$, we have that $\theta\vert_\mathfrak{h}$ is an outer automorphism of $\mathfrak{h}$. The relation between the Kac diagrams of $\theta$ and $\theta\vert_\mathfrak{h}$ follows from the discussion in section \ref{E7to2E6}.
Assertion 2 is now clear, since every Kac diagram $s_0\ s_1\ s_2\Leftarrow s_3\ s_4$ corresponds to $\Ad(sn)\vert_\mathfrak{h}$ for some $s\in S$. We can also prove assertion 2 directly, as follows: Since $\vartheta$ preserves the maximal torus $T\cap H$ of $H$, and permutes the simple roots $\{\alpha_1,\dots,\alpha_6\}$, every torsion outer automorphism of $\mathfrak{h}$ is $H$-conjugate to one of the form $\Ad(s)\vartheta$ for some $s\in (T\cap H)^\vartheta$ (see \cite[Lemma 3.2]{reeder:torsion}, whose proof is valid for $k$). We must therefore show that $(T\cap H)^\vartheta=S$. Since the Lie algebra of $S$ is $\mathfrak{t}^\vartheta$ which is contained in $(\mathfrak{t}\cap\mathfrak{h})^\vartheta$, it suffices to show that $\mathfrak{t}^\vartheta\subset\mathfrak{h}$. But $\mathfrak{t}^\vartheta$ has dimension four and is spanned by $d\check \alpha_i(1)+d\check \alpha_{7-i}(1)$ for $1\leq i\leq 4$, and these vectors lie in $\mathfrak{h}$.
Finally, a Cartan subspace for $\theta\vert_\mathfrak{h}$ is contained in a Cartan subspace for $\theta$, so assertion 3 is obvious.
\qed
Prop. \ref{E7vsE6} implies that the Kac diagram of any outer positive rank automorphism of $\mathfrak{h}$ must have the form $a\ b\ c\ \Leftarrow d\ e$, where
$\EVII{a}{b}{e}{c}{d}{c}{b}{a}$ is a positive rank diagram for $E_7$ appearing in section \ref{Eposrank}.
For example, there are two outer automorphisms of $\mathfrak{h}$ having order $m=2$,
namely the restrictions to $\mathfrak{h}$ of $\vartheta=\Ad(n)$ and $\vartheta_0=\Ad(n_0)$ where $n_0$ is a lift of $-1\in W(E_7)$. These are the involutions in $E_7$ numbered $2_c$ and $2_a$, respectively, in Table 20 of section \ref{Eposrank}. The Kac diagrams in $E_7$ and ${^2E_6}$ are shown:
$$ \begin{array}{c c c c c}
\vartheta:&\eVII{1}{0}{0}{0}{0}{0}{1}{0}&\qquad&\vartheta_0:&\eVII{0}{0}{0}{0}{0}{0}{0}{1}\\
&&&&\\
\vartheta\vert_{\mathfrak{h}}:&1\ 0\ 0\Leftarrow 0\ 0&\qquad&
\vartheta_0\vert_{\mathfrak{h}}:&0\ 0\ 0\Leftarrow 0\ 1.
\end{array}
$$
Both $\vartheta$ and $\vartheta_0$ act by $-1$ on $\mathfrak{z}$. It follows that their ranks in $E_6$ are one less than their ranks in $E_7$, namely
$$\rank(\vartheta\vert_\mathfrak{h})=2,\qquad \rank(\vartheta_0\vert_\mathfrak{h})=6.$$
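The drop in rank can be seen directly. As a sketch: since $\check\mu$ is minuscule we have $C_\mathfrak{g}(\mathfrak{z})=\mathfrak{g}^\varepsilon=\mathfrak{h}\oplus\mathfrak{z}$, and since $\mathfrak{z}\subset\mathfrak{g}(\vartheta,-1)$ consists of semisimple elements, a Cartan subspace $\mathfrak{c}'$ for $\vartheta$ may be chosen with $\mathfrak{z}\subset\mathfrak{c}'$. Then
$$\mathfrak{c}'\subset C_\mathfrak{g}(\mathfrak{z})=\mathfrak{h}\oplus\mathfrak{z},
\qquad
\mathfrak{c}'=(\mathfrak{c}'\cap\mathfrak{h})\oplus\mathfrak{z},
\qquad
\rank(\vartheta)=\rank(\vartheta\vert_\mathfrak{h})+1,$$
with $\mathfrak{c}'\cap\mathfrak{h}$ a Cartan subspace for $\vartheta\vert_\mathfrak{h}$; the same argument applies to $\vartheta_0$.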
\subsection{Positive rank gradings on $E_6$ (outer case)}
From Props. \ref{symmetric} and \ref{E7vsE6} we know that the Kac diagrams for positive rank gradings in outer type $E_6$ are obtained from symmetric positive-rank diagrams for $E_7$.
We now adapt our methods for the inner case to complete the classification of positive rank outer gradings of $E_6$.
We regard $W(E_6)$ as the subgroup of $W(E_7)$ generated by the reflections for the roots $\alpha_1,\dots,\alpha_6$. Equivalently, $W(E_6)$ is the centralizer of $\mathfrak{z}$ in $W(E_7)$.
The coset $-W(E_6)=\{w\vartheta_0:\ w\in W(E_6)\}$ consists of the elements in $W(E_7)$ acting by $-1$ on $\mathfrak{z}$ and contains both $\vartheta$ and $\vartheta_0$.
\begin{lemma}\label{nw}
Let $n_w\in N_G(\mathfrak{t})$ be a lift of an element $w\in -W(E_6)$.
Then $\Ad(n_w)$ normalizes $\mathfrak{h}$ and acts on $\mathfrak{h}$ as an outer automorphism.
\end{lemma}
\proof Since $w$ permutes the root spaces in $\mathfrak{h}$
it follows that $n_w$ normalizes $\mathfrak{h}$.
Let $n\in N_G(\mathfrak{t})$ be the lift of $\vartheta$ constructed above.
Both $n$ and $ n_w$ act by $-1$ on $\mathfrak{z}$, so $n\cdot n_w$ lies in the connected subgroup $C_G(\mathfrak{z})$ and the image of $n\cdot n_w$ in $\Aut(\mathfrak{h})$ lies in the subgroup $\Aut(\mathfrak{h})^\circ$ of inner automorphisms. Since $\Ad(n)=\vartheta$ is outer on $\mathfrak{h}$, it follows that $\Ad(n_w)$ is outer on $\mathfrak{h}$ as well.
\qed
Let $w\in W(E_7)$ be any element whose order $m$ is invertible in $k$ and such that $w$ has an eigenvalue $\zeta$ of order $m$ on $\mathfrak{t}$.
Recall that $\Kac(w)$ is the set of normalized Kac diagrams of torsion automorphisms
$\theta\in \Aut(\mathfrak{g})$ of order $m$ such that $\theta$ normalizes $\mathfrak{t}$ and acts on $\mathfrak{t}$
via $w$.
Let $\tau\in\Aut(\mathfrak{h})$ be a torsion outer automorphism with Kac coordinates
$a\ b\ c\Leftarrow d\ e$. We write
$$\tau\leadsto w$$ to mean that the symmetric Kac diagram
$\eVII{a}{b}{c}{d}{c}{b}{a}{e}$ appears in $\Kac(w)$. Let $\Kac(w)_{\sym}$ denote the set of symmetric diagrams in $\Kac(w)$.
\begin{prop}\label{prop:2E6} Let $\tau\in\Aut(\mathfrak{h})$ be a torsion outer
automorphism whose order $m>2$ is invertible in $k$. Assume that $\rank(\tau)>0$.
Then there exists $w\in -W(E_6)$ such that $\tau\leadsto w$. Moreover, we have
$$\rank(\tau)=\max\{\rank(w):\ w\in -W(E_6),\ \tau\leadsto w\}.$$
\end{prop}
\proof Let
$\mathfrak{c}\subset\mathfrak{h}(\tau,\zeta)$ be a Cartan subspace.
Then $\mathfrak{c}$ is contained in a $\tau$-stable Cartan subalgebra
$\mathfrak{t}'$ of $\mathfrak{h}$ so that $\mathfrak{c}=\mathfrak{t}'(\tau,\zeta)$. Conjugating by $H$,
we may assume that $\mathfrak{t}'\subset\mathfrak{t}$ and therefore $\mathfrak{t}=\mathfrak{t}'\oplus\mathfrak{z}$.
We have $\tau=\theta\vert_\mathfrak{h}$ for some $\theta\in \Aut(\mathfrak{g})$ normalizing $\mathfrak{h}$.
Then $\theta$ also normalizes the centralizer $\mathfrak{z}$ of $\mathfrak{h}$.
Since $\theta\vert_\mathfrak{h}$ is outer but $\theta^2\vert_\mathfrak{h}$ is inner, it follows that $\theta$ acts by $-1$ on $\mathfrak{z}$.
Since $\theta$ normalizes $\mathfrak{t}$, it projects to an element $w\in W(E_7)$.
The subgroup of $W(E_7)$ normalizing $\mathfrak{z}$ is $\{\pm 1\}\times W(E_6)$
and $W(E_6)$ is the subgroup centralizing $\mathfrak{z}$. It follows that $w\in -W(E_6)$.
Since the normalized Kac diagram of $\theta$ belongs to $\Kac(w)$
and $\tau=\theta\vert_\mathfrak{h}$, we have $\tau\leadsto w$. We also have
$$
\rank(w)=\dim\mathfrak{t}(w,\zeta)=\dim\mathfrak{t}'(w,\zeta).
$$
Suppose now that $w\in -W(E_6)$ is any element for which $\tau\leadsto w$.
Let $a\ b\ c\Leftarrow d\ e$ be the normalized Kac coordinates for $\tau$.
Since $\tau\leadsto w$ there is a lift $n_w\in N_G(\mathfrak{t})$ such that $\Ad(n_w)$ has normalized Kac diagram $\eVII{a}{b}{c}{d}{c}{b}{a}{e}$.
By Lemma \ref{nw}, $\Ad(n_w)$ acts on $\mathfrak{h}$ as an outer automorphism.
Hence there is $s\in S$ such that
$\Ad(n_w)\vert_\mathfrak{h}$ is $H$-conjugate to $\Ad(sn)\vert_\mathfrak{h}$.
From the exact sequence \eqref{Hexact}
there are $g\in C_G(\mathfrak{z})$ and $z\in Z$ such that
$$gn_wzg^{-1}=sn.$$
But $n_w$ is $Z$-conjugate to $n_wz$, since $w=-1$ on $\mathfrak{z}$.
Therefore $n_w$ and $sn$ are conjugate under $C_G(\mathfrak{z})$,
so $\Ad(sn)$ also has normalized Kac diagram $\eVII{a}{b}{c}{d}{c}{b}{a}{e}$.
By Prop. \ref{E7vsE6}, $\Ad(sn)\vert_\mathfrak{h}$ has Kac diagram $a\ b\ c\Leftarrow d\ e$,
and therefore $\Ad(sn)\vert_\mathfrak{h}$ is $H$-conjugate to $\tau$.
But
$$\Ad(sn)\vert_\mathfrak{h}=\Ad(gn_wzg^{-1})\vert_\mathfrak{h}=\Ad(gn_wg^{-1})\vert_\mathfrak{h}$$
is conjugate to $\Ad(n_w)\vert_\mathfrak{h}$, via the element $h=\Ad(g)\vert_\mathfrak{h}\in H$.
Thus, $\tau$ and $\Ad(n_w)\vert_\mathfrak{h}$ are $H$-conjugate. Since $\mathfrak{t}(w,\zeta)\subset \mathfrak{h}$, an $H$-conjugate of $\mathfrak{t}(w,\zeta)$ is contained in a Cartan subspace of $\tau$, so $\rank(w)\leq \rank(\tau)$. This completes the proof.
\qed
The Kac diagrams of positive rank for ${^2E_6}$ are obtained from symmetric positive rank diagrams for $E_7$, of which there are $20$ (see Table 20).
Three of these ($14_a, 8_d, 8_e$) have rank zero for ${^2E_6}$ as will be explained. Two more have order $m=2$ and are easily handled by known results. The ranks for the remaining $15$
are found as follows. Using Prop. \ref{prop:2E6}, it is enough to extract the symmetric diagrams
from the preliminary table for $E_7$ in section \ref{preliminary}. The results are shown below, where $r$ is the rank of $\tau$ in ${^2E_6}$.
\begin{center}
{\small Table 22: $\Kac(w)_{\sym}$ for certain $w$ in $-W(E_6)$ }
$$
\begin{array}{|c|c|c|c|l|l|}
\hline
m& w\in-W(E_6)&w\in W(E_7) & r&\Kac(w)_{\un}& \Kac(w)_{\sym}\\
\hline\hline
18& -E_6(a_1)& E_7 & 1& \EVII{1}{1}{1}{1}{1}{1}{1}{1}& \EVII{1}{1}{1}{1}{1}{1}{1}{1}\\
\hline
12& -E_6&E_7(a_2) & 1& \EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}&
\EVII{1}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
10& -(A_4+A_1)&D_6& 1& \EVII{\ast}{\ast}{1}{1}{1}{1}{1}{1}
&\EVII{0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}{ 0}\qquad\EVII {1}{ 0}{ 1}{ 1}{ 0}{ 1}{ 0}{ 1}
\qquad\EVII {1}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 1}\\
\hline
8& -D_5&D_5+A_1& 1& \EVII{1}{\ast}{1}{1}{1}{1}{1}{\ast}
&\EVII{0}{ 1}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}\qquad\EVII{1}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
6& -(3A_2)&E_7(a_4) & 3& \EVII{1}{0}{0}{0}{1}{0}{0}{1}&
\EVII{1}{0}{0}{0}{1}{0}{0}{1}\\
\hline
6& -(2A_2)&A_1+D_6(a_2) & 2& \EVII{1}{\ast}{1}{1}{0}{1}{0}{1}&
\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
6& -A_2&3A_1+D_4& 1&\EVII{1}{\ast}{1}{1}{1}{1}{\ast}{1}&
\EVII{0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
6& -(A_1+A_5'')&A_5'& 1&\EVII{\ast}{1}{\ast}{1}{1}{1}{1}{\ast}&
\EVII{0}{ 0}{ 0}{ 1}{ 0}{ 1}{ 0}{ 0}\qquad\EVII {1}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 1}
\qquad\EVII{0}{ 1}{ 1}{ 0}{ 0}{ 0}{ 1}{ 0}\\
&&&&&\EVII{1}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 1}\\
\hline
4& -D_4(a_1)&A_1+2A_3& 2&\EVII{1}{1}{1}{1}{\ast}{1}{1}{1}&
\EVII{0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\\
\hline
4& -(A_3+2A_1)&(A_1+A_3)''& 1&\EVII{1}{1}{1}{1}{\ast}{1}{1}{1}&
\EVII {0}{ 0}{ 0}{ 0}{ 1}{ 0}{ 0}{ 0}\qquad \EVII{0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}{ 0}\qquad
\EVII{1}{ 0}{ 1}{ 0}{ 0}{ 0}{ 0}{ 1}\\
\hline
\end{array}
$$
\end{center}
Case $14_a$ has rank zero since there are no elements of order $7$ or $14$ in $W(E_6)$.
Cases $8_d$ and $8_e$ have rank zero since $D_5$ is the only conjugacy class of elements of order $8$ in $W(E_6)$ and the Kac diagrams for $8_{d,e}$ do not appear in the row for $w=-D_5$ in Table 22 above.
\subsection{Little Weyl groups for ${^2E_6}$}
The little Weyl groups $W_H(\mathfrak{c},\tau)$ and their degrees are determined as follows.
{\bf Cases $18_a,\ 12_b,\ 6_a,\ 4_b,\ 2_a$:\ } These cases are stable, hence by Cor. \ref{vinberg:stable}
we have $W_H(\mathfrak{c},\tau)=W(\mathfrak{t}')^\theta$, where
$\mathfrak{t}'$ is the unique Cartan subalgebra of $\mathfrak{h}$ containing $\mathfrak{c}$. Then $W(\mathfrak{t}')^\theta$ and its degrees are determined from \cite{springer:regular}.
\begin{lemma}\label{m/2}
If $\dim \mathfrak{c}=1$ then $W_H(\mathfrak{c},\tau)\simeq\boldsymbol{\mu}_d$ for some integer $d$ divisible by $m/2$.
\end{lemma}
\proof Since $\dim\mathfrak{c}=1$ we have $W_H(\mathfrak{c},\tau)\simeq\boldsymbol{\mu}_d$ for some integer $d$.
We may assume $\tau=\Ad(n_w)\vert_\mathfrak{h}$, where $n_w\in N_G(\mathfrak{t})$ has image
$w\in-W(E_6)$. Then $n_w^2\in H_0$ has eigenvalue $\zeta^2$ on $\mathfrak{c}$, where $\zeta\in k^\times$ has order $m$ equal to the order of $\tau$.
It follows that $m/2$ divides $d$.
\qed
{\bf Cases $10_a, 10_b, 10_c$:\ } In these cases we have $m=10$ and $\dim\mathfrak{c}=1$ so $\boldsymbol{\mu}_5\leq W_H(\mathfrak{c},\tau)$, by Lemma \ref{m/2}. And $W_H(\mathfrak{c},\tau)\leq W_H(\mathfrak{c},\tau^2)$. Now $w^2$ has type $A_4$ in $E_6$, and all lifts of this type have little Weyl group $\boldsymbol{\mu}_5$, from Table 19.
\footnote {In fact, using Kac diagrams one can check that classes $10_{a,b,c}$ in Table 20 square to classes $5_{a,b,c}$, respectively, in Table 19.}
So $W_H(\mathfrak{c},\tau)\leq W_H(\mathfrak{c},\tau^2)\simeq\boldsymbol{\mu}_5$, and hence $W_H(\mathfrak{c},\tau)\simeq\boldsymbol{\mu}_5$.
{\bf Cases $8_c,\ 8_f$:\ } In these cases we have $m=8$ and $\dim\mathfrak{c}=1$ so $\boldsymbol{\mu}_4\leq W_H(\mathfrak{c},\tau)\leq \boldsymbol{\mu}_8$, by Lemma \ref{m/2}.
In case $8_f$ the diagram for $\theta'$ in Table 20 shows that $\tau$ is principal in $\Aut(\mathfrak{h})$. Hence
$W_H(\mathfrak{c},\tau)=N_{W_H}(\mathfrak{c})/Z_{W_H}(\mathfrak{c})$, by Prop. \ref{pan}. The element $w$ has type $-D_5$ and $\mathfrak{c}$ may be chosen to be the $-\zeta$-eigenspace for $y=-w$ in $\mathfrak{t}$. Since $\langle y\rangle$ acts faithfully on $\mathfrak{c}$, there is a copy of $\boldsymbol{\mu}_8$ in $W_H(\mathfrak{c},\tau)$.
In case $8_c$ we rule out $\boldsymbol{\mu}_4$ using invariant theory, as in section \ref{littleweylE}.
A degree-four invariant in $\mathfrak{h}_1$ would correspond to an element of
\begin{equation}\label{hom}
\Hom_{M}\left(
\Sym^2(\mathbf{2}\boxtimes \mathbf{2})^L,
\Sym^2(\mathbf{3}\boxtimes\mathbf{2})^R\right),
\end{equation}
arising from the action of $L\times M\times R=\SL_2\times\SL_2\times\SL_2$ on
$$\mathfrak{h}_1\quad\simeq \quad
\mathbf{2}\boxtimes\mathbf{2}\boxtimes\mathbf{1}\quad\oplus\quad
\mathbf{1}\boxtimes\mathbf{3}\boxtimes\mathbf{2}.
$$
But
$\Sym^2(\mathbf{2}\boxtimes \mathbf{2})^L$ is the trivial representation of $M$
and $\Sym^2(\mathbf{3}\boxtimes\mathbf{2})^R$ is the adjoint representation of $M$,
which is irreducible since $p>2$. Hence the vector space \eqref{hom} is zero.
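The two $M$-representations just used can be checked from the decomposition $\Sym^2(V\otimes W)\simeq\Sym^2V\otimes\Sym^2W\,\oplus\,\Lambda^2V\otimes\Lambda^2W$ (valid in characteristic $\neq 2$):
$$\Sym^2(\mathbf{2}\boxtimes \mathbf{2})^L=\left(\mathbf{3}\boxtimes\mathbf{3}\,\oplus\,\mathbf{1}\boxtimes\mathbf{1}\right)^L=\mathbf{1},
\qquad
\Sym^2(\mathbf{3}\boxtimes\mathbf{2})^R=\left(\Sym^2\mathbf{3}\boxtimes\mathbf{3}\,\oplus\,\mathbf{3}\boxtimes\mathbf{1}\right)^R=\mathbf{3},$$
using $\Lambda^2\mathbf{2}\simeq\mathbf{1}$ and $\Lambda^2\mathbf{3}\simeq\mathbf{3}$ for $\SL_2$; here $\mathbf{3}$ is the adjoint representation of $M$.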
{\bf Case $6_c$:\ } Here the centralizer of $w=-2A_2$ in $W(E_6)$ has order $108$ and contains a subgroup $W(A_2)$ acting trivially on the root subsystem spanned by the $2A_2$. It
follows that $|W_H(\mathfrak{c},\tau)|\leq 18$. Results in the next section show that $W_H(\mathfrak{c},\tau)$ contains the centralizer of a $[33]$-cycle in the symmetric group $S_6$, which has order $18$.
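One way to make the bound explicit, assuming $\mathfrak{c}$ lies in the span of the $2A_2$ subsystem (so that the complementary $W(A_2)$ fixes $\mathfrak{c}$ pointwise):
$$|W_H(\mathfrak{c},\tau)|\ \leq\ \frac{|C_{W(E_6)}(w)|}{|W(A_2)|}\ =\ \frac{108}{6}\ =\ 18,$$
which matches the lower bound of $18$ and forces equality.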
{\bf Case $6_g$:\ } Here $\dim\mathfrak{c}=1$ and $w^2$ has type $A_2$, of which all lifts in $H$ have little Weyl group $\boldsymbol{\mu}_6$. Hence $\boldsymbol{\mu}_3\leq W_H(\mathfrak{c},\tau)\leq \boldsymbol{\mu}_6$.
One checks that an $H_0$-invariant of degree $3$ on $\mathfrak{h}_1$ is a quadratic form on $S^2\mathbf{\check 4}$, which must be trivial. Hence $W_H(\mathfrak{c},\tau)\simeq \boldsymbol{\mu}_6$.
{\bf Cases $6_i$, $6_k$:\ } These cases have $m=6$ and $\dim\mathfrak{c}=1$ so $\boldsymbol{\mu}_3\leq W_H(\mathfrak{c},\tau)$, by Lemma \ref{m/2}. We show this is equality by finding an $H_0$-invariant of degree $3$ on $\mathfrak{h}_1$.
In case $6_i$, $\mathfrak{h}_1$ is the representation $\mathbf{3}\boxtimes\mathbf{\check 3}=\End(\mathbf{3})$ of $\SL_3\times \SL_3$, and the determinant is a cubic invariant.
In case $6_k$, $\mathfrak{h}_1$ is the representation $\mathbf{1}\oplus\mathbf{8}$ of $\Spin_7$,
where $\mathbf{8}$ is the Spin representation, which affords an invariant quadratic form $q$. The map $(x,v)\mapsto x\cdot q(v)$ is a cubic invariant.
{\bf Cases $4_e, 4_d$:\ } These cases have $m=4$ and $\dim\mathfrak{c}=1$ so $\boldsymbol{\mu}_2\leq W_H(\mathfrak{c},\tau)$, by Lemma \ref{m/2}. We show that in both cases there is a quartic invariant but no quadratic invariant.
In case $4_e$, $\mathfrak{h}_1$ is the representation $\Lambda^3(\mathbf{6})=\mathbf{6}\oplus\mathbf{14}$ of $\Sp_6\times T_1$, where $t\in T_1$ acts by $t,t^{-1}$ on the respective summands. Since $p>2$ both summands are irreducible so there is no invariant in bidegree $(1,1)$. In characteristic zero one computes that $\Sym^2(\mathbf{6})$ appears in $\Sym^2(\mathbf{14})$, giving a nonzero quartic $H_0$-invariant, which persists in positive characteristic by Lemma \ref{invarianttheory}.
In case $4_d$, $\mathfrak{h}_1$ is the representation $\mathbf{2}\boxtimes\mathbf{8}$ of $\SL_2\times\Spin_7$. Since this representation is irreducible and symplectic there is no quadratic invariant. To find a quartic invariant we may assume the characteristic of $k$ is zero. Write
$$\mathfrak{h}_1=\mathbf{8}_+\oplus \mathbf{8}_-,$$
according to the characters $t\mapsto t^{\pm 1}$ of the maximal torus of $\SL_2$.
One checks that
$$\dim\left[\Sym^{4-i}(\mathbf{8_+})\otimes \Sym^i(\mathbf{8_-})\right]^{\Spin_7}=
\begin{cases}
1&\quad\text{for\ $i\neq 2$}\\
2&\quad\text{for\ $i=2$}.
\end{cases}
$$
Since this summand affords the character $t^{4-2i}$ of the maximal torus of $\SL_2$,
it follows that there is a one-dimensional space of quartic invariants in $\mathfrak{h}_1$ for $\SL_2\times\Spin_7$.
\subsection{Standard subalgebras and Kostant sections}\label{E6outerKostant}
Fix a torsion automorphism $\theta=\Ad(s)\vartheta$ of $\mathfrak{h}=\mathfrak{e}_6$,
with $s\in S=(T^\vartheta)^\circ$,
and let $\tau\in\Aut(\mathfrak{h})$ be another torsion automorphism of the form
$\tau=\Ad(t)$ (inner case) or $\tau=\Ad(t)\vartheta$ (outer case), for some $t\in S$.
We call the fixed-point subalgebra $\mathfrak{h}^\tau$ a {\bf standard subalgebra}.
The standard subalgebras $\mathfrak{h}^\tau$ for inner automorphisms $\tau=\Ad(t)$
are in bijection with proper subdiagrams of the affine diagram of type $E_6$;
these subalgebras all contain $\mathfrak{t}$ as a Cartan subalgebra. The standard subalgebras $\mathfrak{h}^\tau$ for outer automorphisms $\tau=\Ad(t)\vartheta$
are in bijection with proper subdiagrams of the affine diagram of type ${^2E_6}$;
these subalgebras all contain $\mathfrak{t}^\vartheta$ as a Cartan subalgebra.
The automorphisms $\theta$ and $\tau$ commute,
so $\theta$ acts on the standard subalgebra
$\mathfrak{k}:=\mathfrak{g}^\tau$. If $\tau$ is inner and $\vartheta$ acts nontrivially on the subdiagram for $\mathfrak{k}$ then $\theta\vert_\mathfrak{k}$ is outer, because $\theta$
permutes a basis of the root-system of $\mathfrak{t}$ in $\mathfrak{k}$. And if $\tau$ is outer then
$\theta\vert_\mathfrak{k}$ must be inner, because $\theta$ acts trivially on the Cartan subalgebra $\mathfrak{t}^\vartheta$ of $\mathfrak{k}$.
Suppose now that $\rank(\theta\vert_\mathfrak{k})=\rank(\theta)$, so that there is a Cartan subspace $\mathfrak{c}$ for $\theta$ such that $\mathfrak{c}\subset\mathfrak{k}$. Let $K=\Aut(\mathfrak{k})^\circ$ and let $\widetilde K$ be the connected subgroup of $H$ corresponding to $\mathfrak{k}$. These groups are normalized by $\theta$ and the natural map $\widetilde K\to K$ restricts to a surjection
$$\widetilde K_0:=(\widetilde K^\theta)^\circ\longrightarrow
(K^\theta)^\circ=: K_0
$$
which induces an isomorphism
$$N_{\widetilde K_0}(\mathfrak{c})/Z_{\widetilde K_0}(\mathfrak{c})\simeq
N_{K_0}(\mathfrak{c})/Z_{K_0}(\mathfrak{c}).
$$
It follows that we have an embedding of little Weyl groups
$$W_K(\mathfrak{c},\theta\vert_\mathfrak{k})\hookrightarrow W_H(\mathfrak{c},\theta).$$
With the exception of number $2_c$, the next-to-last column of Table 23 below gives
the Kac diagram of an
$H$-conjugate $\theta'$ of $\theta$ whose subdiagram of $1$'s determines a standard subalgebra $\mathfrak{k}$ (given in the last column) such that
$$\rank(\theta\vert_\mathfrak{k})=\rank(\theta)\qquad \text{and}\qquad
W_K(\mathfrak{c},\theta\vert_\mathfrak{k})=W_H(\mathfrak{c},\theta),
$$
and such that $\theta\vert_\mathfrak{k}$ satisfies the conditions of Lemma \ref{vinberg:stable}.
From \cite[Prop. 5.2]{levy:thetap} it follows that $\theta$ admits a Kostant section contained in $\mathfrak{k}$.
In the table below we indicate $\mathfrak{k}=\mathfrak{h}^\tau$ as the subdiagram of $1$'s in a Kac diagram of type $E_6$ or ${^2E_6}$ according to whether $\tau$ is inner or outer. Recall that $\theta\vert_\mathfrak{k}$ is then outer or inner, respectively. The superscript ${^2X}$ means that $\theta\vert_\mathfrak{k}$ is outer. The notation ${^2(2A_2)}$ indicates that $\mathfrak{k}\simeq \mathfrak{sl}_3\oplus\mathfrak{sl}_3$ and $\theta$ swaps the two factors.
In the exceptional case $2_c$, previous work on involutions \cite[Prop. 23]{kostant-rallis}
(for $k=\mathbb{C}$) and \cite[6.3]{levy:involutions} (for $p\neq 2$) shows that there is a $\theta$-stable subalgebra $\mathfrak{k}\simeq \mathfrak{sl}_3$ containing $\mathfrak{c}$ as a Cartan subalgebra,
and $W_H(\mathfrak{c},\theta)$ is just the ordinary Weyl group of $\mathfrak{c}$ in $\mathfrak{k}$. In this case $\theta$ is the unique (up to conjugacy) pinned involution of $\mathfrak{sl}_3$, which is known to have a Kostant section.
\begin{center}
{\small Table 23: The gradings of positive rank in type $E_6$ (outer case)}
$$
{\renewcommand{\arraystretch}{1.3}
\begin{array}{cccccccc}
\hline
\text{ No.} &\theta\vert_\mathfrak{h}& w\!\in\!-W(E_6)& w\in W(E_7)&
W_H(\mathfrak{c},\theta\vert_\mathfrak{h})&\text{degrees}&\theta'\vert_\mathfrak{h}&\mathfrak{k}\\
\hline
18_a&\outEVI{1}{1}{1}{1}{1} &-E_6(a_1)&E_7&\boldsymbol{\mu}_{9}&9&
\outEVI{1}{1}{1}{1}{1}&{^2E_6}\\
12_b
&\outEVI{1}{1}{0}{1}{1} &-E_6&E_7(a_2)&\boldsymbol{\mu}_{12}&12&
\outEVI{-\!2}{1}{1}{1}{1}&{^2E_6}\\
10_b&\outEVI{1}{1}{0}{1}{0} &-(A_4+A_1)&D_6&\boldsymbol{\mu}_{5}&5&
\outEVI{-\!3}{1}{1}{1}{1}&{^2E_6}\\
10_a&\outEVI{1}{0}{1}{0}{1} &-(A_4+A_1)&D_6&\boldsymbol{\mu}_{5}&5&
\outEVI{-1}{1}{1}{1}{-\!\!1}&{^2A_5}\\
10_c&\outEVI{0}{1}{0}{1}{1} &-(A_4+A_1)&D_6&\boldsymbol{\mu}_{5}&5&
\outEVI{9}{-\!5}{\ 1}{1}{1}&{^2D_5}\\
8_f&\outEVI{1}{0}{0}{1}{1} &-D_5&D_5+A_1&\boldsymbol{\mu}_{8}&8&
\outEVI{-\!4}{1}{1}{1}{1}&{^2E_6}\\
8_c&\outEVI{0}{1}{0}{1}{0} &-D_5&D_5+A_1&\boldsymbol{\mu}_{8}&8&
\outEVI{1}{1}{1}{1}{-\!4}&C_4\\
6_a&\outEVI{1}{0}{0}{1}{0} &-(3A_2)&E_7(a_4)&G_{25}&6,9,12&
\outEVI{-\!5}{1}{1}{1}{1}&{^2E_6}\\
6_c&\outEVI{0}{1}{0}{0}{1} &-(2A_2)&D_6(a_2)+A_1&G(3,1,2)&3,6&
\outEVI{2}{1}{1}{1}{-6}&{^2A_5}\\
6_g&\outEVI{0}{0}{0}{1}{1} &-A_2&D_4+3A_1&\boldsymbol{\mu}_{6}&6&
\outEVI{-3}{\ 0}{\ 1}{1}{1}&B_3\\
6_i&\outEVI{0}{0}{1}{0}{0} &-(A_5+A_1)&A_5'&\boldsymbol{\mu}_{3}&3&
\outEVI{0}{1}{1}{0}{-\!2}&{^2(2A_2)}\\
6_k
&\outEVI{1}{1}{0}{0}{0} &-(A_5+A_1)&A_5'&\boldsymbol{\mu}_{3}&3&
\outEVI{0}{1}{1}{2}{-\!6}&{^2(2A_2)}\\
4_b&\outEVI{0}{0}{0}{1}{0} &-D_4(a_1)&2A_3+A_1&G_8&8,12&
\outEVI{-\!6}{1}{1}{1}{1}&{^2E_6}\\
4_d&\outEVI{0}{1}{0}{0}{0} &-(A_3+2A_1)&(A_3+A_1)''&\boldsymbol{\mu}_{4}&4&
\outEVI{1}{1}{1}{-2}{0}&A_3\\
4_e&\outEVI{1}{0}{0}{0}{1} &-(A_3+2A_1)&(A_3+A_1)''&\boldsymbol{\mu}_{4}&4&
\outEVI{-1}{-\!\!1}{1}{1}{0}&B_2\\
2_a&\outEVI{0}{0}{0}{0}{1} &-1&7A_1&W(E_6)&2,5,6,8,9,12&
\outEVI{-\!7}{1}{1}{1}{1}&{^2E_6}\\
2_c&\outEVI{1}{0}{0}{0}{0} &-(4A_1)&(3A_1)'&W(A_2)&2,3&
---&{^2A_2}\\
\hline
\end{array}}
$$
\end{center}
Over the past two decades the orbital architecture of giant planets has expanded from a single order of magnitude in the Solar System (5--30 AU) to over five orders of magnitude among extrasolar planetary systems (0.01--5000 AU; Figure~\ref{fig:mass_sma}).
High-contrast adaptive optics (AO) imaging has played a critical role in this advancement by probing separations beyond $\sim$10~AU and masses $\gtrsim$1~\mbox{$M_\mathrm{Jup}$}. Uncovering planetary-mass objects at
hundreds and thousands of AU has fueled novel theories of planet formation and migration, inspiring a more complex framework for the origin of giant planets in which multiple mechanisms (core accretion, dynamical scattering, disk instability, and cloud fragmentation) operate on different timescales and orbital separations. In addition to probing unexplored orbital distances, imaging entails directly capturing photons that originated in planetary atmospheres, providing unparalleled information about the initial conditions, chemical composition, internal structure, atmospheric dynamics, photospheric condensates, and physical properties of extrasolar planets.
These three science goals --- the architecture, formation, and atmospheres of gas giants --- represent the main motivations to directly image and spectroscopically characterize extrasolar giant planets.
This pedagogical review summarizes the field of direct imaging in the era leading up to and transitioning towards extreme adaptive optics systems, the \emph{James Webb Space Telescope}, \emph{WFIRST}, and the thirty meter-class telescopes. This ``classical'' period of high-contrast imaging spanning approximately 2000 to 2015 has set the stage and baseline expectations for the next generation of instruments and telescopes that will deliver ultra-high contrasts and reach unprecedented sensitivities.
In addition to the first images of \emph{bona fide} extrasolar planets, this early phase experienced a number of surprising discoveries including planetary-mass companions orbiting brown dwarfs; planets on ultra-wide orbits beyond 100~AU; enigmatic (and still poorly understood) objects like the optically-bright companion to Fomalhaut; and unexpectedly red, methane-free, and dust-rich atmospheres at low surface gravities. Among the most important results has been the gradual realization that massive planets are exceedingly rare on wide orbits; only a handful of discoveries have been made despite thousands of hours spent on hundreds of targets spanning over a dozen surveys. Although dismaying, these null detections provide valuable information about the efficiency of planet formation and the resulting demographics at wide separations. Making use of mostly general, non-optimized facility instruments and early adaptive optics systems has also led to creative observing strategies and post-processing solutions for PSF subtraction.
\begin{figure*}
\vskip -1.2 in
\hskip -.3 in
\resizebox{7.7in}{!}{\includegraphics{pasp_review_mass_sma_fig_wide_threeepochs.eps}}
\vskip -1.1 in
\caption{Substellar companions discovered via radial velocities (gray circles) and direct imaging (red circles) as of 1996, 2006, and 2016. Over this twenty year period the number of directly imaged companions below 10~\mbox{$M_\mathrm{Jup}$} \ has steadily increased from one (2M1207--3932~b in 2004) to over a dozen. The surprising discovery of planetary companions at extremely wide separations of hundreds to thousands of AU has expanded the architecture of planetary systems to over five orders of magnitude. Note that the radial velocity planet masses are minimum masses ($m \sin i$) and the directly imaged companion masses are inferred from evolutionary models. RV-detected planets are from exoplanets.org (\citealt{Wright:2011cqa}; \citealt{Han:2014hn}) and are supplemented with a compilation of RV-detected brown dwarfs from the literature. Imaged companions are from \citet{Deacon:2014ey} together with other discoveries from the literature. \label{fig:mass_sma} }
\end{figure*}
Distinguishing giant planets from low-mass brown dwarfs is a well-trodden intellectual exercise (e.g., \citealt{Oppenheimer:2000vo}; \citealt{Basri:2006kv}; \citealt{Chabrier:2007ty}; \citealt{Chabrier:2014up}).
Except in the few rare cases where the architectures or abundance patterns of individual systems offer clues about a specific formation route, untangling the origin of imaged planetary-mass companions must necessarily be addressed as a population and in a statistical manner. This review is limited in scope to self-luminous companions detected in thermal emission at near- and mid-infrared wavelengths (1--5~$\mu$m) with masses between $\approx$1--13~\mbox{$M_\mathrm{Jup}$} \ with the understanding that multiple formation routes can probably produce objects in this ``planetary'' mass regime (see Section~\ref{sec:dbl}).
Indeed, the separations regularly probed in high-contrast imaging surveys---typically tens to hundreds of AU---
lie beyond the regions in protoplanetary disks containing the highest surface densities of solids where core accretion operates most efficiently (e.g., \citealt{Andrews:2015vzb}).
Direct imaging has therefore predominantly surveyed the wide orbital distances where alternative formation and migration channels like disk instability, cloud fragmentation, and planet-planet scattering are most likely to apply.
In the future, the most efficient strategy to detect even smaller super-Earths and terrestrial worlds close to their host stars will be in reflected light from a dedicated space-based optical telescope.
By focusing on the optimal targets, early discoveries, largest surveys, and statistical results, this observationally-oriented overview aims to complement recent reviews on giant planet formation (\citealt{Chabrier:2007ty}; \citealt{Helled:2013et}; \citealt{Chabrier:2014up}; \citealt{Helling:2014hb}), atmospheric models (\citealt{Marley:2007uc}; \citealt{Helling:2008gs}; \citealt{Allard:2012fp}; \citealt{Marley:2015bj}), evolutionary models (\citealt{Burrows:2001wq}; \citealt{Fortney:2009jt}), observational results (\citealt{Absil:2009dja}; \citealt{Lagrange:2014gp}; \citealt{Bailey:2014dc}; \citealt{Helling:2014fx}; \citealt{Madhusudhan:2014wu}; \citealt{Quanz:2015fg}; \citealt{Crossfield:2015jd}), and high contrast imaging instruments and speckle suppression techniques (\citealt{Guyon:2006jp}; \citealt{Beuzit:2007tj}; \citealt{Oppenheimer:2009gh}; \citealt{Biller:2008kk}; \citealt{Marois:2010hs}; \citealt{Traub:2010vo}; \citealt{Mawet:2012il}; \citealt{Davies:2012ds}).
\begin{figure*}
\vskip -1.2 in
\hskip -.2 in
\resizebox{7.3in}{!}{\includegraphics{contrast_sensitivitylimits.eps}}
\vskip -1 in
\caption{The influence of contrast (left), age (middle), and distance (right) on mass sensitivity to planets.
The bold curve in each panel shows the 50\% sensitivity contour based on the median
NICI contrast from \citet{Biller:2013fu} for a 30~Myr K1 star at 30~pc. The left panel shows the effect of
increasing or decreasing the fiducial contrast curve from --10 to +5 magnitudes. Similarly, the middle and
right panels show changes to the fiducial age spanning 5~Myr to 5~Gyr and distances spanning 10~pc to 100~pc.
Planet absolute magnitudes depend steeply on mass and age. As a result, a small gain in contrast in the
brown dwarf regime corresponds to a large
gain in limiting mass, but the same contrast gain in the planetary regime translates into a much smaller gain in mass.
Mass sensitivity is particularly sensitive to stellar age, while closer distances mean smaller physical separations can be studied.
\label{fig:contrast_sensitivitylimits} }
\end{figure*}
\section{Optimal Targets for High-Contrast Imaging}
Planets cool over time by continually radiating away the heat generated during their formation and gravitational contraction.
Fundamental scaling relations for the evolution of brown dwarfs and giant planets
can be derived analytically with basic assumptions of a polytropic equation of state and
degenerate electron gas (\citealt{Stevenson:1991ie}; \citealt{Burrows:1993kt}).
Neglecting the influence of lithium burning, deuterium burning, and atmospheres,
which act as partly opaque wavelength-dependent boundary conditions,
substellar objects with different masses cool in a similar monotonic fashion over time:
\begin{equation}
L_\mathrm{bol} \propto t^{-5/4} M^{5/2}.
\end{equation}
\noindent Here $L_\mathrm{bol}$ is the bolometric luminosity, $t$ is the object's age, and $M$
is its mass.
This steep mass-luminosity relationship means that luminosity tracks are compressed in the
brown dwarf regime ($\approx$13--75~\mbox{$M_\mathrm{Jup}$}) and
fan out in the planetary regime with significant consequences for
high-contrast imaging.
A small gain in contrast in the brown dwarf regime results in a large gain in the
limiting detectable mass, whereas the same contrast gain in the planetary regime
has a much smaller influence on limiting mass (Figure~\ref{fig:contrast_sensitivitylimits}).
It is much more difficult, for example, to improve sensitivity from
10~\mbox{$M_\mathrm{Jup}$} \ to 1~\mbox{$M_\mathrm{Jup}$} \ than from 80~\mbox{$M_\mathrm{Jup}$} \ to 10~\mbox{$M_\mathrm{Jup}$}.
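The idealized power law above can be evaluated directly; the numbers below are illustrative only and inherit the relation's neglect of deuterium burning and atmospheric boundary conditions.

```python
import math

# Illustrative evaluation of the approximate cooling relation from the
# text, L ∝ t^(-5/4) M^(5/2), which neglects deuterium burning and
# atmospheric boundary conditions.

def lum_ratio(t1_myr, m1_mjup, t2_myr, m2_mjup):
    """Ratio L(t1, m1) / L(t2, m2) under the power-law scaling."""
    return (t1_myr / t2_myr) ** (-5.0 / 4.0) * (m1_mjup / m2_mjup) ** (5.0 / 2.0)

# At fixed mass, a planet fades by a factor of ~316 (~6.25 mag)
# between 10 Myr and 1 Gyr:
fade = 1.0 / lum_ratio(1000.0, 5.0, 10.0, 5.0)
fade_mag = 2.5 * math.log10(fade)

# At fixed age, a 10 MJup planet outshines a 1 MJup planet by the
# same factor of 10^2.5, illustrating the steep mass dependence:
mass_contrast = lum_ratio(30.0, 10.0, 30.0, 1.0)
```

Note that in this idealized form, one magnitude of contrast gain always corresponds to the same factor of $10^{0.4/2.5} \approx 1.4$ in limiting mass; the much stronger asymmetry between the brown dwarf and planetary regimes seen in Figure~\ref{fig:contrast_sensitivitylimits} reflects the physics the power law omits.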
Moreover, sensitivity to low masses and close separations is highly dependent on
a star's youth and proximity.
In terms of limiting detectable planet mass, observing younger and closer stars is
equivalent to improving speckle suppression or integrating for longer.
Note that in a contrast-limited regime
the absolute magnitude of the host star is also important.
The same contrast around low-mass stars and brown dwarfs corresponds to lower limiting masses compared to
higher-mass stars.
Young stars are therefore attractive targets for two principal reasons: planets are their most luminous at early ages, and the relative contrast between young giant planets and their host stars is lower than at older ages because stellar luminosities plateau on the main sequence while planets and brown dwarfs continue to cool, creating a luminosity bifurcation. For example, evolutionary models predict the $H$-band contrast between a 5 \mbox{$M_\mathrm{Jup}$} \ planet orbiting a 1 \mbox{$M_{\odot}$} \ star to be $\approx$25~mag at 5 Gyr but only $\approx$10~mag at 10 Myr (\citealt{Baraffe:2003bj}; \citealt{Baraffe:2015fw}). At old ages beyond $\sim$1~Gyr, 1--10 \mbox{$M_\mathrm{Jup}$} \ planets are expected to have effective temperatures between 100--500~K and cool to the late-T and Y spectral classes with near-infrared absolute magnitudes $\gtrsim$18~mag (\citealt{Dupuy:2013ks}).
Below are overviews of the most common classes of targets in direct imaging surveys, highlighting the
scientific context, strengths and drawbacks, and observational results for each category.
\subsection{Young Moving Group Members}
In principle, younger stars make better targets for imaging planets. In practice, the youngest T Tauri stars reside in star-forming regions beyond 100~pc. At these distances, the typical angular scales over which high-contrast imaging can probe planetary masses translate to wide physical separations beyond $\sim$20--50 AU (with some notable exceptions with non-redundant aperture masking and extreme AO systems). Moreover, these extremely young ages of $\sim$1--10~Myr correspond to timescales when giant planets may still be assembling through core accretion and therefore might have lower luminosities than at slightly later epochs (e.g., \citealt{Marley:2007bf}; \citealt{Molliere:2012go}; \citealt{Marleau:2013bh}). On the other hand, the closest stars to the Sun probe the smallest physical scales but their old ages of $\sim$1--10 Gyr mean that high contrast imaging only reaches brown dwarf masses.
\begin{figure*}
\vskip -1.2 in
\hskip -.6 in
\resizebox{7.8in}{!}{\includegraphics{ymg_sensitivitylimits.eps}}
\vskip -1.2 in
\caption{Typical sensitivity maps for high-contrast imaging observations of T Tauri stars (5~Myr at 150~pc),
young moving group members (30~Myr at 30~pc), and field stars (5~Gyr at 10~pc). Young moving group members are ``Goldilocks targets''---
not too old, not too distant. Black curves denote 10\% and 90\% contour
levels assuming circular orbits, Cond hot-start evolutionary models (\citealt{Baraffe:2003bj}), and the median NICI contrast curve
from \citet{Biller:2013fu}. Gray and orange circles are RV-detected and directly imaged companions, respectively (see Figure~\ref{fig:mass_sma}). \label{fig:ymg_sensitivitylimits} }
\end{figure*}
Young moving groups--- coeval, kinematically comoving associations of young stars and brown dwarfs in the solar neighborhood--- represent a compromise in age ($\approx$10--150~Myr) and distance ($\approx$10--100 pc) between the nearest star-forming regions and field stars (Figure~\ref{fig:ymg_sensitivitylimits}; see \citealt{Zuckerman:2004ku}, \citealt{Torres:2008vq}, and \citealt{Mamajek:2016ik}). One distinct advantage they hold is that their members span a wide range of masses and can be used to age-date each group from lithium depletion boundaries and isochrone fitting (e.g., \citealt{Bell:2015gw}; \citealt{Herczeg:2015bp}). As a result, the ages of these groups are generally much better constrained than those of isolated young stars. For these reasons young moving group members have emerged as the primary targets for high-contrast imaging planet searches over the past decade (e.g., \citealt{Chauvin:2010hm}; \citealt{Biller:2013fu}; \citealt{Brandt:2014hc}).
Identifying these nearby unbound associations of young stars is a difficult task. Each moving group's $UVW$ space velocities cluster closely together with small velocity dispersions of $\approx$1--2~km s$^{-1}$ but individual stars in the same group can be separated by tens of parsecs in space and tens of degrees across the sky. $UVW$ kinematics can be precisely determined if the proper motion, radial velocity, and parallax to a star are known. Incomplete knowledge of one or more of these parameters (usually the radial velocity and/or distance) means the $UVW$ kinematics are only partially constrained, making it challenging to unambiguously associate stars with known groups. Historically, most groups themselves and new members of these groups were found with the aid of the Tycho Catalog and \emph{Hipparcos}, which provided complete space velocities for bright stars together with ancillary information pointing to youth such as infrared excess from IRAS; X-ray emission from the $Einstein$ or $ROSAT$ space observatories; strong H$\alpha$ emission; and/or \ion{Li}{1} $\lambda$6708 absorption. As a result, most of the faint low-mass stars and brown dwarfs have been neglected.
\begin{figure}
\vskip -.45 in
\hskip -1.1 in
\resizebox{6.3in}{!}{\includegraphics{ymg_hist_paspreview.eps}}
\vskip -1.4 in
\caption{The census of members and candidates of young moving groups. Prior to 2010 the M dwarf members were
largely missing owing to their faintness and lack of parallax measurements from $Hipparcos$.
Concerted efforts to find low-mass members over the past few years have filled in this population and
generated a wealth of targets for dedicated direct imaging planet searches. \label{fig:newmgmembers} }
\end{figure}
In recent years the population of ``missing'' low-mass stars and brown dwarfs in young moving groups has been increasingly uncovered as a result of large all-sky dedicated searches (Figure \ref{fig:newmgmembers}; \citealt{Shkolnik:2009dx}; \citealt{Lepine:2009ey}; \citealt{Schlieder:2010gk}; \citealt{Kiss:2010cb}; \citealt{Rodriguez:2011gb}; \citealt{Schlieder:2012gj}; \citealt{Schlieder:2012gu}; \citealt{Shkolnik:2012cs}; \citealt{Malo:2013gn}; \citealt{Moor:2013cy}; \citealt{Rodriguez:2013fv}; \citealt{Malo:2014dk}; \citealt{Gagne:2014gp}; \citealt{Riedel:2014ce}; \citealt{Kraus:2014ur}; \citealt{Gagne:2015ij}; \citealt{Gagne:2015dc}; \citealt{Binks:2015bu}). Parallaxes and radial velocities are generally not available for these otherwise anonymous objects, but by adopting the $UVW$ kinematics of known groups, it is possible to invert the problem and predict a distance, radial velocity, and membership probability. Radial velocities are observationally cheaper to acquire en masse compared to parallaxes, so membership confirmation has typically been accomplished with high-resolution spectroscopy. The exceptions are for spectroscopic binaries, which require multiple epochs to measure a systemic velocity, and rapidly rotating stars with high projected rotational velocities ($v$sin$i$), which produce large uncertainties in radial velocity measurements. The abundance of low-mass stars in the field means that some old interlopers will inevitably share similar space velocities with young moving groups. These must be distilled from bona fide membership lists on a case-by-case basis (\citealt{Barenfeld:2013bf}; \citealt{Wollert:2014go}; \citealt{Janson:2014gz}; \citealt{Mccarthy:2014jp}; \citealt{Bowler:2015ch}).
The current census of directly imaged planets and companions near the deuterium-burning limit is listed in Table~\ref{tab:planets}. Many of these host stars are members of young moving groups. $\beta$~Pic, 51~Eri, and possibly TYC~9486-927-1 are members of the $\beta$~Pic moving group (\citealt{Zuckerman:2001go}; \citealt{Feigelson:2006tz}; \citealt{Deacon:2016dg}). HR~8799 and possibly $\kappa$~And are thought to be members of Columba (\citealt{Zuckerman:2011bo}). 2M1207--3932 is in the TW~Hydrae Association (\citealt{Gizis:2002je}). GU~Psc and 2M0122--2439 are likely members of the AB Dor moving group (\citealt{Malo:2013gn}; \citealt{Naud:2014jx}; \citealt{Bowler:2013ek}).
AB~Pic, 2M0103--5515, and 2M0219--3925 are in Tuc-Hor (\citealt{Song:2003hh}; \citealt{Delorme:2013bo}; \citealt{Gagne:2015ij}), though the masses of their companions are somewhat uncertain and may not reside in the planetary regime. In addition, the space motion of VHS~1256--1257 is well-aligned with the $\beta$~Pic or possibly AB~Dor moving groups (\citealt{Gauza:2015fw}; \citealt{Stone:2016fz}), but the lack of lithium in the host indicates the system is older and may be a kinematic interloper.
The number of moving groups in the solar neighborhood is still under debate, but at least five are generally considered to be well-established: the TW Hydrae Association, $\beta$~Pic, Tuc-Hor, Carina, and AB Dor. Other associations have been proposed and may constitute real groups which formed together and are useful for age-dating purposes, but may require more scrutiny to better understand their size, structure, physical nature, and relationship to other groups. \citet{Mamajek:2016ik} provide a concise up-to-date census of their status and certitude. Soon, micro-arcsecond astrometry and parallaxes from $Gaia$ will dramatically change the landscape of nearby young moving groups by readily identifying overlooked groups, missing members, and even massive planets on moderate orbits.
\subsection{T Tauri Stars, Herbig Ae/Be Stars, and \\ Transition Disks}
Despite their greater distances ($\approx$120--150~pc), the extreme youth ($\approx$1--10~Myr) of
T Tauri stars and their massive counterparts, Herbig Ae/Be stars,
in nearby star-forming regions like Taurus, the Sco-Cen complex, and $\rho$ Oph
have made them attractive targets to search
for planets with direct imaging and probe the earliest stages of planet formation when gas giants are still assembling
(\citealt{Itoh:2008ta}; \citealt{Ireland:2011id}; \citealt{Mawet:2012fz}; \citealt{Janson:2013ke};
\citealt{LaFreniere:2014dj}; \citealt{Daemgen:2015fp}; \citealt{Quanz:2015fg}; \citealt{Hinkley:2015dga}).
One of the most
surprising results from these efforts has been the unexpected discovery of planetary-mass companions
on ultra-wide orbits at several hundred AU from their host stars
(Table~\ref{tab:planets}).
These wide companions pose challenges to canonical theories of planet formation
via core accretion and disk instability and may instead represent the tail end of brown
dwarf companion formation, perhaps as opacity-limited fragments of
turbulent, collapsing molecular cloud cores (e.g., \citealt{Low:1976wt}; \citealt{Silk:1977il}; \citealt{Boss:2001vw}; \citealt{Bate:2009br}).
Many (and perhaps most) of these young planetary-mass companions harbor accreting circum-planetary disks, which
provide valuable information about mass accretion rates, circum-planetary disk structure,
formation route, and the moon-forming capabilities of young planets.
Accretion luminosity is partially radiated in line emission, making H$\alpha$ a potentially
valuable tracer to find and characterize protoplanets (\citealt{Sallum:2015ej}).
For example, \citet{Zhou:2014ct} find that up to 50\% of
the accretion luminosity in the $\approx$15~\mbox{$M_\mathrm{Jup}$} \ companion GSC~6214-210~B
is emitted at H$\alpha$.
Searching for these nascent protoplanets has become a leading motivation to achieve AO correction
in the optical and is actively being carried out with MagAO (\citealt{Close:2014kt}).
Deep sub-mm observations with ALMA
have opened the possibility of measuring the masses of these subdisks
(\citealt{Bowler:2015hx}) and perhaps even of identifying them indirectly via gas kinematics (\citealt{Perez:2015jn}).
Larger disks may be spatially resolved, allowing a dynamical mass for the planet to be measured from Keplerian motion.
The relationship between protoplanetary disks and young planets is also being explored
in detail at these extremely young ages. In particular, transition disks---
young stars whose spectral energy distributions indicate they host disks with
large optically thin cavities generally depleted of dust (e.g., see reviews by
\citealt{Williams:2011js}, \citealt{Espaillat:2014hh}, \citealt{Andrews:2015vzb}, and \citealt{Owen:2016id})---
have been used as signposts to search for embedded protoplanets.
This approach has been quite fruitful, resulting in the discovery of companions within these gaps
spanning the stellar (CoKu Tau 4: \citealt{Ireland:2008kj};
HD~142527: \citealt{Biller:2012cb}, \citealt{Close:2014kt}, \citealt{Rodigas:2014bm}, \citealt{Lacour:2015uv}),
brown dwarf (HD~169142: \citealt{Biller:2014ft}, \citealt{Reggiani:2014dj}),
and planetary (LkCa~15: \citealt{Kraus:2012gk}, \citealt{Ireland:2014jm}, \citealt{Sallum:2015ej};
HD~100546: \citealt{Quanz:2013ii}, \citealt{Currie:2014hn}, \citealt{Quanz:2015dz}; \citealt{Currie:2015jk}, \citealt{Garufi:2016vt})
mass regimes using a variety of techniques.
However, environmental factors can severely complicate the interpretation of these detections.
Extinction and reddening, accretion onto and from circum-planetary disks, extended emission from accretion streams,
and circumstellar disk sub-structures seen in scattered light can result in false alarms, degenerate interpretations,
and large uncertainties in the mass estimates of actual companions.
T Cha offers a cautionary example; \citet{Huelamo:2011hx} discovered a candidate substellar companion
a mere 62~mas from the transition-disk host star with aperture masking interferometry,
but additional observations did not show orbital motion as expected for a real companion. Subsequent
modeling indicates
that the signal may instead be a result of scattering by grains in the outer disk or possibly even noise in the data
(\citealt{Olofsson:2013cx}; \citealt{Sallum:2015gm}; \citealt{Cheetham:2015hg}).
This highlights an additional complication with aperture masking: because model fits
to closure phases usually consist of binary models with two or more point sources, it can be difficult to discern
actual planets from other false positives. In these situations the astrometric detection of orbital motion is essential to confirm young
protoplanets embedded in disks.
Other notable examples of ambiguous candidate protoplanets at wider separations include
FW~Tau~b, an accreting low-mass companion to the Taurus binary FW~Tau~AB orbiting at a projected
separation of 330~AU (\citealt{White:2001ic}; \citealt{Kraus:2014tl}),
and TMR-1C, a faint, heavily extincted protoplanet candidate located $\approx$1400~AU
from the Taurus protostellar binary host TMR-1AB
showing large-amplitude photometric variability and circumstantial evidence of a dynamical ejection
(\citealt{Terebey:1998co}; \citealt{Terebey:2000jq}; \citealt{Riaz:2011fj}; \citealt{Riaz:2013fg}).
Follow-up observations of both companions suggest they may instead be low-mass stars or brown dwarfs
with edge-on disks (\citealt{PetrGotzens:2010hj}; \citealt{Bowler:2014dk}; \citealt{Kraus:2015fx}; \citealt{Caceres:2015hg}),
underscoring a few of the difficulties that arise when interpreting candidate protoplanets at
extremely young ages.
Altogether the statistics of planets orbiting the youngest T Tauri stars from direct imaging are still
fairly poorly constrained. Quantifying this occurrence rate is important because it can be compared with the
same values at older ages to determine the \emph{evolution} of this population. Planet-planet
scattering, for example, implies an increase in the frequency of planets on ultra-wide orbits over time.
\citet{Ireland:2011id} found the frequency of 6--20~\mbox{$M_\mathrm{Jup}$} \ companions from
$\approx$200--500~AU to be $\sim$4$^{+5}_{-1}$\% in Upper Scorpius.
Combining these results with those from \citet{Kraus:2008bh} and their own shallow imaging survey,
\citet{LaFreniere:2014dj} find that the frequency of 5--40~\mbox{$M_\mathrm{Jup}$} \ companions between
50--250~AU is $<$1.8\% and between 250--1000~AU is 4.0$^{+3.0}_{-1.2}$\%
assuming hot-start evolutionary models.
In future surveys it will be just as important to report nondetections together with new discoveries so this frequency
can be measured with greater precision.
\subsection{Brown Dwarfs}
Young brown dwarfs ($\approx$13--75~\mbox{$M_\mathrm{Jup}$})
have low circum-substellar disk masses (\citealt{Mohanty:2013kl}; \citealt{Andrews:2013ku}) and
are not expected to host giant planets as frequently as stars.
Nevertheless,
their low luminosities make them especially advantageous for high-contrast imaging
because lower masses can be probed with contrast-limited observations.
Several deep imaging surveys with ground-based AO or $HST$ have included brown dwarfs in their samples
(\citealt{Kraus:2005tc}; \citealt{Ahmic:2007ju}; \citealt{Stumpf:2010es}; \citealt{Biller:2011hq};
\citealt{Todorov:2014fqa}; \citealt{Garcia:2015dra}).
A handful of companions in the 5--15~\mbox{$M_\mathrm{Jup}$} \ range have been discovered with direct imaging:
2M1207--3932~b (\citealt{Chauvin:2004cy}; \citealt{Chauvin:2005gg}),
2M0441+2301~b (\citealt{Todorov:2010cn}),
and possibly both FU~Tau~B (\citealt{Luhman:2009cx})
and VHS~1256--1257 (\citealt{Gauza:2015fw}) depending on their ages.
A few other low-mass companions
(and in some cases the primaries themselves) to late-T and early-Y field brown dwarfs
may also reside in the planetary regime depending on the system ages:
CFBDSIR~J1458+1013~B (\citealt{Liu:2011hb}), WISE~J1217+1626~B (\citealt{Liu:2012cy}), and
WISE~J0146+4234~B (\citealt{Dupuy:2015dza}).
The relatively large mass ratios of these systems ($q$$\approx$0.2--0.5) bear a closer resemblance to binary stars
than canonical planetary systems ($q$$\lesssim$0.001), and the formation route of these very low-mass binaries
is probably quite different than around stars (\citealt{Lodato:2005ef}).
High-order multiple systems with low total masses like 2M0441+2301 AabBab and VHS~1256--1257 ABb suggest
that cloud fragmentation can form objects in the planetary-mass domain (\citealt{Chauvin:2005gg};
\citealt{Todorov:2010cn}; \citealt{Bowler:2015en}; \citealt{Stone:2016fz}).
Continued astrometric monitoring of ultracool binaries will eventually yield orbital elements and
dynamical masses for these intriguing systems to test
formation mechanisms (\citealt{Dupuy:2011ip}) and giant planet evolutionary models.
\subsection{Binary Stars}
Close stellar binaries ($\approx$0.1--5$''$) are generally avoided in direct imaging surveys.
Multiple similarly-bright stars can confuse wavefront sensors, which are
optimized for single point sources, and
deep coronagraphic imaging generally saturates nearby stellar companions.
Physically, binaries carve out large dynamically-unstable regions that are inhospitable to planets
and there is strong evidence that they inhibit planet formation
by rapidly clearing or truncating protoplanetary disks (e.g., \citealt{Cieza:2009hb}; \citealt{Duchene:2010gb}; \citealt{Kraus:2012dh}).
Nevertheless, many planets have been found in binary systems in both S-type orbital configurations
(a planet orbiting a single star; e.g., \citealt{Ngo:2015hn})
and P-type orbits (circumbinary planets; see \citealt{Winn:2015jt} for a recent summary).
Binaries are common products of star formation,
so understanding how stellar multiplicity influences
the initial conditions (protoplanetary disk mass and structure),
secular evolution (Kozai-Lidov interactions)
and end products (dynamically relaxed planetary systems)
of planet formation has important consequences for the galactic census of exoplanets
(e.g., \citealt{Wang:2014jf}; \citealt{Kraus:2016wn}).
Several planetary-mass companions have been imaged around binary stars on wide circumbinary orbits:
ROXs~42B~b (\citealt{Kraus:2014tl}; \citealt{Currie:2014gp}), Ross~458~c (\citealt{Goldman:2010ct}; \citealt{Scholz:2010cy}),
SR~12~C (\citealt{Kuzuhara:2011ic}),
HD~106906~b (\citealt{Bailey:2014et}; \citealt{Lagrange:2016bh}), and VHS~1256--1257 (\citealt{Gauza:2015fw}; \citealt{Stone:2016fz}).
2M0103--5515~b (\citealt{Delorme:2013bo}) and FW Tau~b (\citealt{Kraus:2014tl}) orbit close, near-equal mass stellar binaries, but
the masses of the wide tertiaries are highly uncertain (\citealt{Bowler:2014dk}).
On the other hand, a few imaged planets orbiting single stars also have wide stellar companions. The 51~Eri system is orbited by the
M dwarf binary GJ~3305~AB (\citealt{Feigelson:2006tz}; \citealt{Kasper:2007dm}; \citealt{Montet:2015ky}) at $\sim$2000~AU.
Fomalhaut has two extremely distant stellar companions at $\approx$57 kAU and $\approx$158 kAU
(\citealt{Mamajek:2013gzb}).
2M0441+2301 Bb orbits a low-mass brown dwarf and is part of a hierarchical quadruple system with
a distant star-brown dwarf pair at a projected separation of 1800 AU (\citealt{Todorov:2010cn}).
There has been little work comparing the occurrence rate of imaged planets in binaries and single stars.
However, several surveys and post-processing techniques are now expressly focusing on binary systems and should clarify the statistical properties of planets
in these dynamically complicated arrangements (\citealt{Thalmann:2014bq}; \citealt{Rodigas:2015hq}; \citealt{Thomas:2015iga}).
\subsection{Debris Disks: Signposts of Planet Formation?}
Debris disks are extrasolar analogs of the asteroid and Kuiper belts in the Solar System.
They are the continually-replenished outcomes of cascading planetesimal collisions that result
in large quantities of transient dust heated by the host star. Observationally, debris disks
are identified from unresolved infrared or sub-mm excesses over stellar photospheric emission.
Deep observations spanning the optical, IR, and sub-mm can spatially resolve the largest and most luminous disks
in scattered light and thermal emission to investigate disk morphology and grain properties.
Topical reviews by \citet{Zuckerman:2001ez}, \citet{MoroMartin:2008vi}, \citet{Wyatt:2008ht},
\citet{Krivov:2010er}, and \citet{Matthews:2014id} highlight
recent theoretical and observational progress on the formation, modeling, and evolution of debris disks.
Debris disks are intimately linked to planets, which can stir planetesimals,
sculpt disk features, produce offsets between disks and their host stars,
and carve gaps to form belts with spectral energy distributions showing multiple temperature components.
The presence of debris disks, and especially those with features indicative of a massive perturber,
may therefore act as signposts for planets.
The four directly imaged planetary systems $\beta$~Pic, HR~8799, 51~Eri, and HD~95086
all possess debris disks, the latter three having multiple belts interior and exterior to the imaged planet(s).
This remarkably consistent configuration is analogous to the Solar System's architecture in
which gas giants are flanked by (very low-level) zodiacal emission.
Anecdotal signs point to a possible correlation between disks and imaged planets
but this relationship has not yet been statistically validated.
There have been hints of a correlation between debris disks and
low-mass planets detected via radial velocities (\citealt{Wyatt:2012kn}; \citealt{Marshall:2014bp}),
but these were not confirmed in a recent analysis by \citet{MoroMartin:2015dk}.
Indeed, many stars hosting multi-component debris disks have now been targeted
with high-contrast imaging and do not appear to harbor massive planets
(\citealt{Rameau:2013it}; \citealt{Wahhaj:2013iq}; \citealt{Janson:2013cjb}; \citealt{Meshkat:2015dh}).
Given the high incidence of debris disks around main-sequence stars ($\gtrsim$16--20\%:
\citealt{Trilling:2008ey}; \citealt{Eiroa:2013bk}),
with even higher rates at younger ages (\citealt{Rieke:2005hv}; \citealt{Meyer:2008vh}),
any correlation of imaged giant planets and debris disks will be difficult to discern because
the overall occurrence rate of massive planets on wide orbits is extremely low ($\lesssim$1\%; see Section~\ref{sec:occurrencerate}).
Perhaps more intriguing would be a subset of this sample with additional contextual clues,
for example the probability of an imaged planet given a two-component debris disk compared to a diskless
control sample.
The Fabulous Four--- Vega, $\beta$~Pic, Fomalhaut, and $\epsilon$ Eridani--- host the brightest
debris disks discovered by IRAS (\citealt{Aumann:1984bg}; \citealt{Aumann:1985zz})
and have probably been targeted more than any other stars with
high-contrast imaging over the past 15 years, except perhaps for HR~8799. Despite having similarly large and
luminous disks, their planetary systems are quite different and demonstrate
a wide diversity of evolutionary outcomes.
Fomalhaut's disk possesses a sharply truncated, offset, and eccentric ring about 140~AU in radius
suggesting sculpting from a planet (\citealt{Dent:2000ix}; \citealt{Boley:2012fh}; \citealt{Kalas:2005ca}).
A comoving optical source (``Fomalhaut b'') was discovered
interior to the ring by \citet{Kalas:2008un} and appears to be orbiting on a highly inclined and eccentric orbit not
coincident with the ring structure (\citealt{Kalas:2013hpa}).
The nature of this intriguing companion remains puzzling; it may be a low-mass planet with a large circum-planetary disk, a swarm of
colliding irregular satellites, or perhaps
a recent collision of protoplanets (e.g., \citealt{Kalas:2008cs}; \citealt{Kennedy:2011ca}; \citealt{Kenyon:2014kf}).
Massive planets have been ruled out from deep imaging down to
about 20 AU (\citealt{Kalas:2008un}; \citealt{Kenworthy:2009hc}; \citealt{Marengo:2009de}; \citealt{Absil:2011jm};
\citealt{Janson:2012bn}; \citealt{Nielsen:2013jy}; \citealt{Kenworthy:2013bt};
\citealt{Currie:2012ef}; \citealt{Currie:2013iw}; \citealt{Janson:2015bh}).
Vega's nearly face-on debris disk is similar to Fomalhaut's in terms of its two-component structure
comprising warm and cold dust belts and wide gaps with orbital ratios $\gtrsim$10, possibly indicating the presence
of multiple low-mass planets (e.g., \citealt{Wilner:2002wl}; \citealt{Su:2005wx}; \citealt{Su:2013da}).
Deep imaging of Vega over the past 15 years has thus far failed to identify planets with detection
limits down to a few Jupiter masses
(\citealt{Metchev:2003fv}; \citealt{Macintosh:2003in}; \citealt{Itoh:2006vp}; \citealt{Marois:2006df}; \citealt{Hinz:2006tx};
\citealt{Hinkley:2007hf}; \citealt{Heinze:2008fg}; \citealt{Janson:2011hu}; \citealt{Mennesson:2011ia}; \citealt{Janson:2015bh}).
$\beta$~Pic hosts an extraordinarily large, nearly edge-on disk spanning almost 2000 AU in radius (\citealt{Smith:1984wy}; \citealt{Larwood:2001wd}).
Its proximity, brightness, and spatial extent make it one of the best-studied debris disks, showing signs of multiple belts (e.g., \citealt{Wahhaj:2003eh}),
asymmetries (e.g., \citealt{Kalas:1995jo}), molecular gas clumps (\citealt{Dent:2014br}), and
an inner warp (\citealt{Heap:2000jr}) predicted to be caused by a close-in inclined planet (\citealt{Mouillet:1997ib}).
\citet{Lagrange:2009hq} uncovered a possible massive planet at $\sim$9~AU and, despite immediate follow-up
(\citealt{Fitzgerald:2009fs}; \citealt{Lagrange:2009gg}), it was not until it reemerged on
the other side of the star that $\beta$~Pic~b was unambiguously confirmed (\citealt{Lagrange:2010fsa}).
$\epsilon$ Eridani is another particularly fascinating example of a nearby, relatively young K2 star hosting a bright
debris disk with spatially resolved ring structure and a warm inner component
(\citealt{Greaves:1998kg}; \citealt{Greaves:2005vn}; \citealt{Backman:2009hk}).
At 3.2~pc this star harbors the closest debris disk to the Sun, has a Jovian-mass planet
detected by radial velocity and astrometric variations (\citealt{Hatzes:2000gr}; \citealt{Benedict:2006dr}),
and possesses a long-term RV trend pointing to an additional long-period giant planet.
Because of its favorable age and proximity,
$\epsilon$ Eridani has been exhaustively imaged with adaptive optics on
the largest ground-based telescopes in an effort to recover the known planets and search for others
(\citealt{Luhman:2002ed}; \citealt{Macintosh:2003in}; \citealt{Itoh:2006vp}; \citealt{Marengo:2006gj}; \citealt{Janson:2007cm};
\citealt{Lafreniere:2007cv}; \citealt{Biller:2007ht}; \citealt{Janson:2008gp}; \citealt{Heinze:2008fg};
\citealt{Marengo:2009de}; \citealt{Heinze:2010dm}; \citealt{Wahhaj:2013iq}; \citealt{Janson:2015bh}).
Together with long-baseline radial velocity monitoring, these deep observations have ruled out planets $>$3~\mbox{$M_\mathrm{Jup}$} \ anywhere in this system.
\subsection{Field Stars and Radial Velocity Trends}
At the old ages of field stars ($\sim$1--10~Gyr), giant planets have cooled to
late spectral types and
low luminosities where high-contrast imaging does not regularly reach
the planetary-mass regime.
Nevertheless, several surveys have focused on this population
because their proximity means very close separations can be probed and their old ages provide
information about potential dynamical evolution of substellar companions and giant planets over time
(\citealt{McCarthy:2004hl}; \citealt{Carson:2005ia}; \citealt{Carson:2006ez};
\citealt{Heinze:2010ko}; \citealt{Heinze:2010dm}; \citealt{Tanner:2010bp};
\citealt{Leconte:2010ed}).
Of particular interest are stars showing low-amplitude, long-baseline radial velocity changes (Doppler ``trends'').
These accelerations are regularly revealed in planet searches and point to the existence of unseen stars, brown dwarfs, or giant planets on wide orbits. High-contrast imaging is a useful
tool to diagnose the nature of these companions and, in the case of non-detections, rule out massive objects at
wide projected separations (\citealt{Kasper:2007dm}; \citealt{Geissler:2007dg};
\citealt{Luhman:2002ed}; \citealt{Chauvin:2006hm}; \citealt{Janson:2009id}; \citealt{Jenkins:2010cz};
\citealt{Kenworthy:2009hc}; \citealt{Rodigas:2011gp}).
When a companion is detected, its minimum mass can be inferred from the host star's acceleration ($\dot{v}$),
the distance to the system ($d$), and the angular separation of the companion ($\rho$) following \citet{Torres:1999gc} and
\citet{Liu:2002fx}:
\begin{equation}
M_\mathrm{comp} > 1.388 \times 10^{-5} \left( \frac{d}{\mathrm{pc}} \ \frac{\rho}{''} \right) ^2 \Big| \frac{\dot{v}}{\mathrm{m} \ \mathrm{s^{-1}} \ \mathrm{yr^{-1}}} \Big| \ M_{\odot}.
\end{equation}
\noindent The coefficient is 0.0145 when expressed in \mbox{$M_\mathrm{Jup}$}.
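As a sanity check (a sketch, not code from the cited papers), this relation can be evaluated and inverted numerically. The star and trend values below are purely illustrative; the only physical input is the coefficient quoted above.

```python
# Minimum companion mass from an RV trend (Torres 1999; Liu et al. 2002).
# Illustrative numbers only; the one physical input is the 1.388e-5 coefficient.

MJUP_PER_MSUN = 1047.57  # approximate Jupiter masses per solar mass

def min_companion_mass_msun(d_pc, rho_arcsec, vdot_m_s_yr):
    """Lower limit on companion mass (M_sun) from distance (pc),
    angular separation (arcsec), and RV acceleration (m/s/yr)."""
    return 1.388e-5 * (d_pc * rho_arcsec) ** 2 * abs(vdot_m_s_yr)

def max_trend_m_s_yr(mass_mjup, sep_au):
    """Invert the relation: maximum acceleration induced by a companion
    of the given mass at the given projected separation (d * rho, in AU)."""
    return (mass_mjup / MJUP_PER_MSUN) / (1.388e-5 * sep_au ** 2)

# A hypothetical 1.4 m/s/yr trend on a star at 30 pc with a companion 1" away
# (30 AU projected) implies a companion more massive than ~18 M_Jup:
m_min = min_companion_mass_msun(30.0, 1.0, 1.4) * MJUP_PER_MSUN

# A 1 M_Jup planet at 10 AU and a 100 M_Jup star at 100 AU induce the same
# maximum acceleration, roughly 0.7 m/s/yr:
a_planet = max_trend_m_s_yr(1.0, 10.0)
a_star = max_trend_m_s_yr(100.0, 100.0)
```

Inverting the relation in this way recovers the equivalence illustrated in Figure~\ref{fig:trends}: a 1~\mbox{$M_\mathrm{Jup}$} \ planet at 10~AU and a 100~\mbox{$M_\mathrm{Jup}$} \ star at 100~AU both produce a maximum acceleration of about 0.7~m~s$^{-1}$~yr$^{-1}$.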
This equation assumes an instantaneous radial velocity slope, but longer baseline coverage
or a change in the acceleration (``jerk'') can provide better constraints on a companion's mass and
period (\citealt{Rodigas:2016cf}). If a significant fraction of the orbit is measured
with both astrometry and radial velocities, simultaneous modeling of both data sets can yield a
robust dynamical mass measurement.
Perhaps the best example of this is from \citet{Crepp:2012eg}, who measured
the mass and three-dimensional orbit of the brown dwarf companion HR~7672~B,
initially discovered by \citet{Liu:2002fx} based on an acceleration from the host star.
Many stellar and white dwarf companions have been discovered in this fashion but only
a few substellar companions have been found (Table~\ref{tab:trends}).
Figure~\ref{fig:trends} shows the population of known companions inducing shallow RV trends on their host stars
and which have also been recovered with high resolution (and often high-contrast) imaging.
Most of these are M dwarfs with masses between $\sim$0.1--0.5~\mbox{$M_{\odot}$} \ at separations of $\sim$10--100~AU.
This is primarily due to the two competing methods at play: at these old ages,
direct imaging is insensitive to low
masses and close separations, while small accelerations induced from wide-separation
and low-mass companions are difficult to measure even for long-baseline, precision radial velocity
planet searches.
The TRENDS program (e.g., \citealt{Crepp:2012eg}; \citealt{Crepp:2014ce}) is the largest survey
to combine these two methods and demonstrates the importance of both
detections and non-detections to infer the population of planets on moderate orbits out to $\sim$20~AU (\citealt{Montet:2014fa}).
\begin{figure}
\vskip -.5 in
\hskip -0.7 in
\resizebox{5.1in}{!}{\includegraphics{trends_sma_mass_paspreview.eps}}
\vskip -.7 in
\caption{Imaged companions inducing shallow radial velocity trends on their host stars.
Blue, orange, and red circles are white dwarf companions, low-mass stellar companions, and substellar companions, respectively.
Concentric circles indicate the host star has a planetary system.
Only three brown dwarf companions inducing shallow trends have been found:
HR~7672~B (\citealt{Liu:2002fx}), HD~19467~B (\citealt{Crepp:2014ce}), and HD~4747~B (\citealt{Crepp:2016ta}).
Gray dashed lines show constant accelerations assuming circular orbits;
the maximum host star acceleration is proportional to the companion mass and
inversely proportional to the square of the projected physical separation, so
a 1~\mbox{$M_\mathrm{Jup}$} \ planet at 10~AU
will produce the same maximum acceleration as a 100~\mbox{$M_\mathrm{Jup}$} \ low-mass star at 100 AU (namely
0.7~m s$^{-1}$ yr$^{-1}$). See Table~\ref{tab:trends} for details on these systems. \label{fig:trends} }
\end{figure}
Dynamical masses of planets may eventually be measured by combining radial velocity monitoring of the host star and direct imaging,
effectively treating the system like a spatially resolved single-lined spectroscopic binary.
Stellar jitter is a limiting factor at very young ages, and at older ages the low luminosities of planets generally preclude
imaging. The intermediate ages of moving group members may provide an adequate solution,
and at least one ambitious survey by \citet{Lagrange:2013gh}
is currently underway to search for planets and long-term radial velocity trends
for this population. Another solution is to image planets in reflected light at optical wavelengths,
which requires a space-based telescope and
coronagraph like \emph{WFIRST} (\citealt{Traub:2014ft}; \citealt{Spergel:2015wr};
\citealt{Brown:2015bj}; \citealt{Greco:2015ko}; \citealt{Robinson:2016gb}).
Similarly, astrometric accelerations can be used to identify and measure the masses of substellar companions when combined with
high-contrast imaging. This will be particularly relevant in the near-future with $Gaia$ as thousands of
planets are expected to be found from the orbital reflex motion of their host stars (\citealt{Sozzetti:2013de}; \citealt{Perryman:2014jra}).
Young stars in the field not necessarily associated with coherent moving groups have also been
popular targets for high-contrast imaging planet searches. The advantage of this population is that they are numerous
and often reside at closer distances than actual members of young moving groups, but their ages and metallicities are generally highly uncertain, so substellar companions uncovered with deep imaging can
have a wide range of possible masses (e.g., \citealt{Mawet:2015kk}).
GJ~504 and $\kappa$~And are recent examples of young field stars with faint companions
that were initially thought to have planetary masses
(\citealt{Kuzuhara:2013jz}; \citealt{Carson:2013fw}) but which
follow-up studies showed are probably older (\citealt{Hinkley:2013ko}; \citealt{Bonnefoy:2014dx}; \citealt{Fuhrmann:2015dk}; \citealt{Jones:2016hg}),
implying the companions are likely more massive and
reside in the brown dwarf regime.
Sirius is another example of a young massive field star extensively
targeted with high-contrast imaging (\citealt{Kuchner:2000kf}; \citealt{BonnetBidaud:2008hk};
\citealt{Skemer:2011is}; \citealt{Thalmann:2011jx}; \citealt{Vigan:2015fsa}).
This system is particularly noteworthy for possible periodic astrometric perturbations to the orbit of its white dwarf companion Sirius B that may be caused
by a still-hidden giant planet or brown dwarf (\citealt{Benest:1995to}).
\section{The Masses of Imaged Planets}{\label{sec:masses}}
The masses of directly imaged planets are generally highly uncertain, heavily model-dependent,
and difficult to independently measure.
Yet mass is fundamentally important to test models of giant planet formation and empirically calibrate substellar evolutionary models.
This Section describes how observables like bolometric luminosity, color, and absolute magnitude
coupled with evolutionary models and semi-empirical quantities like age are used to infer the masses of planets.
Although no imaged planet has yet had its mass directly measured, there are several promising
routes to achieve this which will eventually enable rigorous tests of giant planet cooling models.
\begin{figure*}
\vskip -1.2 in
\hskip -.5 in
\resizebox{8.4in}{!}{\includegraphics{lum_age_paspreview.eps}}
\vskip -0.7 in
\caption{The current census of companions in the brown dwarf (green) and planetary (blue) mass regimes that have both age and bolometric luminosity measurements from the compilation in Table~\ref{tab:planets}. Many companions lie near the deuterium-burning limit while only a handful of objects are unambiguously in the planetary-mass regime. Hot-start evolutionary models are from \citet{Burrows:1997jq}; orange, green, and blue tracks denote masses $>$80~\mbox{$M_\mathrm{Jup}$}, 14--80~\mbox{$M_\mathrm{Jup}$}, and $<$14~\mbox{$M_\mathrm{Jup}$}. \label{fig:lum_age} }
\end{figure*}
\subsection{Inferring Masses}
Like white dwarfs and brown dwarfs, giant planets
cool over time, so evolutionary models together with any two physical
parameters---luminosity, age, effective temperature, or radius---are
needed to infer a planet's mass.
Among these, luminosity and age are usually
better constrained and less reliant on atmospheric
models than effective temperature and radius, which can substantially vary with
assumptions about cloud properties, chemical composition,
and sources of opacity.
Below are summaries of the major assumptions
(in roughly descending order of importance)
involved in the inference of planet masses using atmospheric and evolutionary models along
with notable advantages, drawbacks, and limitations of various techniques.
\begin{itemize}
\item \textbf{Initial conditions and formation pathway.}
The most important assumption
is the
amount of initial energy and entropy a planet begins with following its formation.
This defines its evolutionary pathway, which is embodied in three broad classes informed by
formation mechanisms.
Hot-start models begin with arbitrarily large radii and oversimplified, idealized initial conditions
that generally ignore the effects of accretion and mass assembly.
As such, they represent the most luminous outcome and correspond to the most optimistically
(albeit unrealistically) low mass estimates.
Ironically, hot-start grids are almost universally adopted for estimating the masses
of young brown dwarfs and giant planets,
even though the early evolution in these models is the least reliable.
The most widely used hot-start models for imaged planets are
the Cond and Dusty grids from \citet{Baraffe:2003bj} and \citet{Chabrier:2001bf},
\citet{Burrows:1997jq}, and \citet{Saumon:2008im}.
Cold-start models were made prominent by \citet{Marley:2007bf} and \citet{Fortney:2008ez} in the context of direct imaging
as an attempt to emulate a more realistic formation scenario for giant planets through core accretion.
In this model, accretion shocks radiate the gravitational potential energy of infalling gas as a giant planet grows.
After formation, these planets begin cooling with much lower luminosities and
initial entropies compared to the hot-start scenario, taking between $\sim$10$^8$ and $\sim$10$^9$ years to
converge with hot-start cooling models depending on the planet mass.
The observational implications of this are severe: planets formed from core accretion may be orders of magnitude
less luminous than those produced from cloud fragmentation or disk instability. While this
may offer a diagnostic for the formation route if the mass of a planet is independently measured,
it also introduces considerable uncertainty in the more typical case when only an age and luminosity are known.
For example, 51~Eri~b may be as low as 2~\mbox{$M_\mathrm{Jup}$} \ or as high as 12~\mbox{$M_\mathrm{Jup}$} \ depending on which cooling model
(hot or cold)
is assumed (\citealt{Macintosh:2015ewa}).
This picture is made even more complicated by large uncertainties in the details of cold-start models.
The treatment of accretion shocks, circumplanetary disks, core mass, and even deuterium
burning for the most massive planets can dramatically influence the initial entropy and luminosity evolution
of planets (\citealt{Mordasini:2012jy}; \citealt{Bodenheimer:2013ki}; \citealt{Mordasini:2013cr}; \citealt{Owen:2016bu}).
This motivated a class of warm-start models with intermediate initial entropies that probably better
reflect dissipative accretion shocks that occur in nature (\citealt{Spiegel:2012ea}; \citealt{Marleau:2013bh}).
Unfortunately, the relevant details of giant planet assembly
are poorly constrained by observations. There is also likely to be intrinsic scatter in the
initial conditions for a given planet which may result in large degeneracies in the planet mass, core mass,
and accretion history for young gas giants with the same age and luminosity. It is quite possible, for instance,
that the HR 8799 c, d, and e planets, which all share the same age and nearly the same luminosity, could have
very different masses.
\begin{figure}
\vskip -.4 in
\hskip -1.5 in
\resizebox{6.9in}{!}{\includegraphics{cmd_paspreview_v2b.eps}}
\vskip -0.05 in
\caption{The modern color-magnitude diagram spans nearly 35 magnitudes in
the near-infrared and 5 magnitudes in $J$--$K$ color.
The directly imaged planets (bold circles) extend the L dwarf sequence
to redder colors and fainter absolute
magnitudes owing to a delayed transition from cloudy atmospheres to
condensate-free T dwarfs at low surface gravities. The details of this
transition for giant planets remain elusive.
OBAFGK stars (gray) are from the extended $Hipparcos$ compilation
XHIP (\citealt{Anderson:2012cu});
M dwarfs (orange) are
from \citet{Winters:2015ji};
late-M dwarfs (orange), L dwarfs (green), and T dwarfs (light blue) are from \citet{Dupuy:2012bp};
and Y dwarfs (blue) are compiled largely from
\citet{Dupuy:2014iz}, \citet{Tinney:2014bl}, and \citet{Beichman:2014jr} by
T. Dupuy (2016, private communication).
Directly imaged planets or planet candidates (bold circles) represent all companions from
Table~\ref{tab:planets} with near-infrared photometry
and parallactic distances.
\label{fig:cmd} }
\vskip -.1 in
\end{figure}
\item \textbf{Stellar age.}
After bolometric luminosity, which is generally straightforward to estimate for imaged planets,
the age of the host star is the most sensitive parameter on which the mass of an imaged companion depends.
It is also one of the most difficult quantities to accurately determine and usually relies on stellar evolutionary models
or empirical calibrations. Recent reviews on this topic include \citet{Soderblom:2010kr}, \citet{Jeffries:2014js},
and \citet{Soderblom:2014ve}.
Figure~\ref{fig:lum_age} shows the current census of imaged companions near and below
the deuterium-burning limit with both age and luminosity measurements (from Table~\ref{tab:planets}).
Apart from uncertainties in the formation history of these objects, age uncertainties
dominate the error budget for inferring masses.
Clusters of coeval stars spanning a wide range of masses
provide some of the best age constraints but are still dominated by systematic errors.
Several star-forming regions and young moving groups in particular have been systematically
adjusted to older ages over the past few years, which has propagated to the ages and masses of planets
in those associations (\citealt{Pecaut:2012gp}; \citealt{Binks:2014gd}; \citealt{Kraus:2014ur}; \citealt{Bell:2015gw}).
The implied hot-start mass for $\beta$~Pic~b, for example, increases by several Jupiter masses
(corresponding to several tens of percent) assuming the planet's age is $\approx$23~Myr instead of $\approx$12~Myr (\citealt{Mamajek:2014bf}; although see the next bullet point).
For young field stars, distant stellar companions can help age-date the entire system. For example,
the age of Fomalhaut was
recently revised to $\sim$400~Myr from $\sim$200~Myr in part due to constraints from its wide
M dwarf companions (\citealt{Mamajek:2012ga}; \citealt{Mamajek:2013gzb}).
Ultimately, if the age of a host star is unknown, the significance and interpretation of a faint companion is limited if
basic physical properties like its mass are poorly constrained.
\item \textbf{Epoch of planet formation.}
Planets take time to form, so they are not exactly coeval with their host stars. Their ages may range from
the stellar age down to the stellar age minus $\sim$10~Myr, depending on the timescale for giant planets to assemble.
Planets formed via cloud fragmentation or disk instability might be nearly coeval with their host star, but those formed by
core accretion are expected to build mass over several Myr. While this difference is negligible
at intermediate and old ages beyond a few tens of Myrs, it can have a large
impact on the inferred masses of the youngest planets ($\lesssim$20~Myr).
For example, if the young planetary-mass companion 2M1207--3932~b is assumed to be coeval with the TW~Hydrae Association ($\tau$ = 10~$\pm$~3~Myr), then
its hot-start mass is $\approx$5~\mbox{$M_\mathrm{Jup}$}. On the other hand, if its formation was delayed by 8~Myr ($\tau$ = 2~Myr), then its mass is only $\approx$2.5~\mbox{$M_\mathrm{Jup}$}.
\begin{figure*}
\vskip -.6 in
\hskip -0 in
\resizebox{7.6in}{!}{\includegraphics{spt_age_paspreview.eps}}
\vskip -1.15 in
\caption{Ultracool substellar companions with well-constrained ages and spectroscopically-derived classifications. Red circles are low-mass stars and brown dwarfs ($>$13~\mbox{$M_\mathrm{Jup}$}) while blue circles show companions near and below the deuterium-burning limit. Companions are primarily from \citet{Deacon:2014ey} together with additional discoveries from the literature. \label{fig:spt_age} }
\end{figure*}
\item \textbf{Atmospheric models.}
Atmospheric models can influence the inferred masses of imaged exoplanets in several ways.
They act as surface boundary conditions for evolutionary models and regulate radiative cooling through molecular and
continuum opacity sources. This in turn impacts the luminosity evolution of giant planets, albeit minimally
because of the weak dependence on mean opacity ($L(t) \propto \kappa^{0.35}$; \citealt{Burrows:1993kt}; \citealt{Burrows:2001wq}).
Even the unrealistic cases of permanently dusty and perpetually condensate-free photospheres do not dramatically affect the luminosity evolution of
cooling models or mass determinations using age and bolometric luminosity (\citealt{Baraffe:2002di}; \citealt{Saumon:2008im}), although more realistic (``hybrid'') models accounting for the evolution and dissipation of clouds at the L/T transition can influence the shape of cooling curves in slight but significant ways (\citealt{Saumon:2008im}; \citealt{Dupuy:2015gl}).
On the other hand, mass determinations in color-magnitude space are highly sensitive to atmospheric models and can
result in changes of several tens of percent depending on the specific treatment of atmospheric condensates.
Dust reddens spectra and can modify the near-infrared colors and absolute magnitudes of ultracool objects
by several magnitudes. This introduces another source of uncertainty if the spectral shape is poorly constrained, though
the difference between dusty and cloud-free models is smaller at longer wavelengths and higher temperatures.
One of the most important and unexpected empirical results to emerge from direct imaging
has been the realization that young brown dwarfs and massive
planets retain photospheric clouds even at low effective temperatures where older, high-gravity brown dwarfs
have already transitioned to T dwarfs (\citealt{Metchev:2006bq}; \citealt{Chauvin:2004cy}; \citealt{Marois:2008ei};
\citealt{Patience:2010hf}; \citealt{Bowler:2010ft}; \citealt{Faherty:2012bc}; \citealt{Bowler:2013ek}; \citealt{Liu:2013gya}; \citealt{Filippazzo:2015dv}).
This is demonstrated in Figure~\ref{fig:cmd}, which shows the location of imaged companions near and below the deuterium-burning limit on the near-infrared color-magnitude diagram. At young ages, warm giant planets are significantly redder than the field population of brown dwarfs, and several of the most extreme examples have anomalously low absolute magnitudes.
For old brown dwarfs, this evolution from dusty, CO-bearing L dwarfs to cloud-free, methane-dominated T dwarfs takes place over a narrow
temperature range ($\sim$1200--1400~K) but occurs at a lower (albeit still poorly constrained) temperature
for young gas giants. The lack of methane is likely caused by disequilibrium carbon chemistry
at low surface gravities as a result of vigorous vertical mixing (e.g., \citealt{Barman:2011fe}; \citealt{Zahnle:2014hl}; \citealt{Ingraham:2014gx}; \citealt{Skemer:2014hy}), while the
preservation of photospheric condensates can be explained by a dependency of cloud base pressure
and particle size on surface gravity (\citealt{Marley:2012fo}).
Unfortunately, the dearth of known planets between $\sim$L5--T5 is the main limitation to understanding this transition in detail (Figure~\ref{fig:spt_age}).
In principle, the mass of a planet can also be inferred by fitting synthetic spectra to the planet's
observed spectrum or multi-band photometry. The mass can then be obtained from the best-fitting model as follows:
\begin{equation}
M_p \ (M_\mathrm{Jup}) = 12.76 \times 10^{\log(g) - 4.5~\mathrm{dex}} \left( \frac{R}{R_\mathrm{Jup}} \right)^2.
\end{equation}
\noindent Here $\log(g)$ is the surface gravity (in cm s$^{-2}$) and $R$ is the
planet's radius. The radius can either be taken from evolutionary models or alternatively from
the multiplicative factor that scales the emergent model spectrum to the observed flux-calibrated
spectrum (or photometry) of the planet.
This scale factor corresponds to the planet's radius over
its distance, squared ($R^2$/$d^2$; see \citealt{Cushing:2008kb} for details).
Clearly the inferred mass is very sensitive to both the surface gravity and the radius.
In practice, gravity is usually poorly constrained for model fits to brown dwarf and giant planet spectra
because its influence on the emergent spectrum is more subtle
(e.g., \citealt{Cushing:2008kb}; \citealt{Bowler:2011gw}; \citealt{Barman:2011fe}; \citealt{Macintosh:2015ewa}).
In addition, the scale factor strongly depends on the model effective temperature ($\propto$ $T_\mathrm{eff}^{-4}$), which is
typically not known to better than $\sim$100~K.
Altogether, the current level of systematic imperfections present in atmospheric models and observed spectra of exoplanets (e.g., \citealt{Greco:2016ww})
mean that masses cannot yet be reliably measured from fitting grids of synthetic spectra.
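For concreteness, the gravity-radius relation above can be wrapped in a short function. This is a sketch of the arithmetic only; the example $\log(g)$ values are hypothetical and not drawn from any fitted spectrum.

```python
def mass_from_gravity_mjup(logg_cgs, radius_rjup):
    """Planet mass (M_Jup) from surface gravity log(g) (g in cm s^-2)
    and radius (R_Jup), following the relation above."""
    return 12.76 * 10 ** (logg_cgs - 4.5) * radius_rjup ** 2

# Sanity check: a 1 R_Jup object with Jupiter's surface gravity
# (log g ~ 3.39 in cgs units) recovers roughly 1 M_Jup.
m_jupiter_like = mass_from_gravity_mjup(3.394, 1.0)

# The sensitivity noted in the text: a hypothetical 0.3 dex error in
# log(g) changes the inferred mass by a factor of 10**0.3, i.e. about 2.
m_high = mass_from_gravity_mjup(3.694, 1.0)
```

The quadratic dependence on radius and exponential dependence on $\log(g)$ make clear why poorly constrained fits propagate into large mass uncertainties.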
\item \textbf{Deuterium burning history.}
As brown dwarfs with masses between about 13~\mbox{$M_\mathrm{Jup}$} \ and 75~\mbox{$M_\mathrm{Jup}$} \ contract, their core temperatures become
hot enough to burn deuterium, though not at sufficient rates to balance surface radiative losses
(e.g., \citealt{Kumar:1963ht}; \citealt{Burrows:1993kt}).\footnote{ Solar-metallicity brown dwarfs with masses above $\approx$63~\mbox{$M_\mathrm{Jup}$} \ can also burn lithium. This limit changes slightly for non-solar values (\citealt{Burrows:2001wq}).}
The onset and timescale of deuterium burning varies primarily with mass but also with metallicity, helium fraction,
and initial entropy (e.g., \citealt{Spiegel:2011ip}); lower-mass brown dwarfs
take longer to initiate deuterium burning than objects near the hydrogen-burning minimum mass.
This additional transient energy source delays the otherwise invariable cooling and causes
luminosity tracks to overlap. Thus, objects with the same luminosity and
age can differ in mass depending on their deuterium-burning history.
Many substellar companions fall in this ambiguous region, complicating mass determinations by up to a factor of $\sim$2 (Figure~\ref{fig:lum_age} and Table~\ref{tab:planets}).
With a large sample of objects in this region, spectroscopy may ultimately be able to distinguish higher- and lower-mass
scenarios through relative surface gravity measurements (\citealt{Bowler:2013ek}).
\item \textbf{Planet composition.}
The gas and ice giants in the Solar System are enriched in heavy elements compared to solar values.
The specific mechanism for this enhancement is still under debate but exoplanets formed via core
accretion are expected to show similar compositional and abundance ratio differences compared to
their host stars, whereas planets formed through cloud fragmentation or disk instability are probably quite similar
to the stars they orbit.
The bulk composition of planets modifies their atmospheric opacities and influences both their emergent spectra
and luminosity evolution (\citealt{Fortney:2008ez}).
A common practice when deriving masses for imaged planets
is to assume solar abundances, which is largely dictated by the availability of published
atmospheric and evolutionary models.
Many of these assumptions can be removed with atmospheric retrieval methods by directly fitting for
atomic and molecular abundances (\citealt{Lee:2013br}; \citealt{Line:2014eu}; \citealt{Todorov:2015vha}).
\item \textbf{Additional sources of uncertainty.}
A number of other factors and implicit assumptions can also introduce
random and systematic uncertainties in mass derivations.
Different methods of PSF subtraction can bias photometry if planet self-subtraction or speckle over-subtraction is not
properly corrected (e.g., \citealt{Marois:2006df}; \citealt{Lafreniere:2007bg}; \citealt{Soummer:2012ig}).
Photometric variability from rapidly changing or
rotationally-modulated surface features can introduce
uncertainties in relative photometry (e.g., \citealt{Radigan:2014dj};
\citealt{Metchev:2015dr}; \citealt{Zhou:2016gc}; \citealt{Biller:2015hza}). The host star can also be variable if
it has unusually large starspot coverage or if it is very young and happens to
have an edge-on disk (e.g., TWA~30A; \citealt{Looper:2010cb}).
Multiplicity can also bias luminosity measurements. Roughly 20\% of brown dwarfs are
close binaries with separation distributions peaking near 4.5~AU and mass ratios approaching
unity (e.g., \citealt{Burgasser:2007uf}; \citealt{Kraus:2012et}; \citealt{Duchene:2013il}).
If some planetary-mass companions form in the same
manner as brown dwarfs, and if the same trends in multiplicity continue into the planetary regime,
then a small fraction of planetary-mass companions are probably close, unresolved, equal-mass binaries.
These systems will appear twice as luminous.
If atmospheric chemistry or cloud structure varies latitudinally then orientation and viewing angle
could be important.
For the youngest protoplanets embedded in their host stars' circumstellar disks,
accretion streams might dominate over thermal photospheric emission, complicating luminosity
measurements and mass estimates (e.g., LkCa15~b and HD~100546~b; \citealt{Kraus:2012gk}; \citealt{Quanz:2013ii}).
Approaches to applying bolometric corrections, measuring
partly opaque coronagraphs and neutral density filters,
finely interpolating atmospheric and
evolutionary model grids, or converting models between filter systems
(for example, $CH_4S$ to $K$) may vary from study to study.
Finally, additional energy sources like radioactivity or stellar insolation are assumed to be negligible
but could impact the luminosity evolution of some exoplanets.
\end{itemize}
\subsection{Measuring Masses}
No imaged planet has yet had its mass measured. The most robust, model-independent way to do so
is through dynamical interactions with other objects.
Because planets follow mass-luminosity-age relationships, knowledge of all three parameters
is needed to test cooling models. Once a mass is measured, its age (from the host star) and
bolometric luminosity (from its distance and spectral energy distribution) enable precision model tests,
although an assumption about energy losses from accretion via hot-, warm-, or cold-start
must be made.
Nevertheless, if all hot-start models overpredict the luminosities of giant planets,
that would suggest that accretion history is indeed an important factor in
both planet formation and realistic cooling models.
Below is a summary of methods to measure substellar masses.
\begin{itemize}
\item \textbf{Dynamical masses.}
Most close-in ($<$100~AU) planets have shown significant orbital motion since their discoveries (Table~\ref{tab:planets}).
This \emph{relative} motion provides a measure of the total mass of the system ($M_\mathrm{star}$+$M_\mathrm{planet}$).
If stationary background stars can simultaneously be observed with the planet-star pair
then \emph{absolute} astrometry is possible. This then gives individual masses
for each component ($M_\mathrm{star}$ and $M_\mathrm{planet}$ separately).
Unfortunately, the long orbital periods and lack of nearby background stars for the present census of imaged planets mean
this method is currently impractical for measuring masses.
Relative astrometry can also be combined with radial velocities to measure a planet's mass.
Assuming the visual orbit and total mass are well constrained from imaging, the mass of
the companion can be measured by monitoring the line-of-sight reflex motion of the host star (e.g., \citealt{Crepp:2012eg}).
This treats the system as a single-lined spectroscopic binary, giving the mass function
$m_\mathrm{p}^3 \sin^3 i / M_\mathrm{tot}^2$,
where $M_\mathrm{tot}$ is the measured total mass, $i$ is the measured inclination, and $m_\mathrm{p}$
is the mass of the planet.
If precise radial velocities are not possible for the host star because it has an early spectral type (with
few absorption lines) or
high levels of stellar activity (RV jitter)
then RV monitoring of the planet can also yield its mass.
This can be achieved by combining adaptive optics imaging and high-resolution near-infrared
spectroscopy to spatially separate the star and planet, as has been demonstrated with
$\beta$~Pic~b (\citealt{Snellen:2014kz}).
Soon $Gaia$ will produce precise astrometric measurements of the host stars of imaged planets.
Together with orbit monitoring though high-contrast imaging, this may offer another way to directly
constrain the masses of imaged planets.
Close substellar binaries offer another approach. Their orbital periods are typically shorter
and, in rare cases when such binaries themselves orbit a star,
the age of the pair can be adopted from the host star.
Several brown dwarf-brown dwarf masses have been measured in this fashion:
HD~130948~BC (\citealt{Dupuy:2009jq}), Gl~417~BC (\citealt{Dupuy:2014iz}),
and preliminary masses for $\epsilon$~Indi~Bab (\citealt{Cardoso:2009jf}).
Isolated substellar pairs are also useful for dynamical mass measurements but their
ages are generally poorly constrained unless they are members of young clusters
or moving groups.
No binaries with both components unambiguously residing in the planetary-mass regime are known, but there
is at least one candidate (WISE~J014656.66+423410.0; \citealt{Dupuy:2015dza}).
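Under the idealization that the inclination and total mass are already fixed by the imaged relative orbit, the mass function quoted above can be inverted directly for the companion mass. The sketch below also converts an RV semi-amplitude into a mass function using the standard coefficient $1.036\times10^{-7}$ (with $K$ in km~s$^{-1}$ and $P$ in days); all numerical inputs are hypothetical.

```python
import math

def mass_function_msun(k_kms, p_days, ecc=0.0):
    """Spectroscopic mass function f(m) = K^3 P (1 - e^2)^(3/2) / (2 pi G),
    in M_sun, with K in km/s and P in days (standard coefficient)."""
    return 1.036e-7 * k_kms ** 3 * p_days * (1.0 - ecc ** 2) ** 1.5

def companion_mass_msun(f_msun, m_tot_msun, incl_deg):
    """Invert f = m_p^3 sin^3(i) / M_tot^2 for m_p, assuming the
    inclination and total mass are known from the imaged orbit."""
    return (f_msun * m_tot_msun ** 2) ** (1.0 / 3.0) / math.sin(math.radians(incl_deg))

# Hypothetical example: an edge-on companion to a 1 M_sun star with a
# measured mass function of 1e-3 M_sun has m_p = 0.1 M_sun.
m_p = companion_mass_msun(1e-3, 1.0, 90.0)
```

In practice the radial velocity data constrain only the mass function, so the precision of the imaged inclination and total mass dominates the final mass uncertainty.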
\item \textbf{Keplerian disk rotation.}
Dynamical masses for young protoplanets may eventually be possible using ALMA through
Keplerian rotation of circumplanetary disks.
This requires resolving faint gas emission lines (e.g., CO~$J$=3--2 or CO~$J$=2--1) from the planet
both spatially and spectrally, something
that has yet to be achieved for known young planets harboring
subdisks (e.g., \citealt{Isella:2014fz}; \citealt{Bowler:2015hx}).
Although challenging, this type of measurement can act as a detailed probe
of the initial conditions of giant planet formation and evolution.
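The underlying measurement reduces to the enclosed dynamical mass for circular Keplerian motion, $M = v^2 r / G$. The sketch below uses illustrative numbers (a few km~s$^{-1}$ at sub-AU circumplanetary radii) rather than values from any detected source.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27  # Jupiter mass, kg
AU = 1.496e11     # astronomical unit, m

def enclosed_mass_mjup(v_kms, r_au):
    """Central mass (M_Jup) implied by a circular Keplerian speed v (km/s)
    at radius r (AU): M = v^2 r / G."""
    v = v_kms * 1e3
    return v ** 2 * (r_au * AU) / (G * M_JUP)

def keplerian_speed_kms(mass_mjup, r_au):
    """Circular Keplerian speed (km/s) at radius r around a mass M."""
    return math.sqrt(G * mass_mjup * M_JUP / (r_au * AU)) / 1e3

# A hypothetical 10 M_Jup protoplanet would drive ~4 km/s rotation at
# 0.5 AU in its subdisk, a velocity scale resolvable spectrally with ALMA.
v_demo = keplerian_speed_kms(10.0, 0.5)
```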
\item \textbf{Stability analysis.}
Numerical modeling of planets and their interactions with debris disks, protoplanetary disks, or additional
planets offers another way to constrain the mass of an imaged planet. If their masses are too low, planets will not
be able to gravitationally shape dust and planetesimals in a manner consistent with observations.
Modeling of the disks and companions orbiting Fomalhaut and $\beta$~Pic illustrates this approach;
independent constraints on the orbit and masses of these companions can be made
by combining spatially-resolved disk structures (a truncated, offset dust ring encircling
Fomalhaut and a warped inner
disk surrounding $\beta$~Pic) and observed orbital motion
(e.g., \citealt{Chiang:2009co}; \citealt{Dawson:2011eu}; \citealt{Kalas:2013hpa}; \citealt{Beust:2014dj};
\citealt{MillarBlanchaer:2015ha}).
Likewise, if a planet's mass is too high then
it carves a larger disk gap or may destabilize other planets in the system through mutual interactions.
For example, detailed $N$-body simulations of HR~8799's planets have shown that they
must have masses $\lesssim$10--20~\mbox{$M_\mathrm{Jup}$} \ --- consistent with giant planet evolutionary models --- or they would have become dynamically unstable
by the age of the host star
(e.g., \citealt{Gozdziewski:2009fv}; \citealt{Fabrycky:2010fi}; \citealt{Currie:2011iz}; \citealt{Sudol:2012gn}; \citealt{Gozdziewski:2014gz}).
\item \textbf{Disk morphology.}
Large-scale structures in disks --- clumps, asymmetries, warps, gaps, rings, truncated edges, spiral arms,
and geometric offsets --- can
also be used to indirectly infer the presence of unseen planets and predict their masses and locations
(e.g., \citealt{Wyatt:1999ty}; \citealt{Ozernoy:2000dd}; \citealt{Kenyon:2004wr}).
This approach relies on assumptions about disk surface density
profiles and grain properties, so it is not a completely model-free measurement, but it is potentially sensitive to
planet masses as low as a few tens of Earth masses (\citealt{Rosotti:2016vsa}).
It also enables an immediate mass evaluation without the need for long-term orbit monitoring.
Recently, \citet{Dong:2015fg} and \citet{Zhu:2015fa} presented a novel approach along these lines
to predict the locations and masses of
widely-separated companions inducing spiral arms on a circumstellar disk
(see also \citealt{Dong:2016kc}, \citealt{Dong:2016wc}, and \citealt{Jilkova:2015joa}). This may prove to be
a valuable way to constrain masses of planetary companions at extremely wide orbital distances.
\end{itemize}
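The dynamical-mass idea in the first item reduces to a one-line Keplerian relation: gas on a circular orbit at radius $r$ with speed $v$ implies an enclosed mass $M = v^2 r / G$. A minimal sketch with hypothetical numbers (the velocity, radius, and function name are illustrative, not from any published measurement):

```python
# Illustrative estimate of a dynamical mass from Keplerian rotation:
# for gas on a circular orbit at radius r moving at speed v,
# the enclosed mass is M = v^2 * r / G.

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_JUP = 1.898e27       # Jupiter mass [kg]
AU = 1.496e11          # astronomical unit [m]

def keplerian_mass_mjup(v_kms, r_au):
    """Enclosed mass (in Jupiter masses) implied by a circular
    Keplerian speed v_kms [km/s] at radius r_au [AU]."""
    v = v_kms * 1e3
    r = r_au * AU
    return v**2 * r / G / M_JUP

# e.g., circumplanetary gas orbiting at ~3 km/s at 0.3 AU from the planet
# would imply an enclosed mass of a few Jupiter masses (hypothetical numbers).
print(round(keplerian_mass_mjup(3.0, 0.3), 1))  # prints 3.2
```

In practice the hard part is not the formula but spatially and spectrally resolving the faint line emission, as noted above.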
\begin{figure*}
\vskip -.4 in
\hskip -1. in
\resizebox{9.in}{!}{\includegraphics{paspreview_planetgallery.eps}}
\vskip -0.3 in
\caption{Gallery of imaged planets at small separations ($<$100~AU). HR~8799 harbors four massive planets (5--10~\mbox{$M_\mathrm{Jup}$}) at orbital distances of 15--70~AU (\citealt{Marois:2008ei}; \citealt{Marois:2010gpa}), $\beta$~Pic
hosts a nearly edge-on debris disk and a $\approx$13~\mbox{$M_\mathrm{Jup}$} \ planet at 9~AU (\citealt{Lagrange:2009hq}; \citealt{Lagrange:2010fsa}), a $\approx$5~\mbox{$M_\mathrm{Jup}$} \ planet orbits HD~95086 at 56~AU
(\citealt{Rameau:2013vh}; \citealt{Rameau:2013ds}), and 51~Eri hosts a $\sim$2~\mbox{$M_\mathrm{Jup}$} \ planet
at 13~AU (\citealt{Macintosh:2015ewa}). Images are from \citet{Maire:2015ek},
\citet{Nielsen:2014js}, \citet{Galicher:2014er}, and \citet{Macintosh:2015ewa}. \label{fig:planetgallery} }
\end{figure*}
\section{Survey of Surveys}{\label{sec:surveys}}
Myriad large high-contrast imaging surveys have been carried out over the past decade\footnote{The basic properties
for most of these programs until 2014 are summarized in Table 1 of \citet{Chauvin:2015jy}.}. The most impactful
programs are highly focused, carefully designed with well-understood biases, and have meticulously-selected target lists
to address specific science questions.
The advantages of large surveys include homogeneous observations, instrument setups,
data reduction pipelines, and statistical treatments of the results.
Below are summaries of the most substantial
high-contrast imaging surveys carried out to date with a focus on deep adaptive optics imaging programs
that routinely reach planetary masses and employ modern observing and post-processing techniques
to suppress speckle noise.
These surveys produced the first wave of discoveries (Figure~\ref{fig:planetgallery}), opening
the door to directly characterizing the atmospheres of exoplanets as well as their orbits
through astrometric monitoring (Figure~\ref{fig:astrometry} and Appendix~\ref{tab:astrometry}).
This section follows
an historical approach by outlining early ground- and space-based experiments, the first generation of planet-finding instruments and associated surveys,
and the next generation of instruments characterized by extreme adaptive optics systems with exceptionally high Strehl ratios.
\begin{figure*}
\vskip -1 in
\hskip -1.2 in
\resizebox{9.in}{!}{\includegraphics{hr8799_astrometry_paspreview.eps}}
\vskip -2.1 in
\caption{The orbits of HR~8799~bcde and $\beta$~Pic~b. Astrometric monitoring has revealed significant orbital motion over the past decade, enabling measurements of their Keplerian orbital parameters and offering clues about their dynamical history. The HR~8799 planets are on low-eccentricity orbits with some evidence that HR~8799~d may be mutually misaligned with the other planets, which otherwise appear to be coplanar (e.g., \citealt{Pueyo:2015cx}). $\beta$~Pic~b follows a nearly edge-on orbit with a low eccentricity (e.g., \citealt{MillarBlanchaer:2015ha}). Astrometric measurements are compiled in Appendix~\ref{tab:astrometry}. Orbits depicted here are from \citet{Zurlo:2016hl} and \citet{Nielsen:2014js}. \label{fig:astrometry} }
\end{figure*}
\subsection{Early Surveys}
Early high-contrast imaging surveys in search of closely-separated brown dwarf companions
and giant planets were conducted with
speckle interferometry (\citealt{Henry:1990ig}),
image stabilizers (\citealt{Nakajima:1994ea}),
$HST$ (\citealt{Sartoretti:1998uc}; \citealt{Schroeder:2000tk}; \citealt{Brandner:2000vp}; \citealt{Lowrance:2005ci}; \citealt{Luhman:2005cn}),
speckle cameras (\citealt{Neuhauser:2003tb}),
or newly-commissioned adaptive optics systems from the ground with facility instruments
(\citealt{Oppenheimer:2001kl}; \citealt{Macintosh:2001tr}; \citealt{Chauvin:2003gi};
\citealt{McCarthy:2004hl}; \citealt{Carson:2005ia}; \citealt{Nakajima:2005cm}; \citealt{Tanner:2007hu}).
When PSF subtraction was performed, it usually entailed roll-subtraction
(for $HST$; e.g., \citealt{Liu:2004kk}), self-subtraction with a rotated PSF, or reference star subtraction.
Some of these pioneering programs are especially noteworthy for their depth and emphasis on statistical results.
\citet{Lowrance:2005ci} targeted 45 single young A--M stars with NICMOS in the $F160W$
filter (1.6~$\mu$m) on board $HST$. Two brown dwarfs were uncovered, TWA~5~B (\citealt{Lowrance:1999ck}) and
HR~7372~B (\citealt{Lowrance:2000ic}), as well as Gl~577~BC, a tight binary companion near the
hydrogen-burning limit.
\citet{Masciadri:2005gl} used NaCo at the VLT to obtain deep adaptive optics imaging of 28 young nearby stars.
No substellar companions were found, but this work highlighted the importance of
thoroughly reporting survey results, even for non-detections,
a theme that continues today.
Focusing exclusively on young moving group members in $L'$-band enabled \citet{Kasper:2007dm} to reach
exceptionally low limiting masses for a sample of 22 stars with NaCo.
The Palomar and Keck adaptive optics survey by \citet{Metchev:2009ky} is another especially
valuable contribution; they imaged 266 FGK stars and discovered two brown dwarf companions,
HD~49197~B (\citealt{Metchev:2004kl}) and HD~203030~B (\citealt{Metchev:2006bq}),
implying a substellar occurrence rate of 3.2$^{+3.1}_{-2.7}$\%. HD~203030~B was
the first young brown dwarf for which a
discrepancy with the field spectral type--effective temperature sequence was recognized, now understood
as a retention of clouds to lower effective temperatures at low surface gravities.
These groundbreaking surveys helped define the scientific motivation,
framework, and early expectations for the first generation planet-finding instruments and
larger observing programs.
\subsection{The First Generation: Dedicated Instruments, Expansive Surveys, and Innovative Speckle Suppression Techniques}{\label{sec:firstgen}}
High-contrast imaging is largely driven by advances in instrumentation and speckle suppression.
The first wave of instruments specifically designed to image giant planets
gave rise to large surveys ($N$$\approx$50--500) targeting mostly young nearby stars.
Deep observations in pupil-tracking mode (angular differential imaging) have become standardized as a way to
distinguish quasi-static speckles from planets (\citealt{Marois:2006df}).
This era is also characterized by the advent of advanced PSF subtraction techniques
to optimally remove speckles during post-processing. Two especially important algorithms are the Locally Optimized Combination of Images
(LOCI; \citealt{Lafreniere:2007bg}), which is based on least-squares minimization of residual speckle noise, and
Karhunen-Lo{\`e}ve Image Projection (KLIP; \citealt{Soummer:2012ig}), a computationally-fast method based on principal component analysis.
The introduction of these new methods gave rise to an array of sophisticated data reduction pipelines with additional features
aimed at minimizing biases and avoiding both self- and over-subtraction of planet flux in ADI and SDI datasets
(\citealt{Marois:2010hs}; \citealt{Amara:2012hv}; \citealt{Pueyo:2012ft}; \citealt{Meshkat:2013ej};
\citealt{Wahhaj:2013fq}; \citealt{Brandt:2013in}; \citealt{Fergus:2014hv}; \citealt{Mawet:2014ga}; \citealt{Marois:2014ep};
\citealt{Currie:2014fm}; \citealt{Cantalloube:2015km}; \citealt{Rameau:2015fg}; \citealt{Wahhaj:2015jz}; \citealt{Savransky:2015kg}; \citealt{Dou:2015gk};
\citealt{Hagelberg:2016kh}; \citealt{Gonzalez:2016ul}).
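As a rough illustration of the principal-component idea underlying KLIP, the sketch below projects a target frame onto eigenimages of a reference stack and subtracts the projection. The data are synthetic and the function is a toy, deliberately omitting the local search zones, ADI/SDI frame selection, and throughput corrections of real pipelines:

```python
import numpy as np

def pca_psf_subtract(target, references, n_modes=5):
    """Minimal PCA-style PSF subtraction in the spirit of KLIP:
    project `target` (a flattened frame) onto the leading principal
    components of the mean-subtracted `references` stack and subtract
    that projection.  Real pipelines add search zones, frame selection,
    and throughput corrections omitted here."""
    mean_psf = references.mean(axis=0)
    R = references - mean_psf
    # Principal components (eigenimages) via SVD of the reference stack
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:n_modes]                 # (n_modes, n_pixels)
    t = target - mean_psf            # center the target the same way
    model = Z.T @ (Z @ t)            # projection onto the speckle modes
    return t - model

# Synthetic demo: speckle-like references plus a target with a "planet"
rng = np.random.default_rng(0)
refs = rng.normal(size=(20, 100))            # 20 reference frames, 100 pixels
target = refs.mean(axis=0) + 0.1 * rng.normal(size=100)
target[42] += 5.0                            # injected point source
residual = pca_psf_subtract(target, refs, n_modes=5)
print(int(np.argmax(residual)))              # prints 42: the source survives
```

The injected source survives subtraction because it lies largely outside the subspace spanned by the reference modes, which is the same reason real companions survive KLIP while quasi-static speckles do not.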
The suite of instrumentation for high-contrast imaging has ballooned over the past 15 years and
includes dual-channel imagers, infrared wavefront sensors,
non-redundant aperture masking interferometry, adaptive secondary mirrors,
integral field units, high-order adaptive optics systems, and
specialized coronagraphs (e.g., apodized Lyot coronagraph, annular groove phase mask coronagraph,
vector vortex coronagraph, apodizing phase plate,
and four quadrant phase mask; \citealt{Rouan:2000jn}; \citealt{Guyon:2005dp}; \citealt{Soummer:2005co};
\citealt{Mawet:2005vb}; \citealt{Kenworthy:2007vz}; \citealt{Mawet:2010el}).
Many of these have been implemented
in the first generation of instruments in part as testbeds for regular use in second-generation systems.
These instruments are reviewed in detail in \citet{Guyon:2006jp}, \citet{Beuzit:2007tj}, \citet{Oppenheimer:2009gh},
\citet{Perryman:2011uo}, and \citet{Mawet:2012il}.
\subsubsection{VLT and MMT Simultaneous Differential Imager Survey}{\label{sec:sdisurvey}}
This survey (PI: B Biller) targeted 45 young stars between 2003--2006 with ages $\lesssim$250~Myr and distances within 50~pc
using Simultaneous Differential Imagers mounted on the VLT and MMT (\citealt{Biller:2007ht}).
It was among the first to utilize simultaneous differential imaging to search for cold planets
around a large sample of young stars. The SDI method takes advantage
of expected spectral differences between the star, which has a nearly flat continuum,
and cool, methanated planets by simultaneously imaging in multiple narrow-band filters
across this deep absorption feature at 1.6~$\mu$m. Because speckles radially scale with wavelength
while real objects remain stationary, their observations also had some sensitivity to warmer planets without
methane (though it is now clear that the onset of methane occurs at lower temperatures for giant planets than for
brown dwarfs). No substellar companions were found, which ruled out a
linearly-flat extension of close-in giant planets out to 45~AU with high confidence.
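The wavelength scaling that SDI exploits can be quantified directly: a speckle's radial position grows linearly with wavelength, so its displacement between two narrow-band filters is resolvable while a true companion stays fixed. A back-of-the-envelope sketch (the filter wavelengths are illustrative, roughly bracketing the 1.6~$\mu$m methane band):

```python
# Speckles are diffractive and scale radially with wavelength, while a real
# companion stays fixed.  SDI exploits this: rescale one narrow-band image
# by lam1/lam2 so the speckles align, then subtract.  Filter wavelengths
# below are illustrative, bracketing the 1.6-micron methane feature.

lam1, lam2 = 1.575, 1.625   # microns (hypothetical narrow-band pair)

def speckle_radius(r_lam1, lam):
    """Radial position of a speckle seen at r_lam1 in the lam1 image
    when observed instead at wavelength lam (same angular units)."""
    return r_lam1 * lam / lam1

r = 0.5                                  # arcsec: speckle position at lam1
r2 = speckle_radius(r, lam2)             # same speckle at lam2
shift_mas = (r2 - r) * 1000              # radial displacement in mas
print(round(shift_mas, 1))               # prints 15.9
```

A ~16 mas shift at 0$\farcs$5 separation is comparable to an 8-m telescope's diffraction limit at these wavelengths, which is why the speckle pattern decorrelates between the two channels while the planet does not.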
\subsubsection{GDPS: Gemini Deep Planet Survey}{\label{sec:gdps}}
GDPS (PI: D. Lafreni\`{e}re) was a large high-contrast imaging
program at the Gemini-North 8.1-m telescope with the NIRI camera and Altair AO system
focusing on 85 stars, 16 of which were identified as close multiples (\citealt{Lafreniere:2007cv}).
The sample contained a mix of nearby GKM stars within 35~pc comprising then-known or suspected nearby young moving group members,
stars with statistically young ages, and several others harboring circumstellar disks.
Altogether the ages span 10~Myr to $\sim$5~Gyr.
The observations were taken in ADI mode with the $CH_4S$ filter, and PSF subtraction was
carried out with the LOCI algorithm (\citealt{Lafreniere:2007bg}).
No substellar companions were discovered, implying an occurrence rate of
$<$23\% for $>$2~\mbox{$M_\mathrm{Jup}$} \ planets between 25--420~AU and
$<$12\% for $>$2~\mbox{$M_\mathrm{Jup}$} \ planets between 50--295~AU at the 95\% confidence level.
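Null-result limits like these follow from binomial statistics. Under the simplifying assumption that every star is searched with full completeness, zero detections among $N$ stars constrain the occurrence rate $f$ via $(1-f)^N = 1 - \mathrm{CL}$. The sketch below applies this toy model to a hypothetical sample of 69 fully-searched single stars; published limits are weaker because real analyses weight each target by its sensitivity map:

```python
def null_detection_upper_limit(n_stars, confidence=0.95):
    """Upper limit on the occurrence rate f after zero detections among
    n_stars, assuming every star is fully searched (binomial model):
    solve (1 - f)**n_stars = 1 - confidence.  Real surveys weight each
    star by its completeness map, so published limits are higher."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_stars)

# Zero detections among 69 fully-searched stars (a hypothetical sample
# size) would give f < ~4.2% at 95% confidence.
print(round(100 * null_detection_upper_limit(69), 1))  # prints 4.2
```

The gap between this idealized ~4\% limit and the quoted $<$12--23\% GDPS limits reflects incomplete sensitivity across the mass--separation plane.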
\subsubsection{MMT $L'$ and $M$-Band Survey of Nearby Sun-Like Stars}{\label{sec:mmtclio}}
\citet{Heinze:2010dm} carried out a deep $L'$- and $M$-band survey of 54 nearby FGK stars at the MMT with Clio.
The MMT adaptive optics system uses a deformable secondary mirror which reduces the thermal background by
minimizing the number of optical elements along the light path. Observations were carried out between 2006--2007 with angular differential
imaging. The image processing pipeline is described in \citet{Heinze:2008fg} and \citet{Heinze:2010dm}.
The target ages are generally older ($\sim$0.1--2~Gyr) but the long wavelengths of the observations and proximity of the sample ($\lesssim$25~pc)
enabled sensitivity to planetary masses for most of the targets.
One new low-mass stellar companion was discovered and the binary brown dwarf HD~130948~BC was
recovered in the survey.
The statistical results are detailed in \citet{Heinze:2010ko}; they find that no more than 50\% of Sun-like stars
host $\ge$5~\mbox{$M_\mathrm{Jup}$} \ planets between 30--94~AU and no more than 15\% host $\ge$10~\mbox{$M_\mathrm{Jup}$} \ planets between 22--100~AU at the 90\% confidence level.
\subsubsection{NaCo Survey of Young Nearby Austral Stars}{\label{sec:youngaustral}}
This program utilized NaCo at the VLT between 2002--2007 to target 88 young GKM
stars within 100~pc (\citealt{Chauvin:2010hm}).
Seventeen new close multiple systems were uncovered and deep imaging was obtained for 65 single
young stars.
Observations were taken with a Lyot coronagraph in $H$ and $K_S$ bands and
PSF subtraction was performed with azimuthally-averaged subtraction and high-pass filtering.
The most important discovery from this survey was 2M1207--3932~b,
a remarkable 5~$\pm$~2~\mbox{$M_\mathrm{Jup}$} \ companion to a 25~\mbox{$M_\mathrm{Jup}$} \ brown dwarf in the 10~Myr
TWA moving group (\citealt{Chauvin:2004cy}; \citealt{Chauvin:2005gg}), made possible by infrared wavefront sensing.
The unusually red colors and spectral shape of 2M1207--3932~b (\citealt{Patience:2010hf}) have made it the prototype of
young dusty L dwarfs, now understood as a cloudy extension of the
L dwarf sequence to low temperatures (\citealt{Barman:2011dq}; \citealt{Marley:2012fo}).
This system is also unusual from the perspective of brown dwarf demographics; the mass ratio of
$\sim$0.2 and separation of $\sim$41~AU make it an outlier compared to brown dwarf
mass ratio and separation distributions in the field (\citealt{Burgasser:2007uf}).
Two other substellar companions were discovered in this survey:
GSC~08047-00232~B (\citealt{Chauvin:2005it}), also independently found by \citet{Neuhauser:2003tb},
and AB~Pic~B (\citealt{Chauvin:2005dh}),
which resides near the deuterium-burning limit.
\subsubsection{NaCo Survey of Young Nearby Dusty Stars}{\label{sec:youngdusty}}
This VLT/NaCo survey targeted 59 young nearby AFGK stars with ages $\lesssim$200~Myr and
distances within 65~pc (\citealt{Rameau:2013it}). Most of the sample are members of young moving groups and the majority (76\%) were chosen to
have mid-infrared excesses, preferentially selected for having debris disks.
Observations were carried out in $L'$-band between 2009--2012 using angular differential imaging.
Four targets in the sample had known substellar companions (HR~7329, AB~Pic, HR~8799, and $\beta$~Pic).
No new substellar companions were discovered but eight new visual binaries were resolved.
A statistical analysis of AF stars between 5--320~AU and 3--14~\mbox{$M_\mathrm{Jup}$} \ implies a
giant planet occurrence rate of 7.4$^{+3.6}_{-2.4}$\% (68\% confidence level).
\subsubsection{SEEDS: Strategic Exploration of Exoplanets and Disks with Subaru}{\label{sec:seeds}}
The SEEDS survey (PI: M. Tamura) was a 125-night program on the 8.2-m Subaru Telescope targeting about 500 stars to search for
giant planets and spatially resolve circumstellar disks (\citealt{Tamura:2009ip}).
\citet{Tamura:2016jg} provide an overview of the observing strategy, target samples, and main results.
Observations were carried out
with the HiCIAO camera behind Subaru's AO188 adaptive optics system
over five years beginning in 2009.
The sample contained a mixture of young stars in star-forming regions, moving groups, and open clusters;
nearby stars and white dwarfs; and stars with protoplanetary disks and debris disks.
Most of the observations were taken in $H$-band in angular differential imaging mode as well as polarimetric
differential imaging for young disk-bearing stars.
The ADI reduction pipeline is described in \citet{Brandt:2013in}.
Three new substellar companions were found in SEEDS: GJ~758~B (\citealt{Thalmann:2009ca}),
$\kappa$~And~B (\citealt{Carson:2013fw}), and GJ~504~b (\citealt{Kuzuhara:2013jz}).
The masses of GJ~504~b and $\kappa$~And~B may
fall in the planetary regime depending on the ages and metallicities of the system, which are still under debate.
Two brown dwarf companions found in the Pleiades (HD~23514~B and HII~1348~B; \citealt{Yamamoto:2013gu})
had also independently been discovered by other groups.
SEEDS resolved a remarkable number of protoplanetary and transition disks in polarized light---over two dozen in total---revealing previously unknown gaps, rings, and spiral structures down to 0$\farcs$1 with exceptional clarity
(e.g., \citealt{Thalmann:2010hz}; \citealt{Hashimoto:2011bt}; \citealt{Mayama:2012ez}; \citealt{Muto:2012is}).
The statistical results for debris disks are presented in \citet{Janson:2013cjb}.
At the 95\% confidence level, they find that $<$15--30\% of stars host $>$10~\mbox{$M_\mathrm{Jup}$} \ planets at the gap edge.
\citet{Brandt:2014cw} inferred a frequency of 1.0--3.1\% at the 68\% confidence level for 5--70~\mbox{$M_\mathrm{Jup}$} \
companions between 10--100~AU by
combining results from the SEEDS moving group sample (\citealt{Brandt:2014hc}),
the SEEDS disk sample (\citealt{Janson:2013cjb}), the SEEDS Pleiades sample
(\citealt{Yamamoto:2013gu}), GDPS (\citealt{Lafreniere:2007cv}), and
the NICI Campaign moving group sample (\citealt{Biller:2013fu}).
\subsubsection{Gemini NICI Planet-Finding Campaign}{\label{sec:nici}}
The Gemini NICI Planet-Finding Campaign (PI: M. Liu) was a 500-hour survey targeting about 230 young stars
of all spectral classes with deep imaging using the Near-Infrared Coronagraphic Imager on the
Gemini-South 8.1-m telescope (\citealt{Liu:2010hn}).
NICI is an imaging instrument encompassing an adaptive optics system, tapered and partly-translucent Lyot coronagraph,
and dual-channel camera (\citealt{Chun:2008et}).
Campaign observations spanned 2008--2012 and were carried out in two modes: single-channel
$H$-band with angular differential imaging, and simultaneous dual-channel ($CH_4S$ at 1.578~$\mu$m and $CH_4L$ at 1.652~$\mu$m)
angular and spectral differential imaging to maximize sensitivity to methane-dominated planets.
The observing strategy and reduction pipeline are detailed in \citet{Biller:2008kk} and \citet{Wahhaj:2013fq},
and NICI astrometric calibration is discussed in \citet{Hayward:2014dk}.
One previously-known brown dwarf companion was resolved into a close binary, HIP~79797~Bab (\citealt{Nielsen:2013jy}), and
three new substellar companions were found: PZ~Tel~B, a highly eccentric brown dwarf companion in the $\beta$~Pic
moving group (\citealt{Biller:2010ku}); CD-35~2722~B, a young mid-L dwarf in the AB~Dor moving group (\citealt{Wahhaj:2011by});
and HD~1160~B, a substellar companion orbiting a young massive star (\citealt{Nielsen:2012jk}).
No new planets were discovered but $\beta$~Pic~b was recovered during the survey and its orbit was shown to be
misaligned with the inner and outer disks (\citealt{Nielsen:2014js}; \citealt{Males:2014jl}).
Two debris disks surrounding HR~4796~A and HD~141569 were also resolved with unprecedented detail (\citealt{Wahhaj:2014ur}; \citealt{Biller:2015bu}).
The statistical results are organized in several studies.
From a sample of 80 members of young moving groups, \citet{Biller:2013fu} measured the frequency
of 1--20~\mbox{$M_\mathrm{Jup}$} \ planets between 10--150~AU to be $<$6--18\% at the 95.4\% confidence level, depending on which hot-start evolutionary models
are adopted. The high-mass sample of 70 B and A-type stars was described in \citet{Nielsen:2013jy}; they found that
the frequency of $>$4~\mbox{$M_\mathrm{Jup}$} \ planets between 59--460~AU is $<$20\% at 95\% confidence.
\citet{Wahhaj:2013iq} found that $<$13\% of debris disk stars have $\ge$5~\mbox{$M_\mathrm{Jup}$} \ planets beyond 80~AU at 95\% confidence
from observations of 57 targets.
\subsubsection{IDPS: International Deep Planet Search}{\label{sec:idps}}
IDPS is an expansive imaging survey carried out at the VLT with NaCo, Keck with NIRC2, Gemini-South with NICI, and Gemini-North with NIRI targeting
$\approx$300 young A--M stars (PI: C. Marois).
This 14-year survey was mostly carried out in $K$ band, though much of the survey comprised a
mix of broad- and narrow-band near-infrared filters.
Target ages were mostly $\lesssim$300~Myr, and distances span $\sim$10--80~pc (Galicher et al. 2016, submitted).
The main result from this survey was the discovery of the HR~8799 planets (\citealt{Marois:2008ei}; \citealt{Marois:2010gpa}).
Altogether over 1000 unique point sources were found, most of which were meticulously shown to be
background stars from multi-epoch astrometry (Galicher et al. 2016, submitted).
The preliminary analysis of a subset of high-mass A and F stars spanning $\approx$1.5--3.0~\mbox{$M_{\odot}$} \
was presented in \citet{Vigan:2012jm}.
Thirty-nine new observations in $H$, $K$, and $CH_4S$ filters were carried out in angular differential
imaging mode between 2007--2012 and were combined with three high-mass targets from the literature.
Stellar ages span 8--400~Myr with distances out to 90~pc and comprise a mix
of young moving group members, young field stars, and debris disk hosts.
The subsample of 42 massive stars includes three hosts of substellar companions:
HR~8799, $\beta$~Pic, and HR~7329, a $\beta$~Pic moving group member with a wide
brown dwarf companion (\citealt{Lowrance:2000ic}).
Including the detections of planets around HR~8799 and $\beta$~Pic, \citet{Vigan:2012jm} measure the occurrence
rate of 3--14~\mbox{$M_\mathrm{Jup}$} \ planets between 5--320~AU to be 8.7$^{+10.1}_{-2.8}$\% \ at 68\% confidence.
The complete statistical analysis for the entire sample is presented in Galicher et al. (2016, submitted).
They merge their own results for 292 stars with the GDPS and NaCo-LP surveys,
totaling a combined sample of 356 targets. From this they infer an occurrence rate
of 1.05$^{+2.80}_{-0.70}$\% (95\% confidence interval) for 0.5--14~\mbox{$M_\mathrm{Jup}$} \ planets
between 20--300~AU. They do not find evidence that this frequency depends
on stellar host mass. In addition, 16 of the 59 binaries resolved in IDPS are new.
\subsubsection{PALMS: Planets Around Low-Mass Stars}{\label{sec:palms}}
The PALMS survey (PI: B. Bowler) is a deep imaging search for planets and brown dwarfs orbiting low-mass stars
(0.1--0.6~\mbox{$M_{\odot}$}) carried out at Keck Observatory with NIRC2 and Subaru Telescope with HiCIAO.
Deep coronagraphic observations were acquired for 78 single young M dwarfs in $H$- and $Ks$-bands between 2010--2013 using
angular differential imaging. An additional 27 stars were found to be close binaries.
Targets largely originate from \citet{Shkolnik:2009dx}, \citet{Shkolnik:2012cs}, and an additional $GALEX$-selected sample
(E. Shkolnik et al., in preparation). Most of these
lie within 40~pc and have ages between 20--620~Myr; about one third of the sample are members
of young moving groups.
The observations and PSF subtraction pipeline are described in \citet{Bowler:2015ja}.
Four substellar companions were found in this program: 1RXS~J235133.3+312720~B (\citealt{Bowler:2012cs}),
GJ~3629~B (\citealt{Bowler:2012dc}), 1RXS~J034231.8+121622~B (\citealt{Bowler:2015ja}),
and 2MASS~J15594729+4403595~B (\citealt{Bowler:2015ja}).
1RXS~J235133.3+312720~B is a particularly useful benchmark
brown dwarf because it orbits a member of a young moving group (AB Dor)
and therefore has a well-constrained age ($\approx$120~Myr).
The statistical results from the survey are presented in \citet{Bowler:2015ja}.
No planets were found, implying an occurrence rate of $<$10.3\% for 1--13~\mbox{$M_\mathrm{Jup}$} \
planets between 10--100~AU at the 95\% confidence level assuming hot-start models and $<$16.0\% assuming
cold-start models. For the most massive planets between 5--13~\mbox{$M_\mathrm{Jup}$}, the upper limits
are $<$6.0\% and $<$9.9\% for hot- and cold-start cooling models.
The second, parallel phase of the PALMS survey is an ongoing program
targeting a larger sample of $\sim$400 young M dwarfs primarily at Keck
with shallower contrasts (Bowler et al., in prep.). Initial discoveries include
two substellar companions: 2MASS~J01225093--2439505~B, an L-type member of AB~Dor
at the planet/brown dwarf boundary with an unusually red spectrum (\citealt{Bowler:2013ek}; \citealt{Hinkley:2015gk}),
and 2MASS~J02155892--0929121~C (\citealt{Bowler:2015ch}), a brown dwarf in a close quadruple
system which probably belongs to the Tuc-Hor moving group.
\subsubsection{NaCo-LP: VLT Large Program to Probe the Occurrence of Exoplanets and Brown Dwarfs at Wide Orbits}{\label{sec:vltlp}}
The NaCo-LP survey was a Large Program at the VLT focused on 86 young,
bright, primarily FGK stars (PI: J.-L. Beuzit). $H$-band observations were carried out with NaCo in ADI mode
between 2009--2013 (\citealt{Chauvin:2015jy}). The target sample is described in detail
in \citet{Desidera:2015fl}; stars were chosen to be single, have ages $\lesssim$200~Myr, and
lie within 100~pc. Many of these stars were identified as new
members of young moving groups.
Although no new substellar companions were discovered,
an intriguing white dwarf was found orbiting
HD~8049, an ostensibly young K2 star that may instead be
much older due to mass exchange with its now evolved companion (\citealt{Zurlo:2013kb}).
New observations of the spatially resolved debris disk around
HD~61005 (``the Moth'') were presented by \citet{Buenzli:2010ga},
and 11 new close binaries were resolved during this program (\citealt{Chauvin:2015jy}).
The statistical analysis of the sample of single stars was performed in \citet{Chauvin:2015jy}.
Based on a subsample of 51 young FGK stars, they found that $<$15\% of Sun-like stars host
planets with masses $>$5~\mbox{$M_\mathrm{Jup}$} \ between 100--500~AU and
$<$10\% host $>$10~\mbox{$M_\mathrm{Jup}$} \ planets between 50--500~AU at the 95\% confidence level.
\citet{Reggiani:2016dn} use these NaCo-LP null results together with additional deep archival
observations to study the companion mass function as it relates to binary star formation and planet formation.
From their full sample of 199 Sun-like stars, they find that the results from direct imaging
are consistent with the superposition of the planet mass function determined from
radial velocity surveys and the stellar companion
mass ratio distribution down to 5~\mbox{$M_\mathrm{Jup}$}, suggesting that many planetary-mass companions
uncovered with direct imaging may originate from the tail of the brown dwarf mass distribution instead of being the most
massive representatives of the giant planet population.
\subsection{Other First Generation Surveys}{\label{sec:otherfirst}}
Several smaller, more focused surveys have also been carried out with angular differential imaging: \citet{Apai:2008gk}
targeted 8 debris disk hosts with NaCo using simultaneous differential imaging;
\citet{Ehrenreich:2010dc} imaged 38 high-mass stars primarily with NaCo;
\citet{Janson:2011hu} observed 15 B and A stars with NIRI;
\citet{Delorme:2012bq} presented observations of 16 young
M dwarfs in $L'$-band with NaCo;
\citet{Maire:2014il} targeted 16 young AFGK stars with NaCo's four-quadrant phase mask, simultaneous
differential imaging, and angular differential imaging;
\citet{Meshkat:2015dh} used NaCo's Apodizing Phase Plate coronagraph in $L'$ band to image
six young debris disk hosts with gaps as part of the Holey Debris Disk survey;
and \citet{Meshkat:2015iz} also used the APP at NaCo to image a sample of 13~A- and F-type main sequence
stars in search of planets, uncovering a probable brown dwarf around HD~984 (\citealt{Meshkat:2015hd}).
The Lyot Project is another survey with important contributions for its early use of coronagraphy behind an
extreme adaptive optics system (\citealt{Oppenheimer:2004go}). This survey was carried out at the 3.6-m AEOS telescope equipped with a
941-actuator deformable mirror and targeted 86 nearby bright stars using angular differential imaging (\citealt{Leconte:2010ed}).
\subsection{The Second Generation: Extreme Adaptive Optics, Exceptional Strehl Ratios, and Optimized Integral Field Units}
The transition to second-generation planet-finding instruments began over the past few years.
This new era is characterized by regular implementation of high-order (``extreme'') adaptive optics systems with thousands of
actuators and exceptionally low residual wavefront errors;
pyramid wavefront sensors providing better sensitivity and higher precision wavefront correction;
Strehl ratios approaching (and often exceeding) 90\% at near-infrared wavelengths;
high-contrast integral field units designed for on-axis observations enabling speckle subtraction and low-resolution spectroscopy;
sensitivity to smaller inner working angles than first-generation instruments; and advanced coronagraphy.
\subsubsection{Project 1640}{\label{sec:leech}}
Project 1640 is a large ongoing survey (PI: R. Oppenheimer) and
high-contrast imaging instrument with the same name located behind the PALM-3000 second-generation adaptive
optics system at the Palomar Observatory 200-inch (5.1-meter) Hale Telescope.
The instrument itself contains an apodized-pupil Lyot coronagraph and integral field unit that samples 32 spectral
channels across $Y$, $J$, and $H$ bands (\citealt{Hinkley:2011kh}), producing a low-resolution
spectrum to broadly characterize the physical properties of faint companions (\citealt{Roberts:2012et}; \citealt{Rice:2015kp}).
The survey consists of two phases, the first (now concluded) with the original
Palomar Adaptive Optics System (PALM-241; \citealt{Troy:2000wp}) and a second ongoing three-year program
focusing on nearby massive stars with the upgraded PALM-3000 adaptive optics system (\citealt{Dekany:2013hv}).
The data reduction pipeline is described in \citet{Zimmerman:2011iq} and a detailed treatment of
speckle subtraction, precision astrometry, and robust spectrophotometry can be found in
\citet{Crepp:2011kc}, \citet{Pueyo:2012ft}, \citet{Oppenheimer:2013gy}, \citet{Fergus:2014hv}, and \citet{Pueyo:2015cx}.
Results from the Project 1640 survey include several discoveries of faint stellar companions to massive
A stars (\citealt{Zimmerman:2010dn}; \citealt{Hinkley:2010id}) and follow-up astrometric and spectral characterization of
known substellar companions (\citealt{Crepp:2015gt}; \citealt{Hinkley:2013ko}).
In addition, \citet{Oppenheimer:2013gy} and \citet{Pueyo:2015cx} presented detailed spectroscopic and astrometric analysis of the
HR~8799 planets and found intriguing evidence for mutually dissimilar spectral properties and
signs of non-coplanar orbits.
\subsubsection{LEECH: LBTI Exozodi Exoplanet Common Hunt}{\label{sec:leech}}
LEECH is an ongoing $\sim$70-night high-contrast imaging program (PI: A. Skemer) at the twin 8.4-m Large Binocular Telescope.
Survey observations began in 2013 and are carried out in angular differential imaging mode in $L'$-band with LMIRcam
utilizing deformable secondary mirrors
to maximize sensitivity at mid-IR wavelengths by
limiting thermal emissivity from warm optics (\citealt{Skemer:2014ch}).
The target sample focuses on intermediate-age stars $<$1~Gyr including members of the $\sim$500~Myr Ursa Majoris moving group,
massive BA stars, and nearby young FGK stars.
In addition to searching for new companions, this survey is also characterizing known
planets using the unique mid-IR instrumentation, sensitivity,
and filter suite at the LBT. \citet{Maire:2015dt} refined the orbits of the HR~8799 planets and found
them to be consistent with 8:4:2:1 mean motion resonances.
\citet{Skemer:2016ha} observed GJ~504~b in three narrow-band filters spanning 3.7--4.0 $\mu$m. Model fits
indicate an exceptionally low effective temperature of $\approx$540~K and enhanced metallicity,
possibly pointing to an origin through core accretion.
Additionally, \citet{Schlieder:2016ih} presented LEECH observations and
dynamical mass measurements of the Ursa Majoris binary NO UMa.
Recently the integral field unit Arizona Lenslets for Exoplanet Spectroscopy (ALES; \citealt{Skemer:2015is})
was installed inside LMIRcam and will enable integral-field spectroscopy of planets between 3--5~$\mu$m
for the first time.
\subsubsection{GPIES: Gemini Planet Imager Exoplanet Survey}{\label{sec:gpies}}
GPIES is an ongoing 890-hour, 600-star survey to image extrasolar giant planets and debris disks
with the Gemini Planet Imager
at Gemini-South (PI: B. Macintosh). GPI is expressly built to image planets at small
inner working angles;
its high-order adaptive optics system
incorporates an apodized pupil Lyot coronagraph, integral field spectrograph,
imaging polarimeter, and (imminent) non-redundant masking capabilities (\citealt{Macintosh:2014js}).
Survey observations targeting young nearby stars began in 2014 and will span three years.
\citet{Macintosh:2015ewa} presented the discovery of 51 Eri b, the first exoplanet found in GPIES
and the lowest-mass planet imaged in thermal emission to date.
This remarkable young, methanated T dwarf has a contrast of 14.5~mag in $H$-band at a separation of 0$\farcs$45,
which translates into a mass of only 2~\mbox{$M_\mathrm{Jup}$} \ at 13~AU assuming hot-start cooling models.
It is also the only imaged planet consistent with the most pessimistic cold-start evolutionary models, in which
case its mass may be as high as 12~\mbox{$M_\mathrm{Jup}$}. \citet{DeRosa:2015jla} obtained follow-up observations with GPI
and showed that 51~Eri~b shares a
common proper motion with its host and exhibits slight (but significant) orbital motion.
Other initial results from this survey include astrometry and a refined orbit for $\beta$~Pic~b (\citealt{Macintosh:2014js}),
as well as resolved imaging of the debris disks around HD~106906 (\citealt{Kalas:2015en}) and
HD~131835 (\citealt{Hung:2015dpa}).
\subsubsection{SPHERE GTO Survey}{\label{sec:sphere}}
SPHERE (Spectro-Polarimetric High-contrast Exoplanet Research)
is an extreme adaptive optics system (SAXO) and versatile instrument for high-contrast imaging and spectroscopy
at the VLT with a broad range of capabilities (\citealt{Beuzit:2008gt}). IRDIS (\citealt{Dohlen:2008eu})
offers classical and dual-band imaging (\citealt{Vigan:2010cs}), dual-polarization imaging (\citealt{Langlois:2014dt}),
and long-slit spectroscopy (\citealt{Vigan:2008he}); IFS provides low-resolution ($R$$\sim$30--50)
integral field spectroscopy spanning 0.95--1.65~$\mu$m (\citealt{Claudi:2008dj}); and ZIMPOL enables diffraction-limited imaging
and polarimetry in the optical (\citealt{Thalmann:2008gi}).
As part of a guaranteed time observing program, the SPHERE GTO team is carrying out a large ongoing
survey (PI: J.-L. Beuzit) with a range of science goals. About 200 nights of this time are devoted to a deep near-infrared imaging survey
(SHINE: SpHere Infrared survEy) to search for
and characterize exoplanets around 400--600 stars, while the remaining $\sim$60 nights will be used
for a broad range of science including spatially-resolved observations of circumstellar disks and optical imaging of planets.
Initial results from the SHINE survey include observations of the brown dwarf GJ~758~B (\citealt{Vigan:2016gq}),
PZ Tel B and HD~1160 B (\citealt{Maire:2016go}), HD~106906 (\citealt{Lagrange:2016bh}),
and the HR~8799 planets (\citealt{Zurlo:2016hl}; \citealt{Bonnefoy:2016gx}).
Other results have focused on resolved imaging of
the debris disk surrounding HD~61005, which may be
a product of a recent planetesimal collision (\citealt{Olofsson:2016ts}),
and HD~135344~B, host of a transition disk with striking spiral arm structure (\citealt{Stolker:2016ul}).
\citet{Boccaletti:2015ji} uncovered intriguing and temporally evolving features in AU~Mic's debris disk.
In addition, \citet{Garufi:2016vt} presented deep IRDIS near-infrared images and visible ZIMPOL polarimetric
observations of HD~100546 revealing a complex disk environment with considerable structure
and resolved $K$-band emission at the location of the candidate protoplanet HD~100546~b.
\subsubsection{Other Second Generation Instruments and Surveys}{\label{sec:otherinstruments}}
A number of other novel instruments and forthcoming surveys bear highlighting.
MagAO (PI: L. Close) at the Magellan 6.5-m Clay telescope is a versatile adaptive optics system consisting of
a 585-actuator adaptive secondary mirror, pyramid wavefront sensor, and two science
cameras offering simultaneous diffraction-limited imaging spanning the visible (0.6--1.05~$\mu$m) with VisAO and near-infrared (1--5.3~$\mu$m)
with Clio2 (\citealt{Close:2012ed}; \citealt{Morzinski:2014ep}; \citealt{Morzinski:2015gb}).
Strehl ratios of $\sim$20--30\% in the optical are opening up new science fronts including deep
red-optical observations of exoplanets (\citealt{Males:2014jl}; \citealt{Wu:2015cz}), characterization of accreting protoplanets in H$\alpha$ (\citealt{Sallum:2015ej}),
and high spatial resolution imaging down to $\sim$20~mas (\citealt{Close:2013bu}).
Vector apodizing phase plate coronagraphs were recently installed in MagAO and
other upgrades such as an optical integral field unit are possible in the future.
Subaru Coronagraphic Extreme Adaptive Optics (SCExAO; PI: O. Guyon) is being built for the Subaru telescope and is
the newest extreme adaptive optics system on a large telescope. A detailed description of all facets of this instrument
is described in \citet{Jovanovic:2015ja}.
In short, a pyramid wavefront sensor is coupled
with a 2000-element deformable mirror to produce Strehl ratios in excess of 90\%.
The instrument is particularly flexible, allowing for a variety of setups and instrument subcomponents including
speckle nulling to suppress static and slowly changing speckles (\citealt{Martinache:2014jj}),
a near-infrared science camera (currently HiCIAO), sub-diffraction-limited interferometric
science in the visible with VAMPIRES (\citealt{Norris:2015jw}) and FIRST (\citealt{Huby:2012bp}),
high-contrast integral field spectroscopy (\citealt{Brandt:2014jv}),
and coronagraphy with phase-induced amplitude apodization (\citealt{Guyon:2003gw})
and vector vortex coronagraphs (\citealt{Mawet:2010el}).
\begin{figure*}
\vskip -.8 in
\hskip -.1 in
\resizebox{7.8in}{!}{\includegraphics{imaging_survey_compilation_mean_sensitivity_paspreview.eps}}
\vskip -0.7 in
\caption{Mean sensitivity maps from a meta-analysis of 384 unique stars with published high-contrast imaging observations. M dwarfs provide the highest sensitivities to lower planet masses in the contrast-limited regime. Altogether, current surveys probe the lowest masses at separations of $\sim$30--300~AU. Contours denote 10\%, 30\%, 50\%, 70\%, and 90\% sensitivity limits. \label{fig:meansensitivity} }
\end{figure*}
\begin{figure}
\vskip -.2 in
\hskip -.6 in
\resizebox{4.1in}{!}{\includegraphics{mass_plfrequency_correlation_paspreview_v2.eps}}
\vskip -0.2 in
\caption{Probability distributions for the occurrence rate giant planets
from a meta-analysis of direct imaging surveys in the literature.
2.8$^{+3.7}_{-2.3}$\% of BA stars, $<$4.1\% of FGK stars, and $<$3.9\% of M dwarfs harbor giant planets
between 5--13~\mbox{$M_\mathrm{Jup}$} \ and 30--300~AU. The correlation between stellar host mass and giant planet
frequency at small separations ($<$2.5~AU) from \citet{Johnson:2010gu} is shown in blue. Larger sample sizes are needed
to discern any such correlation on wide orbits.
0.6$^{+0.7}_{-0.5}$\% of stars of any mass host giant planets over the same mass and separation range. \label{fig:plfrequency} }
\end{figure}
\subsection{The Occurrence Rate of Giant Planets on Wide Orbits: Meta-Analysis of Imaging Surveys}{\label{sec:occurrencerate}}
The frequency and mass-period distribution of planets spanning various orbital distances, stellar host masses, and system ages
provides valuable clues about the dominant processes shaping the
formation and evolution of planetary systems.
These measurements are best addressed with large samples and uniform statistical analyses.
\citet{Nielsen:2008kk} carried out the first such large-scale study based on
adaptive optics imaging surveys from \citet{Biller:2007ht} and \citet{Masciadri:2005gl}.
From their sample of 60~unique stars they found an upper limit of 20\% for $>$4~\mbox{$M_\mathrm{Jup}$} \ planets between 20--100~AU
at the 95\% confidence level.
This was expanded to 118 targets in \citet{Nielsen:2010jt} by including the GDPS survey of \citet{Lafreniere:2007cv},
resulting in the same upper limit and planet mass regime but for a broader range of separations of 8--911~AU at 68\% confidence.
\citet{Vigan:2012jm} and \citet{Rameau:2013it} combined their own observations of high-mass stars with previous surveys
and measured occurrence rates of 8.7$^{+10.1}_{-2.8}$\% (for 3--14~\mbox{$M_\mathrm{Jup}$} \ planets between 5--320~AU) and
16.1$^{+8.7}_{-5.3}$\% (for 1--13~\mbox{$M_\mathrm{Jup}$} \ planets between 1--1000~AU), respectively.
\citet{Brandt:2014cw} incorporated the SEEDS, GDPS, and the NICI moving group surveys and found
a frequency of 1.0--3.1\% for 5--70~\mbox{$M_\mathrm{Jup}$} \ companions between 10--100~AU.
Recently, Galicher et al. (2016, submitted) combined results from IDPS, GDPS, and the
NaCo Survey of Young Nearby Austral Stars and found an occurrence rate of 1.05$^{+2.80}_{-0.70}$\%
for 0.5--14~\mbox{$M_\mathrm{Jup}$} \ companions between 20--300~AU based on a sample of 356 unique stars.
Breaking this into stellar mass bins did not reveal
any signs of a trend with stellar host mass.
\begin{figure*}
\vskip -1.2 in
\hskip -.2 in
\resizebox{7.3in}{!}{\includegraphics{paspreview_mass_sma_exoplanets.eps}}
\vskip -.6 in
\caption{The demographics of exoplanets from direct imaging (dark blue), radial velocity (light blue), transit (orange), and microlensing (green) surveys. Planets detected with radial velocities are minimum masses. It remains unclear whether imaged planets and brown dwarfs represent distinct populations or whether they form a continuous distribution down to the fragmentation limit. Directly imaged substellar companions are compiled from the literature, while planets found with other methods are from exoplanets.eu as of April 2016. \label{fig:mass_sma_exoplanets} }
\end{figure*}
Here I reexamine the occurrence rate of giant planets with a meta-analysis of
the largest and deepest high-contrast imaging surveys.
696 contrast curves are assembled from the literature from the programs
outlined in Section~\ref{sec:firstgen}. For stars with more than one observation,
the deeper contrast curve at 1$''$ is chosen. Targets with stellar companions within 100 AU are removed from
the sample because binaries can both inhibit planet formation and dynamically disturb planetary orbits.
Most candidate planets uncovered during these surveys are rejected as background stars
from second epoch observations, but some candidates are either not recovered or
are newly revealed in follow-up data. Because of finite telescope allocation,
some of these candidates remain untested for common proper motion.
These ambiguous candidates cannot
be ignored in a statistical analysis because one (or more) could indeed be bound.
In these cases, contrast curves are individually truncated
one standard deviation above the brightest candidate.
Ages are taken from the literature, except for members of young moving groups, for which
the most recent (and systematically older) group ages from \citet{Bell:2015gw} are adopted.
Most targets in the sample are younger than 300~Myr and lie within 100~pc.
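The sample-assembly rules above (keeping the deeper contrast curve at 1$''$, rejecting close binaries, truncating curves above untested candidates) can be sketched as follows; the data layout and function names are illustrative assumptions made for demonstration, not the actual pipeline.

```python
# Illustrative sketch of the sample-assembly rules described above.
# Data structures and names are assumptions, not the pipeline actually used.

def pick_deepest(curves, sep_arcsec=1.0):
    """Among several contrast curves for one star, keep the one that is
    deepest (largest delta-mag) at the reference separation (1 arcsec)."""
    def depth_at(curve):
        # curve: list of (separation_arcsec, contrast_dmag) pairs
        return min(curve, key=lambda p: abs(p[0] - sep_arcsec))[1]
    return max(curves, key=depth_at)

def truncate_above_candidate(curve, cand_dmag, sigma_dmag):
    """Cap a contrast curve one standard deviation above (i.e., brighter
    than) the brightest candidate untested for common proper motion."""
    ceiling = cand_dmag - sigma_dmag  # smaller delta-mag = brighter
    return [(sep, min(dmag, ceiling)) for sep, dmag in curve]
```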
Altogether this leaves 384 unique stars spanning B2--M6 spectral types:
76 from \citet{Bowler:2015ja}, 72 from \citet{Biller:2013fu}, 61 from \citet{Nielsen:2013jy}, 54 from \citet{Lafreniere:2007cv},
45 from \citet{Brandt:2014hc}, 30 from \citet{Janson:2013cjb}, 25 from \citet{Vigan:2012jm},
14 from \citet{Wahhaj:2013iq}, and 7 from \citet{Janson:2011hu}.
Sensitivity maps and planet occurrence rates are derived following \citet{Bowler:2015ja}.
For a given planet mass and semimajor axis, a population of artificial planets on random circular orbits
is generated in a Monte Carlo fashion and converted into apparent magnitudes and
separations using Cond hot-start evolutionary models
from \citet{Baraffe:2003bj}, the age of the host star, and the distance to the system, including
uncertainties in age and distance.
These are compared with the measured contrast curve to infer the fractional sensitivity at each grid point
spanning 30 logarithmically-uniform bins in mass and separation between 1--1000~AU and 0.5--100~\mbox{$M_\mathrm{Jup}$}.
When available, fractional field of view coverage is taken into account.
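The Monte Carlo sensitivity computation can be sketched roughly as below; the crude orbital projection and the `dmag_of_mass` model are stand-ins for the proper orbit sampling and the Cond evolutionary grids, and the sketch omits the age and distance uncertainties mentioned above.

```python
import bisect
import math
import random

def _interp(x, xs, ys):
    """Piecewise-linear interpolation of a contrast curve, clamped at the ends."""
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def sensitivity(mass_mjup, sma_au, contrast_curve, dist_pc, dmag_of_mass,
                n_trials=2000, seed=0):
    """Fraction of simulated planets of a given mass and semimajor axis that
    would be recovered above the measured contrast curve."""
    rng = random.Random(seed)
    seps = [p[0] for p in contrast_curve]    # arcsec, ascending
    dmags = [p[1] for p in contrast_curve]   # contrast limit (larger = deeper)
    n_det = 0
    for _ in range(n_trials):
        # crude stand-in for a random circular orbit seen at a random phase
        rho = sma_au * abs(math.sin(2.0 * math.pi * rng.random())) / dist_pc
        if rho < seps[0] or rho > seps[-1]:
            continue                         # outside the curve's coverage
        if dmag_of_mass(mass_mjup) <= _interp(rho, seps, dmags):
            n_det += 1                       # planet brighter than the limit
    return n_det / n_trials
```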
Contrasts measured in $CH_4S$ filters are converted to $H$-band using an empirical color-spectral type
relationship based on synthetic colors of ultracool dwarfs from the SpeX Prism Library (\citealt{Burgasser:2014tr}) as well as
the spectral type-effective temperature sequence
from \citet{Golimowski:2004en}\footnote{Synthesized colors of ultracool
dwarfs using the Keck/NIRC2
$CH_4S$ and $H_\mathrm{MKO}$ filter profiles yields the following relation:
$CH_4S$--$H_\mathrm{MKO}$ = $\sum_{i=0}^{4}$$c_i$SpT$^i$, where $c_0$=0.03913178,
$c_1$=0.008678245, $c_2$=--0.001542768, $c_3$=0.0001033761, $c_4$=--2.902588$\times$10$^{-6}$,
and SpT is the numerical near-infrared spectral type (M0=1.0, L0=10.0, T0=20.0).
This relation is valid from M3--T8 and the rms of the fit is 0.025~mag.
\citet{Golimowski:2004en} provide an empirical $T_\mathrm{eff}$(SpT) relationship, but the inverse SpT($T_\mathrm{eff}$) is necessary
for this filter conversion at a given mass and age. Refitting the same data from Golimowski et al. yields the following:
SpT = $\sum_{i=0}^{4}$$c_i$$T_\mathrm{eff}$$^i$, where
$c_0$=36.56779, $c_1$=--0.004666549, $c_2$=--9.872890$\times$10$^{-6}$,
$c_3$=4.108142$\times$10$^{-9}$, $c_4$=--4.854263$\times$10$^{-13}$.
This is valid for 700~K $<$ $T_\mathrm{eff}$ $<$ 3900~K, the rms of the fit is 1.89 subtypes, and SpT
is the same numerical near-infrared spectral type as above.}.
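For reference, the two polynomial relations in the footnote can be evaluated directly; the coefficients below are copied from the text, and the functions are only meaningful over the quoted validity ranges (M3--T8 and 700--3900~K).

```python
def ch4s_minus_h(spt):
    """CH4S - H_MKO color (mag) versus numerical NIR spectral type
    (M0 = 1.0, L0 = 10.0, T0 = 20.0); quoted validity: M3--T8."""
    c = [0.03913178, 0.008678245, -0.001542768, 0.0001033761, -2.902588e-6]
    return sum(ci * spt ** i for i, ci in enumerate(c))

def spt_of_teff(teff):
    """Numerical NIR spectral type versus effective temperature (K);
    quoted validity: 700 K < Teff < 3900 K."""
    c = [36.56779, -0.004666549, -9.872890e-6, 4.108142e-9, -4.854263e-13]
    return sum(ci * teff ** i for i, ci in enumerate(c))
```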
The mean sensitivity maps for all 384 targets and separate bins containing BA stars (110 targets),
FGK stars (155 targets), and M dwarfs (118 targets) are shown in Figure~\ref{fig:meansensitivity}.
In general, surveys of high-mass stars probe higher planet masses than deep imaging around M dwarfs
owing to differences in the host stars' intrinsic luminosities. The most sensitive region for all stars is between
$\sim$30--300~AU, with less coverage at extremely wide separations because of limited fields of view
and at small separations in contrast-limited regimes.
The occurrence rate of giant planets for all targets and for each stellar mass bin is listed in Table~\ref{tab:gpfreq}, which assumes
logarithmically-uniform distributions in
mass and separation (see Section 6.5 of \citealt{Bowler:2015ja} for details).
The mode and 68.3\% minimum credible interval (also known as the highest posterior density interval) of the planet frequency probability distribution are reported.
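A minimal sketch of extracting a mode and a minimum (highest-posterior-density) credible interval from a gridded binomial posterior; the uniform prior and the notion of an effective number of sensitivity-weighted trials are simplifying assumptions here, not the actual likelihood of \citet{Bowler:2015ja}.

```python
import math

def binom_posterior_mode_hpd(k, n_eff, cred=0.683, ngrid=5000):
    """Grid posterior for an occurrence rate f, given k detections in n_eff
    effective (sensitivity-weighted) trials with a uniform prior; returns
    the mode and the minimum (highest-posterior-density) credible interval."""
    fs = [(i + 0.5) / ngrid for i in range(ngrid)]
    logp = [k * math.log(f) + (n_eff - k) * math.log(1.0 - f) for f in fs]
    m = max(logp)
    p = [math.exp(lp - m) for lp in logp]
    norm = sum(p)
    p = [pi / norm for pi in p]
    # accumulate grid cells in order of decreasing posterior density
    order = sorted(range(ngrid), key=lambda i: -p[i])
    chosen, total = [], 0.0
    for i in order:
        chosen.append(i)
        total += p[i]
        if total >= cred:
            break
    return fs[order[0]], fs[min(chosen)], fs[max(chosen)]
```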
Two massive stars in the sample host planets that were either discovered or successfully
recovered in these surveys: $\beta$~Pic, with a
planet at 9~AU, and HR~8799, with planets spanning 15--70~AU.
HR~8799 is treated as a single detection.
The most precise occurrence rate measurement is between 5--13~\mbox{$M_\mathrm{Jup}$} \ and 30--300~AU.
Over these ranges, the frequency of planets orbiting
BA, FGK, and M stars is 2.8$^{+3.7}_{-2.3}$\%, $<$4.1\%, and $<$3.9\%, respectively
(Figure~\ref{fig:plfrequency}).
Here upper limits are 95\% confidence intervals.
Although there are hints of a higher giant planet occurrence rate around massive stars
analogous to the well-established correlation at small separations
(\citealt{Johnson:2007et}; \citealt{Lovis:2007cy}; \citealt{Johnson:2010gu}; \citealt{Bowler:2010eo}),
this trend is not yet statistically significant at wide orbital distances;
larger sample sizes in each stellar mass bin are needed to unambiguously test this correlation.
Marginalizing over stellar host mass, the overall giant planet occurrence rate for the full sample of 384 stars
is 0.6$^{+0.7}_{-0.5}$\%, which happens to be comparable to the frequency of hot Jupiters around
FGK stars in the field (1.2~$\pm$~0.4\%; \citealt{Wright:2012kma}) and in the $Kepler$ sample
(0.5~$\pm$~0.1\%; \citealt{Howard:2012di}).
However, compared to the high occurrence rate of giant planets (0.3--10~\mbox{$M_\mathrm{Jup}$}) with orbital periods out to 2000~days ($\sim$10\%; \citealt{Cumming:2008hg}),
massive gas giants are clearly quite rare at wide orbital distances.
\section{Brown Dwarfs, Giant Planets, and the Companion Mass Function}{\label{sec:dbl}
Direct imaging has shown that planetary-mass companions exist at unexpectedly wide separations
but the provenance of these objects remains elusive.
There is substantial evidence that the tail-end of the star formation process can produce objects
extending from low-mass stars at the hydrogen burning limit ($\approx$75~\mbox{$M_\mathrm{Jup}$})
to brown dwarfs at the opacity limit for fragmentation
($\approx$5--10~\mbox{$M_\mathrm{Jup}$}), which corresponds to the minimum mass of a pressure-supported fragment
during the collapse of a molecular cloud core
(\citealt{Low:1976wt}; \citealt{Silk:1977il}; \citealt{Boss:2001vw}; \citealt{Bate:2002iq}; \citealt{Bate:2009br}).
Indeed, isolated objects with inferred masses below 10~\mbox{$M_\mathrm{Jup}$} \ have been found in a range of contexts over the past decade:
in star-forming regions (\citealt{Lucas:2001ed}; \citealt{Luhman:2009cn}; \citealt{Scholz:2012kv}; \citealt{Muzic:2015kn}),
among closer young stellar associations (\citealt{Liu:2013gya}; \citealt{Gagne:2015kf}; \citealt{Kellogg:2016fo}; \citealt{Schneider:2016iq}),
and at much older ages as Y dwarfs in the field (\citealt{Cushing:2011dk}; \citealt{Kirkpatrick:2012ha}; \citealt{Beichman:2013dl}).
Similarly, several systems with \emph{companions} below $\approx$10~\mbox{$M_\mathrm{Jup}$} \ are difficult to explain with any formation scenario other than
cloud fragmentation: 2M1207--3932~Ab is a $\sim$25~\mbox{$M_\mathrm{Jup}$} \ brown dwarf with a $\sim$5~\mbox{$M_\mathrm{Jup}$} \ companion at an
orbital distance of 40~AU (\citealt{Chauvin:2004cy})
and 2M0441+2301 AabBab is a quadruple system comprising a low-mass star, two brown dwarfs, and a
10~\mbox{$M_\mathrm{Jup}$} \ object in a hierarchical and distinctly non-planetary configuration (\citealt{Todorov:2010cn}).
From the radial velocity perspective, the distribution of gas giant minimum masses
is generally well-fit with a decaying power law (\citealt{Butler:2006dd}; \citealt{Johnson:2009iz}; \citealt{Lopez:2012jp})
or exponential function (\citealt{Jenkins:2016um}) that tapers off beyond $\sim$10~\mbox{$M_\mathrm{Jup}$}.
This is evident in Figure~\ref{fig:mass_sma_exoplanets}, although inhomogeneous radial velocity detection biases
which exclude lower-mass planets at wide separations are not taken into account.
The dominant formation channel for this population of close-in giant planets is thought to be core accretion plus gas capture,
in which growing cores reach a critical mass and undergo runaway gas accretion (e.g., \citealt{Helled:2013et}).
The totality of evidence indicates that
the decreasing brown dwarf companion mass function almost certainly overlaps with the rising giant planet mass function in the
5--20~\mbox{$M_\mathrm{Jup}$} \ mass range.
No strict mass cutoff can therefore unambiguously divide giant planets from brown dwarfs,
and many of the imaged companions below 13~\mbox{$M_\mathrm{Jup}$} \ listed in Table~\ref{tab:planets}
probably originate from the dwindling brown dwarf companion mass function.
Another approach to separate these populations is to consider
formation channel: planets originate in disks while brown dwarfs form like stars from
the gravitational collapse of molecular cloud cores.
However, not only are the relic signatures of formation difficult to discern for individual discoveries, but
objects spanning the planetary up to the stellar mass regimes may also form in large Toomre-unstable circumstellar disks at
separations of tens to hundreds of AU (e.g., \citealt{Durisen:2007wg}; \citealt{Kratter:2016dn}).
Any binary narrative based on origin in a disk versus a cloud core is therefore also problematic.
Furthermore, both giant planets and brown dwarf companions may migrate,
dynamically scatter, or undergo periodic Kozai-Lidov orbital oscillations if a third body is present, further mixing these
populations and complicating
the interpretation of very low-mass companions uncovered with direct imaging.
The deuterium-burning limit at $\approx$13~\mbox{$M_\mathrm{Jup}$} \ is generally acknowledged as a nebulous, imperfect,
and ultimately artificial division
between brown dwarfs and giant planets.
Moreover, this boundary is not fixed and may
depend on planet composition, core mass, and accretion history
(\citealt{Spiegel:2011ip}; \citealt{Bodenheimer:2013ki}; \citealt{Mordasini:2013cr}).
Uncertainties in planet luminosities,
evolutionary histories, metallicities, and ages can also produce large systematic errors in
inferred planet masses (see Section~\ref{sec:masses}), rendering
inconsequential any sharp boundary set by mass.
However, despite these shortcomings, this border lies in the planet/brown dwarf ``mass valley'' and
may still serve as a pragmatic (if flawed)
qualitative division between two populations
formed \emph{predominantly}
with their host stars and \emph{predominantly} in protoplanetary disks.
Observational tests of formation routes will eventually provide the necessary tools to understand the
relationship between these populations.
This can be carried out at an individual level with environmental clues such as coplanarity of multi-planet systems
or orbital alignment within a debris disk; enhanced metallicities or abundance ratios relative to host stars (\citealt{Oberg:2011je});
or overall system orbital architecture.
Similarly, the statistical properties of brown dwarfs and giant planets can be used to identify dominant formation channels:
the separation distribution of
objects formed through cloud fragmentation should resemble that of binary stars; disk instability and core accretion
may result in a bimodal period distribution for giant planets (\citealt{Boley:2009dk});
planet scattering to wide orbits should produce a rising
mass function at low planet masses as opposed to a truncated mass distribution at the fragmentation limit for
cloud fragmentation and disk instability; and the companion mass function and mass ratio distribution
are expected to smoothly extend from low-mass stars down to the fragmentation
limit if a common formation channel is at play (\citealt{Brandt:2014cw}; \citealt{Reggiani:2016dn}).
Testing these scenarios will require much larger sample sizes given the low occurrence rates
uncovered in direct imaging surveys.
}
\section{Conclusions and Future Outlook}{\label{sec:conclusions}}
High-contrast imaging is still in its nascence. Radial velocity, transit, and microlensing
surveys have unambiguously demonstrated that giant planets are much rarer than super-Earths
and rocky planets at separations $\lesssim$10~AU.
In that light, the discovery of truly massive planets at tens, hundreds, and even thousands of AU
with direct imaging is fortuitous, even if the overall occurrence rate of this population is quite low.
Each detection technique has produced many micro paradigm shifts over the past
twenty years that disrupt and rearrange perceptions about the demographics and architectures
of planetary systems.
Hot Jupiters, correlations with stellar mass and metallicity,
the ubiquity of super-Earths, compact systems of small planets,
resonant configurations, orbital misalignments,
the prevalence of habitable-zone Earth-sized planets,
circumbinary planets, and featureless clouds and hazes
are an incomplete inventory within just a few AU (e.g., \citealt{Winn:2015jt}).
The most important themes to emerge from direct imaging are that
massive planets exist but are uncommon at wide separations ($>$10~AU),
and at young ages the low-gravity atmospheres of giant planets do not resemble
those of older, similar-temperature brown dwarfs.
There are many clear directions forward in this field. Deeper contrasts and smaller
inner working angles will probe richer portions of planetary mass- and separation
distributions. Thirty meter-class telescopes with extreme adaptive optics systems
will regularly probe sub-Jovian masses at separations down to 5~AU.
This next generation will uncover more planets and enable a complete mapping of the evolution of
giant planet atmospheres over time.
Other fertile avenues for high-contrast imaging include
precise measurements of atmospheric composition (\citealt{Konopacky:2013jvc}; \citealt{Barman:2015dy}),
doppler imaging (\citealt{Crossfield:2014es}; \citealt{Crossfield:2014cy}),
photometric monitoring to map variability of rotationally-modulated features (e.g., \citealt{Apai:2016ky}),
synergy with other detection methods
(e.g., \citealt{Lagrange:2013gh}; \citealt{Sozzetti:2013de}; \citealt{Montet:2014fa}; \citealt{Clanton:2016ft}),
advances in stellar age-dating at the individual and population levels,
merging high-contrast imaging with high-resolution spectroscopy (\citealt{Snellen:2014kz}; \citealt{Snellen:2015bta}),
surveying the companion mass function to sub-Jovian masses,
polarimetric observations of photospheric clouds (e.g., \citealt{Marley:2013vq}; \citealt{JensenClem:2016kt}),
statistical correlations with stellar host properties,
probing the earliest stages of protoplanet assembly (\citealt{Kraus:2012gk}; \citealt{Sallum:2015ej}),
astrometric orbit monitoring and constraints on dynamical histories,
and robust dynamical mass measurements to test evolutionary models and
probe initial conditions (e.g., \citealt{Dupuy:2009jq}; \citealt{Crepp:2012eg}).
High-contrast imaging has a promising future and will play an ever-growing role
in investigating the architecture, atmospheres, and origin of exoplanets.
\acknowledgments
It is a pleasure to thank the referee, Rebecca Oppenheimer, as well as
Lynne Hillenbrand, Dimitri Mawet, Sasha Hinkley, and Trent Dupuy for their thoughtful comments and constructive feedback on this review.
Michael Liu, Arthur Vigan, Christian Marois, Motohide Tamura, Ga{\"e}l Chauvin,
Andy Skemer, Adam Kraus, and Bruce Macintosh contributed helpful suggestions on past and ongoing imaging surveys.
Bruce Macintosh, Eric Nielsen, Andy Skemer, and Rapha{\"e}l Galicher kindly provided
images for Figure~\ref{fig:planetgallery}. Trent Dupuy generously shared his compilation
of late-T and Y dwarfs used in Figure~\ref{fig:cmd}.
This research has made use of the Exoplanet Orbit Database,
the Exoplanet Data Explorer at exoplanets.org,
and the SpeX Prism Spectral Libraries maintained by Adam Burgasser.
NASA's Astrophysics Data System Bibliographic Services together with the VizieR catalogue access tool and SIMBAD database
operated at CDS, Strasbourg, France, were invaluable resources for this work.
\newpage
\section{Introduction}
This paper is an extended and modified version of preprint \cite{EGZ}.
Let $X,X_1,\ldots,X_n$ be independent identically distributed (i.i.d.) random variables
with common distribution $F=\mathcal L(X)$.
The L\'evy concentration function of a random variable $X$ is defined by the equality
$$Q(F,\lambda)=\sup_{x\in\mathbf{R}}F\{[x,x+\lambda]\}, \quad \lambda>0.$$
Let $a=(a_1,\ldots,a_n)\in \mathbf{R}^n$, $a\ne0$.
In this paper we study the behavior of the concentration functions
of the weighted sums $S_a=\sum_{k=1}^{n}a_k X_k$ with respect to
the arithmetic structure of coefficients~$a_k$.
Refined concentration results for these weighted sums play an important role
in the study of singular values of random matrices (see, for instance, Nguyen and Vu \cite{Nguyen and Vu},
Rudelson and Vershynin \cite{Rudelson and Vershynin08,
Rudelson and Vershynin}, Tao and Vu \cite{Tao and Vu, Tao and Vu2}, Vershynin \cite{Vershynin}).
In this context the problem is referred to as the Littlewood--Offord problem (see also \cite{Erd, Hal, LO}).
In the sequel, let $F_a$ denote the distribution of the sum
$S_a$, and let $G$ be the distribution of the symmetrized random variable
$\widetilde{X}=X_1-X_2$.
Let
\begin{eqnarray} \label{0}M(\tau)=\tau^{-2}\int\limits_{|x|\leq\tau}x^2
\,G\{dx\}+\int\limits_{|x|>\tau}G\{dx\}
=\mathbf{E}
\min\big\{{\widetilde{X}^2}/{\tau^2},1\big\}, \quad \tau>0.
\end{eqnarray}
The symbol $c$ will be used for absolute positive constants; note that $c$ may take different values at different occurrences, even within the same formula.
We will write $A\ll B$ if $A\leq c B$. Also we will write $A\asymp
B$ if $A\ll B$ and $B\ll A$. For
${x=(x_1,\dots,x_n )\in\mathbf R^n}
$
we will denote $\|x\|^2= x_1^2+\dots +x_n^2$
and
$\|x\|_\infty= \max_j|x_j|$.
The elementary properties of concentration functions are well studied (see, for instance,
\cite{Arak and Zaitsev, Hengartner and Theodorescu,
Petrov}).
In particular, it is obvious that
$$
Q(F,\mu)\le (1+\lfloor \mu/\lambda\rfloor)\,Q(F,\lambda)
$$
for any $\mu,\lambda>0$, where $\lfloor x\rfloor$ is the integer part of a number~$x$.
Hence, \begin{equation}\label{8a}
Q(F,c\lambda)\asymp\,Q(F,\lambda)
\end{equation}
and\begin{equation}\label{8j}
\hbox{if }Q(F,\lambda)\ll B,\hbox{ then }Q(F,\mu)\ll B\,(1+\mu/\lambda).
\end{equation}
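Purely as a numerical illustration (not part of the argument), the concentration function of an atomic distribution can be computed by a direct scan, and the regularity property above checked on an example; the scan relies on the fact that, for purely atomic $F$, the supremum over intervals $[x,x+\lambda]$ is attained with the left endpoint at an atom.

```python
def conc(atoms, lam):
    """Q(F, lam) for a purely atomic F given as {x: prob}.  For atomic F the
    supremum over intervals [x, x + lam] is attained with the left endpoint
    at an atom, so scanning the atoms suffices."""
    return max(sum(p for y, p in atoms.items() if x <= y <= x + lam)
               for x in atoms)

# distribution of X1 + 2*X2 for independent fair Bernoulli X1, X2
F = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
q1 = conc(F, 1.0)   # an interval of length 1 covers two adjacent atoms: 0.5
q3 = conc(F, 3.0)   # an interval of length 3 covers all atoms: 1.0
# Q(F, mu) <= (1 + floor(mu / lam)) * Q(F, lam):
assert q3 <= (1 + int(3.0 / 1.0)) * q1
```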
The problem of estimating the concentration function of weighted sums $S_a$
under different restrictions on the vector $a\in \mathbf{R}^n$ and distributions of summands
has been studied in \cite{Friedland and Sodin, Nguyen and Vu, Rudelson and Vershynin, Rudelson and Vershynin08,
Tao and Vu, Tao and Vu2, Vershynin}. Eliseeva and Zaitsev \cite{Eliseeva and Zaitsev} (see also \cite{Eliseeva})
obtained some improvements of the results \cite{Friedland and Sodin, Rudelson and Vershynin}. In this paper we formulate
and prove similar refinements of a result of Vershynin~\cite{Vershynin}.
Note that a relation between the rate of decay of the concentration function and
the arithmetic structure of distributions of independent random variables was discovered
for arbitrary distributions of summands in a paper of Arak \cite{Arak}
(see also \cite{Arak and Zaitsev, Zaitsev2}). Much later, similar relations were found in
\cite{Nguyen and Vu, Rudelson and Vershynin08, Rudelson and Vershynin, Tao and Vu, Tao and Vu2, Vershynin}
in the particular case of distributions involved
in the Littlewood--Offord problem. The authors of the present paper intend to devote
a separate publication to a comparison of the results of the aforementioned papers.
Let $\log_+(x)=\max\{0,\log x\}$. The result of Vershynin \cite{Vershynin}, related to the Littlewood--Offord problem,
is formulated as follows.
\begin{proposition}\label{thV} Let $X, X_1,\ldots,X_n$ be i.i.d. random variables
and
$a=(a_1,\ldots,a_n)\in \mathbf{R}^n$
with $\|a\|=1$. Assume that there exist positive numbers
$\tau, p, K, L,D$ such that $Q(\mathcal{L}(X),\tau)\leq 1-p$,
$\mathbf{E} \,\left|X\right|\le K$, and
\begin{equation} \label{5bk}
\|\,t a-m\|\geq L\sqrt{\log_+(t/L)}\ \hbox{ for all $m\in \mathbf Z^n$ and \
$t\in(0,D]$}.
\end{equation}
If $L^2\geq 1/{p}$, then
\begin{equation} \label{5bff} Q\Big(F_a,\cfrac{1}{D}\Big)\le \frac{C\,L}{D},\end{equation}
where the quantity $C$ depends on $\tau, p, K$ only.\end{proposition}
\begin{corollary}\label{c1a} Let the conditions of Proposition\/ $\ref{thV}$ be satisfied.
Then, for any $ \varepsilon\ge 0$,
\begin{equation} \label{7bb}Q(F_a,\varepsilon)
\ll C\,L\,\Big(\varepsilon+\cfrac{1}{D}\Big). \end{equation}
\end{corollary}
\medskip
It is clear that if
\begin{equation} \label{4bb}
0<D\le D(a)=D_{L}(a)=
\inf\Big\{t>0:\hbox{dist}(ta,\mathbf{Z}^n)<
L\sqrt{\log_+(t/L)}\Big\},
\end{equation}
where
$$\hbox{dist}\,(ta,\mathbf{Z}^n)= \min_{m \in \mathbf{Z}^n}\|\,ta - m\|
=\Big(\sum_{k=1}^{n} \min_{m_k \in \mathbf{Z}} |\,ta_k -
m_k|^2\Big)^{1/2} ,
$$ then condition \eqref{5bk} holds.
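For readers who wish to experiment with these definitions, here is a small Python sketch (our own illustration, not part of the paper; the function names are ours) that evaluates $\hbox{dist}(ta,\mathbf{Z}^n)$ and approximates $D(a)$ from \eqref{4bb} by a simple grid search:

```python
import math

def dist_to_lattice(t, a):
    # dist(t a, Z^n) = (sum_k min_{m in Z} |t a_k - m|^2)^{1/2};
    # the minimizing integer m is the nearest integer to t*a_k
    return math.sqrt(sum((t * ak - round(t * ak)) ** 2 for ak in a))

def least_common_denominator(a, L, t_max=1e4, step=1e-3):
    # Grid-search approximation of D(a) from (4bb): the smallest t > 0
    # with dist(t a, Z^n) < L * sqrt(log_+(t / L)).
    t = step
    while t <= t_max:
        if dist_to_lattice(t, a) < L * math.sqrt(max(0.0, math.log(t / L))):
            return t
        t += step
    return math.inf
```

For instance, with $a=(0.6,0.8)$ (so $\|a\|=1$) and $L=1$ one finds $D(a)\approx1.13$, consistent with the general facts $D(a)>L$ and $D(a)\ge1/2\,\|a\|_\infty$ noted in the text.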
In the paper \cite{Vershynin}, the quantity~$D(a)$ is called
the least common denominator of the vector ${a\in\mathbf{R}^n}$ (see also \cite{Rudelson and Vershynin08, Rudelson and Vershynin} for similar definitions).
Note that for $|\,t|\leq 1/2\,\|a\|_{\infty}$ we have
\begin{equation}\label{4s}
\big(\hbox{dist}(ta,\mathbf{Z}^n)\big)^2 =
\sum_{k=1}^{n}|\,ta_k|^2= \|a\|^2t^2= t^2. \end{equation}
By definition, $D(a)> L$. Moreover, equality~\eqref{4s} implies
that $D(a)\ge {1}/{2\,\|a\|_{\infty}}$ (see \cite{Vershynin}, Lemma~6.2).
Note that just the statement of Corollary~\ref{c1a} with $D=D(a)$
is presented as the corresponding concentration result for
the Littlewood--Offord problem in~\cite{Vershynin}.
The formulation of Proposition~\ref{thV} is more natural than the statement of Corollary~\ref{c1a}.
Furthermore, Proposition~\ref{thV} implies Corollary~\ref{c1a} using relations
\eqref{8j} and~\eqref{4bb}. The minimal $L$ for which Proposition~\ref{thV} holds depends on $a$ and $D$.
Moreover, generally speaking, it can be essentially larger than $p^{-1/2}$.
In the formulation of Proposition~\ref{thV}, w.l.o.g. we can replace assumption~\eqref{5bk} by the following:
\begin{equation} \label{5bh}
\|\,t a-m\|\geq f_L(t)\quad \hbox{ for all $m\in \mathbf Z^n$ and \
$t\in \Big[\cfrac{1}{2\,\|a\|_\infty},D\Big]$},
\end{equation}where
\begin{equation} \label{5cv}
f_L(t)= \begin{cases}\qquad t/6,& \hbox{ for }0<t< eL,\\ L\sqrt{\log(t/L)},&\hbox{ for }t\ge eL.
\end{cases}
\end{equation}
Note that equality~\eqref{4s} justifies why the assumption $t\geq1/2\,\|a\|_{\infty}$
in condition~\eqref{5bh} is natural. For $0<t< 1/2\,\|a\|_{\infty}$, inequality~\eqref{5bh} is satisfied automatically.
Formally, condition~\eqref{5bh} can be more restrictive than condition \eqref{5bk}.
However, if condition~\eqref{5bk} is satisfied, but condition \eqref{5bh} is not satisfied,
then inequality \eqref{7bb} holds for trivial reasons.
Indeed, if $t\ge eL$, then condition \eqref{5bh} for such a $t$ follows from assumption~\eqref{5bk}.
If $0<t< eL$ and there exists an $m\in \mathbf Z^n$ such that $\|\,t a-m\|<t/6$, then, denoting
$k=\lfloor eL/t\rfloor+1$, we have $tk\ge eL$ and
$$
\|\,tk a-km\|<tk/6\le 2eL/6<L\leq L\sqrt{\log_+(tk/L)}.
$$
Since $km\in\mathbf Z^n$, we have
$D\le D(a)\le tk<6 L$ and the required inequality~\eqref{5bff} is a consequence of $Q(F_a, 1/D)\le1$.
Note that it may happen that condition \eqref{5bh} is satisfied,
while condition \eqref{5bk} fails for some $t$ from the interval $L<t<eL$.
Then the estimates for the concentration functions in Proposition~\ref{thV} and Corollary~\ref{c1a} still hold.
This follows from Theorem~\ref{th1a} of this paper.
The above argument shows that it is reasonable to define the alternative least common denominator as
\begin{equation} \label{4bt}
D^*(a)=
\inf\Big\{t>0:\hbox{dist}(ta,\mathbf{Z}^n)<
f_L(t\|a\|) \Big\}.
\end{equation}
This definition will also be used below in the case where $\|a\|\ne1$.
Obviously,
\begin{equation} \label{4btr}
D^*(\lambda a)=D^*(a)/\lambda, \quad\hbox{ for any }\lambda>0,
\end{equation}
and equality~\eqref{4s} implies again that $D^*(a)\ge 1/2\,\|a\|_{\infty}$.
\medskip
Now we formulate the main result of this paper.
\begin{theorem}\label{th1a} Let $X,X_1,\ldots,X_n$ be {i.i.d.} random variables. Let
$a=(a_1,\ldots,a_n)\in \mathbf{R}^n$
with $\|a\|=1$. Assume that condition \eqref{5bh}
is satisfied.
If $L^2\geq 1/{M(1)}$, where the quantity $M(1)$ is defined by formula \eqref{0}, then
\begin{equation} \label{5bf} Q\Big(F_a,\cfrac{1}{D}\Big)\ll \frac{1}{D\sqrt{M(1)}}.
\end{equation}
\end{theorem}
\medskip
Let us reformulate Theorem\/ $\ref{th1a}$ for arbitrary $a$, without assuming that $\|a\|=1$.
\begin{corollary}\label{c1b} {Let the conditions of Theorem\/ $\ref{th1a}$ be satisfied with
condition~\eqref{5bh} replaced by the condition
\begin{equation} \label{5bj}
\|\,t a-m\|\geq f_L(t\|a\|)\ \hbox{ for all $m\in \mathbf Z^n$ and \
$t\in \Big[\cfrac{1}{2\,\|a\|_\infty},D\Big]$},
\end{equation}
and without the assumption $\|a\|=1$. If $L^2\geq 1/{M(1)}$, then
\begin{equation} \label{5bj3}
Q\Big(F_a,\cfrac{1}{D}\Big)\ll \cfrac{1}{\|a\|D\sqrt{M(1)}} \,.
\end{equation}}
\end{corollary}
The proofs of our Theorem \ref{th1a} and Corollary \ref{c1b}
are similar to the proof of the main results of~\cite{Eliseeva and Zaitsev}.
They are in a sense more natural than the proofs of
Vershynin~\cite{Vershynin}, since they do not use unnecessary assumptions like
$\mathbf{E} \,\left|X\right|\le K$. This is achieved by an application of relation~\eqref{1b}.
Our proof differs from the arguments used
in \cite{Friedland and Sodin, Rudelson and Vershynin, Vershynin}. We
apply the methods developed by Ess\'een \cite{Esseen} (see the proof of Lemma~4 of Chapter~II in
\cite{Petrov}).
\medskip
Applying Corollary~\ref{c1b} to the random variables
${X_k}/{\tau}$, $\tau>0$, we obtain the following result.
\begin{corollary}\label{c2a} Let
$V_{a,\tau}=\mathcal{L}\big(\sum\limits_{k=1}^{n}a_k {X_k}/{\tau}\big)$, $\tau>0$.
Then, under the conditions of Corollary~$\ref{c1b}$
with the condition~$L^2\geq 1/{M(1)}$ replaced by the condition
$L^2\geq 1/{M(\tau)}$, we have
\begin{equation}\label{4p}
Q\Big(V_{a,\tau},\cfrac{1}{D}\Big) = Q\Big(F_a,\cfrac{\tau}{D}\Big) \ll
\cfrac{1}{\|a\|D\sqrt{M(\tau)}}\,.
\end{equation}
In particular, if $\|a\|=1$, then
\begin{equation}\label{6p}
Q\Big(F_a,\cfrac{\tau}{D}\Big) \ll
\cfrac{1}{D\sqrt{M(\tau)}}\,.
\end{equation}
\end{corollary}
\medskip
For the proof of Corollary \ref{c2a}, it suffices to use Corollary \ref{c1b} and relation \eqref{0}.
\medskip
It is evident that $M(\tau)\gg 1-Q(G,\tau)\geq 1-Q(F,\tau)\geq p$, where $p$ is
introduced in Proposition \ref{thV}. Note that $M(\tau)$ can be essentially larger than~$p$.
For example, $p$ may be equal to $0$, while
$M(\tau)>0$ for any non-degenerate distribution ${F=\mathcal L(X)}$.
Comparing the bounds \eqref{5bff} and \eqref{6p}, we see that the factor~$L$ is replaced by
the factor~${1}/{\sqrt{M(\tau)}}$
which can be essentially smaller than~$L$ under the conditions of Corollary~\ref{c2a}.
Moreover, there is an unnecessary assumption $\mathbf{E} \,\left|X\right|\le K$ in the formulation of
Proposition \ref{thV}. Finally, the dependence of the constants on the distribution $\mathcal L(X)$ is stated explicitly:
in inequalities \eqref{5bf}, \eqref{5bj3} and \eqref{6p} the constants are absolute, as opposed to inequalities \eqref{5bff} and \eqref{7bb},
where the dependence of the quantity $C$ on $\tau$, $p$, $K$ is not specified.
An improvement of Corollary \ref{c1a} is given below in Theorem~\ref{th1b}.
\medskip
We recall now the well-known Kolmogorov--Rogozin
inequality (see \cite{Arak and Zaitsev, Hengartner and Theodorescu, Petrov, Rogozin}).
\medskip
\begin{proposition}\label{thKR} Assume that $Y_1,\ldots,Y_n$ are
independent random variables with the distributions $W_k=\mathcal{L}(Y_k)$.
Let $\lambda_1,\ldots,\lambda_n$
be positive numbers such that ${\lambda_k \leq \lambda}$, for $k=1,\ldots,n$. Then
\begin{equation} \label{2}
Q\Big(\mathcal{L}\Big(\sum_{k=1}^{n}Y_k\Big),\lambda\Big)\ll\lambda\,
\Big(\sum_{k=1}^{n}\lambda_k^2\,\big(1-Q(W_k,\lambda_k)\big)\Big)^{-1/2}.
\end{equation}
\end{proposition}
Ess\'een \cite{Esseen} (see \cite{Petrov}, Theorem 3 of Chapter III)
has improved this result. He has shown that the following statement is true.
\begin{proposition}\label{thE}Under the conditions of Proposition $\ref{thKR}$ we have
\begin{equation} \label{3}
Q\Big(\mathcal{L}\Big(\sum_{k=1}^{n}Y_k\Big),\lambda\Big)\ll\lambda
\,\Big(\sum_{k=1}^{n}\lambda_k^2\,
M_k(\lambda_k)\Big)^{-1/2},
\end{equation}
where $M_k(\tau)=\mathbf{E}\,
\min\big\{{\widetilde{Y_k}^2}/{\tau^2},1\big\}$.\end{proposition}
Further improvements of \eqref{2} and \eqref{3} may be found
in \cite{Arak, Arak and Zaitsev, Bretagnolle, GZ1, GZ2,
Kesten, Miroshnikov and Rogozin, Nagaev and Hodzhabagyan, Zaitsev}.
\medskip
It is clear that Theorem \ref{th1a} is related to Proposition
\ref{thV} in a similar way as Ess\'een's inequality~\eqref{3} is related to the Kolmogorov--Rogozin inequality~\eqref{2}.
Moreover, in inequalities \eqref{5bff} and~\eqref{7bb}, the dependence of $C$ on $\tau$, $p$ and $K$ is not written out explicitly.
In the special case where $D=1/2\,\|a\|_{\infty}$,
no restrictions on the arithmetic structure of the vector $a$ are imposed,
and Corollary~\ref{c2a} implies the bound
\begin{equation}\label{4}
Q(F_a,\,\|a\|_{\infty}\,\tau)\ll \cfrac{\|a\|_{\infty}}{\|a\|\sqrt{M(\tau)}}.
\end{equation}
This result follows from Ess\'een's inequality \eqref{3}
applied to the sum of non-identically distributed random variables $Y_k=a_kX_k$ with
$\lambda_k=a_k\,\tau$,
$\lambda=\|a\|_\infty\,\tau$.
For $a_1=\cdots=a_n=n^{-1/2}$, inequality \eqref{4} turns into
the well-known particular case of Proposition \ref{thE}:
\begin{equation}\label{4y}
Q(F^{*n},\tau)\ll \cfrac{1}{\sqrt{n\,M(\tau)}}\,.
\end{equation}
Inequality \eqref{4y} implies as well the Kolmogorov--Rogozin inequality for i.i.d. random variables:
$$Q(F^{*n},\tau)\ll \cfrac{1}{\sqrt{n\,(1-Q(F,\tau))}}\,.$$
Inequality \eqref{4} is not able to yield a bound of better order than $O(n^{-1/2})$,
since the right-hand side of \eqref{4} is at least $n^{-1/2}$.
The results stated above are more interesting if $D$ is essentially larger
than~$1/2\,\|a\|_{\infty}$. In this case one can expect estimates
of better order than $O(n^{-1/2})$. Just such estimates of
$Q(F_a,\lambda)$ are required to study the distributions of eigenvalues of random matrices.
For
$0<D<1/2\,\|a\|_{\infty}$, the inequality
\begin{equation}\label{4m}
Q\Big(F_a,\cfrac{\tau}{D}\Big) \ll
\cfrac{1}{\|a\|D\sqrt{M(\tau)}}
\end{equation}
also holds under the conditions of Corollary \ref{c2a}.
In this case it follows from \eqref{8j} and \eqref{4}.
\medskip
Under the conditions of Corollary \ref{c2a}, there are many possibilities to represent
a fixed $\varepsilon$ as $\varepsilon=\tau/D$ for an application of inequality~\eqref{4p}.
Therefore, for a fixed $\varepsilon=\tau/D$ we can try to minimize
the right-hand side of inequality~\eqref{4p} choosing optimal $\tau$ and~$D$. This is possible, and the optimal bound is given
in the following Theorem~\ref{th1b}.
\begin{theorem}\label{th1b} Let
the conditions of Corollary $\ref{c1b}$ for $D \leq D^*(a)$ be satisfied
except the condition~$L^2\geq 1/{M(1)}$. Let $L^2> 1/{ P}$, where
$$
P={\mathbf P}(\widetilde{X}\neq 0)=\lim_{\tau\to0}M(\tau).
$$
Then there exists a $\tau_0$ such that $L^2=1/{M(\tau_0)}$.
Moreover, the bound \begin{equation}\label{4pt2}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{1}{\|a\|D^*(a)\sqrt{M(\varepsilon\, D^*(a))}}
\end{equation}
is valid for $0<\varepsilon\le\varepsilon_0=\tau_0/D^*(a)$.
Furthermore, for $\varepsilon\ge\varepsilon_0$,
the bound
\begin{equation}\label{4pr2}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{\varepsilon L}{\varepsilon_0\,\|a\|D^*(a)}
\end{equation} holds.
\end{theorem}
\medskip
In the statement of Theorem~\ref{th1b}, the quantity~$\varepsilon$ can be arbitrarily small. If
$\varepsilon$ tends to zero and $L^2> 1/{ P}$, we obtain
\begin{equation}\label{4pp4}
Q(F_a,0)\ll \cfrac{1}{\|a\|D^*(a)\sqrt{P}} .
\end{equation}
Applying inequalities \eqref{4pt2}--\eqref{4pp4}, we should take into account
that $\|a\|D^*(a)=D^*(a/\|a\|)$ by virtue of \eqref{4btr}.
Theorem\/ $\ref{th1b}$ follows easily from Corollary \ref{c2a}.
Indeed, denoting $\varepsilon=\tau/D$, we can rewrite inequality~\eqref{4p} as
\begin{equation}\label{4pp}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{1}{\|a\|D\sqrt{M(\varepsilon\, D)}}.
\end{equation}
Inequality~\eqref{4pp} holds if $L^2\geq 1/{M(\varepsilon\, D)}$ and $0<D\le D^*(a)$.
If $L^2\geq 1/{M(\varepsilon\, D^*(a))}$, then the choice $D=D^*(a)$ is optimal
in inequality~\eqref{4pp} since $$D^2{M(\varepsilon\, D)}=\mathbf{E}
\min\big\{{\widetilde{X}^2}/{\varepsilon^2},D^2\big\}$$ is increasing when $D$ increases.
For the same reason, if $L^2< 1/{M(\varepsilon\, D^*(a))}$, the optimal choice of $D$
in inequality~\eqref{4pp}
is given by the solution $D_0(\varepsilon)$
of the equation~${L^2= 1/{M(\varepsilon\, D)}}$. This solution exists and is unique if
$L^2> 1/{ P}$, since
the function $M(\tau)$ is continuous and strictly decreasing if $M(\tau)<P$.
Moreover, $M(\tau)\to0$ as $\tau\to\infty$.
In this case inequality~\eqref{4pp} turns into
\begin{equation}\label{4ppp}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{L}{\|a\|D_0(\varepsilon)}.
\end{equation}
Furthermore, choosing $\tau_0$ to be the solution of the equation
$L^2=1/{M(\tau)}$, we see that inequality~\eqref{4pt2}
is valid for $0<\varepsilon\le\varepsilon_0=\tau_0/D^*(a)$. It is clear that $D_0(\varepsilon_0)=D^*(a)$.
Moreover, for $\varepsilon\ge\varepsilon_0$, we have
$$
M(\varepsilon\, D_0(\varepsilon))=M(\varepsilon_0\, D_0(\varepsilon_0))=L^{-2}
$$
and, hence, $\varepsilon\, D_0(\varepsilon)=\varepsilon_0\, D_0(\varepsilon_0)$.
Therefore, for $\varepsilon\ge\varepsilon_0$, inequality~\eqref{4pr2} holds.
The right-hand side of the inequality~\eqref{4pr2} with $\|a\|=1$ admits the following representations
$$
\cfrac{\varepsilon L}{\varepsilon_0\,D^*(a)}=
\cfrac{L}{D_0(\varepsilon)}=
\cfrac{1}{D_0(\varepsilon)\sqrt{M(\varepsilon\, D_0(\varepsilon))}}\,.
$$
Obviously, inequality \eqref{4pr2} could be derived from \eqref{4pp} with $\varepsilon=\varepsilon_0$
by an application of inequality~\eqref{8j}. On the other hand, for
$0<\varepsilon_1<\varepsilon\le\varepsilon_0$,
we could apply inequality~\eqref{8j} to inequality \eqref{4pt2} obtaining the bound
\begin{equation}\label{4ps}
Q\big(F_a,\varepsilon\big) \ll \frac\varepsilon{\varepsilon_1}\, Q\big(F_a,\varepsilon_1\big) \ll
\cfrac{\varepsilon}{\varepsilon_1\,\|a\|D^*(a)\sqrt{M(\varepsilon_1\, D^*(a))}}\,.
\end{equation}
However, inequality \eqref{4ps} is weaker than inequality \eqref{4pt2} since, evidently,
\begin{equation}\label{4pa}
\varepsilon^2M(\varepsilon\, \mu)=\mathbf{E}
\min\big\{{\widetilde{X}^2}/{\mu^2},\varepsilon^2\big\}\ge \mathbf{E}
\min\big\{{\widetilde{X}^2}/{\mu^2},\varepsilon_1^2\big\}=\varepsilon_1^2\,M(\varepsilon_1\,\mu),
\end{equation}
for any $\mu>0$.
Theorem~\ref{th1b} is an essential improvement of Corollary \ref{c1a}.
In particular, in contrast with inequality \eqref{7bb} of Corollary~\ref{c1a}, for small $\varepsilon$,
the right-hand side of
inequality~\eqref{4pt2} of Theorem~\ref{th1b} may be decreasing as $\varepsilon$ decreases.
Moreover, we have just shown that an application of inequality~\eqref{8j} would lead to a
loss of precision. Recall that just an application of inequality~\eqref{8j} allows us to derive
Corollary~\ref{c1a} from Proposition~\ref{thV}.
Consider a simple example. Let $X$
be the random variable
taking values $0$ and $1$ with probabilities
\begin{equation}
{\bf P}\{ X=1\}=1-{\bf P}\{ X=0\}=p>0.\label{1p}
\end{equation}Then
\begin{equation}
{\bf P}\{ \widetilde X=\pm1\}=p(1-p),\quad{\bf P}\{
\widetilde X=0\}=1-2\,p(1-p),\label{11p}
\end{equation}
and the function $M(\tau)$ has the form
\begin{equation} \label{5hh}
M(\tau)= \begin{cases}\quad 2\,p(1-p),& \hbox{ for }
0<\tau< 1,\\ 2\,p(1-p)/\tau^2,&\hbox{ for }\tau\ge 1.
\end{cases}
\end{equation}
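As a sanity check (our own illustration, not from the paper), formula \eqref{5hh} can be verified against the definition $M(\tau)=\mathbf{E}\min\{\widetilde X^2/\tau^2,1\}$ computed directly from the symmetrized law \eqref{11p}:

```python
def symmetrized_bernoulli(p):
    # law of Xtilde = X1 - X2 for X ~ Bernoulli(p), cf. (11p)
    q = p * (1 - p)
    return {-1: q, 0: 1 - 2 * q, 1: q}

def M_from_pmf(tau, pmf):
    # M(tau) = E min{ Xtilde^2 / tau^2, 1 } for a discrete law
    return sum(prob * min(x * x / tau**2, 1.0) for x, prob in pmf.items())

def M_closed_form(tau, p):
    # closed form (5hh)
    q = 2 * p * (1 - p)
    return q if tau < 1 else q / tau**2
```

For any $p$ and $\tau$ the two values agree, e.g. $M(1/2)=2p(1-p)$ and $M(2)=p(1-p)/2$.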
Assume for simplicity that $\|a\|=1$.
If $L^2>1/2\,p(1-p)$, then $\tau_0=L\sqrt{2\,p(1-p)}$ and,
for $\varepsilon\ge\varepsilon_0=L\sqrt{2\,p(1-p)}/D^*(a)$,
we have the bound
\begin{equation}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{\varepsilon }{\sqrt{p(1-p)}}.\label{17p}
\end{equation}
The same bound~\eqref{17p} follows from inequality~\eqref{4pt2} of Theorem~\ref{th1b} for $1/D^*(a)\le\varepsilon\le\varepsilon_0$.
For $0<\varepsilon\le1/D^*(a)$, inequality~\eqref{4pt2} implies the bound
\begin{equation}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{1 }{D^*(a)\sqrt{p(1-p)}}.\label{18p}
\end{equation}
Thus, \begin{equation}
Q\big(F_a,\varepsilon\big) \ll\min\bigg\{
\cfrac{1 }{\sqrt{p(1-p)}}\Big(\varepsilon+\cfrac{1}{D^*(a)}\Big),\:1\bigg\},
\quad\hbox{for all }\varepsilon\ge0.\label{19p}
\end{equation}
Inequality~\eqref{19p} cannot be essentially improved. Consider, for example,
\begin{equation}\label{19py}
a=(s^{-1/2}, \ldots, s^{-1/2},0,\ldots,0)
\end{equation}
with
the first $s\le n$ coordinates equal to~$s^{-1/2}$ and
the last $n-s$ coordinates equal to zero. In this case $D^*(a)\asymp s^{1/2}$,
the random variable $s^{1/2}S_a$ has a binomial distribution with parameters $s$ and $p$,
and it is well-known that
\begin{equation}
Q\big(F_a,\varepsilon\big) \gg\min\bigg\{
\cfrac{1 }{\sqrt{p(1-p)}}\,\Big(\varepsilon+\cfrac{1}{\sqrt s}\Big),\:1\bigg\},
\quad\hbox{for all }\varepsilon\ge0.\label{13pp}
\end{equation}
Comparing the bounds \eqref{19p} and \eqref{13pp}, we see that Theorem~\ref{th1b}
provides the optimal order of $Q\big(F_a,\varepsilon\big)$ for all possible values
of~$\varepsilon$. Moreover, the corresponding constants depend on $p$ optimally.
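The order of the lower bound \eqref{12pp} is also easy to confirm numerically (an illustration of ours, not from the paper): for the vector \eqref{19py} the value $Q(F_a,0)$ is the largest point mass of the binomial law, which is indeed of order $1/\sqrt{s\,p(1-p)}$:

```python
from math import comb, sqrt

def Q_at_zero(s, p):
    # For a = (s^{-1/2},...,s^{-1/2},0,...,0), s^{1/2} S_a ~ Bin(s, p),
    # so Q(F_a, 0) equals the maximal point probability of Bin(s, p).
    return max(comb(s, k) * p**k * (1 - p)**(s - k) for k in range(s + 1))

def order_check(s, p):
    # the ratio Q(F_a, 0) * sqrt(s p (1-p)) should stay bounded
    # away from 0 and infinity as s grows (it tends to 1/sqrt(2 pi))
    return Q_at_zero(s, p) * sqrt(s * p * (1 - p))
```

By the local limit theorem the ratio approaches $1/\sqrt{2\pi}\approx0.4$, matching the order claimed in \eqref{12pp}.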
It would seem that the last example may be reduced to the trivial case $n=s$. This is not quite right.
It is clear that the value $Q\big(F_a,1\big)$ does not change significantly
after a small perturbation of the vector $a$ (defined in~\eqref{19py}),
if the absolute values of the last $n-s$ coordinates of vector $a$ are small but nonzero.
Moreover, the order of smallness of the last $n-s$ coordinates can be chosen in such a way that
inequalities \eqref{19p} and \eqref{13pp} are satisfied with $\varepsilon\gg s^{-1}$ and $D^*(a)\asymp s^{1/2}$.
For the sake of completeness, we give below a short proof of inequality~\eqref{13pp}.
It is easy to see that $\hbox{Var}\,S_a\,=p(1-p)$. Therefore, by Chebyshev's inequality,
\begin{equation}
\mathbf{P}\big\{|S_a-\mathbf{E}\,S_a|<2\sqrt{p(1-p)}\big\}\ge3/4.\label{13r}
\end{equation}
The random variable $S_a$ takes values which are multiples of~$s^{-1/2}$.
Therefore, if $s\,p(1-p)\le1$, then inequality~\eqref{13r} implies that
$Q\big(F_a,0\big)\asymp1$ and inequality~\eqref{13pp} is trivially valid.
Assume now $s\,p(1-p)>1$.
If $0<\varepsilon\le4\sqrt{p(1-p)}$, then, using \eqref{8j} and~\eqref{13r},
we obtain
\begin{equation}
3/4\le Q\big(F_a,4\sqrt{p(1-p)}\big)\ll
\varepsilon^{-1}\sqrt{p(1-p)}\,Q\big(F_a,\varepsilon\big),\label{15r}
\end{equation} and, hence,
\begin{equation}
Q\big(F_a,\varepsilon\big) \gg
\cfrac{\varepsilon}{\sqrt{p(1-p)}}\,.\label{2pp}
\end{equation}
It is clear that \eqref{8a}, \eqref{8j} and~\eqref{2pp} imply that $Q\big(F_a,\varepsilon\big)\asymp1$,
for $\varepsilon\ge4\sqrt{p(1-p)}$.
Applying inequality~\eqref{2pp} for $\varepsilon=s^{-1/2}$
and using the lattice structure of the support of distribution~$F_a$,
we conclude that,
for $0\le\varepsilon<{s^{-1/2}}$,
\begin{equation}
Q\big(F_a,\varepsilon\big) \ge Q\big(F_a,0\big)\gg
\cfrac{1 }{\sqrt{s\,p(1-p)}}\,.\label{12pp}
\end{equation}
Thus, inequalities~\eqref{8a}, \eqref{8j}, \eqref{2pp} and~\eqref{12pp} imply \eqref{13pp}.
\medskip
The results of this paper are formulated for a fixed $L$.
It is clear that in their application we should try to choose the optimal $L$,
which satisfies the conditions and minimizes the right-hand sides of inequalities for the concentration functions.
Recall that the least common denominator $D^*(a)$ depends on $L$.
The quantity $\tau_0=\varepsilon_0\, D^*(a)$ (which is the solution of the
equation ${L^2=1/{M(\tau)}}$) may be interpreted as a quantity depending on $L$
and on the distribution~$\mathcal{L}(X)$.
Moreover, comparing the bounds \eqref{7bb} and \eqref{4pr2} for relatively large
values of~$\varepsilon$, we see that $\tau_0\to\infty$ as $L\to\infty$. Therefore, the factor
$L/\tau_0$ is much smaller than~$L$ for large values of~$L$.
In particular, in the example above we have $\tau_0=L\sqrt{2\,p(1-p)}$.
Another example would be a symmetric stable distribution with parameter~$\alpha$,
$0<\alpha<2$. In this case the
characteristic function $\widehat{F}(t)= \mathbf{E}\,\exp(itX)$ has the form~$\widehat{F}(t)=
\exp(-c |t|^\alpha)$. It can be shown that $\tau_0$ then behaves as $L^{2/\alpha}$ as $L\to\infty$.
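This rate can be made plausible by a short heuristic (ours, not from the paper), using the standard tail asymptotics $\mathbf P\{|\widetilde X|>x\}\asymp x^{-\alpha}$ of a symmetric stable law with $0<\alpha<2$:

```latex
% heuristic: behaviour of M(tau) for a symmetric stable law, tau large
M(\tau)=\mathbf P\{|\widetilde X|>\tau\}
        +\frac{1}{\tau^{2}}\,
         \mathbf E\,\widetilde X^{2}\,\mathbf 1\{|\widetilde X|\le\tau\}
\asymp \tau^{-\alpha}+\frac{\tau^{2-\alpha}}{\tau^{2}}
\asymp \tau^{-\alpha},
\qquad \tau\to\infty.
```

The defining equation $L^{2}=1/M(\tau_0)$ then gives $\tau_0^{\alpha}\asymp L^{2}$, i.e.\ $\tau_0\asymp L^{2/\alpha}$.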
\medskip
Inequality \eqref{17p} can be rewritten in the form
\begin{equation}
Q\big(F_a,\varepsilon\big) \ll
\cfrac{\varepsilon }{\sigma},\quad\hbox{for }\varepsilon\ge\varepsilon_0,\label{117p}
\end{equation}
where $\sigma^2=\hbox{Var}\,X$.
It is clear that a similar situation occurs for any random variable~$X$ with finite variance.
In particular, inequality \eqref{117p} is obviously satisfied
for all $\varepsilon\ge0$,
if $\|a\|=1$ and $X$ has a Gaussian distribution with $\hbox{Var}\,X=\sigma^2$.
Moreover, the order of the right-hand side of the inequality is optimal.
In this particular case, for any $\tau>0$ the relation
$$
\frac1{\sqrt{M(\tau)}}\asymp1+\frac{\tau}{\sigma},
$$
holds. Using this relation and Theorem~\ref{th1b} with $\|a\|=1$, we easily obtain
\begin{equation}\label{169p}
Q\big(F_a,\varepsilon\big) \ll\cfrac{\varepsilon }{\sigma},\quad\hbox{for }\varepsilon\ge \cfrac{\sigma}{D^*(a)}.
\end{equation}
Inequality~\eqref{169p} provides the correct dependence of the concentration function on $\sigma$ for
$\sigma/D^*(a)\le\varepsilon\le\sigma$. It is impossible to obtain estimates of such order from
inequality~\eqref{7bb}. Estimate \eqref{169p} cannot be deduced from Theorem~\ref{th1b}
for small $\varepsilon$, since in Theorem~\ref{th1b} the distribution ${F=\mathcal L(X)}$ is arbitrary
and the concentration function $Q\big(F_a,\varepsilon\big)$ need not
tend to zero as $\varepsilon\to0$ (see~\eqref{13pp}).
\bigskip
\section{Proofs}
We will use the classical Ess\'een inequality (\cite{Esseen66}, see also
\cite{Hengartner and Theodorescu} and \cite{Petrov}):
\begin{equation} \label{1}
Q(F,\lambda) \ll
\lambda\int_{0}^{\lambda^{-1}} {|\widehat{F}(t)|
\,dt},\quad \lambda>0,
\end{equation}
where $\widehat{F}(t)$ is the corresponding characteristic function.
In the general case, $Q(F,\lambda)$ cannot be estimated from below by the right-hand side of
inequality~\eqref{1}.
However, if we assume additionally that the distribution $F$ is symmetric
and its characteristic function is non-negative for all~$t\in\mathbf R$, then we have the lower bound:
\begin{equation} \label{1a}Q(F,\lambda)\gg
\lambda\int_{0}^{\lambda^{-1}} {\widehat{F}(t) \,dt}
\end{equation}
and, therefore,
\begin{equation} \label{1b}
Q(F,\lambda)\asymp\lambda\int_{0}^{\lambda^{-1}}
{\widehat{F}(t) \,dt}
\end{equation} (see \cite{Arak and Zaitsev}, Lemma~1.5 of Chapter II).
The use of relation \eqref{1b} allows us to simplify the arguments of
\cite{Friedland and Sodin, Rudelson and Vershynin, Vershynin} which were applied to the Littlewood--Offord problem
(see also~{\cite{Eliseeva, Eliseeva and Zaitsev}}).
\medskip
\emph{Proof of Theorem\/ $\ref{th1a}$.} Let $r$ be a fixed number satisfying $1<r\le\sqrt 2$.
Represent the distribution $G=\mathcal{L}(\widetilde{X})$
as a mixture
$$G=q E +\sum_{j=0}^{\infty}p_j G_j,$$ where $q={\mathbf
P}(\widetilde{X}=0)$,
$$
p_j={\mathbf P}(\widetilde{X} \in A_j),\quad j=0,1,2,\ldots,
$$
$A_0=\{x:|x|>1\}$,
$A_j=\{x: r^{-j}<|x|\leq r^{-j+1}\}$, $E$ is the probability measure concentrated at zero, and $G_j$
are probability measures defined for $p_j>0$ by the formula
$$G_j\{X\}=G\{X\cap A_j\}/{p_j},$$ for any Borel set~$X$.
In fact, $G_j$ is the conditional distribution of $\widetilde X$ provided that $\widetilde X\in A_j$. If $p_j=0$,
then $G_j$ may be taken to be arbitrary probability measures.
For $z\in \mathbf{R}$, $\gamma>0$, introduce the distribution
$H_{z,\gamma}$, with the characteristic function
\begin{equation} \label{11}\widehat{H}_{z,\gamma}(t)=\exp\Big(-\cfrac{\gamma}{\,2\,}\;\sum_{k=1}^{n}\big(1-\cos(2a_k
zt)\big)\Big).\end{equation}
It is clear that $H_{z,\gamma}$ is a symmetric infinitely divisible distribution.
Therefore, its characteristic function is positive for all $t\in \mathbf{R}$.
For the characteristic function $\widehat{F}(t)= \mathbf{E}\,\exp(itX)$, we have
$$|\widehat{F}(t)|^2 = \mathbf{E}\,\exp(it\widetilde{X}) =
\mathbf{E}\,\cos(t\widetilde{X}),$$
where $\widetilde{X}=X_1-X_2$ is the corresponding symmetrized random variable. Hence,
\begin{equation}\label{6}|\widehat{F}(t)| \leq
\exp\Big(-\cfrac{\,1\,}{2}\;\big(1-|\widehat{F}(t)|^2\big)\Big) =
\exp\Big(-\cfrac{\,1\,}{2}\;\mathbf{E}\,\big(1-\cos(t\widetilde{X})\big)\Big).
\end{equation}
According to \eqref{1} and \eqref{6}, we have
\begin{align}Q(F_a,1/D)&=Q(F_{2a},2/D)\le 2\,Q(F_{2a},1/D)\nonumber
\\ &\ll \frac 1D\int\limits_{0}^{D}|\widehat{F_{2a}}(t)|\,dt\nonumber
\\
&\ll\frac 1D
\int\limits_{0}^{D}\exp\Big(-\frac{\,1\,}{2}\,\sum_{k=1}^{n}\mathbf{E}\,\big(1-\cos(2a_k
t \widetilde{X})\big)\Big)\,dt=I.\label{ww}
\end{align}
It is evident that
\begin{eqnarray*}
\sum_{k=1}^{n}\mathbf{E}\big(1-\cos(2a_k t
\widetilde{X})\big)&=&\sum_{k=1}^{n}\int\limits_{-\infty}^{\infty}\big(1-\cos(2a_k
t x)\big)\,G\{dx\}
\\
&=&\sum_{k=1}^{n}\sum_{j=0}^{\infty}\int\limits_{-\infty}^{\infty}\big(1-\cos(2a_k
t x)\big)\,p_j
\,G_j\{dx\}\\
&=&\sum_{j=0}^{\infty}\sum_{k=1}^{n}\int\limits_{-\infty}^{\infty}\big(1-\cos(2
a_k t x)\big)\,p_j \,G_j\{dx\}.
\end{eqnarray*}
We denote $\beta_j=r^{-2j}p_j $,
$\beta=\sum\limits_{j=0}^{\infty}\beta_j$, $\mu_j={\beta_j}/{\beta}$,
$j=0,1,2,\ldots$. It is clear that $\sum\limits_{j=0}^{\infty}\mu_j=1$ and
${p_j}/{\mu_j}=r^{2j}\beta $ (for $p_j> 0$).
Let us estimate the quantity $\beta$:
\begin{eqnarray*}
\beta = \sum_{j=0}^{\infty}\beta_j &=&\sum_{j=0}^{\infty}r^{-2j} p_j \,
= {\mathbf P}\big\{|\widetilde{X}|>1\big\} +
\sum_{j=1}^{\infty}r^{-2j}\,{\mathbf
P}\big\{r^{-j}<|\widetilde{X}|\leq r^{-j+1}\big\} \\
&\geq&\int\limits_{|x|>1}\,G\{dx\} + \sum_{j=1}^{\infty}
\int\limits_{r^{-j}<|x|\leq r^{-j+1}}\cfrac{x^2}{r^{2}}\,G\{dx\}\\ &\geq&
\cfrac{\,1\,}{r^{2}}
\int\limits_{|x|>1}\,G\{dx\} + \cfrac{\,1\,}{r^{2}} \int\limits_{|x|\leq1}x^2 \,G\{dx\} =
\cfrac{\,1\,}{r^{2}} \,M(1).
\end{eqnarray*}
Since $1<r\le\sqrt 2$, this implies \begin{equation}\label{9}
\beta \geq \cfrac{\,1\,}{{2}}\, M(1).\end{equation}
Inequality \eqref{9} and condition $L^2\geq 1/{M(1)}$ give the bound
\begin{equation}\label{99}L^2\beta \geq \cfrac{\,1\,}{{2}}\,.
\end{equation}
We now proceed similarly to the proof of
a result of Ess\'een \cite{Esseen} (see \cite{Petrov}, Lemma 4 of Chapter II).
Using the H\"older inequality, it is easy to see that
\begin{equation}
\label{66}
I\leq \prod _{j=0}^{\infty}I_j^{\mu_j},
\end{equation}
where
\begin{eqnarray*}
I_j&=&\frac 1D\int\limits_{0}^{D}\exp\Big(-\cfrac{p_j}{2\,\mu_j}\;\sum_{k=1}^{n}\int\limits_{-\infty}^{\infty}\big(1-\cos(2a_k
t x)\big)\,G_j\{dx\}\Big)\,dt \\
&=&
\frac 1D\int\limits_{0}^{D}\exp\Big(-\frac{\,1\,}2\,r^{2j}\beta\;\sum_{k=1}^{n}\int\limits_{A_j}\big(1-\cos(2a_k
t
x)\big)\,G_j\{dx\}\Big)\,dt
\end{eqnarray*}
if $p_j > 0$, and
$I_j=1$ if $p_j=0$.
Applying Jensen's inequality to the exponential in the integral (see
\cite{Petrov},
p.~49), we obtain
\begin{eqnarray}
I_j&\leq&\frac 1D\int\limits_{0}^{D}\int_{A_j}\exp\Big(-\frac{\,1\,}2\,r^{2j}\beta\;
\sum_{k=1}^{n}\big(1-\cos(2a_k t x)\big)\Big)\,G_j\{dx\}\,dt\nonumber \\
&=
&\frac 1D\int\limits_{A_j}\int\limits_{0}^{D}\exp\Big(-\frac{\,1\,}2\,r^{2j}\beta\;\sum_{k=1}^{n}\big(1-\cos(2a_k
t x)\big)\Big)\,dt\,G_j\{dx\}\nonumber \\
&\leq& \sup_{z\in A_j}\frac 1D\int\limits_{0}^{D}\widehat{H}_{z,1}^{r^{2j}\beta}(t)\,dt.\label{rr}
\end{eqnarray}
Let us estimate the characteristic function $\widehat{H}_{\pi,1}(t)$ for
$|\,t|\leq D$. We can proceed in the same way as the authors
of~\cite{Friedland and Sodin, Rudelson and Vershynin, Vershynin}.
It is evident that $$1-\cos x \geq 2x^2/\pi^2, \quad\hbox{for~${|x|\leq\pi}$}.$$
For an arbitrary~$x$, this implies that
$$1-\cos x\geq 2\,\pi^{-2}
\min_{m\in \mathbf{Z}}|\,x-2\pi m|^2.$$
Substituting this inequality into \eqref{11},
we obtain
\begin{eqnarray}
\widehat{H}_{\pi,1}(t)&\leq&\exp \Big(-\cfrac{1}{\pi^{2}} \;\sum_{k=1}^{n}\min_{m_k \in
\mathbf{Z}}\big|2\pi t a_k -2 \pi m_k\big|^2\Big)\nonumber \\
&=&\exp\Big(- 4\;\sum_{k=1}^{n}\min_{m_k \in
\mathbf{Z}}|\,ta_k-m_k|^2\Big)\nonumber \\
&=&\exp\big(- 4\;\big(\hbox{dist}(ta,\mathbf{Z}^n)\big)^2\big).\label{7b}
\end{eqnarray}
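The elementary bound $1-\cos x\ge 2x^2/\pi^2$ on $[-\pi,\pi]$ used above is easy to confirm numerically (a throwaway check of ours):

```python
import math

def cosine_bound_holds(num_points=10001):
    # check 1 - cos(x) >= 2 x^2 / pi^2 on a grid covering [-pi, pi];
    # equality holds at x = 0 and x = +-pi, hence the small tolerance
    for i in range(num_points):
        x = -math.pi + 2 * math.pi * i / (num_points - 1)
        if 1 - math.cos(x) < 2 * x * x / math.pi**2 - 1e-12:
            return False
    return True
```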
Using \eqref{4s}, we see that, for $|\,t|\leq 1/2\,\|a\|_{\infty}$,
inequality \eqref{7b} turns into
\begin{equation}\label{7a}
\widehat{H}_{\pi,1}(t)\leq\exp(-4\,t^2).
\end{equation}
Now we can use relations \eqref{5bh}, \eqref{7b}
and \eqref{7a} to estimate the integrals~$I_j$.
First we consider the case $j=1,2,\ldots$. Note that
the characteristic functions~$\widehat{H}_{z,\gamma}(t)$ satisfy the equalities
\begin{equation} \label{5}
\widehat{H}_{z,\gamma}(t)=\widehat{H}_{y,\gamma}\big({zt}/{y}\big)\quad\hbox{and}\quad
\widehat{H}_{z,\gamma}(t)=\widehat{H}_{z,1}^{\gamma}(t).
\end{equation}
The first equality \eqref{5} implies that
\begin{equation} \label{55}
\hbox{if}\quad{H}_{z,\gamma}=\mathcal L(\xi),\quad\hbox{then}\quad {H}_{y,\gamma}=\mathcal L(y\,\xi/z).
\end{equation}
For $z\in A_j$ we have $r^{-j}<|z|\leq r^{-j+1}<\pi$. Hence, for
${|\,t|\leq D}$, we have $|{zt}/{\pi}|<D$. Therefore, using the properties
\eqref{5} with $y=\pi$ and aforementioned estimates
\eqref{5bh}, \eqref{7b} and~\eqref{7a}, we obtain, for $z\in A_j$ and for $z=\pi$,
\begin{eqnarray*} \label{5cq}
\widehat{H}_{z,1}(t)
&\leq&\exp\big(-4\,f_L^2({zt}/{\pi})\big)\\
&=& \begin{cases}\qquad \exp\big(-({zt}/{\pi})^2/9\big),
& \hbox{ for }0<t\le eL\pi/z,\\ \exp\big(-4\,L^2\,\log(zt/L\pi)\big),&\hbox{ for }t> eL\pi/z.
\end{cases}
\end{eqnarray*}
Hence,
\begin{multline}\label{jj}
\sup_{z\in A_j}\int\limits_{0}^{D}\widehat{H}_{z,1}^{r^{2j}\beta}(t)\,dt
\leq
\int\limits_{0}^{D}\exp\big(-t^2\beta/9\pi^2\big)\,dt +
\int\limits_{r^{j-1} L\pi e}^{\infty}\Big(\frac{r^{j}L\pi}t\Big)^{4\,r^{2j}\beta L^2}\,dt
\ll \cfrac{1}{\sqrt{\beta}} \ .
\end{multline}In the last inequality we used inequality \eqref{99}.
Consider now the case $j=0$.
The relation \eqref{55} yields, for $z>0,\,\gamma>0$,
\begin{equation}\label{8c}
Q(H_{z,\gamma},1/D)=Q\big(H_{1,\gamma},{1}/{Dz}\big).
\end{equation}
Thus, according to \eqref{8a}, \eqref{1b}, \eqref{5} and
\eqref{8c}, we obtain
\begin{eqnarray}
\sup_{z\in A_0}\frac 1D\int\limits_{0}^{D}\widehat{H}_{z,1}^{\beta} (t) \,dt &=&
\sup_{z> 1} \frac 1D\int\limits_{0}^{D}\widehat{H}_{z,\beta} (t) \,dt \asymp
\sup_{z> 1}\; Q(H_{z,\beta},1/D)\nonumber\\ &=&
\sup_{z> 1}\; Q\big(H_{1,\beta},{1}/{Dz}\big)
\leq Q(H_{1,\beta},1/D)\nonumber \\ &\asymp& Q\big(H_{1,\beta},{1}/{D\pi}\big) =
Q(H_{\pi,\beta},1/D)\nonumber\\ &\asymp&
\frac 1D\int\limits_{0}^{D} \widehat{H}_{\pi,\beta}(t)\, dt =
\frac 1D\int\limits_{0}^{D}\widehat{H}_{\pi,1}^{\beta}(t) \,dt.\label{pp}
\end{eqnarray}
Using the bounds \eqref{5bh}, \eqref{7b} and \eqref{7a} for the characteristic
function~$\widehat{H}_{\pi,1}(t)$ and taking into account inequality \eqref{99}, we have:
\begin{equation}\label{ee}
\int\limits_{0}^{D}\widehat{H}_{\pi,1}^\beta(t)\,dt \leq
\int\limits_{0}^{D}\exp(-t^2\beta/9)\,dt +
\int\limits_{Le}^{\infty}\Big(\frac{L}t\Big)^{4\beta L^2}\,dt
\ll \cfrac{1}{\sqrt{\beta}} .
\end{equation}
According to \eqref{rr}, \eqref{jj}, \eqref{pp} and \eqref{ee},
we obtain the same estimate
\begin{equation}\label{yy}
I_j \ll \cfrac{\,1\,}{D\sqrt{\beta}}
\end{equation}
for all integrals $I_j$ with $p_j\neq 0$. In view of
$\sum\limits_{j=0}^{\infty}\mu_j=1$, from \eqref{66} and \eqref{yy} it follows that
\begin{equation}\label{yy8}I\leq\prod_{j=0}^{\infty}I_j^{\mu_j} \ll \cfrac{\,1\,}{D\sqrt{\beta}} .
\end{equation}
Using \eqref{ww}, \eqref{9} and \eqref{yy8}, we complete the proof. $\square$
\medskip
Now we will deduce Corollary $\ref{c1b}$ from Theorem \ref{th1a}.
\medskip
\emph{Proof of Corollary $\ref{c1b}$.} We denote $b=a /\|a\|\in
\mathbf{R}^n$.
Then $Q(F_a,\lambda)=Q(F_b,\lambda/\|a\|)$ for all $\lambda\ge0$. The vector~$b$ satisfies
the conditions of Theorem \ref{th1a} with $D$ replaced
by $D\|a\|$. Indeed,
$\|ub-m\|\geq f_L(u) $
for $u \in \Big[\cfrac{1}{2\,\|b\|_{\infty}},D\|a\|\Big]$ and for all $m\in \mathbf Z^n$.
This follows from condition~\eqref{5bj}, if we denote
$u={t}\|a\|$.
It remains to apply Theorem~\ref{th1a} to the vector $b$. $\square$
\bigskip
{\bf Acknowledgements.} The first and the third authors are supported by grants RFBR 10-01-00242 and 13-01-00256.
The second and the third authors are supported by the SFB 701 in Bielefeld.
The third author is supported by grant RFBR 11-01-12104 and by the Program of
Fundamental Researches of Russian Academy of Sciences ``Modern
Problems of Fundamental Mathematics''.
\section{Introduction and notation}
A variety of important observables are essentially given by the
correlators of currents with different tensorial structure. The cross
section for $e^+e^-$ annihilation into hadrons is governed by the vector
current correlator, the $Z$ decay rate by a combination of vector and
axial correlators \cite{CKKRep}.
Higgs decays can be expressed by
the correlators of the corresponding scalar or pseudo-scalar current
densities. Theoretical predictions for all these quantities in one-
and two-loop approximation, corresponding to the Born level and the
order $\alpha_s$ corrections, have long been available. However,
in view of the present or the foreseeable experimental precision
improved calculations of these quantities are required, at least up to
order $\alpha_s^2$, if possible even up to order $\alpha_s^3$. For
massless quarks the NNLO result is available both for the vector
\cite{GorKatLar91SurSam91} and the scalar correlators
\cite{Che96}. Quark mass
effects are often incorporated by inclusion of the lowest one or
two terms in an expansion in $m^2/s$. This approach is well justified
for many applications \cite{CKKRep}. Even for relatively low energy
values, down to about three to four times the quark mass, this approach
leads to an adequate result if sufficiently many terms are included in
the expansion \cite{CheHarKueSte97,HarSte97}.
Nevertheless it is desirable to calculate
the correlator for arbitrary values of $m^2/s$ without
directly invoking the
high momentum expansion. Recently this was achieved for both real and
imaginary parts of the three-loop polarization function
\cite{CheKueSte96} with a heavy quark coupled to an external current.
In a first step the method was applied to the case of the vector current
correlator.
In this paper also the axial-vector, scalar and pseudo-scalar cases
are considered.
This completes the relevant ${\cal O}(\alpha_s^2)$ corrections
to polarization functions for neutral gauge bosons
induced by heavy quarks, more specifically to their ``non-singlet'' parts.
Contributions from the double-triangle diagram, giving rise to
``singlet'' contributions, are not considered in this work and will
be treated elsewhere.
It is useful to define dimensionless variables:
\begin{eqnarray}
z\,\,=\,\,\frac{q^2}{4m^2},
&&
x\,\,=\,\,\frac{2m}{\sqrt{s}},
\end{eqnarray}
where $q$ is the external momentum of the polarization function
and $s$ is the center of mass energy in the
process $e^+e^-\to\mbox{hadrons}$.
Then the velocity, $v$, of one of the produced quarks reads
\begin{eqnarray}
v&=&\sqrt{1-x^2}.
\end{eqnarray}
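For orientation, on the cut ($q^2=s$) these variables satisfy $z=1/x^2$ and $v^2+x^2=1$. A minimal numerical sketch (the helper name is ours):

```python
import math

def kinematics(s, m):
    """Return (z, x, v) for squared c.m. energy s and quark mass m,
    evaluated on the cut q^2 = s > 4 m^2."""
    z = s / (4.0 * m * m)
    x = 2.0 * m / math.sqrt(s)
    v = math.sqrt(1.0 - x * x)
    return z, x, v
```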
Whenever the generic index $\delta$ appears without further explanation,
it is understood that
$\delta$ represents one of the letters $a,v,s$ or $p$.
The polarization functions for the four cases of interest are defined by
\begin{eqnarray}
\left(-q^2g_{\mu\nu}+q_\mu q_\nu\right)\,\Pi^\delta(q^2)
+q_\mu q_\nu\,\Pi^\delta_L(q^2)
&\!\!=\!\!&
i\int dx\,e^{iqx}\langle 0|Tj^\delta_\mu(x) j^\delta_\nu(0)|0 \rangle
\,\,\,\mbox{for}\,\,\, \delta=v,a\,,
\label{eqpivadef}
\\
q^2\,\Pi^\delta(q^2)
&\!\!=\!\!&
i\int dx\,e^{iqx}
\langle 0|Tj^\delta(x)j^\delta(0)|0 \rangle
\,\,\,\mbox{for}\,\,\, \delta=s,p,
\label{eqpispdef}
\end{eqnarray}
with the currents
\begin{eqnarray}
j_\mu^v = \bar{\psi}\gamma_\mu \psi,\qquad
j_\mu^a = \bar{\psi}\gamma_\mu\gamma_5 \psi,\qquad
j^s = \bar{\psi}\psi,\qquad
j^p = i \bar{\psi}\gamma_5 \psi.
\end{eqnarray}
In Eqs.~(\ref{eqpivadef}) and (\ref{eqpispdef}) two powers
of $q$ are factored out in order to end up with dimensionless
quantities $\Pi^\delta(q^2)$.
As we are only interested in the imaginary part, the overall
renormalization can be performed in such a way that this
factorization is possible.
Furthermore we ensure that $\Pi^\delta(0)=0$.
The transformation from this scheme to other
(overall) renormalization conditions is discussed in
App.~\ref{appmsbar}.
Concerning the renormalization it should be mentioned that
for the scalar and pseudo-scalar current the combinations
$mj^s$ and $mj^p$, where $m$ is the pole mass,
have to be considered in order to arrive at finite results.
Note that $\Pi^v_L=0$ and $\Pi_L^a$ is trivially obtained from
$\Pi^p$ through the axial Ward identity
$(q^2)^2\Pi_L^a(q^2)=4m^2q^2\,(\Pi^p(q^2)
-q^2(\partial\Pi^p(q^2)/\partial q^2)|_{q^2=0})$.
The physical observable $R(s)$ is related to $\Pi(q^2)$ by
\begin{eqnarray}
R^\delta (s)&=&12\pi\,\mbox{Im}\,\Pi^\delta(q^2=s+i\epsilon)
\qquad\qquad \mbox{for } \delta=v,a\,,
\label{eqrtopiva}
\\
R^\delta (s)&=&8\pi\,\,\,\,\mbox{Im}\,\Pi^\delta(q^2=s+i\epsilon)
\qquad\qquad \mbox{for } \delta=s,p\,.
\label{eqrtopisp}
\end{eqnarray}
It is convenient to define
\begin{eqnarray}
\Pi^\delta(q^2) &=& \Pi^{(0),\delta}(q^2)
+ \frac{\alpha_s(\mu^2)}{\pi} C_F \Pi^{(1),\delta}(q^2)
+ \left(\frac{\alpha_s(\mu^2)}{\pi}\right)^2\Pi^{(2),\delta}(q^2)
+ \ldots\,\,,
\\
\Pi^{(2),\delta} &=&
C_F^2 \Pi_A^{(2),\delta}
+ C_A C_F \Pi_{\it NA}^{(2),\delta}
+ C_F T n_l \Pi_l^{(2),\delta}
+ C_F T \Pi_F^{(2),\delta}
+ C_F T \Pi_S^{(2),\delta},
\label{eqpi2}
\end{eqnarray}
and similarly for $R^\delta(s)$.
The abelian contribution $\Pi_A^{(2),\delta}$ is already present in
(quenched) QED and $\Pi_{NA}^{(2),\delta}$ originates from the
non-abelian structure
specific for QCD. The polarization functions containing a second
massless or massive quark loop are denoted
by $\Pi_l^{(2),\delta}$ and $\Pi_F^{(2),\delta}$, respectively.
$\Pi_S^{(2),\delta}$ represents the double-triangle contribution.
Our procedure will be applied to the first three terms in
Eq.~(\ref{eqpi2}), the last two terms will be studied elsewhere.
The paper is organized as follows: In the next section
the expressions for the polarization functions in the different
kinematical regions are provided. In Section \ref{secapprox} the
approximation method is described and
the results are given in Section \ref{secresults}.
The conclusions are finally presented in Section~\ref{seccon}.
\section{Discussion of the kinematical regions}
\label{seckinreg}
This section provides a discussion of three kinematical regions
where analytical results are available and contains the
input data required for the approximation method.
\vspace{1em}
\noindent
{\bf High energy region}
\vspace{1em}
The input from the kinematical region where $-q^2\gg m^2$ puts
stringent constraints on the form of the polarization function and
plays an important r\^ole for our procedure.
In the limit of large external momentum the
polarization function can be cast into the following form:
\begin{eqnarray}
\Pi^\delta(q^2) &=& \frac{3}{16\pi^2}\sum_{n\ge0} D^\delta_n \frac{1}{z^n}.
\end{eqnarray}
The coefficients $D_n^\delta$ contain $\ln(-z)$-terms up to third order.
For the subsequent discussion only the terms with $n=0$ and $1$
are needed. These first two terms can be calculated by simply
Taylor expanding the
diagrams in the mass
\footnote{Recently terms up to order $n=4$ for the scalar and
pseudo-scalar correlator \cite{HarSte97} and up to $n=6$ for the
vector correlator \cite{CheHarKueSte97}
have been calculated. This requires also the
knowledge of massive tadpole integrals.}.
This leads to massless
three-loop integrals, for which the technique has long been known
\cite{CheTka81}.
The following decomposition of the coefficients $D^\delta_n$
is adopted:
\begin{eqnarray}
D^\delta_n &=& D^{(0),\delta}_n
+ \frac{\alpha_s(\mu^2)}{\pi} C_F D^{(1),\delta}_n
+ \left(\frac{\alpha_s(\mu^2)}{\pi}\right)^2 D^{(2),\delta}_n
+ \ldots\,\,,
\nonumber\\
D^{(2),\delta}_n &=&
C_F^2 D^{(2),\delta}_{A,n}
+C_AC_F D^{(2),\delta}_{NA,n}
+C_FTn_l D^{(2),\delta}_{l,n}
+C_FT D^{(2),\delta}_{F,n},
\nonumber\\
D^{(j),\delta}_{x,n} &=& \sum_{k=0}^3\, d_{x,n,k}^{(j),\delta}
\left(\ln\frac{-q^2}{m^2}\right)^k
\qquad\qquad
j\in\{0,1,2\},\,
x\in\{A,NA,l,F\},
\end{eqnarray}
where $d_{x,n,k}^{(j),\delta}$ are numerical constants.
For $D^{(0),\delta}_{n}$ and $D^{(1),\delta}_{n}$ the sum runs only
up to $k=1$ and $k=2$, respectively.
The results for the four cases are available in the literature
\cite{GorKatLar91SurSam91,axpssc,Che96}.
\begin{table}[t]
{\footnotesize
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{|l||r|r|r|r||r|r|r|r|}
\hline
&\multicolumn{4}{c||}{$n=0$}
&\multicolumn{4}{c|}{$n=1$}
\\
\hline
$k$ &0&1&2&3&0&1&2&3\\
\hline
\hline
$D_n^{(0),v}$
& 2.2222& -1.3333& 0.0000& 0.0000& 2.0000& 0.0000& 0.0000& 0.0000\\
$D_n^{(1),v}$
& -3.9749& -1.0000& 0.0000& 0.0000& 0.0000& -3.0000& 0.0000& 0.0000\\
$D_{A,n}^{(2),v}$
& 1.3075& 0.1250& 0.0000& 0.0000& 2.7727& -0.3750& 2.2500& 0.0000\\
$D_{NA,n}^{(2),v}$
& -9.5651& -0.7175& 0.4583& 0.0000& -3.9903& -7.7083& 1.3750& 0.0000\\
$D_{l,n}^{(2),v}$
& 2.9723& 0.2306& -0.1667& 0.0000& 2.2899& 2.1667& -0.5000& 0.0000\\
$D_{F,n}^{(2),v}$
& -1.2583& 0.2306& -0.1667& 0.0000& -5.1048& 2.1667& -0.5000& 0.0000\\
\hline
\hline
$D_n^{(0),a}$
& 3.5556& -1.3333& 0.0000& 0.0000& -2.0000& 2.0000& 0.0000& 0.0000\\
$D_n^{(1),a}$
& -2.0860& -1.0000& 0.0000& 0.0000& 1.9623& 1.5000& -1.5000& 0.0000\\
$D_{A,n}^{(2),a}$
& -1.0387& 0.1250& 0.0000& 0.0000& -8.7928& 0.2672& -1.3125& 0.7500\\
$D_{NA,n}^{(2),a}$
& -9.8007& -0.7175& 0.4583& 0.0000& 14.8142& -1.5224& -4.5417& 0.4583\\
$D_{l,n}^{(2),a}$
& 3.6266& 0.2306& -0.1667& 0.0000& -8.5657& 2.3606& 1.3333& -0.1667\\
$D_{F,n}^{(2),a}$
& -0.2639& 0.2306& -0.1667& 0.0000& 6.5500& -4.5090& 1.3333& -0.1667\\
\hline
\hline
$D_n^{(0),s}$
& 5.3333& -2.0000& 0.0000& 0.0000& -3.0000& 3.0000& 0.0000& 0.0000\\
$D_n^{(1),s}$
& -3.9623& -4.5000& 1.5000& 0.0000& 7.8185& 3.0000& -4.5000& 0.0000\\
$D_{A,n}^{(2),s}$
& 2.0416& 0.8578& 3.5625& -0.7500& -6.0543&-10.0450& -5.0625& 4.5000\\
$D_{NA,n}^{(2),s}$
&-28.2865& -5.2693& 5.9167& -0.4583& 34.4008& -7.3623&-12.9375& 1.3750\\
$D_{l,n}^{(2),s}$
& 15.5016& -0.5273& -1.8333& 0.1667&-21.3855& 9.3091& 3.7500& -0.5000\\
$D_{F,n}^{(2),s}$
&-10.1876& 6.3423& -1.8333& 0.1667& 18.5432&-14.2997& 3.7500& -0.5000\\
\hline
\hline
$D_n^{(0),p}$
& 4.0000& -2.0000& 0.0000& 0.0000& 1.0000& 1.0000& 0.0000& 0.0000\\
$D_n^{(1),p}$
& -2.9623& -4.5000& 1.5000& 0.0000& 2.6062& -3.0000& -1.5000& 0.0000\\
$D_{A,n}^{(2),p}$
& 1.0691& 0.8578& 3.5625& -0.7500& -5.6088& -6.8483& 4.3125& 1.5000\\
$D_{NA,n}^{(2),p}$
&-17.6658& -5.2693& 5.9167& -0.4583& 6.0218&-13.6485& -2.4792& 0.4583\\
$D_{l,n}^{(2),p}$
& 10.9195& -0.5273& -1.8333& 0.1667& -0.9696& 6.3253& 0.5833& -0.1667\\
$D_{F,n}^{(2),p}$
&-11.5909& 6.3423& -1.8333& 0.1667& 0.8079& -3.5443& 0.5833& -0.1667\\
\hline
\end{tabular}
\end{center}
}
\caption{\label{tabdn}
Numerical values for the coefficients $D_n^\delta$
at one-, two- and three-loop level for $\mu^2=m^2$.
}
\end{table}
We have independently repeated this calculation.
The results in the on-shell scheme are listed in Tab.~\ref{tabdn}.
Due to our renormalization condition ($\Pi^\delta(0)=0$)
for a comparison with \cite{GorKatLar91SurSam91,axpssc,Che96}
the terms presented in App.~\ref{appmsbar} have to be taken into
account.
The analytic expressions for $D_n^v$ can be found in
\cite{CheHarKueSte97}
and for $D_n^s$ and $D_n^p$ in
\cite{HarSte97}. $D_0^a$ and $D_1^a$ are listed in
App.~\ref{appdn}.
This information will serve as input for the procedure described in
Section \ref{secapprox}. It should be noted that in contrast to the
vector case treated in
\cite{CheKueSte96}
the axial-vector, scalar and pseudo-scalar correlator
also develop cubic logarithms $\ln^3(-z)$. If we had adopted
the $\overline{\mbox{MS}}$ definition for the mass and $\mu^2=q^2$,
these cubic logarithms would vanish.
\vspace{1em}
\noindent
{\bf Behaviour at $q^2=0$}
\vspace{1em}
An important input to the behaviour of the polarization function
originates from the Taylor expansion around $q^2=0$. In this case the
three-loop diagrams have to be expanded in the external momentum leading to
massive tadpole integrals. The calculation is performed with the help of the
algebraic program MATAD written in FORM
\cite{VerFORM}.
It automatically expands in $q$ up to the desired order, performs the
traces and applies recurrence relations \cite{Bro92}
to reduce the many different diagrams to a small set of master integrals.
The structure of $\Pi(q^2)$ is as follows:
\begin{eqnarray}
\Pi^\delta(q^2) &=& \frac{3}{16\pi^2}\sum_{n>0} C^\delta_n z^n,
\nonumber\\
C^\delta_n &=& C^{(0),\delta}_n
+ \frac{\alpha_s(\mu^2)}{\pi} C_F C^{(1),\delta}_n
+ \left(\frac{\alpha_s(\mu^2)}{\pi}\right)^2 C^{(2),\delta}_n
+ \ldots\,\,,
\nonumber\\
C^{(2),\delta}_n &=&
C_F^2 C^{(2),\delta}_{A,n}
+C_AC_F C^{(2),\delta}_{NA,n}
+C_FTn_l C^{(2),\delta}_{l,n}
+C_FT C^{(2),\delta}_{F,n}.
\end{eqnarray}
Although the calculation is performed analytically the results
are listed in numerical form in Tab.~\ref{tabcn}
with the choice $\mu^2=m^2$.
The analytic expressions are given in App.~\ref{appcn}.
\begin{table}[ht]
{\footnotesize
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{|l|l|r|r|r|r|r|r|}
\hline
&&&&&&&\\[-4mm]
$\delta$ & $n$ & $C_n^{(0),\delta}$ & $C_n^{(1),\delta}$
& $C_{A,n}^{(2),\delta}$
& $C_{{\it NA},n}^{(2),\delta}$ & $C_{l,n}^{(2),\delta}$
& $C_{F,n}^{(2),\delta}$ \\
&&&&&&&\\[-4mm]
\hline
\hline
$v$
&1& 1.06667& 4.04938& 5.07543& 7.09759& -2.33896& 0.72704\\
&2& 0.45714& 2.66074& 6.39333& 6.31108& -2.17395& 0.26711\\
&3& 0.27090& 2.01494& 6.68902& 5.39768& -1.89566& 0.14989\\
&4& 0.18470& 1.62997& 6.68456& 4.69907& -1.67089& 0.09947\\
&5& 0.13640& 1.37194& 6.57434& 4.16490& -1.49436& 0.07230\\
&6& 0.10609& 1.18616& 6.42606& 3.74591& -1.35348& 0.05566\\
&7& 0.08558& 1.04568& 6.26672& 3.40886& -1.23871& 0.04459\\
&8& 0.07094& 0.93558& 6.10789& 3.13175& -1.14341& 0.03677\\
\hline
\hline
$a$
&1& 0.53333& 1.70123& 2.31402& 2.71368& -1.04643& 0.34325\\
&2& 0.15238& 0.71577& 1.64824& 1.69284& -0.66383& 0.06509\\
&3& 0.06772& 0.39678& 1.18988& 1.09011& -0.43131& 0.02238\\
&4& 0.03694& 0.25243& 0.89978& 0.75793& -0.30145& 0.01013\\
&5& 0.02273& 0.17477& 0.70769& 0.55831& -0.22280& 0.00540\\
&6& 0.01516& 0.12819& 0.57409& 0.42925& -0.17167& 0.00320\\
&7& 0.01070& 0.09805& 0.47720& 0.34095& -0.13656& 0.00205\\
&8& 0.00788& 0.07743& 0.40449& 0.27780& -0.11138& 0.00139\\
\hline
\hline
$s$
&1& 0.80000& 0.45185& 0.03484& -2.51105& 0.88148& 0.71856\\
&2& 0.22857& 0.77651& 1.42576& 0.89546& -0.35912& 0.19112\\
&3& 0.10159& 0.52152& 1.52607& 0.99889& -0.40023& 0.07144\\
&4& 0.05541& 0.35707& 1.32709& 0.82750& -0.33359& 0.03362\\
&5& 0.03410& 0.25650& 1.11491& 0.66459& -0.26913& 0.01832\\
&6& 0.02273& 0.19215& 0.93714& 0.53778& -0.21845& 0.01103\\
&7& 0.01605& 0.14892& 0.79516& 0.44174& -0.17982& 0.00715\\
&8& 0.01182& 0.11862& 0.68238& 0.36853& -0.15024& 0.00489\\
\hline
\hline
$p$
&1& 1.33333& 2.33333& 2.71218& -1.85805& 0.92593& 1.31106\\
&2& 0.53333& 2.61481& 7.03952& 3.57843& -1.23162& 0.49637\\
&3& 0.30476& 2.12783& 8.27333& 4.10409& -1.50248& 0.25889\\
&4& 0.20317& 1.75361& 8.46632& 3.93931& -1.47479& 0.16180\\
&5& 0.14776& 1.48309& 8.33148& 3.65904& -1.38655& 0.11237\\
&6& 0.11366& 1.28241& 8.09038& 3.38071& -1.29093& 0.08353\\
&7& 0.09093& 1.12869& 7.82133& 3.13004& -1.20154& 0.06509\\
&8& 0.07488& 1.00753& 7.55403& 2.91007& -1.12143& 0.05249\\
\hline
\end{tabular}
\end{center}
}
\caption{\label{tabcn}
Numerical values for the coefficients $C_n^\delta$ at one-, two-
and three-loop level for $\mu^2=m^2$.
}
\end{table}
For the vector correlator the first seven coefficients
are already listed in
\cite{CheKueSte96}, whereas all other results are new.
\vspace{1em}
\noindent
{\bf Threshold behaviour}
\vspace{1em}
At threshold it is most convenient to consider first the
information about $R^\delta(v)$ and transform this subsequently
into the
corresponding expression for $\Pi^\delta(q^2)$ via
dispersion relations.
Whereas the treatment
of the four cases
in the high energy and small $q^2$ region is quite similar
there is a big difference at threshold
between the vector and pseudo-scalar correlators
on one hand and the axial-vector and scalar correlators
on the other hand.
The latter are suppressed by a factor $v^2$ w.r.t. the former.
This can already be seen by considering the Born results:
\begin{eqnarray}
R^{(0),v}\,\,=\,\,3\frac{v\left(3-v^2\right)}{2},
\quad
R^{(0),a}\,\,=\,\,3v^3,
\quad
R^{(0),s}\,\,=\,\,3v^3,
\quad
R^{(0),p}\,\,=\,\,3v.
\end{eqnarray}
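As a consistency check (ours, not from the text), all four Born results approach the common massless limit $R^{(0),\delta}=3$ for $v\to1$, while the relative $v^2$ suppression of the axial-vector and scalar channels with respect to the pseudo-scalar one is manifest near threshold:

```python
def r_born(v):
    """Born-level R for the four current correlators as functions of v."""
    return {
        "v": 3.0 * v * (3.0 - v * v) / 2.0,
        "a": 3.0 * v ** 3,
        "s": 3.0 * v ** 3,
        "p": 3.0 * v,
    }
```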
At ${\cal O}(\alpha_s)$ an expansion for $v\to 0$ is helpful
for the considerations below. It reads:
\begin{eqnarray}
R^{(1),v} &=& R^{(0),v}\left(\frac{\pi^2\left(1+v^2\right)}{2v}-4\right)
+{\cal O}(v^3),
\label{eqrv1l}
\\
R^{(1),a} &=& R^{(0),a}\left(\frac{\pi^2\left(1+v^2\right)}{2v}-2\right)
+{\cal O}(v^5),
\\
R^{(1),s} &=& R^{(0),s}\left(\frac{\pi^2\left(1+v^2\right)}{2v}-1\right)
+{\cal O}(v^5),
\\
R^{(1),p} &=& R^{(0),p}\left(\frac{\pi^2\left(1+v^2\right)}{2v}-3\right)
+{\cal O}(v^3).
\label{eqrp1l}
\end{eqnarray}
The exact results can be found in
\cite{JerLaeZer82,DreHik90}.
The analytical evaluation of the double-bubble diagrams with massless
fermion loop insertions indicates
\cite{HoaKueTeu95}
that the characteristic scale of the first two terms
proportional to $\pi^2$
is given by
the relative momentum, the last term (which is due to hard
transversal gluon exchange) by the mass of the heavy fermion. This
motivates the decomposition adopted in Eqs.~(\ref{eqrv1l}-\ref{eqrp1l}).
Let us at ${\cal O}(\alpha_s^2)$ first discuss the abelian
contribution proportional to $C_F^2$. The ratio between
$R^{(2),\delta}_A$ and $R^{(0),\delta}$ is proportional to the Sommerfeld
factor $y/(1-e^{-y})$ with $y=C_F\pi\alpha_s/v$, which resums
contributions of the form $(\alpha_s/v)^n$. Axial-vector
and scalar
correlators follow the P-wave scattering solution for the Coulomb
potential. Hence an additional
factor $(1+y^2/(4\pi^2))$ has to be
taken into account
in order to obtain the correct
leading term of ${\cal O}(\alpha_s^2)$
\cite{FadKho91}.
For the vector and pseudo-scalar contribution also the next-to-leading
term in $v$ can be determined by taking into account the correction
factor arising from the exchange of transversal gluons which reads
$(1-C_F 4\alpha_s/\pi)$ for the vector and
$(1-C_F 3\alpha_s/\pi)$ for the pseudo-scalar case.
As $R_A^{(2),a}$ and $R_A^{(2),s}$
already start at ${\cal O}(v)$ the corresponding factors are not
considered.
Finally we arrive at
\begin{eqnarray}
R_A^{(2),v}\,\,=\,\,3\left(\frac{\pi^4}{8v}-3\pi^2\right) +{\cal O}(v),
&&
R_A^{(2),a}\,\,=\,\, 3\left(\frac{\pi^2(3+\pi^2)}{12}v\right) +{\cal O}(v^2),
\label{eqAthr}\\
R_A^{(2),s} \,\,=\,\, 3\left(\frac{\pi^2(3+\pi^2)}{12}v\right) +{\cal O}(v^2),
&&
R_A^{(2),p} \,\,=\,\,
3\left(\frac{\pi^4}{12v}-\frac{3}{2}\pi^2\right) +{\cal O}(v).
\end{eqnarray}
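The origin of the $\pi^4/(8v)$ term can be traced numerically: the Sommerfeld factor expands as $y/(1-e^{-y}) = 1 + y/2 + y^2/12 + {\cal O}(y^4)$, and inserting $y=C_F\pi\alpha_s/v$ into $R^{(0),v}\approx 9v/2$ the $y^2/12$ term produces precisely the coefficient $3\pi^4/(8v)$ of Eq.~(\ref{eqAthr}). A hedged sketch of the expansion check (normalizations as in the text):

```python
import math

def sommerfeld(y):
    """Coulomb (Sommerfeld) resummation factor y / (1 - exp(-y))."""
    return y / (1.0 - math.exp(-y))

def sommerfeld_series(y):
    """Leading terms of its expansion: 1 + y/2 + y^2/12 (next term is O(y^4))."""
    return 1.0 + y / 2.0 + y * y / 12.0
```

With $y^2 = C_F^2\pi^2\alpha_s^2/v^2$ and the overall $(\alpha_s/\pi)^2 C_F^2$ stripped off, $(9v/2)\cdot y^2/12$ indeed becomes $3\pi^4/(8v)$, since $(9/2)/12=3/8$.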
It is, of course, necessary to incorporate
the strong $1/v$ singularity into the approximation method
for the vector and pseudo-scalar case.
In contrast the axial-vector and scalar current correlator are
very smooth at threshold
so that these terms are not used for our procedure. The comparison
of these Pad\'e results with the exact terms at threshold will be
performed in Section \ref{secresults} and demonstrates that the threshold
behaviour is well reproduced --- an independent test of our method.
The analytical results for the three-loop diagrams where a massless
quark loop is inserted into the
gluon propagator plus the corresponding real corrections
are available for the vector current correlator
\cite{HoaKueTeu95}
and the remaining correlators as well\footnote{We would like to thank
the authors of \cite{HoaTeu96} for providing their results prior to
publication.}
\cite{HoaTeu96}
in analytic form.
The scalar case can also be found in
\cite{Mel96}.
Expanding the results near threshold leads to
\begin{eqnarray}
R^{(2),v}_l &=& R^{(0),v}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(\frac{1}{6}\ln\frac{v^2s}{\mu^2}-\frac{5}{18}\right)
+{\cal O}(v),
\\
R^{(2),a}_l &=& R^{(0),a}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(\frac{1}{6}\ln\frac{v^2s}{\mu^2}-\frac{11}{18}\right)
+{\cal O}(v^3),
\\
R^{(2),s}_l &=& R^{(0),a}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(\frac{1}{6}\ln\frac{v^2s}{\mu^2}-\frac{11}{18}\right)
+{\cal O}(v^3),
\\
R^{(2),p}_l &=& R^{(0),p}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(\frac{1}{6}\ln\frac{v^2s}{\mu^2}-\frac{5}{18}\right)
+{\cal O}(v).
\end{eqnarray}
We include subleading terms proportional to $\ln v$ in this expansion,
since the agreement of our approximation improves visibly in those cases
where the analytical result is known.
In order to get the threshold behaviour for the
non-abelian part it is either
possible to use the QCD potential and the perturbative relation
between $\alpha_V({\vec{q}}\,^2)$ and $\alpha_s(\mu^2)$ or to
proceed as demonstrated in
\cite{CheHoaKueSteTeu96}
and deduce the gluonic double-bubble diagram, $R_g^{(2),\delta}$,
from the corresponding fermionic contribution and evaluate it for
a special choice of the gauge parameter $\xi$.
This is based on the observation that
the terms proportional to $C_A$ in the relation between
$\alpha_V({\vec{q}}\,^2)$ and $\alpha_s(\mu^2)$ are covered by
the (one-loop) gluon propagator choosing $\xi=4$.
We will choose the second method since
this trick is used also for the actual calculation.
Following
\cite{CheKueSte96}
the expansion of the ``double-bubble'' result for $\xi=4$ is taken
to represent the expansion of the full non-abelian part. For the
four correlators it is given by:
\begin{eqnarray}
R^{(2),v}_{NA} &=& R^{(0),v}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(-\frac{11}{24}\ln\frac{v^2s}{\mu^2}+\frac{31}{72}\right)
+{\cal O}(v),
\label{eqthrvna}
\\
R^{(2),a}_{NA} &=& R^{(0),a}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(-\frac{11}{24}\ln\frac{v^2s}{\mu^2}+\frac{97}{72}\right)
+{\cal O}(v^3),
\\
R^{(2),s}_{NA} &=& R^{(0),s}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(-\frac{11}{24}\ln\frac{v^2s}{\mu^2}+\frac{97}{72}\right)
+{\cal O}(v^3),
\\
R^{(2),p}_{NA} &=&R^{(0),p}\,\,\frac{\pi^2\left(1+v^2\right)}{v}
\left(-\frac{11}{24}\ln\frac{v^2s}{\mu^2}+\frac{31}{72}\right)
+{\cal O}(v).
\label{eqthrpna}
\end{eqnarray}
To combine the results from different
kinematical regions the above expressions for the imaginary part have to be
transformed into analytical functions for $\Pi^\delta(q^2)$ which respect
Eqs.~(\ref{eqrtopiva}) and (\ref{eqrtopisp}).
This can be done in close analogy to
\cite{CheKueSte96}.
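The essence of this step can be illustrated on a toy model (our illustration, not the actual correlator): for a spectral function $\mbox{Im}\,\Pi(s+i\epsilon)=\pi\,\theta(s-1)$ the once-subtracted dispersion relation $\Pi(z)=\frac{z}{\pi}\int_1^\infty ds\,\mbox{Im}\,\Pi(s)/(s(s-z))$ reproduces $\Pi(z)=-\ln(1-z)$, which obeys $\Pi(0)=0$:

```python
import math

def pi_dispersion(z, s_max=1.0e7, n=400000):
    """Once-subtracted dispersion integral (z/pi) int_1^smax ds ImPi(s)/(s(s-z))
    for the toy spectral function ImPi = pi*theta(s-1); valid for real z < 1.
    After substituting s = exp(u), the integrand becomes 1/(s - z)."""
    u_max = math.log(s_max)
    h = u_max / n
    total = 0.0
    for i in range(n):
        s = math.exp((i + 0.5) * h)
        total += 1.0 / (s - z)
    return z * h * total
```

For spacelike $z=-2$ the numerical integral agrees with $-\ln 3$ up to the quadrature and tail-cutoff errors.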
\section{The approximation procedure}
\label{secapprox}
This section is devoted to the description of the approximation
method. In order to save space we will not present explicit
formulae. They look very similar to the ones for the vector case
discussed in
\cite{CheKueSte96}.
The treatment of the abelian part of the pseudo-scalar correlator is in
close analogy to \cite{BaiBro95}.
In a first step a function $\tilde{\Pi}^\delta(q^2)$ is constructed which
contains no high energy singularities and no logarithmic
terms at threshold. This is achieved with the help of the function
\begin{eqnarray}
G(z)=\frac{2u\ln u}{u^2-1},\,\,\,\,
&&
u=\frac{\sqrt{1-1/z}-1}{\sqrt{1-1/z}+1}.
\end{eqnarray}
The combination $(1-z)G(z)$ has a polynomial behaviour for $z\to0$
and vanishes at threshold ($z\to1$). For the case $z\to -\infty$ the
expansion of $G(z)$ develops logarithms starting with
$\ln(-1/(4z))/(2z)$. This property is exploited and a function of the
form
\begin{eqnarray}
\sum_{n,m,l}\,c_{nml}\,z^n(1-z)^m \left(G(z)\right)^l,
\end{eqnarray}
where $n$ is an integer and $m,l\ge0$, is constructed in order to remove
the $\ln(-z)$ terms of $\Pi^\delta(q^2)$. Whereas for the vector case
described in
\cite{CheKueSte96}
no cubic logarithms appear (see Tab.~\ref{tabdn}) and therefore
quadratic combinations in $G(z)$ are sufficient, for the
other three cases this is not true: The axial-vector correlator
develops $\ln^3(-z)/z$ terms and combinations like
$(1-z)^2(G(z))^3$ are required. For the scalar and pseudo-scalar case
also cubic logarithms appear which are not suppressed by powers of $z$,
whence terms like $z(1-z)^2(G(z))^3$ must appear in
the expression subtracted from $\Pi^\delta(q^2)$.
If logarithmic terms are present near threshold
($\Pi_{NA}^{(2),\delta}$ and $\Pi_{l}^{(2),\delta}$)
they are first subtracted and then the high energy singularities
are removed. For the abelian polarization function where either
$1/v$ singularities
($\Pi_{A}^{(2),v}$ and $\Pi_{A}^{(2),p}$)
or just constant terms
($\Pi_{A}^{(2),a}$ and $\Pi_{A}^{(2),s}$)
are present for $z\to1$
only the high energy logarithms are removed.
In a second step we perform a variable change. Via the
conformal mapping
\begin{eqnarray}
\omega = \frac{1-\sqrt{1-q^2/4m^2}}{1+\sqrt{1-q^2/4m^2}},\,\,\,\,
&&
z = \frac{q^2}{4m^2} = \frac{4\omega}{(1+\omega)^2},
\label{omega}
\end{eqnarray}
the complex $q^2$-plane is mapped into the interior of the unit circle
and the upper (lower) part of the cut starting at $z=1$ is mapped
onto the upper (lower) perimeter of the circle.
The special points
$q^2=0,4m^2,-\infty$ correspond to $\omega=0,1,-1$, respectively.
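These properties of the mapping are easily verified numerically (helper names ours): $\omega(0)=0$, $\omega(1)=1$, $\omega\to-1$ for $z\to-\infty$, and points off the real axis land inside the unit circle.

```python
import cmath

def omega_of_z(z):
    """Conformal map omega = (1 - sqrt(1 - z)) / (1 + sqrt(1 - z))."""
    r = cmath.sqrt(1.0 - z)
    return (1.0 - r) / (1.0 + r)

def z_of_omega(w):
    """Inverse map z = 4*omega / (1 + omega)^2."""
    return 4.0 * w / (1.0 + w) ** 2
```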
In this new variable we construct a function $P(\omega)$
for which the Pad\'e approximation is performed. According to
the different behaviour of $\tilde{\Pi}^\delta(q^2)$ near threshold,
two different functions have to be defined:
\begin{eqnarray}
P^{I}(\omega)&=&\frac{1-\omega}{(1+\omega)^2}\left(
\tilde{\Pi}(q^2) - \tilde{\Pi}(-\infty)
\right),
\\
P^{II}(\omega)&=&\frac{1}{(1+\omega)^2}\left(
\tilde{\Pi}(q^2) - \tilde{\Pi}(-\infty)
\right).
\end{eqnarray}
Thereby $P^{I}(\omega)$ takes care of the cases where a
$1/v$ singularity is present
($\Pi_{A}^{(2),v}$ and $\Pi_{A}^{(2),p}$).
The factor $(1-\omega)$ corresponds effectively to a multiplication with $v$.
At this point we should mention that in order to incorporate also the
constant term at threshold
which has its origin in the correction factor introduced
before Eq.~(\ref{eqAthr})
the combinations
$\Pi_{A}^{(2),v}+4\Pi^{(1),v}$
and
$\Pi_{A}^{(2),p}+3\Pi^{(1),p}$
are considered.
The two-loop results
$\Pi^{(1),v}$ and $\Pi^{(1),p}$ may be found in
\cite{pi1v}
and
\cite{pi1p},
respectively.
$P^{II}(\omega)$ treats all other cases where
$\tilde{\Pi}^\delta(q^2)$ is just a constant for $z=1$. Note
that this constant is unknown and consequently $P^{II}(1)$
may not be used for the construction of the Pad\'e approximation.
$P^{I}(1)$ is directly connected with the $1/v$ singularity
and, of course, known.
The high energy terms are treated in the same way for
$P^{I}(\omega)$ and
$P^{II}(\omega)$: Due to the subtraction of $\tilde{\Pi}^\delta(-\infty)$
the constant terms transform to $P(0)$ and the difference together
with the prefactor $1/(1+\omega)^2$ projects out the $1/z$ suppressed
terms in the limit $\omega\to -1$.
Finally the moments from $z\to0$ transform into derivatives of
$P(\omega)$ at $\omega=0$.
In total the following information is
available for $P^{I}(\omega)$:
$\{P^{I}(-1),P^{I}(0),P^{I,(1)}(0),\ldots,P^{I,(8)}(0),P^{I}(1)\}$.
These eleven data points allow the construction of
Pad\'e approximations like $[5/5]$, $[6/4]$ or $[4/6]$.
For $P^{II}(\omega)$ the threshold information $P^{II}(1)$ is not
available which means that at most Pad\'e approximations like
$[5/4]$ or $[4/5]$ may be constructed.
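The underlying construction is the standard one: given Taylor coefficients $c_0,\dots,c_{m+n}$, the $[m/n]$ Pad\'e approximant $p(\omega)/q(\omega)$ with $q(0)=1$ is fixed by a linear system for the denominator coefficients. A generic sketch in exact rational arithmetic (not the actual implementation used here, which also matches the value at $\omega=-1$ and, for $P^{I}$, at $\omega=1$):

```python
from fractions import Fraction

def _solve(A, rhs):
    """Gauss-Jordan elimination over Fractions; pivots on the first nonzero entry."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c_0..c_{m+n}.
    Returns (a, b): numerator coeffs a_0..a_m and denominator coeffs b_0..b_n
    with b_0 = 1, fixed by requiring f*q - p = O(x^{m+n+1})."""
    c = [Fraction(x) for x in c]
    get = lambda k: c[k] if k >= 0 else Fraction(0)
    # Linear system: sum_{j=1}^{n} b_j c_{k-j} = -c_k  for k = m+1, ..., m+n.
    A = [[get(k - j) for j in range(1, n + 1)] for k in range(m + 1, m + n + 1)]
    rhs = [-c[k] for k in range(m + 1, m + n + 1)]
    b = [Fraction(1)] + _solve(A, rhs)
    a = [sum(b[j] * get(i - j) for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b
```

For the exponential series this reproduces the classical $[1/1]$ approximant $(1+x/2)/(1-x/2)$.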
For the non-abelian contributions proportional to $C_AC_F$ there is
an alternative approach.
Following the method outlined in
\cite{CheHoaKueSteTeu96}
the imaginary part of the gluonic double-bubble contributions,
$R_g^{(2),\delta}(s)$,
can be computed analytically from the knowledge of the
fermionic contribution
$R_{l}^{(2),\delta}(s)$.
$R_g^{(2),\delta}(s)$
is, of course, gauge dependent. However, for
the special choice $\xi=4$, where $\xi$ is defined via the
gluon propagator
$(-\,g^{\mu\nu} + \xi\,q^\mu\,q^\nu/q^2)/(q^2+i\epsilon)$
the threshold behaviour of the non-abelian contribution
(see Eqs.~(\ref{eqthrvna}-\ref{eqthrpna}))
and the leading high energy logarithms are covered by
$R_g^{(2),\delta}(s)$.
Therefore it is promising to apply the procedure described above
to the difference
$\Pi_{NA}^{(2),\delta}(q^2) - \Pi_g^{(2),\delta}(q^2)|_{\xi=4}$
which has a less singular behaviour than
$\Pi_{NA}^{(2),\delta}(q^2)$.
The results for the non-abelian contribution presented in the next
section are based on this method.
\section{Results}
\label{secresults}
After the Pad\'e approximation is performed for the function
$P(\omega)$ the corresponding equations are inverted in order to
get $\Pi^\delta(q^2)$.
In Figs.~\ref{figvpv}--\ref{figasx} the results are presented
grouped according to the threshold behaviour.
In Figs.~\ref{figvpv} and \ref{figasv}, $R(s)$ is plotted against
the velocity for the vector and pseudo-scalar and
the axial-vector and scalar case, respectively.
The Figs.~\ref{figvpx} and \ref{figasx} contain the corrections
plotted versus $x$.
Also the threshold and high energy approximations are shown
(dashed lines).
For the vector correlator terms up to ${\cal O}(x^{12})$ are
available
\cite{CheHarKueSte97}.
Although in our procedure only terms up to
${\cal O}(x^2)$ are incorporated the higher order terms
are very well reproduced. For the scalar and pseudo-scalar
polarization function terms up to ${\cal O}(x^8)$
are available
\cite{HarSte97}.
Again only the quadratic terms are built into the approximation method.
However, the numerical coincidence with the high energy approximations
is very good. Actually it is hardly possible to detect a difference
between the high energy terms and the Pad\'e results when $x$ is used as
abscissa.
A similar behaviour is observed for the axial-vector case where only quartic
terms are available
\cite{CheKue94}.
In this presentation it is not possible to notice any difference between
the different Pad\'e approximants. Minor differences can be seen
after the leading terms at threshold are subtracted.
This can be seen in Figs.~\ref{figvpvsub} and \ref{figasvsub}. It
should be stressed that
the vertical scale is expanded by up to
a factor 100 in comparison with
Figs.~\ref{figvpv} and \ref{figasv}.
The following notation is adopted: All Pad\'e approximations containing
information up to $C_6$ are plotted as a dashed line and the higher ones
as full lines. The obvious exceptions are represented by a dash-dotted
line and the exact results are drawn as dotted curves.
The vector case is already discussed in \cite{CheKueSte96}.
The inclusion of $C_8$ into the analysis shows a further
stabilization of the results. The plot for the
abelian part in Fig.~\ref{figvpvsub}
contains altogether 14 Pad\'e approximations. Eight of them
contain information up to $C_6$ (dashed lines), and six contain also
information from $C_7$ and $C_8$.
These latter six lines coincide even on the expanded scale.
The dash-dotted curve belongs to the $[2/5]$ result and contains a pole
for $\omega\approx1.06$.
In the case of the non-abelian contribution 15 Pad\'e
approximations are plotted. Again the dashed lines belong to
the lower order results. The dash-dotted curves are the results of two
Pad\'e approximants which have poles close to $\omega=1$
($[4/3]: \omega\approx1.07$ and $[2/5]: \omega\approx1.06$).
For the pseudo-scalar correlator we find similar results
concerning the behaviour of the Pad\'e approximations when
more information is included.
The abelian contribution for the pseudo-scalar case contains
17 different results. The dash-dotted lines differ from the
remaining ones significantly. The corresponding Pad\'e approximants
are $[3/2]$ and $[2/5]$.
A spread between the different
Pad\'e approximants can also be observed for the non-abelian contribution
to the vector and pseudo-scalar cases. In both cases, however,
convergence is visible if more information is included
into the construction procedure. In the plot for
$\delta R^{(2),p}_{NA}(s)$, e.g., the dash-dotted line corresponds to the
Pad\'e approximation $[2/3]$ containing only the first three
moments for $q^2\to0$.
If this curve is ignored the spread is much less dramatic and the
difference between the remaining Pad\'e approximations shown is
very tiny and completely negligible.
The excellent agreement for the fermionic contribution
in the pseudo-scalar case is
comparable to the one for the vector correlator. In both cases the exact
results
\cite{HoaTeu96},
plotted as dotted lines,
are indistinguishable from the approximations.
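Throughout this discussion, Padé approximants $[m/n]$ are built from a fixed number of Taylor coefficients. As a hedged illustration only (a minimal textbook construction, not the authors' actual procedure), an $[m/n]$ approximant can be obtained from Taylor coefficients by solving a small linear system for the denominator:

```python
import numpy as np

def pade(coeffs, m, n):
    """Construct the [m/n] Pade approximant of a series
    c[0] + c[1]*x + ...  (requires at least m+n+1 coefficients).
    Returns numerator and denominator coefficient arrays, with b[0] = 1."""
    c = np.asarray(coeffs, dtype=float)
    # Denominator coefficients b[1..n] from the linear conditions
    #   sum_{j=0}^{n} b[j] * c[m+k-j] = 0   for k = 1..n,  b[0] = 1
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients: a[i] = sum_{j<=min(i,n)} b[j] * c[i-j]
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b
```

For example, the $[1/1]$ approximant of $e^x$ from the coefficients $(1,1,1/2)$ is $(1+x/2)/(1-x/2)$.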
\begin{table}[t]
{\footnotesize
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{|l|r||l|r|}
\hline
&&&\\[-4mm]
P.A. & $R^{(2),a}_A$ & P.A. & $R^{(2),s}_A$ \\
\hline
\hline
$[3/2]$ & 29.56 & $[3/3]$ & 31.31\\
$[2/3]$ & 29.59 & $[4/2]$ & 31.31\\
$[3/3]$ & 29.64 & $[2/4]$ & 31.44\\
$[4/2]$ & 29.69 & $[3/4]$ & 31.38\\
$[2/4]$ & 29.63 & $[5/2]$ & 31.39\\
$[2/5]$ & 29.77 & $[2/5]$ & 31.39\\
$[3/5]$ & 29.96 & $[6/3]$ & 31.22\\
$[6/2]$ & 30.14 & $[3/6]$ & 31.28\\
$[2/6]$ & 29.82 &&\\
$[5/4]$ & 31.23 &&\\
$[4/5]$ & 31.71 &&\\
$[6/3]$ & 32.55 &&\\
\hline
\hline
exact & 31.75 & exact & 31.75 \\
\hline
\end{tabular}
\end{center}
}
\caption{\label{tabthras}
Comparison of the leading term at threshold for $R^{(2),\delta}_A$
$(\delta=a,s)$ with the exact expression. All Pad\'e approximants (P.A.)
contain two terms from the high energy region as input. Only the
number of moments from the expansion $q^2\to 0$ is different.}
\end{table}
Coming to the axial-vector and scalar case we would like to remind
the reader that for these correlators
no singularities are present in the limit $v\to0$.
However, here too it is instructive to subtract the leading terms
and look more closely at the remainders $\delta R^{(2),a}$ and $\delta R^{(2),s}$.
As can be seen by comparing Fig.~\ref{figasv}
and Fig.~\ref{figasvsub} the reduction in the scale lies between
a factor two and ten.
Also the very smooth behaviour of the subtracted results
near threshold is clearly visible. One recognizes that,
e.g. the remainders of the non-abelian and light-fermion
contributions shown in Fig.~\ref{figasvsub} are zero
almost up to $v\approx 0.2$.
Let us now consider the threshold behaviour of the abelian contribution.
As mentioned in Section~\ref{seckinreg}
the corrections start with a term linear in $v$.
We are now in the position to compare the Pad\'e results with
the exact expressions.
In Tab.~\ref{tabthras} the numerical values of the coefficients
of both the expansion and the exact result are shown.
Although the analytically known terms are not incorporated into
the approximation method they are very well reproduced
by our method.
In the abelian part of the scalar correlator there are two
Pad\'e approximants which differ significantly (dash-dotted lines)
from the other eight results.
One of them is a low-order Pad\'e approximation containing only
the information up to $C_2$ and the other one ($[4/3]$)
has a pole close to $\omega=1$ $(1.02)$ which is reflected in the
enhancement in the vicinity of the threshold.
Both the non-abelian and fermionic contributions show an excellent
agreement between the different Pad\'e approximations. We
should mention that at least 14 approximations are plotted and for
$R_l^{(2),a}(s)$ and $R_l^{(2),s}(s)$ in addition the exact results are
included. Again no differences are visible.
Finally we present handy approximation formulae for the
abelian and non-abelian contributions. The procedure
used to get them is described in \cite{CheKueSte96}.
There the approximation formulae for the vector case are already
listed. For completeness we repeat them at this point:
\begin{eqnarray}
R_A^{(2),v} &=& \frac{(1-v^2)^4}{v}\frac{3\pi^4}{8}
- 4 R^{(1),v}
+v\frac{2619}{64}-v^3\frac{2061}{64}
+\frac{81}{8}\left(1-v^2\right)\ln\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
-198\left(\frac{m^2}{s}\right)^{3/2} \left(v^4-2v^2\right)^6
\nonumber\\
&&\mbox{}
+100 p^{3/2} (1-p) \left[
2.21\, P_0(p)
-1.57\, P_1(p)
+0.27\, P_2(p)
\right],
\label{appfora}
\\
R_{NA}^{(2),v} &=& R_{g}^{(2),v}\Big|_{\xi=4}
+ v\frac{351}{32} - v^3\frac{297}{32}
\nonumber\\
&&\mbox{}
-18\left(\frac{m^2}{s}\right)^{3/2} \left(v^4-2v^2\right)^4
\nonumber\\
&&\mbox{}
+50 p^{3/2} (1-p) \left[
1.73\, P_0(p)
-1.24\, P_1(p)
+0.64\, P_2(p)
\right],
\label{appfornaxi}
\\[4mm]
R_A^{(2),a} &=&
-\frac{585}{32}v
+ 18 v^3
+ \left(\left(27 v - 27 v^3\right)\left(1-\ln2\right)\right)\zeta(2)
+ \left(\frac{135}{8}\left(v - v^3\right)\right)\zeta(3)
\nonumber\\
&&\mbox{}
+ \left(-\frac{189}{32}\left(1-v^2\right)\right)\ln\frac{1-v}{1+v}
+ \left(-\frac{81}{16}\left(v - v^3\right)\right)\ln^2\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
+50 p^{3/2} (1-p) \left[
11.97\, P_0(p)
-25.37\, P_1(p)
+20.67\, P_2(p)
\right.
\nonumber\\
&&
\left.
\qquad
\mbox{}
-9.048\, P_3(p)
+1.85\, P_4(p)
\right],
\\
R_{NA}^{(2),a} &=& R_{g}^{(2),a}\Big|_{\xi=4}
-\frac{9}{16}v
+\frac{9}{4}v^3
+\left(\left(-\frac{135}{8}
+\frac{27}{2}\ln2\right)\left(v-v^3\right)\right)\zeta(2)
\nonumber\\
&&\mbox{}
+ \left(\frac{27}{16}\left(v-v^3\right)\right)\zeta(3)
+\left(-\frac{135}{16}\left(1-v^2\right)\right)\ln\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
+50 p^{3/2} (1-p) \left[
-1.88 P_0(p)
+3.31 P_1(p)
-1.96 P_2(p)
+0.483 P_3(p)
\right],
\\[4mm]
R_A^{(2),s} &=&
-\frac{1125}{64}v + \frac{1779}{64}v^3
+ \left(\frac{189}{4}v
-\frac{261}{4}v^3
+\left(-27v
+45v^3
\right)\ln2
\right) \zeta(2)
\nonumber\\
&&\mbox{}
+\left(\frac{189}{8}v - \frac{279}{8}v^3\right)\zeta(3)
+\left(-\frac{63}{8} + \frac{297}{16}v^2\right)\ln\frac{1 - v}{1 + v}
\nonumber\\
&&\mbox{}
+\left(-\frac{243}{16}v + \frac{297}{16}v^3\right)\ln^2\frac{1 - v}{1 + v}
\nonumber\\
&&\mbox{}
+ 240\left(\frac{m^2}{s}\right)^{3/2}\left(v^4-2v^2\right)^4
\nonumber\\
&&\mbox{}
+ 50p^{3/2}(1 - p)
\left[
1.30\,P_0(p)
-4.37\,P_1(p)
+3.58\,P_2(p)
-0.91\,P_3(p)
\right],
\\
R_{NA}^{(2),s} &=& R_{g}^{(2),s}\Big|_{\xi=4}
+ \frac{99}{32}v + \frac{147}{32}v^3
+ \left(-\frac{135}{8}v + \frac{225}{8}v^3
+\left(\frac{27}{2}v - \frac{45}{2}v^3\right)\ln2\right)\zeta(2)
\nonumber\\
&&\mbox{}
+ \left(-\frac{27}{16}v + \frac{9}{16}v^3\right)\zeta(3)
+ \left(-\frac{45}{4} + \frac{135}{8}v^2\right)\ln\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
+ 50p^{3/2}(1 - p)
\left[
-3.94\,P_0(p)
+6.97\,P_1(p)
-4.11\,P_2(p)
+1.00\,P_3(p)
\right],
\\[4mm]
R_A^{(2),p} &=&
+ \frac{(1-v^2)^4}{v}\frac{\pi^4}{4}
- 3 R^{(1),p}
+ \frac{2763}{64}v
- \frac{813}{64}v^3
+ \left(-\frac{9}{4}v - \frac{63}{4}v^3
\right.
\nonumber\\
&&
\left.
\mbox{}
+\left(9v + 9v^3\right)\ln2\right)\zeta(2)
+ \left(-\frac{27}{8}v - \frac{63}{8}v^3\right)\zeta(3)
\nonumber\\
&&\mbox{}
+ \left(\frac{81}{4} + \frac{63}{16}v^2\right)\ln\frac{1-v}{1+v}
+ \left(-\frac{27}{16}v + \frac{81}{16}v^3\right)\ln^2\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
-300 \left(\frac{m^2}{s}\right)^{3/2}\left(v^4-2v^2\right)^6
\nonumber\\
&&\mbox{}
+ 100p^{3/2}(1 - p)
\left[
2.79\,P_0(p)
-1.83\,P_1(p)
+0.419\,P_2(p)
\right],
\\
R_{NA}^{(2),p} &=& R_{g}^{(2),p}\Big|_{\xi=4}
+ \frac{459}{32}v
- \frac{213}{32}v^3
+ \left(\frac{45}{8}-\frac{9}{2}\ln2\right)
\left(v + v^3\right)\zeta(2)
\nonumber\\
&&\mbox{}
+ \left(-\frac{27}{16}v + \frac{9}{16}v^3\right)\zeta(3)
+ \frac{45}{8}v^2\ln\frac{1-v}{1+v}
\nonumber\\
&&\mbox{}
+ 12\left(\frac{m^2}{s}\right)^{3/2}\left(v^4-2v^2\right)^4
\nonumber\\
&&\mbox{}
+ 50p^{3/2}(1 - p)
\left[
0.354\,P_0(p)
-0.251\,P_1(p)
+0.456\,P_2(p)
\right],
\end{eqnarray}
where $p=(1-v)/(1+v)$, $\zeta(2)=\pi^2/6$, $\zeta(3)\approx1.20206$
and $P_i(p)$ are the Legendre polynomials:
\begin{eqnarray}
&&
P_0(p)=1,\,\,\, P_1(p)=p,\,\,\, P_2(p)=-\frac{1}{2}+\frac{3}{2}p^2,
\\
&&
P_3(p)=-\frac{3}{2}p+\frac{5}{2}p^3,
\,\,\,
P_4(p)=\frac{3}{8}-\frac{15}{4}p^2+\frac{35}{8}p^4.
\end{eqnarray}
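The explicit forms listed above can be cross-checked numerically. The following sketch (using numpy purely as an illustration; it is not part of the paper's toolchain) compares them with a standard Legendre implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def P(i, p):
    """The explicit Legendre polynomials P_0..P_4 as quoted in the text."""
    return [1.0,
            p,
            -0.5 + 1.5 * p**2,
            -1.5 * p + 2.5 * p**3,
            0.375 - 3.75 * p**2 + 4.375 * p**4][i]

# Cross-check against numpy's Legendre series evaluation
for i in range(5):
    for p in np.linspace(-1.0, 1.0, 7):
        ref = legendre.legval(p, [0.0] * i + [1.0])
        assert abs(P(i, p) - ref) < 1e-12
```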
$R_{g}^{(2),\delta}$ is the exact result for the
gluonic double-bubble to be reconstructed from
the fermionic contribution
\cite{CheHoaKueSteTeu96}:
\begin{eqnarray}
R_g^{(2),\delta}\Big|_{\xi=4} &=&
-\frac{11}{4} R_l^{(2),\delta} - \frac{2}{3} R^{(1),\delta}.
\end{eqnarray}
For some cases the degree of the polynomial used for the fit has to be
increased in order to end up with reasonable approximations.
The first lines of the result contain
the exactly known high energy and threshold contributions.
The subsequent lines represent the numerically small remainder,
$R_x^{(2),\delta,rem}$ with $x\in\{A,NA\}$,
which is plotted in Fig.~\ref{figappr} together with
the result from the Pad\'e approximation.
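As an illustration of how such a remainder can be evaluated, the sketch below codes the fitted part of Eq.~(\ref{appfora}) for the abelian vector case, i.e. the $(m^2/s)^{3/2}$ term plus the Legendre fit. It assumes the standard relation $m^2/s=(1-v^2)/4$ and is a hedged transcription for illustration, not part of the paper's numerics:

```python
def legendre_P(i, p):
    """Legendre polynomials P_0..P_2 as given in the text."""
    return [1.0, p, -0.5 + 1.5 * p * p][i]

def rem_A_vector(v):
    """Fitted remainder of R_A^{(2),v}: the (m^2/s)^{3/2} term plus the
    Legendre fit from Eq. (appfora), with p = (1-v)/(1+v) and the
    assumed kinematic relation m^2/s = (1-v^2)/4."""
    p = (1.0 - v) / (1.0 + v)
    m2s = (1.0 - v * v) / 4.0
    fit = 100.0 * p**1.5 * (1.0 - p) * (2.21 * legendre_P(0, p)
                                        - 1.57 * legendre_P(1, p)
                                        + 0.27 * legendre_P(2, p))
    return -198.0 * m2s**1.5 * (v**4 - 2.0 * v * v)**6 + fit
```

By construction the remainder vanishes at both endpoints: at $v=1$ because $p=0$ and $m^2/s=0$, and at $v=0$ because $1-p=0$ and $v^4-2v^2=0$.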
\section{\label{seccon}Conclusions and summary}
The vacuum polarization function has been evaluated in order
$\alpha_s^2$ for vector, axial-vector, scalar and pseudo-scalar currents.
The results take full account of the quark mass and are applicable between
the production threshold and the high energy region. The method is based
on the Pad\'e approximation and uses the leading terms at high energies
and at threshold plus the lowest eight coefficients of the Taylor series of
$\Pi^\delta(q^2)$ around zero. The stability of this approximation has been
verified and excellent agreement between the present result and the
predictions based on the high energy expansion is observed. These results
can be used to evaluate the cross section for top pair production in
electron positron annihilation through the vector and axial-vector
current and the decay rate of a scalar or pseudo-scalar Higgs boson
into top quarks in the full kinematical region and in next-to-leading order.
\vspace{1em}
\centerline{\bf Acknowledgments}
\smallskip\noindent
We would like to thank A.H. Hoang and T. Teubner for interesting
discussions and for providing us with the analytic results
for $R_l^{(2),\delta}$ prior to publication which were
crucial for our tests of the approximation methods.
\noindent
\vspace{5ex}
\noindent
{\Large \bf Appendix}
\renewcommand {\theequation}{\Alph{section}.\arabic{equation}}
\begin{appendix}
\setcounter{equation}{0}
\section{\label{appmsbar}$\overline{\mbox{MS}}$ definition of the
polarization functions}
In this appendix we present the missing pieces needed to express
the polarization functions, $\Pi^\delta(q^2)$, in the
$\overline{\mbox{MS}}$ scheme which means that in the expression
obtained after the renormalization of $\alpha_s$ and $m$
only the poles are subtracted.
Expressing the results still in terms of the on-shell mass, $m$,
$\bar{\Pi}^\delta(q^2)$ reads:
\begin{eqnarray}
\bar{\Pi}^\delta(q^2) &=& \frac{3}{16\pi^2}
\left[
\bar{C}^\delta_{-1}\frac{1}{z} + \bar{C}^\delta_0
\right]
+\Pi^\delta(q^2),
\end{eqnarray}
where the bar only refers to the overall renormalization. For the
different cases we get
($\bar{C}^v_0$ is already listed in \cite{CheKueSte96}):
\begin{eqnarray}
\bar{C}^v_0 &=&
\frac{4}{3} L_{\mu m} +
\frac{\alpha_s}{\pi} C_F
\left(\frac{15}{4} + L_{\mu m}
\right)
\nonumber\\&&\mbox{}
+
\left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[
C_F^2
\left(
\frac{77}{144}
-\frac{1}{8} L_{\mu m}
+ (5 - 8 \ln2) \zeta(2)
+ \frac{1}{48} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+
C_F C_A
\left(
\frac{14977}{2592}
+ \frac{157}{36} L_{\mu m}
+ \frac{11}{24} L_{\mu m}^2
+ (-\frac{4}{3} + 4 \ln2) \zeta(2)
+ \frac{127}{96} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+
C_F T n_l
\left(
-\frac{917}{648}
-\frac{14}{9} L_{\mu m}
-\frac{1}{6} L_{\mu m}^2
-\frac{4}{3} \zeta(2)
\right)
\nonumber\\&&\mbox{}
+
C_F T
\left(
-\frac{695}{162}
-\frac{14}{9} L_{\mu m}
-\frac{1}{6} L_{\mu m}^2
+ \frac{8}{3} \zeta(2)
+ \frac{7}{16} \zeta(3)
\right)
\Bigg]
,\\
\bar{C}^v_{-1} &=& 0,
\\
\bar{C}^a_0 &=&
-\frac{4}{3}
+ \frac{4}{3} L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F \left(\frac{67}{36}
+ L_{\mu m}\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2 \left(\frac{131}{54}
-\frac{1}{8} L_{\mu m}
+ (5 - 8 \ln2) \zeta(2)
+ \frac{115}{288} \zeta(3)\right)
\nonumber\\&&\mbox{}
+ C_F C_A
\left(
\frac{4081}{432}
+ \frac{71}{27} L_{\mu m}
+ \frac{11}{24} L_{\mu m}^2
+ (-\frac{4}{3} + 4 \ln2) \zeta(2)
-\frac{883}{576} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l \left(
-\frac{149}{72}
-\frac{25}{27} L_{\mu m}
-\frac{1}{6} L_{\mu m}^2
-\frac{4}{3} \zeta(2)\right)
\nonumber\\&&\mbox{}
+ C_F T \left(
-\frac{55}{12}
-\frac{25}{27} L_{\mu m}
-\frac{1}{6} L_{\mu m}^2
+ \frac{8}{3} \zeta(2)
-\frac{7}{48} \zeta(3)\right)
\Bigg]
,\\
\bar{C}^a_{-1} &=&
-2 L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F
\left(
-\frac{33}{8}
+ \frac{3}{2} L_{\mu m}
+ \frac{3}{2} L_{\mu m}^2
\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2
\left(
\frac{529}{64}
-\frac{3}{2} B_4
+ \frac{13}{2} L_{\mu m}
-\frac{15}{16} L_{\mu m}^2
-\frac{3}{4} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(-\frac{15}{2} + 12 \ln2\right)
\left(1- L_{\mu m}\right) \zeta(2)
+ \left(-12 -\frac{3}{2} L_{\mu m}\right) \zeta(3)
+ \frac{27}{4} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_FC_A
\left(
-\frac{1039}{96}
+ \frac{3}{4} B_4
+ \frac{143}{48} L_{\mu m}
+ \frac{109}{24} L_{\mu m}^2
+ \frac{11}{12} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(2 - 6 \ln2\right)\left(1 - L_{\mu m}\right) \zeta(2)
+ \left(\frac{23}{6} + \frac{3}{4} L_{\mu m}\right) \zeta(3)
-\frac{27}{8} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l
\left(
\frac{35}{12}
-\frac{7}{12} L_{\mu m}
-\frac{4}{3} L_{\mu m}^2
-\frac{1}{3} L_{\mu m}^3
+ \left(2 - 2 L_{\mu m}\right) \zeta(2)
+ \frac{4}{3} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T
\left(
\frac{59}{12}
-\frac{43}{12} L_{\mu m}
-\frac{4}{3} L_{\mu m}^2
-\frac{1}{3} L_{\mu m}^3
+ \left(-4 + 4 L_{\mu m} \right) \zeta(2)
+ \frac{7}{12} \zeta(3)
\right)
\Bigg]
,\\
\bar{C}^s_0 &=&
-\frac{4}{3}
+ 2 L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F
\left(
\frac{41}{8}
-\frac{3}{2} L_{\mu m}
-\frac{3}{2} L_{\mu m}^2
\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2
\left(
-\frac{505}{64}
+\frac{3}{2} B_4
-\frac{13}{2} L_{\mu m}
+ \frac{15}{16} L_{\mu m}^2
+ \frac{3}{4} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(\frac{25}{2} - 20 \ln2 \right)
\left(1 - \frac{3}{5} L_{\mu m}\right) \zeta(2)
+ \left(\frac{93}{8}+ \frac{3}{2} L_{\mu m}\right) \zeta(3)
-\frac{27}{4} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F C_A
\left(
\frac{5429}{288}
-\frac{3}{4}B_4
-\frac{33}{16} L_{\mu m}
-\frac{109}{24} L_{\mu m}^2
-\frac{11}{12} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(-\frac{10}{3} + 10 \ln2 \right)
\left(1 -\frac{3}{5} L_{\mu m}\right) \zeta(2)
+ \left(-\frac{175}{48} -\frac{3}{4} L_{\mu m}\right)\zeta(3)
+ \frac{27}{8} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l
\left(
-\frac{191}{36}
+ \frac{1}{4} L_{\mu m}
+ \frac{4}{3} L_{\mu m}^2
+ \frac{1}{3} L_{\mu m}^3
+ \left(-\frac{10}{3}+ 2 L_{\mu m}\right) \zeta(2)
-\frac{4}{3} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T
\left(
-\frac{733}{72}
+ \frac{13}{4} L_{\mu m}
+ \frac{4}{3} L_{\mu m}^2
+ \frac{1}{3} L_{\mu m}^3
+ \left(\frac{20}{3} - 4 L_{\mu m} \right) \zeta(2)
-\frac{49}{48} \zeta(3)
\right)
\Bigg]
,\nonumber\\
\\
\bar{C}^s_{-1} &=&
-1 - 3 L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F
\left(
-\frac{9}{2}
+ 9 L_{\mu m}
+ \frac{9}{2} L_{\mu m}^2
\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2
\left(
\frac{1673}{64}
-3 B_4
+ \frac{45}{8} L_{\mu m}
-\frac{207}{16} L_{\mu m}^2
-\frac{9}{2} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(-\frac{15}{4} + 6\ln2\right)
\left(1 - 6 L_{\mu m} \right) \zeta(2)
- 27 \zeta(3)
+ \frac{27}{2} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_A C_F
\left(
-\frac{2641}{192}
+ \frac{3}{2} B_4
+ \frac{163}{8} L_{\mu m}
+ \frac{251}{16} L_{\mu m}^2
+ \frac{11}{4} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(1 - 3 \ln2\right)
\left(1 - 6 L_{\mu m}\right) \zeta(2)
+ 10 \zeta(3)
-\frac{27}{4} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l
\left(
\frac{161}{48}
-\frac{11}{2} L_{\mu m}
-\frac{19}{4} L_{\mu m}^2
- L_{\mu m}^3
+ \left(1 - 6 L_{\mu m}\right) \zeta(2)
+ 4 \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T
\left(
\frac{99}{16}
-\frac{23}{2} L_{\mu m}
-\frac{19}{4} L_{\mu m}^2
- L_{\mu m}^3
+ \left(-2 + 12 L_{\mu m}\right) \zeta(2)
-\frac{7}{2} \zeta(3)
\right)
\Bigg]
,\\
\bar{C}^p_0 &=&
2 L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F
\left(
\frac{33}{8}
-\frac{3}{2} L_{\mu m}
-\frac{3}{2} L_{\mu m}^2
\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2
\left(
-\frac{529}{64}
+\frac{3}{2} B_4
-\frac{13}{2} L_{\mu m}
+ \frac{15}{16} L_{\mu m}^2
+ \frac{3}{4} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(\frac{15}{2} - 12 \ln2\right)
\left(1 - L_{\mu m}\right) \zeta(2)
+ \left(12 + \frac{3}{2} L_{\mu m}\right) \zeta(3)
-\frac{27}{4} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_A C_F
\left(
\frac{1039}{96}
-\frac{3}{4} B_4
-\frac{143}{48} L_{\mu m}
-\frac{109}{24} L_{\mu m}^2
-\frac{11}{12} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(-2 + 6 \ln2 \right)\left(1 - L_{\mu m}\right) \zeta(2)
+ \left(-\frac{23}{6} - \frac{3}{4} L_{\mu m}\right) \zeta(3)
+ \frac{27}{8} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l
\left(
-\frac{35}{12}
+ \frac{7}{12} L_{\mu m}
+ \frac{4}{3} L_{\mu m}^2
+ \frac{1}{3} L_{\mu m}^3
+ \left(-2 + 2 L_{\mu m}\right) \zeta(2)
-\frac{4}{3} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T
\left(
-\frac{59}{12}
+ \frac{43}{12} L_{\mu m}
+ \frac{4}{3} L_{\mu m}^2
+ \frac{1}{3} L_{\mu m}^3
+ \left(4 - 4 L_{\mu m}\right) \zeta(2)
-\frac{7}{12} \zeta(3)
\right)
\Bigg]
,\\
\bar{C}^p_{-1} &=&
-1 - L_{\mu m}
+ \frac{\alpha_s}{\pi} C_F
\left(
-\frac{1}{2}
+ 3 L_{\mu m}
+ \frac{3}{2} L_{\mu m}^2
\right)
\nonumber\\&&\mbox{}
+ \left(\frac{\alpha_s}{\pi}\right)^2
\Bigg[ C_F^2
\left(
\frac{1409}{192}
-B_4
+ \frac{15}{8} L_{\mu m}
-\frac{69}{16} L_{\mu m}^2
-\frac{3}{2} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(\frac{15}{4} - 6 \ln2\right)
\left(1 + 2 L_{\mu m}\right) \zeta(2)
- 9 \zeta(3)
+ \frac{9}{2} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F C_A
\left(
-\frac{129}{64}
+\frac{1}{2}B_4
+ \frac{185}{24} L_{\mu m}
+ \frac{251}{48} L_{\mu m}^2
+ \frac{11}{12} L_{\mu m}^3
\right.\nonumber\\&&\left.\mbox{}\quad
+ \left(-1 + 3 \ln2\right)
\left(1 + 2 L_{\mu m}\right) \zeta(2)
+ \frac{10}{3} \zeta(3)
-\frac{9}{4} \zeta(4)
\right)
\nonumber\\&&\mbox{}
+ C_F T n_l
\left(
\frac{19}{48}
-\frac{13}{6} L_{\mu m}
-\frac{19}{12} L_{\mu m}^2
-\frac{1}{3} L_{\mu m}^3
+ \left(-1 - 2 L_{\mu m}\right) \zeta(2)
+ \frac{4}{3} \zeta(3)
\right)
\nonumber\\&&\mbox{}
+ C_F T
\left(
\frac{107}{48}
-\frac{13}{6} L_{\mu m}
-\frac{19}{12} L_{\mu m}^2
-\frac{1}{3} L_{\mu m}^3
+ \left(2 + 4 L_{\mu m}\right) \zeta(2)
-\frac{14}{3} \zeta(3)
\right)
\Bigg],
\end{eqnarray}
with $L_{\mu m}=\ln\mu^2/m^2$.
$\zeta$ is the Riemann zeta function with the values
$\zeta(2)=\pi^2/6$,
$\zeta(3)\approx1.20206$,
$\zeta(4)=\pi^4/90$,
and $B_4\approx-1.76280$ is a numerical constant typical for
three-loop tadpole integrals
\cite{Bro92}.
\setcounter{equation}{0}
\section{\label{appdn}Analytic results for $D_n^\delta$}
For completeness we present in this appendix the analytical results for
$D_0^a$ and $D_1^a$.
\begin{eqnarray}
D_0^{(0),a} &=&
\frac{32}{9} - \frac{4}{3}\ln\frac{-q^2}{m^2}
,\\
D_1^{(0),a} &=&
- 2 + 2\ln\frac{-q^2}{m^2}
,\\
D_0^{(1),a} &=&
\frac{49}{18}
- 4\zeta(3)
- \ln\frac{-q^2}{m^2}
,\\
D_1^{(1),a} &=&
- \frac{21}{4}
+ 6\zeta(3)
+ \frac{3}{2}\ln\frac{-q^2}{m^2}
- \frac{3}{2}\ln^2\frac{-q^2}{m^2}
,\\
D_0^{(2),a} &=&
C_F^2 \,\Bigg(
- {953\over 216}
+ 8\,\zeta(2)\,\ln 2
- 5\,\zeta(2)
- {1891\over 288}\,\zeta(3)
+ 10\,\zeta(5)
+ {1\over 8}\,\ln{-q^2\over m^2}
\Bigg)
\nonumber\\&&\mbox{}
+ C_F\,C_A \,\Bigg(
{19729\over 2592}
- 4\,\zeta(2)\,\ln 2
+ {4\over 3}\,\zeta(2)
+ {11\over 3}\,\zeta(3)\,\ln{-q^2\over \mu^2}
- {709\over 64}\,\zeta(3)
\nonumber\\&&\mbox{}\quad
- {5\over 3}\,\zeta(5)
+ {11\over 12}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
- {71\over 27}\,\ln{-q^2\over m^2}
- {11\over 24}\,\ln^{2}{-q^2\over m^2}
- {539\over 216}\,\ln{-q^2\over \mu^2}
\Bigg)
\nonumber\\&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {295\over 81}
+ {4\over 3}\,\zeta(2)
- {4\over 3}\,\zeta(3)\,\ln{-q^2\over \mu^2}
+ {38\over 9}\,\zeta(3)
\nonumber\\&&\mbox{}\quad
- {1\over 3}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {25\over 27}\,\ln{-q^2\over m^2}
+ {1\over 6}\,\ln^{2}{-q^2\over m^2}
+ {49\over 54}\,\ln{-q^2\over \mu^2}
\Bigg)
\nonumber\\&&\mbox{}
+ C_F\,T \,\Bigg(
- {731\over 648}
- {8\over 3}\,\zeta(2)
- {4\over 3}\,\zeta(3)\,\ln{-q^2\over \mu^2}
+ {629\over 144}\,\zeta(3)
\nonumber\\&&\mbox{}\quad
- {1\over 3}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {25\over 27}\,\ln{-q^2\over m^2}
+ {1\over 6}\,\ln^{2}{-q^2\over m^2}
+ {49\over 54}\,\ln{-q^2\over \mu^2}
\Bigg)
,\\
D_1^{(2),a} &=&
C_F^2 \,\Bigg(
{111\over 32}
+ 12\,\zeta(2)\,\ln 2\,\ln{-q^2\over m^2}
- 24\,\zeta(2)\,\ln 2
- {15\over 2}\,\zeta(2)\,\ln{-q^2\over m^2}
+ 15\,\zeta(2)
\nonumber\\
&&\mbox{}\quad
- {15\over 2}\,\zeta(3)\,\ln{-q^2\over m^2}
+ {87\over 4}\,\zeta(3)
- 9\,\zeta(4)
- {45\over 2}\,\zeta(5)
+ {3\over 2}\,B_4
+ {127\over 16}\,\ln{-q^2\over m^2}
\nonumber\\
&&\mbox{}\quad
- {21\over 16}\,\ln^{2}{-q^2\over m^2}
+ {3\over 4}\,\ln^{3}{-q^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {1219\over 72}
- 6\,\zeta(2)\,\ln 2\,\ln{-q^2\over m^2}
+ 12\,\zeta(2)\,\ln 2
+ 2\,\zeta(2)\,\ln{-q^2\over m^2}
- 4\,\zeta(2)
\nonumber\\
&&\mbox{}\quad
- {3\over 4}\,\zeta(3)\,\ln{-q^2\over m^2}
- {11\over 2}\,\zeta(3)\,\ln{-q^2\over \mu^2}
+ {223\over 12}\,\zeta(3)
+ {9\over 2}\,\zeta(4)
- {15\over 4}\,\zeta(5)
- {3\over 4}\,B_4
\nonumber\\
&&\mbox{}\quad
- {11\over 8}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {227\over 48}\,\ln{-q^2\over m^2}
+ {11\over 8}\,\ln^{2}{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
- {19\over 6}\,\ln^{2}{-q^2\over m^2}
\nonumber\\
&&\mbox{}\quad
- {11\over 12}\,\ln^{3}{-q^2\over m^2}
+ {77\over 16}\,\ln{-q^2\over \mu^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
{217\over 36}
+ 2\,\zeta(2)\,\ln{-q^2\over m^2}
- 4\,\zeta(2)
+ 2\,\zeta(3)\,\ln{-q^2\over \mu^2}
- {20\over 3}\,\zeta(3)
\nonumber\\
&&\mbox{}\quad
+ {1\over 2}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
- {19\over 12}\,\ln{-q^2\over m^2}
- {1\over 2}\,\ln^{2}{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {5\over 6}\,\ln^{2}{-q^2\over m^2}
\nonumber\\
&&\mbox{}\quad
+ {1\over 3}\,\ln^{3}{-q^2\over m^2}
- {7\over 4}\,\ln{-q^2\over \mu^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {155\over 36}
- 4\,\zeta(2)\,\ln{-q^2\over m^2}
+ 8\,\zeta(2)
+ 2\,\zeta(3)\,\ln{-q^2\over \mu^2}
- {23\over 12}\,\zeta(3)
\nonumber\\
&&\mbox{}\quad
+ {1\over 2}\,\ln{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {17\over 12}\,\ln{-q^2\over m^2}
- {1\over 2}\,\ln^{2}{-q^2\over m^2}\,\ln{-q^2\over \mu^2}
+ {5\over 6}\,\ln^{2}{-q^2\over m^2}
\nonumber\\
&&\mbox{}\quad
+ {1\over 3}\,\ln^{3}{-q^2\over m^2}
- {7\over 4}\,\ln{-q^2\over \mu^2}
\Bigg),
\end{eqnarray}
with $\zeta(5)\approx1.03693$.
$B_4$ appears in our results because
of the normalization condition $\Pi^a(0)=0$.
\setcounter{equation}{0}
\section{\label{appcn}Analytic results for $C_n^\delta$}
In this appendix we list the first eight
moments for $q^2\to0$ expressed in terms of the on-shell mass, $m$,
in analytic form for the four correlators.
\begin{eqnarray}
\Pi^{(0),v} &=&
\frac{3}{16\pi^2}\bigg\{
\frac{16}{15}z
+ \frac{16}{35}z^2
+ \frac{256}{945}z^3
+ \frac{128}{693}z^4
+ \frac{2048}{15015}z^5
+ \frac{2048}{19305}z^6
+ \frac{65536}{765765}z^7
\nonumber\\&&\mbox{}
+ \frac{16384}{230945}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
\Pi^{(1),v} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{328}{81}z
+ \frac{1796}{675}z^2
+ \frac{999664}{496125}z^3
+ \frac{207944}{127575}z^4
+ \frac{1729540864}{1260653625}z^5
\nonumber\\
&&\mbox{}
+ \frac{21660988864}{18261468225}z^6
+ \frac{401009026048}{383490832725}z^7
+ \frac{633021048064}{676610809875}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
C^{(2),v}_1 &=&
C_F^2 \,\Bigg(
- {8687\over 864}
- {32\over 5}\,\zeta(2)\,\ln 2
+ 4\,\zeta(2)
+ {22781\over 1728}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{127\over 192}
+ {902\over 243}\,\ln{\mu^2\over m^2}
+ {16\over 5}\,\zeta(2)\,\ln 2
- {16\over 15}\,\zeta(2)
+ {1451\over 384}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {142\over 243}
- {328\over 243}\,\ln{\mu^2\over m^2}
- {16\over 15}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {11407\over 2916}
- {328\over 243}\,\ln{\mu^2\over m^2}
+ {32\over 15}\,\zeta(2)
+ {203\over 216}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_2 &=&
C_F^2 \,\Bigg(
- {223404289\over 1866240}
- {192\over 35}\,\zeta(2)\,\ln 2
+ {24\over 7}\,\zeta(2)
+ {4857587\over 46080}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {1030213543\over 93312000}
+ {4939\over 2025}\,\ln{\mu^2\over m^2}
+ {96\over 35}\,\zeta(2)\,\ln 2
- {32\over 35}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {723515\over 55296}\,\zeta(3)
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
- {40703\over 60750}
- {1796\over 2025}\,\ln{\mu^2\over m^2}
- {32\over 35}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {1520789\over 414720}
- {1796\over 2025}\,\ln{\mu^2\over m^2}
+ {64\over 35}\,\zeta(2)
+ {14203\over 18432}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_3 &=&
C_F^2 \,\Bigg(
- {885937890461\over 1161216000}
- {512\over 105}\,\zeta(2)\,\ln 2
+ {64\over 21}\,\zeta(2)
+ {33067024499\over 51609600}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {95905830011197\over 1706987520000}
+ {2749076\over 1488375}\,\ln{\mu^2\over m^2}
+ {256\over 105}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {256\over 315}\,\zeta(2)
+ {5164056461\over 103219200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {9703588\over 17364375}
- {999664\over 1488375}\,\ln{\mu^2\over m^2}
- {256\over 315}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {83936527\over 23328000}
- {999664\over 1488375}\,\ln{\mu^2\over m^2}
+ {512\over 315}\,\zeta(2)
+ {12355\over 13824}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_4 &=&
C_F^2 \,\Bigg(
- {269240669884818833\over 61451550720000}
- {1024\over 231}\,\zeta(2)\,\ln 2
+ {640\over 231}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {1507351507033\over 412876800}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {36675392331131681\over 158018273280000}
+ {571846\over 382725}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {512\over 231}\,\zeta(2)\,\ln 2
- {512\over 693}\,\zeta(2)
+ {1455887207647\over 7431782400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {54924808\over 120558375}
- {207944\over 382725}\,\ln{\mu^2\over m^2}
- {512\over 693}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {129586264289\over 35831808000}
- {207944\over 382725}\,\ln{\mu^2\over m^2}
+ {1024\over 693}\,\zeta(2)
+ {2522821\over 2359296}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_5 &=&
C_F^2 \,\Bigg(
- {360248170450504167133\over 15209258803200000}
- {4096\over 1001}\,\zeta(2)\,\ln 2
+ {2560\over 1001}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {939939943788973\over 47687270400}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {21883348499544169357\over 23658847027200000}
\nonumber\\
&&\mbox{}
+ {432385216\over 343814625}\,\ln{\mu^2\over m^2}
+ {2048\over 1001}\,\zeta(2)\,\ln 2
- {2048\over 3003}\,\zeta(2)
+ {14724562345079\over 19074908160}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {4881989801536\over 13104494431875}
- {1729540864\over 3781960875}\,\ln{\mu^2\over m^2}
- {2048\over 3003}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {512847330943\over 139087872000}
- {1729540864\over 3781960875}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {4096\over 3003}\,\zeta(2)
+ {1239683\over 983040}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_6 &=&
C_F^2 \,\Bigg(
- {64959156551995419148501103\over 529285210649395200000}
- {8192\over 2145}\,\zeta(2)\,\ln 2
+ {1024\over 429}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {330704075360938001\over 3238841548800}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {4826864658245605658856745531\over 1317342772469012889600000}
+ {5415247216\over 4980400425}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {4096\over 2145}\,\zeta(2)\,\ln 2
- {4096\over 6435}\,\zeta(2)
+ {580922571682067161\over 190443883069440}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {151249070952032\over 493552701717075}
- {21660988864\over 54784404675}\,\ln{\mu^2\over m^2}
- {4096\over 6435}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {3411069430668887863\over 899847347503104000}
- {21660988864\over 54784404675}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {8192\over 6435}\,\zeta(2)
+ {1760922667\over 1207959552}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_7 &=&
C_F^2 \,\Bigg(
- {571365897351090627148045413923471\over
927409311818185074278400000}
- {131072\over 36465}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {16384\over 7293}\,\zeta(2)
+ {13386367971827490465799\over 26118018249523200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {7342721436809271685822267340249\over 505859624628100949606400000}
+ {100252256512\over 104588408925}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {65536\over 36465}\,\zeta(2)\,\ln 2
- {65536\over 109395}\,\zeta(2)
+ {14019414333929589373\over 1160800811089920}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {13125091764358528\over 51823033680292875}
- {401009026048\over 1150472498175}\,\ln{\mu^2\over m^2}
- {65536\over 109395}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {7927736038867601807\over 2024656531881984000}
- {401009026048\over 1150472498175}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {131072\over 109395}\,\zeta(2)
+ {4497899939\over 2717908992}\,\zeta(3)
\Bigg)
,\nonumber\\
C^{(2),v}_8 &=&
C_F^2 \,\Bigg(
- {190302182417255312898886115648452691\over
63063833203636585050931200000}
\nonumber\\
&&\mbox{}
- {786432\over 230945}\,\zeta(2)\,\ln 2
+ {98304\over 46189}\,\zeta(2)
+ {31209476560803609727258477\over 12432176686773043200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {11413196924379471880248867066065741\over
198256619379886265047449600000}
+ {393216\over 230945}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {131072\over 230945}\,\zeta(2)
+ {24302541873458280280067\over 507435783133593600}\,\zeta(3)
+ {158255262016\over 184530220875}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {65233327834094144\over 310874926094357625}
- {633021048064\over 2029832429625}\,\ln{\mu^2\over m^2}
- {131072\over 230945}\,\zeta(2)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {23818697864446985668391\over 5874203484500262912000}
- {633021048064\over 2029832429625}\,\ln{\mu^2\over m^2}
\nonumber\\
&&\mbox{}
+ {262144\over 230945}\,\zeta(2)
+ {286122897977\over 154618822656}\,\zeta(3)
\Bigg),
\end{eqnarray}
\begin{eqnarray}
\Pi^{(0),a} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{8}{15}z
+ \frac{16}{105}z^2
+ \frac{64}{945}z^3
+ \frac{128}{3465}z^4
+ \frac{1024}{45045}z^5
+ \frac{2048}{135135}z^6
+ \frac{8192}{765765}z^7
\nonumber\\
&&\mbox{}
+ \frac{16384}{2078505}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
\Pi^{(1),a} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{689}{405}z
+ \frac{3382}{4725}z^2
+ \frac{196852}{496125}z^3
+ \frac{12398216}{49116375}z^4
+ \frac{318252608}{1820944125}z^5
\nonumber\\
&&\mbox{}
+ \frac{655479040}{5113211103}z^6
+ \frac{639246915968}{6519344156325}z^7
+ \frac{38821601949952}{501368610117375}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
C^{(2),a}_1 &=&
C_F^2 \,\Bigg(
{2237369\over 51840}
- {16\over 5}\,\zeta(2)\,\ln 2
+ 2\,\zeta(2)
- {1164013\over 34560}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{3226373\over 311040}
+ {8\over 5}\,\zeta(2)\,\ln 2
- {8\over 15}\,\zeta(2)
- {494867\over 69120}\,\zeta(3)
+ {7579\over 4860}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {137\over 810}
- {8\over 15}\,\zeta(2)
- {689\over 1215}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {433669\over 186624}
+ {16\over 15}\,\zeta(2)
+ {10493\over 13824}\,\zeta(3)
- {689\over 1215}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_2 &=&
C_F^2 \,\Bigg(
{7672813249\over 26127360}
- {64\over 35}\,\zeta(2)\,\ln 2
+ {8\over 7}\,\zeta(2)
- {2349181181\over 9676800}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{47328042151\over 1306368000}
+ {32\over 35}\,\zeta(2)\,\ln 2
- {32\over 105}\,\zeta(2)
- {188251393\over 6451200}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {18601\over 28350}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
- {1097\over 6750}
- {32\over 105}\,\zeta(2)
- {3382\over 14175}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {27450553\over 17418240}
+ {64\over 105}\,\zeta(2)
+ {19579\over 36864}\,\zeta(3)
- {3382\over 14175}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_3 &=&
C_F^2 \,\Bigg(
{111399585201971\over 58525286400}
- {128\over 105}\,\zeta(2)\,\ln 2
+ {16\over 21}\,\zeta(2)
- {979995241517\over 619315200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{2930267790199843\over 20483850240000}
+ {64\over 105}\,\zeta(2)\,\ln 2
- {64\over 315}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {146653533139\over 1238630400}\,\zeta(3)
+ {541343\over 1488375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {5058122\over 52093125}
- {64\over 315}\,\zeta(2)
- {196852\over 1488375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {26031430073\over 20901888000}
+ {128\over 315}\,\zeta(2)
+ {4411519\over 8847360}\,\zeta(3)
- {196852\over 1488375}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_4 &=&
C_F^2 \,\Bigg(
{308356223383353917\over 27590492160000}
- {1024\over 1155}\,\zeta(2)\,\ln 2
+ {128\over 231}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {197037714570097\over 21194342400}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
{1275464959378469537\over 2212255825920000}
\nonumber\\
&&\mbox{}
+ {512\over 1155}\,\zeta(2)\,\ln 2
- {512\over 3465}\,\zeta(2)
- {109692872248273\over 228898897920}\,\zeta(3)
+ {3099554\over 13395375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {2710286584\over 46414974375}
- {512\over 3465}\,\zeta(2)
- {12398216\over 147349125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {731128794367\over 689762304000}
+ {1024\over 3465}\,\zeta(2)
+ {1432739\over 2949120}\,\zeta(3)
- {12398216\over 147349125}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_5 &=&
C_F^2 \,\Bigg(
{4277005531832013845390021\over 69597568283443200000}
- {2048\over 3003}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {1280\over 3003}\,\zeta(2)
- {1014170497519835231\over 19837904486400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{58119452968289341424539\over 24983742460723200000}
+ {1024\over 3003}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {1024\over 9009}\,\zeta(2)
- {2193462351270763\over 1133594542080}\,\zeta(3)
+ {79563152\over 496621125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {226047457424\over 6309571393125}
- {1024\over 9009}\,\zeta(2)
- {318252608\over 5462832375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {5461272114191\over 5796790272000}
+ {2048\over 9009}\,\zeta(2)
+ {30020447\over 62914560}\,\zeta(3)
\nonumber\\
&&\mbox{}
- {318252608\over 5462832375}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_6 &=&
C_F^2 \,\Bigg(
{359745448810293562716400230493\over 1114674653627626291200000}
- {8192\over 15015}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {1024\over 3003}\,\zeta(2)
- {23241579953084394919\over 86565401395200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{24694796807630112104602086197\over 2634685544938025779200000}
+ {4096\over 15015}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {4096\over 45045}\,\zeta(2)
- {76150305462878641\over 9766352977920}\,\zeta(3)
+ {163869760\over 1394512119}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {381648296450416\over 17274344560097625}
- {4096\over 45045}\,\zeta(2)
- {655479040\over 15339633309}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {1548825962112515819\over 1799694695006208000}
+ {8192\over 45045}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {1134854351\over 2415919104}\,\zeta(3)
- {655479040\over 15339633309}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_7 &=&
C_F^2 \,\Bigg(
{2295850065917186141074716812133631\over
1401418515636368556687360000}
- {16384\over 36465}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {2048\over 7293}\,\zeta(2)
- {2420469632151392380640363\over 1776025240967577600}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{117926764107372779240781607407103\over 3127132224973714961203200000}
+ {8192\over 36465}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {8192\over 109395}\,\zeta(2)
- {111434031814012253905813\over 3552050481935155200}\,\zeta(3)
+ {159811728992\over 1778002951725}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {2357049928630816\over 176198314512995775}
- {8192\over 109395}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {639246915968\over 19558032468975}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
- {705492229082574766543\over 881130522675039436800}
\nonumber\\
&&\mbox{}
+ {16384\over 109395}\,\zeta(2)
+ {161018056831\over 347892350976}\,\zeta(3)
- {639246915968\over 19558032468975}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),a}_8 &=&
C_F^2 \,\Bigg(
{36666382863813217294681656413671975999\over
4526581805505470438100172800000}
- {262144\over 692835}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {32768\over 138567}\,\zeta(2)
- {3890931550494737377107721691\over 577405539452348006400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{489815334595084347787765229172106365989\over
3231979409130905892803523379200000}
+ {131072\over 692835}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {131072\over 2078505}\,\zeta(2)
- {524355086420656861203887\over 4158983477447884800}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {9705400487488\over 136736893668375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {1762232386535569216\over 230358320235919000125}
- {131072\over 2078505}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {38821601949952\over 1504105830352125}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
- {84212007346306764915559\over 111609866205504995328000}
\nonumber\\
&&\mbox{}
+ {262144\over 2078505}\,\zeta(2)
+ {1058200490221\over 2319282339840}\,\zeta(3)
- {38821601949952\over 1504105830352125}\,\ln{\mu^2\over m^2}
\Bigg),
\end{eqnarray}
\begin{eqnarray}
\Pi^{(0),s} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{4}{5}z
+ \frac{8}{35}z^2
+ \frac{32}{315}z^3
+ \frac{64}{1155}z^4
+ \frac{512}{15015}z^5
+ \frac{1024}{45045}z^6
+ \frac{4096}{255255}z^7
\nonumber\\&&\mbox{}
+ \frac{8192}{692835}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
\Pi^{(1),s} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{61}{135}z
+ \frac{1223}{1575}z^2
+ \frac{86246}{165375}z^3
+ \frac{5845948}{16372125}z^4
+ \frac{155689024}{606981375}z^5
\nonumber\\
&&\mbox{}
+ \frac{1637544448}{8522018505}z^6
+ \frac{323629508032}{2173114718775}z^7
+ \frac{19824721740416}{167122870039125}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
C^{(2),s}_1 &=&
C_F^2 \,\Bigg(
{413\over 30}
- {1645\over 144}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {191\over 648}
- {59\over 32}\,\zeta(3)
+ {671\over 1620}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
{119\over 135}
- {61\over 405}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(\!
- {60559\over 77760}
+ {1435\over 1152}\,\zeta(3)
- {61\over 405}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_2 &=&
C_F^2 \,\Bigg(
{627541597\over 10886400}
- {48\over 35}\,\zeta(2)\,\ln 2
+ {6\over 7}\,\zeta(2)
- {1074607\over 23040}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{991366223\over 108864000}
+ {24\over 35}\,\zeta(2)\,\ln 2
- {8\over 35}\,\zeta(2)
- {110107\over 15360}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {13453\over 18900}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
{797\over 47250}
- {8\over 35}\,\zeta(2)
- {1223\over 4725}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {1685773\over 1161216}
+ {16\over 35}\,\zeta(2)
+ {9107\over 12288}\,\zeta(3)
- {1223\over 4725}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_3 &=&
C_F^2 \,\Bigg(
{1619371436071\over 4064256000}
- {128\over 105}\,\zeta(2)\,\ln 2
+ {16\over 21}\,\zeta(2)
- {405607027\over 1228800}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{11448730350251\over 284497920000}
+ {64\over 105}\,\zeta(2)\,\ln 2
- {64\over 315}\,\zeta(2)
- {566787803\over 17203200}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {474353\over 992250}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
- {1146421\over 17364375}
- {64\over 315}\,\zeta(2)
- {86246\over 496125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {694040519\over 497664000}
+ {128\over 315}\,\zeta(2)
+ {978439\over 1474560}\,\zeta(3)
- {86246\over 496125}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_4 &=&
C_F^2 \,\Bigg(
{147161013073070141\over 56330588160000}
- {384\over 385}\,\zeta(2)\,\ln 2
+ {48\over 77}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {224204681453\over 103219200}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
{35969257153127519\over 202790117376000}
+ {192\over 385}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {64\over 385}\,\zeta(2)
- {18221998757\over 123863040}\,\zeta(3)
+ {1461487\over 4465125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {930573962\over 15471658125}
- {64\over 385}\,\zeta(2)
- {5845948\over 49116375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {590888856583\over 459841536000}
+ {128\over 385}\,\zeta(2)
+ {1262219\over 1966080}\,\zeta(3)
- {5845948\over 49116375}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_5 &=&
C_F^2 \,\Bigg(
{45884811924398978440541\over 2899898678476800000}
- {4096\over 5005}\,\zeta(2)\,\ln 2
+ {512\over 1001}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {13283992935869\over 1009254400}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
{945084080119306598357\over 1230260045414400000}
\nonumber\\
&&\mbox{}
+ {2048\over 5005}\,\zeta(2)\,\ln 2
- {2048\over 15015}\,\zeta(2)
- {5414889135283\over 8477736960}\,\zeta(3)
+ {38922256\over 165540375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {21726270352\over 485351645625}
- {2048\over 15015}\,\zeta(2)
- {155689024\over 1820944125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {413776570931\over 347163328512}
+ {4096\over 15015}\,\zeta(2)
+ {1660607\over 2621440}\,\zeta(3)
\nonumber\\
&&\mbox{}
- {155689024\over 1820944125}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_6 &=&
C_F^2 \,\Bigg(
{397501731663152341632983791\over 4423312117569945600000}
- {2048\over 3003}\,\zeta(2)\,\ln 2
+ {1280\over 3003}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {1977406903785590041\over 26450539315200}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
{717522378440002995500293379\over 219557128744835481600000}
\nonumber\\
&&\mbox{}
+ {1024\over 3003}\,\zeta(2)\,\ln 2
- {1024\over 9009}\,\zeta(2)
- {12326391884997959\over 4534378168320}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {409386112\over 2324186865}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
- {7250780973536\over 230324594134635}
- {1024\over 9009}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {1637544448\over 25566055515}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
- {51604525307586967\over 46146017820672000}
+ {2048\over 9009}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {506059663\over 805306368}\,\zeta(3)
- {1637544448\over 25566055515}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_7 &=&
C_F^2 \,\Bigg(
{2781508068462396120688370396051\over 5724748838383858483200000}
- {49152\over 85085}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {6144\over 17017}\,\zeta(2)
- {159953731628328432443\over 395727549235200}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{19658778113043866943074074991111\over
1433268936446286023884800000}
+ {24576\over 85085}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {8192\over 85085}\,\zeta(2)
- {2207504939742233011\over 193466801848320}\,\zeta(3)
+ {80907377008\over 592667650575}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {6298337396620816\over 293663857521659625}
- {8192\over 85085}\,\zeta(2)
- {323629508032\over 6519344156325}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {111176247094824256811\over 104896490794647552000}
+ {16384\over 85085}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {36189456601\over 57982058496}\,\zeta(3)
- {323629508032\over 6519344156325}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),s}_8 &=&
C_F^2 \,\Bigg(
{90860323801590559420949997562702411\over
35925252424646590778572800000}
- {114688\over 230945}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {14336\over 46189}\,\zeta(2)
- {8719171444685991398931083\over 4144058895591014400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
{2791491385643572306216795357083768311\over
48969384986831907466720051200000}
+ {57344\over 230945}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {57344\over 692835}\,\zeta(2)
- {891255560853790732189\over 18793917893836800}\,\zeta(3)
+ {4956180435104\over 45578964556125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {216441517065785056\over 15357221349061266675}
- {57344\over 692835}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {19824721740416\over 501368610117375}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
- {7534267707657422828683\over 7440657747033666355200}
\nonumber\\
&&\mbox{}
+ {114688\over 692835}\,\zeta(2)
+ {479255846237\over 773094113280}\,\zeta(3)
- {19824721740416\over 501368610117375}\,\ln{\mu^2\over m^2}
\Bigg),
\end{eqnarray}
\begin{eqnarray}
\Pi^{(0),p} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{4}{3}z
+ \frac{8}{15}z^2
+ \frac{32}{105}z^3
+ \frac{64}{315}z^4
+ \frac{512}{3465}z^5
+ \frac{1024}{9009}z^6
+ \frac{4096}{45045}z^7
\nonumber\\&&\mbox{}
+ \frac{8192}{109395}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
\Pi^{(1),p} &=&
\frac{3}{16\pi^2}\bigg\{ \frac{7}{3}z
+ \frac{353}{135}z^2
+ \frac{10054}{4725}z^3
+ \frac{96668}{55125}z^4
+ \frac{24281408}{16372125}z^5
\nonumber\\
&&\mbox{}
+ \frac{4203369152}{3277699425}z^6
+ \frac{1781242688}{1578151575}z^7
+ \frac{312784060544}{310444959825}z^8
\bigg\}
+\ldots\,\,,
\nonumber\\
C^{(2),p}_1 &=&
C_F^2 \,\Bigg(
- {401\over 144}
+ {439\over 96}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {3385\over 864}
+ {329\over 192}\,\zeta(3)
+ {77\over 36}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
{25\over 27}
- {7\over 9}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
{7\over 27}
+ {7\over 8}\,\zeta(3)
- {7\over 9}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_2 &=&
C_F^2 \,\Bigg(
- {1100707\over 17280}
- {16\over 5}\,\zeta(2)\,\ln 2
+ 2\,\zeta(2)
+ {681359\over 11520}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {1137479\over 103680}
+ {8\over 5}\,\zeta(2)\,\ln 2
- {8\over 15}\,\zeta(2)
+ {28969\over 2560}\,\zeta(3)
+ {3883\over 1620}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {287\over 810}
- {8\over 15}\,\zeta(2)
- {353\over 405}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {98605\over 62208}
+ {16\over 15}\,\zeta(2)
+ {1253\over 4608}\,\zeta(3)
- {353\over 405}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_3 &=&
C_F^2 \,\Bigg(
- {22191983083\over 43545600}
- {128\over 35}\,\zeta(2)\,\ln 2
+ {16\over 7}\,\zeta(2)
+ {1390832179\over 3225600}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {4990621717\over 87091200}
+ {64\over 35}\,\zeta(2)\,\ln 2
- {64\over 105}\,\zeta(2)
+ {107917807\over 2150400}\,\zeta(3)
\nonumber\\
&&\mbox{}
+ {55297\over 28350}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T\,n_l \,\Bigg(
- {1687\over 3375}
- {64\over 105}\,\zeta(2)
- {10054\over 14175}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {36123823\over 17418240}
+ {128\over 105}\,\zeta(2)
+ {10045\over 36864}\,\zeta(3)
- {10054\over 14175}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_4 &=&
C_F^2 \,\Bigg(
- {329691878962513\over 97542144000}
- {128\over 35}\,\zeta(2)\,\ln 2
+ {16\over 7}\,\zeta(2)
+ {581996570819\over 206438400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {571511627867983\over 2275983360000}
+ {64\over 35}\,\zeta(2)\,\ln 2
- {64\over 105}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {17445959641\over 82575360}\,\zeta(3)
+ {265837\over 165375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {8198894\over 17364375}
- {64\over 105}\,\zeta(2)
- {96668\over 165375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {1077978107\over 464486400}
+ {128\over 105}\,\zeta(2)
+ {130123\over 327680}\,\zeta(3)
- {96668\over 165375}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_5 &=&
C_F^2 \,\Bigg(
- {9089416219983580783\over 450644705280000}
- {4096\over 1155}\,\zeta(2)\,\ln 2
+ {512\over 231}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {213469483642711\over 12716605440}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {771845002398293227\over 737418608640000}
\nonumber\\
&&\mbox{}
+ {2048\over 1155}\,\zeta(2)\,\ln 2
- {2048\over 3465}\,\zeta(2)
+ {66603161317883\over 76299632640}\,\zeta(3)
+ {6070352\over 4465125}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {6409976752\over 15471658125}
- {2048\over 3465}\,\zeta(2)
- {24281408\over 49116375}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {329953898617\over 131383296000}
+ {4096\over 3465}\,\zeta(2)
+ {2222003\over 3932160}\,\zeta(3)
\nonumber\\
&&\mbox{}
- {24281408\over 49116375}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_6 &=&
C_F^2 \,\Bigg(
- {372359772998064628281949\over 3314169918259200000}
- {10240\over 3003}\,\zeta(2)\,\ln 2
+ {6400\over 3003}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {618116373887820433\over 6612634828800}\,\zeta(3)
\Bigg)
+ C_F\,C_A \,\Bigg(
- {107211161626223001664831\over 24983742460723200000}
\nonumber\\
&&\mbox{}
+ {5120\over 3003}\,\zeta(2)\,\ln 2
- {5120\over 9009}\,\zeta(2)
+ {115687685688677\over 32388415488}\,\zeta(3)
+ {1050842288\over 893918025}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {12132112100624\over 34071685522875}
- {5120\over 9009}\,\zeta(2)
\nonumber\\
&&\mbox{}
- {4203369152\over 9833098275}\,\ln{\mu^2\over m^2}
\Bigg)
+ C_F\,T \,\Bigg(
- {2290986762786311\over 852128169984000}
+ {10240\over 9009}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {9445897\over 12582912}\,\zeta(3)
- {4203369152\over 9833098275}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_7 &=&
C_F^2 \,\Bigg(
- {221521574638803295862282113747\over 371558217875875430400000}
- {16384\over 5005}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {2048\over 1001}\,\zeta(2)
+ {1729993541168029561\over 3487983206400}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {15338320757467109893990945403\over 878228514979341926400000}
+ {8192\over 5005}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {8192\over 15015}\,\zeta(2)
+ {263558316028764511\over 18137512673280}\,\zeta(3)
+ {445310672\over 430404975}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {64850722258864\over 213263513087625}
- {8192\over 15015}\,\zeta(2)
- {1781242688\over 4734454725}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {190892441981633663\over 66655359074304000}
+ {16384\over 15015}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {253247865\over 268435456}\,\zeta(3)
- {1781242688\over 4734454725}\,\ln{\mu^2\over m^2}
\Bigg)
,\nonumber\\
C^{(2),p}_8 &=&
C_F^2 \,\Bigg(
- {1018252563630160440365157797976011\over
333671075151516323020800000}
- {114688\over 36465}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
+ {14336\over 7293}\,\zeta(2)
+ {214705361130392874134587\over 84572630522265600}\,\zeta(3)
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,C_A \,\Bigg(
- {2366402466662694875682083064373\over 33429013094957108428800000}
+ {57344\over 36465}\,\zeta(2)\,\ln 2
\nonumber\\
&&\mbox{}
- {57344\over 109395}\,\zeta(2)
+ {1106800878920761371869\over 18793917893836800}\,\zeta(3)
+ {78196015136\over 84666807225}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T\,n_l \,\Bigg(
- {10872485544378464\over 41951979645951375}
- {57344\over 109395}\,\zeta(2)
- {312784060544\over 931334879475}\,\ln{\mu^2\over m^2}
\Bigg)
\nonumber\\
&&\mbox{}
+ C_F\,T \,\Bigg(
- {4465014818406761701847\over 1468550871125065728000}
+ {114688\over 109395}\,\zeta(2)
\nonumber\\
&&\mbox{}
+ {132010901659\over 115964116992}\,\zeta(3)
- {312784060544\over 931334879475}\,\ln{\mu^2\over m^2}
\Bigg)
\end{eqnarray}
For the vector case the first seven moments were already presented in
\cite{CheKueSte96}. All other results are new.
\end{appendix}
\section{Introduction: Brackets and deformations in DFT}
Due to the extended nature of closed strings moving in a background, the field theory describing its classical dynamics is different from that of a point particle. In particular, the string can ``wind'' around compact cycles of the background manifold. This gives rise to two sets of parameters (or zero modes) characterizing the solutions to the classical equations of motion. One of them is associated to the center of mass momentum $p_i$ of the closed string and the corresponding configuration space coordinates $x^i$ span the phase space of the center of mass treated as a point particle. The second set $\tilde p^i$ is associated to the winding and gives rise to a second set of coordinates $\tilde x_i$. DFT is a field theory on this ``doubled configuration space'' which can be reduced to ordinary configuration space by using the strong constraint
\eq{
\partial_i\phi(x,\tilde x)\,\tilde \partial^i \psi(x,\tilde x) + \tilde \partial^i\phi(x,\tilde x)\,\partial_i\psi(x,\tilde x) = \;0,
}
for functions $\phi,\psi$ on the doubled configuration space. This constraint has its origin in the level matching condition for physical fields in string theory and restores the correct number of coordinates for a physical configuration space. We refer the reader especially to \cite{Hull:2009mi} and the lecture notes \cite{Zwiebachlectures} for an introduction to DFT.
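To make the strong constraint concrete, the following sketch (in Python with sympy; the particular functions are illustrative choices of ours, not taken from the DFT literature) checks it for a single doubled coordinate pair: fields depending only on $x$ satisfy it identically, while generic dependence on both $x$ and $\tilde x$ violates it.

```python
import sympy as sp

x, xt = sp.symbols('x xt')  # one doubled pair (x, tilde-x)

# phi, psi depending only on x satisfy the strong constraint identically
phi = sp.sin(x)
psi = sp.exp(x)
constraint = sp.diff(phi, x)*sp.diff(psi, xt) + sp.diff(phi, xt)*sp.diff(psi, x)
assert sp.simplify(constraint) == 0

# generic dependence on both coordinates violates it
phi2 = x*xt
psi2 = x + xt
constraint2 = sp.diff(phi2, x)*sp.diff(psi2, xt) + sp.diff(phi2, xt)*sp.diff(psi2, x)
assert sp.simplify(constraint2) != 0
```

The same check works for fields depending only on $\tilde x$, the other canonical solution of the constraint.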
\subsection{C-bracket and bilinear form}
In \cite{Hull:2009zb, Hohm:2010jy}, a Lagrangian action for DFT was formulated and gauge symmetries were identified. Due to a lack of space, we only present the results that are important for the rest of the presentation. To state the gauge symmetries, we use the notational conventions of \emph{generalized geometry}. On a $d$-dimensional manifold $M$, generalized vector fields $V$ are locally given by sections of $TM\oplus T^*M$, i.e. $V=V^i\partial_i + V_i dx^i$. To state local expressions in DFT, the components are allowed to depend on the doubled configuration space with coordinates $(x^i,\tilde x_i)$. Furthermore, one uses capital indices to denote objects transforming in the fundamental representation of $O(d,d)$, i.e. $V^M=(V^i(x,\tilde x),V_i(x,\tilde x))$, where an element $A\in O(d,d)$ obeys
\eq{
A \eta A^t =\;\eta\;, \qquad \eta = \begin{pmatrix}
0 & id \\
id & 0
\end{pmatrix}\;,
}
and $id$ is the d-dimensional identity matrix. We will denote the bilinear form represented by $\eta$ by $\langle \cdot,\cdot\rangle$. Capital indices are raised and lowered by the latter, so for generalized vectors $V,W$ we have
\eq{
\langle V,W\rangle =\;V^PW^Q \eta_{PQ} =\; V^iW_i + V_i W^i\;.
}
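These conventions are easily checked numerically. The sketch below (Python with numpy; the choice of dimension, the random vectors and the particular $B$-transform are illustrative assumptions of ours) verifies that the matrix $\eta$ reproduces the pairing $V^iW_i + V_iW^i$ and that a $B$-transform with antisymmetric $B$ is an element of $O(d,d)$.

```python
import numpy as np

d = 2
eta = np.block([[np.zeros((d, d)), np.eye(d)],
                [np.eye(d), np.zeros((d, d))]])

rng = np.random.default_rng(0)
V = rng.normal(size=2*d)   # components (V^i, V_i)
W = rng.normal(size=2*d)

# <V,W> = V^P W^Q eta_PQ equals V^i W_i + V_i W^i
lhs = V @ eta @ W
rhs = V[:d] @ W[d:] + V[d:] @ W[:d]
assert np.isclose(lhs, rhs)

# a B-transform A = [[1,0],[B,1]] with antisymmetric B satisfies A eta A^t = eta
B = np.array([[0., 1.], [-1., 0.]])
A = np.block([[np.eye(d), np.zeros((d, d))],
              [B, np.eye(d)]])
assert np.allclose(A @ eta @ A.T, eta)
```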
The gauge symmetries of DFT are given by the action of a generalized Lie derivative, acting on functions $\phi$ by\footnote{We use the notation $\partial_M$ for the pair $(\partial_i,\tilde \partial^i)$, so expressions like $V^M\partial_M$ are expanded as $V^i\partial_i + V_i \tilde \partial^i$.} $\mathcal{L}_V \phi = V^K\partial_K \phi$ and generalized vectors $W$ according to
\eq{
(\mathcal{L}_V W)_K =\;&V^P\partial_P W_K + (\partial_K V^P - \partial^P V_K) W_P\;,\\
(\mathcal{L}_V W)^K =\;&V^P\partial_P W^K -(\partial_P V^K - \partial^K V_P)W^P\;.
}
Finally, the commutator of two generalized Lie derivatives is the generalized Lie derivative with respect to the \emph{C-bracket} of the two generalized vectors, which is given in components by
\eq{
\label{Cbracket}
\Bigl([V,W]_C\Bigr)^P =\;V^K\partial_K W^P - W^K\partial_K V^P -\frac{1}{2}\Bigl(V^K\partial^P W_K - W^K\partial^PV_K\Bigr)\;.
}
Note that for the specific solution $\tilde \partial^i=0$, this bracket reduces to the well-known Courant bracket of generalized geometry. In the following subsection, we will present a deformation of the bilinear form $\eta$ and the C-bracket found in double field theory.
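This reduction can be verified symbolically. The following sketch (Python with sympy, for $d=1$; the variable names are ours) evaluates the component formula \eqref{Cbracket} on generalized vectors depending only on $x$, so that all tilde-derivatives drop out, and compares with the Courant bracket $[X+\xi,Y+\eta] = [X,Y] + \mathcal{L}_X\eta - \mathcal{L}_Y\xi - \tfrac{1}{2}d(\iota_X\eta - \iota_Y\xi)$.

```python
import sympy as sp

x = sp.symbols('x')
# d = 1, strong-constraint solution: no tilde-x dependence, tilde-partial = 0
Xv, Xf, Yv, Yf = [sp.Function(n)(x) for n in ('Xv', 'Xf', 'Yv', 'Yf')]
D = lambda F: sp.diff(F, x)

# C-bracket components from the doubled formula, all tilde-derivatives dropped
vec = Xv*D(Yv) - Yv*D(Xv)
form = (Xv*D(Yf) - Yv*D(Xf)
        - sp.Rational(1, 2)*(Xv*D(Yf) + Xf*D(Yv) - Yv*D(Xf) - Yf*D(Xv)))

# Courant bracket of X + xi and Y + eta with X = Xv d/dx, xi = Xf dx, etc.:
# [X,Y] + L_X eta - L_Y xi - (1/2) d(i_X eta - i_Y xi)
courant_vec = Xv*D(Yv) - Yv*D(Xv)
courant_form = (Xv*D(Yf) + D(Xv)*Yf - Yv*D(Xf) - D(Yv)*Xf
                - sp.Rational(1, 2)*D(Xv*Yf - Yv*Xf))

assert sp.simplify(vec - courant_vec) == 0
assert sp.simplify(form - courant_form) == 0
```

Both the vector and the one-form components agree, as the reduction requires.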
\subsection{$\alpha'$-deformations}
Classical closed string theory is described by a two-dimensional sigma model. Perturbative expansions are formal power series in the coupling constant $\alpha' = l_s^2$, where $l_s$ is the fundamental string length. Recently, corrections to the bilinear form and C-bracket up to first order in $\alpha'$ were given in \cite{Hohm:2013jaa, Hohm:2014eba}. For the correction of the bilinear form, we introduce the notation $\langle V,W\rangle_{\alpha'}:=\;\langle V,W\rangle - \alpha'\langle\langle V,W\rangle\rangle$, where the component expression for the correction is
\eq{
\label{bilinearcorr}
\langle\langle V, W\rangle \rangle =\;\partial_P V^Q \partial_Q W^P\;.
}
Similarly, for the correction to the C-bracket, we introduce the short notation $[V,W]_{\alpha'} :=\; [V,W]_C - \alpha'[[V,W]]$, where the correction is given by
\eq{
[[V,W]]^K = \;\frac{1}{2}\Bigl(\partial^K\partial_Q V^P \partial_P W^Q - V\leftrightarrow W\Bigr)\;.
}
Note that this expression has a form part and a vector part. As an example, we expand the one-form part in terms of partial derivatives:
\eq{
\label{cbracketcorr}
[[V,W]]_i =\;\frac{1}{2}\Bigl(&\partial_i\partial_m V^n\partial_n W^m + \partial_i \partial_m V_n \tilde \partial^n W^m + \partial_i\tilde \partial^m V^n\partial_n W_m \\
&+\partial_i\tilde \partial^m V_n \tilde \partial^n W_m - V\leftrightarrow W\Bigr)\;.
}
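For comparison, the upper-index component follows from the covariant expression for $[[V,W]]^K$ by the same bookkeeping, with $\tilde\partial^i$ appearing in front instead of $\partial_i$ (we state it here for completeness; it is obtained directly from the index conventions above):

```latex
\eq{
[[V,W]]^i =\;\frac{1}{2}\Bigl(&\tilde\partial^i\partial_m V^n\partial_n W^m + \tilde\partial^i \partial_m V_n \tilde \partial^n W^m + \tilde\partial^i\tilde \partial^m V^n\partial_n W_m \\
&+\tilde\partial^i\tilde \partial^m V_n \tilde \partial^n W_m - V\leftrightarrow W\Bigr)\;.
}
```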
The goal of this work is to get a systematic explanation of the derivative expansions \eqref{bilinearcorr} and \eqref{cbracketcorr}. In the following section, we are going to set up a mathematical formalism to rewrite the bilinear form and the C-bracket in terms of Poisson brackets. This will allow us finally to identify the deformation using a Moyal-Weyl star product on a specific symplectic supermanifold.
\section{Lie bialgebroids and double fields}
For a finite-dimensional vector space $\mathcal{V}$, it is a standard exercise to show the isomorphism between the exterior algebra and the algebra of polynomial functions on the parity-reversed space $\Pi\mathcal{V}$:
\eq{
\label{pi}
\wedge^\bullet \mathcal{V}^* \simeq \textrm{Pol}^\bullet(\Pi \mathcal{V})\;.
}
For a finite-dimensional $\mathbb{Z}_2$-graded vector space $\mathcal{W} = \mathcal{W}_0 \oplus \mathcal{W}_1$, parity reversion $\Pi$ acts according to $(\Pi \mathcal{W})_0 = \mathcal{W}_1$ and $(\Pi \mathcal{W})_1 = \mathcal{W}_0$. In \eqref{pi}, elements of $\mathcal{V}$ have degree $0$ and elements of $\Pi\mathcal{V}$ have degree $1$. In the case of vector bundles, differentials are derivations of the exterior algebra, which get mapped to derivations on functions, i.e. vector fields. Since the differentials square to zero, the corresponding vector fields are \emph{homological}. These statements are summarized by the structure of a Lie algebroid:
\begin{defn}
A \emph{Lie algebroid} is a vector bundle $A\rightarrow M$ together with a homological vector field $d_A$ of degree 1 on the supermanifold $\Pi A$.
\end{defn}
A pair $(A,A^*)$ of a Lie algebroid and its linear dual has the structure of a \emph{Lie bialgebroid} if the differentials respect the brackets on the dual spaces. This will be the basic structure used in the following sections.
\subsection{Lie bialgebroids and the Drinfel'd double}
Let $(A,A^*)$ be a pair of dual Lie algebroids over a manifold $M$. The homological vector field $d_A$ can be lifted to a function $h_{d_A}$ on the cotangent bundle $T^*\Pi A \overset{\mathfrak{p}}{\rightarrow} \Pi A$; likewise, the corresponding operator $d_{A^*}$ for the dual can be lifted to $T^*\Pi A^* \overset{\bar{\mathfrak{p}}}{\rightarrow} \Pi A^*$. As in the case of standard phase spaces, there is a Legendre transform $L: T^*\Pi A \rightarrow T^*\Pi A^*$, which can be used to pull back functions. Thus we have the situation
\eq{
\label{diagram}
\begin{matrix}
T^*\Pi A & \overset{L}\rightarrow & T^*\Pi A^* \\
\downarrow \mathfrak{p} & & \downarrow \bar{\mathfrak{p}} \\
\Pi A & & \Pi A^*
\end{matrix}
}
For local formulas we use coordinates $(x^i,\xi^a)$ on $\Pi A$, where $x^i$ are coordinates on the base manifold and $\xi^a$ denote the (Grassmann odd) fibre coordinates. On its cotangent bundle, we have in addition the canonical conjugate momenta, i.e. $(x^i,\xi^a,x^*_i, \xi^*_a)$. As in the purely even case, there is a canonical Poisson bracket on $T^*\Pi A$, given by the relations
\eq{
\lbrace x^i,x^*_j\rbrace = \; \delta^i_j\;,\qquad \lbrace \xi^a,\xi^*_b\rbrace =\; \delta^a_b\;.
}
Using this Poisson structure and the ``lifted'' vector field
\eq{
\label{theta}
\theta :=\; h_{d_A} + L^*h_{d_{A^*}}\;,
}
it is possible to write down the following concise characterization of $(A,A^*)$ being a Lie bialgebroid:
\begin{thm}
\label{Bialgthm}
The pair $(A,A^*)$ is a Lie bialgebroid if and only if $\lbrace \theta, \theta\rbrace = 0\;.$
\end{thm}
We refer to \cite{Deethesis} for a proof and further details on the mathematical structures introduced in the present work. Theorem \ref{Bialgthm} is the motivation for the following definition:
\begin{defn}
For a Lie bialgebroid as above, the bundle $T^*\Pi A$, equipped with the homological vector field $\lbrace \theta, \cdot\rbrace$ is called the \emph{Drinfel'd double} of $(A,A^*)$.
\end{defn}
We refer to \cite{Mack1, Mack2} for the original work on the Drinfel'd double in this context. The essential ingredient for the homological vector field is the function $\theta$ in \eqref{theta}.
\subsection{C-bracket in terms of Poisson brackets}
Let $M$ be a Poisson manifold. Then the standard example of a Lie bialgebroid is $(A,A^*)=(TM,T^*M)$. The respective brackets are the Lie bracket and Koszul bracket\footnote{The Koszul bracket of forms $\omega_1,\omega_2 \in \Gamma(T^*M)$ is given by
$$[\omega_1,\omega_2]_K = \mathcal{L}_{\pi^\sharp(\omega_1)}\omega_2 - \iota_{\pi^\sharp(\omega_2)}d\omega_1\;,$$ where $\mathcal{L}$ is the Lie derivative and $\pi^\sharp$ is the anchor determined by the Poisson structure.}, giving rise to the de Rham and Poisson-Lichnerowicz differential, respectively. We use their lifts to functions on the Drinfel'd double to define two sets of momentum variables $p_i,\tilde p^i$:
\eq{
h_{d_A} &=\;a^j_i(x)x^*_j\xi^i -\tfrac{1}{2}f^k_{ij}(x)\xi^i\xi^j\xi^*_k =:\; \xi^ip_i\;,\\
h_{d_{A^*}} &=\;a^{ij}(x)x^*_i\xi^*_j + \tfrac{1}{2}Q_k^{ij}(x)\xi^k\xi^*_i\xi^*_j =:\; \xi^*_i \tilde p^i\;,
}
where we denote the anchor maps by $a^j_i$ and $a^{ij}$, and $f$ and $Q$ are determined by the brackets on $A$ and $A^*$, respectively\footnote{The notation $f$ and $Q$ is common in the physics literature, where these objects play a role in flux compactifications of string theory.}. We consider the momenta $p_i$ and $\tilde p^i$ to act on functions on $T^*\Pi A$ by using the Poisson bracket, e.g. $\lbrace p_i, \cdot\rbrace$. In particular, lifting functions $\phi \in \mathcal{C}^\infty(M)$ to $T^*\Pi A$ (we use the same letter $\phi$ for the lift), we define the following two differential operators:
\eq{
\partial_i \phi :=\;\lbrace p_i, \phi\rbrace\;,\qquad \tilde \partial^i \phi :=\; \lbrace \tilde p^i,\phi\rbrace\;.
}
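On the even sector, where no Koszul signs arise, these definitions can be checked directly. The sketch below (Python with sympy; the constant Poisson bivector and the sign convention for the canonical bracket are illustrative choices of ours, and the odd $\xi$-terms are set aside since they do not act on $\xi$-independent functions) verifies that with trivial anchor $a^j_i=\delta^j_i$ the momenta $p_i$ act as $\partial_i$, and that for a constant Poisson bivector the $\tilde p^j$ act as $\tilde\partial^j = \pi^{ij}\partial_i$.

```python
import sympy as sp

x1, x2, s1, s2 = sp.symbols('x1 x2 s1 s2')  # coordinates x^i and momenta x*_i
X, S = (x1, x2), (s1, s2)
phi = sp.Function('phi')(x1, x2)

def pb(F, G):
    # even part of the canonical bracket; the sign convention is chosen
    # such that momenta generate +d/dx, i.e. pb(x*_i, f) = df/dx^i
    return sum(sp.diff(F, S[i])*sp.diff(G, X[i])
               - sp.diff(F, X[i])*sp.diff(G, S[i]) for i in range(2))

# de Rham side with trivial anchor a^j_i = delta^j_i: p_i acts as partial_i
for i in range(2):
    assert sp.simplify(pb(S[i], phi) - sp.diff(phi, X[i])) == 0

# dual side for a constant Poisson bivector: tilde p^j = pi^{ij} x*_i,
# so tilde-partial^j acts as pi^{ij} partial_i
pi = sp.Matrix([[0, 1], [-1, 0]])
ptilde = [sum(pi[i, j]*S[i] for i in range(2)) for j in range(2)]
for j in range(2):
    target = sum(pi[i, j]*sp.diff(phi, X[i]) for i in range(2))
    assert sp.simplify(pb(ptilde[j], phi) - target) == 0
```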
Furthermore, we lift generalized vectors to $T^*\Pi A$: for a local section $X^i\partial_i + \omega_i\,dx^i \in \Gamma(TM\oplus T^*M)$ we define the lift $V:= X^i\xi^*_i + \omega_i \xi^i \in T^*\Pi A$. Adapting the proof given in \cite{Deethesis} for Courant brackets, now using $\partial_i$ and $\tilde \partial^i$, we obtain the following result:
\begin{thm}
\label{thm1}
For vanishing $f$ and $Q$, let $V,W$ be lifts of generalized vectors to $T^*\Pi A$. Furthermore, define the Dorfmann-product $\circ$ by
\eq{
V\circ W :=\; \Bigl\lbrace \lbrace \xi^i p_i + \xi^*_i \tilde p^i, V\rbrace,W\Bigr\rbrace\;.
}
Then the C-bracket of $V,W$ (lifted to $T^*\Pi A$) is given by
\eq{
[V,W]_C =\; \frac{1}{2}\Bigl(V\circ W - W\circ V\Bigr)\;.
}
\end{thm}
The proof is an easy evaluation in local coordinates of $T^*\Pi A$, and comparison with \eqref{Cbracket}, see \cite{Deser:2014mxa}. The generalization for non-vanishing $f$ and $Q$ would give a version of the C-bracket containing ``fluxes'', which, as far as we know, has not been done so far in the physics literature. As a final remark for this subsection, we observe that the bilinear form $\langle V,W\rangle$ is given by evaluating the Poisson bracket $\lbrace V,W\rbrace$ of the lifted quantities to $T^*\Pi A$. These observations will be used in the following sections to suggest a way to understand the deformations \eqref{bilinearcorr} and \eqref{cbracketcorr} of the bilinear form and C-bracket encountered in DFT.
\section{Deformation of the metric and C-bracket}
The result of theorem \ref{thm1} immediately suggests the interpretation of $\alpha'$-corrections such as \eqref{bilinearcorr} and \eqref{cbracketcorr} in terms of deformation theory. Given a formal star product on the algebra of smooth functions on a Poisson manifold\footnote{More precisely on formal power series in a deformation parameter $t$, usually denoted by ${\mathcal C}^\infty(M)[[t]]$. We refer to \cite{Blumenhagen:2011ph, Blumenhagen:2013zpa, Bakas:2013jwa, Blumenhagen:2014sba} for recent applications of deformation theory in closed string theory and to \cite{Bordemann:1999ca, Klemm:2001yu, KellerWaldmann} for star products on graded manifolds.}, the star-commutator reproduces the Poisson bracket in the first non-trivial order:
\eq{
\lbrace f,g\rbrace =\;\underset{t\rightarrow 0}{\lim}\frac{1}{t}\Bigl(f\star g - g\star f\Bigr)\;.
}
Thus, higher orders lead to deformations of the Poisson bracket and, as a consequence of theorem \ref{thm1}, of the metric and C-bracket. In the following, we will define an appropriate notion of star-commutator taking into account the Koszul signs on the graded manifold $T^*\Pi A$. Furthermore, we will give a (constant) Poisson structure on $T^*\Pi A$ such that the corrections of DFT are reproduced by taking star-commutators w.r.t. the corresponding Moyal-Weyl product.
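To make the semiclassical limit concrete, the following minimal numerical sketch (our own illustration, not part of the construction above) truncates the Moyal--Weyl star product on $\mathbb{R}^2$ at first order in $t$ and checks that the rescaled star commutator reproduces the canonical Poisson bracket; the test functions and sample point are arbitrary choices.

```python
def d(f, var, x, p, h=1e-6):
    """Central finite difference of f(x, p) in the variable 'x' or 'p'."""
    if var == "x":
        return (f(x + h, p) - f(x - h, p)) / (2 * h)
    return (f(x, p + h) - f(x, p - h)) / (2 * h)

def poisson(f, g, x, p):
    # Canonical Poisson bracket {f, g} = df/dx dg/dp - df/dp dg/dx.
    return d(f, "x", x, p) * d(g, "p", x, p) - d(f, "p", x, p) * d(g, "x", x, p)

def star_commutator_first_order(f, g, x, p, t):
    # Moyal-Weyl product truncated at first order in the deformation parameter:
    #   f * g = f g + (t/2) {f, g} + O(t^2),
    # so the star commutator is  f * g - g * f = t {f, g} + O(t^2).
    fg = f(x, p) * g(x, p) + 0.5 * t * poisson(f, g, x, p)
    gf = g(x, p) * f(x, p) + 0.5 * t * poisson(g, f, x, p)
    return fg - gf

# Example: f = x^2 p, g = x p^2, for which {f, g} = 3 x^2 p^2.
f = lambda x, p: x**2 * p
g = lambda x, p: x * p**2
x0, p0, t = 1.3, 0.7, 1e-3
bracket = star_commutator_first_order(f, g, x0, p0, t) / t
```

Dividing the commutator by $t$ recovers the Poisson bracket, independent of higher-order corrections, which is exactly the limit stated above.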
\subsection{Star-commutator and Poisson structure}
For the Moyal-Weyl case, let $I=i_1\cdots i_k$ and $J=j_1\cdots j_k$ be multi-indices with $\partial_I = \partial_{x^{i_1}}\cdots \partial_{x^{i_k}}$; then the star commutator for purely even manifolds has the standard form
\eq{
\lbrace f, g\rbrace^* = \; \overset{\infty}{\underset{k=1}{\sum}}\,t^k\Bigl(\underset{IJ}{\sum}\,m_k^{IJ}(\partial_I f \partial_J g - \partial_I g \partial_J f)\Bigr)\;.
}
In the case of the symplectic supermanifold $T^*\Pi A$, we will replace this by the following expression:
\eq{
\label{supercomm}
\lbrace f,g\rbrace^* =\; \overset{\infty}{\underset{k=1}{\sum}}\;t^k\Bigl(\underset{IJ}{\sum}\,(\partial_I f \partial_J g - (-1)^\epsilon \partial_I g \partial_J f )\Bigr)\;.
}
The sign $(-1)^\epsilon$ takes care of the $\mathbb{Z}_2$-grading and is given by
\eq{
\epsilon =\; \lvert f\rvert \lvert g \rvert + \lvert x^J \rvert(\lvert f \rvert -1) +\lvert x^I \rvert(\lvert g \rvert -1)\;,
}
where $\lvert f \rvert$ denotes the $\mathbb{Z}_2$-degree of a function and the shorthand notation $\lvert x^I\rvert := \lvert x^{i_1}\rvert + \dots + \lvert x^{i_k}\rvert$ is used. We remark that, in contrast to the Moyal-Weyl case, where the odd powers of the deformation parameter do not contribute due to the antisymmetry of the Poisson tensor, in the graded case such contributions do arise because of the different sign rule. In our case this opens up the possibility to obtain the appropriate $\alpha'$-correction.
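As a small illustration of the sign rule, the following sketch (the function name is ours) evaluates $(-1)^\epsilon$ for given $\mathbb{Z}_2$-degrees, passed in as integers mod 2.

```python
def koszul_sign(deg_f, deg_g, deg_xI, deg_xJ):
    # epsilon = |f||g| + |x^J|(|f| - 1) + |x^I|(|g| - 1), evaluated mod 2.
    eps = (deg_f * deg_g + deg_xJ * (deg_f - 1) + deg_xI * (deg_g - 1)) % 2
    return -1 if eps else 1
```

For purely even entries the sign is $+1$, so \eqref{supercomm} reduces to the standard antisymmetric Moyal--Weyl commutator, while e.g. two odd functions differentiated along odd directions pick up $-1$, turning the bracket into a symmetric combination.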
Finally, we have to choose a Poisson structure on $T^*\Pi A$ which correctly reproduces both the correction to the bilinear form $\langle \cdot,\cdot \rangle$ and that of the C-bracket. Furthermore, the corresponding Poisson brackets, i.e. the first-order star commutators, still have to reproduce the result of theorem \ref{thm1}. It turns out that this is indeed possible. To avoid long calculations we choose a setup which is as simple as possible but still shows the essential features. Let $M$ be a symplectic manifold with Poisson tensor $\pi$. In this case $(TM,T^*M)$ is a Lie bialgebroid. In the expressions for the $\alpha'$-corrections, there are no $f$- and $Q$-fluxes. We can achieve the latter by taking the standard basis of vector fields on the tangent bundle. As a consequence, we get
\eq{
h_{d_A} =\; \xi^m x^*_m\;, \qquad L^*h_{d_{A^*}} =\; \xi^*_m \pi^{mn} x^*_n\;.
}
This is a special solution to the strong constraint of double field theory, with $\tilde{\partial}^i f = \lbrace \tilde p^i,f\rbrace = \pi^{ij}\partial_j f$. We choose the following Poisson structure on the Drinfel'd double:
\eq{
\label{specialPoisson}
\pi_{T^*\Pi A}=\;\frac{\partial}{\partial x^*_i} \wedge \frac{\partial}{\partial x^i} + \frac{\partial}{\partial \xi^*_i}\wedge \frac{\partial}{\partial \xi^i} + \frac{\partial}{\partial x^i}\wedge\frac{\partial}{\partial \xi^*_i} -\pi^{ij}\frac{\partial}{\partial x^i}\wedge \frac{\partial}{\partial \xi^j}\;.
}
We will give our results for the deformation for this situation. In the general case, we have a differential operator $\tilde \partial^i = \lbrace \tilde p^i,\cdot\rbrace$, whose action on functions depends on the chosen Lie bialgebroid. If it is possible to associate a vector field $\tfrac{\partial}{\partial \tilde x_i}$ to this operator, the corresponding Poisson structure would be
\eq{
\pi_{T^*\Pi A} =\; \frac{\partial}{\partial x^*_i} \wedge \frac{\partial}{\partial x^i} + \frac{\partial}{\partial \xi^*_i}\wedge \frac{\partial}{\partial \xi^i} + \frac{\partial}{\partial x^i}\wedge\frac{\partial}{\partial \xi^*_i} + \frac{\partial}{\partial \tilde x_i} \wedge \frac{\partial}{\partial \xi^i}\;.
}
We will leave the investigation of existence and properties of such a Poisson structure and its relation to double field theory for future work and give our deformation results for the Poisson tensor \eqref{specialPoisson} in the following.
\subsection{Deformation of the metric}
Due to the various terms of the graded Poisson structure \eqref{specialPoisson}, computing higher orders of the graded Moyal--Weyl product is lengthy but straightforward. We therefore refer the reader to \cite{Deser:2014wva} for computational details and only give the results. We will use the notation $\tilde \partial^i$ for $\lbrace \tilde p^i,\cdot\rbrace$. Furthermore, we use the following notation for star commutators:
\eq{
\lbrace f, g\rbrace^* =\; \overset{\infty}{\underset{k=1}{\sum}}\; t^k\lbrace f,g\rbrace_{(k)}\;.
}
Taking $V = V^m(x)\xi^*_m + V_m(x)\xi^m$ and $W = W^m(x)\xi^*_m + W_m(x)\xi^m$ to be lifts of generalized vectors to $T^*\Pi A$, we get the following results for the first two orders in the deformation parameter:
\eq{
\lbrace V, W\rbrace_{(1)} &=\, (V^i W_i + V_i W^i) =\,\langle V,W\rangle\;, \\
\lbrace V, W\rbrace_{(2)}&=\, - \partial_i V^j\partial_j W^i - \partial_i V_j \tilde \partial^j W^i -\tilde\partial^i V^j \partial_j W_i - \tilde \partial^i V_j \tilde \partial^j W_i\;.
}
Comparing the latter expressions with the formulas from DFT \eqref{bilinearcorr}, we get the following statement:
\begin{thm}
Let $V=V^i\xi^*_i + V_i \xi^i$ and $W=W^i\xi^*_i + W_i\xi^i$ be two generalized vectors, lifted to $T^*\Pi A$. Then we have
\eq{
\frac{1}{t}\lbrace V,W\rbrace^* =\; \langle V,W\rangle -t\langle\langle V,W\rangle \rangle + \mathcal{O}(t^2)\;,
}
i.e. the graded star commutator gives the deformation of the inner product $\langle \cdot,\cdot\rangle$ up to second order.
\end{thm}
For convenience of notation, we always denote the generalized vectors $V,W$ and their lifts to $T^*\Pi A$ by the same letters. It is clear from the context which objects are used.
\subsection{Deformation of the C-bracket}
Using theorem \ref{thm1}, we are now able to compute corrections to the C-bracket. First, it is easy to see that the Poisson structure \eqref{specialPoisson}, together with the sign rule given in \eqref{supercomm}, correctly reproduces the Dorfman product $\circ$, where we abbreviate $\theta := \xi^i p_i + \xi^*_i \tilde p^i$:
\eq{
V\circ W =\; \Bigl\lbrace \lbrace \theta, V\rbrace_{(1)}, W\Bigr\rbrace_{(1)}\;.
}
To see which Poisson brackets contribute to the first non-trivial corrections to $V\circ W$, we expand the double Poisson bracket up to order $t^4$:
\eq{
\Bigl\lbrace \lbrace \theta, V\rbrace,W\Bigr\rbrace^* = \;&t^2 \, V\circ W + t^3\Bigl\lbrace\lbrace \theta, V\rbrace_{(2)},W\Bigr\rbrace_{(1)} \\
+ &t^3\Bigl\lbrace\lbrace \theta, V\rbrace_{(1)}, W\Bigr\rbrace_{(2)} + \mathcal{O}(t^4)\;.
}
A short calculation shows the vanishing of $\lbrace \theta, V\rbrace_{(2)}$ for the chosen setup ($\pi$ constant) and we have
\eq{
\lbrace \theta, V\rbrace_{(1)} =\; &\xi^m\xi^n \partial_m V_n + \xi^*_k \xi^m \pi^{kn} \partial_n V_m + \xi^*_k \xi^*_m \pi^{kn}\partial_n V^m \nonumber \\
&+V_n \pi^{nm} x^*_m + x^*_n V^n\;.
}
Inserting this expression into $\Bigl\lbrace \lbrace \theta, V\rbrace_{(1)}, W\Bigr\rbrace_{(2)}$ gives exactly the contribution which was encountered for this setup in DFT, see equation \eqref{cbracketcorr}. Thus we state the following result:
\begin{thm}
Let $V = V^i\xi^*_i + V_i \xi^i$ and $W=W^i\xi^*_i + W_i \xi^i$ be two generalized vectors lifted to $T^*\Pi A$, then we have
\eq{
\frac{1}{2t^2}\Bigl(\Bigl\lbrace\lbrace \theta, V\rbrace^*,W\Bigr\rbrace^* - \Bigl\lbrace\lbrace \theta, W\rbrace^*,V\Bigr\rbrace^*\Bigr) =\; [V,W]_C + t[[V,W]]_C + \mathcal{O}(t^2)\;,
}
i.e. the two-fold star commutator coincides with the $\alpha'$-corrected C-bracket of DFT up to second order in the deformation parameter $t=\alpha'$.
\end{thm}
The proof is a straightforward but lengthy evaluation in local coordinates. We refer the reader to the original article \cite{Deser:2014wva} for details, especially concerning the Koszul signs. To sum up, in the framework chosen above, it is possible to explain $\alpha'$-corrections to the bilinear pairing and C-bracket encountered in string theory via a star commutator with respect to a graded version of the Moyal-Weyl product.
\section{Outlook: $B$-, $\beta$- transformations and the Atiyah algebra}
In the final section we want to give additional evidence for the relevance of the introduced mathematical framework in physics, especially to the structures arising in DFT. First we recall that a \emph{B-transform} of a generalized vector $(X,\omega)$ is defined by
\eq{
(X,\omega)\mapsto (X, \omega + \iota_X B)\;, \quad B\in \Gamma(\wedge^2 T^*M)\;.
}
Furthermore, a $\beta$-transform is given in an analogous way by
\eq{
(X,\omega) \mapsto (X+\iota_\omega \beta, \omega)\;, \quad \beta \in \Gamma(\wedge^2 TM)\;.
}
Finally, a linear transformation is given by
\eq{
(X,\omega) \mapsto (X + C(X),\omega + C^{-t}(\omega))\;, \quad C\in \Gamma(TM\otimes T^*M)\;,
}
where $C^{-t}$ denotes the inverse transpose of the invertible matrix $C$. It is natural to lift these transformations to $T^*\Pi A$; introducing the lifts
\eq{
B = \,\tfrac{1}{2}B_{ij}\xi^i\xi^j\;,\quad \beta =\,\tfrac{1}{2}\beta^{ij}\xi^*_i\xi^*_j\;,\quad C=\,C^j_i\xi^*_j\xi^i\;,
}
it is a straightforward exercise to show that the action of $B$-, $\beta$- and linear transformations on the lift $\Sigma = X^i\xi^*_i + \omega_i \xi^i$ of a generalized vector $(X,\omega)$ is given by
\eq{
\label{Atyiah}
\Sigma \mapsto \Sigma + \lbrace&\Sigma,B\rbrace\;,\quad \Sigma \mapsto \Sigma + \lbrace \Sigma, \beta\rbrace \\
&\Sigma \mapsto \Sigma + \lbrace \Sigma, C\rbrace\;.
}
Comparing with \cite{Roytenberg:2001am}, we see that the transformations \eqref{Atyiah} are the lifts to $T^*\Pi A$ of the generators of the \emph{Atiyah algebra} of infinitesimal bundle transformations of $A\oplus A^*$, preserving the bilinear form $\eta$. With this very convenient rewriting of the transformations used frequently in the generalized geometry applications to string theory, an immediate open question is about the deformation of these transformations. The tools established in this work will be helpful to investigate this further. In addition to that, the inclusion of fluxes as ``fibre translations'' in the sense of \cite{Roytenberg:2001am} could be performed conveniently as suggested in \cite{Deser:2014wva}.
\subsection*{Acknowledgement}
I want to thank Jim Stasheff for collaboration and Athanasios Chatzistavra\-kidis, Larisa Jonke, Tom Lada, Erik Plauschinn, Dmitry Roytenberg and Theodore Voronov for discussion. Furthermore, I want to thank the organisers of the Bialowieza workshop, especially Tomasz Goli\'{n}ski and Aneta Sli\.{z}ewska for taking care especially of the newcomers to the conference venue.
\subsection{Outlier Text Detection}
\label{subsec:otd}
The goal of outlier text detection is to identify semantically-deviating (or out-of-domain) documents from a given text corpus.
The most dominant approach to this task is applying existing outlier detection methods on a low-dimensional vector space, where the semantic meaning of each document is effectively captured~\cite{mikolov2013distributed, meng2019spherical, zhuang2017identifying, fouche2020mining}.
Specifically, the outlierness of each document is computed by using the local outlier factor~\cite{breunig2000lof}, randomized hashing functions~\cite{sathe2016subspace}, or non-negative matrix factorization~\cite{kannan2017outlier}.
Recently, there have been several attempts to employ neural networks~\cite{ruff2019self, manolache2021date} for modeling the \textit{normality} of documents, regarding all unlabeled documents in the corpus as normal (i.e., inliers);
they detect the outliers based on how much each document deviates from the normality.
However, all the existing detection methods have critical limitations.
First, they only identify semantically minor documents in the corpus, without taking the actual underlying category (or topic) structure into consideration.
In this case, they might make incorrect predictions on the documents of \textit{inlier-but-minor} categories (i.e., false positive) or \textit{noisy-but-frequent} documents miscollected from other sources (i.e., false negative);
this will be shown in our experiments (Section~\ref{subsec:qualanal}).
In other words, they cannot consider a set of inlier categories, which can be given as users' prior knowledge or interests.
For this reason, the scope of inliers and outliers needs to be designated more concretely, depending on the categories covered by the corpus.
Second, they do not explicitly learn the useful features related to inlier categories, which eventually results in limited performance for outlier detection.
They mainly utilize the embedding space optimized in an unsupervised manner;
the similarity among documents is implicitly captured by the co-occurrence of words, and this makes the documents not clearly distinguishable according to their category.
The most recent work on text classification~\cite{hendrycks2020pretrained, moon2021masker} empirically demonstrated that discriminative modeling among in-domain classes is helpful to enhance the robustness to the out-of-domain inputs.
Since class-indicative (i.e., discriminative) features are effective to determine whether an input belongs to a target class or not, they also can be used for detecting inputs that do not belong to any of the in-domain classes~\cite{lee2020multi}.
From this perspective, the unsupervised outlier detectors can be further improved by encouraging discrimination power for inlier categories.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{FIG/framework.png}
\caption{The overview of our OOCD\xspace framework for detecting out-of-category documents. It consists of two steps for scoring the confidence: (1) discriminative text embedding for generating the pseudo-category label of unlabeled documents, and (2) neural classifier training for making the target-category prediction on unlabeled documents.}
\label{fig:framework}
\end{figure*}
\subsection{Weakly Supervised Text Classification}
\label{subsec:wsclf}
To alleviate the difficulty to obtain the class label of each document for text classification, several recent studies tried to train a text classifier by using unlabeled documents only with the label names or few keywords of each target class~\cite{meng2018weakly, meng2020text};
this task is called \textit{weakly supervised} text classification.
The main challenge is to fully utilize various types of weak supervision for effectively training a text classifier.
To this end, existing methods adopt various techniques to infer the pseudo-label of unlabeled documents by distilling the knowledge from a pre-trained word embedding space~\cite{meng2018weakly, huang2020weakly}, or a pre-trained language model~\cite{meng2020text}.
However, they assume that all unlabeled documents in a training corpus belong to one of the target classes, which implies that training and test documents are sampled from the in-domain distribution.
For this reason, although they are able to accurately classify in-domain documents into the target classes to some extent, they are not robust to out-of-domain documents.
To be specific, out-of-domain documents that reside in the training set can cause the classifier to make unreliable (i.e., high confident) predictions on out-of-domain inputs.
To enhance the robustness by ensuring the ability to correctly identify out-of-domain documents, the training process needs to filter out non-confident documents that are less relevant to the target classes.
\subsection{Problem Formulation}
\label{subsec:problem}
In this work, we focus on a weakly supervised outlier detection task with target-category names available, referred to as out-of-category detection.
Unlike unsupervised outlier detection, the goal of this task is to distinguish unlabeled documents according to their relevance (or similarity) to target categories.
\begin{definition}[Out-of-category document detection]
Given a set of unlabeled documents $\mathcal{D}=\{d_1, \ldots, d_N\}$ with their vocabulary $\mathcal{W}=\{w_1, \ldots, w_M\}$ and a set of target categories $\mathcal{C}=\{c_1,\ldots,c_K\}$ designated by their names $\mathcal{W}_\mathcal{C}=\{w_{c_1}, \ldots, w_{c_K}\}\subset\mathcal{W}$, we aim to obtain a measure of \textit{confidence}, denoted by $\text{conf}:\mathcal{D} \mapsto \mathbb{R}$, indicating how confidently each document belongs to the target categories.
\end{definition}
\subsection{Problem Analysis}
\label{subsec:analysis}
The key challenge of this problem is to model the confidence score of an unlabeled document based on its relevance to target categories.
The straightforward solutions for this challenge can be summarized into two approaches, depending on how to encode the category information of each document by utilizing the target-category names.
\smallsection{Using a text embedding space}
One possible solution is leveraging a joint embedding space of all words and documents~\cite{meng2019spherical, meng2020hierarchical}, to measure a document's relevance to each target category by the similarity of the document vector and the word (i.e., category name) vector in the embedding space.
This approach is effective to capture the category information of each document under weak supervision, since it additionally utilizes the co-occurrence between words and documents as self-supervision.
However, a text embedding space is not able to encode rich contextual information within a document.
\smallsection{Using a neural text classifier}
Another solution is training a target-category classifier which outputs the probability that an input document belongs to each category, based on a neural model with its capability of extracting useful features from a text.
The documents that contain one of the target-category names need to be collected to build the set of training documents (with the corresponding category labels),
but in this case, the classifier cannot be effectively trained due to the noisy labels and the limited number of labeled documents~\cite{meng2018weakly}.
\subsection{Overview}
\label{subsec:overview}
To get the best of both approaches, our framework for out-of-category document detection, termed OOCD\xspace, basically adopts a two-step approach that utilizes both a text embedding space and a neural classifier.
To be specific, OOCD\xspace aims to more effectively train the neural classifier by fully utilizing the knowledge encoded in the text embedding space, so that it can output the reliable confidence score based on its target-category prediction on unlabeled documents.
Figure~\ref{fig:framework} provides a high-level overview of our OOCD\xspace framework.
The first step, for embedding-based confidence scoring, maps all words, documents, and categories into a spherical embedding space, while making them discriminative based on given target-category names.
Using the embedding vectors, OOCD\xspace not only generates the pseudo-category label of all unlabeled documents, but also produces their confidence from the pseudo-label.
The second step, for classifier-based confidence scoring, trains a neural text classifier by using the set of confident documents and their pseudo-labels, filtered by the confidence in the first step.
Lastly, OOCD\xspace ranks all documents by their confidence, computed from the target-category prediction result.
\subsection{Embedding-based Confidence Scoring}
\label{subsec:confemb}
\subsubsection{Category-discriminative text embedding}
\label{subsubsec:josd}
To effectively capture the textual similarity (or distance) among words, documents, and categories into a joint embedding space, we employ the state-of-the-art spherical embedding framework~\cite{meng2019spherical} with an additional term using the target-category names for inter-category distinctiveness.
Due to the space limit, we briefly introduce the key idea and objective of our text embedding in this section.
Please refer to~\cite{meng2019spherical, meng2020hierarchical} for more details.
The objective of our text embedding is to maximize the generative likelihood of the corpus given the target categories $P(\mathcal{D}|\mathcal{C})$, while enforcing that the category-conditional likelihood distributions are clearly separable.
In brief, based on the generative process, $P(\mathcal{D}|\mathcal{C})$ is formulated to indicate how likely (i) each document $d_i$ comes from its category $c_{d_i}$, and (ii) each word $w_j$ co-occurs with its document $d_i$ and context words $w_k$.
The loss function is described as follows.
\begin{equation}
\label{eq:josdloss}
\begin{split}
\mathcal{L}_{emb} = -\log P(\mathcal{D}|\mathcal{C}) &+ \Omega(\mathcal{C}), \\
P(\mathcal{D}|\mathcal{C}) = \prod_{d_i\in\mathcal{D}}p(d_i|c_{d_i}) &\prod_{w_j\in d_i} p(w_j|d_i) \hspace{-10pt} \prod_{w_k\in \text{cw}(w_j; d_i)} \hspace{-10pt} p(w_k|w_j) \\
\approx \prod_{c_k\in\mathcal{C}} p(w_{c_k}|c_k) \cdot \prod_{d_i\in\mathcal{D}} &\prod_{w_j\in d_i} p(w_j|d_i) \hspace{-10pt} \prod_{w_k\in\text{cw}(w_j;d_i)} \hspace{-10pt} p(w_{k}|w_j),
\end{split}
\end{equation}
where $\Omega(\mathcal{C})$ is the term for minimizing the semantic correlation between the target categories, defined by $\log \prod_{c_i,c_j\in\mathcal{C}} p(c_j|c_i)$, and $\text{cw}(w_j;d_i)$ is the set of surrounding words in a local context window for the center word $w_j$.
As the true category of each document $c_{d_i}$ is unknown, we replace the term $p(d_i|c_{d_i})$ with the category-conditional likelihood of target-category names $p(w_{c_k}|c_k)$ to utilize weak supervision.
To optimize the embedding vector of each entity $w_j$, $d_i$, and $c_k$ (denoted by $\wvec{j}$, $\dvec{i}$, and $\cvec{k}$, respectively) based on $\mathcal{L}_{emb}$, we need to model each probability (or likelihood) in Equation~\eqref{eq:josdloss} by using the embedding vectors.
First of all, we define the generative likelihood of documents and words conditioned on each category, $p(d_i|c_k)$ and $p(w_j|c_k)$, by the von Mises-Fisher (vMF) distribution, which is a spherical distribution centered around $\cvec{k}$, to obtain a spherical space.
\begin{equation}
\label{eq:vmf}
\begin{split}
p(d_i|c_k) &= \text{vMF}(\dvec{i};\cvec{k},\kappa_{c_k}) = n(\kappa_{c_k})\exp(\kappa_{c_k}\cos(\dvec{i}, \cvec{k})) \\
p(w_j|c_k) &= \text{vMF}(\wvec{j};\cvec{k},\kappa_{c_k}) = n(\kappa_{c_k})\exp(\kappa_{c_k}\cos(\wvec{j}, \cvec{k}))
\end{split}
\end{equation}
where $\kappa_{c_k}\geq 0$ is the \textit{concentration} parameter, $n(\kappa_{c_k})$ is the normalization constant, and the \textit{mean direction} of each vMF distribution is modeled by the category embedding vector $\cvec{k}$.
Then, we also need to define the probability of word-document and word-word co-occurrence, $p(w_j|d_i)$ and $p(w_k|w_j)$, as well as that of inter-category correlation, $p(c_j|c_i)$.
To this end, we simply use the cosine (i.e., directional) similarity, which can serve as a measure of semantic coherence in the spherical space,
i.e., $p(w_j|d_i)\propto\exp(\cos(\wvec{j},\dvec{i}))$, $p(w_k|w_j)\propto\exp(\cos(\wvec{k},\wvec{j}))$, and $p(c_j|c_i)\propto\exp(\cos(\cvec{j},\cvec{i}))$.
Combining a max-margin loss function~\cite{vilnis2015word, vendrov2016order, ganea2018hyperbolic, meng2019spherical} with the category-conditional likelihood and the co-occurrence probability defined above, the objective of our text embedding in Equation~\eqref{eq:josdloss} is summarized as follows.
\begin{equation}
\label{eq:josdopt}
\begin{split}
&\sum_{\substack{d_i\in\mathcal{D}}} \hspace{-25pt} \sum_{\substack{w_j\in d_i\\\hspace{26pt} w_k\in\text{cw}(w_j;d_i)}} \hspace{-25pt} \max \left(\vvec{k'}^\top\wvec{j} - \vvec{k}^\top\wvec{j} + \wvec{j'}^\top\dvec{i} - \wvec{j}^\top\dvec{i} + m, 0 \right) \\
&\qquad - \sum_{c_k\in\mathcal{C}} \left(\log(n(\kappa_{c_k})) + \kappa_{c_k}\wvec{c_k}^\top\cvec{k}\right) \cdot \mathbbm{1}\left[\wvec{c_k}^\top\cvec{k} < m\right]\\
&\qquad + \hspace{-4pt}\sum_{c_i,c_j\in\mathcal{C}}\max\left(\cvec{j}^\top\cvec{i} - m, 0 \right) \\
&\quad \text{s.t.}\quad \forall w, d, c, \quad \lVert\wvec{ }\rVert=\lVert\vvec{ }\rVert=\lVert\dvec{ }\rVert=\lVert\cvec{ }\rVert=1, \kappa_c \geq 0,
\end{split}
\raisetag{38pt}
\end{equation}
where $m$ is the margin size and $\mathbbm{1}$ is the indicator function.
Similar to previous work on word embedding~\cite{mikolov2013distributed, pennington2014glove}, each word $w_j$ has two independent embedding vectors as the center word $\wvec{j}$ and the context word $\vvec{j}$, and the negative samples $w_{j'}$ and $w_{k'}$ are randomly selected from the vocabulary.
To sum up, the first term optimizes the similarity of each document and its words, and each word and its context words.
The second term pulls the category-indicative words (i.e., the given category names) close to the corresponding category vectors, while the third term makes the category vectors far apart from each other.
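To make the objective in Equation~\eqref{eq:josdopt} concrete, the following sketch (our own simplification, ignoring the $\kappa$-dependent second term and assuming unit-norm vectors) evaluates the hinge term for a single $(w_j, d_i)$ pair with one negative sample each, and the category-separation term, which we sum over ordered pairs $i\neq j$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple_margin_loss(w_j, v_k, v_k_neg, d_i, w_j_neg, m=0.25):
    # First term of the objective: the true context word v_k and document d_i
    # should score at least margin m above the negative samples v_k_neg, w_j_neg.
    return max(dot(v_k_neg, w_j) - dot(v_k, w_j)
               + dot(w_j_neg, d_i) - dot(w_j, d_i) + m, 0.0)

def category_separation_loss(c_vecs, m=0.25):
    # Third term: penalize pairs of distinct category vectors whose
    # similarity exceeds the margin m (summed over ordered pairs i != j).
    return sum(max(dot(c_vecs[j], c_vecs[i]) - m, 0.0)
               for i in range(len(c_vecs)) for j in range(len(c_vecs)) if i != j)
```

With orthogonal category vectors the separation term vanishes, while nearly parallel category vectors are penalized, matching the intent of the third term.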
\subsubsection{Confidence scoring by target-category pseudo-labeling}
\label{subsubsec:docretrieval}
The category-conditional likelihood of a document $\text{vMF}(\dvec{};\cvec{},\kappa_c)$ in our text embedding space can serve as a good confidence measure, because it encodes the semantic relevance of documents to each target category.
Therefore, we generate soft pseudo-labels of all unlabeled documents by using their category-conditional likelihood, then calculate the confidence from the pseudo-label.
To this end, we present two strategies to measure the category-specific relevance score, $r:\mathcal{D}\times\mathcal{C}\mapsto \mathbb{R}$.
They either directly obtain the relevance by using a target document vector, denoted by $r_d(d, c)$, or indirectly capture it based on the proximity (i.e., nearest neighbor documents and words) in the embedding space, denoted by $r_w(d, c)$.
\begin{equation}
\label{eq:vmfscore}
\begin{split}
r_{d}(d, c) &\propto \text{vMF}(\dvec{}; \cvec{}, \kappa_c) \\
r_{w}(d, c) &\propto \hspace{-10pt} \sum_{(d', w)\in \mathcal{N}^{k,j}(d)} \hspace{-10pt} sim(\dvec{}, \dvec{}')\cdot sim(\dvec{}', \wvec{}) \cdot \text{vMF}(\wvec{}; \cvec{}, \kappa_c),
\end{split}
\end{equation}
where $\mathcal{N}^{k,j}(d)$ is the set of document-word pairs $(d', w)$ that consist of $k$ neighbor documents and their $j$ neighbor words identified by their similarity.
For the computation of $r_w$, we use the cosine similarity among documents and words, as all the embedding vectors reside on the spherical space.
This proximity-based relevance score can improve the robustness to the noise in the embedding space, by additionally leveraging the similarity of words and documents~\cite{fouche2020mining}.
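A minimal sketch of the proximity-based score $r_w$ in Equation~\eqref{eq:vmfscore}: we use the vMF density unnormalized (i.e., dropping $n(\kappa_c)$, which only rescales the relevance per category), and the neighbor set $\mathcal{N}^{k,j}(d)$ is passed in precomputed.

```python
import math

def cos_sim(u, v):
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def relevance_w(d_vec, neighbors, c_vec, kappa):
    # neighbors: precomputed list of (d'_vec, w_vec) pairs from N^{k,j}(d).
    # exp(kappa * cos(w, c)) stands in for the unnormalized vMF(w; c, kappa).
    return sum(cos_sim(d_vec, dp) * cos_sim(dp, w)
               * math.exp(kappa * cos_sim(w, c_vec))
               for dp, w in neighbors)
```

Neighbor words aligned with the category direction contribute large positive terms, while misaligned words contribute little, which is the robustness effect described above.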
Then, the pseudo-label of each document is obtained by normalizing the category-specific relevance scores over the target categories as follows.
\begin{equation}
\label{eq:pseudolabel}
\hat{y}_c(d) = \frac{\exp( r(d, c)/T)}{\sum_{c'\in\mathcal{C}}\exp( r(d, c')/T)}
\end{equation}
where $T$ is the temperature parameter that controls the smoothness of a probability distribution~\cite{hinton2015distilling}.
Finally, the embedding-based confidence of document $d$ is defined by the maximum value of the soft pseudo-label $\hat{\mathbf{y}}(d)$.
\begin{equation}
\label{eq:confemb}
\text{conf}_{emb}(d) = \max_{c\in\mathcal{C}} \hat{y}_c(d)
\end{equation}
This embedding-based confidence can be used for detecting out-of-category documents by itself, but OOCD\xspace makes use of it to filter out less confident documents for its next step.
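The pseudo-labeling and confidence of Equations~\eqref{eq:pseudolabel} and~\eqref{eq:confemb} amount to a temperature-scaled softmax over the per-category relevance scores; a minimal sketch (the temperature value is illustrative):

```python
import math

def pseudo_label(relevances, T=0.5):
    # Softmax over category-specific relevance scores with temperature T.
    logits = [r / T for r in relevances]
    mx = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def conf_emb(relevances, T=0.5):
    # Embedding-based confidence: the maximum entry of the soft pseudo-label.
    return max(pseudo_label(relevances, T))
```

A document whose relevance concentrates on one category receives a confidence close to 1, while a document equally related to several categories receives a low confidence.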
\subsection{Classifier-based Confidence Scoring}
\label{subsec:confclf}
To take advantage of advanced neural architectures that effectively capture the contextual information within a document, OOCD\xspace utilizes a neural classifier for measuring the final confidence.
Note that the classifier takes the sequence of word tokens as an input document, not the embedding vector obtained from the first step.
\subsubsection{Neural classifier training}
\label{subsubsec:clftrain}
For training a neural classifier, we build the training set by utilizing both the pseudo-labels and confidences, described in Equation~\eqref{eq:pseudolabel} and~\eqref{eq:confemb}.
Depending on a given set of target categories, a large number of out-of-category documents could exist in the corpus, and using such documents for training the classifier degrades the performance of target-category discrimination.
Thus, we collect only the \textit{confident} documents, whose embedding-based confidence is larger than a filtering threshold $\tau_{emb}$, with their pseudo-labels as follows.
\begin{equation}
\label{eq:confdocuset}
\begin{split}
&\mathcal{D}_{conf} = \left\{\left(d, \hat{\mathbf{y}}(d)\right) \vert \text{ conf}_{emb}(d) > \tau_{emb}, \forall d \in \mathcal{D} \right\}
\end{split}
\end{equation}
\smallsection{Pre-training the classifier with pseudo-labels}
Using the confident documents in the training set, OOCD\xspace pre-trains a neural classifier by minimizing the cross-entropy between their pseudo-labels and the prediction output of the classifier.
This pre-training process distills the knowledge from the text embedding space into the classifier through the soft pseudo-labels~\cite{hinton2015distilling};
it eventually optimizes the classifier to mimic the relevance of each confident document to the target categories.
\begin{equation}
\label{eq:ptloss}
\mathcal{L}_{pretrain} = -\sum_{d\in\mathcal{D}_{conf}}\sum_{c\in\mathcal{C}} \hat{y}_c(d)\cdot\log p(c|d)
\end{equation}
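The pre-training loss is a standard soft-label cross-entropy per confident document; a minimal sketch (the caller sums it over $\mathcal{D}_{conf}$):

```python
import math

def soft_cross_entropy(y_hat, p, eps=1e-12):
    # -sum_c y_hat_c * log p(c|d); eps guards against log(0).
    return -sum(y * math.log(max(q, eps)) for y, q in zip(y_hat, p))
```

The loss is minimized exactly when the classifier's prediction matches the soft pseudo-label, which is how the embedding knowledge is distilled.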
\smallsection{Refining the classifier with self-training}
After the classifier is pre-trained by the confident documents and their pseudo-labels, OOCD\xspace further refines the classifier based on a self-training approach~\cite{xie2016unsupervised, meng2018weakly}.
The self-training process bootstraps the classifier;
its high-confidence predictions on input documents are used to estimate their new targets (i.e., labels).
In detail, it gradually updates the output by minimizing the cross-entropy between the enhanced-but-consistent target $q(c|d)$ and the current prediction $p(c|d)$.
\begin{equation}
\label{eq:stloss}
\mathcal{L}_{refine} = -\sum_{d\in\mathcal{D}_{conf}}\sum_{c\in\mathcal{C}} q(c|d)\cdot\log p(c|d)
\end{equation}
The soft label $q(c|d)$ is inferred from the current prediction $p(c|d)$;
i.e., $q(c|d)=\frac{p(c|d)^2/f(c)}{\sum_{c'\in\mathcal{C}}p(c'|d)^2/f(c')}$ where $f(c)=\sum_{d\in\mathcal{D}_{conf}}p(c|d)$ is the soft-frequency for category $c$.
Note that this is particularly effective for out-of-category detection, because it encourages the classifier to produce more confident predictions only for the confident documents in the training set $\mathcal{D}_{conf}$.
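The target computation above can be sketched directly from the formula; the prediction matrix and its shape are hypothetical.

```python
import numpy as np

def self_training_targets(p):
    """Sharpened self-training targets q(c|d) = (p(c|d)^2 / f(c)) / Z_d,
    where f(c) is the soft frequency of category c over D_conf."""
    f = p.sum(axis=0)                      # f(c) = sum_d p(c|d)
    weighted = p ** 2 / f                  # p(c|d)^2 / f(c)
    return weighted / weighted.sum(axis=1, keepdims=True)
```

Squaring the predictions and dividing by the soft frequency pushes each row toward its dominant category while discounting categories that already absorb much of the probability mass.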
\subsubsection{Confidence scoring by target-category prediction}
\label{subsubsec:oocd}
Similar to the embedding-based confidence computed from the pseudo-category label, the classifier-based confidence can be obtained from the category prediction result.
OOCD\xspace defines the final confidence by using the maximum softmax probability for target-category classification, which is the output of the neural classifier.
\begin{equation}
\label{eq:confclf}
\text{conf}_{clf}(d) = \max_{c\in\mathcal{C}} p(c|d) = \max_{c\in\mathcal{C}} \frac{\exp(logit_{c, d})}{\sum_{c'\in\mathcal{C}}\exp(logit_{c',d})}
\end{equation}
Based on the final confidence, OOCD\xspace ranks all documents, which makes it possible to distinguish out-of-category documents from their in-category counterparts.
In addition to the maximum value of category prediction (i.e., maximum softmax probability), its negative entropy can also be used as a confidence measure~\cite{wang2019effective, fouche2020mining}; i.e., $\text{conf}_{clf}(d) = -\mathcal{H}[p(c|d)] = \sum_{c\in\mathcal{C}} p(c|d)\cdot\log p(c|d)$ where $\mathcal{H}$ is the entropy of an input distribution.
We empirically found no significant difference between the two confidence measures in terms of detection performance, so we simply use the maximum softmax probability.
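Both confidence measures can be sketched in a few lines; the prediction matrix is hypothetical.

```python
import numpy as np

def conf_max_softmax(p):
    """Maximum softmax probability per document (Eq. confclf)."""
    return p.max(axis=1)

def conf_neg_entropy(p, eps=1e-12):
    """Alternative: negative entropy of the category distribution."""
    return (p * np.log(p + eps)).sum(axis=1)
```

Both measures assign higher confidence to peaked category distributions than to near-uniform ones, which is consistent with the empirical observation that they behave similarly in practice.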
\subsection{Experimental Setting}
\label{subsec:expset}
\subsubsection{Datasets}
\label{subsubsec:dataset}
For our experiments, we use real-world corpora of two different domains:
\textbf{NYT\xspace}\footnote{The news articles are crawled by using https://developer.nytimes.com/} and \textbf{arXiv\xspace}\footnote{The abstracts of arXiv\xspace papers are crawled from https://arxiv.org/}.
We crawled the documents from 26 categories in 5 different sections (for NYT\xspace), and 34 categories in 3 different sections (for arXiv\xspace).
To consider various types of outliers, we include the documents collected from the other categories/sections (i.e., local outlier) and the ones from the other domain (i.e., global outlier) while keeping their ratio very small (1$\sim$2\%).
The statistics and category information of each corpus are summarized in Tables~\ref{tbl:datastats} and~\ref{tbl:catinfo}.
In our problem setting, where only target-category names are available, the category label of each document is never used for the task itself; it is used only for evaluation, to determine whether each document is in-category or out-of-category in each target scenario.
\begin{table}[t]
\caption{All inlier category (and section) names.}
\centering
\resizebox{0.99\linewidth}{!}{%
\begin{tabular}{rll}
\hline
& \textbf{Section} & \textbf{Category} \\\hline
\multirow{7}{*}{\rotatebox{90}{NYT\xspace}}
& politics & federal budget, surveillance, affordable care act, immigration, \\
& & law enforcement, gay rights, gun control, military, abortion \\
& arts & dance, television, music, movies \\
& business & stocks and bonds, energy companies, economy, \\
& & international business \\
& science & cosmos, environment \\
& sports & hockey, basketball, tennis, golf, football, baseball, soccer \\\hline
\multirow{6}{*}{\rotatebox{90}{arXiv\xspace}}
& \multirow{1}{*}{math} & math.NA, math.AG, math.FA, math.NT, math.AP, math.OC, \\
& & math.ST, math.PR, math.DG, math.CO, math.RT, math.DS, \\
& & math.GR, math.RA, math.SG, math.AT, math.MG \\
& physics & ph.optics, ph.flu-dyn, ph.atom-ph, ph.ins-det, ph.acc-ph, \\
& & ph.plasm-ph, ph.chem-ph, ph.class-ph \\
& cs & cs.CV, cs.NI, cs.SE, cs.CC, cs.CR, cs.LO, cs.SY, cs.DS, cs.DB \\
\hline
\end{tabular}
}
\label{tbl:catinfo}
\end{table}
For both the datasets, we make use of AutoPhrase\xspace~\cite{shang2018automated} to tokenize and segment raw texts of each document, thereby obtaining the phrase (or word) embedding vectors, as done in~\cite{fouche2020mining}.
Since our task only focuses on the documents in an input corpus, we do not consider other subword tokenizers~\cite{sennrich2016neural, kudo2018sentencepiece} mainly used to solve the out-of-vocabulary issue.
\subsubsection{Baselines}
\label{subsubsec:baseline}
We compare the performance of OOCD\xspace with that of other baseline methods which are designed for various tasks, including outlier detection, one-class classification, and weakly-supervised classification.
We re-categorize them into (i) unsupervised methods that do not utilize the information about categories at all, and (ii) weakly supervised methods that can focus on target categories by utilizing the target-category names.
The first category is unsupervised methods for text outlier detection.
\begin{itemize}
\item \textbf{ANCS\xspace}: A simple baseline that defines the outlier score of each document by its average negative cosine similarity to all the other documents in the corpus.
\item \textbf{LOF\xspace}~\cite{breunig2000lof}: The most popular outlier detector based on the local density, which computes the local outlier factor.
\item \textbf{RS-Hash\xspace}~\cite{sathe2016subspace}: A subspace hashing-based outlier detection method. We use the MurmurHash3 function with multiple random seeds for randomized hashing.
\item \textbf{TONMF\xspace}~\cite{kannan2017outlier}: A text outlier detector based on non-negative factorization of the term-document matrix. It computes the $l_2$-norm of each column (i.e., document) in its residual matrix.
\item \textbf{CVDD\xspace}~\cite{ruff2019self}: A neural one-class classifier designed for text data. It encodes an input document based on the multi-head self-attention architecture.
\end{itemize}
Since the density-based detection methods (i.e., ANCS\xspace, LOF\xspace, and RS-Hash\xspace) are agnostic to the underlying embedding space, we consider two spherical embedding spaces optimized with/without target-category names:
(i) the \textit{non-discriminative} space only capturing word-word and word-document contexts~\cite{meng2019spherical},
and (ii) the \textit{discriminative} space additionally enforcing discrimination of target categories (Section~\ref{subsec:confemb}).
The other category is weakly supervised methods that are capable of considering the semantic relevance between each unlabeled document and target categories to some extent.
\begin{itemize}
\item \textbf{vMF\textsubscript{d}\xspace}, \textbf{vMF\textsubscript{w}\xspace}: The embedding-based confidence scoring methods that exploit separable vMF distributions (Equation~\eqref{eq:confemb}). vMF\textsubscript{d}\xspace and vMF\textsubscript{w}\xspace respectively adopt $r_d$ and $r_w$ as their category-specific relevance score (Equation~\eqref{eq:vmfscore}).\footnote{vMF\textsubscript{w}\xspace can be thought of as a weakly supervised variant of kj-NN\xspace~\cite{fouche2020mining}. For computing the category-specific relevance score (Equation~\eqref{eq:vmfscore}), kj-NN\xspace directly utilizes the category label of each document, whereas vMF\textsubscript{w}\xspace is tailored to use the discriminative embedding space obtained with the help of weak supervision.}
\item \textbf{CVDD\textsubscript{d}\xspace}, \textbf{CVDD\textsubscript{w}\xspace}: The weakly supervised variants of CVDD\xspace~\cite{ruff2019self}. Being tailored to use only the confident documents retrieved by vMF\textsubscript{d}\xspace and vMF\textsubscript{w}\xspace, they train the multi-head attention architecture for one-class classification.
\item \textbf{SM-Class\xspace}: A target-category classifier trained on a small labeled set, which includes only the documents that contain one of the category names (i.e., \underline{S}imple \underline{M}atch).
\item \textbf{WeST-Class\xspace}~\cite{meng2018weakly}: A weakly supervised target-category classifier. It is trained on the set of pseudo-documents, whose words are generated by random sampling from each vMF distribution modeled in our embedding space.
\item \textbf{OOCD\textsubscript{d}\xspace}, \textbf{OOCD\textsubscript{w}\xspace}: The proposed confidence scoring based on our target-category classifier (Equation~\eqref{eq:confclf}). The training set of confident documents and their pseudo-labels is obtained by vMF\textsubscript{d}\xspace and vMF\textsubscript{w}\xspace, respectively.
\end{itemize}
All the methods based on target-category classification (i.e., SM-Class\xspace, WeST-Class\xspace, OOCD\xspace) measure the confidence of unlabeled documents by using the classifier output; i.e., maximum softmax probability (Equation~\eqref{eq:confclf}).
\input{041tbl_odperf}
\begin{table*}[thbp]
\caption{Four different evaluation scenarios for out-of-category document detection. Each scenario lists the target-category names with the ratio of out-of-category documents, whose category (or section) name is not included in the list.}
\centering
\resizebox{0.99\linewidth}{!}{%
\begin{tabular}{rlclc}
\hline
& \textbf{NYT\xspace} & \textbf{Out-Ratio} & \textbf{arXiv\xspace} & \textbf{Out-Ratio} \\\hline
Major-Sec & sports, politics & 0.2353 & math, cs & 0.1779 \\
Minor-Sec & science, business, arts & 0.7733 & cs, physics & 0.6991 \\
Homo-Cat & hockey, tennis, basketball, golf & 0.7335 & math.(NA, AG, FA, NT, AP, OC) & 0.6868 \\
Hetero-Cat & federal budget, music, stocks and bonds, environment, baseball & 0.7794 & math.(GR, RA, SG), ph.plasm-ph, cs.(CV, NI) & 0.8735 \\\hline
\end{tabular}
}
\label{tbl:targetinfo}
\end{table*}
\input{042tbl_oocperf}
\subsubsection{Evaluation metrics}
\label{subsubsec:metrics}
As evaluation metrics for our detection tasks, we measure (i) the area under the receiver operating curve (\textbf{AUROC}), (ii) the area under the precision-recall curve (\textbf{AUPR}),\footnote{In cases of AUPR and F1, we measure the values where out-of-category (or outlier) documents are considered as positive.}
and (iii) the F1 score at a top-$O$ list of documents (\textbf{F1@O}), where $O$ is the number of actual out-of-category (or outlier) documents; this is equivalent to using the confidence threshold $\Gamma$ that satisfies $p_{out}\cdot{|\mathcal{D}|} = {|\{d| \text{conf}(d) <\Gamma, \forall d\in\mathcal{D} \}|}$, where $p_{out}$ is the out-of-category (or outlier) ratio for each target scenario.
For the classifier-based methods, including CVDD\xspace, SM-Class\xspace, WeST-Class\xspace, and OOCD\xspace, we report the average of three independent runs, each of which uses different random seeds for initialization.
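The F1@O metric can be sketched as follows; note that when exactly $O$ documents are flagged, precision and recall coincide, so F1 equals both. The arrays below are hypothetical.

```python
import numpy as np

def f1_at_O(conf, is_out):
    """F1 when the O least-confident documents are flagged as
    out-of-category, with O the true number of such documents."""
    O = int(is_out.sum())
    flagged = np.argsort(conf)[:O]      # lowest-confidence documents
    tp = int(is_out[flagged].sum())
    if tp == 0:
        return 0.0
    precision = recall = tp / O         # |flagged| == |positives| == O
    return 2 * precision * recall / (precision + recall)
```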
\subsubsection{Implementation details}
We implement our OOCD\xspace framework and the other baseline methods by using PyTorch,\footnote{All the experiments are conducted on NVIDIA Titan Xp.} except for using the official author codes of TONMF\xspace\footnote{https://github.com/ramkikannan/outliernmf} and CVDD\xspace\footnote{https://github.com/lukasruff/CVDD-PyTorch}.
For a fair comparison, SM-Class\xspace, WeST-Class\xspace, and OOCD\xspace adopt the same CNN architecture with a single 1D convolutional layer~\cite{kim-2014-convolutional}.
We initialize their word embedding layer by the embedding vectors obtained from our first step, and use the Adam optimizer to train each classifier.
We simply fix the temperature parameter $T$ to 0.1,\footnote{We empirically found that this hyperparameter hardly affects the final performance, and the sensitivity analysis will be provided in Section~\ref{subsec:paramanal}.} while tuning the filtering threshold $\tau_{emb}$ with respect to the ratio of confident documents in the training set,
$|\mathcal{D}_{conf}|/|\mathcal{D}|\in\{0.1, \ldots, 1.0\}$.
In cases of vMF\textsubscript{w}\xspace, CVDD\textsubscript{w}\xspace, and OOCD\textsubscript{w}\xspace, the numbers of neighbor documents and words in their relevance scores are set to the values suggested by~\cite{fouche2020mining}, i.e., $k=j=30$.
\subsection{Outlier detection}
\label{subsec:perf_od}
We first evaluate all the methods in terms of identifying a small number of outlier documents among a large number of unlabeled documents in a text corpus.
In Table~\ref{tbl:odperf}, our frameworks (i.e., OOCD\textsubscript{d}\xspace and OOCD\textsubscript{w}\xspace) achieve the best performance for both the datasets.
We observe that there is no remarkable difference between unsupervised methods and weakly supervised methods, except for the OOCD\xspace framework.
This is because the outliers, which are different from the majority of the inliers, can be detected to some extent even in an unsupervised way, by using their low density or their deviation from normality.
On the contrary, OOCD\xspace leverages prior knowledge about the scope of the inlier categories in the corpus so that it measures the confidence of unlabeled documents by their semantic relevance to the inlier categories.
This allows it to make reliable predictions that are not affected by the density or diversity of the outliers, which leads to a significant improvement in outlier detection performance.
\subsection{Out-of-category Detection}
\label{subsec:perf_ooc}
Next, we compare the out-of-category detection performance of OOCD\xspace with that of the other baselines.
We consider four different scenarios where target categories are flexibly designated as listed in Table~\ref{tbl:targetinfo}.
\begin{itemize}
\item \textbf{Major-Section}, \textbf{Minor-Section}:
To demonstrate that high-level category (i.e., section) names also can be used for detecting out-of-category documents, we choose the subsets of sections by their number of documents in descending (i.e., major) and ascending (i.e., minor) order.
\item \textbf{Homo-Category}, \textbf{Hetero-Category}:
To consider different levels of semantic correlation among target categories, we select the categories from a single section (i.e., homogeneous) or multiple sections (i.e., heterogeneous).
\end{itemize}
In case of arXiv\xspace, we use a single main keyword of each category instead of its category name, since each category name is an abbreviation that is difficult to embed correctly into the text embedding space.
Table~\ref{tbl:oocdperf} shows that OOCD\xspace considerably outperforms the other baselines for all the target scenarios.
To be specific, the unsupervised outlier detection methods fail to distinguish the out-of-category documents.
Particularly, they show poor performance as the ratio of out-of-category documents increases in the corpus (e.g., Minor-Section and Hetero-Category), because they mainly employ the similarity (or distance) to other documents rather than to the target categories.
Among the weakly supervised methods, OOCD\xspace beats each type of the baselines in the following aspects:
\smallsection{Comparison with the embedding-based methods (vMF\textsubscript{d}\xspace and vMF\textsubscript{w}\xspace)}
Compared to the confidence scoring methods based on the document embedding vectors, a neural classifier of OOCD\xspace is better at capturing the contextual information in each document, which eventually helps to accurately compute the category-specific semantic relevance scores.
\smallsection{Comparison with the one-class classifiers (CVDD\textsubscript{d}\xspace and CVDD\textsubscript{w}\xspace)}
OOCD\xspace learns category-discriminative features of documents by training a multi-class classifier, so it can leverage much richer category (or topic) information than CVDD\xspace which learns stereotypical (i.e., normal) features of in-category documents.
As a consequence, a target-category classifier more effectively distinguishes the out-of-category documents compared to a one-class classifier.
\smallsection{Comparison with the target-category classifiers (SM-Class\xspace and WeST-Class\xspace)}
The existing weakly supervised classifiers show poor performance despite their neural architectures.
This result strongly indicates that pseudo-labeling the documents on the embedding space is a more effective way to distill the knowledge from the embedding space, compared to simply using the limited number of exactly matched documents (SM-Class\xspace) or synthesized pseudo-documents (WeST-Class\xspace).
\input{043tbl_ablation}
\input{044tbl_case}
\begin{figure}[thbp]
\centering
\includegraphics[width=\linewidth]{FIG/embspace_vertical.png}
\caption{The visualization of the discriminative text embedding space. Colored and white circles represent in-category and out-of-category documents, respectively, and black asterisks show the category embedding vectors (i.e., the mean direction of each vMF distribution). Best viewed in color.}
\label{fig:embspace}
\end{figure}
\subsection{Ablation Study}
\label{subsec:ablation}
We provide an ablation analysis on out-of-category detection performance, to validate the effectiveness of the following components:
(i) a two-step approach to confidence scoring $\text{conf}_{clf}$,
(ii) training the classifier on confident documents retrieved by a filtering threshold $\tau_{emb}$,
(iii) pseudo-labeling by softmax normalization with the temperature parameter $T$, and
(iv) refining the classifier via self-training process $\mathcal{L}_{refine}$.
Table~\ref{tbl:ablation} reports the AUROC on both the datasets in various scenarios.\footnote{Inlier-Category is the scenario that all inlier category names are specified.}
As discussed in Section~\ref{subsec:perf_ooc}, $\text{conf}_{clf}$ consistently shows higher AUROC scores than $\text{conf}_{emb}$ with the assistance of its neural classifier.
However, when the entire corpus is used to train the classifier (without the filtering threshold $\tau_{emb}$), $\text{conf}_{clf}$ does not work well because the target-category classifier is trained on numerous out-of-category documents with their noisy labels.
Furthermore, both the temperature $T$ and the self-training loss $\mathcal{L}_{refine}$ are helpful to enhance the detection performance of our OOCD\xspace framework, by making it produce more confident (i.e., sharper) pseudo-category labels and target-category prediction, respectively.
In particular, $\mathcal{L}_{refine}$ significantly increases the AUROC compared to the case of using only $\mathcal{L}_{pretrain}$ for all the target scenarios.
\subsection{Qualitative Analysis}
\label{subsec:qualanal}
We visualize the discriminative embedding space for each target scenario by using t-SNE.
In case of arXiv\xspace that has a large number of documents, we plot 10,000 document vectors randomly selected from the corpus.
Each document vector is colored according to its true category, where each color group is used to mark different sections so as to indicate the semantic correlation among the target categories.
Figure~\ref{fig:embspace} shows that document embedding vectors gather around each category embedding vector to follow the vMF distribution, thus they can be distinguished in our embedding space according to their categories to some extent.
In case of arXiv\xspace, some categories fail to be aligned with their documents, because the category information of the arXiv\xspace documents is difficult to capture accurately using only the co-occurrence of words and documents.
Nevertheless, the similarity (or distance) of document vectors to each category vector successfully encodes their semantic relevance to the category in general, thus it can serve as a measure of confidence used to distinguish out-of-category from in-category documents.
In addition, we examine how the confidence rank of each document varies depending on detection methods: ANCS\xspace, LOF\xspace, vMF\textsubscript{d}\xspace, and OOCD\textsubscript{d}\xspace.
Table~\ref{tbl:casestudy} describes the result on four example documents from the NYT\xspace dataset.
In summary, OOCD\textsubscript{d}\xspace correctly ranks all of them while the other methods fail.
The first document, which belongs to an inlier-but-minor category (\textit{cosmos}), is correctly identified as inlier by vMF\textsubscript{d}\xspace and OOCD\textsubscript{d}\xspace, because the weakly supervised methods are aware that \textit{cosmos} is one of the inlier categories and utilize its relevance to the category.
In cases of the second/third documents, which are the representatives of local/global outliers (\textit{real estate} and \textit{math.OC}), each of them is incorrectly ranked by ANCS\xspace and LOF\xspace, respectively.
To be precise, the unsupervised detection methods are likely to make unreliable predictions when these outlier documents become locally or globally dense, as discussed in Section~\ref{subsec:otd}.
Finally, the fourth document from out-of-category (\textit{baseball}) is only correctly identified by OOCD\xspace among the weakly supervised methods.
Its relevance to some of the target categories (e.g., \textit{tennis} and \textit{basketball}) can be overestimated by the embedding-based method vMF\textsubscript{d}\xspace, due to their similar word occurrence, such as \textit{game}, \textit{matchup}, and \textit{scoreless}.
\subsection{Parameter Analysis}
\label{subsec:paramanal}
Finally, we study how sensitive the performance of OOCD\xspace is to its hyperparameters:
(i) the filtering threshold $\tau_{emb}$ to build the set of confident documents, and
(ii) the temperature parameter $T$ for pseudo-labeling.
We first assess the quality of $\mathcal{D}_{conf}$ in terms of the pseudo-label consistency, defined by $\frac{1}{|\mathcal{D}_{conf}|}\cdot\sum_{(d, \hat{\mathbf{y}}(d))\in\mathcal{D}_{conf}}\mathbbm{1}[\argmax_{c\in\mathcal{C}}\hat{y}_c(d)=y(d)]$, varying the filtering threshold $\tau_{emb}$.
Then, we investigate the final performance of OOCD\textsubscript{d}\xspace with respect to the hyperparameters.
In Figure~\ref{fig:paramanal}, the pseudo-label consistency gets lower as $|\mathcal{D}_{conf}|$ becomes larger, showing that $\tau_{emb}$ controls the trade-off between the number of confident documents and the accuracy of their pseudo-labels.
For this reason, the final performance of OOCD\textsubscript{d}\xspace degrades both when using only a very small number of highly confident documents and when simply using all documents regardless of their confidence.
On the other hand, the performance does not largely depend on the choice of $T$, even though its smaller value brings the improvement compared to the standard softmax normalization (i.e., $T=1$).
In conclusion, OOCD\xspace can achieve better discrimination between in-category and out-of-category documents with the help of the proper hyperparameter values.
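The pseudo-label consistency used in this analysis reduces to an argmax-match rate over the confident set; a sketch with hypothetical arrays:

```python
import numpy as np

def pseudo_label_consistency(y_hat, y_true):
    """Fraction of confident documents whose argmax pseudo-label matches
    the true category label (used only for this evaluation)."""
    return float((y_hat.argmax(axis=1) == y_true).mean())
```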
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{./FIG/plot_tautemp_effect.png}
\caption{The performance of OOCD\textsubscript{d}\xspace varying $\tau_{emb}$ and $T$.}
\label{fig:paramanal}
\end{figure}
\section{Introduction}
\label{sec:intro}
\input{010introduction}
\section{Related Work}
\label{sec:related}
\input{020relatedwork}
\section{Out-of-category Detection}
\label{sec:method}
\input{030proposed}
\section{Experiments}
\label{sec:exp}
\input{040experiments}
\section{Conclusion}
\label{sec:conc}
\input{050conclusion}
\smallsection{Acknowledgement}
This work was supported by the NRF grant (No. 2020R1A2B5B03097210), the IITP grant (No. 2018-0-00584, 2019-0-01906), US DARPA KAIROS Program (No. FA8750-19-2-1004), SocialSim Program (No. W911NF-17-C-0099), INCAS Program (No. HR001121C0165), National Science Foundation (IIS-19-56151, IIS-17-41317, IIS 17-04532), and the Molecule Maker Lab Institute: An AI Research Institutes program (No. 2019897).
\bibliographystyle{IEEEtran}
\section{Introduction}
\input{Introduction}
\section{System Model}
\input{SystemModel}
\subsection{Energy Transfer Phase}
\input{DownlinkStage}
\subsection{Information Transmission Phase}
\input{UplinkStage}
\section{Problem Formulation and Optimal Solution}
\label{Section:ProblemFormulationAndOptimalSolution}
\input{ProblemFormulationSolution}
\subsection{Problem Formulation}
\input{ProblemFormulation}
\subsection{Single-User WPCNs}
\label{Section:SingleUser}
\input{SingleUserSystems}
\subsection{Multi-User WPCNs}
\input{MultiUserOptimal}
\subsubsection{Solution of Problem (\ref{Eqn:FunctionPsi})}
\label{Section:OptimalFunctionPsi}
\input{FunctionSolution}
\subsubsection{Solution of Problem (\ref{Eqn:OptimalResourceAlloc})}
\label{Section:OptimalResourceAllocation}
\input{ResourceAllocSolution}
\subsubsection{Solution of Problem (\ref{Eqn:OptimalTransmitVectors})}
\label{Section:OptimalVectors}
\input{VectorsSolution}
\section{Low-Complexity Design of Multi-User WPCNs}
\input{MultiUserSuboptimal}
\subsection{Massive MISO WPCNs}
\label{Section:MassiveMIMO}
\input{MassiveMIMOScheme}
\subsection{Suboptimal MRT-based Scheme}
\input{ScaledMassiveMIMO}
\subsection{Suboptimal SDR-based Scheme}
\input{SuboptimalScheme3}
\section{Numerical Results}
\label{Section:SimulationResults}
\input{NumericalResults}
\subsection{Simulation Setup}
\input{SimulationSetup}
\subsection{Complexity Analysis}
\input{ComplexityAnalysis}
\subsection{Performance Analysis}
\input{PerformanceAnalysis}
\section{Conclusions}
\input{Conclusions}
\appendices
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\thesubsection}{\thesection.\arabic{subsection}}
\renewcommand{\thesectiondis}[2]{\Alph{section}:}
\renewcommand{\thesubsectiondis}{\thesection.\arabic{subsection}:}
\section{Proof of Proposition \ref{Theorem:SingleUser}}
\label{Appendix:PropSU}
\input{ProofPropSU}
\section{Proof of Proposition \ref{Theorem:MuProp1}}
\label{Appendix:Prop1}
\input{ProofProp1}
\section{Proof of Proposition \ref{Theorem:MuProp2}}
\label{Appendix:Prop2}
\input{ProofProp2}
\section{Proof of Proposition \ref{Theorem:MuProp3}}
\label{Appendix:Prop3}
\input{ProofProp3}
\section{Proof of Lemma \ref{Theorem:Lemma}}
\label{Appendix:LemmaProof}
\input{ProofLemma}
\section{Proof of Proposition \ref{Theorem:MassiveMIMO}}
\label{Appendix:Prop5}
\input{ProofProp5}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
Type Ia supernovae (SNe Ia) are remarkable cosmological standardisable
candles that are routinely used to measure cosmological parameters
\citep{1998AJ....116.1009R,1999ApJ...517..565P,2007ApJ...659...98R,2009ApJS..185...32K,2011ApJ...737..102S,2012ApJ...746...85S}.
As these studies become increasingly more precise, systematic
uncertainties become a significant component of the error budget
\citep{2011ApJS..192....1C}. Thus an important consideration in their
future use is the degree to which SN Ia properties evolve with
redshift or depend on their environment -- and how well any
evolutionary effects can be controlled.
The host galaxies and environments of SNe Ia have long been a profitable
route to probe astrophysical effects in the SN Ia population, with the
observed properties of SNe Ia known to correlate with the physical
parameters of their host galaxy stellar populations. SNe Ia in
elliptical or passively evolving galaxies are intrinsically fainter
than SNe Ia in spiral or star-forming galaxies, and possess narrower,
faster evolving (or lower `stretch') light curves
\citep{1995AJ....109....1H,1996AJ....112.2391H,1999AJ....117..707R,2000AJ....120.1479H,2001ApJ...554L.193H,2005ApJ...634..210G,2006ApJ...648..868S}.
The impact of these effects on the cosmological results is small due
to observed correlations between SN Ia light curve shape and
luminosity \citep{1993ApJ...413L.105P}, and between SN Ia optical
colour and luminosity \citep{1996ApJ...473...88R,1998A&A...331..815T}.
When these empirical relations are applied to SN Ia datasets, only
small correlations remain between SN Ia luminosity and host galaxy
properties, such as their stellar masses or star formation rates
\citep{2010ApJ...715..743K,2010MNRAS.406..782S,2010ApJ...722..566L}.
These residual trends between SN luminosity and host galaxy properties
can be accounted for at the level required by current cosmological
analyses, either by directly using host galaxy information in the
cosmological fits \citep{2011ApJ...737..102S} or by applying
probabilistic corrections to the absolute magnitudes of the SNe
\citep{2012ApJ...746...85S}. However, as the size of other systematic
uncertainties in the cosmological analyses are reduced as, for
example, the accuracy of the photometric calibration procedures
improve \citep[][]{2013A&A...552A.124B}, understanding the physical
origin of these astrophysical correlations will become critical for
future, larger samples \citep[e.g. Dark Energy
Survey;][]{2012ApJ...753..152B}.
The two primary competing ideas are that either progenitor metallicity
or progenitor age (or a combination of both) play a role in
controlling SN Ia luminosities -- but directly measuring either is
extremely difficult. Indirect information can be obtained on
metallicity from the ultraviolet (UV) SN spectra
\citep[e.g.][]{1998ApJ...495..617H,2000ApJ...530..966L}, and while
this has provided useful insights into evolution within SN Ia
populations
\citep{2008ApJ...674...51E,2008ApJ...684...68F,2012MNRAS.426.2359M,2012AJ....143..113F},
the interpretation of any individual event is extremely complex even
with very high quality data \citep{2013arXiv1305.2356M}. There is
currently no technique to estimate the age of the progenitor star from
the SN spectrum.
Thus many studies have instead focused on detailed spectroscopic
studies of the host galaxies of the SNe Ia rather than the events
themselves, assembling statistical samples with which to search for
correlations between the physical parameters defining the host
galaxies, and the SN Ia properties. Such global host galaxy properties
are believed to represent reasonable tracers of the SN progenitor
star, at least in a statistical sense \citep{2011MNRAS.414.1592B}.
Common spectroscopic measurements include star formation rates and gas
phase metallicity measured from nebular emission lines
\citep{2005ApJ...634..210G,2011ApJ...743..172D,2012A&A...545A..58S,2012arXiv1211.1386J,2013arXiv1304.4720C,2013arXiv1309.1182R},
and stellar metallicity and age measured from
spectral absorption indices
\citep{2008ApJ...685..752G,2012A&A...545A..58S,2012arXiv1211.1386J}.
A number of intriguing results have arisen from these studies. Based
on the star formation activity of the host galaxy, SNe Ia exploding in
actively star-forming galaxies were found to be brighter than those in
passive galaxies. The SN Ia luminosities were also found to be significantly
correlated with host gas-phase metallicities, with metal-rich galaxies tending
to host fainter SNe Ia than metal-poor galaxies. A similar trend has
also been identified with host stellar metallicity. The stellar age of
the host galaxies also shows a correlation with SN Ia luminosities, in
the sense that fainter SNe Ia preferentially explode in older
populations.
In this paper, we present new spectroscopic observations of the host
galaxies of SNe Ia discovered by the Palomar Transient Factory
\citep[PTF;][]{2009PASP..121.1334R,2009PASP..121.1395L}, a project
designed to explore the optical transient sky. 82 high-quality spectra
of SNe Ia host galaxies were obtained, with precise determinations of
their stellar masses, gas-phase and stellar metallicities, stellar
ages, and star formation rates. We then combine these host parameters
with optical multi-colour light curves of the SNe in an effort to
investigate the physical origin of the trends discussed above.
A plan of the paper follows. In Section~\ref{sec:observ-data-reduct}
we introduce the SN Ia sample, and the spectroscopic observations of
their host galaxies. Section~\ref{sec:host-galaxy-param} discusses
the various measurements that can be made from these host galaxy
spectra, and the methods for measuring star formation rates, host
galaxy stellar masses, ages and metallicities. In Section
\ref{sec:depend-sn-prop} we examine how the key SN Ia photometric
properties depend on these host parameters, and we discuss our
findings in Section~\ref{sec:discussion}. We conclude in Section
\ref{sec:conclusions}. Throughout this paper, we assume
$\mathrm{H_0}=70$\,km\,s$^{-1}$\,Mpc$^{-1}$ and a flat universe with
$\ensuremath{\Omega_{\mathrm{M}}}=0.3$.
\section{OBSERVATIONS AND DATA REDUCTION}
\label{sec:observ-data-reduct}
\begin{figure*}
\centering
\includegraphics*[scale=0.8]{plot/sample_selection.pdf}
\caption{The distribution in SN redshift, SDSS
$r$-band host galaxy apparent magnitude
($m_{r}$), and host galaxy stellar mass (\ensuremath{M_{\mathrm{stellar}}})
of our 82 PTF SNe Ia host galaxies. The larger PTF
SN Ia sample is shown as the filled grey histogram
(527 SNe Ia in the redshift histogram, and 443
events in the $m_r$ and \ensuremath{M_{\mathrm{stellar}}}\ panels), and our
host galaxy sample studied as the open red
histogram.}
\label{sample_selection}
\end{figure*}
In this section, we present the sample of SNe Ia and their host
galaxies studied in this paper. We discuss the SN sample selection,
the observations of the host galaxies and their data reduction, and
the photometric light curve data for the SNe.
\subsection{SN sample selection}
\label{sec:sample-selection}
The SNe Ia studied in this paper were discovered by the PTF, a project
which operated from 2009 to 2012 and used the CFH12k wide-field survey camera
\citep{2008SPIE.7014E.163R} mounted on the Samuel Oschin 48-inch telescope (P48)
at the Palomar Observatory. The observational cadences used to discover the
SNe ranged from hours up to $\sim5$ days. SN candidates were identified in
image subtraction data and ranked using both simple cuts on the detection
parameters and a machine learning algorithm \citep{2012PASP..124.1175B}, and then
visually confirmed by members of the PTF collaboration or, from mid-2010 onwards,
via the citizen science project `Galaxy Zoo: Supernova' \citep{2011MNRAS.412.1309S}.
The latter identified 8 of the SNe studied in this paper.
Promising SN candidates were then sent for spectroscopic confirmation using
a variety of telescope/instrument combinations. These included:
The William Herschel Telescope (WHT) and the Intermediate dispersion Spectrograph and Image System (ISIS),
the Palomar Observatory Hale 200-in and the double spectrograph,
the Keck-I telescope and the Low Resolution Imaging Spectrometer (LRIS),
the Keck-II telescope and the DEep Imaging Multi-Object Spectrograph (DEIMOS),
the Gemini-N telescope and the Gemini Multi-Object Spectrograph (GMOS),
the Very Large Telescope and X-Shooter, the Lick Observatory 3m Shane telescope and the Kast Dual Channel Spectrograph,
the Kitt Peak National Observatory 4m telescope and the Ritchey-Chretien Spectrograph,
and the University of Hawaii 88-in and the Supernova Integral Field Spectrograph (SNIFS).
All of the spectra used to confirm the SNe in this paper as SN Ia are available from
the WISeREP archive \citep{2012PASP..124..668Y}.
PTF operated in either the $R$ or $g^\prime$ band (hereafter $R_\textrm{P48}$ and $g_\textrm{P48}$),
switching from $g_\textrm{P48}$ band around new moon to $R_\textrm{P48}$ band when the sky was brighter.
Multi-colour light curves were not obtained by default for all SNe using the P48; instead they were
assembled via triggered observations on other robotic facilities, e.g., the Liverpool Telescope \citep[LT;][]{2004SPIE.5489..679S},
the Palomar 60-in \citep[P60;][]{2006PASP..118.1396C} and the Las Cumbres Observatory Global Telescope Network \citep[LCOGT;][]{2013arXiv1305.2437B} Faulkes Telescopes (FTs; clones of the LT).
The full PTF SN Ia sample comprises some 1250 spectroscopically
confirmed events. However, many of these are at relatively high
redshift and thus have poor quality P48 light curves, or were
discovered at the start or end of an observing season and thus have
incomplete P48 light curves. In both of these cases no multi-colour
information is available. Thus the first task is to define a parent
sample of high-quality SNe Ia from which targets for host galaxy
studies can be selected. Several criteria were used.
Firstly, the PTF SN Ia program (generally) restricted multi-colour
follow-up to those events with a redshift ($z$) of $z<0.09$. The
motivation for this was to define a sample less susceptible to
selection effects: the median redshift of all PTF SNe Ia is 0.1, and
at $z=0.09$, a typical SN Ia has a peak apparent magnitude of
$R_\textrm{P48}\simeq18.5$, $\simeq2.5$\,mag above the PTF detection limit of 21
(a typical SN Ia at $z=0.09$ has $R_\textrm{P48}=21$ at 13 days before maximum light).
We apply the same redshift constraint, giving a parent sample of 527
SNe Ia. Secondly, for this host galaxy study we only considered SNe Ia
with a multi-colour light curve: only SNe Ia discovered and confirmed
before maximum light were sent for detailed monitoring, with around
220 events followed in this way. Finally, for this paper, we only
selected `older' SNe Ia for study, i.e., those SNe Ia which had
already faded by the time the host galaxy spectrum was taken. We
required the spectra to be taken $>$1 year after the SN explosion. This leaves a potential
sample of 140 events, all discovered during 2009--2011, which are
suitable for our study. Of these events, we had sufficient telescope
time to observe 82 host galaxy spectra, selected at random from the
parent sample. The host galaxies of the SNe Ia were identified by
inspecting images taken by the SDSS. Most of the host galaxies in our
sample can be identified unambiguously, except for PTF09dav, whose
likely host galaxy lies $\sim41$\,kpc from the SN
\citep{2011ApJ...732..118S}.
A final caveat is that any biases that exist in the selection of the parent
PTF sample will also be present in our SN Ia sample. The potentially most
serious of these is the difficulty in finding SNe on very bright galaxy
backgrounds, where the contrast of the SN over the host galaxy is low.
This can occur in the cores of galaxies \citep[e.g.,][]{1979A&A....76..188S}
but also more generally for faint events in bright host galaxies \citep[e.g.,][]{2010AJ....140..518P},
which of course are also likely to be the most metal rich. However, with
modern image subtraction techniques this is only an issue when the SN
brightness drops to $<10\%$ of that of the host background \citep{2010AJ....140..518P},
and the redshift cuts used in our sample definition mean this is unlikely to occur for normal SNe Ia.
Fig.~\ref{sample_selection} shows a comparison of the distributions of
our host galaxy sample and the various larger PTF samples in redshift,
host galaxy $r$-band apparent magnitude ($m_r$), and host galaxy
stellar mass, \ensuremath{M_{\mathrm{stellar}}}\ (the determination of \ensuremath{M_{\mathrm{stellar}}}\ is described
in Section~\ref{sec:host-galaxy-param}). The parent PTF sample shown
in Fig.~\ref{sample_selection} contains the 527 $z<0.09$ PTF SNe Ia,
although only 443 of these have Sloan Digital Sky Survey (SDSS)
$u g r i z$ imaging data from which
\ensuremath{M_{\mathrm{stellar}}}\ estimates could be made
(Section~\ref{sec:host-photometry}). Of the 84 events for which SDSS
photometry is not available, 74 lie outside the SDSS footprint, and
the remaining 10 SNe Ia have no host galaxy visible in the SDSS
images. K-S tests give probabilities of 35, 77 and 99 percent that
our host galaxy sample and the larger PTF sample are drawn from the
same population in redshift, $m_r$, and \ensuremath{M_{\mathrm{stellar}}}, respectively. Thus we find no
strong evidence that our SN Ia host galaxy sample is biased with
respect to the larger PTF sample.
\subsection{Host galaxy observations}
\label{sec:observations}
\begin{table*}
\centering
\caption{The instrumental setups used for the spectroscopic data.}
\begin{tabular}{cccccc}
\hline\hline
Telescope & Spectrograph & \multicolumn{2}{c}{Gratings/Grisms} & Dichroic & $\lambda$ coverage\\
& & (Red) & (Blue) & & (\AA)\\
\hline
Gemini & GMOS & R400 & B600 & -- & $3600$--$9400$\\
WHT & ISIS & R158R & R300B & 5336\,\AA & $3000$--$10000$\\
Lick & Kast & 300/7500 & 600/4310 & 5500\,\AA & $3000$--$11000$\\
Keck & LRIS & 400/8500 & 600/4000 & 5696\,\AA & $3200$--$10000$\\
\hline
\end{tabular}
\label{setup}
\end{table*}
All of our host galaxy spectra were obtained using spectrographs
operating in long-slit mode on four different facilities.
Table~\ref{setup} summarises the instruments and setups used for our
spectroscopic data, and an observational log of the galaxies studied
in this paper can be found in Table~\ref{obs-log}. Generally, our
strategy was to place the slit through both the positions of the SN
and the centre of the host galaxy. Thus we were careful to ensure that
the observations were taken at low airmass to avoid losses due to not
observing at the parallactic angle. The median airmass of the spectroscopic
data in this work is $\sim1.15$.
Most of our SN Ia host galaxy spectra were taken at the Gemini
Observatory during 2010--2012 (59 out of 82 hosts), using both Gemini
North and Gemini South. We used GMOS \citep{2004PASP..116..425H} with
a $3600$--$9400$\,\AA\ wavelength coverage provided using two
different settings (B600 and R400 gratings). Two exposures in each
setting were taken, with a $\sim100$ pixel shift in wavelength space
in order to avoid the gaps between the detectors (the GMOS array is
composed of three CCDs). Total integration times were around two hours
per source.
A further 18 SN Ia host spectra were taken at the 4.2-m WHT using ISIS,
providing $3000$--$10000$\,\AA\ wavelength coverage. ISIS is a
dual-armed spectrograph, and we used the R300B and R158R gratings in
the blue and red arms, respectively. The 5300 dichroic was used.
Two brighter host galaxy spectra were taken with the 3-m Shane
telescope at the Lick Observatory, using the Kast Spectrograph \citep{Kast_spectrograph}
providing $3000$--$11000$\,\AA\ wavelength coverage. Here, the 300/7500
grating was used for the red arm and 600/4310 grism for the blue arm,
using the D55 dichroic.
Finally, the 10-m Keck-I telescope was used to observe three
fainter ($m_r\geq20$) host galaxies using LRIS
\citep{1995PASP..107..375O} with a $3200$--$10000$\,\AA\ wavelength
coverage. LRIS is also a dual-armed spectrograph. The 400/8500 grating
was used for the red arm and the 600/4000 grism for the blue arm, with
the D560 dichroic.
\subsection{Spectral data reduction}
\label{sec:data-reduction}
We reduced our data using a custom data reduction pipeline written in
\textsc{iraf}\footnote{The Image Reduction and Analysis Facility
(\textsc{iraf}) is distributed by the National Optical Astronomy
Observatories, which are operated by the Association of Universities
for Research in Astronomy, Inc., under cooperative agreement with
the National Science Foundation.}. For data taken at the Gemini
Observatory, we also used some tasks from the Gemini \textsc{iraf}
package. Our pipeline follows standard procedures, including bias
subtraction, flat-fielding, cosmic-ray removal \citep[using
\textsc{lacosmic};][]{2001PASP..113.1420V} and a wavelength
calibration. The \textsc{iraf} task \textsc{apall} is then used to
extract the 1-D spectrum from each 2-D frame, and a (relative) flux
calibration and telluric correction are performed by comparing to
standard stars. `Error' spectra are derived from a knowledge of the
CCD properties and Poisson statistics, and are tracked throughout the
reduction procedure. As all the spectra in our sample are taken either
with a spectrograph with two different grating settings (Gemini), or
with dual-arm spectrographs with a dichroic (WHT, Lick, Keck), red and
blue spectra for each object, with different wavelength coverages and
dispersions, need to be combined to produce the final spectrum. This
was performed by rebinning to a common dispersion, and combining (with
weighting) to form a final contiguous spectrum.
\begin{figure}
\centering
\includegraphics*[scale=0.5]{plot/g-r_compare.pdf}
\caption{The $g-r$ colour derived from our host galaxy
spectra, compared with that determined from the SDSS
broad-band photometry. The line of equality is shown
                as a dotted line.}
\label{g-r}
\end{figure}
\begin{figure*}
\centering
\includegraphics*[scale=0.8]{plot/sn_stretch_colour_statistics.pdf}
\caption{The grey filled histograms show the SN stretch ($s$)
and colour (\ensuremath{\mathcal{C}}) distributions of our PTF sample
(see Section~\ref{sec:sn-photometry-light} for more
details). The SNLS sample of
\citet{2010A&A...523A...7G} at $z<0.6$ is
over-plotted in the red open histogram.}
\label{sn_stretch_colour_statistics}
\end{figure*}
We test our relative flux calibration by comparing synthetic
photometry measured from our final host spectra, with SDSS Data
Release 9 \citep[DR9;][]{2012ApJS..203...21A} photometry of the same
objects. The SDSS model magnitudes are used here.
Fig.~\ref{g-r} shows the $g-r$ colour of our
spectra plotted against the $g-r$ colour from the SDSS
photometry. Overall our data show a good consistency with the SDSS
photometry: the r.m.s. scatter is 0.12\,mag, with a mean offset
of 0.01\,mag.
We correct our absolute flux calibration using the same SDSS
photometry (this is important for host galaxy parameters measured
based on absolute line strength, for example star formation rates).
Again, we measure a synthetic SDSS $r$-band magnitude for our observed
spectra, and compare to the SDSS photometry, scaling our observed
spectra so the two magnitudes are equal.
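As an illustrative sketch (not part of our reduction pipeline; the function name and arguments are ours), the multiplicative rescaling implied by this magnitude comparison can be written as:

```python
def flux_rescale(m_synth, m_sdss):
    """Factor that multiplies the observed spectrum so that its synthetic
    SDSS r-band magnitude (m_synth) matches the catalogue value (m_sdss)."""
    return 10.0 ** (-0.4 * (m_sdss - m_synth))
```

Multiplying the spectrum by this factor shifts its synthetic magnitude by $m_{\mathrm{sdss}}-m_{\mathrm{synth}}$, making the two magnitudes equal.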
Finally, we apply a correction for foreground galactic extinction
prior to de-redshifting the spectra into the rest-frame. The latest
calibration \citep{2011ApJ...737..103S} is used, and the typical Milky
Way value $R_V=3.1$ is assumed, using a
\citet*[][CCM]{1989ApJ...345..245C} law. Although redshift estimates
based on the original SN classification spectrum are available, we
confirm these using emission and absorption lines in the galaxy
spectra; the two redshift measures are consistent in all cases.
The quality of our spectra is quite diverse. We estimate the
signal-to-noise ratio (S/N) over a region in the centre of each
spectrum ($\sim5500$--$6000$\,\AA). The median flux and standard deviation
within that region are measured, and the S/N taken as the ratio of the
two. Our spectra have a S/N ranging from 5 to 53 with a
median of $\simeq28$.
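The S/N estimate described above amounts to a median-over-scatter ratio in a central window; a minimal sketch (array names are hypothetical, not our actual pipeline code) is:

```python
import numpy as np

def estimate_snr(wave, flux, lo=5500.0, hi=6000.0):
    """S/N taken as the ratio of the median flux to the standard deviation
    of the flux within a central wavelength window (in Angstrom)."""
    region = flux[(wave >= lo) & (wave <= hi)]
    return np.median(region) / np.std(region)
```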
\subsection{SN photometry and light curve fitting}
\label{sec:sn-photometry-light}
Optical light curves of our SNe Ia in $gri$ were obtained at the LT,
the P60, and the FTs.
There are 66 events with available LT (54 events), P60 (6 events) or FT
(6 events) light curves, complemented by P48 $R_\textrm{P48}$ (and
sometimes $g_\textrm{P48}$) light curves from the rolling PTF search.
In all cases, reference images were made by stacking data taken $>$1
year after the SN explosion; these references were then subtracted from the images
containing SN light to remove the host galaxy. We measure the SN
photometry using a point-spread-function (PSF) fitting method. In
each image frame, the PSF is determined from nearby field stars, and
this average PSF is then fit at the position of the SN event weighting
each pixel according to Poisson statistics, yielding a SN flux and
flux error.
The SiFTO light curve fitting code \citep{2008ApJ...681..482C} was
used to fit the light curves. SiFTO works in flux space, manipulating
a model of the spectral energy distribution (SED) and synthesising an
observer-frame light curve from a given spectral time-series in a set
of filters at a given redshift, allowing an arbitrary normalization in
each observed filter (i.e., the absolute colours of the template being
fit are not important and do not influence the fit). The time-axis of
the template is adjusted by a dimensionless relative `stretch' ($s$)
factor to fit the data, where the input template is defined to have
$s=1$. Once the observer-frame SiFTO fit is complete, a second step
can be used to estimate rest-frame magnitudes in any given set of
filters, provided there is equivalent observer-frame filter coverage,
and at any epoch. This is performed by adjusting the template SED at
the required epoch to have the correct observed colours from the SiFTO
fit, correcting for extinction along the line of sight in the Milky
Way, de-redshifting, and integrating the resultant SED through the
required filters. This process is essentially a cross-filter
k-correction, with the advantage that all the observed data contribute
to the SED shape used.
We used SiFTO to determine the time of maximum light in the rest-frame
$B$-band, the stretch, the rest-frame $B$-band apparent magnitude at
maximum light $m_B$, and the $B-V$ colour at $B$-band maximum light,
\ensuremath{\mathcal{C}}. When estimating the final SN colour via the template SED adjustment,
filters that are very close in effective wavelength can introduce discontinuities
in the adjusted spectrum. Thus we remove the P48 $R_\textrm{P48}$ and $g_\textrm{P48}$
filters in this process where data from the LT, P60, or FTs are also available. Note
that the P48 filters are always used to estimate the stretch and time of maximum light.
Fig.~\ref{sn_stretch_colour_statistics} shows the
distribution of our SNe Ia in stretch and colour. As a comparison, we
over-plot the higher-redshift Supernova Legacy Survey (SNLS) sample
studied by \citet{2010A&A...523A...7G} for SNLS events at $z<0.6$
where the SNLS sample is more complete \citep{2010AJ....140..518P}. We
generally find a good agreement in the stretch and colour
distributions, although our sample probes faster (lower stretch) and
redder SNe Ia than SNLS.
\subsection{Host galaxy photometry}
\label{sec:host-photometry}
In later sections, we will use broad-band photometry of the SN Ia host
galaxies to estimate the host galaxy stellar mass. Where available we
use SDSS $u g r i z$ photometry, but five of our SNe with host
galaxy spectra lie outside the SDSS footprint. For these we instead
use the LT $g^\prime r^\prime i^\prime$ images taken as part of the SN
photometric follow-up campaign, calibrated using observations of
either \citet{2002AJ....123.2121S} standard stars, or of the SDSS
stripe 82 \citep{2007AJ....134..973I}. The host photometry is measured
by \textsc{sextractor} \citep{1996A&AS..117..393B}, which we use in
dual-image mode with FLUX\_AUTO photometry, ensuring the same
consistent aperture is used in each filter.
\section{HOST GALAXY PARAMETER DETERMINATION}
\label{sec:host-galaxy-param}
Having described the sample and data that make up our host galaxy
sample, we now discuss the techniques used to fit the SN Ia host
galaxy spectra, and estimate various physical parameters such as the
star formation rate (SFR) and the gas-phase metallicity. We use
various techniques, including emission line measurements to determine
SFRs and gas-phase metallicities, spectral fitting to determine
stellar metallicities and ages, and broad-band photometric fitting to
determine stellar masses. We first introduce the technique used to
make the emission line measurements.
\subsection{Emission line measurement}
\label{sec:emiss-line-meas}
The emission lines and stellar continuum of the host galaxy spectra
are fit using the Interactive Data Language (\textsc{idl}) codes
\textsc{ppxf} \citep{2004PASP..116..138C} and \textsc{gandalf}
\citep{2006MNRAS.366.1151S}. \textsc{ppxf} fits the line-of-sight
velocity distribution (LOSVD) of the stars in the galaxy in pixel
space using a series of stellar templates. The advantage of working in
pixel space is that emission lines and bad pixels are easily excluded
when fitting the continuum. Before fitting the stellar continuum, a
list of emission lines is used to mask this potential contamination.
The stellar templates are based on the MILES empirical stellar library
\citep{2006MNRAS.371..703S,2010MNRAS.404.1639V}, giving a wavelength
coverage of 3540\,\AA\ to 7410\,\AA\ with a spectral resolution of
2.51\,\AA, and a variety of different metallicities and ages. A total
of 276 templates are selected with $[M/H]=-1.71$ to $+0.22$ in 6 steps
and ages ranging from $0.079$ to $14.12$\,Gyr in 46 steps.
After measuring the stellar kinematics with \textsc{ppxf}, the
emission lines and stellar continuum are simultaneously fit by
\textsc{gandalf}. \textsc{gandalf} treats the emission lines as additional Gaussian
templates. Through an iterative fitting process, \textsc{gandalf}
locates the best velocities and velocity dispersions of each Gaussian
template and also the optimal combination of the stellar templates
which have already been convolved with the LOSVD. This results in the
emission lines and stellar continuum being fit simultaneously to each
spectrum.
Extinction is handled using a two-component reddening model. The first
component assumes diffuse dust throughout the whole galaxy that
affects the entire spectrum including emission lines and the stellar
continuum, while the second is a local dust component around the
nebular regions, and therefore affects only the emission lines. The
first component is determined by comparing the observed spectra to the
un-reddened spectral templates. However, the local dust component is
constrained only if the Balmer decrement (the H$\alpha$ $\lambda6563$
to H$\beta$ $\lambda4861$ line ratio) can be measured. For galaxies
without Balmer lines in their spectra, only the diffuse dust
component is fit (26 out of 82 hosts). To ensure the emission lines in our spectra are
well-measured, we required a S/N $>3$ (S/N is defined as the
ratio of line amplitude to the noise of the spectrum) for emission
lines used in the determination of host parameters.
\subsection{AGN Contamination}
\label{sec:agn-contamination}
\begin{figure}
\centering
\includegraphics*[scale=0.5]{plot/bpt.pdf}
\caption{The BPT diagram used to identify the AGN host
galaxies in our sample. Two different criteria are
over-plotted: \citet{2001ApJ...556..121K} and
                \citet{2003MNRAS.346.1055K}. Galaxies lying to the
                right of the \citet{2001ApJ...556..121K} line are
                regarded as potential AGN host galaxies in this work
                (open circles); normal star-forming galaxies are
                plotted as filled circles. The representative error is
shown in the bottom-right corner.}
\label{bpt}
\end{figure}
Our next task is to check for active galactic nuclei (AGN) activity in
our host galaxies. In galaxies hosting an AGN, non-thermal emission
from the AGN can dominate over that from the hot stars, leading to a
different ionisation source for the nebular \ensuremath{\mathrm{H}\,\textsc{ii}}\ regions. This in
turn means that the emission line measurements performed in the
previous section cannot be interpreted using the techniques discussed
later in this section.
We adopt the BPT diagram \citep*{1981PASP...93....5B}, shown in
Fig.~\ref{bpt} for our sample. The galaxies are divided into two
groups using either the criteria proposed by
\citet{2001ApJ...556..121K} or \citet{2003MNRAS.346.1055K}. Any
galaxies lying to the right of these lines in Fig.~\ref{bpt} are
regarded as potential AGN host galaxies. We adopt the
\citeauthor{2001ApJ...556..121K} criterion: a galaxy is
identified as an AGN if
\begin{equation}
\log\left([\ensuremath{\mathrm{O}\,\textsc{iii}}]/\mathrm{H}{\beta}\right)>\frac{0.61}{\log\left([\ensuremath{\mathrm{N}\,\textsc{ii}}]/\mathrm{H}{\alpha}\right)-0.47}+1.19
\end{equation}
where \ensuremath{\mathrm{N}\,\textsc{ii}}\ is the flux of the $\lambda6584$ line, and \ensuremath{\mathrm{O}\,\textsc{iii}}\ that of the
$\lambda\lambda4959,5007$ lines. However, this requires the four emission
lines to be well detected. For those spectra with only \ensuremath{\mathrm{O}\,\textsc{iii}}\ and
H$\beta$ or \ensuremath{\mathrm{N}\,\textsc{ii}}\ and H$\alpha$ available, `two-line' methods can be
used \citep{2003ApJ...597..142M}: a galaxy is
identified as an AGN if
\begin{equation}
\log\left([\ensuremath{\mathrm{N}\,\textsc{ii}}]/\mathrm{H}\alpha\right)>-0.2
\end{equation}
or
\begin{equation}
\log\left([\ensuremath{\mathrm{O}\,\textsc{iii}}]/\mathrm{H}\beta\right)>0.8
\end{equation}
Note that these two-line criteria are more conservative than the \citeauthor{2001ApJ...556..121K} criterion.
There are 11 (3 by the two-line methods) galaxies in our sample identified as AGN, and these are
discarded from the sample for further emission line analyses. A further 5 galaxies would have been excluded
based on the \citeauthor{2003MNRAS.346.1055K} criterion. We have checked that including these objects does
not affect our results.
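The classification logic above can be summarised in a short sketch (illustrative only; the inputs are hypothetical measured $\log_{10}$ line ratios):

```python
def is_agn(log_nii_ha=None, log_oiii_hb=None):
    """Flag a potential AGN host from BPT line ratios (log10 values).

    Uses the Kewley et al. (2001) curve when both ratios are measured,
    and the more conservative two-line cuts otherwise.
    """
    if log_nii_ha is not None and log_oiii_hb is not None:
        # Points at or right of the curve's vertical asymptote are AGN-like.
        if log_nii_ha >= 0.47:
            return True
        return log_oiii_hb > 0.61 / (log_nii_ha - 0.47) + 1.19
    if log_nii_ha is not None:
        return log_nii_ha > -0.2
    if log_oiii_hb is not None:
        return log_oiii_hb > 0.8
    raise ValueError("at least one line ratio is required")
```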
\subsection{Determination of host parameters}
\label{sec:determ-host-param}
Having measured the emission lines of the SN hosts, and removed
galaxies likely hosting AGN from our sample, we now turn to the
estimation of various host galaxy physical properties: the host galaxy
SFR, the gas-phase metallicity, the mean stellar metallicity and age,
and the stellar mass. A complete list of the host
parameters measured in this section can be found in
Table~\ref{host_para_phot} and Table~\ref{host_para_spec}.
\begin{figure*}
\includegraphics*[scale=0.9]{plot/z-z.pdf}
\caption{The metallicity conversions used in this
paper. We convert the metallicities derived by M91,
KK04 and KD02 to
\citetalias{2004MNRAS.348L..59P} N2, which is the
calibration used for our host galaxy data. In each
plot we use the best linear fit (solid line) and
scatter to represent the accuracy of the conversion.
}
\label{z-z}
\end{figure*}
\subsubsection{Star formation rate}
\label{sec:star-formation-rate}
The SFR of a galaxy can be estimated using nebular lines in the
spectrum, with H$\alpha$ the most popular choice due to its intrinsic
strength and location in the redder part of the spectrum, leading to a
lower susceptibility to dust extinction. As this emission line is
produced from ionising photons generated by the most massive, youngest
stars, the SFR estimated is a nearly instantaneous measure. We adopt
the conversion of \citet{1998ARA&A..36..189K}, which used evolutionary
synthesis models to relate the luminosity of the H$\alpha$ line,
$L(\mathrm{H}\alpha)$, to the SFR via
\begin{equation}
\label{eq:sfr}
\mathrm{SFR} = 7.9 \times 10^{-42} \times L(\mathrm{H}\alpha)\;M_{\odot}\,\mathrm{yr}^{-1}
\end{equation}
with $L(\mathrm{H}\alpha)$ measured in erg\,s$^{-1}$. The relation
assumes case B recombination and a \citet{1955ApJ...121..161S}
initial mass function (IMF). \citet{2004MNRAS.351.1151B} studied the
likelihood distribution of the conversion factor between
$L(\mathrm{H}\alpha)$ and SFR, and found a $\sim0.4$ dex variation
between high-mass and low-mass galaxies, with the
\citet{1998ARA&A..36..189K} conversion factor close to the median
value of their study. As a result, and following
\citet{2011ApJ...743..172D}, we adopt a 0.2\,dex uncertainty in our
SFR measurements.
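The conversion above is a one-line calculation; as a sketch (the function name is ours):

```python
def sfr_from_halpha(l_halpha):
    """Kennicutt (1998) conversion: SFR in Msun/yr from L(Halpha) in erg/s.

    Assumes case B recombination and a Salpeter IMF, as in the text.
    """
    return 7.9e-42 * l_halpha
```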
\subsubsection{Gas-phase metallicity}
\label{sec:gas-phase-metall}
There are various methods for calibrating the gas-phase
metallicity determined by emission line ratios \citep[for a review
see][hereafter KE08]{2008ApJ...681.1183K}. The direct method is to
measure the ratio of the [\ensuremath{\mathrm{O}\,\textsc{iii}}] $\lambda4363$ line to a lower
excitation line to estimate the electron temperature ($T_e$) of the
gas, and then convert it to the metallicity -- the so-called
$T_e$-based metallicity. The disadvantage is that this [\ensuremath{\mathrm{O}\,\textsc{iii}}] line is
very weak and difficult to detect unless a very high S/N spectrum can
be acquired; in our sample the [\ensuremath{\mathrm{O}\,\textsc{iii}}] $\lambda4363$ line was
detected in only three spectra.
Instead, we use indirect metallicity calibrations, and, following the
recommendation of \citetalias{2008ApJ...681.1183K}, adopt the
empirical relations from \citet[][hereafter
PP04]{2004MNRAS.348L..59P}. \citetalias{2004MNRAS.348L..59P} fit the
relations between various emission line ratios and the $T_e$-based
metallicity measurement for a sample of \ensuremath{\mathrm{H}\,\textsc{ii}}\ regions.
The \citetalias{2004MNRAS.348L..59P} `N2' method uses the ratio of
[\ensuremath{\mathrm{N}\,\textsc{ii}}] $\lambda 6584$ to H$\alpha$. As these lines are very close in
wavelength space, this is a (nearly) reddening-free method, and covers
both the upper-branch ($\log$([\ensuremath{\mathrm{N}\,\textsc{ii}}]/[\ensuremath{\mathrm{O}\,\textsc{ii}}]) $>$$-1.2$) and
lower-branch ($\log$([\ensuremath{\mathrm{N}\,\textsc{ii}}]/[\ensuremath{\mathrm{O}\,\textsc{ii}}]) $<$$-1.2$) metallicities.
The \citetalias{2004MNRAS.348L..59P} relation is only valid for
$-2.5<\mathrm{N}2<-0.3$. Gas-phase metallicities for 53 galaxies in
our sample can be derived using this method.
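As an illustrative sketch of the N2 method, the following uses the linear calibration quoted by \citetalias{2004MNRAS.348L..59P}, $12+\log(\mathrm{O/H}) = 8.90 + 0.57\,\mathrm{N2}$; the coefficients are taken from that paper rather than from the text above, and the function name and inputs are hypothetical:

```python
import math

def oh_pp04_n2(f_nii, f_halpha):
    """Gas-phase 12+log(O/H) from the PP04 linear N2 calibration.

    N2 = log10([NII]6584 / Halpha); the relation is valid only for
    -2.5 < N2 < -0.3, matching the range quoted in the text.
    """
    n2 = math.log10(f_nii / f_halpha)
    if not -2.5 < n2 < -0.3:
        raise ValueError("N2 = %.2f outside the PP04 validity range" % n2)
    return 8.90 + 0.57 * n2
```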
For those galaxies outside the valid range of the \citetalias{2004MNRAS.348L..59P} `N2' method,
we follow the \citet[][hereafter KD02]{2002ApJS..142...35K}
method. Unlike the empirical \citetalias{2004MNRAS.348L..59P}
calibration, the \citetalias{2002ApJS..142...35K} technique is derived
based on stellar evolution and photoionization models. For the upper
branch metallicities, we use the ratio of [\ensuremath{\mathrm{N}\,\textsc{ii}}] and [\ensuremath{\mathrm{O}\,\textsc{ii}}]
$\lambda3727$,
and for the lower-branch metallicity, \citetalias{2002ApJS..142...35K}
recommend averaging two methods based on the $R_{23}$ ratio
($([\ensuremath{\mathrm{O}\,\textsc{ii}}]\,\lambda3727+[\ensuremath{\mathrm{O}\,\textsc{iii}}]\,\lambda\lambda4959,5007)/\mathrm{H}\beta$): the \citet[][hereafter
M91]{1991ApJ...380..140M} and \citet[][hereafter
KK04]{2004ApJ...617..240K} relations. For the 29 galaxies where
metallicities are unavailable via the \citetalias{2004MNRAS.348L..59P}
N2 method, 19 can be calibrated following
\citetalias{2002ApJS..142...35K}. The 10 other galaxies show no
detectable emission lines available for the metallicity calibration.
As previous studies have noted, offsets may exist between these
different metallicity calibrations and thus we used the
self-consistent calibrations of \citetalias{2008ApJ...681.1183K}. For
galaxies where it is possible to make more than one metallicity
measurement, we can also compare the results directly. This is shown in
Fig.~\ref{z-z}; after applying a linear fit, the different calibrations compare well.
The observed r.m.s. scatters are (this work/\citetalias{2008ApJ...681.1183K}):
0.06/0.07, 0.06/0.07, 0.05/0.05 and 0.01/0.02 for the
M91--PP04, KK04--PP04, KD02--PP04 and M91--KK04 relations,
respectively. The best-fitting linear trends (the difference between two
different metallicity calibrations) are applied to our metallicity
measurements, although they have no significant effects on the final
results.
\subsubsection{Stellar metallicity and age}
The stellar metallicity and age are normally determined using
absorption features in the spectrum. One widely used method is the
Lick/IDS system \citep{1994ApJS...95..107W,1998ApJS..116....1T}.
Recently, with the availability of high-quality model templates, the
`full spectrum fitting' method has become a popular alternative
\citep[e.g.][]{2005MNRAS.358..363C,2009A&A...501.1269K} to study the
stellar populations, as it exploits more spectral information than
just individual line indices.
In this study we use \textsc{ppxf} to fit the stellar continuum of our
host spectra. The same \textsc{miles} templates described in
Section~\ref{sec:emiss-line-meas} were used. One feature of
\textsc{ppxf} is the linear regularisation performed during the fit,
which can help smooth the weights of the best-fit templates. However,
this feature must be used with caution, as it is a trade-off between
the smoothness and goodness of the fit. Following the procedure
described in \citet{1992nrfa.book.....P}, the regularisation parameter
for each host galaxy was determined such that the resulting fit was
consistent with the observations, but also gave a smooth star
formation history. Finally, the stellar metallicity and age can be
estimated by performing a weighted-average of all the model
templates, given by
\begin{equation}
\langle\log t\rangle=\sum^{N}_{i=1} w_{i}\times \log t_{i}
\end{equation}
and
\begin{equation}
\langle \mathrm{[M/H]}\rangle=\sum^{N}_{i=1} w_{i}\times \mathrm{[M/H]}_{i},
\end{equation}
where $\log t_{i}$, [M/H]$_{i}$ and $w_{i}$ represent the stellar age,
stellar metallicity and weight of the $i$th template. The
$\langle\log t\rangle$ and $\langle \mathrm{[M/H]}\rangle$ are the
mass-weighted age and metallicity over the $N$ templates used to fit the
spectrum. Here we estimated the uncertainty by examining the
dispersion between the results with and without regularisation. An
uncertainty of 0.12\,dex and 0.15\,dex was determined and added to
[M/H] and stellar age, respectively.
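The weighted averages above reduce to a dot product over the template grid. A minimal sketch (the function name and inputs are illustrative; in practice the weights come from the \textsc{ppxf} fit):

```python
import numpy as np

def weighted_stellar_params(weights, log_ages_gyr, metallicities):
    """Mass-weighted mean log(age) and [M/H] over N templates:
    <log t> = sum_i w_i log t_i and <[M/H]> = sum_i w_i [M/H]_i."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise the template weights to unity
    mean_log_age = float(np.sum(w * np.asarray(log_ages_gyr, dtype=float)))
    mean_mh = float(np.sum(w * np.asarray(metallicities, dtype=float)))
    return mean_log_age, mean_mh
```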
A comparison between the host gas-phase and stellar metallicities can
be found in Fig.~\ref{sz-gz}. It is clear that the two metallicities
scale with each other with a positive Pearson correlation coefficient
$\sim0.67$.
\begin{figure}
\centering
\includegraphics*[scale=0.5]{plot/sz-gz.pdf}
\caption{The host stellar metallicity as a function of gas-phase
  metallicity. The median metallicity in each bin (bin size of
  0.2\,dex) is computed and shown as red filled circles.
}
\label{sz-gz}
\end{figure}
\subsubsection{Host stellar mass}
\label{sec:host-stellar-mass-1}
The final parameter of interest is the stellar mass of the host
galaxies. We use the photometric redshift code \textsc{z-peg}
\citep{2002A&A...386..446L}, which is based on the spectral synthesis
code P\'{E}GASE.2 \citep{1997A&A...326..950F}, to estimate \ensuremath{M_{\mathrm{stellar}}}.
\textsc{z-peg} fits the observed galaxy colours
(Section~\ref{sec:host-photometry}) with galaxy SED templates
corresponding to 9 spectral types (SB, Im, Sd, Sc, Sbc, Sb, Sa, S0 and
E). We assume a \citet{1955ApJ...121..161S} IMF. A foreground dust
screen varying from a colour excess of $E(B-V)=0$ to 0.2\,mag in steps
of 0.02\,mag is used.
\textsc{z-peg} is used to locate the best-fitting SED model (in a
$\chi^2$ sense), with the redshift fixed at the redshift of the SN
host galaxy measured from our spectra. The current \ensuremath{M_{\mathrm{stellar}}}\ and the
recent SFR, averaged over the last 0.5\,Gyr before the best fitting
time step, are recorded. Error bars on these parameters are taken from
their range in the set of solutions that have a similar $\chi^2$
\citep[as in][]{2006ApJ...648..868S}. Note that these SFRs are not used
in the analysis in this paper.
\begin{figure*}
\includegraphics*[scale=0.9]{plot/m-sfr.pdf}
\caption{Left: A comparison of the star formation
rates (SFRs) derived from H$\alpha$ luminosities
(SFR$\mathrm{_{spec}}$) to those measured from
broad-band photometry using \textsc{z-peg}
(SFR$\mathrm{_{phot}}$). The solid line shows the
1:1 relation. Right: The SFR measured from
H$\alpha$ as a function of the host \ensuremath{M_{\mathrm{stellar}}}. The
relation determined by the Galaxy And Mass Assembly
survey \citep[GAMA;][]{2012A&A...547A..79F} is
over-plotted. The dotted line shows the 1-$\sigma$
range of the GAMA relation. Some passive galaxies
with a low SFR for their \ensuremath{M_{\mathrm{stellar}}}\ (i.e. a low
specific SFR) can be seen for large \ensuremath{M_{\mathrm{stellar}}}. }
\label{m-sfr}
\end{figure*}
\begin{figure*}
\includegraphics*[scale=0.8]{plot/m-gz_k01_72.pdf}
\caption{The \ensuremath{M_{\mathrm{stellar}}}-metallicity relation derived
for our host galaxy sample. The red solid line is
the best fit from \citetalias{2008ApJ...681.1183K}
using the \citetalias{2004MNRAS.348L..59P} N2
metallicity calibration. The red-dotted lines
represent the r.m.s. residuals from the best fit
to field galaxies. The mean metallicity (blue filled circles)
in each bin is also computed.}
\label{m-gz_k01}
\end{figure*}
The main uncertainty in this procedure is the choice of SED libraries
used in the $\chi^2$ fitting. We use the standard \textsc{z-peg}
libraries for ease of comparison to previous results in the
literature. However we note that improved stellar masses can be
obtained by the use of more recent templates
\citep{2012arXiv1211.1386J}, particularly those that include an
improved treatment of the thermally-pulsing Asymptotic Giant Branch
stage of stellar evolution \citep{2005MNRAS.362..799M}.
A fuller discussion of the uncertainties associated with this
stellar population modelling can be found in \cite{2013arXiv1304.4720C}.
These authors conservatively concluded that the maximal systematic is
$\sim 0.4$\,dex in \ensuremath{M_{\mathrm{stellar}}}, which should be borne in mind when
interpreting our results.
Fig.~\ref{m-sfr} shows a comparison between the SFRs derived from the
H$\alpha$ line to that estimated by \textsc{z-peg}. The mean
difference in $\log(\mathrm{SFR})$ is $\sim0.25$\,dex, with the SFRs
from \textsc{z-peg} systematically larger than those from the
H$\alpha$ luminosity. A similar offset using similar techniques was
also found by \citet{2012ApJ...755...61S}. This offset is perhaps not
surprising; \textsc{z-peg} determines SFRs essentially from $u$-band
data and is therefore sensitive to SFRs over a longer time-period than
the instantaneous H$\alpha$-based measures.
The relation between the spectroscopic SFRs and \ensuremath{M_{\mathrm{stellar}}}\ for our
sample is shown in Fig.~\ref{m-sfr}. We over-plot the relation
determined by the Galaxy And Mass Assembly survey
\citep[GAMA;][]{2012A&A...547A..79F}, which used a similar method as
ours for estimating the SFR. Our results show good consistency with
this relation, although we also sample some massive galaxies with
lower SFRs than the linear relation would predict.
Finally, in Fig.~\ref{m-gz_k01} we plot our metallicities as a
function of \ensuremath{M_{\mathrm{stellar}}}\ (the `mass--metallicity relation'). The
mass--metallicity relation studied by \citetalias{2008ApJ...681.1183K}
using the \citetalias{2004MNRAS.348L..59P} N2 method is over-plotted
for comparison. For consistency, in this plot we have adopted the
same IMF \citep{2001MNRAS.322..231K} and cosmology
($\mathrm{H_{0}=72\,km\,s^{-1}\,Mpc^{-1}}$ and $\ensuremath{\Omega_{\mathrm{M}}}=0.29$) as
used by \citetalias{2008ApJ...681.1183K} for the measurement of
\ensuremath{M_{\mathrm{stellar}}}. It is clear that our SN Ia host galaxies follow a very
similar mass--metallicity relation as that of
\citetalias{2008ApJ...681.1183K}. This will be considered further in
Section~\ref{sec:m-z-relation}.
\begin{figure*}
\includegraphics*[scale=1]{plot/sn_para1.pdf}
\caption{The SN stretch $s$ as a function of host
\ensuremath{M_{\mathrm{stellar}}}\ (top left), SFR (top right), gas-phase
metallicity (middle left), specific SFR (sSFR;
middle right), mass-weighted mean stellar metallicity (lower left) and
stellar age (lower right). The
red points represent the mean stretch in bins of
host parameters.}
\label{sn_para1}
\end{figure*}
\begin{figure*}
\includegraphics*[scale=1]{plot/sn_para2.pdf}
\caption{As Fig.~\ref{sn_para1}, but considering the SN
colour \ensuremath{\mathcal{C}}\ in place of stretch.}
\label{sn_para2}
\end{figure*}
\section{The dependence of SN Ia properties on their host galaxies}
\label{sec:depend-sn-prop}
Having measured various physical parameters of the PTF SN Ia host
galaxies from their spectra and broad-band photometry, we now compare
these parameters with the photometric properties of the SNe. We take
each of the three key SN Ia properties in turn -- stretch (light curve
width), optical colour, and luminosity -- and compare with the
\ensuremath{M_{\mathrm{stellar}}}, the gas-phase metallicity $12+\log(\rmn{O}/\rmn{H})$, the
stellar metallicity [M/H], the stellar age, and the specific SFR (sSFR), the SFR per unit
\ensuremath{M_{\mathrm{stellar}}}\ \citep[][]{1997ApJ...489..559G}. Compared to the SFR, the
sSFR is a more appropriate indicator to measure the relative
star-formation activity of a galaxy as it measures the star-formation
relative to the underlying galaxy stellar mass.
In this section, we will assess the significance of various relations
between the SN properties and their host galaxies. In each case we
split the sample into two groups. A value of $\log M=10.0$ is
used to split between the high- and low-\ensuremath{M_{\mathrm{stellar}}}\ sample, and
$12+\log(\rmn{O}/\rmn{H})=8.65$ and $\mathrm{[M/H]=-0.5}$ are used
(based on the mass-metallicity relation in Fig.~\ref{m-gz_k01} and
relation between gas-phase and stellar metallicities in
Fig.~\ref{sz-gz}) to split between high- and low-metallicity hosts.
For the stellar age, SFR and sSFR, the split points were selected to
make approximately equally sized sub-groups
\citep[e.g.][]{2010MNRAS.406..782S}. The weighted mean of the residuals in
each group is calculated, and the error on the weighted mean is
corrected to ensure a $\chi^{2}_{\mathrm{red}}=1$. The linear fitting
is performed by using the Markov Chain Monte Carlo (MCMC) method
\textsc{linmix} \citep{2007ApJ...665.1489K}. To examine the
correlation of the relations, both the Pearson and Kendall correlation
coefficients are also calculated.
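As a sketch of the per-bin statistic described above (the function name is illustrative, and inflating the error only when the observed scatter exceeds that implied by the quoted uncertainties is one common convention):

```python
import numpy as np

def binned_weighted_mean(values, errors):
    """Weighted mean of the residuals in one bin, with the error on the
    mean inflated so that the reduced chi-square of the bin is unity."""
    v = np.asarray(values, dtype=float)
    e = np.asarray(errors, dtype=float)
    w = 1.0 / e**2
    mean = np.sum(w * v) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    # inflate the error when the scatter exceeds that expected from
    # the quoted uncertainties (chi^2_red > 1)
    chi2_red = np.sum(w * (v - mean)**2) / max(len(v) - 1, 1)
    if chi2_red > 1.0:
        err *= np.sqrt(chi2_red)
    return float(mean), float(err)
```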
\begin{table*}
\centering
\caption{The trend of SN stretch/colour with host parameters.}
\begin{tabular}{lccccc}
\hline\hline
& & \multicolumn{2}{c}{SN stretch ($s$)} & \multicolumn{2}{c}{SN colour (\ensuremath{\mathcal{C}})} \\
& Split point & N$_{SN}$ & bin difference & N$_{SN}$ & bin difference \\
\hline
$\log$ M & 10.0 & 68 & 0.08 (4.3$\sigma$) & 55 & 0.05 (2.5$\sigma$) \\
12+$\log$(O/H) & 8.65 & 50 & 0.07 (2.5$\sigma$) & 40 & 0.05 (2.5$\sigma$) \\
$\mathrm{[M/H]}$& $-0.5$ & 67 & 0.07 (2.5$\sigma$) & 55 & 0.06 (3.1$\sigma$) \\
$\log$ Age & 0.7 & 67 & 0.08 (3.2$\sigma$) & 55 & 0.01 (0.7$\sigma$) \\
$\log$ SFR &$-0.1$ & 65 & 0.03 (1.4$\sigma$) & 52 & 0.03 (1.3$\sigma$) \\
$\log$ sSFR &$-10.1$& 65 & 0.08 (3.1$\sigma$) & 52 & 0.03 (1.2$\sigma$) \\
\hline
\end{tabular}
\label{trend1}
\end{table*}
\subsection{SN Ia stretch}
\label{sec:sn-stretch}
The stretch of a SN Ia is a direct measurement of its light curve
width, a key parameter in the calibration of SNe Ia as distance
estimators \citep{1993ApJ...413L.105P} -- brighter SNe Ia have slower
light curves (a broader width or higher stretch) than their fainter
counterparts. In this study, we restrict our analysis to SNe Ia with
$0.7<s<1.3$, typical of SNe Ia that are used in cosmological analyses
\citep{2011ApJS..192....1C}. This removes one high-stretch and five low-stretch
(sub-luminous) SNe, including the peculiar event PTF09dav
\citep{2011ApJ...732..118S}.
The SN stretch as a function of the host parameters can be found in
Fig.~\ref{sn_para1}. The trend calculated for each case is listed in
Table~\ref{trend1}. When comparing with \ensuremath{M_{\mathrm{stellar}}}, we recover the
trend seen by earlier studies that lower stretch ($s<1$) SNe Ia are
more likely to be found in massive galaxies than higher stretch
($s\geq1$) SNe Ia
\citep{2009ApJ...691..661H,2009ApJ...707.1449N,2010MNRAS.406..782S}.
Bearing in mind that gas-phase and stellar metallicity strongly
correlates with \ensuremath{M_{\mathrm{stellar}}}\
\citep[e.g.][]{2004ApJ...613..898T,2005MNRAS.362...41G}, a similar
trend is expected between stretch and metallicity, which is both
observed here (Fig.~\ref{sn_para1}) and has previously been described
in the literature: low-metallicity galaxies preferentially host
brighter SNe Ia (before light curve shape correction). The data also
show that higher-stretch SNe Ia preferentially explode in younger
galaxies, to the extent that there
are few low-stretch SNe in hosts with mass-weighted mean ages of less
than $\sim4$\,Gyr. This is consistent with the recent study of
\citet{2013arXiv1309.1182R}, who found that the relation between SN stretch and \ensuremath{M_{\mathrm{stellar}}}\
is primarily driven by age, as measured by local SFR. We also see a
moderate trend with sSFR, in that galaxies with higher sSFR tend to host
brighter SNe Ia. No significant correlation is found with SFR.
\begin{figure*}
\includegraphics*[scale=0.7]{plot/mass-res.pdf}
\caption{Hubble residuals as a function of host galaxy
\ensuremath{M_{\mathrm{stellar}}}. The vertical dashed line represents the
criterion used to split our sample into
high-\ensuremath{M_{\mathrm{stellar}}}\ and low-\ensuremath{M_{\mathrm{stellar}}}\ galaxies. The red filled
circles represent the weighted-mean of the residuals
in bins of \ensuremath{M_{\mathrm{stellar}}}, and their error bars are the
width of the bins and the error of the weighted
mean. The histogram on the right shows the
distribution of residuals in high-\ensuremath{M_{\mathrm{stellar}}}\ (filled
histogram) and low-\ensuremath{M_{\mathrm{stellar}}}\ (open histogram). The
distribution of the best-fitting slopes from
10,000 MCMC realisations is shown in the sub-plot. The
solid line represents the mean slope determined from the distribution.}
\label{mass-res}
\end{figure*}
\begin{figure*}
\includegraphics*[scale=0.7]{plot/gmetal-res.pdf}
\caption{As Fig. \ref{mass-res}, but considering gas-phase metallicity instead of \ensuremath{M_{\mathrm{stellar}}}.}
\label{gmetal-res}
\end{figure*}
\subsection{SN Ia colour}
\label{sec:sn-colour}
We next examine trends with SN Ia colour (\ensuremath{\mathcal{C}};
Section~\ref{sec:sn-photometry-light}), shown in Fig.~\ref{sn_para2}.
As for the stretch comparisons, we restrict the SNe to a typical
colour range used in cosmological studies ($\ensuremath{\mathcal{C}}<0.4$). This removes
five red SNe Ia from our sample.
As star-forming galaxies are expected to contain more dust than
passive galaxies, all other things being equal we would expect them to
host redder SNe Ia. However, Fig.~\ref{sn_para2} does not show this
effect; if anything SNe Ia in high-sSFR galaxies appear bluer
($\ensuremath{\mathcal{C}}<0$) than those in low-sSFR galaxies. This may imply an
intrinsic variation of SN colour with host environment that is greater
than any reddening effect from dust. The SN colour as a function of
\ensuremath{M_{\mathrm{stellar}}}\ does show a trend, with SNe Ia in more massive galaxies
being redder, and both gas-phase and stellar metallicities also show
correlations with SN colour with SNe Ia tending to be redder in
galaxies of higher metallicity. We will discuss these various trends
involving SN colour in Section~\ref{sec:discussion}.
\begin{figure*}
\includegraphics*[scale=0.7]{plot/smetal-res.pdf}
\caption{As Fig. \ref{mass-res}, but considering stellar metallicity instead of \ensuremath{M_{\mathrm{stellar}}}.}
\label{smetal-res}
\end{figure*}
\begin{figure*}
\includegraphics*[scale=0.7]{plot/age-res.pdf}
\caption{As Fig. \ref{mass-res}, but considering
stellar age instead of \ensuremath{M_{\mathrm{stellar}}}. Here the dashed
line shows the fit including the two SNe in the youngest host
galaxies, which drive a significant trend with
Hubble residual that is otherwise not
present.}
\label{age-res}
\end{figure*}
\subsection{SN luminosity}
\label{sec:sn-luminosity}
\begin{table*}
\centering
\caption{The trend of Hubble residual with host parameters.}
\begin{tabular}{lccccccc}
\hline\hline
& Split point & N$_{SN}$ & bin difference & linear trend & probability of & \multicolumn{2}{c}{correlation} \\
& & & (mag) & & negative slope & Pearson & Kendall \\
\hline
$\log$ M & 10.0 & 50 & 0.085 (1.8$\sigma$) & $-0.041 \pm 0.030$ & 91.3\% & $-0.19$ & $-0.13$ \\
12+$\log$(O/H) & 8.65 & 36 & 0.115 (2.5$\sigma$) & $-0.358 \pm 0.176$ & 98.1\% & $-0.36$ & $-0.22$ \\
$\mathrm{[M/H]}$& $-0.5$ & 50 & 0.006 (0.1$\sigma$) & $-0.065 \pm 0.063$ & 85.1\% & $-0.13$ & $-0.09$ \\
$\log$ Age & 0.7 & 48 & 0.012 (0.3$\sigma$) & $-0.135 \pm 0.193$ & 75.7\% & $-0.14$ & $-0.04$ \\
$\log$ sSFR &$-10.1$& 48 & 0.070 (1.7$\sigma$) & $-0.019 \pm 0.077$ & 58.7\% & 0.04 & $-0.05$ \\
\hline
\end{tabular}
\label{trend2}
\end{table*}
Finally, we turn to the SN luminosity, which we parameterise by
calculating the Hubble residual. This is defined as the difference
between \ensuremath{m_{B}^{\mathrm{corr}}}, the observed rest-frame $B$-band SN apparent
magnitude (\ensuremath{m_{B}}; Section~\ref{sec:sn-photometry-light}) corrected for
stretch and colour, and \ensuremath{m_{B}^{\mathrm{mod}}}, the peak SN magnitude expected in
our assumed cosmological model. At a fixed redshift, a brighter SN Ia
therefore gives a negative Hubble residual. \ensuremath{m_{B}^{\mathrm{corr}}}\ is given by
\begin{equation}
\label{eq:mbcorr}
\ensuremath{m_{B}^{\mathrm{corr}}}=\ensuremath{m_{B}}+\alpha\times(s-1)-\beta\times \ensuremath{\mathcal{C}}
\end{equation}
and \ensuremath{m_{B}^{\mathrm{mod}}}\ by
\begin{equation}
\label{eq:mbmodel}
\ensuremath{m_{B}^{\mathrm{mod}}}=5\log_{10}{\mathcal D_L}\left(z;\ensuremath{\Omega_{\mathrm{M}}}\right) + \ensuremath{\mathcal{M}_B},
\end{equation}
where $z$ refers to the cosmological redshift in the CMB frame,
${\mathcal D_L}$ is the $c/H_0$ reduced luminosity distance with the
$c/H_0$ factor (here $c$ is the speed of light) absorbed into
\ensuremath{\mathcal{M}_B}, the absolute luminosity of an $s=1$, $\ensuremath{\mathcal{C}}=0$ SN Ia
(eqn.~(\ref{eq:mbcorr})). Explicitly,
$\ensuremath{\mathcal{M}_B}=\ensuremath{M_B}+5\log_{10}(c/H_0)+25$, where \ensuremath{M_B}\ is the absolute
magnitude of a SN Ia in the $B$-band. $\alpha$ and $\beta$ are
`nuisance variables' derived from the cosmological fits. In this work
$\alpha=1.45\pm0.12$ and $\beta=3.21\pm0.15$ were obtained. We do not
add any intrinsic dispersion into the Hubble residual uncertainties as
we are, in part, searching for variables which could generate this
extra scatter. However, we are aware of the potential bias that this
could introduce into the nuisance parameters (e.g. $\alpha$ and
$\beta$) when comparing the results to the studies including the
intrinsic dispersion, and thus we caution that such a comparison is not
valid. To ensure our SNe are located in the smooth Hubble flow, we
exclude SNe Ia with $z<0.015$, removing one SN from our sample.
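A sketch of the residual computation defined by eqns~(\ref{eq:mbcorr}) and (\ref{eq:mbmodel}), with the nuisance parameters fixed at the values quoted above; the value adopted for \ensuremath{\mathcal{M}_B}\ and the flat-$\Lambda$CDM trapezoidal integrator are illustrative assumptions, not the fitting machinery actually used:

```python
import numpy as np

def reduced_lumdist(z, omega_m=0.27, n=2000):
    """c/H0-reduced luminosity distance in flat LambdaCDM:
    D_L = (1+z) int_0^z dz'/E(z'), with E(z) = sqrt(Om(1+z)^3 + 1 - Om)."""
    zp = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(omega_m * (1.0 + zp)**3 + (1.0 - omega_m))
    integral = np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zp))
    return (1.0 + z) * integral

def hubble_residual(mB, s, colour, z, alpha=1.45, beta=3.21,
                    script_MB=24.0, omega_m=0.27):
    """m_B^corr - m_B^mod, with m_B^corr = m_B + alpha(s-1) - beta*C and
    m_B^mod = 5 log10 D_L(z) + script_MB (script_MB is a placeholder)."""
    mb_corr = mB + alpha * (s - 1.0) - beta * colour
    mb_mod = 5.0 * np.log10(reduced_lumdist(z, omega_m)) + script_MB
    return mb_corr - mb_mod
```

At fixed redshift the cosmological term cancels in differences, so the stretch and colour corrections can be checked independently of the assumed \ensuremath{\mathcal{M}_B}.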
The comparisons with the host galaxy parameters can be found in
Figs.~\ref{mass-res} to \ref{age-res}. The trends calculated for Hubble residuals
with host parameters are listed in Table~\ref{trend2}.
The Hubble residuals as a function of host \ensuremath{M_{\mathrm{stellar}}}\ are shown in
Fig.~\ref{mass-res}. We see only a weak trend that is consistent with
that expected based on earlier work, in the sense that more massive
galaxies preferentially host brighter SNe Ia after stretch and colour
corrections. However, the trend in our data taken in isolation is not
significant: the Hubble residuals of low-\ensuremath{M_{\mathrm{stellar}}}\ and
high-\ensuremath{M_{\mathrm{stellar}}}\ bins have a weighted average of $0.057\pm0.038$\,mag
and $-0.028\pm0.028$\,mag respectively, a difference of
$0.085\pm0.047$\,mag. There is a $\sim91\%$ probability the
slope is negative based on 10,000 MCMC realisations.
Figs.~\ref{gmetal-res} and~\ref{smetal-res} show the Hubble residuals
as a function of gas-phase and stellar metallicity respectively. A
trend with gas-phase metallicity can be seen in the same sense as with
stellar mass: higher-metallicity galaxies tend to host brighter SNe Ia
after stretch and colour corrections. The differences are more
significant than with \ensuremath{M_{\mathrm{stellar}}}: the Hubble residuals of the
high-metallicity and low-metallicity bins have a weighted average of
$-0.047\pm0.030$\,mag and $0.068\pm0.035$\,mag, respectively, a
difference of $0.115\pm0.046$\,mag. Fitting a straight line using the
\textsc{linmix} method gives a $\sim98\%$ probability the slope is negative.
The correlation coefficient is $\sim2$ times larger than the
relation between Hubble residuals and \ensuremath{M_{\mathrm{stellar}}}. We see no trend with
stellar metallicity.
The Hubble residuals as a function of host stellar age are shown in
Fig.~\ref{age-res}; no significant trends are seen. We also see no
trend with sSFR.
\subsection{Comparison to previous studies}
\label{sec:compare-pre-studies}
Many of the relations in the previous section have been studied by
different authors using independent samples of SNe Ia. Since
\ensuremath{M_{\mathrm{stellar}}}\ is the most straightforward variable to measure, requiring
only broad-band imaging, the most common comparison has been between
Hubble residual and \ensuremath{M_{\mathrm{stellar}}}, which has been examined with a variety
of samples over a large redshift range
\citep{2010ApJ...715..743K,2010MNRAS.406..782S,2010ApJ...722..566L,2011ApJ...740...92G,2012arXiv1211.1386J,2013arXiv1304.4720C}.
These studies all find that more massive galaxies host brighter SNe
after corrections for light curve shape and colour have been made. The
size of the difference is usually around 0.1\,mag, with a transition
mass of around 10$^{10}$\,$M_{\odot}$. Our result is consistent with
these earlier studies, although at a reduced significance due to a
smaller dataset.
However, our primary goal is to study the metallicity of the SN host
galaxies rather than just \ensuremath{M_{\mathrm{stellar}}}. Although some studies convert
\ensuremath{M_{\mathrm{stellar}}}\ into metallicity using average mass-metallicity relations
\citep[e.g.][]{2010MNRAS.406..782S}, it is obviously more useful to
measure the metallicity directly. The first study of this kind was
\citet{2005ApJ...634..210G} who compared the Hubble residuals for 16
local SNe Ia with the gas-phase metallicity of their hosts; no
significant trends were seen. \citet{2011ApJ...743..172D} studied a
larger sample of $\sim$40 SNe Ia host galaxies from the SDSS-II SN
survey, finding SNe Ia in high-metallicity galaxies to be
$\sim0.1$\,mag ($\simeq4.9\sigma$) brighter than those in
low-metallicity galaxies after corrections, consistent with the
results based on \ensuremath{M_{\mathrm{stellar}}}. Using the low-redshift SNe studied by
the SNfactory, \citet{2013arXiv1304.4720C} derived the
gas-phase metallicity from 69 SNe Ia hosts and found a
$\sim0.1$\,mag ($\simeq2.9\sigma$) difference between high-metallicity
and low-metallicity hosts. Our results are in good agreement (a
0.115\,mag difference between high- and low-metallicity hosts).
Compared to a metallicity simply converted from host \ensuremath{M_{\mathrm{stellar}}}, the SN
Ia luminosity shows a stronger dependence on the metallicity derived
via direct emission-line measurements.
For stellar metallicity studies, \citet{2008ApJ...685..752G} studied
29 early-type SN Ia host galaxies by measuring the Lick indices from the SN Ia
host spectra. They found the host stellar metallicity correlates with the
Hubble residual at $\simeq$98 per cent confidence level, although
this technique is not directly comparable to ours.
\citet{2012arXiv1211.1386J} also derived the host stellar metallicity by
measuring the absorption line indices from the SN Ia host spectra, but
did not find a significant trend. We also find no significant trend in our
data.
\citet{2011ApJ...740...92G} determined the mass-weighted average age
of 206 SNe Ia host galaxies by fitting their broad-band photometry.
They found a weak correlation between the Hubble residuals and host age at
$\sim1.9\sigma$. \citet{2012arXiv1211.1386J} measured the
light-weighted age for the SNe Ia host galaxies using the absorption
line indices but found no significant trend; again we find no trend in
our data.
\citet{2010MNRAS.406..782S} measured photometric-based sSFRs and found
that SNe Ia in low-sSFR hosts appear brighter than those in high-sSFR
hosts at $\simeq2.6\sigma$ significance after $s$ and \ensuremath{\mathcal{C}}\
corrections. Similar trends have also been found using
spectroscopy-based sSFRs
\citep{2011ApJ...743..172D,2013arXiv1304.4720C}. We do not see these
trends in our dataset although this may be due to the relatively small
sample size.
Finally, \citet{2013arXiv1309.1182R} have recently shown that at
least some of the trends with host \ensuremath{M_{\mathrm{stellar}}}\ may be driven by SNe in
locally passive environments: SNe in massive galaxies with locally passive
environments are systematically brighter than those in locally star forming
environments by $\sim0.09$\,mag. At the time of writing, it is unclear
whether this trend is due to age or metallicity.
\begin{figure*}
\includegraphics*[scale=1]{plot/mass_compare.pdf}
\caption{Upper left: The stellar mass distribution of
PTF SN Ia host galaxies (grey histogram) compared to
the low-redshift SN Ia host sample of
\citet{2011ApJS..192....1C} (black histogram). The
red solid line with filled circles is the predicted
distribution based on the stellar mass function of
local galaxies
\citep[$z<0.06$;][]{2012MNRAS.421..621B}. Upper
right: The same plot as the upper left panel, but
with the predicted SN Ia stellar mass distribution
produced by weighting each bin by a $t^{-1}$ DTD.
Lower left: The stellar mass distribution of the SNe
Ia host galaxies in the SNLS sample of
\citet{2010MNRAS.406..782S} (grey histogram). The
lines with filled circles in different colours
represent the mass contributions derived from the
galaxy stellar mass functions in different redshift
ranges studied by \citet{2009ApJ...707.1595D}.
Lower right: The same plot as left panel, but with
prediction weighted by the $t^{-1}$ DTD.}
\label{mass_compare}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
\subsection{The host stellar mass distribution}
\label{sec:host-stellar-mass-2}
\begin{figure*}
\includegraphics*[scale=1]{plot/m-gz_compare.pdf}
\caption{The mass--metallicity relations derived using different
  metallicity calibrations. Upper panels: \citetalias{2004MNRAS.348L..59P} N2 (left)
  and \citetalias{2004MNRAS.348L..59P} O3N2 (right) calibrations.
  Lower panels: KK04 (left) and KD02 (right) calibrations.
  The open circles show the measurements in this work. The blue
  filled circles are the mean metallicities in bins of \ensuremath{M_{\mathrm{stellar}}}.
  The best fit to field galaxies given by \citet{2008ApJ...681.1183K},
  with the range of one r.m.s. scatter, is over-plotted
  (red solid and red dashed lines, respectively). The black
  dashed line is the best fit to the measurements in this work.
  The sub-plot at the bottom of each panel shows the residuals
  of our measurements from KE08's best fit.}
\label{m-gz_compare}
\end{figure*}
Unlike galaxy-targeted supernova surveys that are biased towards surveying
brighter and more massive galaxies, the host galaxies in rolling
searches such as PTF should represent a wider range of SN
environments. A key test of this is the host galaxy stellar mass
distribution (Fig.~\ref{sample_selection};
Section~\ref{sec:observ-data-reduct}), the form of which should be a
combination of the underlying galaxy stellar mass function and the
mean SN Ia rate as a function of stellar mass, and which should be
able to be reproduced from a quantitative knowledge of both. We examine
this in this section.
We begin with the galaxy stellar mass function (GSMF) in the local
universe, defined as the number of galaxies per logarithmic bin in
stellar mass. For this study we adopt the GSMF of
\citet{2012MNRAS.421..621B}, the redshift range of which ($z<0.06$) is
similar to this work. In each bin in stellar mass, we weight the GSMF
by the stellar mass, taking the total mass (stellar mass multiplied by
the GSMF) in that bin as the predicted SN Ia host distribution.
We then over-plot the observed SN Ia host galaxy stellar mass
distribution. We compare both to a SN Ia host mass distribution drawn
primarily from galaxy-targeted searches \citep[the low-$z$ sample
compilation of][]{2011ApJS..192....1C}. The
result can be seen in the upper left panel of Fig.~\ref{mass_compare}.
The host stellar mass distribution for the galaxy-targeted SN Ia
sample is similar to that expected based on the GSMF, but is obviously
different to our PTF sample at smaller stellar masses. However, this
test assumes that the SN Ia rate is simply proportional to the stellar
mass of the host. While this may be approximately correct in the more
massive systems, it is known to be incorrect in lower stellar mass
systems which have a larger fraction of younger potential progenitor
systems or higher sSFRs (Fig.~\ref{m-sfr}) -- the SN Ia rate is not
simply proportional to stellar mass
\citep[e.g.][]{2005A&A...433..807M,2006MNRAS.370..773M,2006ApJ...648..868S,2012ApJ...755...61S}.
In practice, a delay-time distribution (DTD; the distribution of times
between the progenitor star formation and the subsequent SN Ia
explosion) with the SN Ia rate proportional to $t^{-1}$ is favoured
by most recent data \citep[e.g.][]{2012PASA...29..447M}. If we assume
this DTD, and the relation between \ensuremath{M_{\mathrm{stellar}}}\ and galaxy age
determined by \citet{2005MNRAS.362...41G}, a revised distribution of
SN Ia host galaxy stellar masses can be formed by weighting each mass
bin by a $t^{-1}$ DTD.
The results are shown in Fig.~\ref{mass_compare} (upper right): the
effect of the $t^{-1}$ DTD is to increase the contribution from SNe in
less massive (younger) galaxies. A $\chi^2$ is calculated between the
predicted galaxy stellar mass distribution and the observed SN Ia host
galaxy stellar mass distribution; it drops from 123.18 to 12.21 once
the $t^{-1}$ DTD weighting is applied. Indeed, assuming a $t^{-1}$ DTD and a
simple scaling between stellar mass and age allows an excellent
reproduction of the observed host galaxy stellar mass distribution.
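The reweighting described above can be sketched as follows; the mean age per mass bin, taken in the text from the \citet{2005MNRAS.362...41G} relation, is passed in directly here, and the names are illustrative:

```python
import numpy as np

def dtd_weighted_host_distribution(log_masses, gsmf, ages_gyr):
    """Predicted SN Ia host stellar-mass distribution: the total stellar
    mass per bin (M x GSMF) weighted by a t^-1 delay-time distribution."""
    m = 10.0**np.asarray(log_masses, dtype=float)
    phi = np.asarray(gsmf, dtype=float)
    t = np.asarray(ages_gyr, dtype=float)
    rate = m * phi / t        # mass weighting times the t^-1 DTD
    return rate / rate.sum()  # normalised to unit total
```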
A similar comparison can be made to the stellar mass distribution from
the SNLS sample \citep{2010MNRAS.406..782S}. As seen in the lower
panels of Fig.~\ref{mass_compare}, the SNLS sample contains more lower
stellar-mass galaxies than our PTF sample even though the selection of
SNe should be similar. Using the GSMF of \citet{2009ApJ...707.1595D}
over $0.2<z<1.0$, and the same technique as above, again a good
agreement between the observed host galaxy \ensuremath{M_{\mathrm{stellar}}}\ distribution and
that derived from the GSMF is achieved. Thus the difference in the
stellar mass distributions of the PTF and SNLS host galaxies can be
explained by evolution in the field galaxy population from which the
hosts are drawn: the stellar mass distributions at higher redshifts
show an excess of low-mass galaxies.
\subsection{The mass--metallicity relation of SN Ia host galaxies}
\label{sec:m-z-relation}
\begin{figure*}
\includegraphics*[scale=0.9]{plot/FMR_sdss.pdf}
\caption{The fundamental metallicity relation (FMR)
derived from our data. The open circles
show the measurements for the SN hosts, and the red dashed
line is the best fit to them. The grey contours enclose
$68\%$, $95\%$ and $99.7\%$ of the SDSS galaxy sample of
\citet{2010MNRAS.408.2115M}. The blue dot-dashed line is the best
fit to the SDSS galaxies.
\label{FMR}
\end{figure*}
\begin{figure*}
\includegraphics*[scale=0.9]{plot/res_compare.pdf}
\caption{The Hubble residuals as a function of host
  galaxy gas-phase metallicity using four different
  calibrations. The best linear fit is shown
  as a solid line in each panel.}
\label{res_compare}
\end{figure*}
As well as impacting on the observed photometric properties of SNe Ia,
the metallicity of the progenitor star may also impact on the SN Ia
rate. An increased rate with lower metallicity may be expected as
stars with a lower metallicity generally form more massive white
dwarfs and therefore may more easily approach the Chandrasekhar mass
limit \citep{2011arXiv1106.3115K}. A decreased rate with lower
metallicity may be expected in some single degenerate scenarios as the
lower metallicity inhibits accretion onto the white dwarf due to lower
opacities in the wind \citep{1998ApJ...503L.155K,2009ApJ...707.1466K}.
Observationally, there is some evidence that prompt SNe Ia are more
prevalent (or explode with a brighter luminosity) in metal-poor
systems \citep*{2009ApJ...704..687C}.
Such effects may impact on the observed SN Ia host mass--metallicity
relation; if any of the putative metallicity effects lead to SNe Ia
preferentially occurring in low or high metallicity galaxies, then the
mass--metallicity relation for SN Ia hosts would be offset from that
of field galaxies (e.g., at fixed galaxy stellar mass, the SN Ia host
galaxies may systematically have lower or higher metallicities than
the field galaxies). We compare our mass--metallicity relation with
those derived for field galaxies (Fig.~\ref{m-gz_k01}), and compare
the use of four different gas-phase metallicity calibrations in
Fig.~\ref{m-gz_compare}. We fit these mass--metallicity relations
using the same functional form as described in KE08 over $8.5 <
\log M < 11.0$. Note that although we estimate \ensuremath{M_{\mathrm{stellar}}}\ by
fitting broad-band photometry instead of using the spectroscopic
indices of KE08, previous work has shown that the two different
approaches provide consistent results
\citep[0.001\,dex;][]{2012ApJ...755...61S}. Thus the different
\ensuremath{M_{\mathrm{stellar}}}\ determination techniques should have a negligible effect on
our results.
The SN Ia host mass--metallicity relations are consistent with the
fits from KE08, with weighted mean offset $\sim0.01$\,dex
($\lesssim1\sigma$ significance). This is
consistent with \citet{2013arXiv1304.4719C}, who found that the SN Ia
hosts in their sample also show good agreement with the field galaxy
mass--metallicity relation.
However, this comparison has one potential systematic -- at fixed
\ensuremath{M_{\mathrm{stellar}}}, other variables may affect the SN Ia rate, for example the
number of young stars (or the SFR). Indeed,
\citet{2010MNRAS.408.2115M} showed that the observed mass--metallicity
relation could be a projection of a more general relation between
\ensuremath{M_{\mathrm{stellar}}}, gas-phase metallicity, and SFR, which together can be
described using a fundamental metallicity relation (FMR). The FMR can
be defined as the relation between gas-phase metallicity and
$\log(\ensuremath{M_{\mathrm{stellar}}}) - \alpha\,\log(\mathrm{SFR})$, where $\alpha$ is a
parameter determined to minimize the scatter in the metallicities.
\citet{2010MNRAS.408.2115M} found $\alpha=0.32$ produced the
minimum dispersion in metallicity.
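For reference, the FMR projection quantity can be written out
explicitly (the symbol $\mu_{\alpha}$ is used here simply as a
shorthand for the combined quantity):
\begin{equation}
\mu_{\alpha} = \log(\ensuremath{M_{\mathrm{stellar}}}) - \alpha\,\log(\mathrm{SFR}), \qquad \alpha = 0.32,
\end{equation}
so that the FMR expresses the gas-phase metallicity as a tight
function of $\mu_{\alpha}$ alone.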
We therefore construct the FMR for the 61 SN Ia hosts with measurements
of \ensuremath{M_{\mathrm{stellar}}}, SFR and gas-phase metallicity. For comparison, we also
determined the FMR for SDSS field galaxies using the same parent
sample as \citet{2010MNRAS.408.2115M}, applying the same quality cuts
described in that work. In
addition, we applied aperture corrections to the SDSS sample, and
selected galaxies within a similar redshift range to our host sample
(median $z\sim0.07$). The results are shown in Fig.~\ref{FMR}. We
found no significant difference between the FMR for SN Ia hosts and
that of the SDSS galaxies. The weighted mean offset between the
SNe Ia hosts and best-fit from SDSS galaxies is $0.005\pm0.011$\,dex.
In summary, by examining both the mass--metallicity relation and FMR
of SN Ia host galaxies, we find a good agreement with the same
relations derived from field galaxies, suggesting SN Ia host galaxies
and normal field galaxies follow similar relations. This in turn
suggests that the effect of metallicity on the SN Ia rate must be
small.
\subsection{The effect of metallicity on SN Ia luminosities}
\label{sec:effect-metall-sn}
The peak luminosity of SNe Ia is powered by the radioactive decay of
$^{56}$Ni\ synthesised during the explosion. \citet{2003ApJ...590L..83T}
showed that the observed scatter in metallicity could introduce a
$25\%$ variation in the mass of $^{56}$Ni\ synthesized by SNe Ia.
Metal-rich stars tend to synthesize more neutron-rich (and stable)
$^{58}$Ni instead of the $^{56}$Ni\ that powers the SN Ia luminosity. As a
result, and all other variables being equal, intrinsically fainter SNe
Ia are expected to explode in higher-metallicity environments.
However, \citet{2011MNRAS.414.1592B} showed that SN Ia metallicities
can be reasonably estimated by the host galaxy metallicity, and are
better represented by gas-phase metallicity than by stellar
metallicity. In this study we used the host gas-phase metallicity as a
proxy for the progenitor metallicity to study the metallicity effects
on SNe Ia. Fig.~\ref{res_compare} shows the dependence of Hubble
residuals on metallicity for the different calibrations. Again, no
relative metallicity conversions were applied here, so the number of
hosts available differs between calibrations. The results are
consistent across calibrations: the slopes range from
$-0.74$ to $-1.13$, and the Pearson correlation
coefficients from $-0.42$ to $-0.51$. This suggests that the
correlation between Hubble residual and gas-phase metallicity is
independent of the calibration methods at least at the level of
precision probed here.
\begin{table}
\centering
\caption{Kendall rank correlation coefficients between Hubble residual (HR), gas-phase metallicity, \ensuremath{M_{\mathrm{stellar}}}\ and stellar age.}
\begin{tabular}{l|cccc}
\hline\hline
& HR & 12+$\log$(O/H) & \ensuremath{M_{\mathrm{stellar}}} & Age \\
\hline
HR & -- & $-0.22 $ & $-0.13$ & $-0.04$ \\
12+$\log$(O/H) & & -- & 0.49 & 0.03 \\
\ensuremath{M_{\mathrm{stellar}}} & & & -- & 0.35 \\
Age & & & & -- \\
\hline
\end{tabular}
\label{kendall_correlation}
\end{table}
Following the procedure described in \citet{1994MNRAS.268..305H},
the Kendall rank correlation coefficients between
Hubble residual, gas-phase metallicity, \ensuremath{M_{\mathrm{stellar}}}, and stellar age are listed in
Table~\ref{kendall_correlation}. Our results show that the SN Ia
luminosity has the strongest dependence on the host gas-phase
metallicity compared to \ensuremath{M_{\mathrm{stellar}}}\ or stellar age. We also found that
the correlation coefficient between Hubble residuals and \ensuremath{M_{\mathrm{stellar}}}\
is similar to the product of the correlation
coefficients of the Hubble residual--metallicity and
\ensuremath{M_{\mathrm{stellar}}}--metallicity relations. This suggests that
the correlation between Hubble residuals and \ensuremath{M_{\mathrm{stellar}}}\ may be a
consequence of the Hubble residual--metallicity relation combined with the
strong correlation between \ensuremath{M_{\mathrm{stellar}}}\ and metallicity.
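Using the coefficients in Table~\ref{kendall_correlation}, this
factorisation can be checked directly (the $\tau$ notation is
introduced here only for brevity):
\begin{equation}
\tau_{\mathrm{HR},Z}\,\tau_{M,Z} = (-0.22)\times 0.49 \simeq -0.11,
\end{equation}
which is close to the measured $\tau_{\mathrm{HR},M}=-0.13$.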
\citet{2013ApJ...764..191H} applied the FMR to the SN Ia host galaxies
using broad-band colours alone. They found that the scatter in Hubble
residuals is greatly reduced by using the FMR instead of
\ensuremath{M_{\mathrm{stellar}}}\ alone, which in turn implies that metallicity may be the
underlying cause of the correlation between Hubble residual and
\ensuremath{M_{\mathrm{stellar}}}. By directly measuring the gas-phase metallicity of SN Ia
host galaxies we can also show that it has a more significant effect on
SN Ia luminosity than \ensuremath{M_{\mathrm{stellar}}}\ or stellar age.
\subsection{The SN Ia intrinsic colour}
\label{sec:sn-ia-intrinsic}
\begin{figure*}
\includegraphics*[scale=0.7]{plot/ebv-hostpara.pdf}
\caption{The colour excess of the host galaxies,
$E(B-V)$, as a function of gas-phase metallicity, specific SFR
and SN colour \ensuremath{\mathcal{C}}.
The solid line in each panel represents
the best linear fit to the data. The SNe with $\ensuremath{\mathcal{C}}>0.4$
are shown as open squares in the lower panel,
and the dashed line is the best linear fit
to the data including these red SNe.
}
\label{ebv-hostpara}
\end{figure*}
In Section~\ref{sec:sn-colour} we examined the correlations between SN
colour and host parameters, and found that the SNe Ia in
high-metallicity and/or low-sSFR hosts appear to be redder. However,
the SN colour discussed here (SiFTO \ensuremath{\mathcal{C}}) is not an `intrinsic' SN
colour, as dust extinction from the host galaxy may also contribute to
the observed SN colour variation. Therefore it is useful to compare
the SN colour measures to independent measures of host extinction to
assess the effect of dust extinction from the host galaxies.
Fig.~\ref{ebv-hostpara} shows the host colour excess $E(B-V)$,
determined from the Balmer decrement, as a function of host parameters
and SN colour. Mild correlations between $E(B-V)$ and the gas-phase
metallicity/sSFR are found, with Pearson correlation coefficients of
0.39 and 0.36 for gas-phase metallicity and sSFR, respectively.
However, we see no significant correlation with SN colour: the SN
colour appears independent of the host galaxy $E(B-V)$. This
independence becomes even more evident when including the red SNe Ia
with $\ensuremath{\mathcal{C}}>0.4$ that were previously excluded in this study.
There are two possibilities that could cause this. The first is that the
bulk of the SN Ia colour variation is intrinsic to the SN event. Previous
studies have shown that the SN Ia intrinsic colour could be altered
systematically by changing the metallicity of the progenitor \citep*{1998ApJ...495..617H,2001ApJ...557..279D}.
There is some observational evidence for this: \citet{2013arXiv1304.4720C}
recently showed that SNe Ia in high-metallicity hosts appear redder, and
we also find a similar dependence of SN Ia colour on host gas-phase and
stellar metallicities in Fig.~\ref{sn_para2}.
A second possibility is that there is dust local to the SN explosion that affects the colour,
for example circumstellar dust \citep{2005ApJ...635L..33W,2008ApJ...686L.103G,2011ApJ...735...20A},
which would not be traced by photons emerging from HII regions in the host galaxy. This interpretation
is supported by evidence that the $B-V$ colour at maximum of SNe Ia correlates with the strength
of narrow, blueshifted Na I D features in SN Ia spectra \citep{2013arXiv1308.3899M,2013ApJ...772...19F},
which likely trace the presence of circumstellar material \citep{2007Sci...317..924P,2011Sci...333..856S}.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have derived the parameters of a sample of 82 SN Ia
host galaxies from the Palomar Transient Factory (PTF), and used it to
examine the relationships between SNe Ia and their hosts. The host
galaxy parameters have been determined from both photometric and
spectroscopic data. In particular, we have derived star-formation
rates, stellar masses, gas-phase and stellar metallicities, and
stellar ages for the host galaxies. Our main findings are:
\begin{enumerate}
\item[$\bullet$] Previously observed correlations between SN Ia
properties and their host parameters are recovered in this work. In
particular, we show that the SN light-curve width (stretch) has a
strong dependence on host galaxy age and stellar mass -- fainter,
faster evolving (lower stretch) SNe Ia tend to be hosted by older,
more massive galaxies.
\item[$\bullet$] For the SN Ia colour, we have shown that redder SNe
Ia have a tendency to explode in more metal-rich galaxies. However,
we found no relation between SN colour and the colour excess of the
host galaxies as measured from the Balmer decrement, suggesting that
the bulk of the SN Ia colour variation is intrinsic and not
dependent on host galaxy extinction.
\item[$\bullet$] The dependence of the SN Ia Hubble residuals on host
gas-phase metallicities was also confirmed. SNe Ia in metal-rich
galaxies are $\sim0.1$\,mag brighter than those in metal-poor
galaxies after light-curve shape and colour corrections. This trend
is insensitive to the choice of metallicity calibration.
The correlation derived between Hubble residual and gas-phase
metallicities is about two times stronger than for stellar mass.
That implies that the host galaxy metallicity may be the underlying
cause of the well-established relation between SN Ia luminosity and
stellar mass.
\item[$\bullet$] We showed that the stellar mass distributions of the
PTF and SNLS SN Ia host galaxies are quite different, with SNLS
possessing many more low-mass host galaxies than PTF. However, this
can be understood and reproduced by a combination of a
redshift-dependent galaxy stellar mass function (GSMF), and a SN Ia
rate inversely proportional to the age of the galaxy (a $t^{-1}$
DTD).
\item[$\bullet$] Finally, we compared the mass--metallicity relation
for the SN Ia hosts to that of field galaxies drawn from SDSS. We
found no significant difference between the two relations, a result
that is not sensitive to the metallicity calibrations adopted. In
addition, we derived the fundamental metallicity relation
\citep[FMR;][]{2010MNRAS.408.2115M} for the SN Ia hosts and also
found it to be similar to that measured from field galaxies. This
suggests that metallicity has a negligible effect on the SN Ia rate.
\end{enumerate}
This study has emphasised the important role of the host galaxies of
SNe Ia in influencing the SN explosion properties, with SN Ia
properties showing considerable dependence on their host galaxy
parameters. From a cosmological perspective, the precision of SN Ia
cosmology can be improved by correcting for these biases introduced by
host galaxies. These dependences can also shed light on the properties
of SN Ia progenitors. Therefore it is of great importance for future SN Ia
surveys to study both the SNe and the galaxies in which they explode.
\section*{Acknowledgements}
MS acknowledges support from the Royal Society. A.G. acknowledges support
from the EU/FP7 via an ERC grant, funding from the ISF and BSF, and the
Minerva ARCHES and Kimmel awards.
Based on observations obtained at the Gemini Observatory, which is operated by the
Association of Universities for Research in Astronomy, Inc., under a
cooperative agreement with the NSF on behalf of the Gemini
partnership: the National Science Foundation (United States), the
National Research Council (Canada), CONICYT (Chile), the Australian
Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia,
Tecnologia e Inova\c{c}\~{a}o (Brazil) and Ministerio de Ciencia,
Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina). Based on
Gemini programs GN-2010B-Q-111, GS-2010B-Q-82, GN-2011A-Q-82,
GN-2011B-Q-108, GN-2012A-Q-91, GS-2012A-Q-3, GN-2012B-Q-122, and
GS-2012B-Q-83 for the host galaxy observations,
and GN-2010A-Q-20, GN-2010B-Q-13, GN-2011A-Q-16 and GS-2009B-Q-11
for the SN observations.
The William Herschel Telescope is operated on the
island of La Palma by the Isaac Newton Group in the Spanish
Observatorio del Roque de los Muchachos of the Instituto de
Astrof\'{i}sica de Canarias. Observations obtained with the Samuel
Oschin Telescope at the Palomar Observatory as part of the Palomar
Transient Factory project, a scientific collaboration between the
California Institute of Technology, Columbia University, Las Cumbres
Observatory, the Lawrence Berkeley National Laboratory, the National
Energy Research Scientific Computing Center, the University of Oxford,
and the Weizmann Institute of Science. Some of the data presented
herein were obtained at the W.M. Keck Observatory, which is operated
as a scientific partnership among the California Institute of
Technology, the University of California and the National Aeronautics
and Space Administration. The Observatory was made possible by the
generous financial support of the W.M. Keck Foundation.
Based on observations collected at the European Organisation for Astronomical
Research in the Southern Hemisphere, Chile, under program IDs
084.A-0149 and 085.A-0777.
Observations obtained with the SuperNova Integral Field Spectrograph
on the University of Hawaii 2.2-m telescope as part of the
Nearby Supernova Factory II project, a scientific collaboration between
the Centre de Recherche Astronomique de Lyon, Institut de Physique Nucl\'{e}aire de Lyon,
Laboratoire de Physique Nucl\'{e}aire et des Hautes \'{E}nergies,
Lawrence Berkeley National Laboratory, Yale University, University of Bonn,
Max Planck Institute for Astrophysics, Tsinghua Center for Astrophysics,
and Centre de Physique des Particules de Marseille.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is
operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration.
This publication has been made possible by the participation of more
than 10 000 volunteers in the Galaxy Zoo: Supernovae project
(\url{http://supernova.galaxyzoo.org/authors}).
\bibliographystyle{mn2e}
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included; if you require other symbols you will need to include the \verb'amsmath' or \verb'amssymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
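For example, a single corresponding-author footnote might be entered
as follows (the e-mail address shown is illustrative only):
\begin{verbatim}
\author[K. T. Smith]{Keith T. Smith$^{1}$\thanks{E-mail:
ktsmith@example.org}\\
$^{1}$Affiliation 1}
\end{verbatim}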
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt]
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt]
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
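For example, a simple bold vector equation could be entered as:
\begin{verbatim}
\begin{equation}
\mathbfit{F} = m\,\mathbfit{a}
\end{equation}
\end{verbatim}
\noindent which sets both vectors in bold italic.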
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or latex distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the next \verb'\section{}'.
\section{Introduction}
Large galaxy surveys necessarily encounter galactic blends due to projection effects and galactic interactions. Often, the objective of astronomical research involves measuring the properties of isolated celestial bodies. The methods used therein frequently make strong assumptions about the isolation of the object. In such cases, galactic blends can be severe enough to warrant discarding the images. Current surveys such as DES (Dark Energy Survey) \citep{DES}, KiDS (Kilo-Degree Survey) \citep{KiDS} and HSC (Hyper Suprime-Cam) \citep{HSC} as well as future surveys such as LSST (Large Synoptic Survey Telescope) \citep{BookLSST}, Euclid \citep{Euclid} and WFIRST (Wide-Field Infrared Survey Telescope) \citep{WFIRST} will generate immense quantities of imaging data within the next decade. Thus, a robust and high-throughput solution to galaxy image deblending is vital for drawing maximal value from near-future surveys.
This issue is especially pressing given the details of LSST, for example. Beginning full operation in 2023, LSST is expected to retrieve 15 PB of image data over its scheduled 10-year operation with a limiting magnitude of $i \approx 27$. In the densest regions of the sky (100 galaxies per square arc-minute), up to half of all images are blended with center-to-center distance of 3 arc-seconds, with one-tenth of all galaxy images blended with center-to-center distance of 1 arc-second. Even considering the most typical regions of sky (37 galaxies per square arc-minute), simulation estimates suggest around 20 percent of galaxies will be superimposed at 3 arc-second separation and up to 5 percent at 1 arc-second separation \citep{Fraction}. At 1.5 PB per year and image sizes of approximately 6 GB, conservative estimates predict that LSST is capable of producing 200,000 wide-field images, resulting in 1 billion postage stamp galaxy images per year \citep{BookLSST}. Improved handling of blended images can thus be expected to save up to 200 million galaxy images from being discarded each year. Of these, 50 million are predicted to be blended with 1 arc-second center-to-center separation, posing an extremely difficult task for deblending algorithms. Moreover, detecting blends in the first place will pose an additional challenge for LSST (and other near-future surveys), especially for galaxies blended at 1 arc-second separation given a median seeing of ${\sim}0.67$ arc-seconds.
Pioneering work in deblending can be traced back to Jarvis \& Tyson's ``Faint Object Classification and Analysis System'' \citep{FOCAS} and Irwin's ``Automatic Analysis of Crowded Fields'' \citep{Irwin}. In Jarvis \& Tyson's \textsc{focas} algorithm, objects are initially identified and segmented by comparing the local flux density to that of the average sky density computed in a 5-by-5 pixel box filter. Blended sources are then separated by scanning the principal axis of the object centroid for a multi-modal intensity signal; if found, objects are deblended by drawing a boundary perpendicular to the principal axis at the point of minimum intensity. Meanwhile, in Irwin's analysis, a maximum likelihood parameter estimation scheme was used to segment stars near the center of globular clusters. The technique iteratively estimates the local sky background, divides the field into images, analyzes each image for blends, estimates source position and shape, updates the background estimate, and repeats. Irwin's method offered great progress in regions of high number density where the majority of images overlap even at high isophotes.
Since Jarvis \& Tyson and Irwin, many wide-field image processing algorithms have been proposed including \textsc{next} \citep{NExt} and the widely used \textsc{sextractor} \citep{SExtractor}. \textsc{next} was among the first wide-field image processing methods to utilize artificial neural networks, approaching the problem of detection as one of clustering then employing a modified version of the \textsc{focas} algorithm to deblend superimposed objects. First, a neural network compresses windows of the input image into a dense vector, performing a non-linear principal component analysis. These representations are passed through a second network which classifies the central pixel in the window as belonging to an object or the background. Neighboring object pixels are grouped together and parameters such as the photometric barycenter and principal axis are computed for each contiguous cluster. Building upon the \textsc{focas} algorithm, \textsc{next} then searches for multiple peaks in the light distribution along the principal axis as well as five other axes rotated by up to $\pi$ radians from the principal axis. When two peaks are found in the distribution, the objects are split along a line perpendicular to the axis joining the peaks. In this way, the \textsc{next} deblending algorithm is an extension of \textsc{focas} in which the assumption that the multi-peaked light distribution will occur along the principal axis is relaxed.
Modern approaches to deblending include gradient and interpolation-based deblending (\textsc{gain}) \citep{GAIN}, morpho-spectral component analysis (\textsc{muscadet}) \citep{Muscadet} and constrained matrix factorization (\textsc{scarlet}) \citep{Scarlet}. In both \textsc{muscadet} and \textsc{scarlet}, each astronomical scene is assumed to be the composition of two non-negative matrices which encode the spectral energy distribution and spatial shape information of a finite number of components which sum to represent the scene. Though the techniques share many similarities, the more recent approach of \textsc{scarlet} can be viewed as a generalization of \textsc{muscadet} which allows any number of constraints to be placed on each source. Meanwhile, \textsc{gain} acts as a secondary deblender to repair flawed images by making use of image intensity gradient information and interpolating the missing flux from background sources after the foreground object is segmented. While \textsc{gain}, \textsc{muscadet} and \textsc{scarlet} appear to be powerful deblending algorithms in their own right, here we present a method inspired by promising results in computer vision and deep learning.
Deep convolutional neural networks have proven highly effective at classifying images of galaxies \citep{Sander}. Moreover, their use as generative models for galaxy images has yielded impressive results. In particular, generative adversarial networks (GANs) with deep convolutional generators are able to model realistic galaxies by learning an approximate mapping from a latent space to the distribution over galaxy images given large datasets such as those provided by Galaxy Zoo \citep{Forging, GalaxyZoo}.
Here, we introduce an algorithm which offers progress on both the task of deblending conspicuously blended galaxies as well as the challenge of fast, end-to-end inference of galaxy layers: a branched generative adversarial network. GANs offer an elegant solution to the problem of deblending by allowing us to combine the standard content loss function of a convolutional neural network (e.g. mean squared error) with the adversarial loss signal of a second ``discriminator'' network which pushes solutions to the galaxy image manifold and thereby ensures that our deblending predictions appear to be galaxy-like in a probabilistic sense.
After training our deblending network, a forward pass of input to deblended outputs takes a trivial amount of time on a modern laptop. We focus on galaxy pairs which are known to be blended though future work may involve the identification of blends and inference of the number of galaxies involved. With this work, we intend to make usable hundreds of millions more galaxy images, while saving time and effort for astronomers to spend on other pertinent questions.
\section{Methods}
Generative adversarial networks (GANs) are a family of unsupervised learning models used to learn a map between two probability distributions \citep{GAN, GAN2}. Often, these probability distributions live in high-dimensional space. For example, images are realizations of a high-dimensional joint probability distribution over the pixel intensities. Instead of positing a parametric family of functions and using maximum likelihood methods to determine an analytic form for these complex distributions, GANs use an artificial neural network to learn the map between a known (hereafter latent) probability distribution and the target data distribution. This map is called the generator. The generator is trained by a second network which learns to discriminate between samples from the target distribution and samples from the generator's approximate distribution. This second network is called the discriminator. The generator and discriminator share a loss function in which the discriminator aims to maximize its ability to discriminate between true and generated samples while the generator aims to maximize its ability to generate samples indistinguishable from the true distribution samples and therefore ``trick'' the discriminator. This back-and-forth training can be viewed as a zero-sum game between neural networks where the goal is to reach Nash equilibrium \citep{GAN}. It is important to note that the generator relies upon the discriminator's signal (and hence its accuracy) to adjust its parameters and increase the quality of its generated outputs. For this reason, it is crucial that both networks train in tandem with one network not overpowering the other.
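As an illustration of the objective the two networks share, the following NumPy sketch evaluates the zero-sum value for a batch of discriminator outputs. This is a toy illustration, not the paper's implementation, and the helper name is ours:

```python
import numpy as np

def adversarial_value(d_real, d_fake):
    """Value of the zero-sum GAN objective for one batch.

    d_real -- discriminator outputs D(x) on true samples, in (0, 1)
    d_fake -- discriminator outputs D(G(z)) on generated samples, in (0, 1)
    The discriminator ascends this quantity; the generator descends it.
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

At the Nash equilibrium of the original GAN game the discriminator outputs $1/2$ everywhere and the value is $-2\log 2$; any discriminator that separates real from generated samples better than chance raises the value.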
In the problem of deblending images of galaxies, we treat the joint distribution over pixel intensities of the blended images as the latent distribution. Samples from the latent distribution are denoted $I^{BL}$. The generator's purpose is to map samples from this latent distribution to corresponding samples in the deblended galaxy image distribution. Samples from the target distribution are denoted $I^{PB}$, preblended galaxy images. Our goal is to train a generator using a training set of blended images (see Data section below), $I^{BL}$, and their corresponding original images, $I^{PB}$. The discriminator provides gradient information to the generator to ensure that the generated deblended images lie on the natural image manifold. The training of the generator is also guided with supervision in the form of a pixel-wise mean squared error (see Loss section below) which further coaches the generator to closely reproduce explicit ground truth galaxy images used to make an artificial blend. After training, the generator is capable of inferring the unseen independent layers of galaxy images with high fidelity (see Results section below).
\subsection{Architecture}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Deblender-Generator.png}
\caption{Branched residual network architecture for our deblender GAN. The number of residual blocks in the ``root'' and ``branches'' are integer-valued hyperparameters we denote M and N, respectively. Larger values of M and N generally produce higher quality results at the cost of training time. Here, we show (M, N) = (5, 3) as an example though the trained deblender herein used (M, N) = (10, 6). Although all residual blocks share a common structure and skip connection, here we only explicitly show the inner-workings of the first.}
\label{fig:generator}
\end{figure*}
The architecture of our GAN is a modified version of the successful SRGAN (super-resolution GAN) architecture \citep{SRGAN}. Our generator is a very deep, branched residual network \citep{ResNet} consisting of a large number of residual blocks and skip-connections (see \autoref{fig:generator}). We reference the generator as $G$ and denote the learnable parameters of the generator network as $\theta_G$. Residual networks (ResNets) have performed impressively on a variety of image detection and segmentation problems. The ResNet architecture alleviates the vanishing gradient problem by including a large number of skip-connections wherein gradient information can flow undegraded to shallow network layers, allowing for very deep network architectures. The building blocks of residual networks are residual blocks: a standard convolution with kernel size $k=[3, 3]$ followed by batch normalization \citep{BatchNorm}, activation, another convolution, batch normalization and finally an elementwise sum with the original input of the block. We refer to the first M residual blocks as the ``root'' layers whereafter the network splits into two ``branches'' with N residual blocks each. For the results presented herein, we have chosen (M, N) = (10, 6). Larger values of (M, N) are generally preferable though training times can be prohibitively expensive for large values of either. For activation functions we have chosen parametric rectified linear units (PReLU), an adaptation of leaky ReLU wherein the slope parameter $\alpha$ becomes a learnable parameter \citep{PReLU}.
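A minimal NumPy sketch of the identity path through one residual block. To keep it short, the two convolutions are stood in for by dense maps and batch normalization is omitted; all names here are ours:

```python
import numpy as np

def prelu(x, alpha=0.25):
    # Parametric ReLU: identity for positive inputs, a (learned) slope otherwise.
    return np.where(x > 0.0, x, alpha * x)

def residual_block(x, w1, w2, alpha=0.25):
    """conv -> activation -> conv -> elementwise sum with the block input.

    Dense matrices w1, w2 stand in for the two k = [3, 3] convolutions;
    the final sum is the skip connection that lets gradient information
    reach shallow layers undegraded.
    """
    h = prelu(x @ w1, alpha)
    h = h @ w2
    return x + h  # skip connection
```

If the residual path learns the zero map, the block reduces exactly to the identity, which is why very deep stacks of such blocks remain trainable.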
The discriminator is a deep convolutional neural network consisting of many convolutional blocks (see \autoref{fig:discriminator}). A single convolutional block consists of a convolution with kernel size $k = [3, 3]$ and unit stride followed by batch normalization and an activation layer; we employ standard leaky ReLU activations throughout with $\alpha=0.2$. We reference the discriminator as $D$ and denote the learnable parameters of the discriminator as $\theta_D$.
For network regularization, we rely on our batch normalization layers in place of common alternatives such as dropout. In essence, batch normalization layers regularize by multiplying the hidden units by a random value (one over the batch standard deviation) and subtracting a random value (the batch mean). Since these values are different for each mini-batch, the layers must learn to be robust to added noise.
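The normalization step itself is simple; a sketch of the per-feature transform (in practice the scale and shift parameters $\gamma$ and $\beta$ are learned, and running statistics are kept for inference; the function name is ours):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift.

    The batch mean and standard deviation differ from mini-batch to
    mini-batch, which is the source of the regularizing noise described
    in the text.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```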
We used TensorFlow \citep{TensorFLow} to build and run our computational graph; training was executed on a single Nvidia GTX 1080 Ti Founders Edition video card with 11 GB of VRAM and 3584 Nvidia CUDA cores.
\subsection{Loss}
The loss function of the generator is made of three components. As in the SRGAN prescription, these losses can be broken up into two classes: (i) the adversarial loss and (ii) the content loss.
\subsubsection{Adversarial Loss}
The adversarial loss is based on the discriminator's ability to differentiate the generator's samples from true data samples. In essence, this part of the loss ensures the generator's samples lie on the galaxy image manifold (i.e. they look convincingly like galaxies). We define $p_{\text{data}}$ and $p_{z}$ as the probability distributions over the pre-blended and blended image sets, respectively, wherein $x$ is a single pre-blended image and $z$ is a single blended image.
\begin{ceqn}
\begin{equation}
l_{ADV}= \mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]
\end{equation}
\end{ceqn}
\begin{ceqn}
\begin{align}
\hat{\theta}_G &= \argmin_{G} l_{ADV}(D,G)\\
\hat{\theta}_D &= \argmax_{D} l_{ADV}(D,G)
\end{align}
\end{ceqn}
In deblending use cases, this loss alone is insufficient since we aim to faithfully deblend an image into its exact components (not simply any two images that look like galaxy images). To ensure this, we employ content losses.
\subsubsection{Content Loss}
We compute the pixel-wise mean squared error (MSE) between the generator's sample ($I^{DB}$) and the corresponding ground truth images ($I^{PB}$). This is equivalent to maximizing the effective log-likelihood of the data (in this case pixel intensities) conditional upon the network parameters given a Gaussian likelihood function. This is justified by a combination of the central limit theorem and the observation that pixel intensities arise from a large number of uncertain physical factors. Thus, for images of pixel width W and pixel height H, the pixel-wise mean squared error is given by the following.
\begin{ceqn}
\begin{equation}
l_{MSE} = \frac{1}{WH} \sum_{x=1}^W \sum_{y=1}^H \big[ I_{x,y}^{PB} - G_{\theta_G}(I^{BL})_{x,y} \big]^2
\end{equation}
\end{ceqn}
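In code, this content loss is a single line; a NumPy sketch (for multi-channel images the mean below also averages over channels, a common implementation choice):

```python
import numpy as np

def pixelwise_mse(i_pb, i_db):
    """Mean squared error between a ground-truth pre-blended image I^PB
    and the generator's deblended output; for an (H, W) image np.mean
    supplies the 1/(WH) normalization of the equation above."""
    i_pb = np.asarray(i_pb, dtype=float)
    i_db = np.asarray(i_db, dtype=float)
    return np.mean((i_pb - i_db) ** 2)
```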
The second component of the content loss uses deep feature maps of the pre-trained VGG19 19-layer convolutional neural network \citep{VGG}. We chose the activations of the deepest convolutional layer (corresponding to the fourteenth overall layer of VGG19). The ground truth and generator output images are passed through the VGG19 network and their corresponding feature maps at the last convolutional layer are extracted. We compute the pixel-wise mean squared error between these feature maps and include this in the total generator loss function. For feature maps of pixel width W and pixel height H, the VGG mean squared error loss is computed as follows.
\begin{ceqn}
\begin{equation}
l_{VGG} = \frac{1}{WH} \sum_{x=1}^W \sum_{y=1}^H \big[VGG_{14}(I^{PB})_{x,y} - VGG_{14}(G_{\theta_G}(I^{BL}))_{x,y}\big]^2
\end{equation}
\end{ceqn}
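The same computation applies with a feature extractor in front. A sketch with the extractor left pluggable: in the paper it is the deepest convolutional layer of a frozen pre-trained VGG19, but any callable works here, which also makes the loss easy to test with a stand-in:

```python
import numpy as np

def feature_mse(phi, i_pb, i_db):
    """Mean squared error between feature maps phi(I^PB) and phi(G(I^BL)).

    phi -- a callable mapping an image to feature maps; a stand-in for
           the fourteenth-layer activations of VGG19.
    """
    f_pb = np.asarray(phi(i_pb), dtype=float)
    f_db = np.asarray(phi(i_db), dtype=float)
    return np.mean((f_pb - f_db) ** 2)
```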
In the original SRGAN implementation, the VGG feature maps were used as a content loss which resulted in perceptually satisfying images at the cost of exact similarity to the ground truth image. For our purposes, we employ the VGG loss as a ``burn-in'' loss function to break out of otherwise difficult to escape local minima. In the case of galaxy images, these local minima resulted in entirely black images for the first few hundred thousand batches. Using the VGG loss, we were able to escape the local minima in fewer than a thousand batches. To explain this improvement, note that entirely black images share much in common with images of galaxies in the dark surrounding sky, but those same flat dark images have no meaningful structural features. Because the VGG loss sets the target as deep layers of a CNN trained for image processing tasks, it effectively measures and penalizes the lack of pattern found in the predominantly black images.
Following the SRGAN prescription, we scale the adversarial loss by $10^{-3}$ to approximately match its magnitude to the mean squared error content loss. We used a discriminator-to-generator training ratio of one: for each iteration, both the discriminator's parameters and the generator's parameters were updated once. It should be noted that we have to compute the MSE and adversarial losses twice---once for each image output of the generator. The total loss is the average of the losses computed for each output image.
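Putting the pieces together, the generator objective for one output branch is a weighted sum; a sketch (the $10^{-3}$ weight follows the SRGAN recipe adopted above; the helper is ours):

```python
def generator_loss(l_mse, l_vgg, l_adv, adv_weight=1e-3):
    """Total generator loss: the content terms plus the scaled
    adversarial term. In training this is computed for each of the two
    output branches and the two values are averaged."""
    return l_mse + l_vgg + adv_weight * l_adv
```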
The composite loss function for the generator may be interpreted as a local \textit{maximum a posteriori} (MAP) estimator of the generator's parameters. The data likelihood is fixed at each training step for updating the generator network's weights, but the prior distribution is non-stationary. The discriminator is the stand-in prior at each local MAP step, and becomes more representative of our true prior while it is jointly trained. It is extremely challenging to hand-engineer the features that describe what makes a galaxy appear real, so we train competitively to approximate the ideal prior. The discriminator learns to better discern which weight configuration in the generator yields a mapping that creates realistic images. From a Bayesian perspective, the adversarial term can therefore be interpreted as a moving log-prior assigning more probabilistic weight to generators that can fool the increasingly trained discriminator.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{discriminator.png}
\caption{Deep convolutional network architecture for the deblender GAN discriminator. All convolutions use a kernel size of (3, 3) whereas strides alternate between (2, 2) and (1, 1). The number of convolutional blocks is a hyperparameter though it is constrained by the requirement that images maintain integer dimensions given that non-unity convolutional strides reduce this dimensionality by a factor of the stride assuming zero padding. Although all convolutional blocks share a common structure, here we only explicitly show the inner-workings of the first.}
\label{fig:discriminator}
\end{figure*}
\subsection{Data}
As noted in \cite{Blends}, large near-future surveys such as LSST will encounter three classes of blends: \textit{ambiguous} blends, \textit{conspicuous} blends and \textit{innocuous} blends. Here, we focus primarily on the conspicuous (confirmed blends) and innocuous (near blends) classes. Future work may include classifying ambiguous blends.
We used 141,553 images available from Galaxy Zoo \citep{GalaxyZoo} via the Kaggle Galaxy Zoo classification challenge. These are galaxy images from the sixth data release \citep{DR6} of the Sloan Digital Sky Survey (SDSS). SDSS is a survey covering nearly 26 percent of the sky and capturing photometric data in five filters, though the images used here were composite images of the $g$, $r$ and $i$ bands. Each image is of standard (i.e. non-extended) JPEG format and therefore has a bit depth of 8 bits per color channel. According to \cite{GalaxyZoo}, the images were cropped such that the resolution was $0.024R_p$ arc-second per pixel with $R_p$ the Petrosian radius of the system.
Each image is originally of dimension (H, W, C) = (424, 424, 3) where H, W and C are the height, width and channel dimensions, respectively. We first crop the center of these images to dimension (H, W, C) = (240, 240, 3) and then downsample them using a bicubic sharpener to (H, W, C) = (80, 80, 3). Leaving one image centered, we then perturb the second image by flipping it horizontally and vertically, each according to a Bernoulli distribution with $p=0.5$, rotating it uniformly on the interval $\theta \in [0, 2\pi]$, displacing it horizontally and vertically uniformly on the interval $dx, dy \in [10, 50]$ pixels and finally scaling it log-uniformly on the interval $r \in [1/e, \sqrt{e}]$ where $r$ is the scale ratio. These images serve as our pre-blended ground truth images ($I^{PB}$); we scale their pixel values to the interval $I \in [-1, 1]$.
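The random draws for the off-center galaxy can be sketched directly (parameter names are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_perturbation():
    """Draw the perturbation parameters applied to the off-center galaxy:
    Bernoulli flips, a uniform rotation angle, uniform pixel shifts and
    a log-uniform scale ratio on [1/e, sqrt(e)]."""
    return {
        "flip_h": rng.random() < 0.5,              # Bernoulli(p = 0.5)
        "flip_v": rng.random() < 0.5,
        "theta": rng.uniform(0.0, 2.0 * np.pi),    # rotation angle
        "dx": int(rng.integers(10, 51)),           # horizontal shift in pixels
        "dy": int(rng.integers(10, 51)),           # vertical shift in pixels
        "scale": float(np.exp(rng.uniform(-1.0, 0.5))),  # log-uniform ratio
    }
```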
The images are then blended by selecting the pixelwise max between the two images. This blending prescription was inspired by the catalog of true blended Galaxy Zoo images presented in \cite{BlendCatalog}. We find that pixelwise max blended images closely match selections from the catalog of true blends though we discuss improvements on this schema later on (see Discussion section below). We scale the blended image pixel values to the interval $I \in [0, 1]$ as recommended in the SRGAN paper. These blended images are realizations of the latent distribution of images $(I^{BL})$.
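The blending step is then a pixelwise maximum followed by the rescale to $[0, 1]$; a sketch (the linear rescale from $[-1, 1]$ is our reading of the intervals stated above):

```python
import numpy as np

def blend(img_a, img_b):
    """Blend two pre-blended images (pixel values in [-1, 1]) by the
    pixelwise maximum, then map the intensities linearly onto [0, 1]."""
    blended = np.maximum(img_a, img_b)
    return (blended + 1.0) / 2.0
```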
Note that it is a necessary requirement for our method that the image is centered upon one galaxy; this spatial information informs the independent behavior of each branch. During training, the network learns that one branch corresponds to the deblended central galaxy, while the other branch corresponds to the deblended off-center galaxy. Without such a feature (which should always be possible in true blends with a simple image crop), the network may be confounded by which branch to assign to which galaxy, and for example create the same deblended galaxy image in each branch. By using a principled centering scheme, this problem is largely diminished. This scheme should be easily extended to more than two layers by deblending the center galaxy from all other galaxies, then sending the deblended output centered on a remaining galaxy back into the generator.
\subsection{Training}
During training, the generator network is presented with a batch of blended inputs ($I^{BL}$); we use a batch size of 16 samples. The generator predicts the deblended components ($I^{DB}$) of each blended input. The discriminator is given both a batch of generated deblended images and their corresponding preblended ground truth images. The discriminator updates its parameters to maximize its discriminative ability between the two distributions. Mathematically speaking, minimization of the adversarial loss is equivalent to minimization of the Jensen-Shannon divergence between the generated distribution $(I^{DB})$ and the data distribution $(I^{PB})$ whereas minimization of the mean squared error term equates to maximization of an effective Gaussian likelihood over pixel intensities. In reality, the generator is trained to minimize the sum of these components by averaging the gradient signals from each. This is done via stochastic gradient descent on the network's high-dimensional error manifold defined by its compound loss function with respect to its learnable network parameters.
Both the generator and discriminator loss functions are optimized in high-dimensional neural network weight space via the Adam optimizer (adaptive moment estimation) \citep{Adam}, a method for stochastic optimization which combines classical momentum with RMSProp. With momentum, Adam uses past gradients to influence current gradient-based steps, accelerating learning in valleys of the error manifold. Following the running-average style of RMSProp, it emphasizes the most recent gradients while decaying influence from much earlier ones, hence adapting better to the present loss surface curvature.
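For concreteness, a single Adam update can be written in a few lines of numpy (the standard Kingma \& Ba formulation sketched for illustration, not a substitute for the framework optimizer):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: a momentum-style first moment plus an
    RMSProp-style running average of squared gradients, each with
    bias correction."""
    m = b1 * m + (1 - b1) * grad           # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (RMSProp-like)
    m_hat = m / (1 - b1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```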
The steps down the error manifold are further scaled by a hyperparameter referred to as the learning rate (although the true rate is also naturally scaled by adaptive momentum). We choose an initial learning rate of $10^{-4}$. After 100,000 update iterations (1.6 million individual images or five epochs), we decrease the learning rate by an order of magnitude to $10^{-5}$. We then train for another 100,000 update iterations after which the network parameters are saved and can be used for inference on unseen images.
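The piecewise-constant schedule described above is trivially expressed as:

```python
def learning_rate(step):
    """1e-4 for the first 100k update iterations, then 1e-5 for the
    next 100k, as described in the text."""
    return 1e-4 if step < 100_000 else 1e-5
```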
\section{Results}
To evaluate our model, we infer upon blended images withheld from the training dataset. For these blends, we have access to the true preblended images and therefore are able to quote relevant metrics for image comparison (see below). It should be noted that the galaxies which make up the testing set blended images have also been withheld in the creation of the training set. That is, the deblender GAN has never seen this particular blend nor the underlying individual galaxies during training. A selection of results is presented in \autoref{fig:successes} and \autoref{fig:failures} with a more extensive catalog of predictions in a GitHub repository available at \href{https://github.com/davidreiman/deblender-gan-images}{https://github.com/davidreiman/deblender-gan-images}.
We select two image comparison metrics for determining the quality of the deblended images in relation to the ground truth images: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
The peak signal-to-noise ratio is often used as an evaluation metric for compression algorithms. It is a logarithmic measure of the similarity between two images, in our case the ground truth preblended image ($I^{PB}$) and the deblended re-creation ($I^{DB}$). The equation for PSNR is written in terms of the maximum pixel intensity (MAX) of the truth image and the mean squared error (MSE) between the test and ground truth images.
\begin{ceqn}
\begin{equation}
\text{PSNR} = 20\log_{10}(\text{MAX}) - 10\log_{10} (\text{MSE})
\end{equation}
\end{ceqn}
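In code, the metric reads as a direct numpy transcription of the equation above:

```python
import numpy as np

def psnr(truth, test):
    """PSNR = 20*log10(MAX) - 10*log10(MSE), with MAX the peak pixel
    intensity of the ground-truth image."""
    mse = np.mean((truth - test) ** 2)
    return 20.0 * np.log10(truth.max()) - 10.0 * np.log10(mse)
```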
The structural similarity index \citep{SSIM} is a method for evaluating the perceptual quality of an image in relation to an unaltered, true image. It was invented as an alternative to PSNR and MSE methods which measure error. Instead, SSIM measures the structural information in an image where structural information is understood in terms of pixel-wise correlations. SSIM scores are measured via sliding windows of user-defined width in which the means ($\mu_x$, $\mu_y$), variances ($\sigma_x^2$, $\sigma_y^2$) and covariance ($\sigma_{xy}$) of pixel intensities are computed in the same windowed region of each image being compared, the window in one image denoted $x$ and the other denoted $y$. Included are the constants $c_1$ and $c_2$ which stabilize the division to avoid undefined scores---here we use $c_1=0.01$ and $c_2=0.03$. A single SSIM score between two images is the mean of all windowed SSIM values across the extent of the images.
\begin{ceqn}
\begin{equation}
\text{SSIM} = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
\end{equation}
\end{ceqn}
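A simplified numpy transcription of windowed SSIM follows, using the constants quoted above (non-overlapping windows for brevity; standard implementations slide the window pixel by pixel and often apply Gaussian weighting):

```python
import numpy as np

def ssim(x, y, win=7, c1=0.01, c2=0.03):
    """Mean of windowed SSIM scores over two grayscale images."""
    scores = []
    h, w = x.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            wx, wy = x[i:i + win, j:j + win], y[i:i + win, j:j + win]
            mx, my = wx.mean(), wy.mean()
            vx, vy = wx.var(), wy.var()
            cov = ((wx - mx) * (wy - my)).mean()
            scores.append(((2 * mx * my + c1) * (2 * cov + c2))
                          / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
    return float(np.mean(scores))
```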
We tested on 3,200 blends withheld from the training set. This corresponds to 6,400 individual galaxy images. We computed 6,400 PSNR and SSIM scores and quote their mean, median, minimum, maximum and variance in \autoref{metric-table}. Their full distributions are displayed in \autoref{psnr-pdf} and \autoref{ssim-pdf}.
\begin{figure*}
\includegraphics[width=0.75\textwidth]{success-5-9.png}
\includegraphics[width=0.75\textwidth]{success-16-14.png}
\includegraphics[width=0.75\textwidth]{success-14-8.png}
\caption{}
\end{figure*}
\clearpage
\begin{figure*}
\ContinuedFloat
\includegraphics[width=0.75\textwidth]{success-36-25.png}
\includegraphics[width=0.75\textwidth]{success-32-7.png}
\includegraphics[width=0.75\textwidth]{success-0-4.png}
\caption{}
\end{figure*}
\clearpage
\begin{figure*}
\ContinuedFloat
\includegraphics[width=0.75\textwidth]{success-2-14.png}
\includegraphics[width=0.75\textwidth]{success-7-30.png}
\includegraphics[width=0.75\textwidth]{success-97-17.png}
\caption{A selection of successful deblender GAN predictions. On the left side of each panel are two preblended Galaxy Zoo images ($I^{PB}$) which were superimposed to create the central blended input image. The trained generator's deblended predictions ($I^{DB}$) are on the right of each panel. Superimposed upon the deblended predictions are the associated PSNR scores between the deblended image and its related preblended image. A variety of galaxy morphologies are represented here---in each case, our deblender GAN successfully segments and deblends the foreground galaxy while imputing the most likely flux for the occluded pixels in the background galaxy.}
\label{fig:successes}
\end{figure*}
\clearpage
The deblender GAN performs impressively with respect to both metrics. For reference, values of PSNR above $30$ dB are considered acceptable for lossy image compression. Considering that these images were completely reconstructed out of a blended image (rather than compressed), the scores set a high benchmark for deblending. It should be noted, however, that comparing these PSNR and SSIM scores directly to those of natural images is likely in error. Many galaxy images are dominated by a black background which can bias both the mean squared error and pixelwise correlations upward and thereby inflate the resulting PSNR and SSIM scores. Still, we quote our scores as a benchmark for comparison with future iterations of our own model and collation with alternative deblending algorithms working with a similar dataset and image format.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{psnr-pdf-final.png}
\caption{Distribution of PSNR scores on the testing set. Deblended images achieve mean PSNR scores of $34.61$ dB with a standard deviation of approximately $\sigma = 2.2$.}
\label{psnr-pdf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{ssim-pdf-final-2.png}
\caption{Distribution of SSIM scores on the testing set. Deblended images achieve a maximum structural similarity of $0.982$ with a standard deviation of approximately $\sigma = 0.032$.}
\label{ssim-pdf}
\end{figure}
\begin{table}
\begin{tabular}{rccccc}
\hline
\multicolumn{1}{l}{Imaging Metrics} & Mean & Median & Max & Min & \multicolumn{1}{l}{Var} \\ \hline
PSNR (dB) & 34.61 & 34.83 & 40.45 & 21.34 & 4.88 \\
SSIM & 0.928 & 0.934 & 0.982 & 0.576 & 0.001 \\ \hline
\end{tabular}
\caption{Summary statistics of the PSNR and SSIM score distributions. Maximum PSNR and SSIM scores are comparable with high-quality compression algorithms ($\text{PSNR} \in [30, 50] \text{ dB}$).}
\label{metric-table}
\end{table}
\section{Discussion}
We've shown promising initial results for the application of a novel generative adversarial network architecture to the problem of galaxy deblending. Deep generative models are natural solutions to deblending for their ability to infill missing pixel values behind blends using information learned about the distribution of natural images provided during training. The discriminative loss ensures that all deblended images lie on the natural image manifold of galaxies. In addition, the branched structure allows our model to identify separate galaxies and segment them accordingly without human intervention or labeling; our approach only requires that one of the galaxies lie at the center of the image. Our branched GAN naturally handles the large incoming quantities of data from surveys like LSST and DES, deblending images near-instantaneously.
We've used the peak signal to noise ratio (PSNR) and structural similarity index (SSIM) as metrics of quality in the segmentation. Though we've found no other applications of deep learning to the problem of deblending, we have quoted our PSNR and SSIM scores here as a benchmark for future comparisons with our own branched GAN revisions and with alternative deblending algorithms.
We encountered a variety of issues while training our branched deblender GAN. Most notable of these is the tendency to fall into local minima near the start of training. Since galaxy images are largely black space, the GAN learns that it can initially trick the discriminator by making all-black images. We broke out of these minima by including a ``burn-in'' period of a few thousand batches using only a mean squared error loss between the deep activations of the pre-trained VGG19 network. Another solution to this issue may be to pre-train the discriminator, though there exists the possibility of mode collapse wherein the generator and discriminator performance are highly unequal and learning ceases.
There is great room for improvement in our branched GAN. Our blending schema was chosen to match the curated catalog of true blends in the Galaxy Zoo catalog presented in \cite{BlendCatalog} though it certainly could be improved upon. In truth, galaxy blends consist of a combination of pixelwise sum (generally in more diffuse regions) and pixelwise max/min (generally where dense regions of the foreground galaxy obstruct the background galaxy flux). Pure pixelwise max blending tends to create unrealistic blended images where low flux foreground galaxies fall upon bright regions of the background galaxy---this generally leads to blends with artificially incomplete foreground galaxies which have been ``cutoff'' by the high flux pixels of the background images. Images of this type bias our estimates of both PSNR and SSIM scores. Indeed, pixelwise max blending is in some sense the worst case for deblending wherein precisely zero information about the background galaxy is encoded in the pixel intensities in regions where the foreground galaxy eclipses it. On the other hand, a variety of generated blends exhibit artificially hard lines between overlapping galaxies which likely makes the galaxies easier to deblend. Moreover, the presence of unmasked background galaxies in the Galaxy Zoo images gives rise to artificial penalties in the mean squared error loss function when they are assigned to the incorrect deblended galaxy image. We also note a slight misalignment of the RGB channels of the Galaxy Zoo images which gives rise to unnatural color gradients---a feature that our network learned to reproduce. We propose to address issues in our blending schema and issues in data creation in general in future work (see below).
\begin{figure*}
\includegraphics[width=0.75\textwidth]{fail-2-1.png}
\includegraphics[width=0.75\textwidth]{fail-92-25.png}
\includegraphics[width=0.75\textwidth]{test-73-24.png}
\caption{}
\end{figure*}
\clearpage
\begin{figure*}
\ContinuedFloat
\includegraphics[width=0.75\textwidth]{fail-47-4.png}
\includegraphics[width=0.75\textwidth]{fail-27-9.png}
\includegraphics[width=0.75\textwidth]{fail-12-10.png}
\caption{A selection of failed deblender GAN predictions. There exists a variety of failure modes in the deblender GAN predictions: (1) Incomplete deblends wherein artifacts of one galaxy are left in the prediction for the other galaxy. (2) Incomplete galaxy generation due to our blending prescription. In selecting the pixelwise maximum value during the blending process, foreground galaxies that eclipse bright regions like the core of the background galaxy often are non-realistically ``cutoff'' by the overwhelming background flux. This leaves our network very little to predict from and often results in predictions that appear to be only a part of the associated preblended galaxy. (3) Associating background galaxies with the incorrect blended galaxy. This negatively biases the PSNR scores for otherwise acceptable predictions---a representative example is given in the last image of the above figure. (4) Deblending artifacts: in a handful of predictions, we notice random artifacts of unnatural color such as the green aberration in the second-to-last image above; a possible explanation being that a bright celestial object in the same relative position to the galaxy appears in neighboring galaxy images of the galaxy image manifold.}
\label{fig:failures}
\end{figure*}
\clearpage
The deblender model presented herein would likely benefit from multiband input data spanning a larger range of the electromagnetic spectrum. Deep neural networks have proven powerful tools for estimating photometric redshifts from multiband galaxy images alone with no manual feature extraction \citep{DeepPhotoZ}. The ability to learn approximate photometric redshift estimates from input images would likely strengthen the deblending performance of our network. To this end, future iterations of our deblender GAN will utilize multiband inputs.
Moreover, our model is restricted solely to inference on nearby (low-redshift) galaxies. In deep learning, we assume that examples from the training and testing sets are drawn from the same distribution. For our deblender, this limits feasible test images to those composed of galaxies from the same redshift and mass bin as the training images. It is commonly understood that galaxy morphology evolves with redshift and therefore galaxy images at high-redshift will look largely different than nearby galaxies \citep{MorphEvolution}, i.e. they belong to a different image manifold. Though the galaxy distributions are distinct, the distribution of pixel intensities in the images shares a great deal of similarities such as dark backgrounds populated by luminous, diffuse objects. This observation makes transfer learning an apt area of interest, a topic we will explore in future work.
Also, in future work we will apply our branched deblender GAN to data generated from GalSim \citep{GalSim} in which we can robustly blend any number of galaxies and apply a variety of PSFs. With this, we hope to generalize our results to galaxy images from a wide variety of potential surveys and thereby salvage an increasing number of blended images which would otherwise be unused in the study of galaxies and galaxy evolution. Moreover, we plan to implement a variable number of GAN branches, allowing any blended object to be deblended after estimating the number of objects involved in the blend. An ultimate goal is the full deep learning pipeline from source identification to blending classification and deblending.
\section*{Acknowledgements}
The authors would like to thank Shawfeng Dong for offering his compute resources to train and test our model. We would also like to thank Joel Primack and Doug Hellinger for their helpful feedback on a draft of this paper. In addition, we thank David Koo, Sandra Faber, Tesla Jeltema, and Spencer Everett for many insightful discussions and recommendations.
The authors would also like to acknowledge the Sloan Digital Sky Survey for providing the data used herein.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
Finally, we'd like to thank the Galaxy Zoo team for their efforts in providing a neatly curated dataset of SDSS galaxy images to the astronomy community.
\bibliographystyle{mnras}
\section{Introduction}
Tuberculosis (TB) is one of the most common infectious diseases worldwide \cite{world2010treatment}. Although the mortality rate caused by TB has declined in recent years, single- and multi-drug resistance has become a major threat to quick and effective TB treatment. Studies have revealed that predicting the outcome of a treatment is a function of many patient-specific factors for which collection of multimodal data covering clinical, genomic, and imaging information about the patient has become essential \cite{munoz2010factors}. However, it is not clear what information is best captured in each modality and how best to combine them. For example, genomic data can reveal the genetic underpinnings of drug resistance and identify genes/mutations conferring drug-resistance \cite{manson2017genomic}. Although imaging data from X-ray or CT can show statistical difference for drug resistance, they alone may be insufficient to differentiate multi-drug resistant TB from drug-sensitive TB \cite{wang2018radiological}. Thus it is important to develop methods that allow simultaneously extraction of relevant information from multiple modalities as well as ways to combine them in an optimal fashion to lead to better outcome prediction.
Recent efforts to study the fusion problem for outcome prediction in TB have focused on a single outcome such as treatment failure or used only a limited number of modalities such as clinical and demographic data \cite{sauer2018feature,asad2020machine}. In this paper, we take a comprehensive approach by treating the outcome prediction problem as a multiclass classification for multiple possible outcomes through multimodal fusion. We leverage more extensive modalities beyond clinical, imaging, or genomic data, including novel features extracted via advanced analysis of protein domains from genomic sequence as well as deep learning-derived features from CT images as shown in Fig.~\ref{TB Results}.(a). Specifically, we develop a novel fusion framework using multiplexed graphs to capture the information from modalities and derive a new graph neural network for learning from such graphs. The framework represents modalities through their targeted encodings, and models their relationship via multiplexed graphs derived from projections in a latent space.
Existing approaches often infer matrix or tensor encodings from individual modalities ~\cite{lahat2015multimodal} combined with early, late, or intermediate fusion~\cite{subramanian2020multimodal,baltruvsaitis2018multimodal,Wang2021} of the individual representations. Example applications include CCA for speaker identification~\cite{sargin2006multimodal}, autoencoders for video analytics~\cite{vu2017multimodal}, transformers for VQA~\cite{kant2020spatially}, etc. In contrast, our approach allows for the modalities to retain their individuality while still participating in exploring explicit relationships between the modality features through the multiplexed framework. Specifically, we design our framework to explicitly model relationships within and across modality features via a self-supervised multi-graph construction and design a novel graph neural network for reasoning from these feature dependencies via structured message passing walks. We present results which show that by relaxing the fusing constraints through the multiplex formulation, our method outperforms state-of-the-art methods of multimodal fusion in the context of multi-outcome prediction for TB treatments.
\section{A Graph Based Multimodal Fusion Framework}
As alluded to earlier, exploring various facets of cross-modal interactions is at the heart of the multimodal fusion problem. To this end, we propose to utilize the representation learning theory of multiplexed graphs to develop a generalized framework for multimodal fusion. A multiplexed graph~\cite{Cozzo2018} is a type of multigraph in which the nodes are grouped into multiple planes, each representing an individual edge-type. The information captured within a plane is multiplexed to other planes through diagonal connections as shown in Fig.~\ref{Multiplex_Formulation}. Mathematically, we define a multiplexed graph as: $\mathcal{G}_{\text{Mplex}} = (\mathcal{V}_{\text{Mplex}},\mathcal{E}_{\text{Mplex}})$, where $\vert{\mathcal{V}_{\text{Mplex}}}\vert= \vert{\mathcal{V}}\vert \times K$ and $\mathcal{E}_{\text{Mplex}} = \{(i,j) \in \mathcal{V}_{\text{Mplex}} \times \mathcal{V}_{\text{Mplex}}\}$. There are $K$ distinct types of edges which can link two given nodes. Analogous to ordinary graphs, we have $K$ adjacency matrices $\mathbf{A}_{(k)} \in \mathcal{R}^{P \times P}$, where $P=\vert{\mathcal{V}}\vert$, each summarizing the connectivity information given by the edge-type $k$. The elements of these matrices are binary: $\mathbf{A}_{(k)}[m,n] = 1$ if there is an edge of type $k$ between nodes $m,n \in \mathcal{V}$.
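A toy numpy illustration of these definitions, with $P=4$ features and $K=2$ planes (the particular edges are arbitrary):

```python
import numpy as np

P, K = 4, 2                           # toy sizes: P features, K planes
A = np.zeros((K, P, P), dtype=int)    # one binary adjacency per plane
A[0][np.ix_([0, 1], [0, 1])] = 1      # plane 0 links features 0 and 1
A[1][np.ix_([1, 2, 3], [1, 2, 3])] = 1
for k in range(K):
    np.fill_diagonal(A[k], 0)         # no self loops

n_supra = P * K                       # each node is copied into every plane
```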
\begin{figure}[t!]
\begin{center}
\centerline{{\includegraphics[scale= 0.25]{paper_2619_figs/Model_Figure.png}}}
\caption{\small{Graph Based Multimodal Fusion for Outcome Prediction. \textbf{Blue Box:} Incoming modality features are concatenated into a feature vector (of size P=396) and projected into a common latent space (of size K=32). Salient activations in the latent space are used to form the planes of the multiplexed graph. \textbf{Green Box:} The multiplexed GNN uses message passing walks to combine latent concepts for inference.}}
\label{Multiplex_Formulation}
\end{center}
\end{figure}
\paragraph{\textbf{Multimodal Graph Representation Learning:}} While the multiplexed graph has been used for various modeling purposes in literature~\cite{kivela,manlio,Ferriani2013,Maggioni2013}, we propose to use it for multimodal fusion of imaging, genomic and clinical data for outcome prediction in TB. We adopt the construction shown in the Blue Box in Fig.~\ref{Multiplex_Formulation} to produce the multiplexed graph from the individual modality features. First, domain specific autoencoders (d-AE) are used to convert each modality into a compact feature space that can provide good reconstruction using Mean Squared Error (MSE). To capture feature dependencies across modalities, the concatenated features are brought to a common low dimensional subspace through a common autoencoder (c-AE) trained to reconstruct the concatenated features. Each latent dimension of the autoencoder captures an abstract aspect of the multimodal fusion problem, e.g. features projected to be salient in the same latent dimension are likely to form meaningful joint patterns for a specific task, and form a ``conceptual'' plane of the multiplexed graph. The $\vert{\mathcal{V_{\text{Mplex}}}}\vert$ ``supra-nodes'' of $\mathcal{G}_{\text{Mplex}}$ are produced by creating copies of features (i.e. nodes) across the planes. The edges between nodes in each plane represent features whose projections in the respective latent dimensions were salient (see section \ref{exp:graph_construction} for details). Further, each plane is endowed with its own topology and is a proxy for the correlation between features across the corresponding latent dimension. This procedure helps model the interactions between the various modality features in a principled fashion. We thus connect supra-nodes within a plane to each other via the intra-planar adjacency matrix $\mathbf{A}_{(k)}$, allowing us to traverse the multi-graph according to the edge-type $k$. 
We also connect each supra-node with its own copy in other planes via diagonal connections, allowing for inter-planar traversal.
\paragraph{\textbf{Outcome Prediction via the Multiplexed GNN:}}
We develop a novel graph neural network for outcome prediction from the multiplexed graph (Green Box in Fig.~\ref{Multiplex_Formulation}). Graph Neural Networks (GNN) are a class of representation learning algorithms that distill connectivity information to guide a downstream inference task \cite{scarselli2008graph}. A typical GNN schema comprises two components: (1) a message passing scheme for propagating information across the graph and (2) task-specific supervision to guide the representation learning. For ordinary graphs, the adjacency matrix $\mathbf{A}$ and its matrix powers allow us to keep track of neighborhoods (at arbitrary $l$ hop distance) within the graph during message passing. Conceptually, cascading $l$ GNN layers is analogous to pooling information at each node $i$ from its $l$-hop neighbors that can be reached by a walk starting at $i$. The Multiplex GNN is designed to mirror this behavior.
The \textit{intra-planar adjacency matrix} $\boldsymbol{\mathcal{A}}\in \mathcal{R}^{PK \times PK} $, and the \textit{inter-planar transition control matrix} $\hat{\boldsymbol{\mathcal{C}}} \in \mathcal{R}^{PK \times PK}$ \cite{Cozzo2018} define walks on the multiplex $\mathcal{G}_{\text{Mplex}}$.
\begin{equation}
\boldsymbol{\mathcal{A}} = \bigoplus_{k}\mathbf{A}_{(k)} \ \ \ \ \ ; \ \ \ \ \
\hat{\boldsymbol{\mathcal{C}}} = [\mathbf{1}_{K}\mathbf{1}_{K}^{T}] \otimes \boldsymbol{\mathcal{I}}_{P} \label{ILtrans}
\end{equation}
where $\bigoplus$ is the direct sum operation, $\otimes$ denotes the Kronecker product, $\mathbf{1}_{K}$ is the $K$ vector of all ones, and $\boldsymbol{\mathcal{I}}_{P}$ denotes the identity matrix of size $P \times P$. Thus $\boldsymbol{\mathcal{A}}$ is block-diagonal by construction and captures within-plane transitions across supra-nodes. Conversely, $\hat{\boldsymbol{\mathcal{C}}}$ has identity matrices in each of its $P \times P$ blocks, with the off-diagonal blocks encoding inter-planar transitions. This implicitly restricts across-plane transitions to be between supra-nodes which arise from the same multi-graph node (i.e. $i$ and $P(k-1)+i$ for $k \in \{1,\dots, K\}$). Since supra-nodes across planes can already be reached by combining within- and across-planar transitions, this provides comparable representational properties at a reduced complexity ($\mathcal{O}(PK)$ inter-planar edges instead of $\mathcal{O}(P^2K)$).
A walk on $ \mathcal{G}_{\text{Mplex}}$ combines within and across planar transitions to reach a supra-node $j\in \mathcal{V}_{\text{Mplex}}$ from a given supra-node $i \in \mathcal{V}_{\text{Mplex}}$. $\boldsymbol{\mathcal{A}}$ and $\hat{\boldsymbol{\mathcal{C}}}$ allow us to define multi-hop transitions on the multiplex in a convenient factorized form. A multiplex walk proceeds according to two types of transitions \cite{Cozzo2018}: (1) A single intra-planar step or (2) A step that includes both an inter-planar step moving from one plane to another (this can be before or after the occurrence of an intra-planar step). To recreate these transitions exhaustively, we have two supra-walk matrices. $\boldsymbol{\mathcal{A}}\hat{\boldsymbol{\mathcal{C}}}$ encodes transitions where \textit{after} an intra-planar step, the walk \textit{can} continue in the same plane or transition to a different plane (Type I). Similarly, using $\hat{\boldsymbol{\mathcal{C}}}\boldsymbol{\mathcal{A}}$, the walk \textit{can} continue in the same plane or transition to a different plane \textit{before} an intra-planar step (Type II).
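The supra matrices of Eq.~(\ref{ILtrans}) and the two supra-walk matrices can be reproduced directly in numpy (a sketch under the definitions above; variable names are ours):

```python
import numpy as np

def supra_matrices(A_planes):
    """Build the intra-planar supra-adjacency (direct sum of the
    per-plane adjacencies) and the inter-planar transition control
    matrix C_hat = (1_K 1_K^T) kron I_P, which places an identity
    block between every pair of planes."""
    K, P, _ = A_planes.shape
    A_supra = np.zeros((P * K, P * K))
    for k in range(K):
        A_supra[k * P:(k + 1) * P, k * P:(k + 1) * P] = A_planes[k]
    C_hat = np.kron(np.ones((K, K)), np.eye(P))
    return A_supra, C_hat

# Type I and Type II supra-walk matrices:
#   W1 = A_supra @ C_hat  (intra-planar step, then optional plane switch)
#   W2 = C_hat @ A_supra  (optional plane switch, then intra-planar step)
```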
\paragraph{\textbf{Message Passing Walks:}} Let $\mathbf{h}^{l}_{i} \in \mathcal{R}^{D^{l}\times 1}$ denote the (supra)-node representation for (supra)-node $i$. In matrix form, we can write $\mathbf{H}^{(l)} \in \mathcal{R}^{\vert{\mathcal{V}_{\text{Mplex}}}\vert \times D^{l}}$, with $\mathbf{H}^{(l)}[i,:] = \mathbf{h}^{(l)}_{i}$. We then compute this via the following operations:
\begin{eqnarray}
\mathbf{h}_{i,I}^{(l+1)} = \boldsymbol{\phi}_{I}\Big(\{\mathbf{h}^{(l)}_{j}, j: [\boldsymbol{\mathcal{A}}\hat{\boldsymbol{\mathcal{C}}}][i,j] = 1 \}\Big) \ \ ; \ \ \
\mathbf{h}_{i,II}^{(l+1)} = \boldsymbol{\phi}_{II}\Big(\{\mathbf{h}^{(l)}_{j}, j: [\hat{\boldsymbol{\mathcal{C}}}\boldsymbol{\mathcal{A}}][i,j] = 1 \}\Big) \nonumber \\
\mathbf{h}^{(l+1)}_{i} = f_{\text{concat}}(\mathbf{h}^{(l+1)}_{i,I},\mathbf{h}^{(l+1)}_{i,II}) \ \ \ \ \ ; \ \ \ \ \ f_{\text{o}}(\{\mathbf{h}^{(L)}_{i}\}) = \mathbf{Y} \ \ \ \ \ \ \ \ \ \label{Mplex_MP}
\end{eqnarray}
Here, $f_{\text{concat}}(\cdot)$ concatenates the Type I and Type II representations. At the input layer, we have $\mathbf{H}^{(0)} = \mathbf{X} \otimes \mathbf{1}_{K}$, where $\mathbf{X} \in \mathcal{R}^{\vert{V}\vert \times 1}$ are the node inputs (concatenated modality features). $\{\boldsymbol{\mathcal{\phi}}_{I}(\cdot), \boldsymbol{\mathcal{\phi}}_{II}(\cdot)\}$ perform message passing according to the neighborhood relationships given by the supra-walk matrices. Finally, $f_{o}(\cdot)$ is the graph readout that predicts the outcome $\mathbf{Y}$. The learnable parameters of the Multiplex GNN can be estimated via standard backpropagation.
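One layer of this scheme can be sketched in numpy, with a simple sum aggregator standing in for the learned updates $\{\boldsymbol{\phi}_{I}(\cdot), \boldsymbol{\phi}_{II}(\cdot)\}$:

```python
import numpy as np

def mplex_layer(H, A_supra, C_hat):
    """One multiplex message-passing layer (illustrative sketch).
    Type I messages pool over A @ C_hat neighborhoods, Type II over
    C_hat @ A; the two are concatenated per supra-node. A sum
    aggregator stands in for the GIN updates used in the paper."""
    W1 = (A_supra @ C_hat > 0).astype(float)   # Type I neighborhoods
    W2 = (C_hat @ A_supra > 0).astype(float)   # Type II neighborhoods
    return np.concatenate([W1 @ H, W2 @ H], axis=1)
```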
\paragraph{\textbf{Implementation Details:}}
We utilize the Graph Isomorphism Network (GIN) \cite{xu2018powerful} with LeakyReLU (neg. slope = 0.01) readout for message passing (i.e. $\{\boldsymbol{\mathcal{\phi}}_{I}(\cdot), \boldsymbol{\mathcal{\phi}}_{II}(\cdot)\}$). Since the input $\mathbf{x}$ is one dimensional, we have two such layers in cascade with hidden layer width one. $f_{o}(\cdot)$ is a Multi-Layered Perceptron (MLP) with two hidden layers (size: 100 and 20) and LeakyReLU activation. In each experimental comparison, we chose the model architecture and hyperparameters for our framework (learning rate=0.001 decayed by 0.1 every 20 epochs, weight decay=0.001, number of epochs =40) and baselines using grid-search and validation set. All frameworks are trained on the Cross Entropy loss between the predicted logits (after a softmax) and the ground truth labels. We utilize the ADAMw optimizer \cite{loshchilov2017decoupled}. Models were implemented using the Deep Graph Library (v=0.6.2) in PyTorch (v=0.10.1). We trained all models on a 64GB CPU RAM, 2.3 GHz 8-Core Intel i9 machine, with 3.5-4 hrs training time per run (Note: Performing inference via GPUs will likely speed up computation).
\section{Experimental Evaluation}
\begin{table}[b!]
\footnotesize{
\begin{center}
\caption{Dataset Description of the TB dataset. }
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Modality}& \textbf{CT} & \textbf{Genomic}&\textbf{Demographic}& \textbf{Clinical}&\textbf{Regimen}& \textbf{Continuous}\\
\hline
Native Dimen. & 2048 & 4081 & 29 & 1726 & 233 & 8\\
Rank & 250 & 300 & 24 & 183 & 112 & 8\\
Reduced Dim. & 128 & 64 & 8 & 128 & 64 & 4\\
\hline
\end{tabular}
\label{Dataset Description}
\end{center}}
\end{table}
\subsection{Data and Experimental Setup}
\label{Data}
We conducted experiments using the Tuberculosis Data Exploration Portal \cite{Gabrielian2019}. 3051 patients with five classes of treatment outcomes (Still on treatment, Died, Cured, Completed, or Failure) were used. Five modalities are available (see Fig. \ref{TB Results}.(a)). Demographic, clinical, regimen and genomic data are available for each patient, while chest CTs are available for 1015 patients. For clinical and regimen data, information that might be directly related to treatment outcomes, such as type of resistance, was removed. For each CT, the lung was segmented using multi-atlas segmentation \cite{wang2013multi}. The pre-trained DenseNet \cite{huang2017densely} was then applied to extract a feature vector of 1024 dimensions for each axial slice intersecting the lung. To aggregate the information from all lung-intersecting slices, the mean and maximum of each of the 1024 features were used, providing a total of 2048 features. For genomic data from the causative organism \textit{Mycobacterium tuberculosis} (Mtb), 81 single nucleotide polymorphisms (SNPs) in genes known to be related to drug resistance were used. In addition, we retrieved the raw genome sequence from the NCBI Sequence Read Archive for 275 patients to describe the biological sequences of the disease-causing pathogen at a finer granularity. The data was processed by the IBM Functional Genomics Platform \cite{Seabolt2019}. Briefly, each Mtb genome underwent an iterative \textit{de novo} assembly process and was then processed to yield gene and protein sequences. The protein sequences were then processed using InterProScan \cite{Jones2014} to generate the functional domains. Functional domains are sub-sequences located within the protein's amino acid chain. They are responsible for the enzymatic bioactivity of a protein and can more aptly describe the protein's function. 4000 functional features were generated for each patient.
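The slice-level aggregation amounts to mean- and max-pooling the per-slice DenseNet features (illustrative sketch):

```python
import numpy as np

def aggregate_ct_features(slice_feats):
    """Pool per-slice DenseNet features of shape (n_slices, 1024)
    into a single 2048-dim vector: the mean and the max of each
    feature across all lung-intersecting axial slices."""
    return np.concatenate([slice_feats.mean(axis=0), slice_feats.max(axis=0)])
```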
{\noindent\textbf{Multiplexed Graph Construction:}
\label{exp:graph_construction}
We note that the regimen and genomic data are categorical features. CT features are continuous. The demographic and clinical data are a mixture of categorical and continuous features. Grouping the continuous demographic and clinical variables together yielded a total of six source modalities (see Table~\ref{Dataset Description}). We impute the missing CT and functional genomic features using the mean values from the training set. To reduce the redundancy in each domain, we use d-AEs with fully connected layers, LeakyReLU non-linearities and tied weights trained to reconstruct the raw modality features. The d-AE bottleneck (see Table~\ref{Dataset Description}) is chosen via the validation set.
The reduced individual modality features are concatenated to form the node feature vector $\mathbf{x}$. To form the multiplexed graph planes, the c-AE projects $\mathbf{x}$ to a `conceptual' latent space of dimension $K \ll P$ where $P=128+64+8+128+64+4 = 396$. We use the c-AE concept space to form the planes of the multiplex and explore the correlation between pairs of features. The c-AE architecture mirrors the d-AE, but projects the training examples $\{\mathbf{x}\}$ to $K=32$ concepts. We infer within-plane connectivity along each concept by perturbing the features and recording those giving rise to the largest incremental responses. Let $\mathcal{AE}_{\text{enc}}(\cdot): \mathcal{R}^{P} \rightarrow \mathcal{R}^{K}$ be the c-AE mapping to the concept space. Let $\hat{\mathbf{x}}^{(i)}$ denote the perturbation of the input by setting $\hat{\mathbf{x}}^{(i)}[j] = \mathbf{x}[j] \ \forall \ j \neq i$ and 0 for $j=i$. Then for concept axis $k$, the perturbations are $\mathbf{p}_{k}[i] = \vert{\mathcal{AE}_{\text{enc}}(\hat{\mathbf{x}}^{(i)})- \mathcal{AE}_{\text{enc}}({\mathbf{x}})}\vert$. Thresholding $\mathbf{p}_{k} \in\mathcal{R}^{P \times 1}$ selects feature nodes with the strongest responses along concept $k$. To encourage sparsity, we retain the top one percent of salient patterns. We connect all pairs of such feature nodes with edge-type $k$ via a fully connected (complete) subgraph between nodes thus selected (Fig.~\ref{Multiplex_Formulation}). Across the $K$ concepts, we expect that different sets of features are prominent. The input features $\mathbf{x}$ are one-dimensional node embeddings (or the messages at input layer $l=0$).} The number of latent concepts $K$ and the feature selection (sparsity) are key quantities that control generalization.
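As a rough illustration (not the authors' code), the perturb-and-threshold step above can be sketched in numpy, with a random linear map standing in for the trained c-AE encoder $\mathcal{AE}_{\text{enc}}$ and the top-1\% selection applied per concept:

```python
import numpy as np

# Hypothetical stand-in for the trained c-AE encoder AE_enc: R^P -> R^K.
# A random linear map is used here; in the paper this is the learned encoder.
rng = np.random.default_rng(0)
P, K = 396, 32
W_enc = rng.standard_normal((K, P))
enc = lambda v: W_enc @ v

x = rng.standard_normal(P)          # concatenated node feature vector
z = enc(x)                          # concept-space embedding

# Perturbation responses: zero out feature i, record |enc(x_hat) - enc(x)|.
p = np.empty((K, P))
for i in range(P):
    x_hat = x.copy()
    x_hat[i] = 0.0
    p[:, i] = np.abs(enc(x_hat) - z)

# Retain the top 1% most salient features per concept and connect them
# pairwise with edge-type k (a complete subgraph per concept plane).
m = max(1, P // 100)
edges = {}
for k in range(K):
    sel = np.argsort(p[k])[-m:]     # indices of the m strongest responses
    edges[k] = [(int(a), int(b)) for a in sel for b in sel if a < b]
```

Each `edges[k]` is the fully connected subgraph forming plane $k$ of the multiplex.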
\subsection{Baselines}
We compared with four multimodal fusion approaches. We also present three ablations, allowing us to probe the inference (Multiplex GNN) and representation learning (Multimodal Graph Construction) separately.
\noindent \textbf{No Fusion:} This baseline utilizes a two layered MLP (hidden width: 400 and 20, LeakyReLU activation) on the individual modality features before the d-AE dimensionality reduction. This provides a benchmark for the outcome prediction performance of each modality separately.
\noindent \textbf{Early Fusion:}
Individual modalities are concatenated before dimensionality reduction and fed through the same MLP architecture as described above.
\noindent \textbf{Intermediate Fusion:} In this comparison, we perform intermediate fusion after the d-AE projection by using the concatenated feature $\mathbf{x}$ as input to a two layered MLP (hidden width: 150 and 20, LeakyReLU activation). This helps us evaluate the benefit of using graph based fusion via the c-AE latent encoder.
\noindent \textbf{Late Fusion:} We utilize the late fusion framework of \cite{wang2021modeling} to combine the predictions from the modalities trained individually in the No Fusion baseline. This framework leverages the uncertainty in the 6 individual classifiers to improve the robustness of outcome prediction. We used the hyperparameters in \cite{wang2021modeling}.
\noindent \textbf{Relational GCN on a Multiplexed Graph:} This baseline utilizes the multigraph representation learning (Blue Box of Fig.~\ref{Multiplex_Formulation}), but replaces the Multiplex GNN feature extraction with the Relational GCN framework of \cite{schlichtkrull2018modeling}. Essentially, at each GNN layer, the RGCN runs $K$ separate message passing operations on the planes of the multigraph and then aggregates the messages post-hoc. Since the width, depth and graph readout is the same as with the Multiplex GNN, this helps evaluate the expressive power of the walk based message passing in Eq.~(\ref{Mplex_MP}).
\noindent \textbf{Relational GCN w/o Latent Encoder:} For this comparison, we utilize the reduced features after the d-AE, but instead create a multi-layered graph with the individual modalities in different planes. Within each plane, nodes are fully connected to each other after which a two layered RGCN \cite{schlichtkrull2018modeling} model is trained. Effectively, \textit{within modality} feature dependence may still be captured in the planes, but the concept space is not used to infer the \textit{cross-modal} interactions.
\noindent \textbf{GCN on monoplex feature graph:} This baseline also incorporates a graph based representation, but does not include the use of latent concepts to model within and cross-modal feature correlations. Essentially, we construct a fully connected graph on $\mathbf{x}$ instead of using the (multi-) conceptual c-AE space and train a two layered Graph Convolutional Network \cite{kipf2016semi} for outcome prediction.
\subsection{Results}
\paragraph{\textbf{Evaluation Metrics:}} Since we have unbalanced classes for multi-class classification, we evaluate the performance using AU-ROC (Area Under the Receiver Operating Curve). We also report weighted average AU-ROC as an overall summary. We rely on $10$ randomly generated train/validation/test splits of size 2135/305/611 to train the representation learning and GNNs in a fully blind fashion. We use the same splits and training/evaluation procedure for the baselines. For statistical rigour, we indicate significant differences between Multiplex GNN and baseline AU-ROC for each class as quantified by a DeLong \cite{delong1988comparing} test.
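The weighted average AU-ROC is the class-frequency-weighted mean of per-class one-vs-rest AUCs; a self-contained sketch is below (in practice a library routine such as sklearn's \texttt{roc\_auc\_score} with \texttt{average='weighted'} would be used; the rank-statistic AUC here assumes continuous scores, with ties broken arbitrarily):

```python
import numpy as np

def auc_binary(y_true, scores):
    # AUC equals the normalized Mann-Whitney U statistic of positive ranks.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def weighted_auroc(y_true, score_matrix):
    # One-vs-rest AUC per class, combined with class-frequency weights.
    classes = np.unique(y_true)
    freqs = np.array([(y_true == c).mean() for c in classes])
    aucs = np.array([auc_binary((y_true == c).astype(int), score_matrix[:, k])
                     for k, c in enumerate(classes)])
    return float(freqs @ aucs)

# Perfectly separating scores give a weighted AU-ROC of 1.
y = np.array([0, 0, 1, 1, 2, 2])
scores = np.eye(3)[y]
print(round(weighted_auroc(y, scores), 6))   # → 1.0
```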
\begin{figure}[t!]
\begin{center}
\centerline{\includegraphics[scale=0.38]{paper_2619_figs/TB_modality+Outcome_pred.png}}
\caption{(a). Multimodal data for Tuberculosis treatment outcome prediction. (b). Outcome prediction performance measured by per-class and weighted average AU-ROC. We display mean performance along with standard errors. * indicates baselines whose per-class AU-ROC differs significantly from that of the Multiplexed GNN ($p<0.01$) according to the DeLong test. Individual class frequencies are listed below the x axis.
}
\label{TB Results}
\end{center}
\end{figure}
\paragraph{\textbf{Outcome Prediction Performance:}}
Fig.~\ref{TB Results} illustrates the outcome prediction results. Our framework outperforms common multimodal fusion baselines (Early Fusion, Intermediate Fusion, and Late Fusion), as quantified by the higher mean per-class AU-ROC and weighted average AU-ROC. Our graph based multimodal fusion also provides improved performance over the single modality outcome classifiers. The Relational GCN on a Multiplexed Graph baseline is an ablation that replaces the Multiplexed GNN with an existing state-of-the-art GNN framework. For both techniques, we utilize the same Multi-Graph representation as learned by the c-AE latent space. The performance gains within this comparison suggest that the Multiplexed GNN is better suited for reasoning and task-specific knowledge distillation from multigraphs. We conjecture that the added representational power is a direct consequence of our novel multiplex GNN message passing (Eq.~(\ref{Mplex_MP})) scheme. Along similar lines, the Relational GCN w/o latent encoder and the GCN on the monoplex feature graph baseline comparisons are generic graph based fusion approaches. They allow us to examine the benefit of using the salient activation patterns from the c-AE latent concept space to infer the multi-graph representation. Specifically, the former separates the modality features into plane-specific fully connected graphs within a multi-planar representation. The latter constructs a single fully connected graph on the concatenated modality features. Our framework provides large gains over these baselines. In turn, this highlights the efficacy of our Multimodal graph construction. We surmise that the salient learned conceptual patterns are more successful at uncovering cross-modal interactions between features that are explanatory of patient outcomes. Overall, these observations highlight key representational aspects of our framework, and demonstrate the efficacy for the TB outcome prediction task.
Given the clinical relevance, a promising direction for exploration would be to extend frameworks for explainability in GNNs (for example, via subgraph exploration~\cite{yuan2021explainability}) to Multiplex GNNs to automatically highlight patterns relevant to downstream prediction.
\section{Conclusion}
We have introduced a novel Graph Based Multimodal Fusion framework to combine imaging, genomic and clinical data. Our Multimodal Graph Representation Learning projects the individual modality features into abstract concept spaces, wherein complex cross-modal dependencies can be mined from the salient patterns. We developed a new Multiplexed Graph Neural Network that can track information flow within the multi-graph via message passing walks. Our GNN formulation provides the necessary flexibility to mine rich representations from multimodal data. Overall, this yields improved Tuberculosis outcome prediction performance over several state-of-the-art baselines.
\bibliographystyle{splncs04}
In conclusion, we have constructed the dynamical equations for active
smectics, both in bulk suspensions and in confined systems in contact with a
momentum sink. Our theory is generic, applicable to any driven system with
spontaneous stripe order. We show, extending \cite{sraditiSSCOM2006}, that
noisy active smectic order is long-ranged in dimension $d=3$ and
quasi-long-ranged in $d=2$ for all dynamical regimes, and that active
smectic suspensions have a nonzero second sound speed parallel to the layers.
For $d=2$ we predict a Kosterlitz-Thouless transition from active nematic to
active smectic, with a re-entrant nematic at low concentration. We show that
smectic elasticity suppresses the giant number fluctuations and extensile
instabilities that occur in active nematics, but that bulk contractile systems
exhibit an active undulation instability. Active extensile stresses, if
strong enough, give rise to a ``breathing" instability which is likely to be
oscillatory. Our results should apply to a wide range of active systems,
including horizontal layers of granular matter agitated vertically or fluids
heated from below.
We look forward to detailed experimental tests of our predictions.
We are grateful to R.A. Simha for useful discussions, and the Active Matter
workshop of the Institut Henri Poincar\'e, Paris, the Lorentz Center of Leiden
University (SR and JT), the Initiative for the Theoretical Sciences at The
Graduate Center of CUNY and the MPIPKS, Dresden (JT), for support and
hospitality while this work was underway. TCA acknowledges support from the
CSIR, India, SR from the DST, India, through a J.C. Bose grant and Math-Bio
Centre grant SR/S4/MS:419/07, and JT from the U.S. National Science
Foundation through awards \# EF-1137815 and 1006171.
\subsection{Deterministic Slowdown}
To mitigate the effect of heterogeneity, backup workers and bounded staleness loosen up the synchronization scheme by allowing a worker to advance faster than a few or even all of its incoming neighbors. However, the gap between a worker's iteration and its neighbor's is still bounded, either by the technique itself (in the bounded staleness case) or by the use of token queues (in the backup worker case). Consequently, if a worker suffers from severe deterministic slowdown, it will eventually drag down its neighbors and then the entire graph of nodes. Therefore, a dilemma exists: if we do not bound the iteration gap, then better performance can be achieved against deterministic slowdown, but such a system is unrealistic, since a worker must be prepared to receive updates from infinitely many iterations; on the other hand, if we bound the iteration gap with mechanisms like token queues, then whenever a node is slowed down
in a deterministic manner, other nodes can only obtain limited progress before having to wait for the slow worker, and the maximum progress they can make is strictly determined by the bound shown in the last row of Table \ref{tbl:ig_ubnd}.
There are two potential solutions.
One is
developing new {\em algorithms}
to eliminate the bound on the iteration gap and support infinitely large iteration gaps.
In fact, AD-PSGD~\cite{ArXiv_ADPSGD} is one example, where every worker updates its own parameters by averaging them with a randomly selected neighbor at the end of each iteration, regardless of the neighbor's iteration. However, this algorithm easily creates deadlock, and to prevent it, existing solutions require the communication graph $G$ to be bipartite \cite{ArXiv_ADPSGD}, which greatly constrains users' choice of communication topology.
The other is developing new {\em system}
mechanisms to identify the slow worker and seek a way to let it progress, so that the training process can resume.
We consider a system approach.
We let the slow worker identify itself by checking the number of tokens in its out-going neighbors' token queues. This can be done conveniently when it acquires the needed tokens from its out-going neighbors at the end of each iteration, in order to enter a new iteration. For worker $i$ and its out-going neighbor $j$, the number of tokens in $TokenQ(j \rightarrow i)$ is exactly $Iter(j)-Iter(i)+max\_ig$. If $TokenQ(j \rightarrow i).size()$ is large for all $j \in N_{out}(i)$, we can imagine that worker $i$ is very likely a straggler.
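A small sketch of this self-identification check, under the stated invariant $|TokenQ(j \rightarrow i)| = Iter(j)-Iter(i)+max\_ig$ (names such as \texttt{make\_token\_queue} are illustrative, not from the system):

```python
from collections import deque

max_ig = 5  # maximum allowed iteration gap

def make_token_queue(iter_j, iter_i):
    # Queue held at worker j for edge j -> i; by the invariant its size
    # is Iter(j) - Iter(i) + max_ig.
    return deque(range(iter_j - iter_i + max_ig))

def is_straggler(queue_sizes, threshold):
    # Worker i inspects the queues of all out-going neighbors j; if every
    # queue is large, all neighbors are far ahead and i is likely slow.
    return min(queue_sizes) >= threshold

# Worker i at iteration 2; its out-going neighbors are at iterations 9, 10, 8.
sizes = [len(make_token_queue(it_j, 2)) for it_j in (9, 10, 8)]
print(sizes)                    # → [12, 13, 11]
print(is_straggler(sizes, 10))  # → True
```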
\begin{figure}[b]
\centering
\vspace{-6mm}
\includegraphics[width=\linewidth]{figure/Fig10_new.pdf}
\caption{Skipping Iteration Example. \#Tokens denotes the number of tokens in the corresponding token queue. }
\label{fig:JI}
\end{figure}
To allow the straggler to make progress, we propose {\em skipping iterations}, i.e., a slow worker can jump a few iterations, allowing other workers to advance. According to the token queues scheme proposed in Section~\ref{token}, the maximum number of iterations worker $i$ can jump is determined by $max\_jump = \min_{j \in N_{out}(i)} TokenQ(j \rightarrow i).size()$, since every new iteration it enters, it needs a new token from every one of its out-going neighbors. Although skipping $max\_jump$ iterations is allowed under our proposed token queues scheme, we argue that it would be absurd for the slow worker $i$ to surpass its out-going neighbors after the jump; therefore, a more intuitive upper-bound is given by $max\_jump-max\_ig$.
Assume that worker $i$ will jump to iteration $k$. In our design, before the jump, worker $i$ will renew its parameters by executing $Recv(k-1)$ and a following $Reduce$, averaging its current parameters with updates sent by its in-coming neighbors in the $(k-1)$-th iteration, i.e. $\{u_{j' \rightarrow i}(k-1): j' \in N_{in}(i) \}$. This is to ensure that after the jump, worker $i$'s parameters will not appear too stale, so that future updates sent by worker $i$ will not harm the training process. In this way, the slow worker will not always remain a straggler. Note that to ensure the correctness of the token queues scheme, when a jump from iteration $k_0$ to iteration $k$ is performed, the worker will need to obtain $(k-k_0)$ tokens from every one of its out-going neighbors, and also put $(k-k_0)$ tokens into every local token queue intended for its in-coming neighbors.
Figure \ref{fig:JI} gives two examples of executing the jump, one for the bounded staleness case and the other for backup workers. Changes in red indicate the jump, while changes in green indicate the new progress enabled by the jump. (a) Worker B and C are blocked from advancing because of the staleness bound of 4. Without A's skipping iterations, the speed of B and C will be no more than 4 iterations faster than A. However, with skipping iterations, the slow worker A can quickly jump to iteration 9, so that training can smoothly resume for another 4 iterations, before A advances again. (b) Worker B and C are blocked because of the bounded iteration gap ensured by the token queues --- they cannot get a new token from A, since the token queues are empty. Without A's skipping iterations, they will be no more than 5 iterations faster than A. But with A jumping to the 10th iteration, they can train non-stop for another 5 iterations, before A makes new progress.
It may seem like a problem that for worker $i$ to execute $Recv(k-1)$, it must wait for its in-coming neighbors to reach the $(k-1)$-th iteration. However, we argue that it is very likely that worker $i$ has also fallen behind its in-coming neighbors due to the deterministic slowdown; even if one of its in-coming neighbors is also slow, the mechanism of either bounded staleness or backup workers will ensure that worker $i$ can easily proceed to the $k$-th iteration. Moreover, although skipping iterations means missing a few iterations' updates from worker $i$, this will not be a problem because even if the updates were sent, they would be stale and thus dropped by worker $i$'s out-going neighbors (in the backup workers case) / receive a very small weight (in the bounded staleness case).
To enable flexible settings of our proposed mechanism, our system allows users to specify the maximum number of iterations that a worker can skip in one jump, as well as the condition to trigger the jump, e.g., a worker may only skip iterations if it is more than a user-specified number of iterations behind its out-going neighbors.
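The jump bound above can be sketched as a small helper (illustrative names, not the system's API; the optional cap corresponds to the user-specified per-jump limit just described):

```python
# max_jump = min_j |TokenQ(j -> i)| tokens are available, but to avoid
# surpassing its out-going neighbors the slow worker is limited to
# max_jump - max_ig iterations per jump.
def allowed_jump(out_queue_sizes, max_ig, user_cap=None):
    max_jump = min(out_queue_sizes) - max_ig  # do not overtake neighbors
    if user_cap is not None:                  # user-specified per-jump limit
        max_jump = min(max_jump, user_cap)
    return max(0, max_jump)

max_ig = 5
# Out-neighbor queues hold 12, 13 and 11 tokens: the worker may jump
# 6 iterations, or fewer under a user cap (e.g. the 2- and 10-iteration
# settings evaluated later).
print(allowed_jump([12, 13, 11], max_ig))              # → 6
print(allowed_jump([12, 13, 11], max_ig, user_cap=2))  # → 2
```

On an actual jump of $k-k_0$ iterations, the worker would also consume $k-k_0$ tokens from each out-going neighbor and deposit $k-k_0$ tokens into each local queue for its in-coming neighbors, as described above.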
\subsection{The Problem}
The problem considered in this paper is to use SGD to minimize a loss function $F$ over a data set $S$, and the update function in each iteration is given by
$x \leftarrow x - \eta \cdot \nabla_x F(x;\xi)$,
where $\xi$ is a mini-batch of data randomly sampled from $S$ and model parameters are represented by $x$.
This process can be parallelized to be executed in a distributed
environment~\cite{Zinkevich2010PSGD}. A well-known mechanism for parallel SGD is training with Parameter Servers (PS) \cite{Li2014ParamServer}.
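As a quick, self-contained illustration of this update rule (on a toy least-squares problem, not the workloads evaluated in this paper):

```python
import numpy as np

# SGD update x <- x - eta * grad_x F(x; xi) on the least-squares loss
# F(x; (a, b)) = 0.5 * (a . x - b)^2, with xi a random mini-batch.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 4))       # data set S
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true                          # noiseless targets

x = np.zeros(4)                         # model parameters
eta = 0.05                              # learning rate
for step in range(2000):
    idx = rng.integers(0, len(b), size=8)   # sample mini-batch xi
    a, y = A[idx], b[idx]
    grad = a.T @ (a @ x - y) / len(idx)     # grad_x F(x; xi)
    x -= eta * grad

print(np.allclose(x, x_true, atol=1e-2))    # → True
```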
\subsection{Distributed Training with Parameter Server}
\label{sec:trainPS}
Training with PS involves choosing one or a few central nodes as PS that are responsible for maintaining and updating model parameters \cite{Li2014ParamServer}. Other machines, called workers, pull parameters from the PS, compute gradients based on random samples and send gradients back to the PS. Then the PS will update the parameters based on the received gradients.
In the most basic setting, workers are synchronized at the end of each iteration. They are not allowed to pull new parameters from the PS until the PS has received updates from every worker and applied them to the parameters. In this way, workers always work on the same and most up-to-date version of parameters. However, there are two main drawbacks of the synchronous setting: {\em a)} fast workers always have to wait for slow ones, which is called the straggler problem; and {\em b)} communication bottlenecks or hotspots easily occur at the PS.
The current approach to mitigate the communication bottleneck is
to apply ring All-Reduce~\cite{ring_allreduce1,Ring_allreduce2} among all workers, without using
PS. With the careful overlapping of communication and computation,
the communication hotspot at PS is eliminated.
Logically, it implements All-Reduce,
which means that the update
from one worker is {\em broadcast}
to all other workers by the
end of the iteration.
While ring All-Reduce hides some communication latency with
computation, the actual latency can potentially be
increased when
the single update from each worker travels in the ring and reaches all other workers.
\subsection{Decentralized Training}
It has been recently theoretically shown for the first time that decentralized algorithms can outperform centralized ones \cite{NIPS2017_dPSGD}. The two types of algorithms share the same order of computational complexity \cite{NIPS2017_dPSGD}, but decentralized algorithms enjoy faster communications since the communication load is spread across the graph instead of concentrated at the PS.
Although decentralized algorithms were studied before, their advantage was long hidden. Prior work \cite{2012_dualAveraging} showed that as the number of workers increases, it takes more iterations to reach a certain accuracy, based on the assumption that $F$ is convex. However, in the recent work \cite{NIPS2017_dPSGD}, convexity of $F$ was not assumed and it was shown that the convergence rate exhibits an asymptotically linear speedup with respect to the number of workers. The improved result serves as the motivation for investigating decentralized training.
In decentralized training algorithms \cite{NIPS2017_dPSGD,ArXiv_ASAP,DBLP:conf/nips/send_difference}, there is no central node; every worker maintains its own version of parameters. Workers communicate with one another based on a predefined network topology. In each iteration, a worker computes gradients, sends its parameters to its out-going neighbors, and updates its parameters by averaging them with its in-coming neighbors. It remains a choice whether the gradients are applied to the parameters before or after the parameters are sent. The algorithm is shown in Figure \ref{alg0}.
In the algorithm, parameters are sent before applying the gradients, which enables parallel execution of step 1 and 2 (the parallel approach). An alternative approach is to swap step 3 and 4, so that parameters are sent after applying the gradients (the sequential approach).
We will refer to this algorithm as standard decentralized training, and will discuss the two variants in Section~\ref{sec:framework}.
\begin{figure}
\centering
\small
\begin{algorithmic}[1]
\REQUIRE A set of worker nodes $V$ and their connection represented in a weighted adjacency matrix $W$
\FOR{worker $i \in V$}
\STATE Compute gradients over randomly selected samples $g_{k,i} = \nabla F(x_{k,i};\xi_{k,i})$
\STATE Average parameters with my neighbors $x_{k+\frac{1}{2},i} \leftarrow \sum_{j \in V} W_{ji}x_{k,j}$
\STATE Apply gradients $x_{k+1,i} \leftarrow x_{k+\frac{1}{2},i}-\eta_k \cdot g_{k,i}$
\ENDFOR
\end{algorithmic}
\caption{Standard Decentralized Training. }
\label{alg0}
\end{figure}
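For concreteness, the loop in Figure \ref{alg0} can be sketched in numpy on a ring of $n$ workers with a toy quadratic loss (an illustrative stand-in, not the system's implementation; here the gradients are computed on the pre-averaging parameters, i.e. the parallel variant):

```python
import numpy as np

# Standard decentralized training on a ring of n workers, with the toy loss
# F(x) = 0.5 * ||x||^2 so that grad F(x) = x. W is the column-stochastic
# (here symmetric, doubly-stochastic) ring averaging matrix.
n, d = 8, 3
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[j % n, i] = 1.0 / 3.0  # W_ji: weight of worker j's parameters at i

rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))  # row i holds parameters x_{k,i}
eta = 0.1

for k in range(200):
    G = X                        # gradients g_{k,i} = x_{k,i} (toy loss)
    X = W.T @ X                  # neighbor averaging: x_{k+1/2,i}
    X = X - eta * G              # apply gradients: x_{k+1,i}

# The workers reach consensus and jointly minimize F (the optimum is 0).
print(np.allclose(X, 0.0, atol=1e-6))   # → True
```

Swapping the last two lines of the loop body gives the sequential variant discussed in Section~\ref{sec:framework}.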
\subsection{System Heterogeneity}
As discussed in Section~\ref{sec:trainPS}, a main source of performance degradation is the straggler problem, which is
an example of system heterogeneity.
In general, it involves random aspects such as slowdown caused by resource sharing and hardware faults, as well as deterministic factors including differences in hardware computational capabilities and network bandwidths \cite{NSDI2017_Gaia,SIGMOD2017_het,ArXiv_ADPSGD}.
Fundamentally, both PS and ring All-Reduce lack
the {\em flexibility} to tackle the heterogeneous execution environment due to the {\em fixed} communication pattern between workers and PS or between workers themselves.
For the PS setting, previous work has proposed several ways to deal with this problem, e.g., updating parameters asynchronously \cite{NIPS2011_hogwild}, using backup workers \cite{ICLR2016_backup_workers}, allowing bounded staleness \cite{NIPS2013_SSP}, dynamically adjusting the learning rate \cite{SIGMOD2017_het}, and sending accumulated gradients when they reach a significance threshold \cite{NSDI2017_Gaia}.
In ring All-Reduce, the more restrictive communication
pattern makes it impossible to implement some
techniques, e.g., backup workers.
In fact, the execution may suffer more
from slow communication links and/or stragglers in the ring.
For decentralized training, which has gained interest only recently, relatively few efforts have been devoted to improving performance in heterogeneous environments. A fairly recent work \cite{ArXiv_ADPSGD} proposed an asynchronous scheme where every worker averages parameters with a randomly selected neighbor instead of all the in-coming neighbors.
However, as will be discussed in
detail in Section~\ref{sec:opt4het}, it may lead to deadlock and can only
work for a specific type of communication graphs.
\subsection{Challenges and Motivation}
Decentralized algorithms can outperform centralized ones, because it eliminates the communication hotspot at the PS. However, heterogeneity remains a problem, since workers still need to synchronize with its neighbors, and thus the straggler effect exists. It can be imagined that the influence of one slow worker or network link can spread to the whole graph through connected nodes.
In the next section, we will analyze the nature of distributed synchronization, propose heterogeneity-aware algorithms based on
insights from solutions for PS, and implement a distributed training system based on the proposed mechanisms.
\subsection{Dataset and Models}
We evaluate {Hop}\xspace on two machine learning tasks, namely image classification and web spam detection. For image classification we train the VGG11\cite{VGG11} network on CIFAR-10\cite{Cifar10}; for web spam detection, we train SVM over the webspam dataset\cite{pascalchallenge}.
\subsection{Experiments Setup}
We use a CPU cluster with 1000Mbit/s ethernet connection to run 16 workers on 4 machines: each machine has 4 workers.
We use the following hyper-parameter setup as prescribed in http://leon.bottou.org/projects/sgd and AD-PSGD \cite{ArXiv_ADPSGD} with some modifications:
batch size: 128; learning rate: 0.1 for VGG and 10 for SVM; no learning rate decay policy is used;
momentum: 0.9; weight decay: $10^{-4}$ for VGG and $10^{-7}$ for SVM.
We use log loss for SVM instead of hinge loss.
\subsection{Results and Analysis}
\subsubsection{Heterogeneity with Random Slowdown}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/picExp.pdf}
\caption{Graphs used in experiments (self-loops are omitted) with increasing node degrees. (1) Ring graph \cite{ArXiv_ASAP,NIPS2017_dPSGD,ArXiv_ADPSGD}. Nodes are connected in a circle via bidirectional edges. (2) A ring-based graph \cite{NIPS2017_dPSGD}. On top of the ring graph, every node is also connected to the most distant node. (3) Double-ring graph. Two ring-based graphs are connected node to node. }
\label{fig:topology}
\end{figure}
We simulate a heterogeneous environment by randomly slowing down every worker by 6 times at a probability of $1/n$ in each iteration, where $n$ is the number of workers.
We conduct experiments
with and without slowdown
on three different communication graphs (labeled as ring, ring-based and double-ring as shown in Figure \ref{fig:topology}), and the result is illustrated in Figure \ref{fig:2.hetero}. None of the graphs is immune to the slowdown.
Moreover, we see that sparser graphs suffer less from random slowdown.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/heteroOnDecen.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/heteroOnDecen-SVM.png}
\caption{Effect of heterogeneity (left: CNN; right: SVM)}
\label{fig:2.hetero}
\end{figure}
\subsubsection{Comparison to Parameter Servers}
For decentralized algorithm, training is conducted on a ring-based topology (Figure \ref{fig:topology}); for PS, we adopt BSP and use one additional machine as the parameter server. As shown in Figure \ref{fig:1.ps-vs-dc-vgg}, decentralized training in either heterogeneous or homogeneous environments converges much faster
than homogeneous PS. Because the parameter server algorithm will inevitably be slowed down in a heterogeneous environment \cite{SIGMOD2017_het}, we do not conduct this experiment.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/1pscurve.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/1pscurve-SVM.png}
\caption{Decentralized vs. PS (left: CNN; right: SVM)}
\label{fig:1.ps-vs-dc-vgg}
\end{figure}
\subsubsection{Effect of Backup Workers}
We design backup workers mainly for random heterogeneity, since in an environment with deterministic heterogeneity (e.g., when a worker runs much slower), the whole process will still slow down due to the token limit.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/backup.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/backup-SVM.png}
\caption{Effect of backup workers on decentralized training with random slowdown: loss vs time (left: CNN; right: SVM)}
\label{fig:backup.vgg}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/backup-step.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/backup-step-SVM.png}
\caption{Effect of backup workers on decentralized training with random slowdown: loss vs steps (left: CNN; right: SVM)}
\label{fig:backup-step.vgg}
\end{figure}
\begin{figure}
\begin{minipage}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figure/evaluations/backup_run_time_compare.png}
\caption{Effect of backup workers: iteration speed over 6Xslowdown (on CNN)}
\label{fig:iter_speed}
\end{minipage}
\quad
\begin{minipage}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figure/evaluations/staleness.png}
\caption{Effect of bounded staleness with random slowdown (on CNN)}
\label{fig:staleness}
\end{minipage}
\end{figure}
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/backup_run_time_compare.pdf}
\caption{Effect of backup workers: iteration speed over 6Xslowdown}
\label{fig:iter_speed}
\end{figure}
\end{comment}
We test our system on two different communication graphs, the ring-based graph and the double-ring graph. We use one backup worker (i.e., each node can receive one less update), and the results on the two graphs are similar, as shown in Figure \ref{fig:backup.vgg}: training with backup workers converges faster than the standard decentralized algorithm in wall-clock time. Combined with the loss curves over steps shown in Figure \ref{fig:backup-step.vgg}, we argue that although receiving one less update hurts the per-iteration progress, the effect is insignificant compared to the speedup gained in per-iteration execution time (a speedup of up to 1.81 is shown in Figure \ref{fig:iter_speed}).
\subsubsection{Effect of Staleness}
We conduct experiments on a ring-based graph using 6 times random slowdown and a staleness bound of 5.
As shown in Figure~\ref{fig:staleness}, the system with staleness can achieve a similar speedup to that with backup workers, and they both outperform the standard decentralized setting.
\subsubsection{Effect of Skipping Iterations}
\begin{wrapfigure}[10]{r}{0.5\columnwidth}
\vspace{-0.7cm}
\begin{center}
\includegraphics[width=0.50\columnwidth]{figure/evaluations/fig17.png}
\caption{Effect of skipping iterations: 4Xslowdown (on CNN)}
\label{fig:jump-speed}
\end{center}
\end{wrapfigure}
Experiments are conducted on a ring-based graph with 16 workers, while one worker is deterministically chosen for a 4 times slowdown.
We test two settings: jumping at most 2 iterations at a time and jumping at most 10 at a time. As shown in Figure \ref{fig:jump}, skipping iterations exhibits superior performance over the simple backup worker setting, and jumping at most 10 iterations delivers the fastest convergence, with a speedup of more than 2 times over the standard decentralized system. Moreover, Figure \ref{fig:jump-speed} shows that with skipping iterations, the influence of stragglers on the duration of an iteration can be reduced substantially --- from a 3.9 times slowdown to 3.90/3.43 $\approx$ 1.1 times --- which contributes to the significant convergence speed gain in wall-clock time.
\label{sec:jump}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/jump.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/jump-SVM.png}
\caption{Effect of skipping iterations (left: CNN; right: SVM)}
\label{fig:jump}
\end{figure}
\subsubsection{Effect of graph topology}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure/evaluations/topology.png}
\includegraphics[width=0.48\linewidth]{figure/evaluations/topology-step.png}
\caption{Comparison of three different topology settings (on CNN)}
\label{fig:3graphs}
\end{figure}
We have compared three graphs in a heterogeneous setting where 8 workers are unevenly distributed over 3 machines (Figure \ref{fig:expGraph}). The baseline graph is the ring-based graph with a high spectral gap~\footnote{The spectral gap of a graph $G$ is defined as the difference between the moduli of the two largest eigenvalues of the weighted adjacency matrix $W$, i.e. $\|\lambda_1(W)\|-\|\lambda_2(W)\|$. The bigger the spectral gap, the faster information spreads over the graph.} of 0.6667. Our two proposed graphs are inspired by the heterogeneous distribution of workers: an all-reduce graph is used within a physical machine, while a ring graph is used between different machines. They have much smaller spectral gaps, 0.2682 and 0.2688 respectively, but our experiments show that they perform better than the symmetric ring-based graph (Figure \ref{fig:3graphs}). In theory, the bigger the spectral gap, the fewer iterations it takes to converge \cite{ArXiv_ASAP,NIPS2017_dPSGD}. However, our experiments do not show a significant difference in the convergence rate w.r.t.\ iterations, even when the spectral gaps are very dissimilar (Figure \ref{fig:3graphs}). Moreover, the duration of an iteration can vary largely due to the graph topology as well as the heterogeneity in the system, which suggests that more factors should be taken into consideration when designing the communication graph.
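As a concrete illustration of the footnote's definition, the spectral gap can be computed directly from the weight matrix. The sketch below (our illustration in Python/NumPy, not part of the system; `ring_weights` is a hypothetical helper building uniform weights for a directed ring with self-loops) shows the computation:

```python
import numpy as np

def spectral_gap(W):
    """Spectral gap as defined in the footnote: the difference between
    the moduli of the two largest eigenvalues of W."""
    moduli = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return moduli[0] - moduli[1]

def ring_weights(n):
    """Hypothetical helper: uniform weights for a directed ring with
    self-loops, where each worker averages itself and its predecessor."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5             # self-loop
        W[(i - 1) % n, i] = 0.5   # the in-coming ring neighbor
    return W

# For an 8-worker ring this gives a gap of about 0.076.
```

Larger rings yield smaller gaps, matching the intuition that information spreads more slowly over sparser topologies.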
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/expGraph.pdf}
\caption{Three graphs tested in a heterogeneous environment (self-loops are omitted). Nodes in the same color reside in the same physical machine. Spectral gaps: (a) 0.6667, (b) 0.2682, and (c) 0.2688. }
\label{fig:expGraph}
\end{figure}
\subsection{Queue Structures in \textsc{TensorFlow}}
Specifically, the queues in our design
are based on the comprehensive and flexible FIFO queue in \textsc{TensorFlow}.
The initialization specifies the data types and shapes of the queue entries, as well as the capacity of the queue.
The FIFO queue supports common
functions including $enqueue$, $dequeue$,
$dequeue\_many$ and $size$.
\subsection{Collecting Updates Matching a Tag}
\label{sec:implTag}
To implement the queue operations defined in Section~\ref{update}, we only need to enhance
each FIFO queue entry with a tag, which is
used to match an update of a particular iteration and/or from a particular neighbor.
A simple implementation of the tags is to use one FIFO queue as the Update Queue at each worker and include the tags as part of the queue entry. Whenever a worker collects updates from the queue, it takes all entries out of the queue and keeps the ones with matching tags. This process continues in a while loop until the worker has obtained all the required entries. The issue with this approach is that dealing with the unmatched
entries can be cumbersome. They cannot be discarded, since
they can be from later iterations and will be used in the future. This may happen because
we do not assume that the network preserves the
message order.
We cannot simply put them back into the queue, since they would be dequeued again and again as the while loop continues. It is possible to store them locally after performing a partial $Reduce$ of the available updates according to the tag, but that complicates the bookkeeping and
consumes a considerable amount of local memory (about $max\_ig$ times the model size).
We propose a solution that prevents dequeuing updates of newer iterations with nearly zero memory overhead. Instead of using a single queue, we define multiple queues, each of which corresponds to an iteration. Queues are reused across iterations in a way similar to rotating registers. To select the correct queue to $enqueue$ or $dequeue$, a worker determines the queue index by computing the modulo $mod(iter,\#queues)$, where $\#queues$ is the total number of queues. $\#queues$ is set to $max\_ig+1$, because a worker can receive updates of at most $(max\_ig+1)$ different newer or current iterations based on {\em Theorem 1}.
In the standard case (Section~\ref{update}),
a worker can only receive newer or current updates, and it can always $dequeue$ the correct updates;
in the backup worker case (Section~\ref{backup}),
a worker can receive older updates as well, but the older ones will be discarded.
Our solution essentially divides the original large single queue into multiple small ones, and the total space consumed basically remains the same.
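A minimal sketch of this rotating-queue scheme follows, using Python's standard `queue.Queue` in place of TensorFlow's FIFO queue; the class and method names are ours, not the system's API:

```python
import queue

class UpdateQueues:
    """Sketch of the rotating update queues: one FIFO queue per
    possible in-flight iteration (#queues = max_ig + 1)."""

    def __init__(self, max_ig):
        self.num_queues = max_ig + 1
        self.queues = [queue.Queue() for _ in range(self.num_queues)]

    def enqueue(self, it, w_id, update):
        # Senders tag each update with its iteration and select the
        # queue by computing the iteration modulo #queues.
        self.queues[it % self.num_queues].put((it, w_id, update))

    def dequeue_many(self, it, n):
        # Collect n updates of iteration `it` from the matching queue;
        # entries tagged with an older iteration are stale leftovers
        # (backup-worker case) and are discarded.
        q, out = self.queues[it % self.num_queues], []
        while len(out) < n:
            tag_it, w_id, update = q.get()
            if tag_it == it:
                out.append((w_id, update))
        return out
```

Because updates of newer iterations land in different queues, `dequeue_many` never has to put entries back, which is the point of the design.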
As for distinguishing the sender via the $w\_id$ tag, it can also be achieved by defining multiple queues. However, this is not necessary in our system, since we only use the $w\_id$ tag when employing bounded staleness, which requires only a single pass over all entries: among the entries with the same $w\_id$ tag, the most recent one is retained and the rest are discarded.
\subsection{Handling Late Updates}
As mentioned in Section~\ref{backup}, when using backup workers, updates that are not used in the $Reduce$ can accumulate in the queue. In our design, the effect of stale updates is mitigated in the following two ways:
{\em a)} Stale updates are found and discarded in the $dequeue$/$dequeue\_many$ operation in later iterations. This is already incorporated in our system as described in Section~\ref{sec:implTag}.
{\em b)} Query the receiver's iteration before sending the update. If the receiver's iteration is larger than the local worker's iteration, do not send the update. This method incurs a small communication overhead, but can save much more when the update would be stale. More importantly, it effectively reduces the number of stale updates: now the only source of stale updates is updates that are on-the-fly when the receiver performs the $dequeue$/$dequeue\_many$.
We have also considered a more customized structure provided by \textsc{TensorFlow} called the conditional accumulator, which only accepts updates sent with a correct $local\_step$, another notion for iteration. If the $local\_step$ is incorrect, the update is dropped. It seemed to be a perfect solution to the problem, but we have observed in experiments that this property cannot always be ensured. An update that is up-to-date when it is sent can end up stale when it is received, and the conditional accumulator will incorrectly accept it. This is exactly the same on-the-fly stale-update problem we encountered with FIFO queues.
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Background and motivation}
\label{sec:back}
\input{back}
\section{Distributed Coordination}
\label{sec:framework}
\input{notation}
\section{Queue-based synchronization}
\label{sec:queue}
\input{queuebased.tex}
\section{Deterministic Slowdown}
\label{sec:opt4het}
\input{adaptive_update.tex}
\section{Implementation}
\label{sec:impl}
\input{impl.tex}
\section{Experiments}
\label{sec:exprmnts}
\input{exp.tex}
\section{Conclusion}
\label{sec:conclsn}
\input{concl.tex}
\bibliographystyle{plain}
\subsection{Notations}
We define the communication topology among workers in distributed
training as a weighted directed graph \(G = (V, E)\),
where each node represents a worker.
At each node $i$, $N_{in}(i) = \{j|(j,i) \in E\}$ and
$N_{out}(i) = \{j|(i,j) \in E\}$ denote the
set of in-coming and out-going neighbors, respectively.
An edge \(e = (i,j) \in E\) indicates that worker $i$ needs to send updates to worker $j$ during training.
The weight of an edge reflects
how much ``influence'' the updates have upon worker $j$.
Each worker maintains a copy of all the parameters and
the local update is always assumed to be available ---
there is a self-loop at every node \cite{ArXiv_ASAP,NIPS2017_dPSGD}, i.e., for all $i \in V$, $(i,i) \in E$.
Let $u_i$ denote an update generated by worker $i$,
then the aggregated update at worker $j$ is given by $\sum_{i \in N_{in}(j)} W_{ij}u_i$, where
$W$ is the weighted adjacency matrix of $G$.
Previous works \cite{NIPS2017_dPSGD, ArXiv_ASAP} show
that for decentralized training to perform well, $G$ must be connected and $W$ has to be doubly stochastic, i.e., the row sums and column sums of $W$ must be one. Normally, every update has the same influence:
\begin{equation}
\label{eq:average}
W_{ij} = \left\{
\begin{array}{ll}
1/|N_{in}(j)| & \mbox{for } i \in N_{in}(j) \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
An update sent from worker $i$ to worker $j$ is denoted by $u_{i \rightarrow j}$.
If the update is generated in the $k$-th iteration, we further indicate the time stamp with $u_{i \rightarrow j}(k)$.
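For illustration, the uniform weights of Eq.~(\ref{eq:average}) and the aggregation $\sum_{i \in N_{in}(j)} W_{ij}u_i$ can be sketched as follows (our NumPy sketch; the helper names are hypothetical):

```python
import numpy as np

def uniform_weights(in_neighbors, n):
    """Weight matrix of Eq. (1): each in-coming update at worker j
    (self-loop included in in_neighbors[j]) gets weight 1/|N_in(j)|."""
    W = np.zeros((n, n))
    for j, nbrs in enumerate(in_neighbors):
        for i in nbrs:
            W[i, j] = 1.0 / len(nbrs)
    return W

def aggregate(W, updates, j):
    """Aggregated update at worker j: the sum over i of W_ij * u_i."""
    return sum(W[i, j] * u for i, u in enumerate(updates))
```

By construction every column of $W$ sums to one; making $W$ doubly stochastic additionally constrains the row sums, which depends on the topology.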
\subsection{Computation Graph in Decentralized Training}
This section discusses two variants of computation graphs at a worker for
decentralized training and the trade-off.
The computation in an iteration involves five operations.
\begin{itemize}[leftmargin=*]
\item {$Compute$:} The worker consumes a randomly selected batch of data and computes gradients based on its current model parameters.
\item {$Send$:} The worker sends its current parameters to its out-going neighbors. This operation is non-blocking; the worker can send its updates regardless of the status of its out-going neighbors.
We use $Send(i)$ to specify the send operations performed
in the $i$-th iteration.
\item {$Recv$:} The worker receives the model parameters of
its in-coming neighbors. Note that the worker does not request
the parameters; instead, they are sent by the in-coming neighbors
proactively.
We use $Recv(i)$ to specify that the received parameters are sent
in the $i$-th iteration.
The $Recv$ operation blocks until the parameters are completely
received.
\item {$Reduce$:} The worker averages the parameters it has received with its own parameters.
\item {$Apply$:} The worker applies gradients to its current parameters, either before or after the $Reduce$.
\end{itemize}
Next, we describe two computation graphs used in
recent works~\cite{NIPS2017_dPSGD,ArXiv_ASAP,DBLP:conf/nips/send_difference}, which are consistent with the algorithm described in Figure \ref{alg0}.
{\bf Serial Approach}
Illustrated in Figure \ref{fig:dec2ways} (a),
upon entering a new iteration, each worker will $Compute$ gradients,
$Apply$ the gradients to its current parameters, and then $Send$ the new parameters to its out-going neighbors. When it has received new parameters from all its in-coming neighbors sent in the same iteration, it will perform a $Reduce$ and update the local parameters with the results.
This mode is adopted by \cite{ArXiv_ASAP}.
{\bf Parallel Approach}
Shown in Figure \ref{fig:dec2ways} (b), each worker will $Send$
its current parameters at the beginning of an iteration, and at the same time $Compute$ gradients based on the same set of parameters. After receiving the parameters from its in-coming neighbors, it performs a $Reduce$, followed by an $Apply$, producing the local parameters for
update after applying the gradients to the reduced values.
This mode is used in \cite{NIPS2017_dPSGD,DBLP:conf/nips/send_difference}.
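The two orderings can be sketched as follows. This is a sketch of the control flow only, not the actual implementation; the five operations are passed in as hypothetical stand-ins since their real implementations live in the training framework:

```python
def serial_iteration(params, compute, apply, send, recv, reduce):
    """Serial approach: gradients are computed with and applied to
    the same set of parameters, then the new parameters are exchanged."""
    grads = compute(params)
    params = apply(params, grads)
    send(params)
    neighbor_params = recv()          # blocks until fully received
    return reduce(params, neighbor_params)

def parallel_iteration(params, compute, apply, send, recv, reduce):
    """Parallel approach: Send/Recv overlap with Compute, but the
    gradients are applied to the parameters *after* the Reduce."""
    send(params)                      # non-blocking
    grads = compute(params)           # overlaps with communication
    neighbor_params = recv()
    reduced = reduce(params, neighbor_params)
    return apply(reduced, grads)
```

The only difference is where `apply` sits relative to `reduce`, which is exactly the source of the inaccurate gradients discussed next.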
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Pic1_new.pdf}
\caption{Computation Graph in Decentralized Training.}
\label{fig:dec2ways}
\end{figure}
Compared to the serial approach,
the parallel approach allows parallel execution of the $Compute$ and the $Reduce$.
However, the parallelism is achieved at the cost of
{\em inaccurate gradients}.
Specifically, in Figure \ref{fig:dec2ways} (b),
the gradients are computed using the parameters {\em before} the $Reduce$
but are applied to the parameters {\em after} the $Reduce$.
This can harm the effectiveness of gradient descent.
On the contrary, the serial approach in Figure \ref{fig:dec2ways} (a)
ensures that the gradients are generated with and applied to the {\em same} set of parameters.
Therefore, the parallel approach enjoys faster iterations but requires more iterations to converge, while the serial approach needs fewer
but longer iterations to converge.
This reflects the interesting trade-off between execution efficiency
and statistical efficiency~\cite{Zhang:dinnwitted}.
We use the parallel approach in our design.
\subsection{Iteration Gap and the Mixed-Version Problem}
An important characteristic of decentralized training is
the potentially large iteration gap, i.e., at any given time, workers can be found in a wide range of iterations. In the centralized setting, no such gap exists in synchronous computation, because workers have to synchronize at the end of each iteration, which ensures that all workers always stay in the same iteration. For asynchronous computation, convergence is only guaranteed when bounded staleness is enforced, which sets a fixed upper-bound on the iteration gap between the fastest worker and the slowest one.
However, in the decentralized setting, we show that the size of the gap is limited only by the graph topology and that large gaps can occur even in synchronous training. Before delving into the details, we first establish a basic and natural assumption.
\textbf{Assumption}. A worker can advance to the next iteration if and only if all of the following conditions are true:
{\em a)} it has finished the computation of the current iteration;
{\em b)} it has sent updates of the current iteration to its out-going neighbors;
{\em c)} it has received updates required by the current iteration from its in-coming neighbors.
The above assumption was adopted in previous work \cite{ArXiv_ASAP}. Note that the assumption does {\em not} impose a global barrier between adjacent iterations. In fact, a global barrier can hurt the performance as it
introduces unnecessary waiting: in order to enter the next iteration, a worker has to wait for all other workers to complete the current iteration, when actually it only needs to wait for its in-coming neighbors. Based on the assumption, an important result on iteration gap follows:
\textbf{Theorem 1}. Under the above assumption, at any given time, the maximum difference between
worker $i$'s iteration and worker $j$'s iteration is the length of the shortest path from node $j$ to node $i$, i.e.,
$Iter(i) - Iter(j) \leq length(Path_{j \rightarrow i})$,
where $Iter(i)$ is the iteration of worker $i$ for any $i \in V$ and $length(Path_{j \rightarrow i})$ stands for the length of the shortest path from node $j$ to node $i$.
\textit{Proof}. The proof of the theorem is based on
a simple observation: the maximal iteration difference between a node and its in-coming
neighbor is 1, i.e., for any $i \in V$ and any $j \in N_{in}(i)$, $Iter(i) - Iter(j) \leq 1$. This is because worker $i$ can only advance to $Iter(i)$ when it has received worker $j$'s update of iteration $Iter(i)-1$. Note that we cannot derive the lower-bound of $Iter(i) - Iter(j)$ given only the directed edge from $j$ to $i$, because $Iter(j)$ can be much larger than $Iter(i)$ if worker $i$ is slower than worker $j$.
Now for two arbitrary nodes $i$ and $j$, consider the shortest path from $j$ to $i$. Going from $j$ to $i$, based on the observation above, every time we pass a node $v$, the maximal possible value of $Iter(v) - Iter(j)$ is increased by 1. Since $Iter(j) - Iter(j) = 0$ and there are $length(Path_{j \rightarrow i})$ other nodes on the path, we have $Iter(i) - Iter(j) \leq length(Path_{j \rightarrow i})$. $\blacksquare$
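The bound of {\em Theorem 1} can be computed by a breadth-first search over the directed communication graph; the sketch below is our illustration, not part of the system:

```python
from collections import deque

def max_iter_gap(out_edges, j, i):
    """Upper bound of Theorem 1 on Iter(i) - Iter(j): the length of the
    shortest directed path from node j to node i, found by BFS."""
    dist = {j: 0}
    dq = deque([j])
    while dq:
        u = dq.popleft()
        if u == i:
            return dist[u]
        for v in out_edges[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return float('inf')  # i is not reachable from j
```

On a directed 4-worker ring $0 \rightarrow 1 \rightarrow 2 \rightarrow 3 \rightarrow 0$, worker 3 can run at most 3 iterations ahead of worker 0, while worker 1 can run at most 1 ahead.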
The existence of the iteration gap creates the mixed-version problem \cite{ArXiv_ASAP}, i.e., a worker can receive updates of various iterations at the same time from its in-coming neighbors. The problem may not be
severe if the network is small, but as the size of the network grows, $length(Path_{i \rightarrow j})$ can be large, where $j \in N_{in}(i)$. In such cases, there are a large number of $u_{j \rightarrow i}$'s generated by worker $j$ but not consumed by worker $i$.
While this paper is the first to present and prove the theorem,
we find that a previous solution in~\cite{ArXiv_ASAP} indeed provides
a mechanism to bound the gap between workers~\footnote{Although \cite{NIPS2017_dPSGD} also implements decentralized training,
its focus is mainly on the algorithm, not the implementation. We cannot find enough detail to judge how
it handles the mixed-version and iteration-gap problems.
Thus, we only consider \cite{ArXiv_ASAP} regarding implementation.}.
Specifically, a worker is prevented from sending an update unless it has received confirmation that the previous update has been consumed. As illustrated in Figure \ref{fig:dec2ways} (a), this method, called \textsc{notify-ack}, requires a worker to send an $ACK$ message after the $Reduce$ to all its in-coming neighbors, announcing that the parameters they sent have been consumed. An in-coming neighbor will not perform the next $Send$ until it has received the $ACK$.
Essentially, in addition to the forward
dependence from sender to receiver explained in
{\em Theorem 1},
\textsc{notify-ack} also enforces the {\em backward}
dependence from receiver to sender.
This leads to an overly restrictive iteration gap
between adjacent workers, for the following reasons.
For $j \in N_{in}(i)$, we already know from \textit{Theorem 1} that $Iter(i) - Iter(j) \leq 1$. As for $Iter(j) - Iter(i)$, since worker $j$ needs to receive an ACK from worker $i$'s $(Iter(j)-1)$-th iteration in order to advance to iteration $Iter(j)+1$, the difference of their iterations is at most 2.
As for an arbitrary pair of workers $(i,j)$, the upper-bound on $Iter(i) - Iter(j)$ cannot be merely expressed by a function of $length(Path_{j \rightarrow i})$.
This is because on any path between $i$ and $j$, we must ensure that $-2 \leq Iter(u) - Iter(v) \leq +1$ holds for any $v \in N_{in}(u)$ on the path.
In other words, every time we pass a node $u$
from $v$, worker $u$ can be at most 1 iteration ahead of,
{\em and} at most 2 iterations behind, worker $v$.
As a result, the upper-bound of $Iter(i) - Iter(j)$,
i.e., how much worker $i$ is ahead of worker $j$,
is the {\em minimum} of the maximum iteration gaps following
either $Path_{j \rightarrow i}$ or $Path_{i \rightarrow j}$, subject to the constraint between $u$ and $v$:
$Iter(i) - Iter(j) \leq min(length(Path_{j \rightarrow i}),2 \times length(Path_{i \rightarrow j}))$, which is more restrictive
than the iteration gap determined by {\em Theorem 1}.
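The tighter \textsc{notify-ack} bound can likewise be computed by shortest-path search (again an illustrative sketch of the formula, not system code):

```python
from collections import deque

def sp(out_edges, s, t):
    """Shortest directed path length from s to t via BFS."""
    dist = {s: 0}
    dq = deque([s])
    while dq:
        u = dq.popleft()
        for v in out_edges[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist.get(t, float('inf'))

def notify_ack_gap(out_edges, i, j):
    """Upper bound on Iter(i) - Iter(j) under notify-ack:
    min(length(Path_{j->i}), 2 * length(Path_{i->j}))."""
    return min(sp(out_edges, j, i), 2 * sp(out_edges, i, j))
```

On the directed 4-worker ring, worker 3 can now run at most 2 iterations ahead of worker 0 (via the backward dependence), instead of the 3 allowed by {\em Theorem 1}.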
Although \textsc{notify-ack} can ensure the sequential order of updates at the receiving worker, the tightly bounded iteration gap makes it an undesirable choice for heterogeneous environments where workers are expected to advance at various speeds. An intuitive example is a fast worker that waits for a slow out-going neighbor's $ACK$ even though it has already received all the updates from its in-coming neighbors and is in fact ready to advance to the next iteration.
To cope with heterogeneity,
we propose two mechanisms,
backup workers and bounded staleness, to accommodate
larger iteration gaps.
While similar mechanisms exist in centralized training~\cite{ICLR2016_backup_workers,NIPS2013_SSP},
they have not been applied in the decentralized setting.
As we will show, both the protocol and implementation will need to be carefully redesigned.
\subsection{Decentralized Training with Backup Workers}
\label{backup_concept}
An effective technique to mitigate the effect of heterogeneity is to use backup workers \cite{ICLR2016_backup_workers}. In centralized training,
it can be easily implemented by allowing
the number of updates needed at parameter servers to
be smaller than the number of workers.
Assume that there are $n$ workers and the number of updates needed in every iteration is $m$ ($m<n$). From
each PS's perspective, the effective number of workers is $m$, since it needs $m$ updates in each iteration, and the remaining $(n-m)$ workers are ``backups''. In this way, we can tolerate at most $(n-m)$ slow workers in case of random slowdowns or even accidental node crashes without influencing the training speed.
We naturally apply backup workers to decentralized training by setting the number of updates needed at each worker to be smaller than the number of its in-coming neighbors. As illustrated in Figure \ref{fig:buwstaleEg}(a), in a decentralized 3-worker setting, every worker has edges to and from two other workers, and only needs one update from its neighbors in every iteration.
In the current state, worker A is stuck at iteration 0, while worker B and C are able to advance to iteration 3 and 4, respectively. Without backup workers, worker A would have dragged down the progress of B and C, because they would have both relied on A's updates to advance. With backup workers, B and C can advance to
later iterations.
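The backup-worker rule at a single worker can be sketched as follows (the function name and data layout are hypothetical): proceed once any $m$ of the in-coming updates for the current iteration have arrived.

```python
def collect_with_backups(updates_by_sender, m):
    """Backup-worker rule at one worker: proceed once any m in-coming
    updates of the current iteration are present. `updates_by_sender`
    maps neighbor id -> update, or None if that neighbor's update
    has not arrived yet."""
    arrived = [u for u in updates_by_sender.values() if u is not None]
    if len(arrived) < m:
        return None                   # keep waiting for more updates
    return sum(arrived[:m]) / m       # the remaining senders are backups
```

With $m$ smaller than the in-degree, slow neighbors simply never contribute, which is what allows B and C in the example to leave A behind.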
However, the simple mechanism causes a fundamental problem: the iteration gap between two workers can be
{\em arbitrarily} large.
It can be easily seen from Figure \ref{fig:buwstaleEg},
in which B and C can in fact be at {\em any}
iteration since they may only rely on the updates
of each other.
Since the \textsc{notify-ack} mechanism
implies that the iteration difference of adjacent
workers is at most 2,
the benefits of backup workers cannot be fully realized.
For example,
worker B and C can both be stuck at iteration 1 waiting for the $ACK(0)$ from A, which will not arrive without A's progress.
Thus, worker A can still drag down the progress
of worker B and C even if they do not need to wait
for worker A's update.
In essence, it is the {\em mechanisms} in \textsc{notify-ack}
that prevent the realization of backup workers.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Fig3_new.pdf}
\caption{Backup workers and bounded staleness in distributed heterogeneous environments. Consumed updates are shown in grey. (a) Worker B and C can advance to any iteration, only relying on the updates of one another. (b) With the staleness bound set to 2, worker B is blocking due to the slow progress of A. Worker C can freely advance to the next iteration.}
\label{fig:buwstaleEg}
\end{figure}
\subsection{Decentralized Training with Bounded Staleness}
\label{staleness_concept}
Bounded staleness~\cite{NIPS2013_SSP} is another
technique for centralized training to tolerate
slow workers or slow communication between parameter
servers and workers.
To realize it, an asynchronous parameter server is adopted but an upper-bound is enforced on the difference of the fastest worker's iteration and the slowest one's.
A worker is free to advance to a new iteration as long as the staleness bound is preserved.
For decentralized training, it is difficult to enforce
globally bounded staleness, which means that the iteration
difference of the fastest worker and slowest worker
in the whole system cannot exceed the bound.
Clearly, enforcing it would defeat the decentralized nature
of the system, as it would require some kind of global progress monitor
to ensure such a property.
Instead, we propose to apply bounded staleness in a local fashion: a worker can enter a new iteration as long as it has received updates from at most $s$ iterations ago from all its in-coming neighbors, where $s$ is the upper-bound of staleness.
Since updates are delivered locally, the enforcement of the staleness bound is straightforward.
We believe that a local bound is a natural adoption of
bounded staleness in the decentralized setting
that leads to efficient implementation.
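The local staleness check amounts to a few lines; the sketch below is a hypothetical helper, assuming each worker tracks the iteration of the most recent update received from each in-coming neighbor:

```python
def can_advance(my_iter, last_recv_iter, s):
    """Local bounded-staleness rule: the worker may proceed if, for
    every in-coming neighbor, the most recent update it holds is at
    most s iterations old. `last_recv_iter` maps neighbor id -> the
    iteration of the newest update received from that neighbor."""
    return all(my_iter - it <= s for it in last_recv_iter.values())
```

Since the check only inspects locally received updates, no global coordination is needed to enforce the bound.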
Figure~\ref{fig:buwstaleEg}(b) shows an example of a staleness bound of 5 being used in the same 3-worker decentralized setting. Worker A, B and C are in the 0th, 5th and 3rd iteration, respectively. Worker A is temporarily slowed down due to a random factor, e.g., resource sharing. Nevertheless, with the use of staleness, worker B and C can continue to advance until the 5th iteration. The advantage of bounded staleness is that some progress can still be made even if certain workers are slowed down; that is, the effect of the slowdown is mitigated.
Let us consider \textsc{notify-ack} again.
Unfortunately, it directly imposes a strict bound on the iteration gap (i.e., 2), so
any staleness bound larger than 1
is not possible. For example, in the case in Figure \ref{fig:buwstaleEg}(b), worker B and C would not have been able to enter iteration 1
if \textsc{notify-ack} was used as the protocol, since they would still be waiting for $ACK(0)$ from worker A.
As with backup workers,
\textsc{notify-ack}'s mechanism prevents the realization
of local bounded staleness.
The {\bf essential takeaway} of the discussion so far is the
following.
Although \textsc{notify-ack} points out and prevents
the mixed-version problem, it does {\em not} support
larger, potentially arbitrary iteration gaps.
As we have shown, \textsc{notify-ack} is overly
restrictive in forcing a very small gap between
adjacent workers, which will
{\em 1)} limit the potential of decentralized training; and
{\em 2)} prevent the implementation of backup workers and
bounded staleness.
To support larger iteration gaps while solving the mixed-version problem, we propose a queue-based coordination
scheme.
\subsection{Update Queue}
\label{update}
To support mixed-version updates and large iteration gaps,
we propose a queue-based coordination scheme in which the received updates are stored in FIFO queues, called {\em update queues}. The update queue at worker $i$ is denoted by $UpdateQ(i)$. We further define the following queue operations:
\begin{itemize}[leftmargin=*]
\item $q.enqueue(update,iter=None,w\_id=None)$ pushes $update$ into the queue, where $iter$ and $w\_id$ are tags, denoted by $(iter,w\_id)$, and $q$ is the name of the queue. The input $iter$ indicates the index of the iteration where the $update$ was generated, and $w\_id$ indicates the index of the sender worker.
\item $q.dequeue(m,iter=None,w\_id=None)$ takes the first $m$ entries tagged with $(iter,w\_id)$ out of the queue and returns a list containing these entries. This function blocks if there are not enough elements tagged with $(iter,w\_id)$ in the queue. If one of the tags is not specified, then the first $m$ entries matching the other tag will be returned. If neither is specified, the first $m$ entries are returned regardless of their tags. If needed, tags of the returned entries can be returned as well.
\item $q.size(iter=None,w\_id=None)$ returns the number of entries tagged with $(iter,w\_id)$ in the queue. If one of the tags is not specified, the number of entries matching the other tag is returned. If neither is specified, the total number of entries in the queue is returned.
\end{itemize}
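To make these semantics concrete, the following is a minimal Python sketch of the tagged FIFO queue described above. The class and method names mirror the operations in the text but are our own rendering, not the paper's implementation; for brevity, blocking is replaced by an exception (a real implementation would wait on a condition variable), and returning tags alongside entries is omitted.

```python
from collections import deque

class UpdateQueue:
    """Sketch of the tagged FIFO update queue described above."""

    def __init__(self):
        self._q = deque()  # entries are (update, iter, w_id) tuples

    # `iter` shadows a Python builtin; kept to match the text.
    def enqueue(self, update, iter=None, w_id=None):
        self._q.append((update, iter, w_id))

    def _matches(self, entry, iter, w_id):
        _, it, wid = entry
        return (iter is None or it == iter) and (w_id is None or wid == w_id)

    def dequeue(self, m, iter=None, w_id=None):
        # First m entries matching the given tags, in FIFO order.
        hits = [e for e in self._q if self._matches(e, iter, w_id)][:m]
        if len(hits) < m:
            raise RuntimeError("would block: not enough matching entries")
        for e in hits:
            self._q.remove(e)
        return [u for (u, _, _) in hits]

    def size(self, iter=None, w_id=None):
        return sum(self._matches(e, iter, w_id) for e in self._q)
```

Unspecified tags act as wildcards, so `dequeue(m)` simply pops the `m` oldest entries regardless of origin.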
Based on the update queue, the standard decentralized training algorithm is shown in Figure \ref{alg1}.
To send an update from $i$ to $j$,
worker $i$ directly enqueues the parameters to the update queue
of worker $j$.
To receive an update, a worker can locally dequeue
updates sent from various workers and iterations.
However, one question remains: how large should the queue be to accommodate all the updates?
Based on \textit{Theorem 1}, for any worker $i$ and its in-coming neighbor $j$, worker $j$ can be $length(Path_{i \rightarrow j})$ iterations ahead of $i$.
It means that $UpdateQ(i)$ must be able to store updates of $length(Path_{i \rightarrow j}) + 1$ different iterations from $j$. When the number of workers is large, the shortest path from $i$ to $j$ can also be long, and so must be the capacity of the queue, which puts considerable pressure on the system memory. An example is shown in Figure \ref{fig:standardDec}.
\begin{figure}
\centering
\small
\begin{algorithmic}[1]
\REQUIRE Initial model parameters $p_0$
\REQUIRE $UpdateQ(i)$ for all $i \in V$
\REQUIRE Maximum number of iterations $max\_iter$
\FOR {$i \in V$}
\STATE //Initialize local model parameters
\STATE $x_{0,i} = p_0$
\FOR {$k = 0$ \TO $max\_iter$}
\STATE // 1. $Send$ my parameters to my out-going neighbors
\FOR {$j \in N_{out}(i)$}
\STATE $UpdateQ(j).enqueue(x_{k,i},iter=k,w\_id=i)$
\ENDFOR
\STATE // 2. $Compute$ gradients based on $x_{k,i}$
\STATE Randomly sample a batch of data $d_{k,i}$
\STATE $grads = Compute(x_{k,i},d_{k,i})$
\STATE // 3. $Recv$ parameters from my in-coming neighbors
\STATE $x_{recv} = UpdateQ(i).dequeue(|N_{in}(i)|,iter=k)$
\STATE // 4. $Reduce$
\STATE $temp = \sum_{j = 0}^{|N_{in}(i)|-1} x_{recv}(j) / |N_{in}(i)|$
\STATE // 5. $Apply$
\STATE $x_{k+1,i} = temp+grads$
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Decentralized Training with Update Queue}
\label{alg1}
\end{figure}
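As a sanity check of the mechanics in the figure above, here is a self-contained toy simulation (our own construction, not the paper's implementation): scalar parameters on a ring topology, with a fixed decrement standing in for gradients, executed sequentially so that no $Recv$ ever blocks.

```python
import collections

def decentralized_train(n_workers, max_iter, p0=0.0):
    """Toy sequential simulation of the update-queue training loop.

    Ring topology: worker i sends to (i + 1) % n and receives from
    (i - 1) % n; parameters are scalars and the 'gradient' is a fixed
    decrement, just to exercise the queue mechanics.
    """
    q = [collections.deque() for _ in range(n_workers)]  # UpdateQ(i)
    x = [p0] * n_workers
    for k in range(max_iter):
        # 1. Send: everyone enqueues to its out-going neighbor first,
        # so the sequential Recv loop below never blocks.
        for i in range(n_workers):
            q[(i + 1) % n_workers].append((x[i], k, i))
        for i in range(n_workers):
            grads = -0.1                  # 2. Compute (stub gradient)
            u, it, _ = q[i].popleft()     # 3. Recv from the one in-neighbor
            assert it == k
            temp = u                      # 4. Reduce (single in-neighbor)
            x[i] = temp + grads           # 5. Apply
    return x

decentralized_train(3, 5)
```

With every worker starting from the same parameters, each iteration simply decrements all parameters by 0.1 in lockstep.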
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Fig5_new.pdf}
\caption{Iteration gap in standard decentralized training. The size of the update queue is 4. Consumed updates are shown in grey. (1) A maximum iteration gap of 3 between B and A is illustrated. $UpdateQ(A)$ is able to accommodate the 4 updates. (2) A maximum iteration gap of 4 between B and A is observed. $UpdateQ(A)$ will not be able to accommodate the updates. Using $TokenQ(A \rightarrow B)$ can prevent this situation.}
\label{fig:standardDec}
\end{figure}
\subsection{Token Queue: Controlled Iteration Gaps}
\label{token}
To tackle this problem, we propose token queues as a mechanism to {\em control the iteration gap between adjacent workers}. Note that by the nature of standard decentralized training, every worker can be at most one iteration {\em ahead} of its in-coming neighbors; therefore, we only need to control a worker's speed as compared to its potentially {\em slower} out-going neighbors.
In our design, each worker maintains a token queue for every in-coming neighbor. Whenever a worker attempts to enter a new iteration, it must acquire a token from every one of its out-going neighbors. The number of tokens in the queue determines how many more iterations an in-coming neighbor can advance, considering the local worker's progress. Assuming that we want the iteration gap between adjacent workers not to exceed a predefined positive integer constant $max\_ig$,
we propose the following procedure to ensure this gap.
We denote the token queue at worker $i$ storing tokens for worker $j$ by $TokenQ(i \rightarrow j)$, where $i \in N_{out}(j)$.
\begin{itemize}[leftmargin=*]
\item {\bf Initialization}
At the start of the first iteration, each worker puts $max\_ig$ tokens in each token queue it maintains.
\item {\bf Remove token}
When a worker $i$ attempts to enter a new iteration, it must remove a token from the queue maintained by {\em every} one of its out-going neighbors: for each $j \in N_{out}(i)$, remove a token from
$TokenQ(j \rightarrow i)$.
\item {\bf Insert token}
When a worker $i$ enters a new iteration, it will insert a token into {\em every} local token queue: for each $j \in N_{in}(i)$, insert a token into $TokenQ(i \rightarrow j)$. This allows all its in-coming neighbors to advance further.
\end{itemize}
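The procedure can be exercised with a small self-contained simulation (our own sketch): two workers on a two-node cycle, with the update-queue constraint ignored so that only the token mechanism is at work. It also checks the queue-size invariant used in the proof of \textit{Theorem 2}.

```python
import random

def simulate_tokens(max_ig=3, steps=2000, seed=0):
    """Two workers, each the other's in- and out-going neighbor.

    Only the token mechanism is modeled (the update-queue constraint
    is ignored), so the iteration gap is bounded by max_ig alone.
    """
    rng = random.Random(seed)
    iters = {"i": 0, "j": 0}
    # tokens[(a, b)] models TokenQ(a -> b): tokens worker a holds for b.
    tokens = {("i", "j"): max_ig, ("j", "i"): max_ig}
    for _ in range(steps):
        w = rng.choice(["i", "j"])
        other = "j" if w == "i" else "i"
        if tokens[(other, w)] == 0:
            continue                      # w blocks: no token available
        tokens[(other, w)] -= 1           # Remove token, then advance
        iters[w] += 1
        tokens[(w, other)] += 1           # Insert token for the neighbor
        # Invariant from the proof:
        #   TokenQ(j -> i).size() = Iter(j) - Iter(i) + max_ig,
        # which implies the gap bound below.
        assert tokens[("j", "i")] == iters["j"] - iters["i"] + max_ig
        assert abs(iters["i"] - iters["j"]) <= max_ig
    return iters

simulate_tokens()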
\textbf{Theorem 2}. For standard decentralized training, with token queues, the upper-bound of $Iter(i)-Iter(j)$ is given by:
$min(length(Path_{j \rightarrow i}),max\_ig \times length(Path_{i \rightarrow j}))$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Added_fig_new.pdf}
\caption{Topological relations between two workers.}
\label{token_proof}
\end{figure}
\textit{Proof}. First, we consider a pair of adjacent workers $(i,j)$. For any worker $i$ and its out-going neighbor $j$, we already know from \textit{Theorem 1} that $Iter(j)-Iter(i) \leq 1$. To derive the upper-bound of $Iter(i)-Iter(j)$, we will prove by induction that $TokenQ(j \rightarrow i).size() = Iter(j)-Iter(i)+max\_ig$ holds for all iterations, where the $size()$ function returns the number of tokens in the token queue. At the start of training, $Iter(i) = Iter(j) = 0$, and the initialization of $TokenQ(j \rightarrow i)$ results in $TokenQ(j \rightarrow i).size() = max\_ig$. Therefore, the above equation holds. The variables in the equation only change when either worker $i$ or $j$ advances to a new iteration. If worker $i$ advances, it must remove one token from $TokenQ(j \rightarrow i)$; at the same time, $Iter(i)$ is increased by 1. Therefore, both sides of the equation are decreased by 1 and the equality still holds. Similarly, if worker $j$ enters a new iteration, it must insert one token into $TokenQ(j \rightarrow i)$; therefore, both sides of the equation are increased by 1 and the equality still holds. Since the number of tokens in $TokenQ(j \rightarrow i)$ is non-negative, we have $Iter(j)-Iter(i)+max\_ig = TokenQ(j \rightarrow i).size() \geq 0$, and thus $Iter(i)-Iter(j) \leq max\_ig$.
Next we consider an arbitrary pair $(i,j)$. Since the graph is connected, there exists a path from $i$ to $j$ and vice versa, as shown in Figure~\ref{token_proof} (a).
The two paths give rise to two basic scenarios.
In Figure~\ref{token_proof} (b),
by the adjacent-worker case proved above,
applied along each edge of the path,
$Iter(i)-Iter(j) \leq max\_ig \times length(Path_{i \rightarrow j})$.
In Figure~\ref{token_proof} (c), due to {\em Theorem 1},
$Iter(i)-Iter(j) \leq length(Path_{j \rightarrow i})$.
The general case is a combination of the two
scenarios; thus we have
$Iter(i)-Iter(j) \leq min(length(Path_{j \rightarrow i}),max\_ig \times length(Path_{i \rightarrow j}))$.
$\blacksquare$
The intuition behind the gap being bounded by the
smaller of the two terms is that a larger
gap would be {\em infeasible}, due to
either not having enough tokens
(if the gap in Figure~\ref{token_proof} (b) were larger), or
not having a long enough path
(if the gap in Figure~\ref{token_proof} (c) were larger).
Overall, the proposed token queue provides a flexible parametrized method to bound the iteration gap.
The upper-bound of the capacity of any token queue is
$max\_ig \cdot (length(Path_{i \rightarrow j})+1)$.
It directly follows from applying the upper-bound of $Iter(i)-Iter(j)$ proved in \textit{Theorem 2} to $TokenQ(i \rightarrow j).size()=Iter(i)-Iter(j)+max\_ig$.
Now we apply token queues to the example in Figure \ref{fig:standardDec}(b). We can set $max\_ig$ to 3. The token queue at worker A contains 3 tokens at the beginning of the 0th iteration. Whenever worker B enters a new iteration, it must obtain a token from A. Since A has not progressed, B can get at most 3 tokens from A, which enables B to reach the 3rd iteration but no more. Therefore, A only has to deal with at most 4 updates at a time, and the situation in the figure is prevented.
The decentralized training algorithm
using token queues is shown in Figure \ref{alg2}. With bounded iteration gaps, the required size of $UpdateQ(i)$ is upper-bounded by $(1+max\_ig)|N_{in}(i)|$, regardless of the graph size or topology.
Although the use of token queues may
seem to provide only a marginal improvement over the algorithm
based merely on the update queue,
we will see in Section~\ref{backup}
that bounding the iteration gap is absolutely necessary when backup workers are employed to mitigate the effect of heterogeneity.
\begin{figure}
\centering
\small
\begin{algorithmic}[1]
\REQUIRE All the requirements in Figure \ref{alg1}
\REQUIRE $TokenQ(i \rightarrow j)$ for all $i \in V$ and all $j \in N_{in}(i)$
\REQUIRE Maximum iteration gap $max\_ig$
\FOR {$i \in V$}
\STATE $x_{0,i} = p_0$
\FOR {$j \in N_{in}(i)$}
\STATE // Put $(max\_ig-1)$ initial tokens
\STATE $TokenQ(i \rightarrow j).enqueue([0] *(max\_ig-1))$
\ENDFOR
\FOR {$k = 0$ \TO $max\_iter$}
\FOR {$j \in N_{in}(i)$}
\STATE // Insert tokens
\STATE $TokenQ(i \rightarrow j).enqueue([k])$
\STATE $Send(x_{k,i},k,i)$ \algorithmiccomment{1. $Send$}
\STATE $grads = Compute(x_{k,i})$ \algorithmiccomment{2. $Compute$}
\STATE $x_{recv} = Recv(k,i)$ \algorithmiccomment{3. $Recv$}
\STATE $temp = Reduce(x_{recv})$ \algorithmiccomment{4. $Reduce$}
\STATE $x_{k+1,i} = temp+grads$ \algorithmiccomment{5. $Apply$}
\FOR {$j \in N_{out}(i)$}
\STATE // Get a new token
\STATE $TokenQ(j \rightarrow i).dequeue(1)$
\ENDFOR
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\footnotesize
\emph{Notes:} Some pseudocodes have been wrapped up in functions. Line 11 has replaced the original lines 6-8 from Figure \ref{alg1}; line 12 has replaced the original lines 10-11; line 13 has replaced the original line 13; line 14 has replaced the original line 15.
\caption{Decentralized Training with Token Queue}
\label{alg2}
\end{figure}
\subsection{Supporting Backup Workers}\label{backup}
Applying backup workers to decentralized training is relatively intuitive. As stated in Section~\ref{backup_concept}, instead of requiring an update from every in-coming neighbor, a worker only needs updates from a subset of its neighbors in order to advance to the next iteration, i.e., the number of updates needed is smaller than the number of its in-coming neighbors. In the algorithm shown in Figure \ref{alg4}, when collecting updates from the local update queue, a worker first makes sure it has enough updates by specifying the number in the input of the $dequeue$ function. Then it checks the queue for any additional update that is available, as the number of updates received may exceed the required number. The above process can also be replaced with a while loop that keeps taking available updates out of the queue until enough have been collected.
One problem with using a smaller number of updates is that the unused updates arriving later can accumulate in the update queue. Iteration after iteration, they will take up more and more space, which will inevitably lead to overflow. We propose a solution that consists of two parts:
{\em a)} clear the stale updates periodically; and
{\em b)} prevent unnecessary updates from being sent by checking the receiver's iteration before the $Send$, at little communication overhead. We will explain both in more detail in Section~\ref{sec:impl}.
Another distinct feature of the backup workers setting is that the iteration gap is unbounded. As we have illustrated in Figure \ref{fig:buwstaleEg} before, worker B and C can progress to an arbitrarily large iteration depending only on the updates of one another, while their common neighbor A stays in iteration 0. Therefore, bounding the iteration gap is a {\em must} for the correct execution of decentralized training --- in this case, token queues are an indispensable part of the design.
\begin{figure}
\centering
\small
\begin{algorithmic}[1]
\REQUIRE All the requirements in Figure \ref{alg2}
\REQUIRE Number of backup workers $N\_buw(i)$ for all $i \in V$; $N\_buw(i) < |N_{in}(i)|$
\STATE function $Recv(k,i)\{$
\STATE\hspace{\algorithmicindent} // Get the needed updates
\STATE\hspace{\algorithmicindent} $x_{rcv1} = UpdateQ(i).dequeue(|N_{in}(i)|-N\_buw(i),iter=k)$
\STATE\hspace{\algorithmicindent} // Get additional updates remaining in the queue
\STATE\hspace{\algorithmicindent} $x_{rcv2} = UpdateQ(i).dequeue(UpdateQ(i).size(k),iter=k)$
\STATE\hspace{\algorithmicindent} // Combine updates and return
\STATE\hspace{\algorithmicindent} \textbf{Return} $concatenate(x_{rcv1},x_{rcv2})$
\STATE $\}$
\STATE function $Reduce(x_{recv})\{$
\STATE\hspace{\algorithmicindent} // Compute the number of entries in $x_{recv}$
\STATE\hspace{\algorithmicindent} $N\_updates = size(x_{recv})$
\STATE\hspace{\algorithmicindent} // Reduce and return
\STATE\hspace{\algorithmicindent} \textbf{Return} $\sum_{j = 0}^{N\_updates-1} x_{recv}(j) / N\_updates$
\STATE $\}$
\STATE $Train()$
\end{algorithmic}
\footnotesize
\emph{Notes:} Lines 1-22 in Figure \ref{alg2} are wrapped up in the function $Train()$.
\caption{Decentralized Training with Backup Workers}
\label{alg4}
\end{figure}
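The receive logic for backup workers can be sketched in plain Python as follows. This is our own self-contained rendering of the $Recv$ and $Reduce$ functions above, with the update queue reduced to a list of $(update, iter)$ pairs and blocking replaced by an assertion.

```python
def recv_with_backups(update_q, k, n_in, n_buw):
    """Collect iteration-k updates, tolerating up to n_buw stragglers.

    update_q: list of (update, iter) pairs, oldest first, standing in
    for UpdateQ(i).  Returns (updates, remaining_queue).  A real
    implementation would block until n_in - n_buw matching updates
    arrive; here we assert instead.
    """
    needed = n_in - n_buw
    hits = [u for (u, it) in update_q if it == k]
    assert len(hits) >= needed, "would block: waiting for more updates"
    # Keep everything else (e.g., future iterations) in the queue.
    remaining = [(u, it) for (u, it) in update_q if it != k]
    return hits, remaining

def reduce_updates(updates):
    """Average over however many updates were actually received."""
    return sum(updates) / len(updates)
```

With three in-coming neighbors and one backup worker, for example, two iteration-0 updates suffice even though the third has not arrived.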
\subsection{Supporting Bounded Staleness}
\label{staleness}
As discussed in Section~\ref{staleness_concept}, in the bounded staleness setting, a worker can enter a new iteration as long as it has received updates from at most $s$ iterations ago from all its in-coming neighbors, where $s$ is the upper-bound on staleness. However, a specific way to incorporate staleness in decentralized training is yet to be established. In particular, it remains an open question how to handle stale updates.
We observe that model parameters sent in a later iteration contain the information carried by earlier updates, since later updates are built upon earlier ones and gradients are accumulated in the parameters sent with each update. Therefore, we propose to use the most recent available updates whenever possible and discard the rest.
Specifically, when a worker collects updates from its local update queue, it will compare the tags and select the newest update from each of its in-coming neighbors. If the update is within the staleness bound, it is deemed satisfactory; otherwise, it is dropped. If no update from an in-coming neighbor, either received in the current iteration or in previous ones, is within the current staleness bound, the worker will block until it gets a newer update from the corresponding neighbor. When the worker has received a satisfactory update from every one of its in-coming neighbors, it will perform a $Reduce$ on the newly received updates. Note that the updates to reduce may come from different iterations, thus a simple average may not be the best way to aggregate them. We have compared simple averaging to an iteration based weighted average, and found the latter performs slightly better. For worker $j$ in iteration $k$, the update formula we have settled on is as follows:
\begin{equation}
\vspace{-2mm}
\frac{\sum_{i \in N_{in}^{(k)}(j)} [Iter(u_i)-(k-s)+1]u_i}{\sum_{i \in N_{in}^{(k)}(j)} [Iter(u_i)-(k-s)+1]}
\label{eq:updateST}
\end{equation}
where $Iter(u_i)$ is the iteration in which $u_i$ was generated, and $N_{in}^{(k)}(j) =$\{$i \in N_{in}(j)$: $u_i$ received in iteration $k$ is satisfactory\}. The weight of an update is linearly associated with its iteration, which is at least $k-s$ to be considered satisfactory. The above formula may very well be non-optimal, and we leave further
optimization as future work.
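Equation~\ref{eq:updateST} translates directly into code. The sketch below is our own, with scalar updates standing in for parameter vectors:

```python
def stale_weighted_average(updates, k, s):
    """Iteration-weighted average of Equation (1).

    updates: list of (u_i, iter_u_i) pairs, one satisfactory update
    per in-coming neighbor; u_i is a scalar here, a parameter vector
    in practice.  Weights grow linearly with the update's iteration,
    which must be at least k - s to be satisfactory.
    """
    weights = [it - (k - s) + 1 for (_, it) in updates]
    assert all(w >= 1 for w in weights), "update violates staleness bound"
    total = sum(weights)
    return sum(w * u for w, (u, _) in zip(weights, updates)) / total
```

When all updates come from the current iteration, the weights are equal and the formula reduces to a simple average.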
As for the iteration gap, with a staleness bound of $s$, we have $Iter(i) - Iter(j) \leq s+1$ for $j \in N_{in}(i)$. This is because for worker $i$ to enter $Iter(i)+1$, it needs an update from worker $j$ at least as recent as $u_{j \rightarrow i}(Iter(i)-s)$. Therefore, the upper-bound on the iteration gap is given by
$Iter(i) - Iter(j) \leq (s+1) \cdot length(Path_{j \rightarrow i})$.
We see that the iteration gap is much larger than in the standard decentralized setting. Therefore, we also employ token queues as a way to bound the gap. The algorithm is shown in Figure \ref{alg3}.
\begin{figure}
\centering
\small
\begin{algorithmic}[1]
\REQUIRE All the requirements in Figure \ref{alg2}
\REQUIRE Staleness bound $max\_staleness$
\REQUIRE The iteration of the most recent $u_{i \rightarrow j}$ received, denoted by $iter\_rcv_{i \rightarrow j}$, initialized to -1, for $i \in V$ and $j \in N_{out}(i)$
\STATE function $Recv(k,i)\{$
\STATE\hspace{\algorithmicindent} $min\_iter = k-max\_staleness$
\STATE\hspace{\algorithmicindent} $x_{recv} = [], iter_{recv} = []$
\STATE\hspace{\algorithmicindent} \textbf{for} $j \in N_{in}(i)$ \textbf{do}
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent} \textbf{do}
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent}\hspace{\algorithmicindent} $q\_sze = max(UpdateQ(i).size(w\_id=j),1)$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent}\hspace{\algorithmicindent} $(l\_x,l\_iter) = UpdateQ(i).dequeue(q\_sze,w\_id=j)$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent}\hspace{\algorithmicindent} $iter\_rcv_{j \rightarrow i}=max(max(l\_iter),iter\_rcv_{j \rightarrow i})$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent} \textbf{while} $iter\_rcv_{j \rightarrow i}<min\_iter$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent} \textbf{if} $max(l\_iter) \geq min\_iter$ \textbf{then}
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent}\hspace{\algorithmicindent} $x_{recv} = concatenate(x_{recv},l\_x(argmax(l\_iter)))$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent}\hspace{\algorithmicindent} $iter_{recv} = concatenate(iter_{recv},[max(l\_iter)])$
\STATE\hspace{\algorithmicindent}\hspace{\algorithmicindent} \textbf{endif}
\STATE\hspace{\algorithmicindent} \textbf{end for}
\STATE\hspace{\algorithmicindent} // Return a tuple of the parameters, their iterations and $k$
\STATE\hspace{\algorithmicindent} \textbf{Return} $tuple(x_{recv},iter_{recv},k)$
\STATE $\}$
\STATE function $Reduce(tuple_{recv})\{$
\STATE\hspace{\algorithmicindent} // Deconstruct the input tuple
\STATE\hspace{\algorithmicindent} $(x_{recv}, iter_{recv}, k) = tuple_{recv}$
\STATE\hspace{\algorithmicindent} // Compute the number of entries in $x_{recv}$
\STATE\hspace{\algorithmicindent} $N\_updates = size(x_{recv})$
\STATE\hspace{\algorithmicindent} // Reduce the updates
\STATE\hspace{\algorithmicindent} $sum\_weight = \sum_{j = 0}^{N\_updates-1} [iter_{recv}(j)-(k-s)+1]$
\STATE\hspace{\algorithmicindent} $temp = \frac{\sum_{j = 0}^{N\_updates-1} [iter_{recv}(j)-(k-s)+1]x_{recv}(j)}{sum\_weight}$
\STATE\hspace{\algorithmicindent} \textbf{Return} $temp$
\STATE $\}$
\STATE $Train()$
\end{algorithmic}
\footnotesize
\emph{Notes:} Lines 1-22 in Figure \ref{alg2} are wrapped up in the function $Train()$.
\caption{Decentralized Training with Bounded Staleness}
\label{alg3}
\end{figure}
\begin{table*}[t]
\small
\begin{center}
\begin{tabular}[t]{|l|l|l|l|}
\hline
Setting & For $j \in N_{in}(i)$ & For $i \in N_{in}(j)$ & For arbitrary $(i,j)$ \\
\hline
Standard decentralized & $1$ & $length(Path_{j \rightarrow i})$ & $length(Path_{j \rightarrow i})$ \\
\hline
Bounded staleness & $s+1$ & $(s+1) \times length(Path_{j \rightarrow i})$ & $(s+1) \times length(Path_{j \rightarrow i})$ \\
\hline
Backup worker & $\infty$ & $\infty$ & $\infty$ \\
\hline
Hybrid & $\infty$ & $\infty$ & $\infty$ \\
\hline
Using \textsc{notify-ack} & $1$ & $2$ & $min(length(Path_{j \rightarrow i}),2 \times length(Path_{i \rightarrow j}))$ \\
\hline
Using token queues & $b_0$ (varied) & $max\_ig$ & $min(b_0 \times length(Path_{j \rightarrow i}),max\_ig \times length(Path_{i \rightarrow j}))$ \\
\hline
\end{tabular}
\caption{Theoretical upper-bound on the iteration gap $Iter(i)-Iter(j)$ for various settings. $b_0$ is varied according to the original setting to which token queues are applied, e.g. $b_0=1$ for standard decentralized setting and $b_0=s+1$ for bounded staleness. For backup worker and the hybrid setting, the original bound is $\infty$, therefore $b_0$ can only be derived from the last column, which gives $b_0 = max\_ig \cdot length(Path_{i \rightarrow j})$. However, no matter the setting to which token queues are applied, the maximum number of tokens in a token queue is always $TokenQ(i \rightarrow j).size() \leq max\_ig \cdot (length(Path_{i \rightarrow j})+1)$.
\label{tbl:ig_ubnd}}
\end{center}
\end{table*} |
\label{intro}
Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks~\citep{wang2012baselines}. However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.
Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec~\citep{mikolov2013efficient}, by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space.
The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.
Paragraph Vectors~\citep{le2014distributed} generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation~\citep{blei2003latent}, on various text understanding tasks~\citep{dai2015document}. However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions;
and 2) it is expensive to generate vector representations for unseen documents at test time.
We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC{}), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain a substantial amount of the syntactic and semantic meaning of a phrase or a sentence~\citep{mikolov2013linguistic}. For example, vec(``Russia'') + vec(``river'') is
close to vec(``Volga River'')~\citep{mikolov2013distributed}, and vec(``king'') - vec(``man'') + vec(``woman'') is close to vec(``queen'')~\citep{mikolov2013linguistic}. In Doc2VecC{}, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form document representations~\citep{socher2013recursive,mesnil2014ensemble}, Doc2VecC{} ensures that a meaningful document representation can be formed by averaging the word embeddings \textbf{during learning}. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.
Doc2VecC{} has several desirable properties: 1. The model complexity of Doc2VecC{} is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC{} resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. Vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC{} matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.
\section{Related Works and Notations}
Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants~\citep{salton1988term}, language model based methods~\citep{croft2013language,mikolov2010recurrent,kim2015character}, topic models~\citep{deerwester1990indexing,blei2003latent}, Denoising Autoencoders and its variants~\citep{vincent2008extracting,chen2012marginalized}, and distributed vector representations~\citep{mesnil2014ensemble,le2014distributed,kiros2015skip}. Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN~\citep{zhang2015text} or LSTM based approaches~\citep{tai2015improved,dai2015semi}.
In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models~\citep{mikolov2013efficient}. In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:
\begin{itemize}[leftmargin=*]
\item[] ${\cal D} = \{D_1, \cdots, D_n\}$: a training corpus of size $n$, in which each document $D_i$ contains a variable-length sequence of words $w_i^1, \cdots, w_i^{T_i}$;
\item[] $V$: the vocabulary used in the training corpus, of size $v$;
\item[] $\mathbf{x} \in {\cal R}^{v\times 1}$: BoW of a document, where $x_j = 1$ iff word $j$ appears in the document;
\item[] $\mathbf{c}^t \in {\cal R}^{v\times 1}$: BoW of the local context $w^{t-k}, \cdots, w^{t-1}, w^{t+1}, \cdots, w^{t+k}$ at the target position $t$. $c_j^t = 1$ iff word $j$ appears within the sliding window of the target;
\item[] $\mathbf{U}\in {\cal R}^{h\times v}$: the projection matrix from the input space to a hidden space of size $h$. We use $\mathbf{u}_w$ to denote the column in $\mathbf{U}$ for word $w$, i.e., the ``input'' vector of word $w$;
\item[] $\mathbf{V}^\top\in{\cal R}^{v\times h}$: the projection matrix from the hidden space to the output. Similarly, we use $\mathbf{v}_w$ to denote the column in $\mathbf{V}$ for word $w$, i.e., the ``output'' vector of word $w$.
\end{itemize}
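As a concrete illustration of the notations above, the BoW vector $\mathbf{x}$ and the local-context vector $\mathbf{c}^t$ can be sketched as follows (the toy vocabulary, document, and function names here are purely illustrative, not part of any released implementation):

```python
import numpy as np

def bow(doc_words, vocab):
    """Binary bag-of-words vector x for a document (x_j = 1 iff word j appears)."""
    x = np.zeros(len(vocab))
    for w in doc_words:
        if w in vocab:
            x[vocab[w]] = 1.0
    return x

def context_bow(doc_words, t, k, vocab):
    """Binary BoW c^t of the local context within a window of size k around position t."""
    window = doc_words[max(0, t - k):t] + doc_words[t + 1:t + k + 1]
    return bow(window, vocab)

vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
doc = ["the", "cat", "sat", "the", "mat"]
x = bow(doc, vocab)                 # all four vocabulary words appear
c = context_bow(doc, 2, 1, vocab)   # context of "sat" with k=1: {"cat", "the"}
```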
\paragraph{Word2Vec.}
\label{sec:word2vec}
Word2Vec employs a neural network architecture consisting of an input layer, a projection layer parameterized by the matrix $\mathbf{U}$, and an output layer parameterized by $\mathbf{V}^\top$. It defines the probability of observing the target word $w^t$ in a document $D$ given its local context $\mathbf{c}^t$ as
$$P(w^t|\mathbf{c}^t) = \frac{\exp(\mathbf{v}_{w^t}^\top \mathbf{U} \mathbf{c}^t)}{\sum_{w' \in V}\exp(\mathbf{v}_{w'}^\top \mathbf{U} \mathbf{c}^t)}$$
The word vectors are then learned to maximize the log likelihood of observing the target word at each position of the document. Various techniques~\citep{mitchell2010composition, zanzotto2010estimating, yessenalina2011compositional, grefenstette2013multi, socher2013recursive,kusner2015word} have been studied to generate vector representations of documents from word embeddings, among which the simplest approach is to use a weighted average of word embeddings. Similarly, our method forms a document representation by averaging the word embeddings of all the words in the document. In contrast, as our model encodes the compositionality of words in the learned word embeddings, no heuristic weighting is required at test time.
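The CBoW softmax above can be sketched in a few lines of numpy (a minimal illustration with randomly initialized matrices; all variable names are ours, not from any released Word2Vec implementation):

```python
import numpy as np

def cbow_prob(V_out, U, c):
    """P(w | c) for every word in the vocabulary under the CBoW softmax.

    V_out: v x h output matrix (rows are the output vectors v_w),
    U: h x v input projection matrix, c: v-dim binary BoW of the local context.
    """
    scores = V_out @ (U @ c)    # one score per candidate target word
    scores -= scores.max()      # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

rng = np.random.default_rng(0)
v, h = 6, 4
U = rng.normal(size=(h, v))
V_out = rng.normal(size=(v, h))
c = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # two context words active
p = cbow_prob(V_out, U, c)
```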
\paragraph{Paragraph Vectors.}
\label{sec:doc2vec}
Paragraph Vectors, on the other hand, explicitly learns a document vector alongside the word embeddings. It introduces another projection matrix $\mathbf{D} \in {\cal R}^{h\times n}$. Each column of $\mathbf{D}$ acts as a memory of the global topic of the corresponding document.
It then defines the probability of observing the target word $w^t$ in a document $D$ given its local context $\mathbf{c}^t$ as
$$P(w^t|\mathbf{c}^t, \mathbf{d}) = \frac{\exp(\mathbf{v}_{w^t}^\top (\mathbf{U} \mathbf{c}^t + \mathbf{d}))}{\sum_{w' \in V}\exp(\mathbf{v}_{w'}^\top (\mathbf{U} \mathbf{c}^t + \mathbf{d}))}$$
where $\mathbf{d} \in \mathbf{D}$ is the vector representation of the document. As we can see from this formula, the complexity of Paragraph Vectors grows not only with the size of the vocabulary, but also with the size of the training corpus. While we can reasonably limit the size of a vocabulary to be within a million for most datasets, the size of a training corpus can easily go to billions. What is more concerning is that, in order to come up with the vector representations of unseen documents, we need to perform an expensive inference by appending more columns to $\mathbf{D}$ and running gradient descent on $\mathbf{D}$ while fixing the other parameters of the learned model.
\section{Method}
\label{method}
Several works~\citep{mikolov2013distributed,mikolov2013linguistic} showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure~\ref{fig:architecture} illustrates the new model architecture.
\begin{figure}[h]
\vspace{-0.1in}
\centering
\includegraphics[width=9cm, height=4.8cm]{doc2vec2.pdf}
\vspace{-0.1in}
\caption{A new framework for learning document vectors.}
\label{fig:architecture}
\vspace{-0.1in}
\end{figure}
Similar to Word2Vec or Paragraph Vectors, Doc2VecC{} consists of an input layer, a projection layer as well as an output layer to predict the target word, ``ceremony'' in this example. The embeddings of neighboring words (``opening'', ``for'', ``the'') provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC{} represents each document as an average of the embeddings of words randomly sampled from the document (``performance'' at position $p$, ``praised'' at position $q$, and ``brazil'' at position $r$).
\cite{huang2012improving} also proposed the idea of using an average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing a significant portion of its words, and represent the document using only the embeddings of the words that remain. This corruption mechanism offers us great speedup during training, as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.
Here we describe the stochastic process we used to generate a global context at each update.
The global context, which we denote as $\tilde\mathbf{x}$, is generated through an unbiased \textit{mask-out/drop-out} corruption, in which we randomly overwrite each dimension of the original document $\mathbf{x}$ with zero with probability $q$. To make the corruption unbiased, we set the uncorrupted dimensions to $1/(1 - q)$ times their original values. Formally,
\begin{equation}
\tilde x_{d}=
\begin{cases}
0, & \text{with probability } q\\
\frac{x_{d}}{1-q}, & \text{otherwise}
\end{cases}
\label{eq:dropout}
\end{equation}
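The corruption of eq.(\ref{eq:dropout}) is straightforward to implement; a minimal sketch (function name ours), together with a sanity check that the corruption is indeed unbiased:

```python
import numpy as np

def corrupt(x, q, rng):
    """Unbiased mask-out/drop-out corruption: each dimension of x is zeroed
    with probability q; surviving dimensions are rescaled by 1/(1-q)."""
    keep = rng.random(x.shape) >= q
    return np.where(keep, x / (1.0 - q), 0.0)

rng = np.random.default_rng(0)
x = np.ones(10000)
x_tilde = corrupt(x, q=0.9, rng=rng)
# Unbiasedness: E[x_tilde] = x, so the sample mean should be close to 1.
```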
Doc2VecC{} then defines the probability of observing a target word $w^t$ given its local context $\mathbf{c}^t$ as well as the global context $\tilde\mathbf{x}$ as
\begin{equation}
\label{eq:pp}
P(w^t|\mathbf{c}^t, \tilde\mathbf{x})= \frac{\exp\left(\mathbf{v}_{w^t}^\top \left(\overbrace{\mathbf{U}\mathbf{c}^t}^{\small\mbox{local context}}+\overbrace{\frac{1}{T}\mathbf{U}\tilde\mathbf{x}}^{\small\mbox{global context}}\right)\right)}{\sum_{w' \in V} \exp\left(\mathbf{v}_{w'}^\top \left(\mathbf{U}\mathbf{c}^t+\frac{1}{T}\mathbf{U}\tilde\mathbf{x}\right)\right)}
\end{equation}
Here $T$ is the length of the document.
Computing this probability exactly is impractical; instead, we approximate it with negative sampling~\citep{mikolov2013efficient}.
\begin{eqnarray}
f(w, \mathbf{c}, \tilde\mathbf{x}) &\equiv& \log P(w|\mathbf{c}, \tilde\mathbf{x}) \nonumber\\
&\approx& \log\sigma\left(\mathbf{v}_{w}^\top (\mathbf{U}\mathbf{c}+\frac{1}{T}\mathbf{U}\tilde\mathbf{x})\right) + \sum_{w'\sim P_{v}}\log\sigma\left(-\mathbf{v}_{w'}^\top (\mathbf{U}\mathbf{c}+\frac{1}{T}\mathbf{U}\tilde\mathbf{x})\right) \label{eq:ns}
\end{eqnarray}
Here $P_v$ stands for a uniform distribution over the terms in the vocabulary. The two projection matrices $\mathbf{U}$ and $\mathbf{V}$ are then learned to minimize the loss:
\begin{equation}
\ell = - \sum_{i=1}^n \sum_{t = 1}^{T_i} f(w_i^t, \mathbf{c}_i^t, \tilde\mathbf{x}_i^t)~\label{eq:llh}
\end{equation}
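The per-instance negative-sampling objective of eq.(\ref{eq:ns}) can be sketched as follows (a toy illustration with random parameters and a fixed set of negative words; names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ns_loglik(v_w, V_neg, U, c, x_tilde, T):
    """Negative-sampling approximation of log P(w | c, x_tilde):
    v_w is the output vector of the target word, V_neg the output vectors
    of the sampled negative words (one per row), T the document length."""
    hidden = U @ c + (U @ x_tilde) / T           # local + global context
    pos = np.log(sigmoid(v_w @ hidden))          # target word term
    neg = np.log(sigmoid(-(V_neg @ hidden))).sum()  # negative samples
    return pos + neg

rng = np.random.default_rng(1)
v, h, T = 8, 5, 20
U = rng.normal(size=(h, v)) * 0.1
V_out = rng.normal(size=(v, h)) * 0.1
c = np.zeros(v); c[[1, 2]] = 1.0                 # two context words
x_tilde = rng.random(v)                          # corrupted global context
ll = ns_loglik(V_out[3], V_out[[0, 5]], U, c, x_tilde, T)
```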
Given the learned projection matrix $\mathbf{U}$, we then represent each document simply as an average of the embeddings of the words in the document,
\begin{equation}
\mathbf{d} = \frac{1}{T}\sum_{w \in D} \mathbf{u}_{w}.
\label{eq:test}
\end{equation}
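In code, this test-time representation is just a column average over the learned input embeddings (a minimal sketch with a toy $\mathbf{U}$; names ours):

```python
import numpy as np

def doc_vector(U, doc_indices):
    """Test-time document representation: average of the learned input
    embeddings (columns of U) of all words in the document."""
    return U[:, doc_indices].mean(axis=1)

U = np.array([[1.0, 3.0, 5.0],
              [2.0, 4.0, 6.0]])      # h=2, v=3
d = doc_vector(U, [0, 2, 2])         # document made of word ids 0, 2, 2
```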
We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.(\ref{eq:dropout}) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.
\subsection{Corruption as data-dependent regularization}
\label{sec:adaptive}
We approximate the log likelihood for each instance $f(w, \mathbf{c}, \tilde\mathbf{x})$ in eq.(\ref{eq:llh}) with its Taylor expansion with respect to $\tilde\mathbf{x}$ up to the second-order~\citep{van2013learning,wager2013dropout,chen2014marginalized}. Concretely, we choose to expand at the mean of the corruption $\mu_\mathbf{x} = \mathbb{E}_{p(\tilde\mathbf{x}|\mathbf{x})}[\tilde\mathbf{x}] $:
$$f(w, \mathbf{c}, \tilde\mathbf{x}) \approx f(w, \mathbf{c}, \mu_\mathbf{x} ) + (\tilde\mathbf{x} - \mu_\mathbf{x})^\top \nabla_{\tilde\mathbf{x}} f + \frac{1}{2} (\tilde\mathbf{x} - \mu_\mathbf{x} )^\top \nabla_{\tilde{\mathbf{x}}}^2 f (\tilde\mathbf{x} - \mu_\mathbf{x} ) $$
where $\nabla_{\tilde\mathbf{x}}f$ and $\nabla_{\tilde\mathbf{x}}^2f$ are the first-order derivative (i.e., gradient) and second-order derivative (i.e., Hessian) of the log likelihood with respect to $\tilde\mathbf{x}$. Expansion at the mean $\mu_{\mathbf{x}}$ is crucial, as shown in the following steps. Let us assume that for each instance, we sample the global context $\tilde\mathbf{x}$ infinitely many times, and thus compute the expected log likelihood with respect to the corrupted $\tilde\mathbf{x}$:
$$\mathbb{E}_{p(\tilde\mathbf{x}|\mathbf{x})}[f(w, \mathbf{c}, \tilde\mathbf{x})] \approx f(w, \mathbf{c}, \mu_\mathbf{x}) + \frac{1}{2}\mbox{tr}\left(\mathbb{E}[(\tilde\mathbf{x} - \mu_\mathbf{x})(\tilde\mathbf{x} - \mu_\mathbf{x})^\top]\nabla_{\tilde\mathbf{x}}^2 f\right)$$
The linear term disappears as $\mathbb{E}_{p(\tilde\mathbf{x}|\mathbf{x})}[\tilde\mathbf{x} - \mu_{\mathbf{x}}] = 0$. We substitute in $\mathbf{x}$ for the mean $\mu_\mathbf{x}$ of the corrupting distribution (unbiased corruption) and the matrix $\Sigma_{\mathbf{x}} = \mathbb{E}[(\tilde\mathbf{x}-\mu_\mathbf{x})(\tilde\mathbf{x}-\mu_\mathbf{x})^\top]$ for the variance, and obtain
\begin{equation}
\mathbb{E}_{p(\tilde\mathbf{x}|\mathbf{x})}[f(w, \mathbf{c}, \tilde\mathbf{x})] \approx f(w, \mathbf{c}, \mathbf{x}) + \frac{1}{2}\mbox{tr}\left(\Sigma_{\mathbf{x}}\nabla_{\tilde\mathbf{x}}^2 f\right)~\label{eq:taylor}
\end{equation}
As each word in a document is corrupted independently of the others, the variance matrix $\Sigma_\mathbf{x}$ reduces to a diagonal matrix whose $j^{th}$ element equals $\frac{q}{1-q}x_j^2$. As a result, we only need to compute the diagonal terms of the Hessian matrix $\nabla^2_{\tilde\mathbf{x}}f$.
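The per-dimension variance $\frac{q}{1-q}x_j^2$ follows directly from the two-point corruption distribution; a quick Monte-Carlo sanity check (all values here are illustrative):

```python
import numpy as np

# Check that Var(x_tilde_j) = q/(1-q) * x_j^2 under the mask-out/drop-out
# corruption: x_tilde_j = 0 with prob. q, x_j/(1-q) otherwise.
rng = np.random.default_rng(0)
q, x_j, n = 0.9, 2.0, 200000
samples = np.where(rng.random(n) >= q, x_j / (1 - q), 0.0)
empirical_var = samples.var()
analytic_var = q / (1 - q) * x_j ** 2   # = 36 for these values
```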
The $j^{th}$ dimension of the Hessian's diagonal evaluated at the mean $\mathbf{x}$ is given by
$$\frac{\partial^2 f}{\partial x_j^2} = - \sigma_{w, \mathbf{c}, \mathbf{x}} (1 - \sigma_{w, \mathbf{c}, \mathbf{x}} )(\frac{1}{T}\mathbf{v}_{w}^\top\mathbf{u}_j)^2 - \sum_{w'\sim P_v}\sigma_{w', \mathbf{c}, \mathbf{x}} (1 - \sigma_{w', \mathbf{c}, \mathbf{x}} )(\frac{1}{T}\mathbf{v}_{w'}^\top\mathbf{u}_j)^2$$
Plugging the Hessian matrix and the variance matrix back into eq.(\ref{eq:taylor}), and then back into the loss defined in eq.(\ref{eq:llh}), we can see that Doc2VecC{} intrinsically minimizes
\begin{equation}
\ell = -\sum_{i=1}^n\sum_{t=1}^{T_i} f(w_i^t, \mathbf{c}_i^t, \mathbf{x}_i) + \frac{q}{1-q} \sum_{j=1}^v R(\mathbf{u}_j)
\end{equation}
Each $f(w_i^t, \mathbf{c}_i^t, \mathbf{x}_i)$ in the first term measures the log likelihood of observing the target word $w_i^t$ given its local context $\mathbf{c}_i^t$ and the document vector $\mathbf{d}_i = \frac{1}{T}\mathbf{U}\mathbf{x}_i$. \textit{As such, Doc2VecC{} enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context.}
The second term here is a data-dependent regularization. The regularization on the embedding $\mathbf{u}_j$ of each word $j$ takes the following form,
$$R(\mathbf{u}_j) \propto \sum_{i=1}^n\sum_{t=1}^{T_i} x_{ij}^2\left[\sigma_{w_i^t, \mathbf{c}_i^t, \mathbf{x}_i} (1 - \sigma_{w_i^t, \mathbf{c}_i^t, \mathbf{x}_i})(\frac{1}{T}\mathbf{v}_{w_i^t}^\top\mathbf{u}_j)^2 + \sum_{w'\sim P_{v}}\sigma_{w', \mathbf{c}_i^t, \mathbf{x}_i} (1 - \sigma_{w', \mathbf{c}_i^t, \mathbf{x}_i} )(\frac{1}{T}\mathbf{v}_{w'}^\top\mathbf{u}_j)^2\right] $$
where $\sigma_{w, \mathbf{c}, \mathbf{x}} = \sigma(\mathbf{v}_w^\top(\mathbf{U} \mathbf{c} + \frac{1}{T}\mathbf{U} \mathbf{x}))$ prescribes the confidence of predicting the target word $w$ given its neighboring context $\mathbf{c}$ as well as the document vector $\mathbf{d} = \frac{1}{T}\mathbf{U}\mathbf{x}$.
Closely examining $R(\mathbf{u}_j)$ leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily. A word $j$ that frequently appears across the training corpus, i.e., for which $x_{ij} = 1$ often, receives a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by $\sigma_{w, \mathbf{c}, \mathbf{x}} (1 - \sigma_{w, \mathbf{c}, \mathbf{x}})$, which is small if $\sigma_{w, \mathbf{c}, \mathbf{x}} \rightarrow 1 \mbox{ or } 0$. In other words, if $\mathbf{u}_j$ is critical to a confident prediction $\sigma_{w, \mathbf{c}, \mathbf{x}}$ when it is active, then the regularization is diminished. A similar effect was observed for dropout training of logistic regression models~\citep{wager2013dropout} and denoising autoencoders~\citep{chen2014marginalized}.
\section{Experiments}
\label{exp}
We evaluate Doc2VecC{}
on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017.
\subsection{Baselines}
We compare against the following document representation baselines: \textbf{bag-of-words (BoW)}; \textbf{Denoising Autoencoders (DEA)~\citep{vincent2008extracting}}, a representation learned by reconstructing the original document $\mathbf{x}$ from its corrupted version $\tilde\mathbf{x}$. SDAs have been shown to be the state-of-the-art for sentiment analysis tasks~\citep{glorot2011domain}. We used the Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale the algorithm up to a large vocabulary, we only take into account the non-zero elements of $\mathbf{x}$ in the reconstruction error and employ negative sampling for the remainder; \textbf{Word2Vec~\citep{mikolov2013efficient}+IDF}, a representation generated through a weighted average of word vectors learned using Word2Vec; \textbf{Doc2Vec~\citep{le2014distributed}}; \textbf{Skip-thought Vectors~\citep{kiros2015skip}}, a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include \textbf{RNNLM~\citep{mikolov2010recurrent}}, a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to \textbf{LSTM-based methods}~\citep{tai2015improved} that have been reported on this dataset.
\subsection{Sentiment analysis}
For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split~\citep{maas2011learning}: 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear less than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.
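The frequency cutoff used to build the vocabulary can be sketched as follows (a minimal illustration; the function name and toy corpus are ours):

```python
from collections import Counter

def build_vocab(docs, min_count=10):
    """Vocabulary of words appearing at least min_count times in the corpus,
    mapped to contiguous integer ids."""
    counts = Counter(w for doc in docs for w in doc)
    words = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(words)}

# Toy corpus: "good" and "movie" appear 12 times, "obscure" only 3 times.
docs = [["good", "movie"]] * 12 + [["obscure"]] * 3
vocab = build_vocab(docs, min_count=10)
```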
\begin{table}
\caption{Classification error of a linear classifier trained on various document representations on the Imdb dataset. }
\label{tbl:sentiment}
\centering
\begin{tabular}{|c||c|c|}
\hline
Model & Error rate \% (include test) & Error rate \% (exclude test)\\
\hline
\hline
Bag-of-Words (BOW) & 12.53 & 12.59\\
\hline
RNN-LM & 13.59 & 13.59\\
\hline
Denoising Autoencoders (DEA) & 11.58 & 12.54\\
\hline
Word2Vec + AVG & 12.11 & 12.69\\
Word2Vec + IDF & 11.28 & 11.92\\
\hline
Paragraph Vectors & 10.81 & 12.10 \\
\hline
Skip-thought Vectors & - & 17.42 \\
\hline
Doc2VecC & \textbf{10.48} & \textbf{11.70} \\
\hline
\end{tabular}
\end{table}
\textbf{Setup.} We test the various representation learning algorithms under two settings: one follows the same protocol proposed in~\citep{mesnil2014ensemble}, where the representation is learned using all the available data, including the test set; in the other, the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM)~\citep{fan2008liblinear} is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model\footnote{available at https://github.com/ryankiros/skip-thoughts} trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.
\textbf{Accuracy.}
Comparing the two columns in Table~\ref{tbl:sentiment}, we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC{} achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BOW representation by 15\%.
In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form the document representation, Doc2VecC{} naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC{} reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure~\ref{fig:architecture}. By including the context words, Doc2VecC{} allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to the other methods. We hypothesize that this is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is $296.5$, much longer than the ones used for training and testing in the original paper, which are on the order of 10. As noted in~\citep{tai2015improved}, the performance of LSTM-based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
\begin{table}
\caption{Learning time and representation generation time required by different representation learning algorithms. }
\label{tbl:time}
\centering
\begin{tabular}{|c||c|c|}
\hline
Model & Learning time & Generation time\\
\hline
\hline
Denoising Autoencoders & 3m 23s & 7s \\
Word2Vec + IDF & 2m 33s & 7s\\
Paragraph Vectors & 4m 54s & 4m 17s\\
Skip-thought & ~2h & ~2h \\
Doc2VecC & 4m 30s & 7s \\
\hline
\end{tabular}
\label{tbl:sentimenttime}
\end{table}
\textbf{Time.} Table~\ref{tbl:sentimenttime} summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train; Denoising Autoencoders and Doc2VecC{} come second. For Doc2VecC{}, the number of parameters that need to be back-propagated in each update is increased by the number of surviving words in $\tilde\mathbf{x}$. We found that both models are not sensitive to the corruption rate $q$ in the noise model. Since the learning time decreases with a higher corruption rate, we used $q=0.9$ throughout the experiments. Paragraph Vectors takes longer to train, as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC{} all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time\footnote{As reported in the original paper, training of the skip-thought vector model on the book corpus dataset takes around 2 weeks on GPU.} reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents; similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.
\begin{table}
\caption{Words with embeddings closest to 0 learned by different algorithms. }
\label{tbl:adaptive}
\centering
\hspace{-0.1in}
\begin{tabular}{|c||p{12cm}|}
\hline
Word2Vec & harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heavens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103) \\
\hline
ParaVectors & harp(118) dense(108) reels(115) fireworks(101) its'(103) unnoticed(112) pony(102) fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103) \\
\hline
Doc2VecC & ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570) that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)\\
\hline
\end{tabular}
\end{table}
\textbf{Data dependent regularization.} As explained in Section~\ref{sec:adaptive}, the corruption introduced in Doc2VecC{} acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine this effect. We used a cutoff of 100 in this experiment. Table~\ref{tbl:adaptive} lists the words having the smallest $l_2$ norm of embeddings found by the different algorithms. The number inside the parentheses after each word is the number of times this word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC{} manages to clamp down the representation of words that frequently appear in the training set but are uninformative, such as symbols and stop words.
\textbf{Subsampling frequent words.} Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in~\citep{mikolov2013distributed} to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG, as it is the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from $12.1\%$ to $13.2\%$. Doc2VecC{}, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.
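For reference, the subsampling rule of \cite{mikolov2013distributed} discards each occurrence of word $w$ with probability $1 - \sqrt{t/f(w)}$, where $f(w)$ is the word's relative frequency and $t$ a threshold (the value $10^{-5}$ below is a commonly used default, not necessarily the one used in our experiments):

```python
import math

def discard_prob(freq, t=1e-5):
    """Probability of discarding an occurrence of a word with relative
    frequency `freq`, per the subsampling rule of Mikolov et al. (2013)."""
    return max(0.0, 1.0 - math.sqrt(t / freq))

p_frequent = discard_prob(0.05)   # very frequent word: almost always dropped
p_rare = discard_prob(1e-6)       # rare word: never dropped
```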
\subsection{Word analogy}
In Table~\ref{tbl:adaptive}, we demonstrated that the corruption model introduced in Doc2VecC{} dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC{} to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by~\cite{mikolov2013efficient}. The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by the different methods. Please refer to the original paper for more details on the evaluation protocol.
We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.
\begin{figure}
\subfloat[h=50]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
ybar,
bar width=0.2cm,
enlargelimits=0.15,
legend style={at={(0.5, 1.15)},
anchor=north,legend columns=-1},
ylabel={Accuracy (\%)},
xlabel={Number of paragraphs used for learning},
ymax={60},
symbolic x coords={1M, 2M, 4M, 8M, 15M},
xtick=data,
nodes near coords,
every node near coord/.append style={font=\tiny},
nodes near coords align={vertical},
]
\addplot coordinates {(1M, 3.8) (2M, 6.1) (4M, 8.3) (8M, 9.1) (15M, 13.3)};
\addplot coordinates {(1M, 18.7) (2M, 26.4) (4M, 32.7) (8M, 36.1) (15M, 38.9)};
\addplot coordinates {(1M, 20.3) (2M, 28.1) (4M, 36.4) (8M, 42.5) (15M, 46.7)};
\legend{ParagraphVectors, Word2Vec, Doc2VecC}
\end{axis}
\end{tikzpicture}
}
\quad\quad
\subfloat[h=100]{
\begin{tikzpicture}[scale=0.75]
\begin{axis}[
ybar,
bar width=0.2cm,
enlargelimits=0.15,
ymax={60},
legend style={at={(0.5,1.15)},
anchor=north,legend columns=-1},
xlabel={Number of paragraphs used for learning},
symbolic x coords={1M, 2M, 4M, 8M, 15M},
xtick=data,
nodes near coords,
every node near coord/.append style={font=\tiny},
nodes near coords align={vertical},
]
\addplot coordinates {(1M, 5.1) (2M, 7.5) (4M, 10.9) (8M, 10.2) (15M, 10.2)};
\addplot coordinates {(1M, 23.6) (2M, 34.7) (4M, 42.4) (8M, 48.2) (15M, 50.7)};
\addplot coordinates {(1M, 24.3) (2M, 34.1) (4M, 44.1) (8M, 52.6) (15M, 58.2)};
\legend{ParagraphVectors, Word2Vec, Doc2VecC}
\end{axis}
\end{tikzpicture}
}
\caption{Accuracy on subset of the Semantic-Syntactic Word Relationship test set. Only questions containing words from the most frequent 30k words are included in the test.}
\end{figure}
We observe similar trends as in~\cite{mikolov2013efficient}. Increasing the embedding dimensionality as well as the training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC{} produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to a $20\%$ uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly badly on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW~\citep{le2014distributed} model architecture proposed in the original work, which completely removes the word embedding layers, performs comparably to the distributed memory version.
\begin{table}
\small
\begin{tabular}{|c|c|c||c|c|c|}
\hline
Semantic questions & Word2Vec & Doc2VecC & Syntactic questions & Word2Vec & Doc2VecC \\
\hline
\hline
capital-common-countries & 73.59 & \textbf{81.82} & gram1-adjective-to-adverb & 19.25 & \textbf{20.32}\\
capital-world &67.94 & \textbf{77.96} & gram2-opposite & 14.07 & \textbf{25.54} \\
currency & 17.14 & 12.86 & gram3-comparative & 60.21 & \textbf{74.47} \\
city-in-state & 34.49 & \textbf{42.86} & gram4-superlative & 52.87 & \textbf{55.40} \\
family & 68.71 & 64.62 & gram5-present-participle & 56.34 & \textbf{65.81} \\
& & & gram6-nationality-adjective & 88.71 & \textbf{91.03} \\
& & & gram7-past-tense & 47.05 & \textbf{51.86} \\
& & & gram8-plural & 50.28 & \textbf{61.27} \\
& & & gram9-plural-verbs & 25.38 & \textbf{39.69} \\
\hline
\end{tabular}
\caption{Top 1 accuracy on the 5 type of semantics and 9 types of syntactic questions.}
\label{tbl:wordanalogy}
\end{table}
In Table~\ref{tbl:wordanalogy}, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC{} on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC{} significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.
\subsection{Document Classification}
For the document classification task, we use a subset of the wikipedia dump, which contains over 300,000 wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, and politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body text (the second paragraph) was extracted from each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents are used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size $107,691$.
\begin{table}
\caption{Classification error (\%) of a linear classifier trained on various document representations on the Wikipedia dataset. }
\label{tbl:wiki}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
Model & BOW & DEA & Word2Vec + AVG & Word2Vec + IDF & ParagraphVectors & Doc2VecC\\
\hline
\hline
$h = 100$ & 36.03 & 32.30 & 33.2 & 33.16 & 35.78& \textbf{31.92} \\
$h = 200 $ & 36.03 & 31.36 & 32.46 & 32.48& 34.92 & \textbf{30.84}\\
$h = 500 $ & 36.03 & 31.10 & 32.02 & 32.13& 33.93 & \textbf{30.43}\\
$h = 1000 $ & 36.03 & 31.13 & 31.78 & 32.06 & 33.02 & \textbf{30.24}\\
\hline
\end{tabular}
\end{table}
Table~\ref{tbl:wiki} summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing the representation size. Across all sizes of representations, Doc2VecC{} outperforms the existing algorithms by a significant margin. In fact, Doc2VecC{} can achieve the same or better performance with a much smaller representation vector.
\begin{figure}%
\centering
\subfloat[Doc2Vec]{{\includegraphics[width=0.49\textwidth]{embedding_doc2vec.png} }}%
\subfloat[Doc2VecC{}]{{\includegraphics[width=0.49\textwidth]{embedding.png} }}%
\caption{Visualization of document vectors on Wikipedia dataset using t-SNE.}%
\label{fig:comp}%
\end{figure}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-0.3in}
\centering
\includegraphics[width=7.2cm, height=5.4cm]{coarse.png}
\caption{Visualization of Wikipedia Doc2VecC{} vectors using t-SNE.}
\label{fig:coarse}
\vspace{-0.2in}
\end{wrapfigure}
Figure~\ref{fig:comp} visualizes the document representations learned by Doc2Vec (left) and Doc2VecC{} (right) using t-SNE~\citep{maaten2008visualizing}. We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC{}. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table~\ref{tbl:wiki}.
Figure~\ref{fig:coarse} visualizes the vector representation generated by Doc2VecC{} w.r.t.\ a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
\subsection{Semantic relatedness}
We test Doc2VecC{} on the SemEval 2014 Task 1: semantic relatedness SICK dataset~\citep{marelli2014semeval}. Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.
We compare Doc2VecC{} with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including bi-directional LSTM and Tree-LSTM\footnote{The word representation was initialized using
publicly available 300-dimensional Glove vectors trained on 840 billion tokens of Common Crawl data.} trained from scratch on this dataset, and skip-thought vectors, which were learned from a large book corpus\footnote{The dataset contains 11,038 books with over one billion words.}~\citep{moviebook} and produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors and train Doc2VecC{} on the larger book-corpus dataset. In contrast to the vocabulary expansion technique used in \citep{kiros2015skip} to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way:
we use the pre-trained word embeddings as an initialization, and fine-tune the word and sentence representations on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only; we did not use the relatedness scores in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we use exactly the same training and testing protocol as in \citep{kiros2015skip} to score each pair of sentences: given two sentence embeddings $\mathbf{u}_1$ and $\mathbf{u}_2$, we concatenate their component-wise product $\mathbf{u}_1 \cdot \mathbf{u}_2$ and their absolute difference $|\mathbf{u}_1 - \mathbf{u}_2|$ as the feature representation.
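For concreteness, this pair-feature construction can be sketched as follows (a minimal pure-Python illustration; the toy 3-dimensional embeddings are hypothetical):

```python
def pair_features(u1, u2):
    # Skip-thought-style features for a sentence pair: concatenation of
    # the component-wise product and the absolute difference.
    prod = [a * b for a, b in zip(u1, u2)]
    diff = [abs(a - b) for a, b in zip(u1, u2)]
    return prod + diff

# Two toy 3-dimensional sentence embeddings.
u1 = [1.0, -2.0, 0.5]
u2 = [0.5, 1.0, 0.5]
print(pair_features(u1, u2))  # [0.5, -2.0, 0.25, 0.5, 3.0, 0.0]
```

The resulting $2d$-dimensional feature vector is then fed to the relatedness classifier.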
Table~\ref{tbl:sick} summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC{} significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in \citep{socher2014grounded}, which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC{} is slightly worse than that of the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ($11.70\%$ error rate vs $17.42\%$). As we hypothesized in the previous section, while Doc2VecC{} is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (with lengths on the order of tens of words).
We would like to point out that Doc2VecC{} is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC{} on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on a GPU required by skip-thought vectors.
\begin{table}
\caption{Test set results on the SICK semantic relatedness task. The first group of results are from submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in \citep{tai2015improved}; the third group consists of LSTM-based methods reported in \citep{tai2015improved} as well as the skip-thought vectors~\citep{kiros2015skip}.}
\label{tbl:sick}
\centering
\begin{tabular}{|c||c|c|c|}
\hline
Method & Pearson's $r$ & Spearman's $\rho$ & MSE\\
\hline
\hline
Illinois-LH & 0.7993 & 0.7538 & 0.3692\\
UNAL-NLP & 0.8070 & 0.7489 & 0.3550\\
Meaning Factory & 0.8268 & 0.7721& 0.3224\\
ECNU & 0.8279 & 0.7689 & 0.3250 \\
\hline
Mean vectors (Word2Vec + avg) & 0.7577 & 0.6738 & 0.4557\\
DT-RNN \citep{socher2014grounded} & 0.7923 & 0.7319 & 0.3822\\
SDT-RNN \citep{socher2014grounded} & 0.7900 & 0.7304 & 0.3848\\
\hline
LSTM \citep{tai2015improved} & 0.8528 & 0.7911 & 0.2831\\
Bidirectional LSTM \citep{tai2015improved} & 0.8567 & 0.7966 & 0.2736\\
Dependency Tree-LSTM \citep{tai2015improved} & 0.8676 & 0.8083 & 0.2532\\
combine-skip \citep{kiros2015skip} & 0.8584 & 0.7916 & 0.2687\\
\hline
Doc2VecC & 0.8381 & 0.7621 & 0.3053\\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
We introduce a new model architecture Doc2VecC{} for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC{} intrinsically makes sure document representation generated by averaging word embeddings capture semantics of document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC{} outperforms not only in testing efficiency, but also in the expressiveness of the generated representations.
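To make the representation step concrete, the following minimal pure-Python sketch (an illustration only, not the authors' implementation; the two-word vocabulary and the dropout-style corruption rate are hypothetical) shows a document vector formed as an average of word embeddings, with random word removal playing the role of the corruption applied during training:

```python
import random

def doc_vector(doc_words, embeddings, dim, keep_prob=1.0, rng=None):
    """Represent a document as the average of its word embeddings.
    With keep_prob < 1, words are randomly dropped, mimicking the
    corruption used during training; at test time keep_prob = 1.0."""
    rng = rng or random.Random(0)
    kept = [w for w in doc_words
            if w in embeddings and rng.random() < keep_prob]
    if not kept:
        return [0.0] * dim
    vec = [0.0] * dim
    for w in kept:
        for i, x in enumerate(embeddings[w]):
            vec[i] += x
    return [x / len(kept) for x in vec]

embeddings = {"good": [1.0, 0.0], "movie": [0.0, 1.0]}
print(doc_vector(["good", "movie"], embeddings, dim=2))  # [0.5, 0.5]
```

At test time a document is thus encoded by a single pass over its words, which is why inference is so cheap.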
\section{Introduction}
When $R$ is a commutative ring, the group $K_1(R)$ is an abelian group generated by invertible matrices with entries in $R$.
In particular, when $R$ is a field, it is well-known that the determinant map $\det : K_1(R) \rightarrow R^\times$ is an isomorphism.
An important consequence of this fact is that $(AB)=(A)+(B)$; that is, the product $A B$ of two invertible matrices $A$ and $B$
represents the element of $K_1(R)$ obtained by adding the two elements
represented by the matrices $A$ and $B$, respectively, since
$\det \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \det A \, \det B$. In the present article, we endeavor to generalize this property
to the case of commuting matrices in terms of motivic cohomology. The motivic chain complex proposed
by Goodwillie and Lichtenbaum, recalled below, is perfectly suited to this purpose.
In \cite{MR96h:19001}, the chain complex of Goodwillie and Lichtenbaum computing the motivic cohomology of a regular local ring $R$
is defined to be the chain complex associated to the simplicial abelian group $d \mapsto
K_0(R\Delta^d,\, \G_m^{\wedge t})$, together with a shift of degree by $-t$.
Here, $K_0(R\Delta^d,\, \G_m^{\wedge t})$ is the Grothendieck group of the exact category
of projective $R$-modules with $t$ commuting automorphisms factored by the subgroup generated by
classes of the objects one of whose $t$ automorphisms is the identity map.
The motivic cohomology of a regular scheme $X$
is given by hypercohomology of the sheafification of the complex above.
Walker showed, in Theorem 6.5 of \cite{MR2052190}, that it agrees with
motivic cohomology given by Voevodsky and thus various other
definitions of motivic cohomology for smooth schemes over
an algebraically closed field.
In \cite{MR96h:19001}, Grayson showed that a related chain complex
$\Omega^{-t}|d \mapsto K_0^\oplus (R\Delta^d,\, \G_m^{\wedge
t})|$, which uses direct-sum Grothendieck groups instead, arises
as the consecutive quotients in the $K$-theory space $K(R)$ when $R$
is a regular noetherian ring and so gives rise to a spectral
sequence converging to $K$-theory. Suslin, in \cite{MR2024054}, showed
that Grayson's motivic cohomology complex is equivalent to the
other definitions of motivic complex and consequently settled the problem of a motivic
spectral sequence. See also \cite{MR2181820} for an overview.
The main results of this article are multilinearity and skew-symmetry properties for the symbols of Goodwillie and Lichtenbaum
in motivic cohomology.
First, we establish them for $H^n_{\M} \bigl(\Spec k , {\mathbb Z}(n) \bigr)$ of a field $k$ in Corollary \ref{multilin-l-l}.
We also give a direct proof of Nesterenko-Suslin's theorem (\cite{MR992981}) that
the motivic cohomology of a field $k$, when the degree is equal to the weight, is equal to Milnor's $K$-group $K^M_n (k)$
for this version of the motivic complex in Theorem \ref{Milnor-iso}.
Even though proofs of Nesterenko-Suslin's theorem have already appeared in several articles,
including \cite{MR992981}, \cite{MR1187705} and \cite{MR1744945},
we believe that the theorem is a central one in the related subjects and it is worthwhile to have another proof of it.
Moreover, multilinearity and skew-symmetry properties for the symbols of Goodwillie and Lichtenbaum motivic cohomology
$H^n_{\M} \bigl(\Spec k , {\mathbb Z}(n) \bigr)$ and the similar properties for the symbols in Milnor's $K$-groups
are visibly compatible through our isomorphism.
Secondly, we establish multilinearity and skew-symmetry of the irreducible symbols for $H^{l-1}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr)$ in Theorem
\ref{multilinear} and Proposition \ref{skewsymmetry}.
These results are particularly interesting because these are the properties which have been expected
through the construction of the author's regulator map in \cite{MR2189214}
in case $k$ is a subfield of the field ${\mathbb C}$ of complex numbers and $l=2$.
These properties may provide the Goodwillie-Lichtenbaum complex
with a potential to be one of the better descriptions of motivic cohomology of fields.
\section{Multilinearity for Goodwillie-Lichtenbaum motivic complex and Milnor's $K$-groups}
For a ring $R$, let $\mathcal{P}
(R,\, \G_m^l)$ be the exact category each of whose objects
$(P,\theta_1,\dots,\theta_l)$ consists of a finitely generated
projective $R$-module $P$ and commuting automorphisms
$\theta_1,\dots,\theta_l$ of $P$. A morphism from
$(P,\theta_1,\dots,\theta_l)$ to $(P',\theta_1',\dots,\theta_l')$
in this category is a homomorphism $f: P \rightarrow P'$ of
$R$-modules such that $f \theta_i = \theta_i' f$ for each $i$.
Let $K_0(R,\, \G_m^l)$ be the Grothendieck group of this category
and let $K_0(R,\, \G_m^{\wedge l})$ be the quotient of $K_0(R,\,
\G_m^l)$ by the subgroup generated by those objects $(P,\,
\theta_1,\, \dots,\, \theta_l)$ where $\theta_i = 1$ for some $i$.
For each $d \ge 0$, let $R\Delta^d$ be the $R$-algebra
$$R\Delta^d = R[t_0,\dots,t_d]/(t_0 + \cdots + t_d -1).$$
It is isomorphic to a polynomial ring with $d$ indeterminates over
$R$. We denote by \Ord \ the category of finite nonempty ordered
sets, and by $[d]$, for a nonnegative integer $d$, the object
$\{ 0 < 1 < \dots < d\}$. Given a map $\varphi : [d] \rightarrow
[e]$ in \Ord, the map $\varphi^* : R\Delta^e \rightarrow
R\Delta^d$ is defined by $\varphi^*(t_j) = \sum_{\varphi(i)=j}
t_i$. The map $\varphi^*$ gives us a simplicial ring
$R\Delta^\bullet$.
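For instance, writing $t$ for $t_1 \in R\Delta^1$, the two injections $\varepsilon_0, \varepsilon_1 : [0] \rightarrow [1]$ in \Ord, with images $\{0\}$ and $\{1\}$ respectively, induce $R$-algebra maps $\varepsilon_0^*, \varepsilon_1^* : R\Delta^1 \rightarrow R\Delta^0 \cong R$ satisfying
$$\varepsilon_0^*(t) = 0, \qquad \varepsilon_1^*(t) = 1,$$
so in simplicial degree one the two face maps of $R\Delta^\bullet$ are evaluation of polynomials at the endpoints $t=0$ and $t=1$.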
By applying the functor $K_0(-,\, \G_m^{\wedge l})$, we get the simplicial abelian group
$$[d] \mapsto K_0(R\Delta^d,\, \G_m^{\wedge l}).$$
The associated (normalized) chain complex, shifted cohomologically
by $-l$, is called the motivic complex of Goodwillie and
Lichtenbaum of $weight$ $l$.
For each $(P,\theta_1,\dots,\theta_l)$ in $K_0(R,\, \G_m^{\wedge
l})$, there exists a projective module $Q$ such that $P \oplus Q$
is free over $R$. Then $(P \oplus Q,\theta_1 \oplus
1_Q,\dots,\theta_l \oplus 1_Q)$ represents the same element
of $K_0(R,\, \G_m^{\wedge l})$ as $(P,\theta_1,\dots,\theta_l)$.
Thus $K_0(R\Delta^d,\, \G_m^{\wedge l})$ can be explicitly presented
with generators and relations involving $l$-tuples of commuting
matrices in $GL_n(R\Delta^d), \ n \ge 0$.
For a regular local ring $R$, the motivic cohomology $H^q_{\M} \bigl( \Spec R, \, {\mathbb Z}(l) \bigr)$ will be
the $(l-q)$-th homology group of the Goodwillie-Lichtenbaum complex of weight $l$. In particular, when $k$ is any field,
\begin{align*}
H^q_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)
= \pi_{l-q} |d \mapsto K_0(k \Delta^d,\, \G_m^{\wedge l})|.
\end{align*}
$K_0 (k \Delta^{d}, \, \G_m^{\wedge l})$ ($l \ge 1$) may be considered as the abelian group
generated by $l$-tuples of the form
$\left( \theta_1(t_1, \dots, t_d),\dots,\theta_l(t_1, \dots, t_d)\right)$
subject to certain explicit relations, where $\theta_1(t_1, \dots, t_d),\dots,\theta_l(t_1, \dots, t_d)$ are commuting matrices in
$GL_n(k [t_1, \dots, t_d])$ for various $n \ge 1$.
When $d=1$, we set $t=t_1$ and the boundary map $\partial$ on the motivic complex sends
$\left( \theta_1(t),\dots,\theta_l(t) \right)$ in $K_0 (k \Delta^{1}, \, \G_m^{\wedge l})$ to
$\left( \theta_1(1),\dots,\theta_l(1) \right) - \left( \theta_1(0),\dots,\theta_l(0) \right)$ in $K_0 (k \Delta^0, \, \G_m^{\wedge l})$.
By abuse of notation, we will denote by $(\theta_1,\dots,\theta_l)$ the element in
$K_0 (k \Delta^0, \, \G_m^{\wedge l}) / \partial K_0 (k \Delta^1, \, \G_m^{\wedge l}) = H^{l}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$
that it represents, whenever $\theta_1,\dots,\theta_l$ are commuting matrices in $GL_n(k)$.
\begin{lemma} \label{basicelements}
Let $a_1,a_2, \dots, a_n$ and $b_1, b_2,\dots, b_n$ be elements in $\bar k$ (an algebraic closure of $k$)
not equal to either 0 or 1. Suppose also that $a_1 a_2 \cdots a_n = b_1 b_2 \cdots b_n$ and
$(1-a_1)(1-a_2) \cdots (1-a_n) = (1-b_1)(1-b_2) \cdots (1-b_n)$.
If all the elementary symmetric functions evaluated at $a_1, a_2, \dots , a_n$ and $b_1, b_2, \dots , b_n$
are in $k$, then there is a matrix $\theta(t)$
in $GL_n(k[t])$ such that $1_n-\theta(t)$ is also invertible and the eigenvalues of $\theta(0)$
and $\theta(1)$ are $a_1,a_2, \dots, a_n$ and $b_1,b_2, \dots, b_n$, respectively.
\end{lemma}
\begin{proof}
Let
$$p(\lambda) = (1-t) \prod_{i=1}^n (\lambda -a_i) + t \prod_{i=1}^n (\lambda -b_i)$$
be a polynomial in $\lambda$ with coefficients in $k[t]$.
It is a monic polynomial with the constant term equal to $(-1)^n a_1 a_2 \cdots a_n$.
It has roots $b_1, b_2, \dots, b_n$ and $a_1,a_2, \dots, a_n$ when $t=1$ and $t=0$, respectively.
Now let $\theta(t)$ be its companion matrix in $GL_n(k[t])$. Then $\text{det} \, (1_n-\theta(t)) = p(1)$ since
$\text{det} \, (\lambda 1_n-\theta(t)) = p(\lambda)$.
But $p(1)=(1-a_1)(1-a_2) \cdots (1-a_n)$ $ = (1-b_1)(1-b_2) \cdots (1-b_n)$ is in $k^{\times}$,
and so $1_n-\theta(t)$ is invertible. It is clear that the eigenvalues of $\theta(t)$ are $a_1,\, a_2, \, \dots, \, a_n$
and $b_1,\, b_2, \, \dots, \, b_n$ when $t=0$ and $t=1$, respectively.
\end{proof}
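As a quick numerical sanity check of this construction (an illustration only, with the hypothetical choice $a=(4,\sqrt{3},-\sqrt{3})$ and $b=(-2,2,3)$, for which both the products of the $a_i$ and of the $1-a_i$ match those of the $b_i$), one can verify in pure Python that $\det \, (1_3-\theta(t))=p(1)$ does not depend on $t$:

```python
def poly_mul(p, q):
    # product of polynomials given as coefficient lists (lowest degree first)
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def from_roots(roots):
    # monic polynomial prod(lambda - r) as a coefficient list
    p = [1.0]
    for r in roots:
        p = poly_mul(p, [-r, 1.0])
    return p

a = [4.0, 3.0 ** 0.5, -(3.0 ** 0.5)]   # eigenvalues prescribed at t = 0
b = [-2.0, 2.0, 3.0]                   # eigenvalues prescribed at t = 1
pa, pb = from_roots(a), from_roots(b)

def p_at_one(t):
    # p(lambda) = (1-t) prod(lambda - a_i) + t prod(lambda - b_i) at lambda = 1;
    # this equals det(1_n - theta(t)) for the companion matrix theta(t)
    return sum((1 - t) * ca + t * cb for ca, cb in zip(pa, pb))

const = (1 - a[0]) * (1 - a[1]) * (1 - a[2])   # = (1-b1)(1-b2)(1-b3) = 6
for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(p_at_one(t) - const) < 1e-9
print("det(1 - theta(t)) = 6 for all t")
```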
\begin{definition} \label{Z}
For $l \ge 2$, let $\mathbf{Z}$ be the subgroup of $K_0 (k \Delta^1, \G_m^{\wedge l})$ generated by the elements of the following types
for various $n \ge 1$ :
$(Z_1)$ $(\theta_1,\dots,\theta_l)$, where $\theta_1,\dots,\theta_l \in GL_n(k[t])$ commute and $\theta_i$ is in $GL_n(k)$ for some $i$;
$(Z_2)$ $(\theta_1,\dots,\theta_l)$, where $\theta_i = \theta_j \in GL_n(k[t])$ for some $i \ne j$;
$(Z_3)$ $(\theta_1,\dots,\theta_l)$, where $\theta_i = 1_n -\theta_j \in GL_n(k[t])$ for some $i \ne j$.
\end{definition}
\begin{lemma} \label{boundary-Z}
Let $\partial \mathbf{Z}$ denote the image of $\mathbf{Z}$ under the boundary homomorphism
$\partial: K_0 (k \Delta^1, \, \G_m^{\wedge l}) \rightarrow K_0 (k \Delta^0, \, \G_m^{\wedge l})$ when $l \ge 2$.
Then $\partial \mathbf{Z}$ contains all elements of the following forms:
(i) $(\varphi \psi,\theta_2,\dots,\theta_l)-(\varphi,\theta_2,\dots,\theta_l)-(\psi,\theta_2,\dots,\theta_l)$,
for all commuting $\varphi, \psi, \theta_2,\dots,\theta_l \in GL_n(k)$;
Similarly, $(\theta_1, \dots, \theta_{i-1}, \varphi \psi, \theta_{i+1}, \dots, \theta_l)
-(\theta_1, \dots, \theta_{i-1}, \varphi, \theta_{i+1}, \dots, \theta_l)
-(\theta_1, \dots, \theta_{i-1}, \psi, \theta_{i+1}, \dots, \theta_l)$
for all commuting $\varphi, \psi, \theta_1,\dots,\theta_{i-1}, \theta_{i+1},\dots, \theta_l \in GL_n(k)$;
(ii) $(\theta_1,\dots,\theta_i, \dots, \theta_j, \dots, \theta_l) + (\theta_1,\dots,\theta_j, \dots, \theta_i, \dots, \theta_l)$,
for all commuting $\theta_1,\dots,\theta_l \in GL_n(k)$;
(iii) $(\theta_1,\dots,\theta_i, \dots, \theta_j, \dots, \theta_l)$, when $\theta_i= -\theta_j$
for commuting $\theta_1,\dots,\theta_l \in GL_n(k)$;
(iv) $(c_1, \dots, b,\dots, 1-b, \dots, c_l)-(c_1, \dots, a,\dots, 1-a,\dots, c_l)$, for $a, b \in k-\{0,1\}$ and $c_i \in k^\times$
for each appropriate $i$.
\end{lemma}
\begin{proof}
$(i)$
We first observe the following identities of matrices:
\begin{align} \label{linear-equa}
\begin{pmatrix} 1_n & 0 \\ \psi & 1_n \end{pmatrix} \begin{pmatrix} \psi & 1_n \\ 0 & \varphi \end{pmatrix} \begin{pmatrix} 1_n & 0 \\ -\psi & 1_n \end{pmatrix}
&=\begin{pmatrix} 0 & 1_n \\ -\varphi \psi & \varphi+\psi \end{pmatrix}, \\
\label{linear-equb}
\begin{pmatrix} 1_n & 0 \\ 1_n & 1_n \end{pmatrix}
\begin{pmatrix} 1_n & 1_n \\ 0 & \varphi \psi \end{pmatrix}
\begin{pmatrix} 1_n & 0 \\ -1_n & 1_n \end{pmatrix}
&=\begin{pmatrix} 0 & 1_n \\ -\varphi \psi & 1_n+\varphi \psi \end{pmatrix}.
\end{align}
Let $\Theta(t)$ be the $2n \times 2n$ matrix
$$\begin{pmatrix} 0 & 1_n \\
-\varphi \psi & t(1_n+\varphi \psi)+(1-t)(\varphi+\psi) \end{pmatrix}. $$
Then, $\Theta(t)$ is in $GL_{2n}(k[t])$, $\bigl( \Theta(t),\, \theta_2 \oplus \theta_2, \dots, \theta_l \oplus \theta_l \bigr)$
is in $\mathbf{Z}$ by Definition \ref{Z} $(Z_1)$
and the boundary of $\bigl( \Theta(t),\, \theta_2 \oplus \theta_2, \dots, \theta_l \oplus \theta_l \bigr)$ is, by (\ref{linear-equa})
and by (\ref{linear-equb}),
$(1_n \oplus \varphi \psi, \theta_2 \oplus \theta_2, \dots, \theta_l \oplus \theta_l)
-(\varphi \oplus \psi,\theta_2 \oplus \theta_2, \dots, \theta_l \oplus \theta_l)
=(\varphi \psi,\theta_2, \dots, \theta_l)
-(\varphi,\theta_2, \dots, \theta_l)-(\psi,\theta_2, \dots, \theta_l)$.
The proof is similar for other cases.
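The two conjugation identities (\ref{linear-equa}) and (\ref{linear-equb}) are elementary to verify by hand; the following pure-Python snippet (an illustration only, with the arbitrary scalar choices $\varphi=5$, $\psi=7$, i.e.\ $n=1$) machine-checks them:

```python
def mat_mul(A, B):
    # product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

phi, psi = 5, 7

# first identity: conjugate [[psi, 1], [0, phi]] by [[1, 0], [psi, 1]]
lhs1 = mat_mul(mat_mul([[1, 0], [psi, 1]], [[psi, 1], [0, phi]]),
               [[1, 0], [-psi, 1]])
assert lhs1 == [[0, 1], [-phi * psi, phi + psi]]

# second identity: conjugate [[1, 1], [0, phi*psi]] by [[1, 0], [1, 1]]
lhs2 = mat_mul(mat_mul([[1, 0], [1, 1]], [[1, 1], [0, phi * psi]]),
               [[1, 0], [-1, 1]])
assert lhs2 == [[0, 1], [-phi * psi, 1 + phi * psi]]
print("identities verified")
```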
$(ii)$ We let $\Theta(t)$ be the matrix
$$\begin{pmatrix} 0 & 1_n \\
-\theta_i \theta_j & t(1_n+\theta_i \theta_j)+(1-t)(\theta_i+\theta_j) \end{pmatrix}. $$
Then $\bigl( \theta_1^{\oplus 2},\dots,\Theta(t), \dots, \Theta(t),\dots, \theta_l^{\oplus 2} \bigr)$ is in $\mathbf{Z}$
by Definition \ref{Z} $(Z_2)$ and the boundary of $\bigl( \theta_1^{\oplus 2},\dots,\Theta(t), \dots, \Theta(t),\dots, \theta_l^{\oplus 2} \bigr)$ is
\begin{align*}
&(\theta_1,\dots,\theta_i \theta_j, \dots, \theta_i \theta_j,\dots, \dots, \theta_l)
-(\theta_1,\dots,\theta_i, \dots, \theta_i,\dots, \theta_l)
-(\theta_1,\dots,\theta_j, \dots, \theta_j,\dots, \theta_l)\\
&= \bigl( (\theta_1,\dots,\theta_i, \dots, \theta_i,\dots, \theta_l)
+(\theta_1,\dots,\theta_i, \dots, \theta_j,\dots, \theta_l)
+(\theta_1,\dots,\theta_j, \dots, \theta_i,\dots, \theta_l)
+(\theta_1,\dots,\theta_j, \dots, \theta_j,\dots, \theta_l) \bigr) \\
&-(\theta_1,\dots,\theta_i, \dots, \theta_i,\dots, \theta_l)
-(\theta_1,\dots,\theta_j, \dots, \theta_j,\dots, \theta_l) \\
&=(\theta_1,\dots,\theta_i, \dots, \theta_j,\dots, \theta_l) + (\theta_1,\dots,\theta_j, \dots, \theta_i,\dots, \theta_l)
\quad \text {modulo } \partial \mathbf{Z} \text{ by} \ (i).
\end{align*}
$(iii)$ We note that $\displaystyle \left( \begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix},\dots, \begin{pmatrix} 0 & 1_n \\ -\theta & t(\theta+1_n) \end{pmatrix},
\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right)$
is an element of $\mathbf{Z}$ by Definition \ref{Z} $(Z_1)$. So its boundary
\begin{multline*}
\quad \ \left( \begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & \theta+1_n \end{pmatrix},
\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right) \\
-\left( \begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right) \\
=\left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} \theta & 1_n \\ 0 & 1_n \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right)\\
-\left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right) \\
=(\theta_1, \dots, -\theta, \dots, \theta, \dots, \theta_l)
-\left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix}\right)
\end{multline*}
is in $\partial \mathbf{Z}$. Thus it suffices to prove that
$\displaystyle \left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
\begin{pmatrix} -\theta & 0 \\ 0 & -\theta \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right)$
is in $\partial \mathbf{Z}$. But it is equal to
\begin{multline*}
\left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
{\begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}}^2, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right) \\
= 2 \left(\begin{pmatrix} \theta_1 & 0 \\ 0 & \theta_1 \end{pmatrix}, \dots,
{\begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}}, \dots, \begin{pmatrix} 0 & 1_n \\ -\theta & 0 \end{pmatrix}
,\dots, \begin{pmatrix} \theta_l & 0 \\ 0 & \theta_l \end{pmatrix} \right),
\end{multline*}
which is in $\partial \mathbf{Z}$ by $(ii)$ above.
$(iv)$ Apply Lemma \ref{basicelements} to $a_1=a,\ a_2=\sqrt{b},\ a_3=-\sqrt{b},
\ b_1=-\sqrt{a}, \ b_2=\sqrt{a}, \ b_3= b$ to get $\theta(t) \in GL_3(k[t])$ with the properties stated in the lemma.
Then $z=2 \bigl( c_1^{\oplus 3}, \dots, \theta(t),\dots, 1_3-\theta(t),\dots,c_l^{\oplus 3} \bigr)$ is in $\mathbf{Z}$ by Definition \ref{Z} $(Z_3)$. But, by the theory of
rational canonical form, we have
{\allowdisplaybreaks
\begin{multline*}
\partial z = 2 \left( (c_1, \dots,b,\dots,1-b,\dots,c_l)
+\left( \begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} 0 & 1 \\ a & 0 \end{pmatrix}, \dots, \begin{pmatrix} 1 & -1 \\ -a & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \right) \\
-2 \left( (c_1, \dots,a,\dots,1-a,\dots,c_l)
+\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} 0 & 1 \\ b & 0 \end{pmatrix}, \dots, \begin{pmatrix} 1 & -1 \\ -b & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \right) \\
= -2 (c_1, \dots,a,\dots,1-a,\dots,c_l) + 2(c_1, \dots,b,\dots,1-b,\dots,c_l) \\
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
{\begin{pmatrix} 0 & 1 \\ b & 0 \end{pmatrix}}^2, \dots, \begin{pmatrix} 1 & -1 \\ -b & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \\
+\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
{\begin{pmatrix} 0 & 1 \\ a & 0 \end{pmatrix}}^2, \dots, \begin{pmatrix} 1 & -1 \\ -a & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right)\\
= \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 1-b & 0 \\ 0 & 1-b \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \\
- \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 1 & -1 \\ -b & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right) \\
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 1-a & 0 \\ 0 & 1-a \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \\
+ \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 1 & -1 \\ -a & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \\
=\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots,
\begin{pmatrix} 1-b & 0 \\ 0 & 1-b \end{pmatrix} {\begin{pmatrix} 1 & -1 \\ -b & 1 \end{pmatrix}}^{-1}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right) \\
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots,
\begin{pmatrix} 1-a & 0 \\ 0 & 1-a \end{pmatrix} {\begin{pmatrix} 1 & -1 \\ -a & 1 \end{pmatrix}}^{-1}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
=\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 1 & 1 \\ b & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 1 & 1 \\ a & 1 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right) \\
=\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix},\dots, \begin{pmatrix} {\frac {-b} {1-b}} & {\frac 1 {1-b}} \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ b & 1 \end{pmatrix} {\begin{pmatrix} {\frac {-b}{1-b}} & {\frac 1 {1-b}} \\ 0 & 1 \end{pmatrix}}^{-1}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix},\dots, \begin{pmatrix} {\frac {-a} {1-a}} & {\frac 1 {1-a}} \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ a & 1 \end{pmatrix} {\begin{pmatrix} {\frac {-a}{1-a}} & {\frac 1 {1-a}} \\ 0 & 1 \end{pmatrix}}^{-1}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
=\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ b-1 & 2 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ a-1 & 2 \end{pmatrix}
,\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right).
\end{multline*}
}
By taking the boundary of the element
\begin{align*}
&\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ b-1 & (2-b)t+2(1-t) \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right) \\
&-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ a-1 & (2-a)t+2(1-t) \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right),
\end{align*}
which is in $\mathbf{Z}$ by Definition \ref{Z} $(Z_1)$, we see that
\begin{align*}
\partial z
&= \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ b-1 & 2-b \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
&-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 0 & 1 \\ a-1 & 2-a \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \\
&= \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \dots, \begin{pmatrix} 1-b & 0 \\ 0 & 1 \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix}\right) \qquad \text{by (\ref{linear-equa})} \\
&-\left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \dots, \begin{pmatrix} 1-a & 0 \\ 0 & 1 \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right)\\
&= (c_1, \dots, b,\dots, 1-b, \dots, c_l)-(c_1, \dots, a,\dots, 1-a,\dots, c_l) \\
\end{align*}
in $K_0 (k \Delta^0, \, \G_m^{\wedge l})/\partial \mathbf{Z}$.
Therefore, $(iv)$ lies in $\partial \mathbf{Z}$.
\end{proof}
\begin{corollary} \label{multilin-l-l} (Multilinearity and Skew-symmetry for $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$)
(i) $(\theta_1, \dots, \theta_{i-1}, \varphi \psi, \theta_{i+1}, \dots, \theta_l) =
(\theta_1, \dots, \theta_{i-1}, \varphi, \theta_{i+1}, \dots, \theta_l) +(\theta_1, \dots, \theta_{i-1}, \psi, \theta_{i+1}, \dots, \theta_l)$
in $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$,
for all commuting $\varphi, \psi, \theta_1,\dots,\theta_{i-1}, \theta_{i+1},\dots, \theta_l \in GL_n(k)$;
(ii) $(\theta_1,\dots,\theta_i, \dots, \theta_j, \dots, \theta_l) = - (\theta_1,\dots,\theta_j, \dots, \theta_i, \dots, \theta_l)$
in $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$
for all commuting $\theta_1,\dots,\theta_l \in GL_n(k)$.
\end{corollary}
If $\theta_1,\dots,\theta_l$ and $\theta'_1,\dots,\theta'_l$ are commuting matrices in $GL_n(k)$ and $GL_m(k)$, respectively, then
$(\theta_1,\dots,\theta_l) + (\theta'_1,\dots,\theta'_l) = (\theta_1 \oplus \theta'_1,\dots,\theta_l \oplus \theta'_l)$
in $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$. Therefore, we obtain the following result from Corollary \ref{multilin-l-l}.
\begin{corollary}
Every element in $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ can be written as a single symbol $(\theta_1,\dots,\theta_l)$, where
$\theta_1,\dots,\theta_l$ are commuting matrices in $GL_n(k)$.
\end{corollary}
Thanks to Lemma \ref{boundary-Z}, we can construct a map from Milnor's $K$-groups to the motivic cohomology groups.
\begin{proposition} \label{Milnor-map}
For any field $k$, the assignment $\{ a_1,a_2, \dots, a_l \} \mapsto (a_1,a_2,\dots,a_l) $ for each Steinberg symbol
$\{a_1,a_2, \dots, a_l \}$ gives a well-defined homomorphism $\rho_l$
from the Milnor's $K$-group $K^M_l(k)$ to $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
\end{proposition}
\begin{proof}
This proposition turns out to be straightforward when $l=1$. So we assume that $l \ge 2$.
By Corollary \ref{multilin-l-l} $(i)$, the multilinearity is satisfied by our symbol $( \ ,\dots, \ )$.
Therefore all we need to show is that for every $\alpha \in k-\{0,1\}$ and $c_r \in k^\times$ for $1 \le r \le l$, $r \ne i,j$,
$(c_1, \dots, \alpha, \dots, 1-\alpha, \dots, c_l)$ is in $\partial K_0 (k \Delta^1, \, \G_m^{\wedge l})$.
We will actually show that it is contained in $\partial \mathbf{Z}$.
The proposition is immediate for a prime field ${\mathbb F}_p$ because $K^M_l({\mathbb F}_p)=0$ for $l \ge 2$.
So we may assume that there exists an element $e \in k$ such that $e^3-e \neq 0$.
By Lemma \ref{boundary-Z} $(iv)$ with $a=e,\, b=1-e$, we have
$(c_1, \dots, e,\dots,1-e, \dots, c_l)-(c_1, \dots, 1-e,\dots,e, \dots, c_l) = 2 (c_1, \dots, e,\dots,1-e, \dots, c_l) = 0$
modulo $\partial \mathbf{Z}$.
With $a=-e,\, b=1+e$, we have $2(c_1, \dots, e,\dots,1+e, \dots, c_l) = 2(c_1, \dots,-e,\dots,1+e, \dots, c_l)=0$.
Hence, $(c_1, \dots, e^2,\dots,1-e^2, \dots, c_l) = 2(c_1, \dots, e,\dots,1-e, \dots, c_l)+2(c_1, \dots, e,\dots,1+e, \dots, c_l) = 0$.
On the other hand, by Lemma \ref{boundary-Z} $(iv)$ with $a=e^2,\, b=\alpha$,
we see that $-(c_1, \dots, e^2,\dots,1-e^2, \dots, c_l) + (c_1, \dots, \alpha,\dots,1-\alpha, \dots, c_l)$ is in
$\partial \mathbf{Z}$, and we are done.
More explicitly, let $z = 2 \bigl(c_1^{\oplus 3}, \dots, \theta(t), \dots, 1-\theta(t), \dots, c_l^{\oplus 3} \bigr) \in \mathbf{Z}$, where
$$\theta(t) = \begin{pmatrix} 0 & 1 & 0 \\
0 & 0 & 1 \\
-e^2\alpha & (e^2-\alpha)t +\alpha & (\alpha-e^2)t+e^2 \end{pmatrix}.$$
This matrix $\theta(t)$ is constructed via Lemma \ref{basicelements} with $a_1=e^2,\ a_2=\sqrt{\alpha},\ a_3=-\sqrt{\alpha},
\ b_1=-e, \ b_2=e, \ b_3= \alpha$. Hence, by the computation we have done in the proof of Lemma \ref{boundary-Z} $(iv)$,
\begin{align*}
\partial z
&= 2(c_1, \dots, -e,\dots,1+e, \dots, c_l)+2(c_1, \dots, e,\dots,1-e, \dots, c_l) \\
&+ 2(c_1, \dots, \alpha,\dots,1-\alpha, \dots, c_l)-2(c_1, \dots, e^2,\dots,1-e^2, \dots, c_l) \\
&-2 \left(\begin{pmatrix} c_1 & 0 \\ 0 & c_1 \end{pmatrix}, \dots,
\begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}, \dots, \begin{pmatrix} 1 & -1 \\ -\alpha & 1 \end{pmatrix},
\dots, \begin{pmatrix} c_l & 0 \\ 0 & c_l \end{pmatrix} \right) \\
&=-(c_1, \dots, e^2,\dots,1-e^2, \dots, c_l) + (c_1, \dots, \alpha,\dots,1-\alpha, \dots, c_l) \\
&=(c_1, \dots, \alpha,\dots,1-\alpha, \dots, c_l).
\end{align*}
\end{proof}
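For orientation, the case $l=1$ excluded above can be spelled out directly; the following is only a sketch restating the identifications already in use, with no new input.

```latex
% The case l=1: an element a of K^M_1(k) = k^\times is a 1x1 invertible
% matrix, hence gives a symbol (a) in K_0(k, \G_m^{\wedge 1}).
\[
\rho_1 : K^M_1(k) = k^\times \longrightarrow H^1_{\M} \bigl( \Spec k, \, {\mathbb Z}(1) \bigr),
\qquad \{a\} \longmapsto (a).
\]
% Additivity holds because (a) + (b) = (a \oplus b) = (ab): the matrices
% a \oplus b and ab \oplus 1 have the same determinant, and any matrix of
% determinant 1 is a product of elementary matrices, each of which gives a
% trivial class (cf. the proof of (iii) of Lemma \ref{basic-norm}).
```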
For the Goodwillie-Lichtenbaum motivic complex, there is a straightforward \underline{functorial} definition of the norm map on the motivic cohomology
for any finite field extension $k \subset L$.
\begin{definition} \label{def-norm}
If $\theta_1, \dots, \theta_l$ are commuting automorphisms on a finitely generated projective $L \Delta^d$-module $P$,
then by identifying $L \Delta^d$ as a free $k \Delta^d$-module of finite rank,
we may consider $P$ as a finitely generated projective $k \Delta^d$-module and $\theta_1, \dots, \theta_l$ as commuting automorphisms on it.
This gives a simplicial map $K_0(L \Delta^d,\, \G_m^{\wedge l}) \rightarrow K_0(k \Delta^d,\, \G_m^{\wedge l})$.
The resulting homomorphism $N_{L/k}: \ H^{q}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr) \rightarrow H^{q}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr)$ is called the
norm map.
\end{definition}
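As an illustration (not needed in the sequel), for $l=1$ the definition recovers the usual field norm; here $m_\beta$ denotes multiplication by $\beta$ on $L$, a notation used only in this remark.

```latex
% Take P = L with theta_1 = beta in L^\times acting by multiplication.
% Viewing L as a free k-module of rank d = [L:k], beta becomes the k-linear
% map m_beta, and in H^1_M(Spec k, Z(1)) a matrix is determined by its
% determinant (cf. the proof of Lemma \ref{basic-norm} below), so
\[
N_{L/k} \bigl( (\beta) \bigr) = (m_\beta) = \bigl( \det m_\beta \bigr) = \bigl( N_{L/k}(\beta) \bigr)
\quad \text{in } H^1_{\M} \bigl( \Spec k, \, {\mathbb Z}(1) \bigr),
\]
% which is (iii) of Lemma \ref{basic-norm} with no alpha's present.
```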
We summarize some basic results for the norm in the following lemma.
\begin{lemma} \label{basic-norm}
(i) $N_{L'/L} \circ N_{L/k} = N_{L'/k}$ whenever we have a tower of finite field extensions $k \subset L \subset L'$.

(ii) If $[L:k]=d$, the composition
$$ \xymatrix {H^{q}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr) \ar[r]^-{i_{L/k}} & H^{q}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr) \ar[r]^-{N_{L/k}}
& H^{q}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr) },$$ where $i_{L/k}$ is induced by the inclusion of the fields $k \subset L$,
is multiplication by $d$.

(iii) For $\alpha_1, \dots, \alpha_l \in k^\times$ and $\beta \in L^\times$,
$N_{L/k} \left( \alpha_1, \dots, \alpha_l, \beta \right) = \left(\alpha_1, \dots, \alpha_l, N_{L/k}(\beta) \right)$
in $H^{l+1}_{\M} \bigl(\Spec k , {\mathbb Z}(l+1) \bigr)$,
where $N_{L/k}(\beta) \in k^\times$ is the image of $\beta$ under the usual norm map $N_{L/k}: L^\times \rightarrow k^\times$.
\end{lemma}
\begin{proof}
$(i)$ and $(ii)$ are immediate from Definition \ref{def-norm}. $(iii)$ follows from the observation that, in $H^{1}_{\M} \bigl(\Spec k , {\mathbb Z}(1) \bigr)$,
the two elements represented by two matrices with same determinants are equal since any matrix with determinant 1 is a product
of elementary matrices and an element represented by an elementary matrix vanishes in $H^{1}_{\M} \bigl(\Spec k , {\mathbb Z}(1) \bigr)$.
\end{proof}
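A concrete check of $(ii)$ when $l=1$, under the identification of $H^{1}_{\M} \bigl(\Spec k , {\mathbb Z}(1) \bigr)$ with $k^\times$ used in the proof above:

```latex
% For a in k^\times, i_{L/k} sends (a) to the same 1x1 matrix over L;
% restricting scalars, a acts on L, a k-vector space of dimension d = [L:k],
% as the scalar matrix a \cdot 1_d, so
\[
N_{L/k} \bigl( i_{L/k} \bigl( (a) \bigr) \bigr) = ( a \cdot 1_d )
= \bigl( \det ( a \cdot 1_d ) \bigr) = (a^d) = d \cdot (a),
\]
% i.e. multiplication by d = [L:k], written additively in the group
% H^1_M(Spec k, Z(1)).
```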
We also have the norm maps $N_{L/k}: \ K^M_l (L) \rightarrow K^M_l (k)$ for the Milnor $K$-groups
whenever $L/k$ is a finite field extension, whose definition we briefly recall as follows (see \cite{MR0442061} or \cite{MR603953}, \S 1.2).
For each discrete valuation $v$ of the field $K=k(t)$ of rational functions over $k$, let $\pi_v$ be a uniformizing parameter and
$k_v = R_v/(\pi_v)$ be the residue field of the valuation ring $R_v=\{ r \in K | v(r) \ge 0 \}$.
Then we define the tame symbol $\partial_v : K^M_{l+1}(K) \rightarrow K^M_l(k_v)$ to be the epimorphism such that
$\partial_v ( \{ u_1, \dots, u_l, y \}) = v(y) \{\overline{u_1}, \dots, \overline{u_l} \}$ whenever $u_1, \dots, u_l$ are
units of the valuation ring $R_v$.
Let $v_\infty$ be the valuation on $K=k(t)$, which vanishes on $k$, such that $v_\infty (t) = -1$.
Every simple algebraic extension $L$ of $k$ is isomorphic to $k_v$ for some discrete valuation $v \ne v_\infty$ which corresponds to a
prime ideal $\mathfrak{p}$ of $k[t]$. The norm maps $N_v : K^M_l(k_v) \rightarrow K^M_l(k)$ are the unique homomorphisms such that,
for every $w \in K^M_{l+1}(k(t)),$ $\displaystyle \sum_{v} N_v \left( \partial_v w \right)=0$
where the sum is taken over all discrete valuations, including $v_\infty$ on $k(t)$, vanishing on $k$. This equality is called
the Weil reciprocity law. Note that we take $N_{v_\infty} = \mathrm{Id}$ for $v=v_\infty$, whose residue field is $k$ itself.
Kato (\cite{MR603953} \S 1.7) has shown that these maps, when defined as compositions of norm maps for simple extensions along a given tower of
simple extensions, depend only on the field extension $L/k$, i.e., that they enjoy functoriality. See also \cite{MR689382}. They also satisfy
a projection formula similar to $(iii)$ of Lemma \ref{basic-norm}.
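A minimal example of the reciprocity law just recalled, assuming nothing beyond the defining formula for $\partial_v$ above: take $w = \{a, t\} \in K^M_2(k(t))$ with $a \in k^\times$.

```latex
% a is a unit at every valuation vanishing on k; v_t(t) = 1 at the valuation
% v_t with uniformizer t (whose residue field is k), v_infty(t) = -1, and
% v(t) = 0 at every other such valuation. Hence
\[
\partial_{v_t} \{a, t\} = v_t(t) \, \{ \overline{a} \} = \{a\}, \qquad
\partial_{v_\infty} \{a, t\} = v_\infty(t) \, \{a\} = -\{a\}, \qquad
\partial_{v} \{a, t\} = 0 \ \text{otherwise},
\]
% and since N_{v_t} = Id (the residue field is k itself) and N_{v_infty} = Id,
% the Weil reciprocity law reduces to {a} - {a} = 0, as it must.
```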
The following key lemma shows the compatibility between these two types of norm maps.
\begin{lemma} \label{norm-compatible}
For every finite field extension $k \subset L$, we have the following commutative diagram, where the vertical maps are the norm maps
and the horizontal maps are the homomorphisms in Proposition \ref{Milnor-map}:
$$\xymatrix{K^M_l (L) \ar[r]^-{\rho_l} \ar[d]^-{N_{L/k}} &
H^{l}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr) \ar[d]^-{N_{L/k}} \\
K^M_l (k) \ar[r]^-{\rho_l} &
H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr)}$$
\end{lemma}
\begin{proof}
We follow the same procedure as in \cite{MR2242284} for the proof.
Because of the functoriality property of the norm maps, we may assume that $[L:k]$ is a prime number $p$.
First, let us assume that $k$ has no extensions of degree prime to $p$. By Lemma (5.3) in \cite{MR0442061},
$K^M_l(L)$ is generated by the symbols of the form $x=\{x_1, \dots, x_{l-1}, y\}$ where $x_i \in k$ and $y \in L$. Then,
by the projection formula for Milnor's $K$-groups,
$\rho_l N_{L/k}\left(\{x_1, \dots, x_{l-1}, y\}\right) = \rho_l \left(\{x_1, \dots, x_{l-1}, N_{L/k}(y)\}\right)
= \left( x_1, \dots, x_{l-1}, N_{L/k}(y) \right)$.
We also have $N_{L/k} \rho_l \left(\{x_1, \dots, x_{l-1}, y\}\right) = N_{L/k} \bigl( (x_1, \dots, x_{l-1}, y) \bigr)
=\left( x_1, \dots, x_{l-1}, N_{L/k}(y) \right)$ by $(iii)$ of Lemma \ref{basic-norm}, and we are done in this case.
Next, for the general case, let $k'$ be a maximal prime-to-$p$ extension of $k$. Then, by the previous case applied to $k'$ and by $(i)$ of Lemma
\ref{basic-norm}, we see that $z = N_{L/k} \rho_l(x) - \rho_l N_{L/k}(x)$, which is in the kernel of
$i_{k'/k}: H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr) \rightarrow H^{l}_{\M} \bigl(\Spec k' , {\mathbb Z}(l) \bigr)$, is
a torsion element of $H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr)$ of exponent prime to $p$.
In particular, if $L/k$ is a purely inseparable extension of degree $p$, then $y^p \in k$ for every $y \in L$, and so $z$ is clearly killed by $p$, i.e., $z=0$.
Hence we may assume that $L/k$ is separable.
Since the kernel of $i_{L/k}: H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr) \rightarrow H^{l}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr)$ has exponent $p$,
it suffices to prove that $i_{L/k}(z) = 0$ to conclude $z=0$.
Now $L \otimes_k L$ is a finite product of fields $L_i$ with $[L_i:L] < p$ and we have the following commutative diagrams.
$$\xymatrix{K^M_l (L) \ar[r]^-{\oplus i_{L_i/L}} \ar[d]^-{N_{L/k}} &
\oplus_i K^M_l (L_i) \ar[d]^-{\sum_i N_{L_i/L}} \\
K^M_l (k) \ar[r]^{i_{L/k}} &
K^M_l (L)}
\quad \quad \xymatrix{H^{l}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr) \ar[r]^-{\oplus i_{L_i/L}} \ar[d]^-{N_{L/k}} &
\oplus_i H^{l}_{\M} \bigl(\Spec L_i , {\mathbb Z}(l) \bigr) \ar[d]^-{\sum_i N_{L_i/L}} \\
H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr) \ar[r]^-{i_{L/k}} &
H^{l}_{\M} \bigl(\Spec L , {\mathbb Z}(l) \bigr)}
$$
The left diagram is the diagram (15) in p.387 of \cite{MR0442061} and the right diagram follows easily from
Definition \ref{def-norm}.
By induction on $p$ (note that $[L_i:L] < p$), we have $i_{L/k}(z) =\sum_i N_{L_i/L}\rho_l (i_{L_i/L}(x))- \sum_i \rho_l N_{L_i/L} (i_{L_i/L}(x))=0$ and the proof is complete.
\end{proof}
\begin{lemma} \label{phi-map}
For any field $k$, there is a homomorphism $\phi_l : H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr) \rightarrow K^M_l(k)$ such that,
for each element $z \in H^{l}_{\M} \bigl(\Spec k , {\mathbb Z}(l) \bigr)$, there is an expression
$z = \displaystyle \sum_{j=1}^r N_{L_j/k} \left((\alpha_{1j}, \dots, \alpha_{lj})\right)$
where $L_1, \dots, L_r$ are finite field extensions of $k$,
$\alpha_{ij} \in GL_1(L_j) = L_j^\times$ ($1 \le i \le l$, $1 \le j \le r$), such that the equality
$\displaystyle \phi_l (z) = \sum_j N_{L_j/k}\left( \{ \alpha_{1j}, \dots, \alpha_{lj} \} \right)$ holds in $K^M_l(k)$.
\end{lemma}
\begin{proof}
For a tuple $z=(\theta_1, \theta_2, \dots, \theta_l)$, where $\theta_1, \theta_2, \dots, \theta_l$ are commuting matrices
in $GL_n(k)$, consider the vector space $E=k^n$ as an $R=k[t_1, t_1^{-1}, \dots, t_l, t_l^{-1}]$-module,
on which $t_i$ acts as $\theta_i$. Since $E$ is of finite rank over $k$,
it has a composition series $0=E_0 \subset E_1 \subset \dots \subset E_r=E$
with simple factors $L_j = E_j / E_{j-1}$ ($j=1,\dots,r$).
Then, there exists a maximal ideal $\mathfrak{m}_j$ of $R$ such that $L_j \simeq R / \mathfrak{m}_j$.
So we see that $L_j$ is a finite field extension of $k$, and $\displaystyle z = \sum_{j=1}^r (\theta_1 | L_j, \dots, \theta_l | L_j)$,
where $\theta_i |L_j$ is the automorphism on $L_j$ induced by $\theta_i$.
Let us denote by $\alpha_{ij}$ the element of $L_j^\times$ which corresponds to $t_i$ (mod $\mathfrak{m}_j$) for $i=1, \dots, l$, then
$\displaystyle (\theta_1 | L_j, \dots, \theta_l | L_j) = N_{L_j/k} \left((\alpha_{1j}, \dots, \alpha_{lj})\right)$.
Since these factors $L_j$ are unique up to order, and since a Milnor symbol vanishes if one of its coordinates is $1$,
the assignment $(\theta_1, \theta_2, \dots, \theta_l) \mapsto \sum_j N_{L_j/k} \left(\{\alpha_{1j}, \dots, \alpha_{lj}\}\right)$
gives us a well-defined homomorphism from $K_0 (k, \, \G_m^{\wedge l})$ to $K^M_l(k)$.
It remains to show that this homomorphism vanishes on $\partial K_0 (k \Delta^1, \, \G_m^{\wedge l})$.
Let $A_1(t), \dots, A_l(t)$ be commuting matrices in $GL_n(k[t])$, where $t$ is an indeterminate.
Then $M=k(t)^n$ can be considered as an $S=k(t)[t_1,t_1^{-1}, \dots, t_l, t_l^{-1}]$-module, on which $t_i$ acts as $A_i(t)$. Choose
a composition series $0=M_0 \subset M_1 \subset \dots \subset M_s=M$ with simple $S$-modules $Q_j = M_j / M_{j-1}$ ($j=1,\dots,s$)
and maximal ideals $\mathfrak{n}_j$ of $S$ such that $Q_j \simeq S / \mathfrak{n}_j$.
We also denote by $\beta_{ij}$ the element of $Q_j^\times$ which corresponds to $t_i$ (mod $\mathfrak{n}_j$) for $i=1, \dots, l$ and $j=1, \dots, s$.
Each $Q_j$ is a finite field extension of $k(t)$; let $x = \sum_{j=1}^s N_{Q_j / k(t)} (\{\beta_{1j}, \dots, \beta_{lj}\}) \in K^M_l(k(t))$.
Now consider the element $\displaystyle y=\{x, {(t-1)/ t} \}$ in $K^M_{l+1}(k(t))$,
where the symbol $\{x, {(t-1)/ t} \}$ denotes $\sum_u \{x_{1u}, \dots, x_{lu}, {(t-1)/t} \}$ if $x = \sum_u \{x_{1u}, \dots, x_{lu}\}$ in $K^M_l(k(t))$.
Then $\partial_v (y) = - \phi_l \big( (A_1(0), \dots, A_l(0)) \big)$ if $\pi_v = t$ and
$\partial_v (y) = \phi_l \big( (A_1(1), \dots, A_l(1)) \big)$ if $\pi_v = t-1$.
Also, the image $\partial_v (y)$ is zero unless $v$ is the valuation associated with either $\pi_v = t-1$ or $\pi_v = t$.
Hence we have $\phi_l \big( (A_1(0), \dots, A_l(0)) \big) = \phi_l \big( (A_1(1), \dots, A_l(1)) \big)$ by the Weil reciprocity law for
the Milnor $K$-groups.
\end{proof}
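To see the construction of $\phi_l$ in the simplest case, the following sketch works out $l=1$ for a single matrix $\theta \in GL_n(k)$.

```latex
% Here R = k[t_1, t_1^{-1}] and the composition factors are L_j = R/m_j,
% finite field extensions of k in which alpha_{1j} = t_1 (mod m_j) is a root
% of an irreducible factor of the characteristic polynomial of theta. Since
% the field norm N_{L_j/k}(alpha_{1j}) is the determinant of theta restricted
% to the factor L_j, multiplicativity of the determinant gives
\[
\phi_1 \bigl( (\theta) \bigr) = \sum_{j=1}^{r} N_{L_j/k} \bigl( \{ \alpha_{1j} \} \bigr)
= \Bigl\{ \, \prod_{j=1}^{r} N_{L_j/k}(\alpha_{1j}) \Bigr\}
= \{ \det \theta \} \in K^M_1(k) = k^\times.
\]
```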
The isomorphism in the following theorem was first given by Nesterenko and Suslin (\cite{MR992981}) for Bloch's higher Chow groups.
Totaro, in \cite{MR1187705}, gave another proof of the theorem.
Suslin and Voevodsky, in Chapter 3 of \cite{MR1744945}, gave a proof of it for their motivic cohomology.
Here, we present another version of it for the Goodwillie-Lichtenbaum motivic complex
such that the isomorphism is given explicitly in the form which transforms
the multilinearity of the symbols of Milnor into the corresponding properties of the symbols of Goodwillie and Lichtenbaum.
\begin{theorem} \label{Milnor-iso}
For any field $k$ and $l \ge 1$, the assignment $\{ a_1,a_2, \dots, a_l \} \mapsto (a_1,a_2,\dots,a_l) $ for each Steinberg symbol
$\{a_1,a_2, \dots, a_l \}$ gives rise to an isomorphism $K^M_l(k) \simeq H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
\end{theorem}
\begin{proof}
The case $l=1$ is straightforward and we assume $l \ge 2$. By Proposition \ref{Milnor-map}, the assignment $\{ a_1,a_2, \dots, a_l \} \mapsto (a_1,a_2,\dots,a_l) $
gives rise to a homomorphism $\rho_l$ from the Milnor $K$-group $K^M_l(k)$ to the motivic cohomology group $H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
We also have a well-defined map $ \phi_l : H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr) \rightarrow K^M_l(k)$ in Lemma \ref{phi-map} and it suffices
to show that they are the inverses to each other.
It is clear that $\phi_l \circ \rho_l$ is the identity map on $K^M_l(k)$
since each Steinberg symbol is fixed by it. On the other hand, for each $z \in H^l_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$,
$\displaystyle z = \sum_{j=1}^r N_{L_j/k} \left((\alpha_{1j}, \dots, \alpha_{lj})\right)$ for some finite field extensions
$L_1, \dots, L_r$ of $k$ and $\alpha_{ij} \in L_j^\times$ ($1 \le i \le l$, $1 \le j \le r$). Then
$\displaystyle (\rho_l \circ \phi_l) (z) = \rho_l \left( \sum_j N_{L_j/k}\left( \{ \alpha_{1j}, \dots, \alpha_{lj} \} \right) \right)
= \sum_j N_{L_j/k} \left( \rho_l \left( \{ \alpha_{1j}, \dots, \alpha_{lj} \} \right) \right)
= \sum_j N_{L_j/k} \left( \left( \alpha_{1j}, \dots, \alpha_{lj} \right) \right) = z$ by Lemma \ref{norm-compatible}.
Therefore, $\rho_l \circ \phi_l$ is also the identity map and the proof is complete.
\end{proof}
\section{Multilinearity and Skew-symmetry for $H^{l-1}_{\mathcal{M}} \bigl( \mathrm{Spec} \, k, \, {\mathbb Z}(l) \bigr)$}
In \cite{MR2189214}, the author constructed a dilogarithm map $D: H^{1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(2) \bigr) \rightarrow {\mathbb R}$ whenever $k$ is a subfield
of ${\mathbb C}$ such that $D$ satisfies certain bilinearity and skew-symmetry. (See Lemma 4.8 in \cite{MR2189214}).
Since $D$ can detect all the torsion-free elements of the motivic cohomology group, e.g.,
when $k$ is a number field (\cite{MR58:22016}, \cite{MR2001i:11082}),
one expects bilinearity and skew-symmetry of symbols to hold
for $D: H^{1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(2) \bigr) \rightarrow {\mathbb R}$ in such cases.
In this section, we extend multilinearity and skew-symmetry results of the previous section to
the symbols in the motivic cohomology groups $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ when $k$ is a field.
$K_0 (k \Delta^{1}, \, \G_m^{\wedge l})$ ($l \ge 1$) can be identified with the abelian group
generated by $l$-tuples $(\theta_1,\dots,\theta_l)$ $\left( = \left( \theta_1(t),\dots,\theta_l(t)\right) \right)$
subject to certain explicit relations, where $\theta_1,\dots,\theta_l$ are commuting matrices in $GL_n(k [t])$ for various $n \ge 1$.
Similarly, $K_0 (k \Delta^2, \, \G_m^{\wedge l})$ is identified with the abelian group
generated by the symbols $\left( \theta_1(x,y),\dots,\theta_l(x,y) \right)$ with commuting $\theta_1(x,y),\dots,\theta_l(x,y)
\in GL_n(k[x,y])$ subject to certain relations, and the boundary map $\partial$ on the motivic complex sends
$\left( \theta_1(x,y),\dots,\theta_l(x,y) \right)$ to
$\left( \theta_1(1-t,t),\dots,\theta_l(1-t,t) \right) - \left( \theta_1(0,t),\dots,\theta_l(0,t) \right)
+ \left( \theta_1(t,0),\dots,\theta_l(t,0) \right)$ in $K_0 (k \Delta^1, \, \G_m^{\wedge l})$.
The same symbol $(\theta_1,\dots,\theta_l)$ will denote the element in
$K_0 (k \Delta^1, \, \G_m^{\wedge l}) / \partial K_0 (k \Delta^2, \, \G_m^{\wedge l})$ represented by $(\theta_1,\dots,\theta_l)$,
by abuse of notation. The motivic cohomology group $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ is a subgroup of this quotient group,
which consists of the elements killed by $\partial$.
\begin{lemma} \label{simple1}
In $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$, we have the following two simple relations of symbols for any commuting matrices
$\theta_1,\dots,\theta_l$ and any other commuting matrices $\psi_1,\dots,\psi_l$ in $GL_n(k[t])$:
\begin{align*}
-\left( \theta_1(t),\dots,\theta_l(t)\right) = \left( \theta_1(1-t),\dots,\theta_l(1-t)\right)
\end{align*}
$$\left( \theta_1(t),\dots,\theta_l(t) \right)+ \left( \psi_1(t),\dots,\psi_l(t) \right)
= \left( \theta_1(t)\oplus \psi_1(t), \dots, \theta_l(t) \oplus \psi_l(t) \right).$$
\end{lemma}
\begin{proof}
The second relation is immediate from the definition of the motivic complex.
The first relation can be shown by applying the boundary map $\partial$ to the element $\left( \theta_1(x),\dots,\theta_l(x) \right)$
regarded as in $K_0 (k \Delta^2, \, \G_m^{\wedge l})$ and by noting that $(\theta_1,\dots,\theta_l) = 0$
in $H^{l-1}_{\M} \left( \Spec k, \, {\mathbb Z}(l) \right)$ when $\theta_1,\dots,\theta_l$ are constant matrices.
The fact that $(\theta_1,\dots,\theta_l) = 0$ for constant matrices $\theta_1,\dots,\theta_l$ is obtained simply by
applying the boundary map $\partial$ to the element $\left( \theta_1,\dots,\theta_l \right)$ regarded as in $K_0 (k \Delta^2, \, \G_m^{\wedge l})$.
\end{proof}
\begin{corollary}
Any element of the cohomology group $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ can be represented by a single expression $(\theta_1,\dots,\theta_l)$,
where $\theta_1,\dots,\theta_l$ are commuting matrices in $GL_n(k[t])$ for some nonnegative integer $n$.
\end{corollary}
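The reduction behind this corollary uses only the two relations of Lemma \ref{simple1}: the substitution $t \mapsto 1-t$ produces a minus sign and the block direct sum produces a sum, so for instance

```latex
\[
\left( \theta_1(t),\dots,\theta_l(t)\right) - \left( \psi_1(t),\dots,\psi_l(t)\right)
= \left( \theta_1(t) \oplus \psi_1(1-t), \ \dots, \ \theta_l(t) \oplus \psi_l(1-t) \right)
\]
% in H^{l-1}_M(Spec k, Z(l)); iterating this collapses any integer
% combination of symbols to a single symbol.
```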
We remark that the symbol $\left( \theta_1(t),\dots,\theta_l(t)\right)$ represents an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$
only when its image under the boundary map $\partial$ vanishes in $K_0 (k \Delta^0, \, \G_m^{\wedge l})$.
A tuple $\left( \theta_1(t),\dots,\theta_l(t)\right)$ where $\theta_1,\dots,\theta_l$ are commuting matrices in $GL_n(k[t])$
is called irreducible if $k[t]^n$ has no nontrivial proper submodule when regarded as a $k[t,x_1, x_1^{-1}, \dots, x_l, x_l^{-1}]$-module
where $x_i$ acts on $k[t]^n$ via $\theta_i(t)$ for each $i=1, \dots, l$. Note that if $k(t)^n$ is regarded as
a $k(t)[x_1, x_1^{-1}, \dots, x_l, x_l^{-1}]$-module with the same actions and if $M$ is a nontrivial proper
submodule of $k(t)^n$, then $M \cap k[t]^n$ is a nontrivial proper $k[t,x_1, x_1^{-1}, \dots, x_l, x_l^{-1}]$-submodule of $k[t]^n$. Therefore,
$k(t)^n$ is irreducible as a $k(t)[x_1, x_1^{-1}, \dots, x_l, x_l^{-1}]$-module if $\left( \theta_1(t),\dots,\theta_l(t)\right)$ is irreducible.
It can be easily checked that, if two matrices $A, B \in GL_n(k)$ commute and $A$ is a block matrix of the form
$\displaystyle A = \begin{pmatrix} I & 0 \\ 0 & C \end{pmatrix}$ where $I$ is a matrix whose characteristic polynomial is a power of $x-1$ and
$C$ does not have 1 as an eigenvalue, then $B$ must be a block matrix $\displaystyle B = \begin{pmatrix} B_1 & 0 \\ 0 & B_2 \end{pmatrix}$,
where the blocks $B_1$ and $B_2$ are of sizes compatible with the blocks $I$ and $C$ of $A$. Therefore, we may easily relax the notion of
irreducibility of a symbol $\left( \theta_1(t),\dots,\theta_l(t)\right)$ as an element in $K_0 (k \Delta^1, \, \G_m^{\wedge l})$
by declaring it \underline{irreducible} when its restriction to the largest submodule $V \subset k[t]^n$, where none of the restrictions of $\theta_1(t),\dots,\theta_l(t)$
has 1 as an eigenvalue, is irreducible.
\begin{theorem} \label{multilinear} (Multilinearity)
Suppose that $\varphi(t), \psi(t)$ and $\theta_1(t), \dots, \theta_l(t)$ (with $\theta_i(t)$ omitted) are commuting matrices
in $GL_n(k [t])$ such that the symbol represented by one of these matrices is irreducible in $K_0 (k \Delta^1, \, \G_m^{\wedge 1})$.
Assume further that the symbols $\bigl( \theta_1(t) ,\dots, \varphi(t),\dots, \theta_l(t) \bigr)$ and
$\bigl( \theta_1(t) ,\dots, \psi(t),\dots, \theta_l(t) \bigr)$ represent elements in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
Then $\bigl( \theta_1(t) ,\dots, \varphi(t) \psi(t),\dots, \theta_l(t) \bigr)$ represents an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ and
$$\bigl( \theta_1(t) ,\dots, \varphi(t),\dots, \theta_l(t) \bigr) + \bigl( \theta_1(t) ,\dots, \psi(t),\dots, \theta_l(t) \bigr)
= \bigl( \theta_1(t) ,\dots, \varphi(t) \psi(t),\dots, \theta_l(t) \bigr)$$
in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
\end{theorem}
\begin{proof}
For simplicity of notation, we may assume that $i=1$ and prove multilinearity in the first variable, i.e., we will show that
$$\bigl( \varphi(t),\theta_2(t), \dots, \theta_l(t) \bigr) + \bigl( \psi(t),\theta_2(t), \dots, \theta_l(t) \bigr)
= \bigl( \varphi(t)\psi(t),\theta_2(t), \dots, \theta_l(t) \bigr).$$
In this proof, all equalities are in $K_0 (k \Delta^{1}, \, \G_m^{\wedge l}) / \partial K_0 (k \Delta^{2}, \, \G_m^{\wedge l})$
unless mentioned otherwise, and $1$ denotes the identity matrix $1_n$ of size $n$ whenever appropriate.
Let $p(t)$ and $q(t)$ be matrices with entries in $k [t]$ such that $p(t)$ is invertible and
$p(t)$, $q(t)$ and $\theta_2(t), \dots, \theta_l(t)$ commute. Then
the boundary of the element
$$ \left( {\begin{pmatrix} 0 & 1 \\- p(y) & x y q(y)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(y) & 0 \\ 0 & \theta_2(y) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(y) & 0 \\ 0 & \theta_l(y) \end{pmatrix}} \right)$$
of $K_0 (k \Delta^2, \, \G_m^{\wedge l})$ vanishes in
$H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ by the definition of the cohomology group. Hence we have
\begin{multline*}
0 = \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & (1-t) t q(t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & 0 \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right)
+ \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & 0
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{multline*}
But, as in the proof of Lemma \ref{simple1}, the last term, which is a tuple of constant matrices, is 0 and we have
\begin{multline} \label{bilinear-equ1}
\quad \quad \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & (1-t) t q(t) \end{pmatrix}},\
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
= \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & 0 \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right). \quad \quad
\end{multline}
Next, by taking the boundary of $\left( {\begin{pmatrix} 0 & 1 \\ -p(y) & (x+y) q(y)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(y) & 0 \\ 0 & \theta_2(y) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(y) & 0 \\ 0 & \theta_l(y) \end{pmatrix}} \right)$, we get
\begin{multline} \label{bilinear-equ2}
\left( {\begin{pmatrix} 0 & 1 \\ - p(t) & q(t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
= \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & t q(t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & t q(0)
\end{pmatrix}},\ {\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{multline}
If $p(t)$, $q(t)$ and $\theta_2(t), \dots, \theta_l(t)$ are replaced by $p(1-t)$, $(1-t)q(1-t)$ and $\theta_2(1-t), \dots, \theta_l(1-t)$, respectively, in
(\ref{bilinear-equ2}), then we obtain
\begin{multline} \label{bilinear-equ3}
\left( {\begin{pmatrix} 0 & 1 \\ - p(1-t) & (1-t)q(1-t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(1-t) & 0 \\ 0 & \theta_2(1-t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1-t) & 0 \\ 0 & \theta_l(1-t) \end{pmatrix}} \right) \\
= \left( {\begin{pmatrix} 0 & 1 \\ -p(1-t) & t(1-t) q(1-t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(1-t) & 0 \\ 0 & \theta_2(1-t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1-t) & 0 \\ 0 & \theta_l(1-t) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -p(1) & t q(1)
\end{pmatrix}},\ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right).
\end{multline}
If we apply Lemma \ref{simple1} to the first term, the right hand side of the equality (\ref{bilinear-equ2}) can be written as
\begin{multline*}
- \left( {\begin{pmatrix} 0 & 1 \\ -p(1-t) & (1-t) q(1-t) \end{pmatrix}},\
{\begin{pmatrix} \theta_2(1-t) & 0 \\ 0 & \theta_2(1-t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1-t) & 0 \\ 0 & \theta_l(1-t) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & t q(0)
\end{pmatrix}},\ {\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{multline*}
By applying (\ref{bilinear-equ3}) to the first term and by (\ref{bilinear-equ2}), we have
\begin{align*}
&\quad \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & q(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
&= - \left( {\begin{pmatrix} 0 & 1 \\ -p(1-t) & t(1-t) q(1-t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(1-t) & 0 \\ 0 & \theta_2(1-t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1-t) & 0 \\ 0 & \theta_l(1-t) \end{pmatrix}} \right) \\
&\hskip 1 in + \left( {\begin{pmatrix} 0 & 1 \\ -p(1) & t q(1)
\end{pmatrix}} , {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right)\\
\displaybreak[0]
&\hskip 2 in - \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & t q(0)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
&= \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & t(1-t) q(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
&\hskip 1 in + \left( {\begin{pmatrix} 0 & 1 \\ -p(1) & t q(1)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
\displaybreak[0]
&\hskip 2 in - \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & t q(0) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
&= \left( {\begin{pmatrix} 0 & 1 \\ -p(t) & 0 \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
&\hskip 1 in + \left( {\begin{pmatrix} 0 & 1 \\ -p(1) & t q(1)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 2 in - \left( {\begin{pmatrix} 0 & 1 \\ -p(0) & t q(0) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{align*}
The second equality is obtained by applying Lemma \ref{simple1} to the first term and the last equality is by (\ref{bilinear-equ1}).
Now by setting $p(t) = \varphi(t)\psi(t)$ and $q(t) = \varphi(t) + \psi(t)$ in the above equality, we have
\begin{multline} \label{bilinear-equ4}
\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & \varphi(t)+ \psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
= \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 0 \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \hskip 1.5 in \\
+ \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( \varphi(1)+ \psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{multline}
Similarly, with $p(t) = \varphi(t)\psi(t)$ and $q(t) = 1 + \varphi(t)\psi(t)$ this time, we get
\begin{multline} \label{bilinear-equ5}
\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1 + \varphi(t)\psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
= \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 0 \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \hskip 1.5 in \\
+ \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( 1 + \varphi(1)\psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
- \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{multline}
The first terms on the right of (\ref{bilinear-equ4}) and (\ref{bilinear-equ5}) are the same,
so by subtracting (\ref{bilinear-equ5}) from (\ref{bilinear-equ4}), we obtain
\begin{align*}
&\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & \varphi(t)+ \psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right)\\
&\hskip 1 in - \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1 + \varphi(t)\psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
&=\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( \varphi(1)+ \psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 1 in - \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
&\hskip 0.5 in - \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( 1 + \varphi(1)\psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 1.5 in + \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{align*}
Now we state our claim:
\vskip 0.1 in
\textbf{Claim:} \textit{The right hand side of the above equality is equal to 0.}
\vskip 0.1 in
Once the claim is proved, we obtain the following equality.
\begin{multline} \label{bilinear-equ6}
\quad \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & \varphi(t)+ \psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right)\\
= \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1 + \varphi(t)\psi(t) \end{pmatrix}},
\ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right). \quad
\end{multline}
To prove the claim, note first that, by our assumption, one of $\varphi(t)$, $\psi(t)$ and $\theta_2(t), \dots, \theta_l(t)$, denoted $\theta(t)$,
is irreducible on the largest submodule $V \subset k[t]^n$, where none of the restrictions of $\theta_1(t),\dots,\theta_l(t)$
has 1 as an eigenvalue. We may easily assume that $V=k[t]^n$ since all the symbols under our interest vanish on the complement of $V$ in $k[t]^n$.
Then all of $\varphi(t)$, $\psi(t)$ and $\theta_2(t), \dots, \theta_l(t)$ can be written as polynomials of $\theta(t)$ with coefficients in $k(t)$.
Since $\bigl( \varphi(0), \theta_2(0),\dots,\theta_l(0)\bigr) = \bigl( \varphi(1), \theta_2(1),\dots,\theta_l(1)\bigr)$
in $K_0 (k, \, \G_m^{\wedge l})$ by our assumption, it follows that $S \varphi(0) S^{-1} = \varphi(1)$, $S \theta_i(0) S^{-1}=\theta_i(1)$
for every legitimate $i$, for some $S \in GL_n(k)$. Now, it is immediate that, in $K_0 (k \Delta^1, \, \G_m^{\wedge l})$,
\begin{align*}
&\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( \varphi(1)+ \psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 1 in = \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
&\text{and } \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( 1 + \varphi(1)\psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 1 in = \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{align*}
Therefore, the proof of the claim is complete.
Thanks to the identities (\ref{linear-equa}) and (\ref{linear-equb}), we have, by (\ref{bilinear-equ6}),
\begin{align*}
&\quad \ \left( {\begin{pmatrix} \psi(t) & 1 \\ 0 & \varphi(t) \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
\displaybreak[0]
&=\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t) \psi(t) & \varphi(t)+\psi(t) \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
\displaybreak[0]
&=\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1+\varphi(t)\psi(t) \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
&=\left( {\begin{pmatrix} 1 & 1 \\ 0 & \varphi(t)\psi(t) \end{pmatrix}}, \ {\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right).
\end{align*}
Hence $\bigl( \varphi(t),\,\theta_2(t), \dots, \theta_l(t) \bigr) + \bigl( \psi(t), \,\theta_2(t), \dots, \theta_l(t) \bigr)
= \bigl( \varphi(t)\psi(t), \, \theta_2(t), \dots, \theta_l(t) \bigr)$, as required.
\end{proof}
The irreducibility assumption in Theorem \ref{multilinear} is used only to justify the claim in the proof of the theorem.
The various conditions in the following corollary can replace the irreducibility assumption in the theorem.
We state the multilinearity of symbols only in the
first coordinate to simplify the notation, but a similar statement in another coordinate holds obviously.
\begin{corollary} \label{multilinear1}
Suppose that $\varphi(t), \psi(t), \theta_2(t), \dots, \theta_l(t)$ are commuting matrices in $GL_n(k [t])$
and that the symbols $\bigl( \varphi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ and
$\bigl( \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ represent elements in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
Then $\bigl( \varphi(t)\psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ represents an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ and
$$\bigl( \varphi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr) + \bigl( \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)
= \bigl( \varphi(t)\psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$$
in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ if one of the following assumptions is satisfied:
(i) The symbol $\bigl( \varphi(t), \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ is irreducible and
$k$ is a field of characteristic $0$ or $n < char(k)$.
(ii) There exists a filtration $0=V_0 \subset V_1 \subset \dots \subset V_n=k[t]^n$ of $k[t,x_0, x_0^{-1}, \dots, x_l, x_l^{-1}]$-modules where
$x_0$ and $x_1$ act via $\varphi(t)$ and $\psi(t)$ and $x_i$ acts via $\theta_i(t)$ for $i \ge 2$ such that the restriction of the symbol
$\bigl( \varphi(t), \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ to each $V_{i+1}/V_i$ ($i=0, \dots, n-1$) is irreducible and
$k$ is of characteristic $0$ or $n < char(k)$.
(iii) One of the matrices $\varphi(t), \psi(t), \theta_2(t) ,\dots, \theta_l(t)$ has a characteristic polynomial equal to its minimal polynomial.
This is the case, for example, when one of the matrices is a companion matrix of a polynomial with coefficients in $k[t]$ and constant term in $k^\times$.
\end{corollary}
\begin{proof}
$(i)$ $k(t)^n$ as a $k(t)[x_0, x_0^{-1}, \dots, x_l, x_l^{-1}]$-module,
where $x_0$ and $x_1$ act via $\varphi(t)$ and $\psi(t)$ and $x_i$ acts via $\theta_i(t)$ for $i \ge 2$, is irreducible. Therefore, it is a field
extension of $k(t)$ of degree $n$. By our assumption on the field $k$, it is generated by a primitive element, say $\theta(t)$, and all of
$\varphi(t), \psi(t), \theta_2(t) ,\dots, \theta_l(t)$ can be written as polynomials of $\theta(t)$ with coefficients in $k(t)$.
So the claim in the proof of Theorem \ref{multilinear} holds and we obtain the multilinearity.
$(ii)$ is an obvious consequence of $(i)$.
$(iii)$ is true since any matrix which commutes with a given companion matrix of a polynomial can be written as a polynomial in the companion matrix
(Theorem 5 of Chapter 1 in \cite{MR0201472}).
\end{proof}
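The companion-matrix fact invoked in $(iii)$ can be illustrated numerically. In the following sketch, the polynomial $x^3 - x - 2$ and the commuting matrix $B$ are assumed sample choices (not taken from the text above); the check confirms that a matrix commuting with a companion matrix is recovered as a polynomial in it.

```python
import numpy as np

# Companion matrix of x^3 - x - 2 (nonzero constant term, so C is invertible).
C = np.array([[0.0, 0.0, 2.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# B commutes with C but is not presented as a polynomial in C a priori.
B = np.linalg.inv(np.eye(3) + C)

# Solve B = c0*I + c1*C + c2*C^2 by least squares and check the fit is exact,
# as guaranteed by Cayley-Hamilton together with the cited theorem.
K = np.column_stack([np.eye(3).ravel(), C.ravel(), (C @ C).ravel()])
coef, *_ = np.linalg.lstsq(K, B.ravel(), rcond=None)
recon = coef[0] * np.eye(3) + coef[1] * C + coef[2] * (C @ C)
print(np.allclose(recon, B))  # True
```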
In the following corollary, we don't require the commutativity of $\varphi(t)$ and $\psi(t)$.
\begin{corollary} \label{multilinear2}
Suppose that $\theta_2(t), \dots, \theta_l(t)$ are commuting matrices in $GL_n(k [t])$
which commute also with $\varphi(t), \psi(t) \in GL_n(k[t])$ and that the symbols $\bigl( \varphi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ and
$\bigl( \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ represent elements in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
Then $\bigl( \varphi(t)\psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$ represents an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ and
$$\bigl( \varphi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr) + \bigl( \psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)
= \bigl( \varphi(t)\psi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$$
in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ if one of the following assumptions is satisfied:
(i) $\varphi(0)=\varphi(1)$, $\psi(0)=\psi(1)$ and $\theta_i(0)=\theta_i(1)$ for $i=2, \dots, l$ as matrices in $GL_n(k)$.
(ii) $\theta_i(0)$ or $\theta_i(1)$ has $n$ distinct eigenvalues for some $i=2, \dots, l$.
\end{corollary}
\begin{proof}
$(i)$ clearly guarantees the claim in the proof of Theorem \ref{multilinear}.
$(ii)$ We may assume that none of $\varphi(0), \psi(0), \theta_2(0) ,\dots, \theta_l(0)$ has 1 as an eigenvalue.
If $\theta_i(0)$ has $n$ distinct eigenvalues for some $i$, then $\theta_i(1)$ also has the same $n$ distinct eigenvalues since
$(\theta_i(0)) = (\theta_i(1))$ in $K_0 (k, \, \G_m^{\wedge 1})$ by the assumption that $\bigl( \varphi(t), \theta_2(t) ,\dots, \theta_l(t) \bigr)$
belongs to $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
Also, each of $\varphi(0), \psi(0), \theta_2(0) ,\dots, \theta_l(0)$ is
diagonalizable by the same similarity matrix by the commutativity of the matrices with $\theta_i(0)$.
Let us denote the tuples of joint eigenvalues by $(a_i, b_i, c_{2i}, \dots, c_{li})$ for $i=1, \dots, n$.
A similar statement is true for $\varphi(1), \psi(1), \theta_2(1) ,\dots, \theta_l(1)$ and their joint eigenvalues are denoted by
$(a'_i, b'_i, c'_{2i}, \dots, c'_{li})$ for $i=1, \dots, n$. By permuting the indices $i$ if necessary, we may assume that
$a_i = a'_i$, $b_i = b'_i$, $c_{ji}=c'_{ji}$ for $j=2, \dots, l$ and $i=1, \dots, n$.
Then the claim in the proof of Theorem \ref{multilinear} holds since
\begin{align*}
&\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( \varphi(1)+ \psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 0.3 in = \sum_{i=1}^n \left( {\begin{pmatrix} 0 & 1 \\ -a_i b_i & t \bigl( a_i+ b_i \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} c_{2i} & 0 \\ 0 & c_{2i} \end{pmatrix}}, \dots, {\begin{pmatrix} c_{li} & 0 \\ 0 & c_{li} \end{pmatrix}} \right) \\
&\hskip 1 in = \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
&\text{and similarly } \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(1)\psi(1) & t \bigl( 1 + \varphi(1)\psi(1) \bigr)
\end{pmatrix}}, \ {\begin{pmatrix} \theta_2(1) & 0 \\ 0 & \theta_2(1) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(1) & 0 \\ 0 & \theta_l(1) \end{pmatrix}} \right) \\
&\hskip 1 in = \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(0) & 0 \\ 0 & \theta_2(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right).
\end{align*}
\end{proof}
\begin{note}
(i) In Theorem \ref{multilinear}, the commutativity of $\varphi(t)$ and $\psi(t)$ would not have been necessary if we wanted merely to define the symbols
$\bigl( \varphi(t),\theta_2(t), \dots, \theta_l(t) \bigr)$ and $\bigl( \psi(t),\theta_2(t), \dots, \theta_l(t) \bigr)$.
But, if we do not insist on the commutativity of these two matrices, then $\bigl( \varphi(t)\psi(t),\theta_2(t), \dots, \theta_l(t) \bigr)$
does not have to represent an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ even if the symbols
$\bigl( \varphi(t),\theta_2(t), \dots, \theta_l(t) \bigr)$ and $\bigl( \psi(t),\theta_2(t), \dots, \theta_l(t) \bigr)$ do.
For example, take $l=2$ and let $a, b \in k-\{0,1\}$ be two distinct numbers and take any $c \in k-\{0,1\}$. Let
$$\varphi(t) = \begin{pmatrix} (a+b)t & {\frac {(a+b)^2} {ab}} t(1-t) -1 \\ ab & (a+b)(1-t) \end{pmatrix},
\psi(t) = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix},
\theta(t) = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix} $$
Then the boundaries of both $\bigl( \psi(t),\, \theta(t) \bigr)$ and $\bigl( \varphi(t),\, \theta(t) \bigr)$ are 0, but
the boundary of $\bigl(\varphi(t) \psi(t),\, \theta(t) \bigr)$ is not 0 in $K_0 (k \Delta^0, \, \G_m^{\wedge 2})$.
(ii) The irreducibility condition in Theorem \ref{multilinear} and the similar assumptions in Corollaries \ref{multilinear1} and \ref{multilinear2}
are essential. For example, take $l=1$ and let $a, b \in k - \{0,1\}$ be two distinct elements.
Find any distinct $c, d \in k-\{0,\pm 1\}$ such that the set
$\{a, acd, bc, bd \}$ is not equal to $\{ac, ad, b, bcd\}$. Consider
$$A(t) = \begin{pmatrix} a & 0& 0& 0 \\ 0 & a & 0 & 0 \\ 0 & 0 & b & 0 \\ 0 & 0 & 0 & b \end{pmatrix}, \
B(t) = \begin{pmatrix} 0 & -cd & 0& 0 \\ 1 & (c+d)t + (1+cd)(1-t) & 0 & 0 \\ 0 & 0 & 0 & -cd \\ 0 & 0 & 1 & (c+d)(1-t) + (1+cd)t \end{pmatrix}.$$
Then $A(0)=A(1)$ and $(B(0)) = (1) + (cd) + (c) + (d) = (B(1))$ in $K_0 (k, \, \G_m^{\wedge 1})$.
But, $(A(0)B(0)) = (a) + (acd) + (bc) + (bd) \ne (ac) + (ad) + (b) + (bcd) = (A(1)B(1))$ in $K_0 (k, \, \G_m^{\wedge 1})$ and thus
$(A(t)B(t))$ does not represent an element in $H^{0}_{\M} \bigl( \Spec k, \, {\mathbb Z}(1) \bigr)$.
\end{note}
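The eigenvalue bookkeeping in the last example can be verified numerically. The sketch below uses assumed sample values $a=2$, $b=3$, $c=2$, $d=5$ (any admissible choice satisfying the constraints above would do).

```python
import numpy as np

a, b, c, d = 2.0, 3.0, 2.0, 5.0  # samples with a,b outside {0,1} and c,d outside {0,1,-1}

A = np.diag([a, a, b, b])  # A(t) is constant in t

def B(t):
    # Block-diagonal B(t) from the example above.
    top = [[0.0, -c * d], [1.0, (c + d) * t + (1 + c * d) * (1 - t)]]
    bot = [[0.0, -c * d], [1.0, (c + d) * (1 - t) + (1 + c * d) * t]]
    M = np.zeros((4, 4))
    M[:2, :2], M[2:, 2:] = top, bot
    return M

ev0 = sorted(np.linalg.eigvals(A @ B(0)).real)  # {a, acd, bc, bd} = {2, 20, 6, 15}
ev1 = sorted(np.linalg.eigvals(A @ B(1)).real)  # {ac, ad, b, bcd} = {4, 10, 3, 30}
print(ev0, ev1)  # different multisets, as claimed
```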
\begin{proposition} \label{skewsymmetry}
(Skew-Symmetry) Suppose that $\theta_1(t),\dots, \theta_l(t) \in GL_n(k [t])$ commute and one of the symbols represented by
$\theta_1(t),\dots, \theta_{l-1}(t)$ or $\theta_l(t)$ is irreducible.
If $\bigl( \theta_1(t),\dots, \theta_l(t) \bigr)$ represents an element in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$ ($l \ge 2$), then
$\bigl( \theta_1(t),\dots,\theta_i(t),\dots,\theta_j(t),\dots,\theta_l(t) \bigr) =
-\bigl( \theta_1(t),\dots,\theta_j(t),\dots,\theta_i(t),\dots,\theta_l(t) \bigr)$ in $H^{l-1}_{\M} \bigl( \Spec k, \, {\mathbb Z}(l) \bigr)$.
\end{proposition}
\begin{proof}
For simplicity of notations, we assume that $i=1$ and $j=2$. Let $\varphi=\theta_1$ and $\psi=\theta_2$.
An argument similar to the one utilized in the proof of Theorem \ref{multilinear} can be used to prove that
\begin{multline} \label{skewsymmetryeq}
\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t) \psi(t) & \varphi(t)+\psi(t) \end{pmatrix}},
\ {\begin{pmatrix} 0 & 1 \\ -\varphi(t) \psi(t) & \varphi(t)+\psi(t) \end{pmatrix}},\
{\begin{pmatrix} \theta_3(t) & 0 \\ 0 & \theta_3(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
=\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1+\varphi(t)\psi(t) \end{pmatrix}},
\ {\begin{pmatrix} 0 & 1 \\ -\varphi(t)\psi(t) & 1+\varphi(t)\psi(t) \end{pmatrix}},\
{\begin{pmatrix} \theta_3(t) & 0 \\ 0 & \theta_3(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right).
\end{multline}
\begin{multline*}
\text{ Just replace }\left( {\begin{pmatrix} 0 & 1 \\- p(t) & q(t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_2(t) & 0 \\ 0 & \theta_2(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right) \\
\text{ by }\left( {\begin{pmatrix} 0 & 1 \\- p(t) & q(t) \end{pmatrix}}, \
{\begin{pmatrix} 0 & 1 \\- p(t) & q(t) \end{pmatrix}}, \
{\begin{pmatrix} \theta_3(t) & 0 \\ 0 & \theta_3(t) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(t) & 0 \\ 0 & \theta_l(t) \end{pmatrix}} \right)
\end{multline*}
and make similar replacements throughout the course of the proof of the claim in the proof of Theorem \ref{multilinear}.
Then note that
\begin{align*}
&\left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( \varphi(0)+ \psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_3(0) & 0 \\ 0 & \theta_3(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right) \\
& = \left( {\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} 0 & 1 \\ -\varphi(0)\psi(0) & t \bigl( 1 + \varphi(0)\psi(0) \bigr) \end{pmatrix}}, \
{\begin{pmatrix} \theta_3(0) & 0 \\ 0 & \theta_3(0) \end{pmatrix}}, \dots, {\begin{pmatrix} \theta_l(0) & 0 \\ 0 & \theta_l(0) \end{pmatrix}} \right)
\end{align*}
to show that the right-hand side of an equality similar to the one in the claim in the proof of Theorem \ref{multilinear} vanishes.
This proves (\ref{skewsymmetryeq}).
From (\ref{skewsymmetryeq}), we have, using (\ref{linear-equa}) and (\ref{linear-equb}),
$$ \bigl( \varphi(t)\psi(t),\, \varphi(t)\psi(t), \theta_3(t), \dots, \theta_l(t) \bigr)
= \bigl(\varphi(t),\, \varphi(t), \theta_3(t), \dots, \theta_l(t) \bigr) + \bigl( \psi(t),\, \psi(t), \theta_3(t), \dots, \theta_l(t) \bigr).$$
On the other hand, by Theorem \ref{multilinear}, we also have
\begin{multline*}
\quad \bigl( \varphi(t)\psi(t),\, \varphi(t)\psi(t), \theta_3(t), \dots, \theta_l(t) \bigr) \\
= \bigl( \varphi(t),\, \varphi(t),\theta_3(t), \dots, \theta_l(t) \bigr) + \bigl( \varphi(t),\, \psi(t),\theta_3(t), \dots, \theta_l(t) \bigr ) \\
+ \bigl ( \psi(t), \, \varphi(t),\theta_3(t), \dots, \theta_l(t) \bigr) + \bigl( \psi(t), \, \psi(t),\theta_3(t), \dots, \theta_l(t) \bigr).
\end{multline*}
The equality of the right hand sides of these two identities leads to the skew-symmetry.
\end{proof}
The irreducibility assumption in Proposition \ref{skewsymmetry} can be replaced by an assumption similar to one of the conditions
in Corollary \ref{multilinear1} or \ref{multilinear2}. For example, it is enough to require that the symbol $\bigl( \theta_1(t),\dots, \theta_l(t) \bigr)$
is irreducible if the field $k$ is of characteristic 0.
\bibliographystyle{plain}
\section{Introduction}
The physical and physiological mechanisms of sound production are
important for understanding mammal vocalization, which ranges from
periodic vocal fold vibrations to completely aperiodic vibration
and atonal noise. Between these two extremes, a large number of
phenomena have been observed and reported \cite{m2,m4}:
biphonation, cycles, subharmonics and chaotic behavior. These
two-mass model (the most widely accepted for the mammal apparatus of
phonation) can exhibit irregular oscillations
\cite{2masse,jasa110-7}.
The apparatus of phonation can be investigated through the
characterization of animal vocalizations, in which vocal
nonlinearities can be exploited. According to Tokuda \cite{toku}, the
nonlinear analysis of human speech signals has been carried out
extensively, while the nonlinear characteristics of animal voice
signals have not yet been investigated.
Using the methods of nonlinear time series analysis we wished to
understand the mechanics of the vocal folds starting from the
vocalization time series. The characterization of the vocal
signal as a chaotic time series can give important information on
the health status of the animal, since the oscillation modes are
related to the status of the throat tissues and to the strength of
the animal. Furthermore, the tissue shapes of the vocal apparatus
differ among animals, and the characterization of several
chaotic signals can be used in the monitoring of biodiversity.
The last remaining populations of a subspecies of the red deer,
the Sardinian deer ({\em Cervus elaphus corsicanus}), are found in
the well preserved evergreen forest of {\em Monte Arcosu} in
Sardinia (a protected area owned by WWF Italy). {\em Cervus
elaphus} is the largest and most phylogenetically advanced species
of Cervus. Head and body length is 1.65-2.65m, tail length is
0.11-0.27m, height at the shoulder is 0.75-1.50m, and weight is
75-340 kg. The largest and strongest male generally has the
largest harem. In order to maintain this position of superiority
he must constantly keep the distance with rival males by bellowing
out, and chasing off potential rivals who come near his females.
After vocalizing, the largest remaining males size each other up,
and if antler and body size are comparable, they battle for the
females. Their antlers lock and each male attempts to forcefully
push the other away. The strongest and most powerful male wins and
secures a harem (group) of females for mating. In this work, an
extensive characterization of the vocalization of {\em Cervus
elaphus corsicanus} is presented by means of Lyapunov exponents,
which provide evidence of chaotic oscillations in the recorded sounds.
\section{Material}
A number of different signals corresponding to different sound
emissions were considered. Only clear and low noise sound
emissions have been analyzed, in order to focus exclusively on
meaningful vocalizations, and to avoid spurious effects. The
vocalizations were recorded from adult males in their natural
environment and digitized with a sampling frequency of 22050 Hz.
Fig.\ref{fig1}(b) shows a small portion of the analyzed signal and
in Fig.\ref{fig1}(d), the spectrogram (512 points FFT) of the
signal is shown.
The Discrete Fourier Transform (Fig.\ref{fig1}(c)) was used to
perform a preliminary spectral analysis on vocalization units. The
presence of regions with high density of unresolved frequencies
is a necessary, even if not sufficient, condition for the
occurrence of chaotic dynamical regimes \cite{ott93}. Non-linear
dynamics analysis were, therefore, was limited to signal units
characterized by broad-band features in the frequency domain.
Results reported in the present work refer to a single signal
0.420s long. The time series examined consists of a 9455 points
sampled at 22050Hz.
\section{Computational methods}
The analysis of the time series was performed using the software
package TISEAN\footnote{The TISEAN software package is publicly
available at
http://www.mpipks-dresden.mpg.de/$\sim$tisean/TISEAN$\_2.1$/index.html.}
(TIme SEries ANalysis) \cite{Kantz97}, widely regarded as the most
well-known and robust algorithm set for nonlinear time series analysis.
Typical steps are attractor reconstruction from time series and
the characterization of the chaotic dynamic by means of Lyapunov
exponents and maximum Lyapunov exponent (MLE).
\subsection{Attractor reconstruction}
The attractor of the underlying dynamics has been reconstructed in
phase space by applying the time delay vector method
\cite{ott93,aba96}.
Starting from a time series $s(t)=[s_1,\dots,s_N]$ the system
dynamic can be reconstructed using the delay theorem by Takens and
Ma\~{n}e. The reconstructed trajectory $\mathbf{X}$ can be
expressed as a matrix where each row is a phase space vector:
\begin{equation}
\mathbf{X}=[X_{1},X_{2},\dots,X_{M}]^T
\end{equation}
where $X_{i}=[s_{i},s_{i+T},\dots,s_{i+(D_{E}-1)T}]$ and
$M=N-(D_{E}-1)T$.
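A minimal sketch of this delay-embedding construction (illustrative only, not the TISEAN implementation) is the following; the toy sinusoid is an assumed stand-in for the recorded signal.

```python
import numpy as np

def delay_embed(s, D_E, T):
    """Rows are the delay vectors X_i = [s_i, s_{i+T}, ..., s_{i+(D_E-1)T}]."""
    M = len(s) - (D_E - 1) * T
    return np.array([s[i : i + (D_E - 1) * T + 1 : T] for i in range(M)])

s = np.sin(0.1 * np.arange(500))  # toy signal
X = delay_embed(s, D_E=4, T=8)    # parameter values found for the deer signal
print(X.shape)  # (476, 4): M = N - (D_E - 1) T rows, D_E columns
```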
The matrix is characterized by two key parameters: the {\em
Embedding Dimension} $D_{E}$ and the {\em Delay Time} $T$. The
embedding dimension is the minimum dimension at which the
reconstructed attractor can be considered completely unfolded and
there is no overlapping in the reconstructed trajectories. If the
chosen dimension is lower than $D_{E}$ the attractor is not
completely unfolded and the underlying dynamics cannot be
investigated. A higher dimension was not used due to the increase
in computational effort.
The algorithm used for the computation of $D_{E}$ is the method of
{\em False Nearest Neighbors}\cite{aba93}. A false neighbor is a
point of trajectory intersection in a poorly reconstructed
attractor. As the dimension increases, the attractor is unfolded
with greater fidelity, and the number of false neighbors decreases to zero. The first
dimension with no overlapping points is $D_E$.
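The idea can be sketched as follows; the threshold \texttt{ratio} and the absence of the further normalizations used by TISEAN are simplifying assumptions, and the sinusoidal test signal is an assumed example.

```python
import numpy as np

def fnn_fraction(s, m, T, ratio=10.0):
    """Fraction of false nearest neighbors in embedding dimension m.

    A neighbor is declared false when adding the (m+1)-th delay
    coordinate separates it from its nearest neighbor by more than
    `ratio` times their distance in dimension m (simplified sketch).
    """
    N = len(s) - m * T
    X = np.array([s[i : i + m * T : T] for i in range(N)])  # m-dimensional vectors
    false = 0
    for i in range(N):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                 # nearest neighbor in dimension m
        extra = abs(s[i + m * T] - s[j + m * T])  # separation in the new coordinate
        if extra > ratio * max(d[j], 1e-12):
            false += 1
    return false / N

s = np.sin(0.1 * np.arange(400))
f1, f2 = fnn_fraction(s, 1, 8), fnn_fraction(s, 2, 8)
print(f1, f2)  # the false-neighbor fraction drops as m increases
```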
The delay time $T$ represents a measure of correlation existing
between two consecutive components of $D_{E}$-dimensional vectors
used in the trajectory reconstruction. Following a commonly
applied methodology, the time delay $T$ is chosen in
correspondence to the first minimum of the average mutual
information function \cite{fra86}.
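A rough sketch of the average mutual information estimate follows; the histogram binning and the AR(1) test series are assumed choices for illustration, and in practice $T$ is read off as the first minimum of this quantity over increasing lags.

```python
import numpy as np

def avg_mutual_information(s, T, bins=16):
    """Histogram estimate of the mutual information between s(t) and s(t+T)."""
    h, _, _ = np.histogram2d(s[:-T], s[T:], bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

# On a strongly correlated toy series the information decays with the lag.
rng = np.random.default_rng(1)
s = np.zeros(5000)
for k in range(1, 5000):
    s[k] = 0.95 * s[k - 1] + rng.standard_normal()
print(avg_mutual_information(s, 1) > avg_mutual_information(s, 40))  # True
```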
\subsection{Lyapunov exponents}
Chaotic systems display a sensitive dependence on initial
conditions. Such a property deeply affects the time evolution of
trajectories starting from infinitesimally close initial
conditions, and Lyapunov exponents are a measure of this
dependence. These characteristic exponents give a coordinate
independent measure of the local stability properties of a
trajectory. If the trajectory evolves in a $N$-dimensional state
space there are $N$ exponents arranged in decreasing order,
referred to as the {\em Spectrum of Lyapunov Exponents (SLE)}:
\begin{equation}
\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_{n}
\end{equation}
Conceptually, these exponents are a generalization of the eigenvalues
used to characterize different types of equilibrium points.
A trajectory is chaotic if there is at least one positive
exponent. The value of this exponent, called the {\em Maximum
Lyapunov Exponent (MLE)}, gives a measure of the divergence rate of
infinitesimally close trajectories and of the unpredictability of
the system, and provides a good characterization of the underlying
dynamics.
Starting from the reconstructed attractor $\mathbf{X}$, it is
possible to compute with the method of Sano and
Sawada\cite{Greene87,Sano85} the SLE consisting of exactly
$n=D_{E}$ exponents. This method is a qualitative one, and in the
presence of a positive exponent $\lambda_1$, a more accurate
method is necessary for the computation.
The method of Rosenstein-Kantz\cite{rose93,kantz94} is used to
compute the MLE from the time series. This method measures, in the
reconstructed attractor, the average divergence $d_{j}(i)$ of two close
trajectories over time. This can be expressed as:
\begin{equation}
d_{j}(i)=C_{j}e^{\lambda_{1}(i\Delta t)}
\end{equation}
where $C_{j}$ is the initial separation. By taking the logarithm
of both sides we obtain:
\begin{equation}
\ln d_{j}(i)=\ln C_{j} +\lambda_{1}(i\Delta t)
\end{equation}
This is a set of approximately parallel lines (for $j=1,2,\dots,
M$) each with a slope roughly proportional to $\lambda_{1}$. The
MLE is easily calculated using a least-squares fit to the average
line defined by
\begin{equation}
y(i)=\frac{1}{\Delta t} \langle \ln d_{j}(i)\rangle
\end{equation}
where $\langle \cdot \rangle$ denotes the average over all values
of $j$. Figure \ref{fig2}(d) shows a typical plot of $\langle \ln
d_{j}(i)\rangle$: after a short transient there is a linear
region that is used to extract the MLE.
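As a self-contained illustration of the MLE on a textbook system (not on the vocalization data), the exponent of the logistic map $x \mapsto 4x(1-x)$, known to equal $\ln 2$, can be estimated as the average local stretching rate; this derivative-based estimator is a simplification of the trajectory-divergence method described above, not the TISEAN algorithm.

```python
import numpy as np

x = 0.3
rates = []
for k in range(101000):
    if k >= 1000:                                  # discard the initial transient
        rates.append(np.log(abs(4.0 - 8.0 * x)))   # ln |f'(x)| at the orbit point
    x = 4.0 * x * (1.0 - x)
mle = float(np.mean(rates))
print(mle)  # close to ln 2 ≈ 0.693
```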
\section{Results and Discussion}
The signal considered was characterized by highly complex patterns
in which different transients with both periodic and apparently
aperiodic features were identified. The apparently random behavior
of the numerical series, easily detectable with a simple visual
inspection of the sound pattern, was confirmed by the power
spectrum and spectrogram. Three different regions were identified:
at low frequencies, between 0 and 70 Hz, a first
distribution of unresolved peaks is present, a sharp peak is also
present at 450 Hz, while a broad band of frequencies, ranging
between 850 and 1500 Hz, is easily detectable.
The chaotic characterization was performed calculating the
embedding dimension $D_{E}$ by the false nearest method and in
Fig.\ref{fig2}(a) the result of the computation is shown. The
figure reports the fraction of false neighbors with respect to the
embedding dimension and a value of $D_{E}=4$ was found. The delay
Time was considered as the first minimum of the mutual information
function, and the value $T=8$ was found.
Starting from the time series the attractor was reconstructed
using the delay method, and in Fig.\ref{fig2}(b) a three
dimensional projection of the attractor is shown. The structure of
the attractors, related to the chaotic oscillation of the vocal
folds, demonstrated that the irregular behavior observed in the
time series was not due to noise.
In order to completely characterize the chaotic nature of the
vocalization, the Spectrum of Lyapunov Exponents and the Maximum
Lyapunov Exponent $\lambda_{1}$ were evaluated. In
Fig.\ref{fig2}(c) values of the four exponents are reported and
the presence of a positive exponent was detected. The accurate
value of the MLE was computed by the Rosenstein-Kantz method and a
value of $\lambda_{1}=0.48$ was found by a linear regression of
the curves in the region between 0 and 20 iterations.
The Kaplan-Yorke fractal dimension $D_L$ of the attractor
\cite{kap79}, equal to $D_L = 2.58$, confirms the high dimensional
fractal qualities of the strange attractor.
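For reference, the Kaplan-Yorke dimension is obtained from the ordered Lyapunov spectrum as $D_L = j + \sum_{i=1}^{j}\lambda_i / |\lambda_{j+1}|$, where $j$ is the largest index for which the partial sum remains non-negative. A minimal sketch follows; the sample spectrum is illustrative, not the one measured here.

```python
def kaplan_yorke(spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum."""
    lams = sorted(spectrum, reverse=True)
    partial = 0.0
    for j, lam in enumerate(lams):
        if partial + lam < 0.0:
            return j + partial / abs(lam)
        partial += lam
    return float(len(lams))

print(kaplan_yorke([0.9, 0.0, -1.2]))  # 2.75
```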
\section{Concluding remarks}
The analysis method proposed in this letter was applied to the
vocalization of an adult male of {\em Cervus elaphus corsicanus}
and revealed the chaotic behavior of the irregular
oscillations in the signal considered. A full characterization by
means of attractor reconstruction, Spectrum of Lyapunov Exponents,
and Maximum Lyapunov Exponent was performed. A positive value of
MLE was found. Future work, aimed at identifying different
individuals through the discussed parameters, will consist in the
analysis of other vocalizations looking for a {\em vocal
fingerprint} that may be useful in biodiversity monitoring.
\begin{acknowledgments}
The authors are thankful to Dr. Carlo Murgia (Director of Oasi di
monte Arcosu Sardegna) for providing the vocalizations.
\end{acknowledgments}
\begin{figure}
\includegraphics[height=12cm, keepaspectratio=true]{figura1.eps}
\caption{(a) The analyzed signal. (b) A portion of the signal
showing the irregular nature of the vocalization. (c) The power
spectrum shows the typical spectral content of an irregular
signal: broadband and continuous. (d) The spectrogram of
the signal shows a fundamental frequency of 70 Hz and other
frequencies up to 2000 Hz.} {\label{fig1}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=12cm, keepaspectratio=true]{figura2.eps}
\caption{(a) Computation of the embedding dimension by the False
Nearest Method applied to the time series. The fraction of false
neighbors decreases to zero at a reconstruction dimension $D_E=4$.
(b) The attractor reconstructed by the method of delays: This
highly structured trajectory indicates that the signal is chaotic
and that the irregular motion is not a noisy one. (c) The Spectrum
of Lyapunov Exponent showing the presence of a positive Lyapunov
exponent and three negative exponents. (d) Computation of the
Maximum Lyapunov Exponent by the Rosenstein-Kantz algorithm. The
value of $\lambda_{1}$ is obtained by a linear regression of the
curves in the zone between 0 and 20 iterations. The value
$\lambda_{1}=0.48$ was found.
}
{\label{fig2}}
\end{center}
\end{figure}
\begin{table}
\caption{\label{tab} Results of the analysis performed on the
vocalization signal. A positive Lyapunov exponent and the value of
Kaplan-Yorke dimension indicates the chaotic nature of the
signal.}
\begin{ruledtabular}
\begin{tabular}{lc}
Parameter & Value \\
\hline
Delay Time $\mathbf{T}$ & 8 \\
Embedding Dimension $\mathbf{D_{E}}$ & 4 \\
Maximum Lyapunov Exponent $\mathbf{\lambda_{1}}$ & 0.48 \\
Kaplan-Yorke Dimension $\mathbf{D_{L}}$& 2.58 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{D}{eep} neural networks (DNNs)~\cite{lecun1989backpropagation} have been the workhorse of many challenging tasks, including image classification~\cite{krizhevsky2012imagenet,srivastava2015training,he2016deep,alexey2021vit,liu2021swin}, semantic segmentation~\cite{evan2017fully,chen2018encoder,xie2021segformer,wang2021hrnet} and object detection~\cite{joseph2018yolov3,zhao2019object,tian2019fcos,nicolas2020end}.
However, designing effective architectures often relies heavily on human expertise.
To alleviate this issue, neural architecture search (NAS) methods have been proposed to automatically design effective architectures~\cite{zoph2016neural}. Existing studies show that these automatically searched architectures often outperform the manually designed ones in many computer vision tasks~\cite{zoph2018learning,li2020block,tan2020efficientdet,dai2021fbnetv3,white2021powerful,chen2021neural,yan2021fp,guo2020breaking}.
However, the state-of-the-art deep networks often contain a large number of parameters and come with extremely high computational cost.
As a result, it is hard to deploy these models to real-world scenarios with limited computation resources.
Regarding this issue, we have to carefully design architectures to fulfill a specific computational budget (\mbox{\textit{e.g.}}, a feasible model should have a latency lower than 100ms on a specified mobile device).
More critically, we may have to consider different computational budgets in the real world.
For example, a company may simultaneously develop/maintain multiple applications and each of them has a specific budget of latency.
In order to design feasible architectures, most methods~\cite{tan2019mnasnet,stamoulis2019single} only consider a single computational budget and incorporate the architecture's computational cost into the objective function of NAS.
When we consider diverse budgets, they have to conduct an independent search process for each budget~\cite{tan2019mnasnet}, which is very inefficient and often unnecessary.
Unlike these methods, one can also exploit the population-based methods to simultaneously find multiple architectures and then select an appropriate one from them to fulfill a specific budget~\cite{lu2019nsga,lu2020nsganetv2}. However, due to the limited population size, these searched architectures do not necessarily satisfy the required budget. More critically, all these searched architectures are fixed after search and cannot be easily adapted for a slightly changed budget. Thus, how to design effective architectures under diverse computational budgets in an efficient and flexible way still remains an open question.
\begin{figure*}
\centering
\subfigure[An illustration of generating feasible architectures for diverse budgets using PNAG.]{
\includegraphics[width = 1.1\columnwidth]{application.pdf}\label{fig:application_sub}
}~
\subfigure[Comparisons between PNAG and conventional NAS methods.]{
\includegraphics[width = 0.82\columnwidth]{search_direction.pdf}\label{fig:search_direction}
}
\caption{We show an illustration of how to apply PNAG to generate feasible architectures for diverse computational budgets and the comparisons between PNAG and conventional NAS methods. (a) PNAG\xspace takes an arbitrary budget as input and flexibly generates architectures.
(b) PNAG\xspace learns the whole Pareto frontier rather than finding discrete architectures. Here, the accuracy is measured on the constructed validation set.
}
\label{fig:application}
\end{figure*}
In this paper,
we propose a Pareto-aware Neural Architecture Generator (PNAG\xspace) which only needs to be trained once and then dynamically produces Pareto optimal architectures for diverse budgets via \emph{inference} (as shown in Fig.~\ref{fig:application_sub}).
Note that the Pareto optimal architectures under different budgets jointly form the Pareto frontier over model performance and computational cost~\cite{kim2005adaptive}.
We propose to jointly learn the whole Pareto frontier (\mbox{\textit{i.e.}}, improving the blue curve to the red curve in Fig.~\ref{fig:search_direction}) instead of finding a single Pareto optimal architecture.
During training, we randomly sample budgets from a predefined distribution and maximize the expected reward of the searched architectures to approximate the ground-truth Pareto frontier.
It is worth noting that learning the Pareto frontier is able to share the learned knowledge across different budgets and greatly improve the search results in practice (see results in Table~\ref{tab:pareto_learning}).
Furthermore, when evaluating architectures under diverse budgets,
we design an architecture evaluator that learns a Pareto dominance rule to determine which architecture is a relatively better one in pairwise comparisons.
Unlike existing methods, we highlight that our PNAG designs architectures through a generation process instead of search, which is very efficient (see results in Table~\ref{tab:generation_cost}) and practically useful in real-world model deployment.
We summarize the contributions of our paper as follows.
\begin{itemize}
\item
Instead of designing architectures for a single budget, we propose a Pareto-aware Neural Architecture Generator (PNAG\xspace) which is trained only once and flexibly generates effective architectures for an arbitrary budget via inference (see Fig.~\ref{fig:application_sub}). In this way, our architecture generation process becomes very efficient and practically useful in real-world applications.
\item
To train our PNAG, we propose to explicitly learn the Pareto frontier by maximizing the expected reward of the searched architectures over diverse budgets. Interestingly, learning the Pareto frontier shares the learned knowledge across the search processes under diverse budgets and greatly improves the search results (see results in Table~\ref{tab:pareto_learning}).
\item
Since an architecture should have different rewards/scores under different budgets, we propose an architecture evaluator to adaptively evaluate architectures for any given budget.
To train the evaluator, we propose to learn a Pareto dominance rule which determines whether an architecture is better than the other in pairwise comparisons.
\item We measure the latencies on three hardware platforms and take them as the computational budgets to generate feasible architectures. Extensive experiments show that the architectures produced by PNAG\xspace consistently outperform the architectures searched by existing methods across different budgets and platforms.
\end{itemize}
\section{Related Work}
In this section, we provide a brief overview of existing work on neural architecture search, architecture design under resource constraints, as well as Pareto frontier learning.
\subsection{Neural Architecture Search (NAS)}
Unlike manually designing architectures with expert knowledge, NAS seeks to automatically design more effective architectures~\cite{he2020milenas,li2020sgas,yang2021netadaptv2, zhang2021you,zheng2021migonas}.
Existing NAS methods can be roughly divided into three categories, namely, reinforcement-learning-based methods, evolutionary approaches, and gradient-based methods.
Specifically, reinforcement-learning-based methods~\cite{zoph2016neural, pham2018efficient, pasunuru2019continual, tian2020offrl,arash2020unas} learn a controller to produce architectures. Evolutionary approaches~\cite{real2017large, real2019regularized,lu2021neural,ming_zennas_iccv2021,chen2021autoformer,liu2021survey} search for promising architectures by gradually evolving a population.
Gradient-based methods~\cite{liu2018darts, chen2019progressive, xu2020pcdarts, chu2021dartsminus,chen2021drnas,guo2022towards} relax the search space to be continuous and optimize architectures by gradient descent.
Besides designing effective search algorithms, many efforts have also been made to improve the accuracy of architecture evaluation~\cite{zhao2021few,yi2021renas, chu2021fairnas}.
Unlike these methods that find a single architecture, one can design different architectures by training an architecture generator. Specifically, RandWire~\cite{xie2019exploring} designs stochastic network generators to generate randomly wired architectures.
NAGO~\cite{ru2020neural} is the first work to learn an architecture generator and proposes a hierarchical and graph-based search space to reduce the optimization difficulty.
However, these generated architectures tend to perform very similarly (\mbox{\textit{i.e.}}, low diversity) in terms of both model performance and computational cost~\cite{xie2019exploring,ru2020neural}.
Thus, these architectures may not satisfy an arbitrary required budget.
In other words, they still have to learn a generator for a required budget to produce feasible architectures.
\begin{figure*}[t]
\centering
\includegraphics[width=0.90\textwidth]{overview.png}
\caption{Overview of the proposed PNAG\xspace.
Our PNAG\xspace mainly consists of two modules: an architecture generator $f(\cdot;\theta)$ and an architecture evaluator $R(\cdot|\cdot;w)$. Specifically, we build the generator model based on an LSTM network, which takes a budget constraint $B$ as input and produces a promising architecture $\alpha_B$ that satisfies the budget constraint, \mbox{\textit{i.e.}}, $c(\alpha_{\scriptscriptstyle B}) \leq B$. To optimize the generator model, we design the evaluator using three fully connected (FC) layers to estimate the performance of the generated architectures $\alpha_B$. The orange and green boxes in (c) denote the embeddings of architecture $\alpha_{\scriptscriptstyle B}$ and the budget w.r.t. $B$, respectively. }
\label{fig:overview}
\end{figure*}
\subsection{Architecture Design under Resource Constraints}
Many efforts have been made in designing architectures under a resource constraint~\cite{cai2019once, huang2020ponas, elsken2018efficient, Bender2020TuNAS, guo2020single,li2021hw}.
Specifically, PONAS~\cite{huang2020ponas} builds an accuracy table to find architectures satisfying a single budget constraint.
{TuNAS~\cite{Bender2020TuNAS} proposes a reward function to restrict the latency of the searched architecture, which omits additional hyper-parameter tuning.}
Related to our work,
SGNAS~\cite{huang2021searching} proposes an architecture generator which generates architectures for specific budget constraints. Nevertheless, SGNAS optimizes a regression loss \mbox{\textit{w.r.t. }} budget constraint and the resultant architecture does not necessarily have lower cost than the target budget, \mbox{\textit{i.e.}}, violating the budget. More critically, SGNAS considers a fixed hyper-parameter $\lambda$ to balance the regression loss and a classification loss. Due to the large diversity among architectures, their accuracy and computational cost may vary significantly across different budgets, also leading to suboptimal search results (See Table~\ref{tab:mobile_comp}).
\subsection{Pareto Frontier Learning}
Given multiple objectives, Pareto frontier learning aims to find a set of Pareto optimal solutions over them.
Most methods exploit evolutionary algorithms~\cite{deb2002fast,kim2004spea} to solve this problem.
Inspired by them, many efforts have been made to simultaneously find a set of Pareto optimal architectures over accuracy and computational cost~\cite{cheng2018searching,dong2018dpp}.
Recently, NSGANetV1~\cite{lu2020multi} presents an evolutionary approach to find a set of trade-off architectures over multiple objectives in a single run.
NSGANetV2~\cite{lu2020nsganetv2} further presents two surrogates (at the architecture and weights level) to produce task-specific models under multiple competing objectives.
Given a target budget, these methods may manually select an appropriate architecture from a set of searched architectures.
However, given limited population size, the selected architectures do not necessarily satisfy a required budget.
More critically, all the searched architectures are fixed after search and cannot be easily adapted for a slightly changed budget.
Thus, how to learn the Pareto frontier and use it to generate architectures for an arbitrary budget in a flexible way still remains unexplored.
\section{Pareto-aware Architecture Generation}\label{sec:method}
In this paper, we focus on the architecture generation problem and intend to generate effective architectures for diverse computational budgets via \emph{inference} instead of search/training.
{Note that the optimal architectures under different budgets lie on the Pareto frontier over model performance and computational cost~\cite{kim2005adaptive}.
Thus, we develop a Pareto-aware Neural Architecture Generator (PNAG\xspace) to explicitly learn the whole Pareto frontier.}
To locate the best architecture from the frontier for a given budget, we build our PNAG as a conditional model which takes the budget as input and directly produces a feasible architecture.
In Section~\ref{sec:generator}, we depict our architecture generator model and present a novel learning algorithm to learn the Pareto frontier. In Section~\ref{sec:reward}, we propose an architecture evaluator, as well as its training algorithm, to adaptively evaluate architectures under different budgets.
Algorithm~\ref{alg:training} shows the whole training process of PNAG\xspace.
\subsection{Learning the Architecture Generator $f(B;\theta)$} \label{sec:generator}
We seek to build an architecture generator model {to dynamically and flexibly produce effective architectures for any given computational budget.}
Let $B$ be a budget (\mbox{\textit{e.g.}}, latency or MAdds) which can be considered as a random variable drawn from some distribution ${\mathcal B}$, namely $B {\sim} {\mathcal B}$.
Let $\Omega$ be an architecture search space. For any architecture $\alpha \in \Omega$, we use $c(\alpha)$ and ${\rm Acc}(\alpha)$ to measure the cost and validation accuracy of $\alpha$, respectively.
Since an architecture can be represented as a sequence of tokens (each token denotes a setting of a layer, \mbox{\textit{e.g.}}, width or kernel size)~\cite{zoph2016neural,pham2018efficient}, we cast the architecture generation problem as a sequential decision problem and build the architecture generator $f(B;\theta)$ using an LSTM network.
As shown in Fig.~\ref{fig:overview}, the generator takes a budget $B$ as input and generates architectures $\alpha_{\scriptscriptstyle B} {=} f(B;\theta)$ (satisfying the constraint $c(\alpha_{\scriptscriptstyle B}) \leq B$) by sequentially predicting the token sequences, \mbox{\textit{i.e.}}, the depth, width, and kernel size of each layer.
Here, $\theta$ denotes the learnable parameters.
Note that the optimal architecture under a specific budget should lie on the Pareto frontier over model performance and computational cost.
To make the generator generalize to an arbitrary budget, we seek to learn the Pareto frontier rather than finding discrete architectures. In the following, we first illustrate our training method in Section~\ref{sec:train_generator} and then discuss how to represent a budget with arbitrary value in Section~\ref{sec:rep_budget}.
\subsubsection{Training Method of $f(B;\theta)$}
\label{sec:train_generator}
To illustrate the training objective of our method, we first revisit the NAS problem with a single budget and then generalize it to the problem with diverse budgets.
\textbf{NAS under a single budget.}
Since it is non-trivial to directly find the optimal architecture~\cite{zoph2016neural},
one can instead first learn a policy $\pi(\cdot; \theta)$ and then conduct sampling from it to find promising architectures, \mbox{\textit{i.e.}}, $\alpha \sim \pi(\cdot; \theta)$. Given a budget $B$, the optimization problem becomes
\begin{equation}\label{eq:obj-single-constraint}
\begin{aligned}
\max_{\theta} ~\mathbb{E}_{\alpha \sim \pi(\cdot; \theta)} ~\left[R \left( \alpha|B; w \right)\right], ~\text{s.t. } ~c(\alpha) \leq B.
\end{aligned}
\end{equation}
Here, $\pi(\cdot;\theta)$ is the learned policy parameterized by $\theta$, and $R(\alpha|B; w)$ is the reward function parameterized by $w$ that measures the joint performance of both the accuracy and latency of $\alpha$. $\mathbb{E}_{\alpha \sim \pi(\cdot; \theta)} \left[ \cdot \right]$ is the expectation over the searched architectures.
\begin{algorithm}[t]
\small
\caption{Training method of PNAG\xspace.}
\label{alg:training}
\begin{algorithmic}[1]\small
\REQUIRE{
Search space $\Omega$, latency distribution ${\mathcal B}$,
learning rate $\eta$, training data set ${\mathcal D}$, parameters $M$, $N$, and $K$.
}
\STATE Initialize model parameters $\theta$ for the generator and $w$ for the architecture evaluator. \\
// \emph{Collect the architectures with accuracy and latency} \\
\STATE Train a supernet $S$ on ${\mathcal D}$. \\
\STATE Randomly sample architectures $\left\{ \beta_i \right\}_{i=1}^{M}$ from $\Omega$. \\
\STATE Construct tuples $\left\{( \beta_i, c(\beta_i), {\rm Acc}(\beta_i)) \right\}_{i=1}^{M}$ using $S$. \\
// \emph{Learn the architecture evaluator} \\
\WHILE{not convergent}
\STATE Sample a set of latencies $\{B_k\}_{k=1}^{K}$ from ${\mathcal B}$. \\
\STATE Update the architecture evaluator by: \\
\STATE ~~~~~~~~~$w \leftarrow w - \eta \nabla_w L(w)$. \\
\ENDWHILE \\
// \emph{Learn the architecture generator} \\
\WHILE{not convergent}
\STATE Sample a set of latencies $\{B_k\}_{k=1}^{K}$ from ${\mathcal B}$. \\
\STATE Obtain $\{\alpha_{\scriptscriptstyle {B_k}}^{\scriptscriptstyle (i)}\}_{i=1}^{N}$ from $\pi(\cdot|B_k; \theta)$ for each $B_k$. \\
\STATE Update the generator via policy gradient by: \\
\STATE ~~~~~~~~~$\theta \leftarrow \theta + \eta \nabla_\theta J(\theta)$. \\
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\textbf{NAS under diverse budgets.}
Problem~(\ref{eq:obj-single-constraint}) only focuses on one specific budget constraint. In fact, we seek to learn the Pareto frontier over the whole range of budgets (\mbox{\textit{e.g.}}, latency).
However, this problem is hard to solve since there may exist infinitely many Pareto optimal architectures with different computational cost. To address this, one can learn an approximate Pareto frontier by finding a set of uniformly distributed Pareto optimal points~\cite{grosan2008generating}. Here, we evenly sample $K$ budgets from the range of latency and maximize the expected reward over them.
Thus, the problem becomes
\begin{equation}\label{eq:obj-multi-constraint}
\begin{aligned}
\max_{\theta} &~\mathbb{E}_{B \sim {\mathcal B}} \left[ \mathbb{E}_{\alpha_{_B} \sim \pi(\cdot|B; \theta)} ~\left[R \left(\alpha_{\scriptscriptstyle B} | B; w \right) \right] \right], \\
&~\text{s.t. } ~c(\alpha_{\scriptscriptstyle B}) \leq B, ~B \sim {\mathcal B},
\end{aligned}
\end{equation}
where $\mathbb{E}_{B \sim {\mathcal B}} \left[ \cdot \right]$ denotes the expectation over the distribution of budget.
Unlike Eqn.~(\ref{eq:obj-single-constraint}), $\pi(\cdot|B;\theta)$ is the learned policy conditioned on the budget of $B$.
In practice, we use policy gradient to learn the architecture generator.
To encourage exploration, we follow~\cite{pham2018efficient,guo2019nat,guo2021towards} to introduce an entropy regularization. Please refer to the supplementary materials for more details.
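As a minimal illustration of this policy-gradient update (lines 12--15 of Algorithm~\ref{alg:training}), the sketch below optimizes a single-token softmax policy with REINFORCE plus an entropy bonus. The reward function, learning rate, and entropy coefficient are illustrative choices, and the conditioning on the budget $B$ is omitted for brevity; the actual generator is an LSTM trained with the full objective in Eqn.~(\ref{eq:obj-multi-constraint}).

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, reward_fn, rng, lr=0.5, ent_coef=0.01):
    """One REINFORCE update that ascends E[R] plus an entropy bonus."""
    probs = softmax(logits)
    # sample an action (e.g., one architecture token) from the current policy
    action = rng.choices(range(len(logits)), weights=probs)[0]
    r = reward_fn(action)
    entropy = -sum(p * math.log(p) for p in probs)
    for i in range(len(logits)):
        # d E[R] / d logit_i for the sampled action: r * (1{i=action} - p_i)
        g_reward = r * ((1.0 if i == action else 0.0) - probs[i])
        # d H / d logit_i = -p_i * (log p_i + H): keeps the policy exploratory
        g_entropy = -probs[i] * (math.log(probs[i]) + entropy)
        logits[i] += lr * (g_reward + ent_coef * g_entropy)

# toy setting: only token 2 earns a reward
rng = random.Random(0)
logits = [0.0, 0.0, 0.0]
for _ in range(300):
    reinforce_step(logits, lambda a: 1.0 if a == 2 else 0.0, rng)
probs = softmax(logits)
```

After a few hundred updates the policy concentrates on the rewarded token, while the entropy term keeps a small amount of exploration.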
\textbf{Advantages over existing NAS methods.}
Our PNAG exhibits two advantages over existing NAS methods.
\emph{First}, our PNAG is able to share the learned knowledge across the search processes under different budgets, which greatly improves the search results (see Table~\ref{tab:pareto_learning}).
The main reason is that, once we find a good architecture for one budget, we may easily obtain a competitive architecture for a larger/smaller budget by slightly modifying some components (model width or kernel size).
\emph{Second}, given a well-trained PNAG, we can directly use it to generate feasible architectures for any required budget via inference, which is very efficient and practically useful (see Table~\ref{tab:generation_cost}).
\subsubsection{Vector Representation of Budget Bounds} \label{sec:rep_budget}
To learn the architecture generator, we still have to consider how to represent the budget bound $B$ as the inputs of PNAG\xspace.
As mentioned before, our PNAG\xspace considers $K$ discrete budgets during training.
To represent these budgets as inputs of PNAG\xspace, following~\cite{pham2018efficient}, we build a learnable embedding vector ${\bf b} = g(B)$ for each sampled budget $B$. We incorporate these learnable embedding vectors into the parameters of the architecture generator and train them jointly. In this way, we are able to automatically learn the vectors of these budgets and encourage PNAG\xspace to produce feasible architectures.
As mentioned before, we only sample a set of discrete budgets to train PNAG\xspace. To accommodate all the budgets belonging to a continuous space, we propose an embedding interpolation method to represent a budget with any possible value.
Specifically, we perform a linear interpolation between the embedding of two adjacent discrete budgets to represent the considered budgets.
For a target budget ${B}$ between two sampled budgets $B_1 {<} {B} {<} B_2$, the linear interpolation of the budget vector ${\bf b}$ can be computed by
\begin{equation*}\label{eq:interpolation}
\begin{aligned}
{\bf b} = g({B}) = \xi g(B_1) {+} (1 {-} \xi) g(B_2),
\text{~where~} \xi = \frac{{B_2} {-} B} {B_{2} {-} B_{1}}.
\end{aligned}
\end{equation*}
Here, $\xi \in [0,1]$ denotes the weight of $B_1$ in interpolation.
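A minimal sketch of this interpolation is given below, assuming toy 4-dimensional embeddings whose values are made up for illustration (the actual budget embeddings are 64-dimensional and learned jointly with the generator):

```python
def interpolate_budget(B, B1, emb1, B2, emb2):
    """Linear interpolation between the embeddings of the two sampled
    budgets adjacent to B (B1 < B < B2), following the rule above."""
    xi = (B2 - B) / (B2 - B1)  # weight of the lower budget B1
    return [xi * a + (1.0 - xi) * b for a, b in zip(emb1, emb2)]

# toy 4-d embeddings for two trained budgets (illustrative values)
e110 = [0.2, -0.1, 0.5, 0.0]
e140 = [0.6, 0.3, -0.1, 0.4]
e125 = interpolate_budget(125.0, 110.0, e110, 140.0, e140)  # xi = 0.5
```

With $B{=}125$ms midway between the trained budgets, the result is simply the component-wise average of the two embeddings.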
\subsection{Learning the Architecture Evaluator $R(\cdot|B;w)$}\label{sec:reward}
Given diverse budgets, an architecture should have different rewards/scores regarding whether it satisfies the corresponding budget constraint.
However, it is non-trivial to manually design a reward function for each budget. Instead, we propose to learn an architecture evaluator to automatically predict the score.
To this end, we build an evaluator with three fully connected layers. Given any architecture $\beta$ and a budget $B$, we seek to predict the performance $R(\beta|B;w)$ of $\beta$ under the budget $B$.
Since we have no ground-truth labels for training, following~\cite{freund2003efficient,burges2005learning,chen2009ranking}, we learn the evaluator via pairwise architecture comparisons.
\begin{figure*}[t]
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.84\linewidth]{mobile_compare.pdf}
\caption{
Comparisons of the architectures obtained by different methods on a mobile device (Qualcomm Snapdragon 821).
}
\label{fig:mobile_compare}
\end{minipage}\hfill
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.84\linewidth]{pareto_curve.pdf}
\caption{
Comparisons of the Pareto frontiers of the {generated} architectures between NAS-MO and PNAG. Here, we report the accuracy evaluated on the constructed validation set.
}
\label{fig:pareto_curve}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[Ground-truth latency histogram.]{
\includegraphics[width = 0.67\columnwidth]{mobile_distribution.pdf}\label{fig:mobile_dist}
}~
\subfigure[{Generation results with $B{=}110$ms.}]{
\includegraphics[width = 0.67\columnwidth]{histogram_110.pdf}\label{fig:histogram_110}
}~
\subfigure[Generation results with $B{=}140$ms.]{
\includegraphics[width = 0.67\columnwidth]{histogram_140.pdf}\label{fig:histogram_140}
}
\caption{
Latency histograms {of sampled architectures} on mobile devices. (a) Ground-truth latency histogram of $16,000$ architectures that are uniformly sampled from the search space. (b) The latency histogram of $1,000$ architectures sampled by different methods given $B{=}110$ms. {(c) The latency histogram of $1,000$ architectures sampled by different methods given $B{=}140$ms.}}
\label{fig:distribution}
\end{figure*}
\subsubsection{Training Method of $R(\cdot|B;w)$}
To obtain a promising evaluator, we train the architecture evaluator using a pairwise ranking loss, which has been widely used in ranking problems~\cite{freund2003efficient,burges2005learning,chen2009ranking}.
Specifically, we collect $M$ architectures with accuracy and latency, and record them as a set of triplets $\{(\beta_i, c(\beta_i), {\rm Acc}(\beta_i))\}_{i=1}^{M}$.
Thus, given $M$ architectures, we have $M(M{-}1)$ ordered architecture pairs $\{(\beta_{i}, \beta_{j})\}$ in total after excluding self-pairs.
Assuming that we have $K$ budgets, the pairwise ranking loss becomes
\begin{equation}\label{eq:ranking_loss}
\begin{aligned}
L(w) = &\frac{1}{KM(M{-}1)} \sum_{k=1}^K \sum_{i=1}^{M} \sum_{j=1, j \neq i}^{M} \phi \Big( d( \beta_{i}, \beta_{j}, B_k) \\
&\cdot \big[ R(\beta_{i}|B_k;w) - R(\beta_{j}|B_k;w) \big] \Big),
\end{aligned}
\end{equation}
where $d\big(\beta_1, \beta_2, B_k \big)$ denotes a function to indicate whether $\beta_{i}$ is better than $\beta_{j}$ under the budget $B_k$, as will be discussed in Section~\ref{sec:pareto_dominance}. $\phi(z) = \max (0, 1-z)$ is a hinge loss function and we use it to enforce the predicted ranking results $R(\beta_{i}|B_k;w) - R(\beta_{j}|B_k;w)$ to be consistent with the results of $d( \beta_{i}, \beta_{j}, B_k)$ obtained by a comparison rule based on Pareto dominance.
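The loss in Eqn.~(\ref{eq:ranking_loss}) can be sketched as follows. Here \texttt{score(a, B)} stands in for the learned evaluator $R(a|B;w)$ and \texttt{dominance} for $d(\cdot,\cdot,\cdot)$; the toy architectures, represented only by their accuracy, and the simplified budget-agnostic dominance rule are illustrative assumptions.

```python
def hinge(z):
    return max(0.0, 1.0 - z)

def pairwise_ranking_loss(archs, budgets, score, dominance):
    """Eqn.-(3)-style loss averaged over ordered architecture pairs and
    budgets; score(a, B) stands in for R(a|B; w) and dominance(a1, a2, B)
    for the comparison function d."""
    K, M = len(budgets), len(archs)
    total = 0.0
    for B in budgets:
        for i in range(M):
            for j in range(M):
                if i == j:
                    continue
                z = dominance(archs[i], archs[j], B) * (
                    score(archs[i], B) - score(archs[j], B))
                total += hinge(z)
    return total / (K * M * (M - 1))

# toy data: architectures represented only by their accuracy, with a
# simplified dominance rule that ignores the budget
accs = [10.0, 12.0, 15.0]
dom = lambda a1, a2, B: 1 if a1 >= a2 else -1
consistent = pairwise_ranking_loss(accs, [100.0], lambda a, B: a, dom)
reversed_loss = pairwise_ranking_loss(accs, [100.0], lambda a, B: -a, dom)
```

A scorer whose ranking agrees with the dominance rule (with margin at least 1) incurs zero loss, while a fully reversed scorer is penalized on every pair.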
\subsubsection{Pareto Dominance Rule}\label{sec:pareto_dominance}
To compare the performance between two architectures, we need to define a reasonable function $d\big(\beta_1, \beta_2, B \big)$ in Eqn.~(\ref{eq:ranking_loss}). To this end, we define a Pareto dominance to guide the design of this function.
Specifically, Pareto dominance requires that the quality of an architecture should depend on both the satisfaction of budget and accuracy.
That means, given a specific budget $B$, a good architecture should be the one with the cost lower than or equal to $B$ and with high accuracy.
In this sense, we use Pareto dominance to compare two architectures and judge which one is dominative.
Given any two architectures $\beta_1, \beta_2$, if both of them satisfy the budget constraints (namely $c(\beta_1) \leq B$ and $c(\beta_2) \leq B$), then $\beta_1$ dominates $\beta_2$ if ${\rm Acc}(\beta_1) \geq {\rm Acc}(\beta_2)$.
Moreover, when at least one of $\beta_1, \beta_2$ violates the budget constraint, clearly we have that $\beta_1$ dominates $\beta_2$ if $c(\beta_1) \leq c(\beta_2)$.
Formally, we define the Pareto dominance function $d\big(\beta_1, \beta_2, B \big)$ to reflect the above rules:
\begin{equation}\label{eq:compare_rule}
d\big(\beta_1, \beta_2, B \big) =
\begin{cases}
~1, ~~~~~{\rm if} ~~\left(c(\beta_1) \leq B ~{\land}~ c(\beta_2) \leq B \right) \\
~~~~~~~~~~~~~{\land}~ (\textcolor{black}{{\rm Acc}(\beta_1) \geq {\rm Acc}(\beta_2)}); \vspace{3 pt} \\
-1, ~~~{\rm else~if} ~~\left(c(\beta_1) \leq B ~{\land}~ c(\beta_2) \leq B \right) \\
~~~~~~~~~~~~~{\land}~ (\textcolor{black}{{\rm Acc}(\beta_1) < {\rm Acc}(\beta_2)}); \vspace{3 pt} \\
~1, ~~~~~{\rm else~if} ~~c(\beta_1) \leq ~ c(\beta_2); \vspace{3 pt} \\
-1, ~~~{\rm otherwise}.
\end{cases}
\end{equation}
Based on Eqn.~(\ref{eq:compare_rule}), we have $d(\beta_1, \beta_2, B) = - d(\beta_2, \beta_1, B)$ if $\beta_1 \neq \beta_2$, making it an antisymmetric function \mbox{\textit{w.r.t. }} $\beta_1$ and $\beta_2$.
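The comparison rule in Eqn.~(\ref{eq:compare_rule}) transcribes directly into code; representing an architecture as a (cost, accuracy) pair is a simplification for illustration only.

```python
def pareto_dominance(a1, a2, B):
    """d(a1, a2, B) from the rule above; each architecture is a
    (cost, accuracy) pair, a simplified stand-in representation."""
    c1, acc1 = a1
    c2, acc2 = a2
    if c1 <= B and c2 <= B:            # both feasible: compare accuracy
        return 1 if acc1 >= acc2 else -1
    return 1 if c1 <= c2 else -1       # otherwise: compare cost

# both feasible under B = 100: the more accurate one dominates
d_feasible = pareto_dominance((80, 76.0), (90, 74.0), 100)
# the first violates the budget: the cheaper one dominates
d_violated = pareto_dominance((120, 79.0), (90, 74.0), 100)
```

Note that an infeasible architecture never dominates a feasible one, regardless of its accuracy, which is exactly what steers the generator toward budget-satisfying architectures.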
\begin{remark}
The accuracy constraint ${\rm Acc}(\beta_1) \geq {\rm Acc}(\beta_2)$ plays an important role in the proposed Pareto dominance function $d\big(\beta_1, \beta_2, B \big)$. Without the accuracy constraint, we may easily find the architectures with very low computation cost and poor performance (See results in Table~\ref{tab:diff_reward}).
\end{remark}
\begin{table*}[t!]
\centering
\caption{
Comparisons with state-of-the-art architectures on mobile devices. $^*$ denotes the best architecture reported in the original paper.
``-'' denotes the results that are not reported. All the models are evaluated on $224 \times 224$ images of ImageNet.
}
\resizebox{0.85\textwidth}{!}
{
\begin{tabular}{ccccccc}
\toprule [0.15em]
\multirow{2}[0]{*}{Architecture} & \multirow{2}[0]{*}{Latency (ms)} & \multicolumn{2}{c}{Test Accuracy (\%)} & \multirow{2}[0]{*}{\#Params (M)} & \multirow{2}[0]{*}{\#MAdds (M)} & Search Cost \\
\cline{3-4}
& & Top-1 & Top-5 & & & (GPU Days) \\
\midrule [0.1em]
MobileNetV3-Small (1.0$\times$)~\cite{howard2019searching} & 39.8 & 67.4 & - & 2.4 & 56 & - \\
MobileNetV3-Large (0.75$\times$)~\cite{howard2019searching} & 93.0 & 73.3 & - & 4.0 & 155 & - \\
MobileNetV2 (1.0$\times$)~\cite{sandler2018mobilenetv2} & 90.3 & 72.0 & - & 3.4 & 300 & - \\
FBNetV2~\cite{wan2020fbnetv2} & - & 76.3 & 92.9 & - & 321 & 30.0 \\
MnasNet-A1 (0.5$\times$)~\cite{tan2019mnasnet} & 37.5 & 68.9 & 88.4 & 2.1 & 105 & - \\
SGNAS-B~\cite{huang2021searching} & - & 76.8 & - & - & 326 & 0.3 \\
{EVO}-80 & 76.8 & 77.1 & 93.3 & 6.1 & 350 & 0.7 \\
NAS-MO-80 & 77.6 & 76.6 & 93.2 & 7.9 & 340 & 0.7 \\
PNAG\xspace-80 & 79.9 & \textbf{78.3} & \textbf{94.0} & 7.3 & 349 & 0.7 \\
\midrule
FBNet-A~\cite{wu2019fbnet} & 91.7 & 73.0 & - & 4.3 & 249 & 9.0 \\
SGNAS-A~\cite{huang2021searching} & - & 77.1 & - & - & 373 & 0.3 \\
ProxylessNAS-Mobile~\cite{cai2018proxylessnas} & 97.3 & 74.6 & - & 4.1 & 319 & 8.3 \\
ProxylessNAS-CPU~\cite{cai2018proxylessnas} & 98.5 & 75.3 & - & 4.4 & 438 & 8.3 \\
MobileNetV3-Large (1.0$\times$)~\cite{howard2019searching} & 107.7 & 75.2 & - & 5.4 & 219 & - \\
{EVO}-110 & 109.3 & 78.4 & 94.0 & 10.2 & 482 & 0.7 \\
NAS-MO-110 & 106.3 & 78.0 & 93.8 & 8.4 & 478 & 0.7 \\
PNAG\xspace-110 & 106.8 & \textbf{79.4} & \textbf{94.5} & 9.9 & 451 & 0.7 \\
\midrule
RandWire~\cite{xie2019exploring} & - & 74.7 & 92.2 & 5.6 & 583 & - \\
ProxylessNAS-GPU~\cite{cai2018proxylessnas} & 123.3 & 75.1 & - & 7.1 & 463 & 8.3 \\
MnasNet-A1 (1.0$\times$)~\cite{tan2019mnasnet} & 120.7 & 75.2 & 92.5 & 3.4 & 300 & $\sim$3792 \\
FBNet-C~\cite{wu2019fbnet} & 135.2 & 74.9 & - & 5.5 & 375 & 9.0 \\
{EVO}-140 & 133.7 & 78.7 & 94.1 & 9.1 & 488 & 0.7 \\
NAS-MO-140 & 139.0 & 78.6 & 94.0 & 9.5 & 486 & 0.7 \\
PNAG\xspace-140 & 127.8 & \textbf{79.8} & \textbf{94.7} & 9.2 & 492 & 0.7 \\
\midrule
NSGANetV1~\cite{lu2020multi} & - & 76.2 & 93.0 & 5.0 & 585 & 27 \\
PONAS-C~\cite{huang2020ponas} & 145.1 & 75.2 & - & 5.6 & 376 & 8.8 \\
P-DARTS~\cite{chen2019progressive} & 168.7 & 75.6 & 92.6 & 4.9 & 577 & 3.8 \\
BigNAS-L~\cite{yu2020bignas} & - & 79.5 & - & 6.4 & 586 & 1.5 \\
{EVO}-170 & 168.3 & 79.2 & 94.4 & 10.7 & 661 & 0.7 \\
NAS-MO-170 & 165.0 & 78.7 & 94.4 & 8.5 & 584 & 0.7 \\
PNAG\xspace-170 & 167.1 & \textbf{80.3} & \textbf{95.0} & 10.0 & 606 & 0.7 \\
\midrule
NSGANetV2~\cite{lu2020nsganetv2} & - & 79.1 & 94.5 & 8.0 & 400 & 1 \\
NAGO~\cite{ru2020neural} & - & 76.8 & 93.4 & 5.7 & - & 20.0 \\
PC-DARTS~\cite{xu2020pcdarts} & 194.1 & 75.8 & 92.7 & 5.3 & 597 & 0.1 \\
EfficientNet B0~\cite{EfficientNet} & 237.7 & 77.3 & 93.5 & 5.3 & 390 & - \\
Cream-L~\cite{peng2020cream} & - & 80.0 & 94.7 & 9.7 & 604 & 12 \\
OFA$^*$~\cite{cai2019once} & 201.9 & 80.2 & 95.1 & 9.1 & 743 & 51.7 \\
{EVO}-200 & 195.9 & 79.8 & 94.5 & 11.0 & 783 & 0.7 \\
NAS-MO-200 & 187.4 & 79.2 & 94.4 & 9.1 & 630 & 0.7 \\
PNAG\xspace-200 & 193.9 & \textbf{80.5} & \textbf{95.2} & 10.4 & 724 & 0.7 \\
\bottomrule[0.15em]
\end{tabular}
}
\label{tab:mobile_comp}
\end{table*}
\section{Experiments}\label{sec:exp}
We apply the proposed PNAG\xspace to produce architectures under diverse latency budgets evaluated on three hardware platforms, including a mobile device (equipped with a Qualcomm Snapdragon 821 processor), a CPU processor (Intel Core i5-7400), and a GPU card (NVIDIA TITAN X).
For convenience, we use ``Architecture-$B$'' to represent the {generated} architecture that satisfies the latency budget w.r.t. $B$, \mbox{\textit{e.g.}}, PNAG\xspace-80. Our code and all the pretrained models are available at \href{https://github.com/guoyongcs/PNAG}{https://github.com/guoyongcs/PNAG}.
\subsection{Implementation Details}\label{sec:implementation}
~\indent \textbf{Search space}.
Following~\cite{cai2019once}, we use MobileNetV3~\cite{howard2019searching} as the backbone to build the search space~\cite{cai2019once,huang2020ponas}.
We divide a network into several units.
To find promising architectures, we allow each unit to have {1)} any {numbers of layers} (\mbox{\textit{i.e.}}, depth) chosen from $\{2,3,4\}$, {2)} any {width expansion ratios in each layer} (\mbox{\textit{i.e.}}, width) chosen from $\{3,4,6\}$, and {3)} any {kernel sizes} chosen from $\{3,5,7\}$.
We build the model with 5 units. Thus, there are $3 {\times} 3 = 9$ combinations of widths and kernel sizes for each layer.
\textbf{Training the supernet}.
To accelerate the training of the supernet, we follow~\cite{wu2019fbnet} to randomly choose 100 classes from the original 1,000 ImageNet classes and train the supernet with the progressive shrinking strategy~\cite{cai2019once} for 90 epochs.
We treat 80\% of these data as the training set to train the supernet and the remaining 20\% as the validation set to measure the validation accuracy of candidate architectures (we report such validation accuracy in Figs.~\ref{fig:search_direction} and~\ref{fig:pareto_curve}).
We consider the original ImageNet validation set as the test data and report the test accuracy of candidate architectures on them in all the other tables and figures.
Based on a NVIDIA V100 GPU, the training process of the supernet takes around \emph{15 GPU hours} (\mbox{\textit{i.e.}}, 0.6 GPU days).
\textbf{Training the architecture evaluator.}
We collect {$M{=}16,000$} architectures by uniformly sampling from the search space $\Omega$ (see Fig.~\ref{fig:mobile_dist}) following~\cite{cai2019once} and obtain the latency ranges on three hardware devices.
We deploy these architectures to different devices and measure the latency over a batch of images.
Specifically, we measure the latency on mobile and CPU devices with a batch size of 1. Since a single inference on GPU is too fast to measure accurately, we measure the latency with a batch size of 64 on an NVIDIA TITAN X.
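As a rough illustration of this measurement protocol, the following sketch times a forward pass over repeated runs; \texttt{run\_model} is a hypothetical stand-in for the deployed network, and the warm-up loop is an assumption (not stated above) meant to stabilize timings.

```python
import time

def measure_latency(run_model, batch_size, n_warmup=10, n_runs=100):
    """Average per-batch latency in milliseconds after warm-up runs."""
    batch = [0.0] * batch_size      # placeholder input batch
    for _ in range(n_warmup):       # warm-up to stabilize caches/clocks
        run_model(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        run_model(batch)
    return (time.perf_counter() - start) / n_runs * 1e3

# batch_size=1 for mobile/CPU measurements, 64 for GPU
lat_ms = measure_latency(lambda b: sum(b), batch_size=1)
```

The same helper would be run on each device to collect the latency of the $M$ sampled architectures.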
We compute the accuracy ${\rm Acc}(\cdot)$ on our validation set (\mbox{\textit{i.e.}}, 20\% samples of 100 selected classes in ImageNet).
We train the architecture evaluator for $250$ epochs.
The learning rate is initialized to $0.1$ and decreased to $1 {\times} 10^{-3}$ with a cosine annealing.
Following~\cite{cai2019once}, we train two predictors to predict the latency and validation accuracy, respectively.
We set the dimension of the embedding vector of budgets to 64.
We emphasize that training the architecture evaluator is very efficient and only takes less than \emph{0.2 GPU hours}.
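The cosine annealing schedule described above can be written explicitly; this is a minimal sketch assuming the standard cosine form with the stated endpoints ($0.1$ decayed to $1\times10^{-3}$ over the 250 training epochs).

```python
import math

def cosine_lr(epoch, total_epochs=250, lr_max=0.1, lr_min=1e-3):
    """Cosine-annealed learning rate, decaying from lr_max to lr_min."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

# Starts at 0.1 and reaches 1e-3 at the final epoch.
```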
\textbf{Training the architecture generator.}
We train the generator for $120$k iterations using an Adam optimizer with a learning rate of $3\times10^{-4}$.
Following ENAS~\cite{pham2018efficient}, we sample $N{=}1$ architecture at each iteration and find it works well in practice.
We select $K{=}10$ latency budgets by evenly dividing the range.
We add an entropy regularization term to the reward weighted by $1 {\times} 10^{-3}$.
Note that training the architecture generator approximately takes \emph{2 GPU hours}.
When evaluating the searched architectures, following~\cite{cai2019once,lu2021neural}, we first obtain the parameters from the OFA full network and then finetune them for 75 epochs to obtain the final performance.
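The per-iteration generator update described above (REINFORCE-style sampling with an entropy bonus weighted by $1\times10^{-3}$) can be sketched as a scalar loss. The exact PNAG objective is not reproduced here, so \texttt{reinforce\_loss} below is an illustrative placeholder.

```python
import math

def reinforce_loss(log_probs, probs, reward, entropy_weight=1e-3):
    """REINFORCE objective with an entropy bonus (to be minimized).

    log_probs: log-probabilities of the sampled architecture decisions,
    probs: the full decision distributions (used for the entropy term),
    reward: scalar reward of the sampled architecture.
    """
    pg_loss = -reward * sum(log_probs)  # policy-gradient term
    entropy = -sum(p * math.log(p) for dist in probs for p in dist if p > 0)
    return pg_loss - entropy_weight * entropy  # entropy encourages exploration
```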
\begin{figure*}[t]
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width = 0.84\columnwidth]{cpu_compare.pdf}
\caption{
Comparisons of the architectures obtained by different methods on a Core i5-7400 CPU.
}
\label{fig:cpu_compare}
\end{minipage}\hfill
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width = 0.84\columnwidth]{gpu_compare.pdf}
\caption{
Comparisons of the architectures obtained by different methods on a NVIDIA TITAN X GPU.
}
\label{fig:gpu_compare}
\end{minipage}
\end{figure*}
\subsection{Compared Methods}
{To investigate the effectiveness of the proposed method, we compare our PNAG with two variants:
}
1) \textbf{EVO} uses the evolutionary search method~\cite{real2019regularized} to perform architecture search.
2) \textbf{NAS-MO} conducts architecture search by exploiting the multi-objective reward~\cite{tan2019mnasnet}.
{We also compare our method with several state-of-the-art methods, including ENAS~\cite{pham2018efficient}, DARTS~\cite{liu2018darts}, P-DARTS~\cite{chen2019progressive}, PC-DARTS~\cite{xu2020pcdarts}, MNasNet~\cite{tan2019mnasnet}, MobileNetV2~\cite{sandler2018mobilenetv2}, MobileNetV3~\cite{howard2019searching}, FBNet~\cite{wu2019fbnet}, FBNetV2~\cite{wan2020fbnetv2}, ProxylessNAS~\cite{cai2018proxylessnas}, EfficientNet~\cite{EfficientNet}, OFA~\cite{cai2019once}, Cream~\cite{peng2020cream}, PONAS~\cite{huang2020ponas}, BigNAS~\cite{yu2020bignas}, and NSGANetV2~\cite{lu2020nsganetv2}.}
\subsection{Architecture Search for Mobile Devices}
In this experiment, we train our PNAG to produce feasible architectures for the latency budgets based on a mobile device (Qualcomm Snapdragon
821 processor).
Based on the proposed budget interpolation method in Section~\ref{sec:rep_budget}, our PNAG can flexibly generate feasible architectures for arbitrary budgets.
To evaluate our method, for simplicity, we manually choose 5 latency budgets $\{$80ms, 110ms, 140ms, 170ms, 200ms$\}$ and report the results under each of them; other budgets are equally possible.
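One plausible realization of the budget interpolation idea, assuming the generator conditions on learned budget embeddings, is to linearly interpolate between the embeddings of the two nearest trained budgets. The helper below is hypothetical and not necessarily the exact implementation.

```python
def interpolate_budget_embedding(budget, budgets, embeddings):
    """Linearly interpolate between the embeddings of the two nearest budgets.

    budgets: sorted list of the K trained budgets,
    embeddings: their learned embedding vectors (lists of floats).
    """
    if budget <= budgets[0]:
        return embeddings[0]
    if budget >= budgets[-1]:
        return embeddings[-1]
    for i in range(len(budgets) - 1):
        lo, hi = budgets[i], budgets[i + 1]
        if lo <= budget <= hi:
            w = (budget - lo) / (hi - lo)
            return [(1 - w) * a + w * b
                    for a, b in zip(embeddings[i], embeddings[i + 1])]
```

Feeding the interpolated embedding to the generator then yields an architecture for an unseen intermediate budget.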
We compare our PNAG\xspace with state-of-the-art methods given different latency budgets evaluated on the considered mobile device.
In Fig.~\ref{fig:mobile_compare}, we compare the architectures searched by different methods in terms of both accuracy and latency. We draw the following conclusions. \emph{First}, our PNAG (red line) consistently generates better architectures than the considered variants EVO and NAS-MO under diverse budgets. \emph{Second}, our best architecture (the rightmost point of the red line) yields a better trade-off between accuracy and latency than a strong baseline OFA$^*$, \mbox{\textit{i.e.}}, the best architecture reported in~\cite{cai2019once}.
For convenience, we put more detailed comparison results in Table~\ref{tab:mobile_comp}.
Given diverse latency budgets, our PNAG greatly outperforms the compared NAS methods in terms of the accuracy of the generated/searched architectures. Specifically, our PNAG-200 yields the best top-1 accuracy of 80.5\%, which is better than the best reported result in OFA~\cite{cai2019once}, namely OFA$^*$.
We also highlight that, besides the superior performance, the total training cost of our PNAG is about 0.7 GPU days\footnote{We report the training cost of each component of PNAG in Section~\ref{sec:implementation}.}, which is much more efficient than most state-of-the-art NAS methods, such as~\cite{cai2019once,peng2020cream,yu2020bignas}.
Moreover, we compare the learned/searched frontiers of different methods
{and show the comparisons of Pareto frontiers in Fig.~\ref{fig:pareto_curve}.
We plot all the architectures produced by different methods to form the Pareto frontier.
Specifically, we use the architectures searched by multiple independent runs under different budgets for NAS-MO.
For PNAG\xspace, we use linear interpolation to generate architectures that satisfy different budgets.}
From Fig.~\ref{fig:pareto_curve}, our PNAG\xspace finds a better frontier than NAS-MO due to the shared knowledge across the search process under different budgets.
We also visualize the latency histograms of the architectures evaluated on mobile devices in Fig.~\ref{fig:histogram_110} and {Fig.~\ref{fig:histogram_140}}.
{Given latency budgets of $110$ms and $140$ms, NAS-MO is prone to producing a large number of architectures that fail to satisfy the target budgets.
These results show that it is hard to hand-tune the multi-objective reward to obtain the preferred architectures.}
Instead, PNAG\xspace uses the Pareto dominance reward to encourage the architectures to satisfy the desired budget constraints.
In this sense, most architectures generated by our PNAG\xspace are able to fulfill the target budgets.
We put more visual results of latency histograms \mbox{\textit{w.r.t. }} other latency budgets in the supplementary.
\begin{table*}[t!]
\caption{
Comparisons with state-of-the-art architectures on Intel Core i5-7400 CPU. $^*$ denotes the best architecture reported in the original paper.
``-'' denotes the results that are not reported. All the models are evaluated on $224 \times 224$ images of ImageNet.
}
\centering
\resizebox{0.80\textwidth}{!}
{
\begin{tabular}{ccccccc}
\toprule [0.15em]
\multirow{2}[0]{*}{Architecture} & \multirow{2}[0]{*}{Latency (ms)} & \multicolumn{2}{c}{Test Accuracy (\%)} & \multirow{2}[0]{*}{\#Params (M)} & \multirow{2}[0]{*}{\#MAdds (M)} & Search Cost \\
\cline{3-4}
& & Top-1 & Top-5 & & & (GPU Days) \\
\midrule [0.1em]
MobileNetV2 (1.0$\times$)~\cite{sandler2018mobilenetv2} & 28.6 & 72.0 & - & 3.4 & 300 & - \\
MobileNetV3-Large (1.0$\times$)~\cite{howard2019searching} & 22.6 & 75.2 & - & 5.4 & 219 & - \\
FBNet-C~\cite{wu2019fbnet} & 25.7 & 74.9 & - & 5.5 & 375 & 9.0 \\
SGNAS-B~\cite{huang2021searching} & - & 76.8 & - & - & 326 & 0.3 \\
EVO-30 & 29.1 & 77.9 & 93.8 & 7.9 & 385 & 0.7 \\
NAS-MO-30 & 29.7 & 77.5 & 93.7 & 6.6 & 353 & 0.7 \\
PNAG\xspace-30 (Ours) & 29.7 & \textbf{78.3} & \textbf{94.1} & 7.6 & 335 & 0.7 \\
\midrule
ProxylessNAS-CPU~\cite{cai2018proxylessnas} & 34.6 & 75.3 & - & 4.4 & 438 & 8.3 \\
MnasNet-A1 (1.4$\times$)~\cite{tan2019mnasnet} & 34.6 & 77.2 & 93.5 & 6.1 & 592 & $\sim$3792 \\
EVO-35 & 34.5 & 78.5 & 94.3 & 8.2 & 354 & 0.7 \\
NAS-MO-35 & 34.7 & 78.3 & 94.0 & 7.9 & 478 & 0.7 \\
PNAG\xspace-35 (Ours) & 34.5 & \textbf{79.4} & \textbf{94.5} & 8.4 & 431 & 0.7 \\
\midrule
ResNet-18~\cite{he2016deep} & 38.6 & 69.8 & 90.1 & 11.7 & 1814 & - \\
EfficientNet B0~\cite{EfficientNet} & 39.1 & 77.3 & 93.5 & 5.3 & 390 & - \\
EVO-40 & 36.3 & 78.8 & 94.6 & 8.4 & 388 & 0.7 \\
NAS-MO-40 & 39.3 & 78.6 & 94.3 & 8.3 & 491 & 0.7 \\
PNAG\xspace-40 (Ours) & 39.6 & \textbf{79.8} & \textbf{94.9} & 9.4 & 502 & 0.7 \\
\midrule
MobileNetV2 (1.4$\times$)~\cite{sandler2018mobilenetv2} & 42.6 & 74.7 & - & 6.9 & 585 & - \\
EVO-45 & 43.2 & 79.1 & 94.6 & 9.1 & 481 & 0.7 \\
NAS-MO-45 & 43.7 & 78.8 & 94.4 & 9.3 & 626 & 0.7 \\
PNAG\xspace-45 (Ours) & 44.7 & \textbf{80.2} & \textbf{95.0} & 10.4 & 620 & 0.7 \\
\midrule
PONAS-C~\cite{huang2020ponas} & 52.2 & 75.2 & - & 5.6 & 376 & 8.8 \\
OFA$^*$~\cite{cai2019once} & 53.7 & 80.2 & 95.1 & 9.1 & 743 & 51.7 \\
EVO-50 & 47.4 & 79.3 & 94.7 & 9.1 & 511 & 0.7 \\
NAS-MO-50 & 46.7 & 78.9 & 94.4 & 9.1 & 632 & 0.7 \\
PNAG\xspace-50 (Ours) & 48.9 & \textbf{80.5} & \textbf{95.1} & 10.5 & 682 & 0.7 \\
\bottomrule[0.15em]
\end{tabular}
}
\label{tab:cpu_comp}
\end{table*}
\begin{table*}[t!]
\caption{
Comparisons with state-of-the-art architectures on NVIDIA TITAN X GPU. $^*$ denotes the best architecture reported in the original paper.
``-'' denotes the results that are not reported. All the models are evaluated on $224 \times 224$ images of ImageNet.
}
\centering
\resizebox{0.8\textwidth}{!}
{
\begin{tabular}{ccccccc}
\toprule [0.15em]
\multirow{2}[0]{*}{Architecture} & \multirow{2}[0]{*}{Latency (ms)} & \multicolumn{2}{c}{Test Accuracy (\%)} & \multirow{2}[0]{*}{\#Params (M)} & \multirow{2}[0]{*}{\#MAdds (M)} & Search Cost \\
\cline{3-4}
& & Top-1 & Top-5 & & & (GPU Days) \\
\midrule [0.1em]
ProxylessNAS-GPU~\cite{cai2018proxylessnas} & 84.7 & 75.1 & - & 7.1 & 463 & 8.3 \\
MobileNetV2 (1.0$\times$)~\cite{sandler2018mobilenetv2} & 71.6 & 72.0 & - & 3.4 & 300 & - \\
NAGO~\cite{ru2020neural} & - & 76.8 & 93.4 & 5.7 & - & 20.0 \\
{EVO}-90 & 88.9 & 77.3 & 93.1 & 5.9 & 332 & 0.7 \\
NAS-MO-90 & 89.8 & 75.4 & 92.4 & 4.9 & 266 & 0.7 \\
PNAG\xspace-90 (Ours) & 86.9 & \textbf{78.3} & \textbf{94.0} & 5.7 & 310 & 0.7 \\
\midrule
MnasNet-A1 (1.4$\times$)~\cite{tan2019mnasnet} & 112.9 & 77.2 & 93.5 & 6.1 & 592 & $\sim$3792 \\
EfficientNet B0~\cite{EfficientNet} & 115.5 & 77.3 & 93.5 & 5.3 & 390 & - \\
ENAS~\cite{pham2018efficient} & 110.8 & 73.8 & 91.7 & 5.6 & 607 & 0.5 \\
{EVO}-115 & 105.4 & 78.4 & 94.1 & 8.4 & 388 & 0.7 \\
NAS-MO-115 & 111.2 & 78.1 & 94.0 & 8.8 & 431 & 0.7 \\
PNAG\xspace-115 (Ours) & 111.2 & \textbf{79.3} & \textbf{94.6} & 8.9 & 411 & 0.7 \\
\midrule
{EVO}-140 & 135.7 & 78.9 & 94.4 & 9.1 & 481 & 0.7 \\
NAS-MO-140 & 137.2 & 78.4 & 94.1 & 8.8 & 470 & 0.7 \\
PNAG\xspace-140 (Ours) & 138.9 & \textbf{79.7} & \textbf{94.9} & 9.7 & 510 & 0.7 \\
\midrule
ResNet-50~\cite{he2016deep} & 159.8 & 76.2 & 92.9 & 25.6 & 4087 & - \\
{EVO}-165 & 164.1 & 79.1 & 94.5 & 10.7 & 597 & 0.7 \\
NAS-MO-165 & 162.6 & 78.8 & 94.4 & 10.5 & 583 & 0.7 \\
PNAG\xspace-165 (Ours) & 162.7 & \textbf{80.3} & \textbf{95.0} & 10.5 & 582 & 0.7 \\
\midrule
NASNet-A~\cite{zoph2018learning} & 162.3 & 74.0 & 91.6 & 5.3 & 564 & $\sim$3 \\
PONAS~\cite{huang2020ponas} & 182.4 & 75.2 & - & 5.6 & 376 & 8.8 \\
EfficientNet B1~\cite{EfficientNet} & 192.7 & 79.2 & 94.5 & 7.8 & 700 & - \\
OFA$^*$~\cite{cai2019once} & 204.3 & 80.2 & 95.1 & 9.1 & 743 & 51.7 \\
{EVO}-190 & 188.1 & 79.5 & 94.8 & 11.3 & 687 & 0.7 \\
NAS-MO-190 & 183.2 & 78.8 & 94.5 & 10.7 & 652 & 0.7 \\
PNAG\xspace-190 (Ours) & 185.5 & \textbf{80.4} & \textbf{95.0} & 10.4 & 640 & 0.7 \\
\bottomrule[0.15em]
\end{tabular}
}
\label{tab:gpu_comp}
\end{table*}
\subsection{Architecture Search for CPU Devices}\label{sec:result_cpu}
We further exploit our PNAG to generate architectures under the latency budgets evaluated on a CPU device (Core i5-7400). Similar to the experiments for mobile devices, we evaluate our PNAG under 5 latency budgets, \mbox{\textit{i.e.}}, $\{$30ms, 35ms, 40ms, 45ms, 50ms$\}$.
As shown in Fig.~\ref{fig:cpu_compare}, our PNAG yields a large performance improvement over the two considered variants, \mbox{\textit{i.e.}}, EVO and NAS-MO, under diverse budgets. Moreover, our PNAG also outperforms popular NAS-based architectures (MnasNet, OFA$^*$) and manually designed architectures (MobileNetV2, MobileNetV3, and EfficientNet).
As for the quantitative comparisons, in Table~\ref{tab:cpu_comp}, our PNAG consistently yields the best results across all the considered latency budgets. To be specific, given a small latency budget $B{=}35$ms, our PNAG-35 yields better accuracy than the compared NAS methods with much lower search cost. Given a relatively large budget $B{=}50$ms, our PNAG-50 yields the same accuracy (80.5\%) as the best result on mobile devices (\mbox{\textit{i.e.}}, PNAG-200). This indicates that our PNAG generalizes well across the latency budgets of different hardware platforms.
Overall, these results demonstrate that our PNAG is able to generate very competitive architectures while satisfying diverse latency budgets.
\begin{table*}[t]
\centering
\caption{Comparisons of different reward functions based on PNAG\xspace. We report the latency on mobile devices.
}
\resizebox{1.0\textwidth}{!}
{
\begin{tabular}{c|cc|cc|cc|cc|cc}
\toprule
\multirow{2}[0]{*}{Reward} & \multicolumn{2}{c|}{$B_1 {=} 80$ms} & \multicolumn{2}{c|}{$B_2 {=} 110$ms} & \multicolumn{2}{c|}{$B_3 {=} 140$ms} & \multicolumn{2}{c|}{$B_4 {=} 170$ms} & \multicolumn{2}{c}{$B_5 {=} 200$ms} \\
& \multicolumn{1}{c}{Acc. (\%)} & \multicolumn{1}{c|}{Lat. (ms)} & \multicolumn{1}{c}{Acc.} & \multicolumn{1}{c|}{Lat. (ms)} & \multicolumn{1}{c}{Acc.} & \multicolumn{1}{c|}{Lat. (ms)} & \multicolumn{1}{l}{Acc.} & \multicolumn{1}{c|}{Lat. (ms)} & \multicolumn{1}{l}{Acc.} & \multicolumn{1}{l}{Lat. (ms)} \\
\hline
\multirow{1}[0]{*}{Multi-objective Reward~\cite{tan2019mnasnet}} & 77.0 & 77.6 & 78.5 & 106.3 & 78.9 & 139.0 & 79.3 & 165.1 & 79.5 & 187.3 \\
\multirow{1}[0]{*}{Multi-objective Absolute Reward~\cite{Bender2020TuNAS}} & 78.1 & 76.8 & 78.9 & 109.2 & 79.2 & 130.1 & 79.5 & 163.6 & 79.9 & 197.5 \\
\multirow{1}[0]{*}{Pareto Dominance Reward (w/o acc. constraint)} & 73.8 & 74.4 & 73.6 & 64.9 & 74.3 & 66.5 & 73.9 & 70.0 & 74.0 & 70.8 \\
\multirow{1}[0]{*}{Pareto Dominance Reward (Ours)} & \textbf{78.4} & 79.9 & \textbf{79.5} & 106.8 & \textbf{79.8} & 127.8 & \textbf{80.3} & 167.1 & \textbf{80.5} & 193.9 \\
\bottomrule
\end{tabular}%
}
\label{tab:diff_reward}%
\end{table*}%
\begin{table*}[h]
\centering
\caption{Effect of different search strategies on the performance of PNAG\xspace. We report the accuracy on ImageNet.
}
\resizebox{0.7\textwidth}{!}
{
\begin{tabular}{c|c|c|c|c|c}
\toprule
Search Strategy & \multicolumn{1}{c|}{$B_1 {=} 80$ms} & \multicolumn{1}{c|}{$B_2 {=} 110$ms} & \multicolumn{1}{c|}{$B_3 {=} 140$ms} & \multicolumn{1}{c|}{$B_4 {=} 170$ms} & \multicolumn{1}{c}{$B_5 {=} 200$ms} \\
\hline
Repeated Independent Search & 76.7 & 78.6 & 79.1 & 79.4 & 79.7 \\
Pareto Frontier Search & \textbf{78.4} & \textbf{79.5} & \textbf{79.8} & \textbf{80.3} & \textbf{80.5} \\
\bottomrule
\end{tabular}%
}
\label{tab:pareto_learning}%
\end{table*}%
\begin{table}[t]
\centering
\caption{
Comparisons of the time cost for architecture generation/design among different methods.}
\resizebox{0.40\textwidth}{!}
{
\begin{tabular}{c|cccc}
\toprule
Method & PNAG\xspace & PC-DARTS & ENAS & DARTS \\
\midrule
Time Cost & \textbf{$\leq$5 s} & 2 hours & 12 hours & 4 days \\
\bottomrule
\end{tabular}
}
\label{tab:generation_cost}
\end{table}
\subsection{Architecture Search for GPU Devices}\label{sec:result_gpu}
Besides the mobile and CPU devices, we also consider GPUs and adopt the latency on them as the computational budget. Since inference on a GPU is much faster than on mobile processors and CPUs, we measure the latency of deep models on an NVIDIA TITAN X GPU with a batch size of 64.
In these experiments, we compare different architecture design/search methods under the budgets of $\{$90ms, 115ms, 140ms, 165ms, 190ms$\}$.
As shown in Fig.~\ref{fig:gpu_compare}, similar to the results on mobile and CPU devices, our PNAG outperforms existing methods and the constructed variants by a large margin. We also report the detailed comparisons in terms of accuracy and computational cost in Table~\ref{tab:gpu_comp}. Again, compared with both the hand-crafted methods (\mbox{\textit{e.g.}}, MobileNetV2~\cite{sandler2018mobilenetv2} and EfficientNet~\cite{EfficientNet}) and NAS methods (\mbox{\textit{e.g.}}, ENAS~\cite{pham2018efficient} and MnasNet~\cite{tan2019mnasnet}), our PNAG consistently produces better architectures under diverse budgets.
These results further emphasize the generalization ability of our PNAG to the latency budgets evaluated on different hardware devices.
\section{Further Experiments}
{In this section, we conduct ablation studies on our method.
We then compare the architecture generation cost of different methods and discuss the impact of the number of considered budgets $K$.}
\subsection{Effect of the Pareto Dominance Reward}
We investigate the effectiveness of the Pareto frontier learning strategy and the Pareto dominance reward.
From Tables~\ref{tab:diff_reward} and~\ref{tab:pareto_learning}, we observe that
the Pareto frontier learning strategy tends to find better architectures than independent search processes, owing to the knowledge shared across the search processes under different budgets.
Compared with two existing multi-objective rewards~\cite{tan2019mnasnet,Bender2020TuNAS},
the Pareto dominance reward encourages the generator to produce architectures that satisfy the considered budget constraints.
Moreover, if we do not consider the accuracy constraint in the Pareto dominance reward, the generated architectures have low latency but poor accuracy.
With both the Pareto frontier learning strategy and the Pareto dominance reward, our method yields the best results under all budgets.
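For intuition only, a reward of the flavor discussed above can be sketched as an accuracy term with explicit penalties for violating the latency budget or an accuracy reference. This hypothetical form is purely illustrative and is not the exact Pareto dominance reward.

```python
def pareto_dominance_reward(acc, latency, budget, acc_ref, penalty=1.0):
    """Illustrative reward: favor accuracy, but penalize architectures that
    exceed the latency budget or fall below a reference accuracy."""
    r = acc
    if latency > budget:                       # budget constraint violated
        r -= penalty * (latency / budget - 1)
    if acc < acc_ref:                          # accuracy constraint violated
        r -= penalty * (acc_ref - acc)
    return r
```

Without the accuracy term, the reward would be maximized by trivially fast, inaccurate architectures, consistent with the ablation in Table~\ref{tab:diff_reward}.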
\begin{table}[t]
\centering
\caption{Effect of $K$ on the generation performance of PNAG\xspace.
We compare the {generated} architectures using different values of $K$ with the target latency $B{=}140$ms on ImageNet.}
\resizebox{0.42\textwidth}{!}
{
\begin{tabular}{c|ccccc}
\toprule
$K$ & 1 & 2 & 5 & 10 & 30 \\
\midrule
Top-1 Acc. (\%) & 78.5 & 79.1 & 79.4 & \textbf{79.8} & \textbf{79.8} \\
\bottomrule
\end{tabular}%
}
\label{tab:effect_k}%
\end{table}%
\subsection{Comparisons of Architecture Generation Cost}
{
In this part, we compare the architecture generation cost of different methods for 5 different budgets and show the comparison results in Table~\ref{tab:generation_cost}.
Given an arbitrary target budget, existing NAS methods need to perform an independent search to find feasible architectures.
By contrast, since PNAG\xspace directly learns the whole Pareto frontier, we are able to generate promising architectures based on a learned generator model via \emph{inference}.
Thus, the architecture generation cost of PNAG\xspace is much less than other existing methods (See results in Table~\ref{tab:generation_cost}). In this sense, we are able to greatly accelerate the architecture design process in real-world scenarios.
These results demonstrate the efficiency of our PNAG\xspace in generating architectures.
}
\subsection{Effect of $K$ on {the Generation Performance}}\label{sec:effect_k}
We investigate the effect of $K$ on the generation performance of PNAG\xspace on the mobile device.
Note that we evenly select $K$ budgets from the range of latency.
To investigate the effect of $K$, we consider several candidate values of {$K \in \{ 1, 2, 5, 10, 30 \}$.}
We show the Top-1 accuracies of the architectures generated by PNAG\xspace with different $K$ on ImageNet in Table~\ref{tab:effect_k}.
Since a small number of selected budgets $K$ cannot accurately approximate the ground-truth Pareto frontier or provide enough shared knowledge between different search processes, our method yields poor results with $K \leq 2$.
Increasing $K$ to 5 or more greatly improves the performance of the generated architectures. From Table~\ref{tab:effect_k}, our method yields the best result when $K\geq10$, and we use this setting in the experiments.
\section{Conclusion}
In this paper,
we focus on designing effective and feasible architectures via an architecture generation process. To this end, we have proposed a novel Pareto-aware Neural Architecture Generator (PNAG\xspace) which only needs to be trained once and dynamically generates promising architectures satisfying any given budget via inference.
Unlike existing methods, we seek to learn the whole Pareto frontier instead of finding a single or several discrete Pareto optimal architectures.
Based on the learned Pareto frontier, our PNAG\xspace consistently outperforms existing NAS methods across diverse budgets.
Extensive experiments on three hardware platforms (\mbox{\textit{i.e.}}, mobile devices, CPU, and GPU) demonstrate the effectiveness of the proposed method.
\bibliographystyle{IEEEtran}
\section{Introduction}
\yp{Pyramidal neurons, the most ubiquitous type of neurons in the mammalian neocortex, each feature tens of thousands of excitatory convergent synaptic inputs. Most incoming synaptic signals terminate on sub-micron bulbs known as dendritic spines \cite{nimchinsky2002structure}. Spines exhibit a significant degree of morphological plasticity \cite{kasai2010structural,holtmaat2009experience}, with pathological spine formation implicated in disorders such as autism spectrum disorder and Alzheimer's disease \cite{penzes2011dendritic}. Normal synaptic function, including the dynamic process of spine remodeling, requires intracellular transport for maintenance \cite{da2015positioning}. Micron-sized vesicles carrying surface proteins are squeezed through the submicron-sized neck, undergoing strong deformations before fusing with the spine head. Recent experiments have shown that movement is not always unidirectional (translocation): vesicles may also stall (corking) or be pushed back out (rejection) \cite{park2006plasticity,wang2008myosin}. The mechanisms underlying these directional changes are not well understood.
To understand this question in greater detail, we use two primary considerations. First, we explicitly include intracellular transport in a vesicle trafficking model to understand how motor forces affect vesicle movement. This idea is common and there are many theoretical studies in this direction. Some use Markov processes representing motor complexes to understand the distribution of motor complex velocities \cite{muller2008tug,kunwar2011mechanical} and mean first passage times to transport targets on dendritic morphologies \cite{bressloff2009directed,newby2009directed,newby2010random,newby2011asymptotic,bressloff2013metastability}. More detailed studies include individual motors as part of a larger complex or population, which generatively produces bidirectional motion despite the assumption of symmetry \cite{julicher1995cooperative,guerin2011motion,allard2019bidirectional,portet2019deciphering}. However, the effect of constrictions on \yp{cargo} dynamics has typically been held constant or neglected. Indeed, intracellular movement into closed spaces is the second fundamental assumption of this study.
In contrast to open constrictions, which feature an infinitely long tube with a small constricted region, closed constrictions feature a semi-infinite tube that is closed at one end. How constricted and closed spaces affect molecular motor dynamics is a question that has been less studied. Earlier studies of constricted motion often consider open ends; related problems appear in manufactured elastic capsules \cite{dawson2015extreme,duncanson2015microfluidic}, hydrogels \cite{li2015universal}, the movement of living cells \cite{bagnall2015deformability,byun2013characterizing,gabriele2010simple}, and axonal transport \cite{zimmermann1996accumulation,koehnle1999slow,walker2019local}. However, \cite{fai2017active} found that constrictions enrich the dynamics of motor populations, allowing motors to switch between multidirectional and unidirectional motion. This paper aims to thoroughly explore the dynamics of the model by classifying the bifurcations of the underlying ODE.}
The paper is organized as follows. In Section \ref{sec:lubrication}, we briefly review the derivation of the lubrication model and its nondimensionalization, and we discuss two equivalent versions, one of which (the ``slow'' subsystem) is the model considered in \cite{fai2017active}, and the other of which (the ``fast'' subsystem) is the model we explore in depth in this paper. In Section \ref{sec:bifurcations}, we numerically establish the existence and robustness of multistability through a bifurcation analysis of the ``fast'' subsystem. We corroborate some numerical results in Section \ref{sec:existence}, where we analytically establish the existence and stability of particular velocities as a function of key parameters. We conclude the paper with a discussion that includes estimates of realistic parameter regimes of this model, and the resulting behaviors predicted for different dendritic spines. All code, data, and documentation for reproducing figures in this paper are available on our GitHub repository at \url{https://github.com/youngmp/park_fai_2020}.
\section{Lubrication Model}\label{sec:lubrication}
\yp{The lubrication model we consider in this paper is an idealized model of vesicle translation through the neck of a dendritic spine}. In contrast to previous studies of transport through constrictions in periodic or unbounded tubes, we consider constrictions closed at one end to model transport into spine-like geometries \yp{(Figure \ref{fig:geometry}A)}.
\begin{figure}[ht!]
\makebox[\textwidth][c]{
\centering
\includegraphics[width=.75\textwidth]{cylinder_with_motors.pdf}}
\caption{\yp{Idealized dendritic spine and molecular motors. A: Three dimensional spine geometry with the vesicle (black sphere) shown at the center of the constriction. The black arrow shows the direction towards the spine head $R_p=0.96$, $R_c=1.22$. See Figure \ref{fig:constriction}A for additional details on the geometry. B: Transverse cross-section. C: Vertical cross-section with molecular motors. Blue: upwards-preferred motors. Red: downwards-preferred motors.}}\label{fig:geometry}
\end{figure}
Before deriving the lubrication model, we first define variables and parameters. The vesicle center of mass is $Z$ (Figure \ref{fig:geometry}C), and its radius is $R_p$ \yp{(Figure \ref{fig:geometry}B)}. The function $p(z)$ is the pressure exerted on the vesicle at position $z\in I(Z)$, \yp{where $I(Z) = [Z-R_p,Z+R_p]$ and the center of mass position $Z=0$ corresponds to the thinnest portion of the constriction (see Figure \ref{fig:constriction}A, B, where the vesicles (black circles) are drawn at position $Z=0$). The function $h(z)$ is the distance between the vesicle surface and the spine wall at position $z$ (Figure \ref{fig:geometry}B).
We now closely follow the derivation of the lubrication model in \cite{fai2017active}. To start, we assume that the constriction radius $R_c$ is close to the vesicle radius $R_p$. Therefore, the minimum distance between the vesicle surface and spine wall is very small, i.e., $\min_{z\in I(Z)}\{h(z)\} \ll R_c$. In this scenario, fluid backflow surrounds objects entering closed constrictions. This backflow introduces a large velocity gradient, making particular terms dominate in the Navier-Stokes equations, allowing a reduction of the equations using lubrication theory \cite{acheson1991elementary}. Applied to our problem, lubrication theory yields,
\begin{equation*}
u(z) = \frac{1}{2\mu} \frac{\partial p(z)}{\partial z} r(r-h(z)) + \frac{U}{h(z)} r,
\end{equation*}
where $u(z)$ is the fluid velocity at position $z$, $r \in[0,R_c-R_p]$ is the radial coordinate in the thin fluid layer, $h(z)$ is the maximum radial thickness of the fluid layer at position $z$, and $U = dZ/dt$ is the vesicle velocity in the $Z$ direction. We will often refer to $h(z)$ as height, not to be confused with the $Z$-position.
By incompressibility, the flux $Q$ through the gap must be equal through each cross-section, so
\begin{equation}\label{eq:flux}
Q = 2\pi R_c \left( -\frac{h^3}{12\mu}\frac{\partial p}{\partial z} + \frac{1}{2} U h \right) = \text{const.}
\end{equation}
Rewriting \eqref{eq:flux} in terms of $\partial p/\partial z$ and integrating yields,
\begin{equation}\label{eq:pressure}
\frac{p(z)-p_0}{6\mu} = U \int_{Z-R_p}^z \frac{1}{h^2(s)} ds - \frac{2Q}{2\pi R_c} \int_{Z-R_p}^z \frac{1}{h^3(s)}ds.
\end{equation}
Setting $z=Z+R_p$ in Equation \eqref{eq:pressure} determines the flow rate $Q$ in terms of the pressure drop $\Delta p:=p(Z+R_p)-p_0$, which is a function of the applied force $F$ by $\Delta p = F/(\pi R_p^2)$. This results in the equation,
\begin{equation}\label{eq:Q}
Q = 2\pi \yp{R_c} \left(U \int_{\yp{I(Z)}} \frac{1}{h^2(s)} ds - \frac{F(U)}{6\pi \yp{R_p}^2\mu}\right)\left/ \left(2 \int_{\yp{I(Z)}} \frac{1}{h^3(s)} ds\right)\right. ,\\
\end{equation}
where $F(U)$ is the net force from the molecular motors and is a key feature that contributes to the tug-of-war dynamics of the model. By conservation of mass, the fluid dragged forward by the vesicle balances the backflow $Q$:
\begin{equation}\label{eq:Q2}
Q = -\pi R_c^2 U,
\end{equation}
where $R_c$ is the radius of the constriction (Figure \ref{fig:geometry}B). To close the system of equations for $p(z)$, $h(z)$, $Q$, and $U$, we define the constitutive law relating height to pressure:
\begin{equation}\label{eq:h}
h(z) = \tilde R_c(z) - \sqrt{R_p^2 - (z-Z)^2} + C[p(z)-p_0],
\end{equation}
where we assume that the vesicle is approximately spherical, $C$ is the compliance of the vesicle, and $\tilde{R}_c(z)$ is the radius of the channel at position $z$. Note that $\min_z\{\tilde{R}_c(z)\}=R_c$ as long as $0 \in I(Z)$. Equations \eqref{eq:flux}--\eqref{eq:h} constitute the reduced axisymmetric model of vesicle trafficking.
}
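For a rigid vesicle ($C=0$) in a straight channel of radius $R_c$, the flow rate in Equation \eqref{eq:Q} can be evaluated by direct quadrature. The sketch below uses trapezoidal integration over $I(Z)$ and is only an illustration of how the gap integrals enter; the parameter values follow Figure \ref{fig:constriction}A.

```python
import math

def flux(U, F, Z, Rp=0.96, Rc=1.22, mu=1.2, n=400):
    """Backflow Q of Eq. (4) for a rigid vesicle (C = 0) in a straight
    channel of radius Rc, using trapezoidal quadrature on I(Z)."""
    def h(z):  # gap between vesicle surface and channel wall
        return Rc - math.sqrt(max(Rp**2 - (z - Z)**2, 0.0))
    dz = 2 * Rp / n
    zs = [Z - Rp + dz * i for i in range(n + 1)]
    I2 = sum((1 / h(a)**2 + 1 / h(b)**2) / 2 * dz for a, b in zip(zs, zs[1:]))
    I3 = sum((1 / h(a)**3 + 1 / h(b)**3) / 2 * dz for a, b in zip(zs, zs[1:]))
    return 2 * math.pi * Rc * (U * I2 - F / (6 * math.pi * Rp**2 * mu)) / (2 * I3)
```

Note that the integrands are bounded here since $\min_z h(z) = R_c - R_p > 0$; combining this with Equation \eqref{eq:Q2} closes the system for $U$.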
While the viscosity of water is on the order of $\mu=0.69$\si{.m.Pa.s} at a body temperature of 37\si{.\degreeCelsius}, proteins, filaments, and organelles densely pack the intracellular environment \cite{park2006plasticity,yuste2010dendritic,gray1959axo}, which may increase the effective viscosity by several orders of magnitude. To reflect this assumption, we take $\mu=1.2$\si{.m.Pa.s} \cite{fai2017active}. The vesicles we consider in this paper are recycling endosomes, which serve to replenish surface proteins and vary from \yp{1--2\si{.\um}} in diameter \cite{da2015positioning}. Recycling endosomes are much larger than other vesicles commonly found in dendritic spines. Other vesicles are on the order of ten to hundreds of nanometers and may serve other functions in spines (based on data from VAST Lite \cite{berger2018vast}). For simplicity, and because we do not consider non-recycling endosomes, we will refer to recycling endosomes as vesicles throughout this paper.
\begin{figure}[ht!]
\makebox[\textwidth][c]{
\centering
\includegraphics[width=1.25\textwidth]{constriction.pdf}}
\caption{Constriction geometry and resulting dynamics for different parameter sets. A: The initial spine diameter (6\si{.\um}) decreases to the neck radius $R_c=1.22$\si{.\um}. The vesicle (black circle, radius $R_p=0.96$\si{.\um}) begins at the base of the channel (dashed vertical gray line) and moves in the direction of the arrow for initial condition 1. B, C: Resulting velocity $U$ (\si{.\um/s}) and position $Z$ (\si{.\um}) plotted over time (s) for two different initial conditions $(U_0,Z_0)=(0.43 \si{.\um/s},-5\si{.\um})$ (black) and $(-0.3 \si{.\um/s},-5\si{.\um})$ (red). We use the parameters $\yp{\phi}=0.57$, $\pi_1=1$, $\pi_3=1$, $\pi_4=4.7$, $\pi_5=0.1$, $\pi_6=10$, $F_0=50$. D, E, F: same information as A, B, C, but with parameters $R_c=2.15$\si{.\um}, $R_p=1.5$\si{.\um}, $\yp{\phi}=0.54$, $\pi_1=1$, $\pi_3=1$, $\pi_4=4.7$, $\pi_5=0.02$, $\pi_6=10$, $F_0=200$. Initial spine diameter is 6\si{.\um}. The two initial conditions are $(U_0,Z_0)=(0.17 \si{.\um/s},-5\si{.\um})$ (black) and $(-0.1 \si{.\um/s},0\si{.\um})$ (red). Simulation parameters $\varepsilon=1$, \texttt{dt}=0.02, integrated \yp{numerically} (see Appendix \ref{a:integration}).}\label{fig:constriction}
\end{figure}
\yp{Many studies assume constant forces \cite{adrian2014barriers,kusters2014forced}, but in reality the forces generated by molecular motors are dependent on quantities such as the cargo velocity. To capture this effect, we include a biophysical model of the forces generated by two species of myosin motors that are likely to dominate transport into spines \cite{da2015positioning}.} In our model, the two species are identical except that one prefers to push the vesicle up towards the spine head and the other prefers to push the vesicle down away from the spine head.
\yp{Forces exerted by each species are described entirely using the standard convention of force-velocity curves, where the motor forces depend on the cargo velocity (Figure \ref{fig:fv})}. Following the notation of \cite{fai2017active}, the net motor force is written $F(U) = \phi F_{-A}(U) + (1-\phi) F_{A}(U)$, where $F_{-A}(U)$ and $F_A(U)$ are the force-velocity curves of motors that push towards and away from the spine head, respectively. The parameter $\phi$ represents the ratio of motor populations: $\phi=0$ corresponds to only downwards-pushing motors, $\phi=1$ corresponds to only upwards-pushing motors, and $\phi=0.5$ corresponds to equal numbers of motors pushing up and down.
For a given species, when the vesicle moves in the preferred direction, the motors attach and detach with intrinsic rates $\alpha$ and $\beta$, respectively. In the non-preferred direction, the motors not only detach due to the rate $\beta$ but are subject to yield effects: the motors extend up to a finite extension, beyond which the motors yield and no longer exert a force. The force $p(z)$ exerted by each motor depends on its position $z$ and is generally a monotonically increasing function with $p(0)=0$. In the present study, we use
\begin{equation}\label{eq:motor_force}
p(z)=p_1(e^{\gamma z}-1),
\end{equation}
\begin{figure}[ht!]
\centering
\includegraphics[width=.75\textwidth]{myosin.pdf}
\caption{\yp{Microscopic motor dynamics of a downwards-preferred motor. The $x$-axis represents the $Z$ coordinate of the motor head relative to its base. A, B: when the vesicle (gray) moves in the preferred direction of a motor (red hatched), the motor attaches with a rate $\alpha$ and detaches with a rate $\beta$. C, D: when the vesicle moves in the non-preferred direction of a motor, the motor attaches with a rate $\alpha$ and detaches with rate $\beta$, but has an additional mechanism of detachment when the motor extends past $Z=B$}.}\label{fig:myosin}
\end{figure}
where $p_1$ and $\gamma$ are the motor force parameters. Note that the position $z$ in Equation \eqref{eq:motor_force} represents the relative position of an individual motor, which is distinct from the $z$ used in the height function Equation \eqref{eq:h}; because we focus only on the mean-field dynamics, we will no longer reference $p(z)$, and any further reference to the position $z$ will refer to the absolute position used in Equation \eqref{eq:h}. Figure \ref{fig:myosin} contains a brief description of the microscopic motor dynamics. With this choice of force-extension, in the limit of large motor number, the forces in the preferred and non-preferred directions are functions of velocity. For upwards-pushing motors, the force-velocity curve, $F_{-A}(U)$, is given by
\begin{equation}
F_{-A}(U) = \begin{cases}
\frac{\alpha n_0 p_1}{\alpha c(U) + \beta} \frac{e^{\gamma A} \left(1-e^{\beta(B-A)/U} e^{\gamma(B-A)}\right) - \left(1- e^{\beta(B-A)/U}\right)(1+\gamma U/\beta)}{1+\gamma U/\beta}, & U < 0\\
\frac{\alpha n_0 p_1}{\alpha+\beta}\frac{e^{\gamma A}-1 - \gamma U/\beta}{1+\gamma U/\beta}, & U \geq 0
\end{cases},\label{eq:fv0}
\end{equation}
where $c(U)=1-\exp[\beta(B-A)/U]$. Because the downwards-pushing motors follow the same rules but with opposite signs of force and velocity, it follows that $F_{A}(U) = -F_{-A}(-U)$. We refer the reader to \cite{fai2017active,hoppensteadt2012modeling} for details on the derivation of the force-velocity functions $F_{-A}(U)$ and $F_{A}(U)$. \yp{We show the nondimensional version of the force-velocity curves (Equation \eqref{eq:fv}) in Figure \ref{fig:fv}}.
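As a sanity check on Equation \eqref{eq:fv0}, the curve can be implemented directly. The following Python sketch uses illustrative parameter values (assumptions, not fitted values; they are chosen so that $\gamma A = 4.7$ and $\gamma(B-A)=0.1$, matching the $\pi_4$ and $\pi_5$ values used later), and writes the yield exponentials in the decaying form $e^{\beta(B-A)/U}$ for $U<0$, consistent with the definition of $c(U)$ and with continuity at $U=0$:

```python
import math

# Illustrative motor parameters (assumptions, not values from the text),
# chosen so that gamma*A = 4.7 and gamma*(B - A) = 0.1.
alpha, beta = 10.0, 40.0     # attachment/detachment rates (1/s)
gamma, A, B = 1.0, 4.7, 4.8  # force-extension rate, attachment/yield positions
p1, n0 = 1.0, 30.0           # force scale and motor number

F0 = (math.exp(gamma*A) - 1.0)*alpha*p1*n0/(alpha + beta)  # stall force

def F_minus_A(U):
    """Force-velocity curve of upwards-pushing motors, Equation (eq:fv0)."""
    if U >= 0:
        return alpha*n0*p1/(alpha + beta) \
            * (math.exp(gamma*A) - 1.0 - gamma*U/beta)/(1.0 + gamma*U/beta)
    y = math.exp(beta*(B - A)/U)        # yield exponential; decays as U -> 0^-
    c = 1.0 - y                         # c(U)
    num = math.exp(gamma*A)*(1.0 - y*math.exp(gamma*(B - A))) \
        - (1.0 - y)*(1.0 + gamma*U/beta)
    return alpha*n0*p1/(alpha*c + beta) * num/(1.0 + gamma*U/beta)
```

Continuity at $U=0$, where both branches approach the stall force $F_0$, provides a quick consistency check of the implementation.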
\subsection{Nondimensionalized Lubrication Model}
\begin{figure}
\makebox[\textwidth][c]{
\centering
\includegraphics[width=1.25\textwidth]{fv.pdf}}
\caption{\yp{Example force-velocity curves. A, B, C: as parameters $\phi$ and $\zeta$ vary, the underlying force-velocity curves and viscous drag forces change. Blue dashed: proportion of the motor force from upwards-pushing motors. Red dashed: proportion of the motor force from downwards-pushing motors. Purple: total force from molecular motors. Gray: total force from viscous drag. Black dots indicate intersections between the motor forces and viscous drag. D, E, F: respective total force including viscous drag, i.e., plots of $\phi F_{-A}(U)+(1-\phi)F_A(U) - \zeta U$. Black dots indicate force-balance (equilibria), and arrows indicate stability. The changing numbers of equilibria as a function of parameters indicate the loss or gain of multistability. We use the parameters $\pi_1=1$, $\pi_3=1$, $\pi_4=4.7$, $\pi_5=0.1$, $\pi_6=10$.}}\label{fig:fv}
\end{figure}
To enable an analysis of the dynamics of \cref{eq:pressure,eq:Q,eq:Q2,eq:h}, we first reduce the equations: we substitute Equation \eqref{eq:pressure} into Equation \eqref{eq:h} and Equation \eqref{eq:Q2} into Equation \eqref{eq:Q}, yielding a system of two equations for the velocity $U$ and the height between the vesicle and the constriction wall $h(z)$:
\begin{align}
U &= \frac{F(U)}{6\pi \yp{R_p} \mu}\frac{1}{\int_{Z-R_p}^{Z+R_p} \frac{\yp{R_p R_c}}{h^3(s)} + \frac{\yp{R_p}}{h^2(s)} ds},\label{eq:reduced_lub1}\\
h(z) &= \tilde R_c(z) - \sqrt{R_p^2 - (z-Z)^2} + C 6\mu \left[ U \int_{Z-R_p}^{z}\frac{1}{h^2(s)}ds - \frac{2Q}{2\pi R_c}\int_{Z-R_p}^{z}\frac{1}{h^3(s)}ds \right].\label{eq:reduced_lub2}
\end{align}
Next, we nondimensionalize Equations \eqref{eq:reduced_lub1} and \eqref{eq:reduced_lub2} and take $\widetilde z = z/R_p$, $\widetilde h = h/R_p$, and $\widetilde U = 6\pi R_p \mu U/F_0$, where $F_0 = (\exp(\gamma A)-1)\alpha p_1 n_0 / (\alpha + \beta)$ is the stall force. Note that here we use tildes to denote dimensionless quantities (tildes will be dropped later on). Plugging the nondimensionalized terms into Equations \eqref{eq:reduced_lub1} and \eqref{eq:reduced_lub2} yields,
\begin{align}
\widetilde U &=\widetilde F(\widetilde U)/\tilde \zeta(\widetilde Z) \label{eq:nondim_u},\\
\widetilde h(\widetilde Z + \widetilde z) &= \tilde R_c(\widetilde Z+ \widetilde z)/R_p - \sqrt{1-\widetilde{z}^2} + \pi_2 \widetilde U \int_{-1}^{\widetilde{z}} \left[\widetilde h^{-2}(\widetilde Z + s) + \yp{\pi_1}\widetilde h^{-3}(\widetilde Z + s) \right]ds\label{eq:nondim_h},
\end{align}
where \yp{$\pi_1=R_c/R_p$}, $\pi_2 = CF_0/(\pi R_p^3)$, and
\begin{equation}\label{eq:zeta}
\tilde \zeta(\widetilde Z) = \int_{-1}^1 \left[\widetilde h^{-2}(\widetilde Z + s) + \yp{\pi_1} \widetilde h^{-3}(\widetilde Z + s) \right]ds.
\end{equation}
The function $\tilde \zeta$ is the viscous drag coefficient produced by the constriction geometry. We show examples of this function and discuss its importance for this system's dynamics in the next section. \yp{Although the nondimensional drag term $\tilde{\zeta}$ (Equation \eqref{eq:zeta}) is purely geometrical, the drag force itself is a direct consequence of the fluid flow. The dimensional drag in Equation \eqref{eq:reduced_lub1} includes a factor of $6\pi R_p \mu$, and the fluid viscosity $\mu$ has been absorbed into the nondimensionalization. The purely geometrical term $\tilde{\zeta}$ is simply a correction factor to the Stokes drag law that quantifies the degree of confinement. Indeed, this is a consequence of the higher force needed to sustain the large velocity gradients in the narrow gaps.}
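For reference, Equation \eqref{eq:zeta} is straightforward to evaluate numerically. The sketch below assumes the rigid-vesicle ($\pi_2=0$) gap $\tilde h(\tilde Z+s)=\tilde R_c(\tilde Z+s)/R_p-\sqrt{1-s^2}$; the callable \texttt{channel\_radius}, which returns the nondimensional channel radius, is a placeholder for any particular geometry:

```python
import numpy as np

def drag(Z, channel_radius, pi1, n=4001):
    """Geometric drag zeta(Z) of Equation (eq:zeta) in the rigid-vesicle limit,
    with gap h(Z+s) = channel_radius(Z+s) - sqrt(1-s^2); trapezoid rule."""
    s = np.linspace(-1.0, 1.0, n)
    h = channel_radius(Z + s) - np.sqrt(np.clip(1.0 - s**2, 0.0, None))
    f = h**-2 + pi1*h**-3
    ds = s[1] - s[0]
    return ds*(0.5*(f[0] + f[-1]) + f[1:-1].sum())
```

For a straight channel the drag is independent of $Z$, and a narrower gap yields a larger drag, as expected from the confinement argument above.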
The nondimensionalized net motor force $\widetilde F(\widetilde U) = \phi \widetilde F_{-A}(\widetilde U) + (1-\phi)\widetilde F_A(\widetilde U)$ consists of two functions $\widetilde F_{-A}(\widetilde U)$ and $\widetilde F_A(\widetilde U)$, which are related by $\widetilde F_A(\widetilde U)=-\widetilde F_{-A}(-\widetilde U)$ by the same symmetry argument as in the dimensional equations. It is straightforward to show that the nondimensionalized force-velocity curve is
\begin{equation}\label{eq:fv}
\widetilde F_A (\widetilde U) = \left\{ \begin{matrix}
-\frac{1+\pi_6\widetilde U(e^{\pi_4}-1)^{-1}}{1-\pi_6\widetilde U}, & \text{if}\quad \widetilde U < 0\\
\frac{-(\pi_3+1)}{\pi_3(1-e^{-\pi_5/\pi_6 \widetilde U}) + 1} \frac{[e^{\pi_4}(1-e^{\pi_5}e^{-\pi_5/\pi_6\widetilde U})]-(1-\pi_6\widetilde U)(1- e^{-\pi_5/\pi_6 \widetilde U})}{(e^{\pi_4}-1)(1-\pi_6\widetilde U)}, & \text{if} \quad \widetilde U \geq 0.
\end{matrix}
\right.
\end{equation}
The function includes numerous parameters representing various microscopic motor properties: $\pi_3 = \alpha/\beta$ (ratio of attachment and detachment rates), $\pi_4 = \gamma A$ (the nondimensional attachment position), $\pi_5 = \gamma(B-A)$ (the maximum displacement of a motor in its non-preferred direction), and $\pi_6 = (F_0/[6\pi R_p \mu])/(\beta/\gamma)$ (the ratio of velocity scales between translocation and motor adhesion dynamics). When conversions back to dimensional forces are needed, we write $F_X = \widetilde F_X F_0$ for $X=A,-A$. \yp{For a fixed set of $\pi_i$, $i=3,4,5,6$, Figure \ref{fig:fv} shows how the force-velocity curves change as a function of $\phi$ and $\zeta$}.
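The nondimensional curves translate directly into code. Below is a minimal Python sketch of Equation \eqref{eq:fv} and the symmetry relation, with parameter defaults taken from the values used in Figure \ref{fig:fv}:

```python
import numpy as np

def F_A(U, p3=1.0, p4=4.7, p5=0.1, p6=10.0):
    """Nondimensional force-velocity curve of downwards-pushing motors, Eq. (eq:fv).
    Defaults pi_3..pi_6 are the values used in the figures."""
    if U < 0:
        return -(1.0 + p6*U/np.expm1(p4))/(1.0 - p6*U)
    e = 0.0 if U == 0 else np.exp(-p5/(p6*U))   # e^{-pi5/(pi6 U)} -> 0 as U -> 0+
    num = np.exp(p4)*(1.0 - np.exp(p5)*e) - (1.0 - p6*U)*(1.0 - e)
    return -(p3 + 1.0)/(p3*(1.0 - e) + 1.0) * num/(np.expm1(p4)*(1.0 - p6*U))

def F_mA(U, **kw):
    """Upwards-pushing motors by symmetry: F_{-A}(U) = -F_A(-U)."""
    return -F_A(-U, **kw)

def net_force(U, phi, **kw):
    """Net motor force F(U) = phi F_{-A}(U) + (1 - phi) F_A(U)."""
    return phi*F_mA(U, **kw) + (1.0 - phi)*F_A(U, **kw)
```

In these units the stall force is one: $\widetilde F_A(0)=-1$ and $\widetilde F_{-A}(0)=1$, so the net force at $\widetilde U=0$ reduces to $2\phi-1$, which vanishes at the equal motor ratio $\phi=0.5$.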
\subsection{Fast-Slow Lubrication Model}
From this point on, we work exclusively with the nondimensional system unless explicitly stated. Therefore, we write $Z = \widetilde Z$, $U = \widetilde U$, $h=\widetilde h$, $\zeta=\tilde\zeta$ and $F=\widetilde F$. Note that Equation \eqref{eq:nondim_h} includes a term to account for vesicle compliance with a prefactor of $\pi_2$. Representative parameters of vesicle compliance \yp{in the case of a spherical vesicle} reveal the nondimensional compliance $\pi_2$ to be relatively small, on the order of $\pi_2 \approx 0.09$ \cite{fai2017active}. To a first approximation, we take $\pi_2 \approx0$, which significantly simplifies Equations \eqref{eq:nondim_u} and \eqref{eq:nondim_h}. \yp{This approximation means that we assume a rigid, spherical vesicle. We refer the reader to the discussion in Section \ref{sec:limitations} for details regarding this choice}.
The low compliance limit yields the fast-slow system,
\begin{equation}\label{eq:fs1}
\begin{split}
\frac{dZ}{dt} &= U,\\
\varepsilon\frac{dU}{dt} &= F(U) - \zeta(Z) U.
\end{split}
\end{equation}
Here, $F$ is the dimensionless net motor force, $U$ is the dimensionless vesicle velocity, $Z$ is the dimensionless vesicle position, $\zeta$ is the dimensionless drag that captures information about the constriction geometry (Equation \eqref{eq:zeta}), and $\varepsilon$ is a dimensionless mass term, which equals zero in the overdamped limit.
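The simulations below integrate this system numerically (see Appendix \ref{a:integration} for the scheme actually used); as a minimal illustration, a forward-Euler sketch of Equation \eqref{eq:fs1} follows, where the arguments \texttt{F} and \texttt{zeta} stand in for Equations \eqref{eq:fv} and \eqref{eq:zeta}:

```python
import numpy as np

def integrate(F, zeta, Z0, U0, eps=1.0, dt=0.02, T=60.0):
    """Forward-Euler integration of the fast-slow system (Eq. fs1):
    dZ/dt = U,  eps dU/dt = F(U) - zeta(Z) U.  Returns the (Z, U) trajectory."""
    n = int(round(T/dt))
    Z, U = float(Z0), float(U0)
    traj = np.empty((n, 2))
    for i in range(n):
        traj[i] = Z, U
        Z += dt*U
        U += (dt/eps)*(F(U) - zeta(Z)*U)
    return traj
```

Boundary handling, such as the no-penetration condition imposed at the two ends of the channel, is omitted here for brevity.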
Figure \ref{fig:constriction} shows some example dynamics of Equation \eqref{eq:fs1}. Figure \ref{fig:constriction}A shows the axisymmetric idealized dendritic spine from Figure \ref{fig:geometry}. The base of the spine is marked by a vertical gray dashed line positioned at the dimensional position of $-5$\si{.\um}, with a base diameter of $6$\si{.\um}. The spine transitions linearly into the constriction, which has a radius of $R_c=1.22$\si{.\um}, and a length of $5$\si{.\um}. The vesicle, shown as a black circle, has a radius of $R_p=0.96$\si{.\um}. The first initial condition we consider starts the vesicle at the base of the spine with positive initial velocity $(U_0,Z_0)=(0.43 \si{.\um/s},-5\si{.\um})$. Black curves show solutions to this initial condition. As the vesicle moves into the constriction, confinement effects at the neck significantly reduce the translocation velocity (Figure \ref{fig:constriction}B, black). However, the vesicle position increases until it reaches the end of the channel (Figure \ref{fig:constriction}C, black). We show another initial condition that starts at the base with a negative initial velocity $(U_0,Z_0)=(-0.3 \si{.\um/s},-5\si{.\um})$; solutions of this initial condition are shown in red. The vesicle velocity remains negative until it hits the no-penetration boundary condition, which we impose at the two ends of the channel. When the vesicle hits the base of the spine at $-5$\si{.\um}, the velocity is instantaneously reset to zero and the position to $-5$\si{.\um}. This zero-velocity solution is in the basin of attraction of a negative velocity, so the vesicle remains at the base. Different initial conditions reveal different long-time dynamics, suggesting multistability.
We show another representative spine in Figure \ref{fig:constriction}D, where the vesicle and constriction radii are 1.5\si{.\um} and 2.15\si{.\um}, respectively (all other geometry parameters are the same as in Panel A). The two initial conditions are $(U_0,Z_0)=(0.17 \si{.\um/s},-5\si{.\um})$ (black) and $(-0.1 \si{.\um/s},0\si{.\um})$ (red). Note that while the first initial condition (black) successfully translocates, the second initial condition (red) does not (Figure \ref{fig:constriction}F). Again, the differing dynamics as a function of initial conditions suggest the system is multistable. We remark that while we use piecewise linear channels throughout this paper, any constriction geometry is allowed as long as the vesicle radius is close to the channel radius, i.e., $R_p\approx R_c$.
As a starting point for our analysis, we explore system \eqref{eq:fs1} under two equivalent limits that reveal different aspects of the dynamics. Viewed in the ``slow'' time $t$, taking the \yp{overdamped} limit $\varepsilon \rightarrow 0$ yields the slow subsystem,
\begin{equation}\label{eq:slow}
\begin{split}
\frac{dZ}{dt} &= U,\\
0 &= F(U) - \zeta(Z) U.
\end{split}
\end{equation}
The dynamics of Equation \eqref{eq:slow} exist on the critical manifold $S_0$ defined by
\begin{equation}\label{eq:s0}
S_0 := \{(Z,U) \in \mathbb{R}^2 \,|\, 0 = F(U) - \zeta(Z)U\}.
\end{equation}
\yp{Equations \eqref{eq:slow} and \eqref{eq:s0} correspond to how the vesicle moves in real-time. Time $t$ is in units of seconds, and when re-dimensionalized, the center of mass of the vesicle moves with velocity $dZ/dt$ in \si{\um/s}. Translocation across a spine tends to occur on the order of minutes. On the other hand, individual myosin motors attach and detach on a time scale in the range of \si{10^{-1}.s} to \si{10^{-2}.s}. With a few dozen myosin motors operating at this time scale within a relatively viscous environment, force balance is virtually instantaneous relative to translocation. Thus we take the forces to satisfy the instantaneous force-balance condition in Equation \eqref{eq:s0}.}
\yp{This paper aims to classify the bifurcations of the manifold satisfying Equation \eqref{eq:s0} to understand how velocities become multistable}. Fenichel theory guarantees that for $\varepsilon$ sufficiently small, the dynamics of Equation \eqref{eq:fs1} closely follow the dynamics on the slow manifold \cite{fenichel1979geometric,broer2013geometric}. While we operate primarily within the overdamped limit $\varepsilon=0$ in the present study, we sometimes take $\varepsilon>0$ to numerically integrate the equations using standard methods, as in Figure \ref{fig:constriction}. We refer the reader to Appendix \ref{a:integration} for details of this approach.
\begin{figure}[ht!]
\centering
\makebox[\textwidth][c]{
\includegraphics[width=1.2\textwidth]{critical_manifold.pdf}}
\caption{The mapping between the bifurcation diagram and critical manifold through the viscous drag function. A: An example of the critical manifold in the phase space of $U$ and $Z$. Black arrowheads denote the direction of motion on the slow manifold. Gray dashed arrows indicate the direction of motion in the fast system. B: An example of a one-parameter bifurcation diagram. Steady-states $U$ are plotted as a function of $\zeta$. C: The relationship between viscous drag $\zeta$ and position $Z$. From the dimensional positions $Z=-5$\si{.\um} through $Z=0$\si{.\um}, we expect the critical manifold to resemble a version of the bifurcation diagram given by the mapping between drag $\zeta$ and position $Z$. Beyond the constriction, from dimensional positions $Z=0$\si{.\um} through $Z=5$\si{.\um}, the critical manifold resembles a reflected version of the bifurcation curve. A--C: Parameters as in Figure \ref{fig:constriction}A--C. D--F: Parameters as in Figure \ref{fig:constriction}D--F.}\label{fig:c0}
\end{figure}
In Figure \ref{fig:c0}A,D, we show examples of the critical manifold for the corresponding geometries in Figure \ref{fig:constriction}A,D. \yp{We compute the critical manifolds using the following process. We define a grid in $U$ and $Z$ and plot the value of the function $G(U,Z):=F(U) - \zeta(Z)U$. We then have a surface plot of the function $G(U,Z)$ on the $U$-$Z$ domain, from which we extract the contours where $G(U,Z)=0$. These contours give us the critical manifolds because they correspond precisely to the critical manifold condition $0 = F(U) - \zeta(Z)U$ in Equation \eqref{eq:s0}}.
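An equivalent alternative to the contour extraction described above is direct root finding: for each $Z$, bracket the sign changes of $G(U,Z)$ along a $U$ grid and refine each bracket by bisection. A sketch (any force-velocity and drag functions can be passed as \texttt{F} and \texttt{zeta}):

```python
import numpy as np

def manifold_points(F, zeta, Z_grid, U_grid):
    """Points on the critical manifold G(U,Z) = F(U) - zeta(Z)U = 0: bracket
    sign changes of G along U for each Z, then refine each bracket by bisection."""
    pts = []
    for Z in Z_grid:
        z = zeta(Z)
        g = np.array([F(U) - z*U for U in U_grid])
        for i in np.flatnonzero(np.sign(g[:-1]) != np.sign(g[1:])):
            a, b = float(U_grid[i]), float(U_grid[i + 1])
            for _ in range(60):  # bisection on the bracketed sign change
                m = 0.5*(a + b)
                if (F(m) - z*m)*(F(a) - z*a) <= 0.0:
                    b = m
                else:
                    a = m
            pts.append((Z, 0.5*(a + b)))
    return pts
```

Unlike the contour approach, this version returns roots to machine precision, at the cost of possibly missing pairs of roots that fall between adjacent grid points.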
A hybrid system determines the dynamics on the manifold: for a given set of initial conditions $(Z_0,U_0)$, the fast dynamics instantaneously carry the solution to the nearest stable manifold. Along the stable manifold, the slow dynamics evolve according to \eqref{eq:slow}, until the solution reaches a fold, at which point the fast dynamics instantaneously carry the solution to the next stable manifold. Figure \ref{fig:c0}A,D show these hybrid dynamics for the fast dynamics (dashed gray arrows) and for the slow dynamics (black arrows).
\yp{Let $s=t/\varepsilon$, and call $s$ the ``fast'' time. A straightforward application of the chain rule yields,
\begin{align}
\frac{dZ}{ds} &= \varepsilon U\\
\frac{dU}{ds} &= F(U) - \zeta(Z) U.
\end{align}
Thus, from the perspective of the fast time $s$, $Z$ is a slow variable. Letting $\varepsilon \rightarrow 0$ yields the fast subsystem,
\begin{align*}
\frac{dZ}{ds} &= 0\\
\frac{dU}{ds} &= F(U) - \zeta(Z) U.
\end{align*}
The fast subsystem describes the same overdamped limit as the slow subsystem, but instead of viewing the vesicle position in real-time and the velocity dynamics as instantaneous, it freezes the vesicle position and shows how the velocity dynamics converge to force-balance in the timescale of molecular motors. Unlike the slow subsystem, the fast subsystem directly informs us of the stability of equilibria. Moreover, the fast subsystem yields a substantially more tractable version of the lubrication model that is much easier to analyze using bifurcation theory. Note that because $Z$ is constant in this limit and because $\zeta$ only depends on $Z$, what remains is the one-dimensional ODE,
\begin{equation}\label{eq:fast}
\frac{dU}{ds} = F(U) - \zeta U,
\end{equation}
where $\zeta$ can be treated as a parameter.}
The bifurcations of Equation \eqref{eq:fast} are related to the slow subsystem (Equation \eqref{eq:slow}) through the viscous drag term. Because $\zeta$ is a function of position $Z$, there exists a mapping from the critical manifold to the bifurcation curve. For example, as the vesicle center of mass approaches the center of the constriction, viscous drag increases monotonically (Figure \ref{fig:c0}C). At this stage, the bifurcation curve and critical manifold closely resemble scaled versions of each other (Figures \ref{fig:c0}A, B and Figure \ref{fig:c0}D, E when $Z\in[-5\si{.\um},0\si{.\um}]$). Beyond the center of the constriction, the viscous drag term decreases monotonically, and the critical manifold resembles a reflected version of the bifurcation curve (Figure \ref{fig:c0}A, B and Figures \ref{fig:c0}D, E when $Z\in[0\si{.\um},5\si{.\um}]$). Thus, understanding the bifurcation curves and the viscous drag term is sufficient to understand the critical manifold of the overdamped system. With this mapping in mind, we turn to a thorough numerical analysis of this system's bifurcations.
\section{Bifurcations of the Force-Velocity Curve}\label{sec:bifurcations}
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{bifurcations_colored.pdf}
\caption{Two parameter bifurcation diagram in $\phi$ and $\zeta$. Saddle-node (SN) bifurcations are shown in (D) as colored branches with a unique color and symbol. Numbers in (D) indicate the total number of fixed points in the corresponding region of parameter space. Subplots A, B, C, E, and F show one-parameter slices of the two-parameter diagram. Saddle-nodes are labeled with the corresponding branch color and symbol. The critical vesicle-to-spine diameter ratio at the cusps is roughly 2\si{.\um}/3\si{.\um}.}\label{fig:2par}
\end{figure}
In this section, we perform a numerical bifurcation analysis of the fast subsystem by following the roots of the right-hand side of Equation \eqref{eq:fast}:
\begin{equation}
f(U) = \phi F_{-A}(U) + (1-\phi)F_{A}(U) - \zeta U.
\end{equation}
Details of the numerics are given in Appendix \ref{a:continuation}.
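The essential step, following a root of $f(U)$ as a parameter varies, can be sketched as naive natural-parameter continuation with a Newton corrector; the machinery needed to round folds (where this naive scheme fails) is omitted, and \texttt{F} stands in for the net motor force at fixed $\phi$:

```python
def follow_branch(F, U0, zetas, tol=1e-12, h=1e-6):
    """Natural-parameter continuation: for each zeta, Newton-iterate on
    f(U) = F(U) - zeta*U, warm-starting from the previous root."""
    U, branch = float(U0), []
    for z in zetas:
        for _ in range(100):
            f = F(U) - z*U
            df = (F(U + h) - F(U - h))/(2.0*h) - z   # centered-difference f'(U)
            step = f/df
            U -= step
            if abs(step) < tol:
                break
        branch.append(U)
    return branch
```

Near a saddle-node the Jacobian $f'(U)$ vanishes and this parametrization breaks down, which is why production continuation codes parametrize branches by arclength rather than by $\zeta$ itself.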
\subsection{Bifurcations in $\phi$-$\zeta$}
We begin with single-parameter bifurcation diagrams in $\phi$ and $\zeta$ by fixing one parameter and varying the other. In Figures \ref{fig:2par}A--C, we fix $\zeta$ at three different values and follow equilibria as a function of \yp{$\phi$}. The symmetry of these curves about $\phi=0.5$ comes from our choice of force-velocity curves for competing motors: we use identical force-velocity curves for both species, so the fixed points for $\phi>0.5$ are mirrored, with opposite sign, by those for $\phi<0.5$.
As $\zeta$ decreases from Panels A to C, the saddle-node denoted by the orange star occurs at progressively smaller values of \yp{$\phi$}, and the saddle-node denoted by the yellow $+$ occurs at progressively greater values of \yp{$\phi$}. The change results in the creation of multistable velocities (stable velocities are black and unstable velocities are red). In Panel A, there exist values of $\phi$ with only one or three fixed points. In Panels B and C, the change in the folds' position gives way to the existence of five fixed points. We also find that values of $\phi$ above and below 0.5 tend to yield positive and negative velocities, respectively. There are some exceptions, such as Panel C, where $\phi>0.5$ can result in negative velocity. This observation is the well-established tug-of-war effect. These one-parameter bifurcation diagrams give us a good starting point for understanding how stable velocities change as a function of two parameters and some insight into the shape of the bifurcation surface. Note that in Panel A, the parameter range in $\phi$ for which there is only one solution is much greater relative to the range of $\phi$ for three solutions. We call the single-velocity solutions more ``robust'' relative to the multistable solutions. We can also conclude from Panel C that the tug-of-war effect is more likely to be seen for lower viscous drag values. We will later discuss robustness in a similar way, where relatively larger parameter ranges correspond to increased robustness.
In Figures \ref{fig:2par}E, F, we fix \yp{$\phi$} at 0.48 and 0.49, respectively, and vary $\zeta$. These bifurcation diagrams are less intuitive but can be understood as slices through the bifurcation surface described above. More importantly, these bifurcation diagrams reveal some information about the underlying critical manifold. In Figure \ref{fig:2par}E, we see that if the vesicle has a sufficiently positive initial velocity at low drag, it maintains a positive velocity as the vesicle moves into the spine neck, and the drag grows due to the constriction. At a critical drag denoted by the orange star, the vesicle instantaneously switches velocity in the opposite direction by jumping down to the lower stable branch and eventually exits the constriction through the base of the spine at $-5$\si{.\um}. Figure \ref{fig:2par}F exhibits similar discontinuous behavior: with an appropriate positive initial velocity, the vesicle moves towards the constriction before jumping down to the stable middle branch where the velocity is near zero. In this scenario, the vesicle remains stuck for long times. Note that by the symmetry of these functions, the bifurcation diagrams look identical with a change of sign for $\phi=0.52, 0.51$. Therefore, the model predicts that motor-driven transport through constrictions will generally push the vesicle towards the spine head as long as the initial condition is sufficiently far into the constriction and that upwards-pushing motors are dominant.
While one-parameter diagrams are useful, we wish to understand how multistability changes in the entire $\phi$-$\zeta$ parameter space. We address the question of multistability by noting how each one-parameter bifurcation changes as a function of an additional parameter. This process naturally partitions the $\phi$-$\zeta$ parameter space into multistable regions. Noting that our one-dimensional system only produces saddle-node (SN) bifurcations that produce or destroy pairs of fixed points, we only need to track these saddle-node bifurcations. This process yields Figure \ref{fig:2par}D, in which we suppress fixed points and only display bifurcation points and the number of fixed points in each region. Each of the four colored curves corresponds to a saddle-node bifurcation with a unique color and symbol. As expected, the number of fixed points changes across the various saddle-node curves. For example, panel A shows that as \yp{$\phi$} increases, the total number of fixed points changes in the order $1\rightarrow 3 \rightarrow 1 \rightarrow 3 \rightarrow 1$. The same can be observed in panel D by tracing the slice at $\zeta = 1.75\times 10^{-4}$ \si{kg/s}.
The two-parameter diagram completely characterizes the total number of fixed points for each region of the parameter space and shows that we can expect multistability in much of the displayed parameter space. If viscous drag is sufficiently small (in the bottom region of panel D), multistability exists for a wide range of motor ratios \yp{$\phi$}. As viscous drag increases, the range of multistability becomes much smaller as fixed points disappear through saddle-node bifurcations. For sufficiently large viscous drag $\zeta$, multistability ceases to exist as the saddle-nodes disappear through cusps, and there exists only one stable velocity for all motor ratios. The critical vesicle-to-spine diameter ratio at the cusps is roughly 2\si{.\um}/3\si{.\um}.
In terms of the bifurcation surface, the parameter slices in panels A--C show that for several values of fixed $\zeta$, the bifurcation surface contains four folds. By choosing greater values of $\zeta$, we find that the folds of the surface eventually flatten out through a pair of cusp bifurcations. Beyond this cusp bifurcation, for greater values of $\zeta$, we expect that the velocities $U$ are negative when $\yp{\phi} < 0.5$, positive when $\yp{\phi}>0.5$, and zero when $\yp{\phi} = 0.5$. From this geometric intuition, it follows that when confinement effects are sufficiently large, the vesicle's velocity is determined purely by the ratio of upwards- and downwards-pushing motors. We make this observation more rigorous in Section \ref{sec:existence}.
\subsection{Robustness in $\pi_4$, $\pi_5$}
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{pi4_vs_pi5_colored.pdf}
\caption{Cusp bifurcations as a function of $\pi_4$ and $\pi_5$. A: For each $\zeta>0$, cusp bifurcations exist along a set of $\pi_4,\pi_5$. Example level curves are plotted for $\zeta = 0$, $4.3\times10^{-5}$, $1.3\times10^{-4}$, $2.2\times10^{-4}$. We take four representative pairs of $\pi_4,\pi_5$ labeled B--E and show the corresponding two-parameter bifurcation diagrams in B--E. The point labeled with $\star$ corresponds to Figure \ref{fig:2par}.}\label{fig:cusps}
\end{figure}
When we fix the motor parameters $\pi_1$-$\pi_6$, Figure \ref{fig:2par} provides a complete description of how multistability changes as a function of motor ratio and constriction geometry. However, there may be variations in motor parameters due to the existence of multiple motor types such as myosin V and VI \cite{da2015positioning} and variations in ATP and ADP concentration, which is known to differentially modulate myosin motor dynamics \cite{zimmermann2015actin}.
As explained in detail in Appendix \ref{a:cusps}, the cusp bifurcation separates the parameter space between multistable velocities and globally stable or unstable solutions. Indeed, cusps generally serve as a sufficient condition for the existence of hysteresis. Therefore, understanding the cusp bifurcation may provide essential insights into controllability. For a given $\zeta$, it is possible to track cusps as a function of $\pi_4$ and $\pi_5$. The result of this process for various values of $\zeta$ is shown in Figure \ref{fig:cusps}A. Each curve represents a cusp bifurcation as a function of $\pi_4,\pi_5$ for a given $\zeta$ (we briefly describe how we determined the location of these cusps in Appendix \ref{a:cusps}).
Given parameters $\pi_4,\pi_5$ chosen somewhere in the $\pi_4$-$\pi_5$ parameter region below the $\zeta=0$ level curve in Figure \ref{fig:cusps}, the $\zeta$ level curves suggest that there exists a cusp bifurcation at some $\zeta^*(\pi_4,\pi_5)>0$. Because cusps are a sufficient condition for multistability, it follows that multistable states exist for appropriate choices of \yp{$\phi$} and $\zeta \leq \zeta^*$. The $\zeta=0$ level curve represents a sufficient condition for the loss of multistability: noting that for each fixed point, $\partial \zeta/\partial\pi_5 <0$ in a neighborhood about the fixed point, i.e., $\zeta$ decreases as $\pi_5$ increases, it follows that for any $\pi_4,\pi_5$ in at least a neighborhood above this curve, there can exist no cusp bifurcation with a positive $\zeta$.
Various points in Figure \ref{fig:cusps}A are marked B--E, with corresponding two-parameter bifurcation diagrams shown in the remaining subplots, Figures \ref{fig:cusps}B--E. The point marked by $\star$ in Figure \ref{fig:cusps}A represents the $\pi_4$ and $\pi_5$ values for Figure \ref{fig:2par}. These diagrams show that the region of multistability tends to increase for smaller $\pi_5$ and greater $\pi_4$. Recalling that $\pi_5$ is the maximum displacement of a motor in its non-preferred direction, smaller $\pi_5$ implies that bidirectional motion due to noise can be made more likely by allowing the motor to detach earlier. Next, $\pi_4$ is the initial motor attachment position in either the preferred or non-preferred direction, and we find that the area of multistability increases as motors have a greater initial extension. Together, we predict that strong initial attachment forces combined with greater yield effects can result in more frequent directional switching.
\section{Existence and Stability of Solutions}\label{sec:existence}
\subsection{Existence}
The existence of a stationary solution $U = 0$ is straightforward to prove by inspection for $\phi=0.5$. In this section, we expand about this solution to determine the existence of solutions when $\phi$ is near the equal motor ratio $\phi=0.5$ and when $\zeta$ is large. We let $\phi = 0.5+\hat\phi$ and $\hat\zeta=1/\zeta$ and explore the cases where $\hat \phi$ and $\hat\zeta$ are small. These limits inform us of linear behavior of the velocity function $U=\yp{U}(\hat\phi,\hat\zeta)$ by writing
\begin{equation*}
\yp{U}(\hat\phi,\hat\zeta) = \left.\frac{\partial \yp{U}}{\partial \hat\phi}\right|_{(0,0)}\hat\phi + \left.\frac{\partial \yp{U}}{\partial \hat\zeta}\right|_{(0,0)}\hat\zeta + O(\hat\phi \hat\zeta,\hat\phi^2,\hat\zeta^2).
\end{equation*}
First we derive an equation for the small deviation $\phi=0.5+\hat \phi$, where $0<|\hat\phi|\ll1$. Constant velocity solutions $U=\yp{U}$ must satisfy
\begin{align*}
0 &= \phi F_{-A}(\yp{U}) + (1-\phi) F_{A}(\yp{U}) - \zeta \yp{U}\\
&= \frac{1}{2} F_{-A}(\yp{U}) + \frac{1}{2}F_A(\yp{U}) + \hat \phi [F_{-A}(\yp{U}) - F_A(\yp{U})] - \zeta \yp{U}.
\end{align*}
Solving for $\hat \phi$ yields,
\begin{equation*}
\hat \phi = \frac{1}{2}\frac{F_{-A}(\yp{U}) + F_A(\yp{U}) -2 \zeta \yp{U}}{F_A(\yp{U}) - F_{-A}(\yp{U})}.
\end{equation*}
Our nondimensional parameters are always greater than zero, so $0 < e^{\pi_4}-1$ and \yp{$\zeta<e^{\pi_4}(\pi_6+\zeta)$}. It follows that the derivative of $\hat \phi$ with respect to $\yp{U}$ is nonzero:
\begin{equation*}
\left.\frac{d\hat\phi}{d \yp{U}}\right|_{\yp{U}=0} = \yp{\frac{1}{2}\frac{e^{\pi_4}(\pi_6+\zeta)-\zeta}{e^{\pi_4}-1}\neq 0}.
\end{equation*}
We then obtain a local equation for $\yp{U}$ as a function of $\hat \phi$ by invoking the inverse function theorem:
\begin{equation}
\yp{U}(\hat\phi) = \yp{2\hat\phi \frac{e^{\pi_4}-1}{e^{\pi_4}(\pi_6+\zeta)-\zeta} + O(\hat\phi^2)}.
\end{equation}
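As a quick numerical check, the linear slope in this local equation can be evaluated directly. The sketch below is illustrative only: the parameter values are arbitrary positives, not fitted to the model.

```python
import math

def du_dphi(pi4, pi6, zeta):
    """Leading-order slope dU/d(phi_hat) of the local velocity equation."""
    return 2.0 * (math.exp(pi4) - 1.0) / (math.exp(pi4) * (pi6 + zeta) - zeta)

# Illustrative parameter values (pi4, pi6 > 0 and zeta >= 0).
slope = du_dphi(pi4=1.0, pi6=0.5, zeta=2.0)
# The slope is positive, so a surplus of motors pushing in the preferred
# direction (phi_hat > 0) produces a positive velocity, and the slope
# shrinks as the drag zeta grows.
```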
We have seen that in some parameter regimes, viscous drag grows to large values during translocation (Figure \ref{fig:c0}C,F), but nonzero velocities persist due to unequal motor ratios (Figure \ref{fig:constriction}C,F). To establish this inverse relationship between velocity and drag, we Taylor expand about infinity with $\hat \zeta = 1/\zeta$ as defined above. Then solving for $\hat \zeta$ in terms of $\yp{U}$ yields
\begin{equation*}
\hat\zeta = \frac{\yp{U}}{\phi F_{-A}(\yp{U}) + (1-\phi)F_A(\yp{U})}.
\end{equation*}
To derive a local equation for $\yp{U}$ as a function of $\hat\zeta$, we examine the derivative of $\hat\zeta$ with respect to $\yp{U}$:
\begin{equation*}
\left. \frac{d\hat \zeta}{d \yp{U}}\right|_{\yp{U}=0}=\yp{\frac{1}{2\phi-1}}.
\end{equation*}
So long as $\phi \neq 1/2$, this derivative is well-defined, and we can invoke the inverse function theorem to write the local equation for $\yp{U}$ as a function of $\hat\zeta$.
\begin{equation*}
\yp{U}(\hat \zeta) = \yp{\hat \zeta(2\phi-1)} + O(\hat\zeta^2).
\end{equation*}
After a trivial substitution, we arrive at the desired equation,
\begin{equation*}
\yp{U}(\zeta) = \yp{\zeta^{-1} (2\phi-1)} + O\left(\zeta^{-2}\right) = \yp{2 \zeta^{-1}\hat \phi} + O\left(\zeta^{-2}\right).
\end{equation*}
Combining the local velocity estimates, we arrive at the \yp{local} velocity \yp{existence} equation as a function of $\hat\phi$ and $\zeta$:
\begin{equation}\label{eq:V}
\yp{U}(\hat\phi,\zeta) = \yp{2\hat\phi \left[\frac{e^{\pi_4}-1}{e^{\pi_4}(\pi_6+\zeta)-\zeta} + \frac{1}{\zeta}\right] } + O(\hat\phi \zeta^{-1},\hat\phi^2,\zeta^{-2}).
\end{equation}
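The combined existence equation can likewise be evaluated numerically. The following sketch (with arbitrary illustrative parameters, not fitted values) checks the two qualitative predictions discussed below: the sign of $U$ follows the sign of $\hat\phi$, and $|U|$ decays like $\zeta^{-1}$ for large drag.

```python
import math

def local_velocity(phi_hat, zeta, pi4, pi6):
    """Leading-order velocity from the local existence equation
    (valid for large zeta, or for small phi_hat)."""
    bracket = (math.exp(pi4) - 1.0) / (math.exp(pi4) * (pi6 + zeta) - zeta) + 1.0 / zeta
    return 2.0 * phi_hat * bracket

# Illustrative parameters: the sign of U tracks the sign of phi_hat ...
u_pos = local_velocity(phi_hat=0.1, zeta=50.0, pi4=1.0, pi6=0.5)
u_neg = local_velocity(phi_hat=-0.1, zeta=50.0, pi4=1.0, pi6=0.5)
# ... and a tenfold increase in drag reduces |U| roughly tenfold.
u_far = local_velocity(phi_hat=0.1, zeta=500.0, pi4=1.0, pi6=0.5)
```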
This existence equation is valid for large $\zeta$ and any $\hat\phi\in[-1/2,1/2]$, or for small $\hat\phi$ and any $\zeta$.
Equation \eqref{eq:V} justifies the effects observed from simulations earlier in the paper. For $\zeta$ large, the velocity must be small and proportional to $\zeta^{-1}$, and the sign of $\hat\phi$ determines the sign of the velocity. It is the ratio of motors that determines the direction of motion for large drag. In the case of small drag, this equation is only valid for small $\hat\phi$, but the equation yields the same intuition that is consistent with our one-parameter bifurcation diagrams: the dominant motor species determine the sign of the velocity.
In the case of large drag, or small drag with near-equal (but non-equal) motor ratios, increasing the velocity scale ratio between translocation and motor adhesion dynamics $\pi_6$ will \yp{decrease} translocation velocity. Finally, velocity depends weakly on the initial motor displacement $\pi_4$ and is entirely independent of the ratio of motor detachment position and the maximum displacement of each motor ($\pi_3$ and $\pi_5$). \yp{Therefore, we expect that even significant changes to the motor displacements or substantial changes to motor attachment and detachment rates will have little effect on translocation dynamics.}
\subsection{Linear Stability}
For our one-dimensional problem, a linear stability analysis is sufficient to understand the stability of fixed points, which follows from the slope of the total force function:
\begin{equation}\label{eq:stability}
\lambda = F'(\yp{U}) - \zeta.
\end{equation}
Note that if the derivative of $F$ is bounded, the eigenvalue is negative ($\lambda < 0$) for sufficiently large drag forces, so every velocity satisfying force balance is stable. This rules out multistability for large drag: if there were more than one fixed point, at least one of them would have to be unstable, which is impossible by the negativity of $\lambda$ shown above. Using the continuity of the force-velocity curves (Appendix \ref{a:cont}), the argument proceeds by contradiction: if there were exactly two fixed points, both stable, the slope of the net force function would be negative at each of them, and by the intermediate value theorem a third fixed point would have to exist between them, contradicting the assumption of exactly two. Hence either one of the points is unstable, or there is only one fixed point. The same argument eliminates any number of stable fixed points in the case of large drag.
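This large-drag argument can be illustrated numerically with a toy bounded force function; the hyperbolic tangent below is a stand-in assumption, not the model's actual force-velocity relation.

```python
import math

def count_roots(force, zeta, u_min=-5.0, u_max=5.0, n=2000):
    """Count sign changes of g(U) = force(U) - zeta*U on a fine grid."""
    us = [u_min + (u_max - u_min) * i / (n - 1) for i in range(n)]
    gs = [force(u) - zeta * u for u in us]
    return sum(1 for a, b in zip(gs, gs[1:]) if a * b < 0)

# Toy bounded force with maximum slope F'(0) = 3 (illustrative only).
force = lambda u: math.tanh(3.0 * u)

roots_small_drag = count_roots(force, zeta=0.5)  # drag below max slope: three fixed points
roots_large_drag = count_roots(force, zeta=4.0)  # drag above max slope: one fixed point
```

Once $\zeta$ exceeds the maximum slope of $F$, the net force is strictly decreasing, so the force balance has exactly one root, consistent with the argument above.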
In the special case $\yp{U}=0$, the eigenvalue can be computed explicitly to yield
\begin{equation*}
\lambda = \frac{e^{\pi_4}\pi_6}{1-e^{\pi_4}} - \zeta.
\end{equation*}
In the physically relevant parameter regime, the conditions $\pi_4$, $\pi_6>0$, and $\zeta\geq0$ always hold. Therefore $\lambda <0$ for any choice of parameters, implying that the stationary solution is always stable.
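The sign claim can be checked directly; the parameter values below are arbitrary positives chosen only for illustration.

```python
import math

def eigenvalue_at_rest(pi4, pi6, zeta):
    """Eigenvalue of the stationary solution U = 0."""
    return math.exp(pi4) * pi6 / (1.0 - math.exp(pi4)) - zeta

# Since 1 - exp(pi4) < 0 for pi4 > 0, the first term is negative,
# so lambda < 0 for any pi6 > 0 and zeta >= 0: U = 0 is always stable.
```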
For small deviations from $\phi=0.5$, we use Equation \eqref{eq:V} to rewrite the eigenvalue:
\begin{align*}
\lambda &= F'\left(0 + 2\hat\phi \frac{e^{\pi_4}-1}{e^{\pi_4}\pi_6} + O(\hat\phi^2)\right) - \zeta\\
&=F'(0) + 2\hat\phi \frac{e^{\pi_4}-1}{e^{\pi_4}\pi_6} F''(0) - \zeta + O(\hat\phi^2)\\
&=\frac{e^{\pi_4}\pi_6}{1-e^{\pi_4}} + 2\hat\phi \frac{e^{\pi_4}-1}{e^{\pi_4}\pi_6} \left[ \frac{2 e^{\pi_4}(2\yp{\phi}-1)\pi_6^2}{e^{\pi_4}-1}\right] - \zeta+ O(\hat\phi^2).
\end{align*}
Because we assume that $\yp{\phi} = 0.5 + \hat\phi$, the term $2\yp{\phi}-1$ reduces to $2\hat\phi$, making the second term order $O(\hat\phi^2)$. Therefore, the eigenvalue equation reduces to
\begin{equation*}
\lambda = \frac{e^{\pi_4}\pi_6}{1-e^{\pi_4}} -\zeta + O(\hat\phi^2).
\end{equation*}
Recalling that $\pi_4,\pi_6>0$, we generally expect constant velocity solutions to be stable for $\hat\phi$ small.
\section{Discussion}
In this paper, we fully characterize the dynamics predicted by a model of vesicles driven into closed constrictions. We cast the system into a fast-slow system, perform a two-parameter bifurcation analysis on the fast subsystem, then determine how the dominant motor species affect the cusps in the resulting bifurcation surface. The model predicts multistability, i.e., bidirectional motion, for smaller values of viscous drag and unidirectional motion for greater values of the viscous drag corresponding to tight constrictions.
We remark that our reduced axisymmetric lubrication model captures the diverse morphology of dendritic spines, from thin spines that are often less than $2$\si{.\um} in length with neck diameters ranging from 0.06--0.2\si{.\um} \cite{arellano2007ultrastructure} to mushroom-shaped spines that are often less than $1$\si{.\um} in width \cite{risher2014rapid}. As long as the diameters of the vesicle and spine wall are similar, our theory applies.
\subsection{Physiological Parameters}
While detailed electron microscopy images exist of dendritic spines \cite{kasthuri2015saturated}, it is a significant challenge to search and classify recycling endosomes efficiently. For now, we rely on published endosome images to approximate the physiologically relevant parameter ranges.
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{vesicle_figure.pdf}
\caption{Time-lapse images of recycling endosomes adapted from \cite{da2015positioning} and available under the CC BY NC ND license. A: a recycling endosome translocates through a thin spine in four time-lapse images (left). A kymograph is shown to the right. The vesicle is roughly $1$\si{.\um} in diameter, and the distance between the vesicle and neck wall is at most 0.1\si{.\um}. B: a recycling endosome translocates into a stubby spine in a series of time-lapse images, with the associated kymograph on the right. The vesicle is roughly $0.5$\si{.\um} in diameter, the distance between the vesicle and neck wall is roughly 0.15\si{.\um}, and the distance between the vesicle and spine wall is roughly 0.5\si{.\um}. All scale bars 2\si{.\um}. C: approximate ranges of drag (red transparent) superimposed on the two-parameter diagram from Figure \ref{fig:2par} with the drag $\zeta$ plotted on a log scale. Labels A and B correspond to Panels A and B. }\label{fig:vesicle_example}
\end{figure}
In Figure \ref{fig:vesicle_example}A, B, we show two representative experimental images from \cite{da2015positioning} of spines containing recycling endosomes. A thin yellow line outlines the spine, and the endosome travels through the spine neck in a series of four time-lapse images with an associated kymograph on the right. The scale bar is 2\si{.\um} in all panels. Using these images, we determine approximate regions where physiological parameters may lie in the $\phi$-$\zeta$ parameter space (Figures \ref{fig:2par}D and \ref{fig:vesicle_example}C). Figure \ref{fig:vesicle_example}C is the same as Figure \ref{fig:2par}D but with $\zeta$ plotted on a log scale. The time-lapse images provide virtually no information about the ratio of motors, so we make no restrictions on $\phi$ for now.
We estimate the viscous drag values from these images by estimating the height between the vesicle and spine wall in Figure \ref{fig:vesicle_example}A and assume a constant constriction for simplicity. Through a crude manual approximation, we estimate the height in panel A to be at most 0.1\si{.\um}. This height is substantially smaller than the heights considered in this paper, and therefore yields a relatively greater viscous drag value of $\zeta\approx$ \si{\num{2e-2} .kg/s} (we assume the same microscopic motor parameters as in Figure \ref{fig:2par}). This value corresponds to a point far above the cusps, where only a single-velocity solution exists (Figure \ref{fig:vesicle_example}C, top red band labeled ``A''). This range lies well within the unidirectional regime. Because the time-lapse images in Figure \ref{fig:vesicle_example}A show the endosome traveling towards the spine head, we infer that the upwards-pushing motors are dominant. The only region where the vesicle could switch direction is at the base of the spine, where some clearance between the cell wall and the recycling endosome is possible. Thin spines generally have much smaller diameters relative to recycling endosomes, so that the distance between the vesicle and spine wall is small. We conclude that once the vesicle has entered a thin spine, we generally expect unidirectional movement.
Figure \ref{fig:vesicle_example}B shows an example of a stubby spine. Here, the endosome is smaller, with a diameter range from $0.5$\si{.\um} to $0.6$\si{.\um}. We estimate that the height between the endosome and neck is roughly 0.05--0.1\si{.\um}, which yields viscous drag values in the range $\zeta\approx$ \si{\num{1e-4}} to \si{\num{6e-4}.kg/s}. Recalling that the cusps in Figure \ref{fig:2par} are on the order of \si{\num{2e-4}.kg/s} (where the critical vesicle-to-spine diameter ratio is roughly 2\si{.\um}/3\si{.\um}), we find that both unidirectional and multidirectional motion are possible (Figure \ref{fig:vesicle_example}C, middle red band labeled ``B, neck''). Unidirectional solutions take a much greater portion of the parameter range on a linear scale. Another property to consider is that stubby spines have shallow constrictions that lead into a large head, where the vesicle may spend a substantial amount of time relative to the neck. Therefore, it is worth considering whether multistable solutions exist past the neck. In the head, the height between the endosome and head ranges from roughly 0.1\si{.\um} to 0.5\si{.\um}, which yields a drag range of \si{\num{3e-6}.kg/s} to \si{\num{1e-4}.kg/s}, which places the vesicle squarely in a multistable regime (Figure \ref{fig:vesicle_example}C, lower red band labeled ``B, head''). Strikingly, the kymograph shows unidirectional movement through the neck and bidirectional motion in the spine head.
The viscous drag in these representative examples varies over four orders of magnitude, and our model predicts dramatically different qualitative behaviors over this range. According to our model, thin spines with large vesicles will exhibit unidirectional movement due to the large drag experienced by the vesicle. In contrast, stubby spines will exhibit multidirectional motion, especially if the diameter of the recycling endosome is smaller than the spine. Interestingly, published images of vesicle movement in spines appear to confirm these claims.
\subsection{Limitations and Future Directions}\label{sec:limitations}
There are important caveats to the present study. One is the use of mean-field models. In particular, the force-velocity functions used in this paper rely on the limit of large numbers of myosin motors \cite{hoppensteadt2012modeling}, and therefore questions about noise cannot be addressed in this framework. One possible approach to overcome this limitation is to replace the current mean-field model of molecular motors by a discrete model; such a discrete model is used to compute mean passage times in Allard et al. \cite{allard2019bidirectional}. Finally, numerical experiments are another feasible approach to address the question of mean first passage times of a translocating vesicle. Similar approaches have been performed in \cite{dallon2019stochastic} to pursue questions regarding the effects of intermediate filament parameters on intermediate filament transport.
\yp{Another limitation of the present work is the rigidity assumption $\pi_2=0$, which is equivalent to assuming a rigid, undeformable spherical vesicle. In fact, experimental images show that vesicles undergo large deformations as they squeeze through spine necks. One could, in principle, incorporate this into the present model by adding an equation for the vesicle shape based on its elastic properties, surface area to volume ratio, and the surrounding flow. In the present work we have chosen to focus on the simpler case of a spherical vesicle, since in this case the height function may be parameterized in terms of a single geometric parameter, the radius, which simplifies the problem considerably and allows a more straightforward analysis.}
\yp{The insights gained from studying the transport of single vesicles into dendritic spines are expected to apply to the population-level dynamics with some caveats. A potential application is that multistability noted in the context of single vesicles would translate into the relative fractions of vesicles in the population that experience translocation, rejection, and corking. However, one complicating factor is that the dynamics of different vesicles are not independent---they are coupled through the fluid mechanics of the intracellular fluid. This long-ranged fluid-structure interaction is an important subject for future work.}
\section{Acknowledgments}
The authors acknowledge support under the National Institute of Health grant T32 NS007292 (YP) and National Science Foundation grant DMS-1913093 (TGF). \yp{The authors thank Chris H. Rycroft and Jonathan Touboul for reading the early version of the text and providing useful feedback.}
\section{Introduction}
\label{sec:intro}
Due to the tremendous potential in learning discriminative feature representation without using data annotations, self-supervised learning has received much attention in the representation learning field.
Contrastive learning \cite{moco_2020,simclr_2020}, as a type of discriminative self-supervised learning method, is heavily studied and has shown remarkable progress in the computer vision field in recent years.
It aims at pulling different augmented ``views'' of the same image (positive pairs) closer while pushing views of different images (negative pairs) away from each other.
To this end, a contrastive loss between the features of different views extracted from an encoder network is employed to train the encoder network end-to-end.
According to whether the negative pairs are used, current contrastive learning can be generally divided into two categories.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figures/fig_introduction.pdf}
\caption{Semantic inconsistency of over-augmentation.
(a) shows three augmentations of two images, in which the third augmentation is over-augmented and contains only background.
(b) shows category probability distributions of the corresponding images in (a), which are obtained from a supervised pre-trained ResNet50 \cite{resnet_2016} model.
(c)(d)(e) show three different samples of an image (the data-augmented image, the original image, and the semantics-consistent feature-augmented sample generated by Eq. \ref{equ:attention} in this study) and their corresponding probability distributions, which show that the over-augmented image yields a different category than the original image, while the feature-augmented sample obtains a balanced category probability.}
\label{fig:introduction}
\end{figure}
The first category \cite{moco_2020,simclr_2020} utilizes both positive pairs and negative pairs for contrast.
MoCo \cite{moco_2020,mocov2_2020} uses a momentum update mechanism to maintain a memory bank of negative examples.
SimCLR \cite{simclr_2020,simclrv2_2020} directly trains a single encoder network with a large batch size to ensure sufficient positive and negative samples for learning.
Based on MoCo and SimCLR, some methods \cite{momentum2tea_2021,msf_2021,isd_2021,nnclr_2021,dcl_2021,clsa_2021,adco_2021,mocov3_2021,hsa_2022,hcsc_2022} are proposed to improve the performance.
For example, MSF \cite{msf_2021}, ISD \cite{isd_2021} and NNCLR \cite{nnclr_2021} aim to search semantics-consistent samples for contrast, solving the false negative problem.
While some studies, such as Momentum2Teacher \cite{momentum2tea_2021} and DCL \cite{dcl_2021}, aim to solve the limitation that large batch size is necessary for satisfactory performance.
The second category of contrastive learning methods \cite{byol_2020,simsiam_2021,swav_2020,barlowtwins_2021,vicreg_2021,obow_2021,dino_2021,crafting_2022} only constructs positive pairs for contrast.
Based on MoCo \cite{moco_2020} and SimCLR \cite{simclr_2020}, respectively, BYOL \cite{byol_2020} and SimSiam \cite{simsiam_2021} abandon the negative samples and use an asymmetric architecture to avoid model collapse.
SwAV \cite{swav_2020} uses online clustering to cluster samples and forces the consistency among cluster assignments of different augmentations.
After that, some studies \cite{dino_2021,clsa_2021,nnclr_2021,adco_2021} point out that enriching the augmented samples can improve the performance of contrastive learning.
In addition, the study in \cite{crafting_2022} shows that improving the quality of positive augmented samples is important for self-supervised learning.
However, it is unavoidable to construct data augmentations containing different semantic concepts.
\cref{fig:introduction}(a) shows three augmentations of two images, in which the third augmentation is over-augmented and contains only the background.
\cref{fig:introduction}(b) shows category probability distributions of the corresponding images in (a), which are obtained from a supervised pre-trained ResNet50 \cite{resnet_2016}.
We observed that the probability distribution of over-augmented images changes greatly compared with the first two augmentations, which indicates that the semantic information of the over-augmented images deviates from the normally-augmented images.
Similar observation can be found in \cref{fig:introduction}(c)(d).
The original image in (d) shows a max probability for ``ambulance'', while the over-augmented image in (c) represents the different category ``telescope''.
Due to such semantic inconsistency, conducting contrastive learning on these over-augmentations is harmful to representation learning.
In this study, we found that semantics-consistent feature augmentation (\cref{fig:introduction}(e), generated by \cref{equ:attention}) can balance the original semantics ``ambulance'' and the over-augmented semantics ``telescope'', which can alleviate the influence of semantic inconsistency.
Motivated by this observation, we propose a novel semantics-consistent feature search (SCFS\xspace) method to alleviate the negative influence of semantic inconsistency in contrastive learning.
SCFS\xspace utilizes the global feature of a view to adaptively search the semantics-consistent features of another view for contrast according to their similarity.
It constructs informative feature augmentations and conducts contrastive learning between feature augmentations and data augmentations.
Thus, the pre-trained model can learn to focus on meaningful object regions, alleviating the negative influence of mismatched semantic alignment in current contrastive learning and leading to better representation learning.
In addition, the feature search is conducted on multiple layers of the backbone network, further enhancing the semantic alignment at different scales of features.
Extensive experiments conducted on different datasets and tasks demonstrate that SCFS\xspace effectively improves the performance of self-supervised learning and achieves state-of-the-art performance on different downstream tasks.
For example, it achieves state-of-the-art 75.7\% ImageNet top-1 accuracy under the pre-training setting of 1024 batch size and 800 epochs for ResNet50.
The main contributions of this study are threefold:
\begin{itemize}
\item A novel contrastive learning method, i.e., SCFS\xspace, is proposed, and it can enhance semantic alignment in contrastive learning.
To our knowledge, this is the first work that defines a feature search task in contrastive learning.
\item We expand contrastive learning from a data-to-data manner to a feature-to-data manner, which enriches the diversity of augmentations.
\item The proposed SCFS\xspace achieves state-of-the-art performance on different downstream tasks.
\end{itemize}
\section{Related Works}
\label{sec:relatedwork}
Recently, some studies \cite{scrl_2021,resim_2021,soco_2021,detco_2021,ORL_2021,univip_2022,pixpro_2021,densecl_2021,dsc_2021,setsim_2021} pointed out that the problem of semantic inconsistency is more serious for downstream dense prediction tasks, such as object detection and instance segmentation.
Therefore, these methods utilize region-level and pixel-level features for contrast.
In this study, the proposed SCFS\xspace constructs feature-level augmentations using dense feature maps.
Therefore, this section introduces related studies that conduct contrastive learning using region-level and pixel-level features.
\textbf{Region-level contrastive learning.}
SCRL \cite{scrl_2021} minimizes the distance between two local features, which are cropped from two corresponding feature maps of two views.
ReSim \cite{resim_2021} aligns regional representations by sliding a fixed-sized window across the overlapping area between two views to improve the performance for localization-based tasks.
SoCo \cite{soco_2021}, ORL \cite{ORL_2021}, and UniVIP \cite{univip_2022} extract object region proposals and use them to construct region-level features for contrastive learning.
They achieve good performance for downstream dense prediction tasks.
\textbf{Pixel-level contrastive learning.}
To obtain a more fine-grained representation, several studies \cite{pixpro_2021,densecl_2021,setsim_2021} design pixel-level contrastive learning tasks, which assume that features extracted from the same pixel of different views should be treated as positive pairs while features from other pixels must be distinguished.
PixPro \cite{pixpro_2021} utilizes a pixel propagation module to select similar pixel features for contrast and encourages consistency between positive pixel pairs.
DenseCL \cite{densecl_2021} proposes a dense projection head to generate dense feature vectors for pixel-level contrastive learning.
SetSim \cite{setsim_2021} is designed to realize pixel-wise similarity learning by filtering out noisy backgrounds.
As summarized above, data augmentations bring rich information while increasing uncertainty in contrastive learning.
Methods that utilize region-level features refine the granularity of feature representation by alleviating the influence of noise.
Unlike previous studies, our method bridges the correlation between data and feature augmentations and extends the contrastive-based self-supervised task to a semantics-consistent feature search task.
\section{Methods}
\label{sec:methods}
In this section, we first introduce the overall architecture of SCFS\xspace in \cref{subsec_overall}.
Then, the contrast between data augmentations is presented in \cref{subsec_ldd}.
Next, the key feature search module of SCFS\xspace is introduced in detail in \cref{subsec_SCFS}.
Finally, the implementation details are presented in \cref{subsec_implementation}.
\begin{figure*}[!tp]
\centering
\includegraphics[height=5cm]{figures/fig_method.pdf}
\caption{
Overall architecture of the proposed semantics-consistent feature search (SCFS\xspace).
It consists of an encoder and a momentum encoder.
There are two contrastive learning tasks: the contrast between data augmentations ($\mathcal{L}_{d}$) in the final feature space and the feature search task conducted on multiple layers ($\mathcal{L}_{fs}$).
The details of the feature search procedure are shown on the right.
}
\label{fig:method}
\end{figure*}
\subsection{Overall Architecture}
\label{subsec_overall}
The overall architecture of SCFS\xspace is shown in \cref{fig:method}.
It consists of an encoder and a momentum encoder.
The momentum encoder is an exponential-moving-average version of the encoder.
SCFS\xspace consists of two contrastive learning tasks: the contrast between data augmentations ($\mathcal{L}_{d}$) and the contrast between data augmentations and feature augmentations ($\mathcal{L}_{fs}$).
\textbf{The contrast between data augmentations.}
Given two global augmentations ($\vm{I}_1$ and $\vm{I}_2$) and multiple local augmentations $\vm{I}_l$ of an input image, the final output feature representations $\vm{f}$ of data augmentations are utilized to calculate the contrastive loss $\mathcal{L}_{d}$ (which will be introduced in the second subsection).
\textbf{The contrast between data augmentations and feature augmentations.}
As introduced in \cref{sec:intro}, it is unavoidable to construct augmentations that contain different semantic concepts during the augmentation procedure.
It is harmful to pull these augmentations close indiscriminately in the feature space.
Therefore, we propose the SCFS\xspace (which will be introduced in the third subsection) method to enhance the contrast between semantics-consistent regions in different augmentations.
As shown in \cref{fig:method}, to fully enhance the contrast between semantics-consistent features, SCFS\xspace is employed on multiple layers of the backbone network.
At the $i$-th layer, SCFS\xspace utilizes the feature ${}^i{\vm{f}_l}$ from the encoder to search semantics-consistent feature ${}^i{\vm{f'}_{lg}}$ on the feature map ${}^i{\vm{F'}_g}$ from the momentum encoder.
And a feature search loss ${}^i\mathcal{L}_{fs}$ is calculated between the data augmentation ${}^i{\vm{f}_l}$ and the feature augmentation ${}^i{\vm{f'}_{lg}}$.
The overall feature search loss is the sum of all layers:
\begin{equation}
\mathcal{L}_{fs} = \sum\limits_{i \in {V_L}} {{}^i\mathcal{L}_{fs}}
\label{equ:l_fs}
\end{equation}
where $V_L$ denotes the set of layers to conduct SCFS\xspace.
The overall loss is the sum of the contrastive loss between data augmentations and the feature search loss:
\begin{equation}
\mathcal{L} = {\mathcal{L}_d} + \mathcal{L}_{fs}
\label{equ:l_SCFS}
\end{equation}
\subsection{Contrast Between Data Augmentations}
\label{subsec_ldd}
Given a pair of global augmentations ($\vm{I}_1$ and $\vm{I}_2$) of an input image, the feature representations of the two augmentations are used to calculate the global contrastive loss.
Specifically, ${\vm{f}_{1}} = {E_{\vm{\theta}} }\left( {{\vm{I}_{1}}} \right)$ and ${\vm{f'}_{2}} = {E_{\vm{\theta}'} }\left( {{\vm{I}_2}} \right)$, where $\vm{\theta}$ and $\vm{\theta}'$ are parameters of the encoder and the momentum encoder, respectively.
${\vm{f}_1}$, ${\vm{f'}_2} \in {R^K}$, $K$ is the output dimension.
$\vm{f}_1$ is normalized with a softmax function:
\begin{equation}
P_1^i = \frac{\exp \left( f_1^i / \tau \right)}{\sum\nolimits_{k = 1}^K \exp \left( f_1^k / \tau \right)}
\label{equ:l_softmax}
\end{equation}
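A minimal sketch of this temperature-scaled softmax in plain Python (in practice this is a single framework call over batched tensors; the unbatched list form here is for illustration only):

```python
import math

def temperature_softmax(f, tau):
    """Normalize a feature vector f into a probability distribution with temperature tau."""
    exps = [math.exp(v / tau) for v in f]
    total = sum(exps)
    return [e / total for e in exps]

# A lower temperature sharpens the output distribution.
p_sharp = temperature_softmax([1.0, 0.5, 0.1], tau=0.1)
p_flat = temperature_softmax([1.0, 0.5, 0.1], tau=10.0)
```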
where $\tau>0$ is a temperature parameter that controls the sharpness of the output distribution.
Note that ${P'_2}$ is obtained by normalizing ${\vm{f'}_2}$ with a similar softmax function with temperature $\tau'$.
$\vm{I}_1$ and $\vm{I}_2$ are fed to the momentum encoder and encoder symmetrically, and ${P'_1}$ and ${P_2}$ are obtained respectively.
Following DINO \cite{dino_2021}, the cross-entropy loss is employed as the contrastive loss between two global views:
\begin{equation}
{\mathcal{L}_{g}} = - \left( {P'_2}\log \left( {{P_1}} \right) + {P'_1}\log \left( {{P_2}} \right) \right)
\label{equ:l_gg}
\end{equation}
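The symmetric cross-entropy between the two global views can be sketched as follows (plain Python over already-normalized distributions; the small `eps` guarding the logarithm is an implementation assumption, not part of the formula):

```python
import math

def cross_entropy(p_teacher, p_student, eps=1e-12):
    """H(p', p) = -sum_k p'_k log p_k."""
    return -sum(pt * math.log(ps + eps) for pt, ps in zip(p_teacher, p_student))

def symmetric_global_loss(p1, p2, p1_prime, p2_prime):
    """L_g = H(P'_2, P_1) + H(P'_1, P_2)."""
    return cross_entropy(p2_prime, p1) + cross_entropy(p1_prime, p2)
```

By Gibbs' inequality, the loss is minimized when each student distribution matches its teacher distribution.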
To enrich augmentations, the multi-crop strategy \cite{swav_2020} is employed.
Multiple local augmentations $\vm{I}_l$ are also constructed and fed to the encoder: ${\vm{f}_l}\!=\!{E_{\vm{\theta}}}\left( {{\vm{I}_l}} \right)$.
${P_l}$ is obtained by normalizing ${\vm{f}_l}$ with the softmax function with temperature $\tau$.
The contrast between local views and global views can be calculated:
\begin{equation}
{\mathcal{L}_{l}} = \sum\limits_{n = 1}^N { - \left( {{{P'}_1}\log \left( {P_l^n} \right) + {{P'}_2}\log \left( {P_l^n} \right)} \right)}
\label{equ:l_lgd}
\end{equation}
where $N$ denotes the number of local views.
Thus, the overall loss is the sum of global loss and local loss:
\begin{equation}
{\mathcal{L}_{d}} = {\mathcal{L}_g} + \mathcal{L}_{l}
\label{equ:l_mc}
\end{equation}
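Putting the multi-crop terms together, a hedged sketch of the local-to-global loss (distributions are assumed to be pre-normalized probability vectors):

```python
import math

def cross_entropy(p_teacher, p_student, eps=1e-12):
    """H(p', p) = -sum_k p'_k log p_k, with a small eps guarding the log."""
    return -sum(pt * math.log(ps + eps) for pt, ps in zip(p_teacher, p_student))

def multi_crop_local_loss(p_locals, p1_prime, p2_prime):
    """L_l: every local view is pulled toward both global teacher distributions."""
    return sum(cross_entropy(p1_prime, pl) + cross_entropy(p2_prime, pl)
               for pl in p_locals)
```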
\subsection{Semantics-Consistent Feature Search}
\label{subsec_SCFS}
We propose SCFS\xspace to enhance the importance of semantics-consistent regions in different augmentations by conducting contrastive learning between data and feature augmentations.
The architecture of SCFS\xspace is shown in \cref{fig:method}.
By feeding the local augmentations $\vm{I}_{l}$ to the encoder, feature maps from different stages of the backbone ResNet50 \cite{resnet_2016} are extracted.
Specifically, the output features from different stages, i.e., $Res2$, $Res3$ and $Res4$, are utilized to conduct SCFS\xspace, ensuring that each stage of the backbone produces discriminative features:
$\left\{ {{}^2{\vm{F}_l},{}^3{\vm{F}_l},{}^4{\vm{F}_l}} \right\} = {E_{\vm{\theta}} }\left( {{\vm{I}_{l}}} \right)$, where ${}^i{\vm{F}_l} \in {R^{{W_l^i} \times {H_l^i} \times {C^i}}}$, ${W_l^i},{H_l^i},C^i$ denote the width, height and channel dimension, respectively.
Next, the global average pooling operation is conducted on each ${}^i{\vm{F}_l}$ in the spatial dimensions:
\begin{equation}
{}^i{\vm{f}_l}\left( z \right) = \frac{1}{{{W_l^i} \times {H_l^i}}}\sum\limits_{x = 1}^{{W_l^i}} {\sum\limits_{y = 1}^{{H_l^i}} {{}^i{\vm{F}_l}\left( {x,y,z} \right)} }
\label{equ:gap}
\end{equation}
where $z = 1, \ldots ,C^i$ and ${}^i{\vm{f}_l} \in {R^{C^i}}$.
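A direct pure-Python rendering of this pooling over a $W \times H \times C$ feature map (illustration only):

```python
def global_average_pool(F):
    # F: nested list of shape [W][H][C] -> channel descriptor of length C
    W, H, C = len(F), len(F[0]), len(F[0][0])
    return [sum(F[x][y][z] for x in range(W) for y in range(H)) / (W * H)
            for z in range(C)]
```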
Meanwhile, the global augmentations $\vm{I}_g$ ($g=1,2$) are fed to the momentum encoder to extract feature maps from different stages: $\left\{ {{}^2{\vm{F}'_g},{}^3{\vm{F}'_g},{}^4{\vm{F}'_g}} \right\} = {E_{\vm{\theta}'} }\left( {{\vm{I}_{g}}} \right)$, where ${}^i{\vm{F}'_g} \in {R^{{W_g^i} \times {H_g^i} \times {C^i}}}$, and ${W_g^i},{H_g^i},C^i$ denote the width, height and channel dimension, respectively.
Then, based on ${}^i{\vm{f}_l}$ and ${}^i{\vm{F}'_g}$, SCFS\xspace aims to adaptively search the most semantics-consistent features in ${}^i{\vm{F}'_g}$ for contrast, while suppressing irrelevant features.
In SCFS\xspace, each feature ${}^i{\vm{f}_l}$ of the local data augmentations is treated as query, and the features ${}^i{\vm{F}'_g}$ of the global augmentations are treated as keys.
The similarity between ${}^i{\vm{f}_l}$ and ${}^i{\vm{F}'_g}$ is calculated:
\begin{equation}
\vm{A}\left( {x,y} \right) = \frac{{{}^i{\vm{f}_l} \cdot {{}^i{\vm{F}'_g}}\left( {x,y} \right)}}{{{{\left\| {{}^i{\vm{f}_l}} \right\|}_2}{{\left\| {{{}^i{\vm{F}'_g}}\left( {x,y} \right)} \right\|}_2}}}
\label{equ:attention}
\end{equation}
where $\vm{A} \in {R^{{W_g^i} \times {H_g^i}}}$ is the attention map, and $x = 1, \ldots ,{W_g^i}$, $y = 1, \ldots ,{H_g^i}$, ${\left\| \cdot \right\|_2}$ is the L2 norm.
The attention map $\vm{A}$ activates the semantics-consistent regions of the local augmentation on the global augmentation.
Thus, the regions of the global view that are most relevant to the local view can be located.
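To make the computation concrete, a pure-Python sketch of \cref{equ:attention}, matching a query vector against every spatial key:

```python
import math

def cosine_attention(q, F):
    # q: query vector (length C); F: key map [W][H][C] -> attention map [W][H]
    qn = math.sqrt(sum(v * v for v in q))
    A = []
    for column in F:
        row = []
        for key in column:
            kn = math.sqrt(sum(v * v for v in key))
            dot = sum(a * b for a, b in zip(q, key))
            row.append(dot / (qn * kn))
        A.append(row)
    return A
```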
To select semantic features and suppress irrelevant local features, we directly multiply the attention map $\vm{A}$ with ${}^i{\vm{F}'_g}$ to obtain the semantics-consistent feature augmentations:
\begin{equation}
{}^i{\vm{F}'_{lg}} = \vm{A} \cdot {}^i{\vm{F}'_g}
\label{equ:feature_search}
\end{equation}
This operation can be regarded as attention-weighted average pooling.
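Reading $\vm{A} \cdot {}^i{\vm{F}'_g}$ as attention-weighted average pooling, one plausible pure-Python sketch (collapsing the spatial dimensions under the attention weights) is:

```python
def attention_weighted_pool(A, F):
    # A: attention map [W][H]; F: feature map [W][H][C] -> vector of length C
    W, H, C = len(F), len(F[0]), len(F[0][0])
    return [sum(A[x][y] * F[x][y][z] for x in range(W) for y in range(H))
            for z in range(C)]
```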
Through feature search, $N$ local data augmentations $\vm{I}_{l}$ can search $N$ corresponding semantics-consistent features ${}^i{\vm{F}'_{lg}}$ from a global data augmentation $\vm{I}_{g}$.
That is, for each global data augmentation, $N$ different features are constructed in the feature space through the feature search procedure.
Therefore, we term the searched semantics-consistent features ${}^i{\vm{F}'_{lg}}$ as feature-level augmentations.
After SCFS\xspace, the feature augmentation ${}^i{\vm{F}'_{lg}}$ only contains region-level features that are semantically related to the local augmentation ${}^i{\vm{F}_l}$.
Next, ${}^i{\vm{F}_l}$ and ${}^i{\vm{F}'_{lg}}$ are fed to corresponding projection heads to obtain their final representations for contrast:
\begin{equation}
\left\{ \begin{array}{l}
{}^i{\vm{f}_l} = \mathrm{H}_i\left( {{}^i{\vm{F}_l}} \right) \\
{}^i{\vm{f}'_{lg}} = \mathrm{H}'_i\left( {{}^i{\vm{F}'_{lg}}} \right)
\end{array} \right.
\label{equ:head}
\end{equation}
where $\mathrm{H}_i$ and $\mathrm{H}'_i$ denote the projection heads on the $i$-th layer of the encoder and the momentum encoder, respectively.
${}^i{\vm{f}_l}$ and ${}^i{\vm{f}'_{lg}}$ are normalized with softmax function with temperature $\tau$ and $\tau'$, respectively, as the same formulation in \cref{equ:l_softmax}.
The corresponding output probability ${}^i{P_l}$ and ${}^i{P'_{lg}}$ are employed to calculate the contrast loss between local data augmentations and feature augmentations:
\begin{equation}
{}^i{\mathcal{L}_{fs}} = \sum\limits_{g = 1}^2 {\sum\limits_{n = 1}^N { - \left( {{}^i{{P'}_{lg}}\log \left( {{}^iP_l^n} \right)} \right)} }
\label{equ:l_lldfi}
\end{equation}
Through SCFS\xspace, the contrast between feature augmentations and data augmentations is bridged.
The model can adaptively search the semantics-consistent features for contrast.
Therefore, it can enhance the importance of semantics-consistent regions in different augmentations, alleviating the uncertainty in contrastive learning introduced by data augmentations that contain different semantic concepts.
\subsection{Implementation Details}
\label{subsec_implementation}
SCFS\xspace is based on DINO \cite{dino_2021}, and we follow most of the hyper-parameter settings of DINO.
For a fair comparison, the standard ResNet50 \cite{resnet_2016} is employed as the backbone network in all experiments.
For data augmentation, the global augmentations consist of random cropping, resizing to $224 \times 224$, random horizontal flip, Gaussian blur, and color jittering.
The local augmentations consist of random cropping, resizing to $96 \times 96$, random horizontal flip, Gaussian blur, and color jittering.
For feature augmentations in SCFS\xspace, the $Res2$, $Res3$, and $Res4$ layers are used.
Two global views with $N=8$ local views are the default augmentation setting.
The projection head for the contrast between data augmentations consists of a four-layer multi-layer-perceptron (MLP) with the same architecture as DINO \cite{dino_2021}.
The projection head for feature search consists of three convolutional layers and two FC layers.
The Pytorch-style pseudocode of SCFS\xspace is shown in Algorithm \ref{algo}.
For simplification, we only show one local augmentation and the $i$-th layer for feature search.
\begin{algorithm}[]
\caption{PyTorch-style pseudocode of SCFS\xspace.}
\label{algo}
\begin{lstlisting}[language=python]
# es, et: encoder and momentum encoder networks
# hs_i, ht_i: head on the layer-i for feature search of the encoder and momentum encoder
# C, Ci: centers
# tps, tpt: temperatures
# l, m: network and center momentum rates
et.params = es.params
for I in loader: # load a minibatch I with n samples
    I1, I2 = augment(I), augment(I) # global views
    Il = augment(I) # multiple local views
    # encoder output
    s1, _ = es(I1)
    s2, _ = es(I2)
    sl, Sl_i = es(Il)
    # momentum encoder output
    t1, T1_i = et(I1)
    t2, T2_i = et(I2)
    # feature search
    sl_i_1, t1_i = FS(Sl_i, T1_i, hs_i, ht_i)
    sl_i_2, t2_i = FS(Sl_i, T2_i, hs_i, ht_i)
    # contrastive loss for data augmentation
    loss_g = H(t1, s2, C)/2 + H(t2, s1, C)/2
    loss_l = H(t1, sl, C)/2 + H(t2, sl, C)/2
    loss_d = loss_g + loss_l
    # feature search loss
    loss_fs = H(t1_i, sl_i_1, Ci)/2 + H(t2_i, sl_i_2, Ci)/2
    # total loss
    loss = loss_d + loss_fs
    loss.backward() # back-propagate
    # encoder, momentum encoder and center updates
    update(es) # SGD
    et.params = l*et.params + (1-l)*es.params
    C = m*C + (1-m)*cat([t1, t2]).mean(dim=0)
    Ci = m*Ci + (1-m)*cat([t1_i, t2_i]).mean(dim=0)

def H(t, s, C):
    t = t.detach() # stop gradient
    s = softmax(s / tps, dim=1)
    t = softmax((t - C) / tpt, dim=1) # center + sharpen
    return - (t * log(s)).sum(dim=1).mean()

def FS(s, t, hs, ht): # s: local feature map, t: global feature map
    t = t.detach() # stop gradient
    q = normalize(gap(s, dim=(1,2)), dim=1) # gap + l2-normalize: query
    k = normalize(t, dim=3) # l2-normalize: keys
    a = (q[:,None,None,:] * k).sum(dim=3) # attention map
    t = a[...,None] * t # semantics-consistent feature augmentation
    return hs(s), ht(t)
\end{lstlisting}
\end{algorithm}
\section{Experiments}
\label{sec:experiments}
In this section, comprehensive experiments are conducted to demonstrate the effectiveness of SCFS\xspace.
We evaluate the performance on different downstream tasks, including ImageNet classification, object detection, instance segmentation, and classification tasks on small datasets.
In addition, we conduct ablation experiments to analyze the influence of each component in SCFS\xspace.
\setlength{\tabcolsep}{4pt}
\begin{table}[!t]
\begin{center}
\begin{tabular}{lllcc}
\toprule
\noalign{\smallskip}
Method & Batch Size & Epochs & LP & $k$-NN \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Supervised & 256 & 100 & 76.2 & 74.8 \\
SimCLR \cite{simclr_2020} & 4096 & 1000 & 69.3 & - \\
BYOL \cite{byol_2020} & 4096 & 1000 & 74.3 & 66.9 \\
BYOL \cite{byol_2020} & 4096 & 200 & 70.6 & - \\
SwAV \cite{swav_2020} & 4096 & 800 & 75.3 & - \\
SwAV \cite{swav_2020} & 256 & 200 & 72.7 & - \\
MoCo-v2 \cite{moco_2020} & 256 & 200 & 67.5 & 54.3 \\
SimSiam \cite{simsiam_2021} & 256 & 200 & 70.0 & - \\
ISD \cite{isd_2021} & 256 & 200 & 69.8 & 62.0 \\
MSF \cite{msf_2021} & 256 & 200 & 71.4 & 64.0 \\
NNCLR \cite{nnclr_2021} & 4096 & 200 & 70.7 & - \\
Barlow Twins \cite{barlowtwins_2021} & 2048 & 1000 & 73.2 & - \\
VICReg \cite{vicreg_2021} & 2048 & 1000 & 73.2 & - \\
OBoW \cite{obow_2021} & 256 & 200 & 73.8 & - \\
DCL \cite{dcl_2021} & 256 & 200 & 66.9 & - \\
CLSA \cite{clsa_2021} & 256 & 200 & 73.3 & - \\
AdCo \cite{adco_2021} & 256 & 200 & 73.2 & - \\
DetCo \cite{detco_2021} & 256 & 200 & 68.6 & - \\
UniVIP \cite{univip_2022} & 4096 & 200 & 73.1 & - \\
HCSC \cite{hcsc_2022} & 256 & 200 & 73.3 & - \\
MoCo-v3 \cite{mocov3_2021} & 4096 & 300 & 72.8 & - \\
MoCo-v3 \cite{mocov3_2021} & 4096 & 1000 & 74.6 & - \\
DINO* \cite{dino_2021} & 256 & 200 & 73.0 & 64.0 \\
DINO \cite{dino_2021} & 4080 & 800 & 75.3 & 67.5 \\
\rowcolor{Gray} \textbf{SCFS\xspace} & 256 & 200 & \underline{73.9} & \underline{65.5} \\
\rowcolor{Gray}\textbf{SCFS\xspace} & 1024 & 800 & \textbf{75.7} & \textbf{68.5} \\
\bottomrule
\end{tabular}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Linear probing and $k$-NN accuracy (\%) on ImageNet. The result with ``*" is reproduced for fair comparison. LP denotes linear probing. Bold font and underline indicate the best results under the setting of 256 batch size and 200 epochs and the setting of 1024 batch size and 800 epochs, respectively.}
\label{table:knnlinear}
\end{table}
\begin{table}[!t]
\begin{center}
\setlength{\tabcolsep}{0.45mm}{
\begin{tabular}{lllcccc}
\toprule
\noalign{\smallskip}
\multirow{2}*{Method} & \multirow{2}*{Batch Size} & \multirow{2}*{Epochs} & \multicolumn{2}{c}{Top-1} & \multicolumn{2}{c}{Top-5} \\
~ & ~ & ~ & 1\% & 10\% & 1\% & 10\% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Supervised \cite{s4l_2019} & 256 & 90 & 25.4 & 56.4 & 48.4 & 80.4 \\
SimCLR \cite{simclr_2020} & 4096 & 1000 & 48.3 & 65.6 & 75.5 & 87.8 \\
BYOL \cite{byol_2020} & 4096 & 1000 & 53.2 & 68.8 & 78.4 & 89.0 \\
SwAV \cite{swav_2020} & 4096 & 800 & 53.9 & 70.2 & 78.5 & 89.9 \\
DINO \cite{dino_2021} & 4080 & 800 & 50.2 & 69.3 & 74.0 & 89.1 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & 1024 & 800 & \textbf{54.3} & \textbf{70.5} & \textbf{78.6} & \textbf{90.2} \\
\bottomrule
\end{tabular}}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Semi-supervised learning on ImageNet with 1\% and 10\% labels. Bold font indicates the best result.}
\label{table:semisupervised}
\end{table}
\subsection{Comparing with SSL methods on ImageNet}
\textbf{$k$-NN and Linear Probing Accuracy on ImageNet.}
After pre-training on the ImageNet ILSVRC-2012 \cite{imagenet_2015} training set, the pre-trained models are evaluated on the ImageNet ILSVRC-2012 validation set.
The $k$-NN classifier is evaluated as in \cite{insdis_2018}.
For linear probing, we train a linear classifier from scratch for 100 epochs on the features extracted by the frozen backbone \cite{moco_2020}.
The top-1 accuracy is adopted as the evaluation metric.
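As a rough illustration of the $k$-NN protocol (a simplified majority vote; the actual protocol of \cite{insdis_2018} uses similarity-weighted voting), a cosine-similarity $k$-NN classifier over a feature bank can be sketched as:

```python
import math

def knn_predict(query, bank_feats, bank_labels, k=3):
    # simplified k-NN: unweighted vote over the k most cosine-similar features
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)
    order = sorted(range(len(bank_feats)), key=lambda i: -cos(query, bank_feats[i]))
    votes = [bank_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```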
The results are reported in Table \ref{table:knnlinear}.
With the standard ResNet50 \cite{resnet_2016} architecture and pre-training with 256 batch size for 200 epochs, the proposed SCFS\xspace achieves the best $k$-NN top-1 accuracy (65.5\%) and the best linear probing top-1 accuracy (73.9\%), outperforming its baseline DINO \cite{dino_2021} by 1.5\% and 0.9\%, respectively.
In addition, with 1024 batch size and 800 epochs, SCFS\xspace achieves the best $k$-NN accuracy (68.5\%) and linear probing accuracy (75.7\%), outperforming DINO \cite{dino_2021} trained with 4080 batch size for 800 epochs.
This result demonstrates that SCFS\xspace can improve the representation learning performance by searching semantics-consistent features for contrast.
\setlength{\tabcolsep}{8pt}
\begin{table}[!t]
\begin{center}
\begin{tabular}{llccc}
\toprule
Method & Epochs & $\rm{AP}^{\rm{b}}$ & $\rm{AP}_{50}^{\rm{b}}$ & $\rm{AP}_{75}^{\rm{b}}$ \\
\hline
Scratch & - & 33.8 & 60.2 & 33.1 \\
Supervised & 90 & 53.5 & 81.3 & 58.8 \\
SimCLR \cite{simclr_2020} & 1000 & 56.3 & 81.9 & 62.5 \\
BYOL \cite{byol_2020} & 300 & 51.9 & 81.0 & 56.5 \\
SwAV \cite{swav_2020} & 400 & 45.1 & 77.4 & 46.5 \\
DINO \cite{dino_2021} & 800 & 55.9 & 82.1 & 62.3 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & 800 & \textbf{57.4} & \textbf{83.0} & \textbf{63.6} \\
\bottomrule
\end{tabular}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Results for PASCAL VOC object detection using Faster R-CNN \cite{fasterrcnn_2015} with ResNet50-C4. Bold font indicates the best result.}
\label{table:voc}
\end{table}
\setlength{\tabcolsep}{0.9mm}
\begin{table*}[t]
\centering
\begin{tabular}{l|c|cccccc|cccccc}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Epochs} & \multicolumn{6}{c}{$1\!\times\!\rm{schedule}$} \vline & \multicolumn{6}{c}{$2\!\times\!\rm{schedule}$} \\
& & $\rm{AP}^{\rm{b}}$ & $\rm{AP}_{50}^{\rm{b}}$ & $\rm{AP}_{75}^{\rm{b}}$ & $\rm{AP}^{\rm{s}}$ & $\rm{AP}_{50}^{\rm{s}}$ & $\rm{AP}_{75}^{\rm{s}}$ & $\rm{AP}^{\rm{b}}$ & $\rm{AP}_{50}^{\rm{b}}$ & $\rm{AP}_{75}^{\rm{b}}$ & $\rm{AP}^{\rm{s}}$ & $\rm{AP}_{50}^{\rm{s}}$ & $\rm{AP}_{75}^{\rm{s}}$ \\ \hline
Scratch & - & 31.0 & 49.5 & 33.2 & 28.5 & 46.8 & 30.4 & 38.4 & 57.5 & 42.0 & 34.7 & 54.8 & 37.2 \\
Supervised & 90 & 38.9 & 59.6 & 42.7 & 35.4 & 56.5 & 38.1 & 41.3 & 61.3 & 45.0 & 37.3 & 58.3 & 40.3 \\
\midrule
MoCo~\cite{moco_2020} & 200 & 38.5 & 58.9 & 42.0 & 35.1 & 55.9 & 37.7 & 40.8 & 61.6 & 44.7 & 36.9 & 58.4 & 39.7 \\
MoCo v2~\cite{mocov2_2020} & 200 & 40.4 & 60.2 & 44.2 & 36.4 & 57.2 & 38.9 & 41.7 & 61.6 & 45.6 & 37.6 & 58.7 & 40.5 \\
BYOL~\cite{byol_2020} & 300 & 40.4 & 61.6 & 44.1 & \textbf{37.2} & 58.8 & \textbf{39.8} & 42.3 & 62.6 & 46.2 & \textbf{38.3} & 59.6 & \textbf{41.1} \\
SwAV~\cite{swav_2020} & 400 & - & - & - & - & - & - & 42.3 & 62.8 & \textbf{46.3} & 38.2 & 60.0 & 41.0 \\
ReSim-FPN$^{T}$~\cite{resim_2021} & 200 & 39.8 & 60.2 & 43.5 & 36.0 & 57.1 & 38.6 & 41.4 & 61.9 & 45.4 & 37.5 & 59.1 & 40.3 \\
SetSim~\cite{setsim_2021} & 200 & 40.2 & 60.7 & 43.9 & 36.4 & 57.7 & 39.0 & 41.6 & 62.4 & 45.9 & 37.7 & 59.4 & 40.6 \\
DenseCL~\cite{densecl_2021} & 200 & 40.3 & 59.9 & 44.3 & 36.4 & 57.0 & 39.2 & 41.2 & 61.9 & 45.1 & 37.3 & 58.9 & 40.1 \\
DSC~\cite{dsc_2021} & 200 & 39.4 & 58.9 & 43.2 & 35.7 & 56.1 & 38.3 & - & - & - & - & - & - \\
HSA~\cite{hsa_2022} & 800 & 40.2 & 60.9 & 43.9 & 36.5 & 57.9 & 39.1 & \textbf{42.2} & 63.0 & 46.1 & 38.1 & 59.9 & 40.9 \\
DetCo~\cite{detco_2021} & 800 & 40.1 & 61.0 & 43.9 & 36.4 & 58.0 & 38.9 & - & - & - & - & - & - \\
ORL*~\cite{ORL_2021} & 800 & 40.3 & 60.2 & \textbf{44.4} & 36.3 & 57.3 & 38.9 & - & - & - & - & - & - \\
DINO~\cite{dino_2021} & 800 & 40.0 & 61.6 & 43.4 & 36.5 & 58.6 & 39.1 & 41.9 & 62.6 & 46.0 & 37.8 & 59.7 & 40.6 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & 800 & \textbf{40.5} & \textbf{61.8} & 44.0 & 36.7 & \textbf{58.8} & 39.2 & 42.1 & \textbf{63.4} & 46.1 & 38.1 & \textbf{60.2} & 41.0 \\
\bottomrule
\end{tabular}
\caption{Object detection and instance segmentation on COCO using Mask R-CNN \cite{maskrcnn_2017} with ResNet50-FPN. Bold font indicates the best result.}
\label{tab_coco}
\end{table*}
\noindent
\textbf{Semi-Supervised Learning on ImageNet.}
In this part, we evaluate the performance of SCFS\xspace under the semi-supervised setting.
Specifically, we use 1\% and 10\% of the labeled training data from ImageNet \cite{imagenet_2015} for finetuning, which follows the semi-supervised protocol in SimCLR \cite{simclr_2020}.
The same splits of 1\% and 10\% of ImageNet labeled training data in SimCLRv2 \cite{simclrv2_2020} are used.
The results are reported in Table \ref{table:semisupervised}.
After finetuning using 1\% and 10\% training data, SCFS\xspace outperforms all the compared methods.
The results demonstrate that SCFS\xspace achieves the best feature representation quality.
\setlength{\tabcolsep}{1.2mm}
\begin{table*}[t]
\begin{center}
\begin{tabular}{lcccccc}
\toprule
Method & CIFAR-10 & CIFAR-100 & CUB-Bird & Stanford-Cars & Aircraft & Oxford-Pets \\ \hline
Supervised &97.5&86.4&81.3&92.1&86.0&92.1\\
SimCLR\cite{simclr_2020} &97.7 & 85.9 & --& 91.3& 88.1 &89.2\\
BYOL\cite{byol_2020} & 97.8 & 86.1 & -- & 91.6 & 88.1 & 91.7 \\
DINO\cite{dino_2021}* & 97.7 & 86.6 & 81.0 &91.1 & 87.4 & 91.5\\
\rowcolor{Gray}\textbf{SCFS\xspace} & \textbf{97.8} & \textbf{86.7} & \textbf{82.7} & \textbf{91.6} & \textbf{88.5} & \textbf{91.9} \\
\bottomrule
\end{tabular}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Transfer learning results from ImageNet with the standard ResNet50 \cite{resnet_2016}. * denotes the results are reproduced in this study. Bold font indicates the best result.}
\label{table:Transfer learning}
\end{table*}
\subsection{Transfer Learning on Downstream Tasks}
\textbf{Object Detection and Instance Segmentation.}
In this part, we evaluate the representations of SCFS\xspace on dense prediction tasks, i.e., object detection and instance segmentation, on the mainstream PASCAL VOC \cite{voc_2010} and MS COCO \cite{coco_2014} datasets.
On the PASCAL VOC dataset \cite{voc_2010}, the trainval07+12 set is used as the training set, and the test2007 set is used as the test set.
Following \cite{soco_2021}, Faster R-CNN detector \cite{fasterrcnn_2015} with the ResNet50-C4 backbone initialized by the self-supervised pre-trained model is trained end-to-end.
On the COCO dataset, the train2017 set is used for training and the val2017 set is used for evaluation.
The Mask R-CNN \cite{maskrcnn_2017} with R50-FPN is used.
The $\rm{AP}^{\rm{b}}$, $\rm{AP}_{50}^{\rm{b}}$ and $\rm{AP}_{75}^{\rm{b}}$ metrics are used for object detection, while the $\rm{AP}^{\rm{s}}$, $\rm{AP}_{50}^{\rm{s}}$ and $\rm{AP}_{75}^{\rm{s}}$ metrics are used for instance segmentation.
The experimental results are shown in Table \ref{table:voc} and Table \ref{tab_coco}.
SCFS\xspace achieves the best performance on both datasets.
For example, on VOC, SCFS\xspace achieves 57.4\% $\rm{AP}^{\rm{b}}$, 83.0\% $\rm{AP}_{50}^{\rm{b}}$ and 63.6\% $\rm{AP}_{75}^{\rm{b}}$.
The $\rm{AP}^{\rm{b}}$ of SCFS\xspace outperforms its baseline DINO by 1.5\%.
These results show that SCFS\xspace also transfers well to dense prediction tasks.
\noindent
\textbf{Other Classification Tasks.}
In this part, we focus on the performance of self-supervised models when they are finetuned on small datasets, including CIFAR \cite{cifar10} and fine-grained datasets \cite{car,cub,aircraft,pets}.
The results are shown in Table \ref{table:Transfer learning}.
The proposed SCFS\xspace shows the best performance on all the small datasets, which demonstrates that SCFS\xspace has good generalization ability.
\subsection{Pre-training on Uncurated Datasets}
The proposed SCFS\xspace can solve the problem of semantic inconsistency during pre-training, which is especially important when pre-training on uncurated datasets, where this problem is more serious.
To verify this, we pre-train SCFS\xspace and DINO on COCO \cite{coco_2014}, which is much less curated than ImageNet.
The same hyper-parameters used on ImageNet are applied to train the models with 512 batch size for 500 epochs.
After pre-training, we fine-tune the pre-trained models on COCO for object detection and instance segmentation.
The Mask R-CNN \cite{maskrcnn_2017} with R50-FPN is used.
As shown in Table \ref{tab_pt_on_coco}, SCFS\xspace improves the performance significantly compared to its baseline DINO.
In addition, when compared to other dense pixel-level and region-level methods, such as DenseCL \cite{densecl_2021} and ORL \cite{ORL_2021}, SCFS\xspace also achieves the best performance.
This experiment verifies that SCFS\xspace can effectively solve the problem of semantic inconsistency during pre-training.
\setlength{\tabcolsep}{0.4mm}
\begin{table}[t]
\centering
\begin{tabular}{l|c|cccccc}
\toprule
Method & Pre-train & $\rm{AP}^{\rm{b}}$ & $\rm{AP}_{50}^{\rm{b}}$ & $\rm{AP}_{75}^{\rm{b}}$ & $\rm{AP}^{\rm{s}}$ & $\rm{AP}_{50}^{\rm{s}}$ & $\rm{AP}_{75}^{\rm{s}}$ \\ \hline
Scratch & - & 31.0 & 49.5 & 33.2 & 28.5 & 46.8 & 30.4 \\
Supervised & ImageNet & 38.9 & 59.6 & 42.7 & 35.4 & 56.5 & 38.1 \\
SimCLR\cite{simclr_2020} & COCO & 37.0 & 56.8 & 40.3 & 33.7 & 53.8 & 36.1 \\
MoCov2\cite{mocov2_2020} & COCO & 38.5 & 58.1 & 42.1 & 34.8 & 55.3 & 37.3 \\
BYOL\cite{byol_2020} & COCO & 39.5 & 59.3 & 43.2 & 35.6 & 56.5 & 38.2 \\
DenseCL\cite{densecl_2021} & COCO & 39.6 & 59.3 & 43.3 & 35.7 & 56.5 & 38.4 \\
ORL\cite{ORL_2021} & COCO & 40.3 & 60.2 & 44.4 & 36.3 & 57.3 & 38.9 \\
UniVIP\cite{univip_2022} & COCO & 40.8 & - & - & 36.8 & - & - \\
DINO\cite{dino_2021} & COCO & 39.0 & 59.6 & 42.9 & 35.6 & 56.8 & 38.0 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & COCO & \textbf{40.9} & \textbf{61.6} & \textbf{44.4} & \textbf{36.9} & \textbf{58.4} & \textbf{39.5} \\
\bottomrule
\end{tabular}
\caption{Pre-training and then fine-tuning on COCO using Mask R-CNN \cite{maskrcnn_2017} with ResNet50-FPN and 1$\times$ schedule. All models pre-trained on COCO are pre-trained with 512 batch size for 800 epochs. Bold font indicates the best result.}
\label{tab_pt_on_coco}
\end{table}
\subsection{Ablation Studies}
We analyze the influence of each component in SCFS\xspace.
To speed up the training time, the ImageNet100 dataset, which contains 100 randomly selected categories from ImageNet \cite{imagenet_2015}, is adopted.
All the models are pre-trained on the ImageNet100 training set with 256 batch size for 200 epochs, and tested on the validation set.
The $k$-NN and linear probing top-1 accuracy are used as the evaluation metrics.
\noindent
\textbf{Influence of Different Contrast Modes.}
The contrast mode can be divided into three types:
contrast between two global data augmentations used in all contrastive learning methods ($\rm{G_d2G_d}$);
contrast between local data augmentations and global data augmentations used in multi-crop strategy ($\rm{L_d2G_d}$);
and contrast between local data augmentations and local feature augmentations used in SCFS\xspace ($\rm{L_d2L_f}$).
The results are shown in Table \ref{table:ablation_mode}.
With multi-crop, DINO \cite{dino_2021} (81.1\%) improves the $k$-NN accuracy by 3.0\% compared to the DINO baseline without multi-crop (78.1\%).
SCFS\xspace (84.8\%) further improves accuracy by 3.7\% by introducing a contrast between local data augmentation and local feature augmentation.
Some attention maps of SCFS\xspace and DINO are shown in Fig. \ref{fig:exp_attention}.
SCFS\xspace can more accurately focus on the semantics-consistent regions between the global view and local views, while DINO is easily distracted by the background.
We also add multi-layer feature contrastive learning on DINO.
The result in Table \ref{table:ablation_mode} (the ``DINO w ML'' row) verifies that the improvements of SCFS\xspace are not solely due to multi-layer contrast.
In addition, we directly crop the corresponding region of local augmentation on the feature map of global augmentation for contrastive learning.
As shown in Table \ref{table:ablation_mode}, this ROIAlign variant of SCFS\xspace (the ``ROI Crop'' row) also outperforms the DINO baseline, which shows that the directly cropped features are also beneficial for contrastive learning.
However, the ROIAlign variant achieves lower accuracy than SCFS\xspace, demonstrating that the soft feature search in SCFS\xspace is better than hard ROIAlign cropping, which may damage the continuous semantic context of the feature map.
Further, we also test the performance of SCFS\xspace under the setting without multi-crop.
That is, the feature search is conducted between two global data augmentations.
We term this contrast mode as $\rm{G_d2G_f}$.
As shown in the ``SCFS\xspace w/o MC'' row, SCFS\xspace also improves the performance compared to its baseline (the ``DINO w/o MC'' row), which proves that SCFS\xspace also helps resolve the semantic inconsistency caused by other augmentations, not only the multi-crop augmentation strategy.
\setlength{\tabcolsep}{0.4mm}
\begin{table}[t]
\begin{center}
\begin{tabular}{l|cccc|cc}
\toprule
Contrast Mode & $\rm{G_d2G_d}$ & $\rm{L_d2G_d}$ & $\rm{L_d2L_f}$ & $\rm{G_d2G_f}$ & $k$-NN & LP \\
\hline
DINO w/o MC & $\checkmark$ & & & & 78.1 & 83.7 \\
DINO & $\checkmark$ & $\checkmark$ & & & 81.1 & 87.0 \\
DINO w ML & $\checkmark$ & $\checkmark$ & & & 82.2 & 87.4 \\
SCFS\xspace & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 84.8 & 89.2 \\
ROI Crop & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 83.9 & 88.1 \\
SCFS\xspace w/o MC & & & & $\checkmark$ & 79.7 & 86.3 \\
\bottomrule
\end{tabular}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Influence of different contrast modes. MC, ML, and LP denote multi-crop, multi-layer, and linear probing, respectively.}
\label{table:ablation_mode}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figures/fig_exp_attention.pdf}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Attention maps of SCFS\xspace (the third row) compared with DINO \cite{dino_2021} (the second row).
In each example, (a) shows a global image, and its four local images in (b) are constructed by $2\times2$ jigsaw.
(d) and (f) show the attention maps that highlight the semantics-consistent regions between the local images in (b) and the global image in (a).
They are obtained by multiplying the globally average pooled feature maps from the encoder (Res4) of the local images in (b) with the feature map (Res4) of the global image in (a).
And the encoder is the trained DINO ResNet50 model and SCFS\xspace ResNet50 model in (d) and (f), respectively.
(c) and (e) show the mean attention maps of DINO and SCFS\xspace, respectively, which are obtained by multiplying the mean globally average pooled feature map of the four local images in (b) with the feature map of the global image in (a).
}
\label{fig:exp_attention}
\end{figure}
\noindent
\textbf{Influence of Multi-Layer Contrast.}
The influence of the feature layer that is used for feature search is analyzed.
The $Res2$, $Res3$ and $Res4$ in the ResNet50 \cite{resnet_2016} backbone are evaluated.
As shown in Table \ref{table:ablation_layer}, the performance improves with the increase of feature layer numbers, which demonstrates that conducting feature search on more layers is helpful for representation learning.
\setlength{\tabcolsep}{6pt}
\begin{table}[!t]
\begin{center}
\begin{tabular}{ccc|cc}
\toprule
Res2 & Res3 & Res4 & $k$-NN & Linear Probing \\
\hline
$\checkmark$ & & & 82.0 & 86.1 \\
$\checkmark$ & $\checkmark$ & & 84.3 & 87.5 \\
$\checkmark$ & $\checkmark$ & $\checkmark$ & 84.8 & 89.2 \\
\bottomrule
\end{tabular}
\end{center}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Influence of different feature augmentation layers.}
\label{table:ablation_layer}
\end{table}
Further, we evaluate the $k$-NN accuracy using feature maps from different layers to observe the influence of feature search on the representation of middle layers.
We also use the features extracted by the $Res2$, $Res3$ and $Res4$ layers of ResNet50.
The results are shown in Fig. \ref{fig:exp_layersknn}.
Compared with DINO \cite{dino_2021}, SCFS\xspace achieves better performance with features from all middle layers on ImageNet100, which verifies that enhancing semantic consistency improves the semantic representation of shallow layers.
Compared with supervised learning, the SCFS\xspace model achieves higher performance on the $Res2$ and $Res3$ layers, which shows that SCFS\xspace is more advantageous for shallow-layer feature representation.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/fig_exp_layer234_in100.pdf}
\setlength{\abovecaptionskip}{0.05cm}
\caption{The $k$-NN accuracy of features from different layers.}
\label{fig:exp_layersknn}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{figures/fig_exp_local_patch_nums_linear.pdf}
\setlength{\abovecaptionskip}{0.05cm}
\caption{Influence of local augmentation number.}
\label{fig:localnumber}
\end{figure}
\noindent
\textbf{Influence of Local Augmentation Number.}
In this part, we analyze how the performance changes with the number of local augmentations.
The results are shown in Fig. \ref{fig:localnumber}.
The performance of DINO and SCFS\xspace improves steadily as more local augmentations are added for contrast.
In addition, SCFS\xspace improves the performance under different local augmentation numbers, which demonstrates that semantics-consistent feature search is helpful to alleviate the influence of semantics inconsistent data augmentations.
\noindent
\textbf{Experiments on Other Backbones.}
In this part, we conduct experiments on other backbones to further evaluate the effectiveness of SCFS\xspace.
Apart from the default ResNet50 used in the other experiments, ResNet101 and Vision Transformers (ViT-S and ViT-B) are tested.
The results are shown in Table \ref{table:backbones}.
SCFS\xspace achieves significant improvement on different backbones compared to its baseline DINO, which demonstrates that SCFS\xspace is applicable to different backbones.
\setlength{\tabcolsep}{3pt}
\begin{table}[]
\centering
\begin{tabular}{cccc|cc}
\toprule
Method & Backbone & Batch Size & Epochs & $k$-NN & LP \\
\midrule
DINO & R101 & 256 & 200 & 81.0 & 86.3 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & R101 & 256 & 200 & \textbf{85.1} & \textbf{88.3} \\ \hline
DINO & ViT-S & 256 & 200 & 75.0 & 80.4 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & ViT-S & 256 & 200 & \textbf{76.3} & \textbf{81.0} \\ \hline
DINO & ViT-B & 256 & 200 & 76.2 & 80.7 \\
\rowcolor{Gray}\textbf{SCFS\xspace} & ViT-B & 256 & 200 & \textbf{77.2} & \textbf{82.3} \\
\bottomrule
\end{tabular}
\setlength{\abovecaptionskip}{0.1cm}
\caption{Experiments on other backbones. LP denotes linear probing.}
\label{table:backbones}
\end{table}
\section{Conclusions}
\label{sec:conclusion}
In this study, we aim to alleviate the problem of unmatched semantic alignment in current contrastive learning by expanding the augmentations from data space to feature space.
The proposed semantics-consistent feature search (SCFS\xspace) adaptively searches semantics-consistent local features between different views for contrast, while suppressing irrelevant local features during pre-training.
It conducts contrast learning between feature augmentation and data augmentation.
The experimental results demonstrate that SCFS\xspace can learn to focus on meaningful object regions and effectively improve the performance of self-supervised learning.
The feature search procedure in SCFS\xspace is parameter-free, i.e., it introduces no additional learnable parameters.
In future work, we will exploit the self-attention mechanism in Transformers to perform the feature search and further boost performance.
\section*{Appendix}
\section{Hyper-parameter Settings}
\label{sec:apd_hyperparameters}
During pre-training, we follow most of the hyper-parameter settings of DINO \cite{dino_2021}.
The SGD optimizer is used and the learning rate is linearly warmed up to its base value during the first 10 epochs.
The base learning rate is set according to the linear scaling rule: $lr=0.1\times batchsize / 256$.
After the warm-up procedure, the learning rate is decayed with a cosine schedule \cite{cosine_decay_2016}.
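Under these assumptions, the learning-rate schedule can be sketched in plain Python (10 warm-up epochs as stated; the final decayed value of 0 is our assumption here):

```python
import math

def lr_at_epoch(epoch, total_epochs, batch_size, warmup_epochs=10):
    # base lr from the linear scaling rule: lr = 0.1 * batch_size / 256
    base = 0.1 * batch_size / 256
    if epoch < warmup_epochs:
        # linear warm-up to the base value during the first epochs
        return base * (epoch + 1) / warmup_epochs
    # cosine decay afterwards (assumed to reach 0 at the final epoch)
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base * (1.0 + math.cos(math.pi * t))
```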
The weight decay is set to $10^{-4}$.
For the temperatures, $\tau$ is set to 0.1, and $\tau'$ is linearly warmed up from 0.04 to 0.07 during the first 50 epochs.
Following DINO \cite{dino_2021}, the centering operation is applied to the output of the momentum encoder to avoid collapse.
For data augmentation, the global augmentations consist of random cropping (with a scale of 0.14-1), resizing to $224 \times 224$, random horizontal flipping, Gaussian blur, and color jittering.
The local augmentations consist of random cropping (with a scale of 0.05-0.14), resizing to $96 \times 96$, random horizontal flipping, Gaussian blur, and color jittering.
Two global views with $N=8$ local views are the default augmentation setting.
During the linear probing procedure, we evaluate the representation quality with a linear classifier.
The linear classifier is trained with the SGD optimizer and a batch size of 1024 for 100 epochs on ImageNet.
Weight decay is not used.
For data augmentation, only random resized crops and horizontal flips are applied.
\section{Projection Head}
\label{sec:apd_projectionhead}
There are two kinds of projection heads in SCFS\xspace.
The projection head for the contrast between data augmentations consists of a four-layer MLP with the same architecture as DINO \cite{dino_2021}.
As shown in \cref{fig:appendix_heads} (a), the hidden layers have 2048 dimensions and use Gaussian error linear unit (GELU) activations.
After the MLP, an $L_2$ normalization and a weight-normalized FC layer with $K$ ($K=65536$) dimensions are applied.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{figures/fig_appendix_heads.pdf}
\caption{Architecture of the projection heads in SCFS\xspace. (a) projection head for the contrast between data augmentations; (b) projection head for feature search.
}
\label{fig:appendix_heads}
\end{figure}
The projection head for feature search consists of three convolutional layers and two FC layers.
The detailed architecture is shown in \cref{fig:appendix_heads} (b).
To ease backpropagation of the feature search loss, residual connections are applied to the three convolutional layers.
After global average pooling, two FC layers project the features to the output dimension.
Note that the output dimension is set to $256$, which achieves good performance in all the experiments.
\section{Training Time}
We test the training times on a machine with 8 NVIDIA GeForce RTX 2080Ti GPUs.
As shown in \cref{tab_time}, compared to the baseline DINO \cite{dino_2021}, SCFS\xspace increases the training time by about 30\%.
\begin{table}[]
\centering
\caption{Training Time.}
\label{tab_time}
\begin{tabular}{c|c|c|c}
\toprule
Method & Batch Size & Epochs & Time \\ \midrule
DINO & 256 & 200 & 147h \\
SCFS\xspace & 256 & 200 & 192h \\
\bottomrule
\end{tabular}
\end{table}
\section{More Visualization Results}
We visualize the attention maps of SCFS\xspace between local images and the corresponding global image.
As shown in \cref{fig:appendix_attention}, SCFS\xspace can accurately focus on semantics-consistent regions between global and local images. Given different semantic concepts as input, the consistent semantic information can be located in the global feature.
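A minimal NumPy sketch of how such attention maps can be computed from encoder feature maps, following the multiplication described in the figure caption. Array shapes and names are our own illustration, not the released code:

```python
import numpy as np

def attention_map(local_feat, global_feat):
    """Correlate the globally average-pooled local feature (C, h, w) with the
    global feature map (C, H, W) to highlight semantics-consistent regions.
    The result is min-max normalized to [0, 1]."""
    v = local_feat.mean(axis=(1, 2))                 # global average pooling -> (C,)
    att = np.tensordot(v, global_feat, axes=(0, 0))  # (H, W) similarity map
    att -= att.min()
    return att / (att.max() + 1e-8)
```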
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/fig_appendix_attention1.pdf}
\caption{
Attention maps of SCFS\xspace between local images and corresponding global image.
In each example, (a) shows a global image, (b) shows six local augmentations of the global image, and (c) shows the attention maps that highlight the semantics-consistent regions between the local images in (b) and the global image in (a), which are obtained by multiplying the globally average pooled feature maps from the encoder (Res4) of the local images in (b) with the feature map (Res4) of the global image in (a).
}
\label{fig:appendix_attention}
\end{figure*}
Furthermore, we also visualize the attention maps between local images and another image that contains objects of the same category.
As shown in \cref{fig:appendix_attention_multi}, the semantics-consistent regions between different images are also activated. When background images are input, the global images are no longer incorrectly activated, which mitigates contrastive noise and demonstrates the effectiveness of SCFS\xspace.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.8\linewidth]{figures/fig_appendix_attention2.pdf}
\caption{
Attention maps of SCFS\xspace between local images and another image that contains objects with the same category.
In each example, (a) shows an image that contains objects with the same category in (b), (b) shows six local augmentations of a global image, and (c) shows the attention maps that highlight the semantics-consistent regions between the local images in (b) and the image in (a), which are obtained by multiplying the globally average pooled feature maps from the encoder (Res4) of the local images in (b) with the feature map (Res4) of the image in (a).
}
\label{fig:appendix_attention_multi}
\end{figure*}
\section{Introduction}
Spiking neural networks (SNNs) are known to be suitable for hardware implementation, especially on neuromorphic devices~\cite{zenke2021visualizing,bouvier2019spiking}. However, the temporal dynamics of SNNs at both the neuron and network levels, along with the non-differentiability of the spike function, have made it difficult to train efficient SNNs~\cite{tavanaei2019deep}. Different studies with different approaches have tried to adapt backpropagation-based supervised learning algorithms to SNNs~\cite{pfeiffer2018deep}.
The first approach is to train an artificial neural network (ANN) and then convert it to an equivalent SNN~\cite{rueckauer2018conversion,rathi2020enabling,sengupta2019going,10.3389/fnins.2020.00119,deng2021optimal}. Although converted SNNs can be applied to deep architectures and reach reasonable accuracies, they totally neglect the temporal nature of SNNs and are usually inefficient in terms of the number of spikes/time steps. The second approach is to directly apply backpropagation to SNNs. Its main challenge is to overcome the non-differentiability of the spike function required in the error-backpropagation algorithm. To solve this problem, some studies propose to use smoothed spike functions with true gradients~\cite{huh2018gradient}, and others use surrogate gradients for the non-differentiable discrete spike functions~\cite{neftci2019surrogate,bohte2011error,esser2016convolutional,shrestha2018slayer,bellec2018long,zimmer2019technical,pellegrini2021low,pellegrini2021fast}. The main issue with this approach is the use of backpropagation through time, which makes it too costly and exposes it to the vanishing/exploding gradient problem, especially for longer simulation times.
In the third approach, known as latency learning, the idea is to define the neuron activity as a function of its firing time~\cite{kheradpisheh2020temporal,zhang2020spike,sakemi2021supervised,zhang2020temporal,bohte2002error,zhou2019direct,wunderlich2021event}. In other words, neurons fire at most once, and stronger outputs correspond to shorter spike delays. To apply backpropagation to such SNNs, one should define the neuron firing time as a differentiable function of its membrane potential or of the firing times of its afferents~\cite{mostafa2017supervised,kheradpisheh2020temporal}. As an advantage, latency learning does not need backpropagation through time; however, it is more difficult to train outperforming SNNs with latency learning.
The fourth approach is tandem learning, which consists of an SNN and an ANN coupled layer-wise through weight sharing~\cite{wu2021tandem,wu2020progressive}. The auxiliary ANN is used to facilitate error backpropagation for the training of the SNN at the spike-train level, while the SNN is used to derive the exact spiking neural representation. Literally, in the forward pass, each ANN layer receives its input as the spike counts of the previous SNN layer, and consequently, in the backward pass, each ANN layer computes the gradient of its output with respect to the shared weights based on these input spike counts. Owing to this layer-wise coupling, learning can also be done in a progressive, layer-by-layer manner.
In this paper, we propose a new learning method to train an SNN via a proxy ANN. To do so, we make an ANN (consisting of ReLU neurons) structurally equivalent to the SNN (made of integrate-and-fire (IF) neurons). The two networks share their synaptic weights; however, IF neurons in the SNN work with temporally distributed spikes, while neurons in the ANN work with floating-point values and process their input instantly. Contrary to tandem learning, the forward passes of the two networks are totally independent, with no interference. By considering IF with rate coding as an approximation of ReLU, we replace the final output of the ANN with that of the SNN, and therefore update the shared weights by backpropagating the SNN error in the ANN model. One of the main challenges in conversion and tandem learning methods is approximating the ANN max-pooling layers in the corresponding SNN; hence, they exclude pooling layers and use convolutional layers with a stride of 2. Here, we use spike-pooling layers to mimic the behavior of the max-pooling layers in the corresponding ANN. We evaluated the proposed proxy learning on the Cifar10 and Fashion-MNIST datasets with different deep convolutional architectures and outperformed the currently existing conversion and tandem learning methods.
\section{Spiking neural networks}
\subsection{Spiking neuron}
Artificial neurons are simply implemented by a linear combination of inputs followed by a non-linear activation function. Different activation functions have different mathematical properties, and choosing the right function can largely impact the learning speed and efficiency of the whole network. The revolution of deep learning was accompanied by the use of the Rectified Linear Unit (ReLU) instead of prior activation functions such as the sigmoid and hyperbolic tangent. An artificial neuron $j$ with ReLU activation can be formulated as follows,
\begin{equation}\label{Eq00}
z_j=\sum_i w_{ji}x_i,
\end{equation}
\begin{equation}\label{Eq01}
y_j=\mathrm{ReLU}(z_j)=\max(0,z_j),
\end{equation}
where $x$ and $w$ are respectively the input and weight vectors, and $y_j$ is the neuron output.
Contrary to artificial neuron models, which work with synchronous instant inputs, spiking neurons have temporal dynamics through which the neuron's internal membrane potential changes over time as asynchronous incoming spikes are received. The complexity of this neural dynamics can largely impact the computational cost and learning efficiency of SNNs. Integrate-and-fire (IF) is the simplest spiking neuron model, in which the membrane potential only changes when an input spike is received from a presynaptic neuron, by an amount proportional to the synaptic weight.
The membrane potential $u_j$ of an IF neuron $j$ is updated at each time step $t$ by the input current $I_j(t)$ caused by the spike train $s_i(t)$ received from each presynaptic neuron $i$ through a synapse with the weight $w_{ji}$,
\begin{equation}\label{Eq1}
u_j(t)= u_j(t-1)+ RI_j(t),
\end{equation}
\begin{equation}\label{Eq2}
I_j(t)= \sum_i w_{ji}s_i(t),
\end{equation}
where $s_i(t)=1$ if presynaptic neuron $i$ has fired at time $t$, and it is zero otherwise. We set the membrane resistance to unity (i.e., $R=1$).
The IF neuron emits an output spike whenever its membrane potential crosses a certain threshold $\theta$,
\begin{equation}\label{Eq3}
s_j(t)=
\begin{cases}
1 & \quad \text{if } u_j(t) \geq \theta,\\
0 & \quad \mathrm{otherwise},
\end{cases}
\end{equation}
and then resets its membrane potential to zero (i.e., $u_j=0$) to be ready for the forthcoming input spikes.
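The integrate, fire, and reset dynamics above can be sketched for a whole layer in a few lines of NumPy. This is a minimal illustration with $R=1$; the function and variable names are ours:

```python
import numpy as np

def simulate_if_layer(weights, in_spikes, theta):
    """Minimal IF dynamics: integrate weighted input spikes, fire when the
    membrane potential crosses theta, then reset it to zero.
    weights: (n_out, n_in); in_spikes: (T, n_in) binary; returns (T, n_out) binary."""
    T = in_spikes.shape[0]
    u = np.zeros(weights.shape[0])
    out = np.zeros((T, weights.shape[0]))
    for t in range(T):
        u += weights @ in_spikes[t]  # I_j(t) = sum_i w_ji * s_i(t), with R = 1
        fired = u >= theta
        out[t] = fired
        u[fired] = 0.0               # reset after a spike
    return out
```

For example, a single neuron with unit weight and $\theta=2$, driven by one input spike per step, fires on every second time step.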
\subsection{Information encoding}\label{encoding_section}
ANNs work with decimal values; therefore, their inputs are represented by vectors, matrices, and tensors of floating-point numbers. In SNNs, however, neurons communicate through spikes; hence, information needs to be somehow encoded in the spike trains. In other words, the analog input signal should be converted into an equivalent spike train in the input layer of the network. Different coding schemes have been suggested for SNNs, ranging from heavy rate codes to economical temporal codes with single spikes.
Another, spike-free, input coding approach is to apply constant input currents (aka direct input coding) to the input neurons. This way, input neurons with higher input currents fire at higher rates than others. In other words, contrary to the neurons in the middle and output layers, the input current to the IF neurons in the first layer is proportional to the input signal. Consider an input image $x$; the constant input current to an input IF neuron $j$ is computed as
\begin{equation}\label{Eq4}
I_j(t)= \sum_i w_{ji}x_i,
\end{equation}
where $x_i$ is the $i^{th}$ input pixels inside the receptive field of neuron $j$ and $w_{ji}x_i$ is the constant input current from $x_i$ to $j$.
\subsection{Convolutional SNN}\label{convolution_section}
Convolutional ANNs (CANNs), largely inspired by the hierarchical object recognition process in the visual cortex, are comprised of a cascade of interleaved convolution and pooling layers to extract descriptive visual features, followed by a stack of fully connected layers to make the final decision. Neural processing in CANNs is performed in a layer-by-layer fashion: neurons in each layer receive their whole input from the previous layer at once and instantly send their output to the next layer. Neural processing in convolutional SNNs (CSNNs) is different, as neurons work in the temporal domain and information is encoded in asynchronous spike trains.
Each convolutional layer of a CSNN is made of IF neurons organized in numerous feature maps. At every time step, each neuron receives the total input current from the afferents inside its receptive window (Eq.~\ref{Eq4} for the input layer and Eq.~\ref{Eq2} for the other layers), updates its membrane potential (Eq.~\ref{Eq1}), and fires whenever it reaches the threshold (Eq.~\ref{Eq3}). Neurons in the same feature map share their synaptic weights, and hence look for the same feature but in different locations.
At each time step, neurons in the spike-pooling layers of the CSNN simply emit a spike if there is at least one spike in their input window. Hence, if different neurons inside the receptive window of a spike-pooling neuron fire at several different times, the spike-pooling neuron will also fire at each of those time steps. Spike-pooling neurons can simply be implemented by IF neurons with synaptic weights and a threshold of one.
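A minimal sketch of spike pooling over a $2\times2$ window at a single time step. This is our own illustration (it assumes even map sizes and non-overlapping windows):

```python
import numpy as np

def spike_pool_2x2(s):
    """Spike pooling at one time step: a pooling neuron fires if there is
    at least one spike in its 2x2 receptive window.
    s: (H, W) binary spike map at time t; returns an (H/2, W/2) binary map."""
    H, W = s.shape
    windows = s.reshape(H // 2, 2, W // 2, 2)
    return (windows.max(axis=(1, 3)) > 0).astype(s.dtype)
```

Applying this map independently at every time step reproduces the behavior described above: the pooling neuron emits a spike at each time step in which any afferent inside its window spikes.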
Fully-connected layers of the CSNN, including the readout layer, are all made of IF neurons. The last convolutional or spike-pooling layer is flattened, and its spikes are fed to the first fully-connected layer. The readout layer includes one neuron per class, and the neuron with the maximum number of spikes determines the winner class.
\section{IF approximating ReLU}\label{IFrelu}
Although the input encoding, internal mechanism, and output decoding of IF neurons differ from those of the ReLU neuron model, several studies, from different perspectives, have shown that an equivalent IF (or Leaky-IF) neuron can fairly approximate the activation of a ReLU neuron. For time-to-first-spike coding, the firing time of the IF neuron is inversely proportional to the output of the ReLU~\cite{kheradpisheh2020temporal,rueckauer2018conversion}. In short, the IF neuron remains silent if the ReLU output is zero, and it fires with shorter delays for larger ReLU outputs. For rate coding, higher ReLU outputs correspond to higher firing rates in the IF neuron~\cite{wu2021tandem,wu2020progressive,tavanaei2019bp}.
Here, we consider that neural information is encoded in the spike rate of neurons. Let $r_i$ be the input spike rate received from the $i^{th}$ afferent to neuron $j$,
\begin{equation}
r_i=\frac{\sum_t s_i(t)}{T},
\end{equation}
where $T$ is the maximum simulation time. For simplicity, we assume that these input spikes are uniformly distributed in time and the sudden current caused by each spike is evenly and continuously delivered during the inter-spike intervals, then according to Eq.~\ref{Eq2}, the input current to neuron $j$ is constant in time and is calculated as,
\begin{equation}
I_j=\sum_i w_{ji}r_i.
\end{equation}
If we apply this constant input current to IF neuron in Eq.~\ref{Eq1} with the thresholding function in Eq.~\ref{Eq3}, the firing rate of IF neuron $j$ can be calculated as,
\begin{equation}\label{Eq9}
r_j=ReLU(\frac{RI_j}{\theta})=\frac{R}{\theta}\:ReLU(\sum_i w_{ji}r_i).
\end{equation}
The use of ReLU is necessary here, since the IF neuron will not fire at all for negative input currents. As shown in Eq.~\ref{Eq9}, the firing rate of IF neuron $j$ can be expressed by applying ReLU to the weighted summation of the input firing rates of its afferents.
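This rate approximation is easy to verify numerically. The sketch below (illustrative code, not from the paper) drives a single IF neuron with a constant current and measures its empirical firing rate, which for $0 \le I \le \theta$ approaches $\mathrm{ReLU}(I)/\theta$ up to the quantization error of discrete time steps:

```python
def if_rate(I, theta=1.0, T=1000):
    """Empirical firing rate of an IF neuron driven by a constant current I
    for T time steps, with reset to zero after each spike."""
    u, n_spikes = 0.0, 0
    for _ in range(T):
        u += I
        if u >= theta:
            n_spikes += 1
            u = 0.0
    return n_spikes / T
```

For example, a constant current of $0.25$ with $\theta=1$ yields a rate of exactly $0.25$ (one spike every four steps), while a negative current never produces a spike.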
\section{Training via proxy}
The general idea of the proposed learning method is illustrated in Figure~\ref{fig1}. Here, we have a proxy CANN coupled with an equivalent CSNN having the same architecture with $L$ layers. The weights in all layers are shared between the two networks. However, the CANN model is made of artificial neurons with ReLU activation, while the CSNN model is made of IF neurons and works in the temporal domain. The input image is fed to both models, and their outputs are obtained at their softmax layers. Since the thresholding function of IF neurons is not differentiable, the CSNN cannot be directly trained. We discard the output of the CANN (the proxy network) and replace it with the CSNN output; then, we backpropagate the CSNN error in the CANN to update the weights.
\subsection{Forward pass}
During the forward pass, the input image is fed to both networks. The first layer of the CANN simply convolves its filter kernels over the input image and sends its output, $y^1$, to the next layer after applying the ReLU activation function. The following pooling and convolutional layers apply the max-pooling and convolution operations to their inputs from the previous layer and send their outputs to the next layer. The fully-connected layers on top receive inputs from all neurons in their previous layer through synaptic weights and send their output to the next layer. Note that we use the ReLU activation function in all convolutional and fully-connected layers. At the end, the CANN applies the softmax activation function to the output of the last layer, $y^L$, to obtain the final output, $O^A$,
\begin{equation}
O_k^A= \frac{e^{y_k^L}}{\sum_j e^{y_j^L}}.
\end{equation}
The first layer of the CSNN model obeys the input encoding scheme explained in Section~\ref{encoding_section}. The CSNN processes each input image over $T$ simulation time steps. At every time step, the constant input current to the IF neurons in the first layer of the CSNN model is computed by convolving the input image with the corresponding filter kernel (see Eq.~\ref{Eq4}). These IF neurons emit spikes whenever their membrane potential crosses the threshold and send the spike train $s^1$ to the next layer. As explained in Section~\ref{convolution_section}, at any time step, spike-pooling neurons fire a spike if there is at least one spike in their receptive window. Spiking convolutional IF neurons integrate incoming spikes inside their receptive window through weighted synapses and fire when the threshold is reached. The same holds for IF neurons in fully-connected layers, except that they do not have a restricted receptive window and receive spikes from all neurons in the previous layer. To compute the output of the CSNN, we count the number of spikes of each neuron in the output layer, $c_k^L$, and feed these counts to a softmax layer to obtain the final output, $O^S$,
\begin{equation}
c_k^L= \sum_t s_k^L(t),
\end{equation}
\begin{equation}
O_k^S= \frac{e^{c_k^L}}{\sum_j e^{c_j^L}}.
\end{equation}
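The two equations above amount to a spike count followed by a softmax; a minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def csnn_output(spikes):
    """spikes: (T, K) binary array of output-layer spikes over T time steps.
    Returns the softmax over the per-neuron spike counts."""
    counts = spikes.sum(axis=0)        # c_k^L = sum_t s_k^L(t)
    e = np.exp(counts - counts.max())  # shift for numerical stability
    return e / e.sum()
```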
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=.4\textwidth]{Fig1.pdf}
\caption{The proposed proxy learning, comprised of a CSNN coupled with a CANN through shared weights. The $s^l$ and $y^l$ are respectively the spike train and the output of the $l^{th}$ CSNN and CANN layers during the forward pass. The final output of the CANN is replaced with the output of the CSNN model. During the backward pass, the error of the CSNN is backpropagated through the CANN network to update the shared weights.}
\label{fig1}
\end{center}
\end{figure}
\subsection{Backward pass}
As mentioned earlier, to update the weights of the CSNN model, we use the corresponding shared weights in the CANN. To do so, the softmax output of the CANN model is replaced by the softmax output of the CSNN model. By comparing it to the target values in the loss function of the CANN model, we are literally computing the error and the loss of the CSNN model. This loss is then backpropagated through the CANN model to update the shared weights. As explained in Section~\ref{IFrelu}, we assume that the input and output of each ReLU neuron in the CANN model are approximated by the input and output firing rates of the corresponding IF neuron in the CSNN model.
Let us assume that $Y_k$ is the target of the $k^{th}$ output in the CANN model. The cross-entropy loss function for the CANN model is defined as
\begin{equation}
L=-\sum_k Y_k \ln(O^A_k) \simeq -\sum_k Y_k \ln(O^S_k).
\end{equation}
To update an arbitrary shared weight $w^l_{ji}$ in the $l^{th}$ layer through the CANN model, we have
\begin{equation}
w^l_{ji} \leftarrow w^l_{ji} -\eta\frac{\partial L}{\partial w^l_{ji}},
\end{equation}
where $\eta$ is the learning rate. Instead of using the true gradient of the CANN model which is computed as
\begin{equation}
\frac{\partial L}{\partial w^l_{ji}}=\sum_k \frac{\partial L}{\partial O^A_k}\sum_d \frac{\partial O^A_k}{\partial y_d^L} \frac{\partial y_d^L}{\partial w^l_{ji}},
\end{equation}
we use the following approximated gradient which is obtained by replacing the output of the CSNN into the output of the CANN model,
\begin{equation}
\frac{\partial L}{\partial w^l_{ji}} \simeq \sum_k \frac{\partial L}{\partial O^S_k}\sum_d \frac{\partial O^S_k}{\partial y_d^L} \frac{\partial y_d^L}{\partial w^l_{ji}}.
\end{equation}
Indeed, we are backpropagating the error of the CSNN model in the CANN model to update the shared weights.
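At the output layer, this substitution takes a familiar closed form: for a softmax combined with cross-entropy, the gradient arriving at the last layer is $O^S - Y$, with the CSNN's softmax-over-spike-counts standing in for the CANN's own output. A minimal sketch (illustrative, not the full training loop):

```python
import numpy as np

def proxy_output_grad(spike_counts, target_onehot):
    """Gradient of the cross-entropy loss at the CANN's last layer when the
    CANN softmax output is replaced by the CSNN softmax over spike counts
    (the usual softmax/cross-entropy identity: dL/dy = O^S - Y)."""
    e = np.exp(spike_counts - spike_counts.max())
    o_s = e / e.sum()
    return o_s - target_onehot
```

From this output gradient, the remaining layers backpropagate exactly as in an ordinary CANN.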
\begin{table*}[!htb]
\begin{center}
\caption{Network architecture and parameter setting for Fashion-MNIST and Cifar10 datasets.}\label{tab1}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllll}
Dataset & Architecture & $\theta$ & $T$& $\eta$& $\beta_1$ &$\beta_2$& $\epsilon$ &$\lambda$ \\
\hline
Fashion & 128C3-128C3-P2-128C3-P2-1024F-256F-10F & 2 & 50 &$10^{-3}$ & 0.8&0.99&$10^{-7}$& $10^{-5}$\\
Cifar10& 256C3-512C3-P2-512C3-512C3-512C3-P2-256C3-1024F-512F-256F-10F & 3 & 60 & $10^{-3}$ & 0.8&0.99&$10^{-7}$& $10^{-5}$
\end{tabular}
}
\end{center}
\end{table*}
\begin{table*}[!htb]
\footnotesize
\begin{center}
\caption{Classification accuracies of different CSNNs with different learning rules on Fashion-MNIST. $T$ is the simulation time. The STDBP and STiDi-BP terms stand for spike-time-dependent error backpropagation and spike-time-displacement-based error backpropagation, respectively.}\label{tab2}
\begin{tabular}{lllcl}
Model & Acc (\%) & Network& $T$ & Learning \\
\hline
Cheng et al. (2020)~\cite{cheng2020lisnn}& 92.07& 4-layer CSNN&20& Surrogate Gradient Learning\\
Fang et al. (2021)~\cite{fang2020incorporating} & 94.36 & 6-layer CSNN &8&Surrogate Gradient Learning\\
Yu et al. (2021)~\cite{yu2021constructing}& 92.11 & 4-layer CSNN &220&ANN-SNN Conversion\\
W. Zhang et al. (2020)~\cite{zhang2020temporal}& 92.83 & 4-layer CSNN &5&Spike Sequence Learning\\
M. Zhang et al. (2020)~\cite{zhang2020rectified} & 90.1& 5-layer CSNN&-& STDBP\\
Mirsadeghi et al. (2021)~\cite{mirsadeghi2021spike}&92.8&4-layer CSNN&100& STiDi-BP\\
\hline
This work (CANN) & 94.60 & 6-layer CANN &- & Backpropagation\\
This work (CSNN) & 84.63 & 6-layer CSNN & 50 & ANN-to-SNN Conversion\\
This work (CSNN) & 93.12 & 6-layer CSNN & 100 & ANN-to-SNN Conversion\\
This work (CSNN) & 94.50 & 6-layer CSNN & 200 & ANN-to-SNN Conversion\\
This work (CSNN) & 94.41 & 6-layer CSNN & 50 & Surrogate Gradient Learning\\
\hline
\textbf{This work (CSNN)} & \textbf{94.56} & \textbf{6-layer CSNN} &\textbf{50}& \textbf{Proxy Learning}
\end{tabular}
\end{center}
\end{table*}
\section{Results}
To evaluate the proposed proxy learning method, we performed experiments on two benchmark datasets, Fashion-MNIST and Cifar10. For both datasets, we used deep convolutional networks with the architectural details and parameter settings provided in Table~\ref{tab1}. We use the Adam optimizer with parameters $\beta_1$, $\beta_2$, and $\epsilon$, and L2-norm regularization with parameter $\lambda$. Our proxy learning outperforms conversion and tandem learning methods by reaching 94.56\% and 93.11\% on the Fashion-MNIST and Cifar10 datasets, respectively. In the following subsections, results on both datasets are provided in more detail.
\subsection{Fashion-MNIST}
Fashion-MNIST~\cite{xiao2017fashion} is a fashion product image dataset with 10 classes (T-shirt, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, and Ankle boot). Images are gathered from the thumbnails of the clothing products on an online shopping website. The Fashion-MNIST dataset contains 60,000 images of size $28\times28$ pixels as the train set. The test set contains 10,000 images (1,000 images per class). As presented in Table~\ref{tab1}, the proposed network is comprised of three convolutional, two pooling, and three fully connected layers.
Table~\ref{tab2} provides the categorization accuracy of the proposed network with proxy learning along with the accuracies of other recent spiking neural networks on the Fashion-MNIST dataset. Using a 6-layer CSNN architecture, our proxy learning method could reach 94.56\% accuracy with $T=50$ and outperform other CSNNs with different learning methods. Interestingly, our network with proxy learning could surpass other networks trained with surrogate gradient learning (SGL). For a fair comparison, we trained the same CSNN as ours using the surrogate gradient learning method (with the arc-tangent surrogate function~\cite{fang2020incorporating}), which reached a best accuracy of 94.41\% with $T=50$. We also trained a CANN with the same architecture as our CSNN using backpropagation, which reached a best accuracy of 94.60\% (only 0.04\% better than proxy learning). We then converted this CANN to an SNN with IF neurons (using the conversion method in~\cite{ding2021optimal}) and evaluated it on Fashion-MNIST with different simulation times. As mentioned before, conversion methods require long simulation times to reach acceptable accuracies. With 50 time steps, the converted CSNN could only reach 84.63\% accuracy, and it required 200 time steps to reach 94.50\%.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{myfig.pdf}
\caption{The categorization accuracy and MSSE of the proposed CSNN with proxy learning and a simulation time of $T=50$ over the test set of Fashion-MNIST.}
\label{fig:loss}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{chart.pdf}
\caption{Classification accuracy of the proposed CSNN trained via proxy learning on Fashion-MNIST with the maximum simulation time varying from $T=10$ to $T=60$. The accuracy increases with $T$ and reaches 94.56\% at $T=50$.}
\label{fig:my_label1}
\end{figure}
The categorization accuracy and the mean sum of squared errors (MSSE) of the proposed CSNN with a simulation time of $T=50$ over the test set of the Fashion-MNIST dataset are provided in Figure~\ref{fig:loss}. Only 30 epochs are enough to reach an accuracy above 94.0\% and an MSSE lower than 1.5.
As explained in Section~\ref{IFrelu}, we assume that the IF neurons in the CSNN approximate the ReLU neurons in the proxy CANN. It is expected that this approximation gets more accurate as the maximum simulation time, $T$, increases. To verify this expectation, we evaluated the recognition accuracy of the CSNN model (see Figure~\ref{fig:my_label1}) trained with proxy learning on Fashion-MNIST with the simulation time varying from $T=10$ to $T=60$. As seen in Figure~\ref{fig:my_label1}, the recognition accuracy increases as $T$ is increased. The model reaches a reasonable accuracy of 94.26\% from $T=15$ and then ascends to 94.56\% at $T=50$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{FeatureMaps.png}
\caption{The output feature maps of five randomly selected filters in different convolutional layers of both CSNN and CANN networks over a randomly selected image from Fashion-MNIST. The feature maps of the CSNN network are obtained by computing the spike counts of IF neurons in each map. The firing rates of IF neurons can well approximate the activations of corresponding ReLU neurons.}
\label{fig:my_label2}
\end{figure}
\begin{table*}[!htb]
\footnotesize
\begin{center}
\caption{Classification accuracies of different CSNNs with different learning rules on Cifar10. $T$ is the simulation time. The STDB and SGL terms stand for spike-time-dependent backpropagation and surrogate gradient learning, respectively.}\label{tab3}
\begin{tabular}{lllcl}
Model & Acc (\%) & Network & $T$&Learning \\
\hline
Y. Wu et al. (2019)~\cite{wu2019direct}& 90.53 &8-layer CSNN &12&Surrogate Gradient Learning\\
Syed et al. (2021) ~\cite{syed2021exploring}&91.58 &VGG-13 (CSNN)&15& Surrogate Gradient Learning\\
Fang et al. (2021)~\cite{fang2020incorporating} & 93.50 & 9-layer CSNN &8&Surrogate Gradient Learning \\
Rueckauer et al. (2017)~\cite{rueckauer2017conversion} & 90.85 &VGG-16 (CSNN) &400& ANN-to-SNN conversion\\
Rathi et al. (2020)~\cite{rathi2020enabling}&92.22 & Resnet-20 (SNN) &250& ANN-to-SNN conversion\\
Rathi et al. (2020)~\cite{rathi2020enabling}&91.13 & VGG-16 (CSNN) &100& ANN-to-SNN conversion + STDB\\
Sengupta et al. (2019)~\cite{sengupta2019going}& 91.46 & VGG-16 (CSNN)&2500& ANN-to-SNN conversion \\
Lee et al. (2019)~\cite{lee2020enabling}& 91.59 & ResNet-11 (CSNN)&3000& ANN-to-SNN conversion\\
Rathi et al. (2020)~\cite{rathi2020diet}&92.70& VGG-16 (CSNN)&5& ANN-to-SNN conversion + SGL\\
J. Wu et al. (2021)~\cite{wu2021tandem}& 90.98& CifarNet (CSNN)&8&Tandem Learning\\
J. Wu et al. (2020)~\cite{wu2020progressive}& 91.24& VGG-11 (CSNN)&16 &Progressive Tandem Learning\\
\hline
This work (CANN) & 93.20 & 10-layer CANN &-& Backpropagation\\
This work (CSNN) & 89.14& 10-layer CSNN &60 & ANN-to-SNN conversion\\
This work (CSNN) & 92.91& 10-layer CSNN & 120& ANN-to-SNN conversion\\
This work (CSNN) & 93.16& 10-layer CSNN & 240& ANN-to-SNN conversion\\
This work (CSNN) & 92.85 & 10-layer CSNN &60& Surrogate Gradient Learning\\
\hline
\textbf{This work (CSNN)} & \textbf{93.11} & \textbf{10-layer CSNN} &\textbf{60} & \textbf{Proxy Learning}
\end{tabular}
\end{center}
\end{table*}
In Figure~\ref{fig:my_label2}, we plot the output feature maps of five randomly selected convolutional filters in different layers of the CANN model and their counterparts in the CSNN model. Subfigures~\ref{fig:my_label2}A, \ref{fig:my_label2}B, and \ref{fig:my_label2}C show the selected feature maps in the first, second, and third convolutional layers, respectively. In each subfigure, the top (bottom) row belongs to the feature maps of the CANN (CSNN) model. The feature maps of the CSNN model are obtained by computing the spike count (i.e., firing rate) of each IF neuron. As seen, the activations of the ReLU neurons in the CANN layers are well approximated by the firing rates of the corresponding IF neurons in the CSNN model.
\subsection{Cifar10}
Cifar10 is a widely-used benchmark dataset in deep learning and is suitable for evaluating spiking neural networks on natural image classification tasks. Cifar10 consists of color images from 10 different classes, with a standard split of 50,000 training and 10,000 testing images. To solve the Cifar10 classification task, we developed a 10-layer CSNN trained via proxy. Architectural details of the proposed network are provided in Table~\ref{tab1}.
The classification accuracy of the proposed network, along with those of other CSNNs trained by different learning strategies including surrogate gradient learning, ANN-to-SNN conversion, and tandem learning, is presented in Table~\ref{tab3}. Our proposed network reaches 93.11\% categorization accuracy on Cifar10 with $T=60$ and outperforms every other CSNN listed in Table~\ref{tab3}, except that of Fang et al. (2021)~\cite{fang2020incorporating}, which uses surrogate gradients in CSNNs with Leaky-IF neurons with trainable membrane time constants (i.e., each spiking neuron layer has an independent and trainable membrane time constant). Although they reached 0.04\% higher accuracy than ours, implementing large CSNNs with Leaky-IF neurons having different time constants is highly expensive in terms of memory and computation, independent of the implementation platform. In contrast, we use simple IF neurons with no leak and no extra parameters, which are easy to implement and have low memory and computational costs.
Interestingly, our proposed CSNN with proxy learning significantly outperforms CSNNs trained with the tandem learning rule~\cite{wu2021tandem,wu2020progressive}. This might be due to the inconsistency between the forward and backward passes of tandem learning. In our proxy learning method, only the final output of the CANN is replaced with that of the CSNN, and hence, the forward passes of the two networks are totally independent. However, in the forward pass of tandem learning, the CANN layers are disconnected from each other and receive the spike counts of the previous CSNN layer as their input, while in the backward pass, the CSNN error backpropagates through the CANN layers based on their true outputs, without the intervention of the CSNN layers.
Our proposed CSNN has also outperformed CSNNs converted from CANNs with even deeper architectures. In proxy learning, the final error is computed based on the spike counts of the output layer of the CSNN, while in conversion methods, the training phase is totally independent of the CSNN. This shows that being aware of the quantized CSNN activations (the spike counts) in our proxy learning, which are ignored by conversion methods, can lead to CSNNs with higher classification accuracy.
Also, we developed a CANN with ReLU neurons and the same architecture as our CSNN and trained it using backpropagation with the ADAM optimizer (the learning parameters and conditions were the same as for the CSNN model in Table~\ref{tab1}). The CANN reached a best accuracy of 93.20\%, which is only 0.09\% better than our CSNN with proxy learning. We then converted this CANN to an SNN with IF neurons using the conversion method in~\cite{ding2021optimal}. With 60 time steps, at which our proposed CSNN reaches its best accuracy, the converted CSNN could only reach 89.14\% accuracy, and it required 240 time steps to reach 93.16\% accuracy. Again, conversion methods require long simulation times to reach reasonable accuracies.
In the last comparison, we trained a CSNN with the same architecture and learning parameters as ours using surrogate gradient learning. As presented in Table~\ref{tab3}, it reached a classification accuracy of 92.85\% at its best. Note that, contrary to the other CSNNs in Table~\ref{tab3} trained with surrogate gradient learning, ours consists of pure IF neurons without leakage.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{chart2.pdf}
\caption{Classification accuracy of the proposed CSNN trained via proxy learning on Cifar10 with the maximum simulation time varying from $T=10$ to $T=70$. The accuracy increases with $T$ and reaches 93.11\% at $T=60$.}
\label{fig:chart2}
\end{figure}
In another experiment, we varied the maximum simulation time from $T=10$ to $T=70$ and evaluated the classification accuracy of the proposed CSNN. As depicted in Figure~\ref{fig:chart2}, there is a trade-off between the simulation time and the classification accuracy. The classification accuracy starts from $88.35\%$ with $T=10$, steadily increases with $T$, and culminates at $93.11\%$ with $T=60$.
\section{Discussion}
In this paper, we proposed a proxy learning method for deep convolutional spiking neural networks. The main idea is that IF neurons with rate coding can approximate the activation of ReLU neurons. To do so, we first build a proxy CANN with ReLU neurons and the same architecture as the CSNN with IF neurons. Then, we feed the input image to both networks, and the forward pass is done in each network independently. The decision in the CSNN is made by applying a softmax to the spike counts of the output neurons. Finally, the error of the CSNN model is backpropagated through the CANN model by replacing its output with that of the CSNN.
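The forward/backward decoupling described above can be illustrated in a few lines. The following is a minimal, hypothetical sketch (not the paper's implementation): a single linear layer, where the CANN uses ReLU and the CSNN uses rate-coded IF neurons; the error is computed from the CSNN firing rates but backpropagated using the CANN's ReLU derivative. The function names and the squared-error loss are illustrative simplifications.

```python
import numpy as np

def ann_forward(x, W):
    # Proxy CANN forward pass: one linear layer with ReLU activation.
    return np.maximum(0.0, x @ W)

def snn_forward(x, W, T=60, thresh=1.0):
    # CSNN forward pass: IF neurons driven by a constant input current
    # for T time steps; the firing rate approximates ReLU(x @ W).
    v = np.zeros(W.shape[1])
    spikes = np.zeros(W.shape[1])
    for _ in range(T):
        v = v + x @ W              # integrate input current (no leak)
        fired = v >= thresh
        spikes += fired
        v[fired] -= thresh         # reset by subtraction
    return spikes / T

def proxy_grad(x, W, target):
    # Proxy learning step: the output error comes from the CSNN firing
    # rates, but it is backpropagated through the CANN (ReLU derivative).
    a = ann_forward(x, W)
    r = snn_forward(x, W)
    err = r - target               # squared-error gradient at the output
    return np.outer(x, err * (a > 0))
```

Note that the two forward passes are fully independent; only the final output of the CANN is replaced by the CSNN's spike counts before backpropagation.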
Our proxy learning method reached 94.56\% on Fashion-MNIST and 93.11\% on Cifar10 and outperformed other ANN-to-SNN conversion and tandem learning methods (see Tables~\ref{tab2} and~\ref{tab3}). The main issue with conversion methods is that they neglect the temporal nature of spiking neural networks~\cite{kheradpisheh2020temporal}. Another limitation of conversion methods is the trade-off between inference speed and classification accuracy~\cite{wu2019direct}: to reach optimal classification accuracy, they usually require at least several hundred inference time steps. Tandem learning methods~\cite{wu2021tandem,wu2020progressive} resolve these issues of conversion methods. Like our proxy learning, they assume that rate-coded IF neurons approximate artificial neurons with ReLU activation. Hence, they connect the CANN and CSNN layers in tandem and feed the spike counts of the CSNN layers (not the output of the previous CANN layer) as the input of the next CANN layer. This breaking of the forward pass in the CANN, with approximated inputs from the CSNN layers, can attenuate the cohesion of its backward pass. This problem is solved in proxy learning by separating the forward passes of the two networks.
Although surrogate gradient learning is one of the best direct learning algorithms for spiking neural networks~\cite{neftci2019surrogate}, it suffers from the challenges of backpropagation through time, especially for longer simulation times, including vanishing/exploding gradients and high computational cost and memory demand. Our proxy learning method does not face such issues, as backpropagation is done through the proxy CANN, which is a time-free network.
An important aspect of the proposed CSNN is the use of an IF neuron model with no leak. IF neurons are pure integrators and have the simplest neuronal dynamics of any spiking neuron model. For instance, in Leaky-IF neurons, in the absence of input spikes, the membrane potential exponentially decays with a time constant at every time step, while in the pure IF neuron model, the membrane potential is updated simply by increasing or decreasing it at the arrival of input spikes, according to the input synaptic weights. Hence, IF neurons are much simpler to implement on different hardware and neuromorphic platforms~\cite{oh2020hardware,liang20211}, especially in large and deep networks.
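The difference in neuronal dynamics can be sketched by contrasting a single update step of each model. This is an illustrative sketch only; the exponential decay form and the time constant for the Leaky-IF neuron are assumptions, not parameters taken from a specific hardware model.

```python
import math

def if_step(v, i_in, thresh=1.0):
    # Pure IF neuron: the potential changes only at the arrival of
    # input; with zero input it stays exactly where it is (no leak).
    v = v + i_in
    spike = v >= thresh
    if spike:
        v -= thresh   # reset by subtraction
    return v, spike

def lif_step(v, i_in, thresh=1.0, tau=2.0):
    # Leaky-IF neuron: the potential decays exponentially toward zero
    # at every time step, even when no input spikes arrive.
    v = v * math.exp(-1.0 / tau) + i_in
    spike = v >= thresh
    if spike:
        v -= thresh
    return v, spike
```

The leak term requires an extra multiply (and a stored time constant) per neuron per time step, which is the cost the pure IF model avoids.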
The proposed proxy learning is based on the approximation of ReLU with rate-coded IF neurons. Rate coding is the most widely used coding scheme in SNNs; however, other coding schemes such as temporal coding and rank-order coding are more efficient in terms of the number of spikes~\cite{mostafa2017supervised,zhang2020rectified,kheradpisheh2018stdp,mozafari2018first,mirsadeghi2021stidi,kheradpisheh2020bs4nn}. Extending proxy learning to CSNNs with temporal coding in future studies could lead to accurate and low-cost CSNNs.
\section*{Acknowledgments}
The authors would like to thank Mr. Wei Fang who helped us in the implementation of our idea with his Spikingjelly package designed for developing deep spiking neural networks available at \url{https://github.com/fangwei123456/spikingjelly}.
\section{Introduction}
The breakdown of Dennard scaling \cite{esmaeilzadeh2011dark}, and the seemingly inescapable end of Moore's law~\cite{simonite2016moore}, present new challenges for computer architects striving to achieve increased performance in modern computing systems.
Heterogeneous Computing has emerged to address these issues, but the complexity of heterogeneous systems, consisting of software (SW) processors and hardware (HW) accelerators, has also increased dramatically. Hardware designers tasked with accelerating a certain application domain must have a deep knowledge and understanding of both the software applications and the underlying platform characteristics. Additionally, a great deal of manual effort is required to identify and extract the information necessary to explore the various possible optimizations for every design.\par
These optimizations include exploiting application level parallelism, in the form of Instruction Level (ILP), Loop Level (LLP), Task Level (TLP) and Pipeline Parallelism (PP).
The use of such parallelism has been limited in tools for designing hardware accelerators, in two ways. First, in the few tools \cite{margerm2018tapas, schardl2017tapir} that accommodate application level parallelism, it is limited to TLP and LLP. Second, these approaches do not usually perform Design Space Exploration (DSE) in early design stages (before implementing a particular hardware design, e.g., using an HLS tool) to explore a broad range of possible designs and combinations of different types of parallelism.
\par
A hardware design
DSE flow for a System on Chip (SoC) with hardware accelerators, that automatically extracts and uses parallelism information, requires three main components: a) A program representation that captures and exposes various levels of parallelism in an application, and also potential data movement requiring communication or memory system demands. b) An analysis tool that explores various HW/SW partitioning options, while taking into account not only the execution time and area, but also SoC interconnect bandwidth and communication latency. c) An integration of (a) and (b), such that (a) can provide the information that (b) requires, and (b) can use this information to build efficient performance and cost models to apply in the DSE process.
\par
Spatial \cite{koeplinger2018spatial} is a tool that performs early DSE focusing on parallelism, but it has a number of limitations. First, Spatial aims to support hardware designers by providing a hardware-centric design language, and does not support applications written in high level languages (e.g., C,C++).
Second, it is restricted to modeling performance on FPGAs and CGRAs, and cannot be used to effectively perform DSE for SoCs. In particular, communication latency and memory bandwidth are not taken into account during DSE.
Finally, the parts of the computation to be accelerated need to be specified by the user and no automatic exploration of acceleration candidates takes place, which is the primary goal of our work.
To address these issues we present \texttt{Trireme},\footnote{Trireme was an ancient Greek/Roman boat having three main rows of oars (similar to the three types of parallelism that we explore) and requiring parallel work to flow. A lightweight, quick boat, taking advantage of parallelism is ideal for explorations, hopefully also for early Design Space Exploration.
} an automated tool-chain that integrates the AccelSeeker \cite{ZacharopoulosNov19} and Heterogeneous Parallel Virtual Machine (HPVM) \cite{kotsifakou2018hpvm} tools. AccelSeeker offers automatic identification and selection of HW accelerators based on models of performance, and HPVM is a parallel program representation for heterogeneous systems that exposes all the major forms of parallelism (loop level, task level\ and pipeline parallelism)
relevant to accelerator design.
We extend \texttt{Trireme}{} with novel models of parallel performance evaluation (described below) to enable early DSE that accounts for various forms of parallelism.
Moreover, \texttt{Trireme}{} is able to account for SoC interconnect bandwidth and latency, which enables strong synergy with the explicit dataflow information captured in the HPVM parallel representation (a hierarchical dataflow graph).
The integration of the two thus offers the basis for an extensive exploration of multiple levels of parallelism, provides an early estimation of performance, and outputs HW/SW designs that maximize speedup within specific area budgets.
\par
For each type of potential parallelism that
\texttt{Trireme}\ extracts (LLP, TLP, PP, and combinations of them), we introduce novel models of performance (in terms of latency) and area demands (in terms of hardware resources). With the aid of these models, we carry out comprehensive early DSE that selects combinations of parallel accelerator designs with increasing area budgets.
Additionally, we study a variety of architectural configurations of target
SoCs to distinguish the impact of every type of parallelism in accordance with the characteristics and complexity of novel benchmarks from the Extended Reality (XR) domain.\footnote{Extended Reality combines Augmented, Virtual and Mixed Reality.}
\texttt{Trireme}\ achieves speedups of up to 20$\times$ for complex XR application components (e.g., audio decoder) and up to 37$\times$ for single-kernel applications (e.g., gemm-blocked).
Our contributions are as follows:
\vspace{-0.1cm}
\begin{itemize}
\item We present \texttt{Trireme}, a fully-automated tool integrating HPVM \cite{kotsifakou2018hpvm} and AccelSeeker \cite{ZacharopoulosNov19}, that offers
identification, estimation of performance and selection of hardware accelerators that exploit task level, loop level, and pipeline parallelism{} (Section~\ref{sec:trireme}).
\item We introduce novel models for estimating performance and resource demands (area) for task level, loop level, and pipeline parallelism{} (Section~\ref{sec:models}).
\item
We demonstrate \texttt{Trireme}'s HW/SW partitioning choices while sweeping area budgets and varying the configuration of memory latency and accelerator invocation overhead, thereby covering a wide range of possible
designer scenarios (Sections~\ref{sec:setup} and \ref{sec:results}).
\item We evaluate our tool
using a broad spectrum of applications, spanning from smaller, single-kernel applications, to complex and demanding state-of-the-art application components from the XR domain (derived from a recently released XR testbed \cite{huzaifa2020exploring}) (Section~\ref{sec:results}).
\end{itemize}
\section{Background}\label{sec:background}
\texttt{Trireme}\ performs extensive early DSE of potential parallelism possibilities for HW acceleration, in comparison to tools such as TAPAS \cite{margerm2018tapas} and Peruse \cite{peruse} that offer limited or late DSE. Furthermore, our tool explores a number of different platform configurations, with respect to memory latency and overhead due to the invocation of the accelerators, that can drastically affect the performance of a HW/SW design.\par
Achieving such a thorough and early DSE, while investigating the different parallelism opportunities, is a \emph{challenging endeavor} because it requires both: a) automatic extraction of any parallelism-related information from the applications to be accelerated and b) automatic identification and early evaluation of potential accelerators. HPVM and AccelSeeker, both developed within the LLVM \cite{LattnerMar04} infrastructure, support the former and the latter requirements respectively, and hence, serve as the basis of the \texttt{Trireme}\ tool-chain.
AccelSeeker\ is a tool that performs automatic identification and selection of hardware accelerators, and HPVM is a parallel program representation for heterogeneous systems. \texttt{Trireme}\ uses components of AccelSeeker\ to perform an initial estimation of performance and an estimation of area requirements. We use HPVM to analyze the applications and collect required information regarding the three types of parallelism (TLP, LLP, PP) that we can exploit. In the following sections, we provide detailed backgrounds of both tools.\par
\subsection{AccelSeeker}\label{sec:background:accelseeker}
AccelSeeker is an LLVM-based tool, comprised of analysis passes, that analyzes applications represented by the LLVM Intermediate Representation (IR). It can be used in the early stages of the HW/SW partitioning process and can reveal the most promising parts of an application for HW acceleration.
The tool has three main phases: a) Candidate Identification for HW acceleration, b) Performance and Area Estimation and c) Selection of Candidates for acceleration that maximize speedup under a user-defined area constraint.\par
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{Figs/dfg_enum_edge_detection.pdf}
\caption{Data Flow Graph for "edge detection".}
\label{fig:dfg-edge}
\end{figure}
\emph{Candidate Identification.} The granularity of the candidates for acceleration is defined as that of a subgraph of the call graph of an application that satisfies two properties: it has a root, and it has no outgoing edges. Effectively, this means a candidate is a function/task whose calls to other functions (if any) are part of its computation as a potential HW accelerator. As an example, in Figure \ref{fig:dfg-edge}, each of edge detection's Data Flow Graph (DFG) nodes, which correspond to functions in the call graph, can be a candidate for acceleration.
\par
\emph{Performance and Area Estimation.} AccelSeeker\ uses models that estimate speedup (merit) and area usage (cost). Through LLVM static analysis and dynamic profiling \cite{ZacharopoulosMar17}, these models assess software and hardware latency, area, and I/O data transfer requirements for every identified candidate. A default Zynq Programmable System-on-Chip target platform is assumed for the architectural characterization, though it can be configured to adapt to different platforms. The HW accelerators are designed as loosely coupled --- their implementations exploiting ILP within the boundaries of a Basic Block (BB). This type of accelerator, exploiting parallelism within the BB granularity, will be referred to as Basic Block Level Parallelism (BBLP) accelerators in Section \ref{sec:results}.
\emph{Selection of Candidates.} Having assigned a specific speedup estimation (merit) and HW resource requirement (cost) to every identified candidate, the selection phase takes place. For a given area budget (which can be varied from small to large) a subset of the initial candidate list is selected that maximizes speedup. The tool's output is the design of a heterogeneous system that distinguishes the part of the computation that stays in software from the part that is accelerated by hardware.
\subsection{HPVM}\label{sec:background:hpvm}
Heterogeneous Parallel Virtual Machine (HPVM) \cite{kotsifakou2018hpvm} is a parallel program representation for heterogeneous systems, designed to be a virtual ISA, compiler Intermediate Representation (IR) and run-time representation.
Designed as an extension of LLVM IR\cite{LattnerMar04}, HPVM exploits all the optimization and code generation potential of LLVM, both for scalar and vector code, while adding support for parallel computation and heterogeneous systems.
This is achieved by representing programs using a hierarchical Data Flow Graph (DFG).
An HPVM program consists of host code together with one or more DFGs.
All code suitable for acceleration is contained in the DFG nodes.
A DFG node can either contain a part of the computation (called a leaf node) or an entire \emph{nested} data flow graph. This hierarchical representation enables multiple levels of nested parallelism.
Every DFG node has a \textit{node function} associated with it, and node functions for leaf nodes contain ordinary scalar and vector LLVM IR.
Every DFG edge represents an explicit, logical
data transfer between two nodes. Each static node in the graph can have multiple, independent dynamic instances specified as a replication factor (similar to the grid of threads for a CUDA or OpenCL kernel). Put together, this structure allows HPVM to capture loop level\ data parallelism (via the dynamic instances of a node), fine-grain data parallelism (via LLVM vector instructions within a leaf node), task parallelism between concurrent nodes (via pairs of subgraphs that are not connected by any path), and pipelined streaming parallelism (via streaming dataflow edges), all in a single parallel program representation.\par
The HPVM representation promotes optimizations such as node fusion, data mapping to local accelerator memory (e.g., GPU scratchpads), and memory tiling. So a number of transformations can be performed on the HPVM IR to optimize execution on specific target devices. The HPVM code generator traverses the DFG, translating each DFG node into code for one or more processing elements in the target system. The HPVM design is able to leverage LLVM's well-tuned back-ends, such as NVIDIA PTX, Intel AVX and X86-64.
The HPVM run-time, invoked by the host code, interfaces with the corresponding device run-time to launch a kernel and copy needed data to and from the device.
\section{Trireme}\label{sec:trireme}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{Figs/Trireme_method_new.pdf}
\caption{Overview of the \texttt{Trireme}\ methodology.}
\label{fig:overview}
\end{figure*}
An overview of the entire methodology of the \texttt{Trireme}\ tool-chain is depicted in Figure \ref{fig:overview}.
Boxes C, D and E in the figure represent new components developed for this work, while the other boxes represent existing AccelSeeker{} and HPVM components.
The source code (C,C++) of every application is used as input and, with the aid of AccelSeeker,
we analyze its IR to identify candidates for acceleration (Box A). Next we estimate the SW and HW latency, area and the amount of data required for every identified candidate. Their potential performance gain (speedup) is estimated and attached to them as \emph{Merit}, as well as the \emph{Cost} required in terms of HW resources (Box B).\par
The list of candidates and the DFG of the application, generated by HPVM, are then passed as input to a tool that extracts all necessary information regarding potential parallel execution, as detailed in Subsection \ref{sec:ASeekHPVM}
(Box C).
With the aid of novel models for loop level\ (LLP), task level\ (TLP) and pipeline parallelism\ (PP), described in the following sections, we estimate potential speedup (Merit) and area (Cost), including through combination of parallel approaches wherever applicable, i.e., task level$+$loop level parallelism\ (TLP-LLP) and pipeline parallelism$+$task level parallelism\ (PP-TLP)
(Box D).
Figure~\ref{fig:dfg-parallel} shows the DFG of the edge detection benchmark and its respective parallelism opportunities.
\par
We update the list of accelerators with the newly formed candidates for acceleration that can exploit any (or all) of the three extracted types of parallelism (LLP, TLP, PP), and combinations of them
(Box E). Finally, a selection algorithm provides the HW/SW design that maximizes the potential speedup within a given area budget (Box F).
\par
\subsection{AccelSeeker-HPVM Integration}
\label{sec:ASeekHPVM}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{Figs/dfg_edge_detection_parallel.pdf}
\caption{DFG of edge detection depicting Basic Block level parallelism (BBLP) acceleration candidates,
loop level\ (LLP), task level\ (TLP) and pipeline parallelism\ (PP) opportunities.}
\label{fig:dfg-parallel}
\end{figure}
We have integrated AccelSeeker\ and HPVM to exploit any parallelism information that can be provided by the latter and guide the selection process.
In particular, we developed a C++ tool within the HPVM infrastructure that receives a list of the most promising candidates (functions) for acceleration, as evaluated by AccelSeeker, along with their corresponding estimated software ($T_s$), and hardware ($T_h$) execution times. In addition, the HPVM bitcode file of the application being analyzed is provided as input. The tool builds the DFG of the provided application and creates a mapping between the DFG leaf nodes and the respective input functions from AccelSeeker, such that each input function corresponds to a leaf node. For the scope of this work, we only consider candidate functions that correspond to leaf nodes in the HPVM DFG. Any functions called within a leaf node are accounted for as part of the leaf node's analysis, and not analyzed separately.
The tool then performs a set of HPVM DFG analyses that extract the different types of parallelism, as described below.\par
First, a \emph{node-reachability} analysis is performed that queries the HPVM DFG to determine whether each of the candidate DFG nodes has a path connecting it to any of the other candidates.
We consider nodes that belong to separate DFGs to be sequential. For every node $i$, we build a list of nodes that are parallel to it, such that any node $j$ that is found to be unreachable to/from $i$ is added to that list. The output of this analysis is the set of nodes that can run in parallel with each candidate.\par
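The node-reachability analysis can be sketched as follows. This is an illustrative Python sketch, not HPVM's actual implementation; the example DAG in the test is hypothetical, but chosen to be consistent with the independent pairs \{2,4\}, \{3,5\}, \{2,5\}, \{3,4\} reported for edge detection later in the paper.

```python
from itertools import combinations

def reachable(dfg, src):
    # Depth-first search returning every node reachable from src
    # along dataflow edges (dfg maps node -> list of successors).
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        for m in dfg.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def parallel_pairs(dfg, candidates):
    # Two candidates can run in parallel iff neither reaches the other.
    reach = {n: reachable(dfg, n) for n in candidates}
    return {(a, b) for a, b in combinations(sorted(candidates), 2)
            if b not in reach[a] and a not in reach[b]}
```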
Second, a critical-path analysis is performed to calculate the Earliest Start Time ($EST$) and Earliest Finish Time ($EFT$) of each candidate node. Two full traversals of the DFG are performed:
a) calculating the times assuming the entire computation runs in SW, and b) calculating the times assuming the computation is implemented in HW. In each traversal, the $EST$, $EFT$, and Duration ($D$) of a leaf node ($N$) are calculated as follows:\par\noindent
$D(N)=T_s\ \text{or}\ T_h$ depending on the current traversal.\par\noindent
$EST(N)=MAX(EFT(Pred(N)))$ where $Pred(N)$ is the list of $N$'s predecessors in the graph.\par\noindent
$EFT(N)=EST(N)+D(N)$.\par
For cases with separate DFGs, we set $EST$ of the first node in a DFG $i$ to be the $EFT$ of the last node in the previous DFG $i-1$.
The output of this analysis is the software and hardware $EST$s for each candidate function.
This information is used in conjunction with the reachability analysis results at a later stage to determine task level parallelism\ (Section \ref{ssec:tlp}).\par
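Both traversals apply the same recurrence, differing only in which duration ($T_s$ or $T_h$) is assigned to each node. A compact sketch (illustrative only, not the tool's code) is:

```python
def schedule(preds, duration):
    # preds: node -> list of predecessor nodes (a DAG from the DFG).
    # duration: node -> D(N); pass T_s for the SW traversal, T_h for HW.
    est, eft = {}, {}
    def eft_of(n):
        if n not in eft:
            # EST(N) = MAX(EFT(Pred(N))), or 0 for entry nodes.
            est[n] = max((eft_of(p) for p in preds.get(n, [])), default=0)
            eft[n] = est[n] + duration[n]   # EFT(N) = EST(N) + D(N)
        return eft[n]
    for n in duration:
        eft_of(n)
    return est, eft
```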
Finally, a third round of analysis detects for every candidate node whether or not it has dynamic replication. Its output is a table containing the nodes that have dynamic replication, along with the number of dimensions they are replicated on. Additionally, if the replication factors of a node are constants, those factors are included as well. This information is used at a later stage to determine loop level parallelism\ (Section \ref{ssec:llp}).
\subsection{Tool-chain Features}
\textbf{Accelerator Granularity.}
We consider the granularity of the candidates to be within the boundaries of a function, as identified by an LLVM-based analysis. Furthermore, under the scope of our work, and in order to integrate AccelSeeker analysis with HPVM, HW accelerators correspond to leaf nodes in the HPVM DFG, as seen in the example of Figure~\ref{fig:dfg-edge}. In this instance, every (indexed) node of the DFG of \texttt{edge detection}\ serves as a potential candidate for acceleration.\par
\textbf{Software, Hardware Latency and Area Estimation.}
We perform estimation of software and hardware latency for every identified candidate both by static analysis at the IR level and by extracting run-time profiling information. Furthermore, an estimation of LUTs and $mm^2$ is carried out in order to account for the hardware resource requirements of every accelerator. The former is estimated with AccelSeeker\ and its characterization of area in LUTs, by synthesizing a number of micro-benchmarks on a Zynq Programmable System-on-Chip (PSoC). The latter is retrieved by employing the Aladdin \cite{ShaoJul14} area characterization in $mm^2$. Our method, however, is not constrained to a specific platform and it can easily be adapted for different computing systems (e.g., FPGA boards, ASIC implementations, etc.).\par
\textbf{I/O Communication Estimation.}
The amount of data required by each candidate is also extracted by static analysis and by parsing its dynamic trace, when the latter is available. This data requirement is subsequently used to estimate latency due to communication between an accelerator and memory (e.g., DRAM, last level cache, etc.).\par
\textbf{Merit and Cost Estimation.}
Given the characteristics of the platform for which we are going to implement HW accelerators, we estimate potential speedup (Merit) for every acceleration candidate and the hardware resources required (Cost) to achieve that speedup. To obtain an accurate estimate, we use the AccelSeeker model for \emph{Merit}, which translates to cycles saved, and its model for \emph{Cost}, which accounts for the area budget in terms of LUTs (Section~\ref{sec:background:accelseeker}).
\textbf{Automatic Extraction of Parallelism.}
Using the tool developed for AccelSeeker-HPVM\ integration
(\ref{sec:ASeekHPVM}) we automatically extract information about the potential for loop level, task level\ and pipeline parallelism. This serves as input, along with AccelSeeker's list of candidates for (BBLP) acceleration, for the novel performance models of multiple levels of parallelism explored by \texttt{Trireme}.
These models are presented in detail in Section \ref{sec:models}.
\textbf{Selection Algorithm.}
The updated list of candidates for acceleration is generated, including both the Basic Block Level Parallelism (BBLP) accelerators from AccelSeeker\ and the candidates that exploit all types of parallelism explored by our tool-chain.
The selection algorithm recursively explores the subsets of the updated list of candidates, in a similar manner to the Bron-Kerbosch algorithm \cite{BronKerbosch73}. The output returned is the set
with the highest speedup (cumulative Merit) that stays within the user defined area budget (Cost).
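A minimal sketch of such a selection is shown below. It enumerates subsets exhaustively for clarity; the actual algorithm follows a Bron-Kerbosch-style recursion, and the candidate tuples here are hypothetical.

```python
from itertools import combinations

def select(candidates, budget):
    # candidates: list of (name, merit, cost) tuples.
    # Return the subset with the highest cumulative Merit whose
    # cumulative Cost stays within the user-defined area budget.
    best, best_merit = (), 0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            cost = sum(c for _, _, c in subset)
            merit = sum(m for _, m, _ in subset)
            if cost <= budget and merit > best_merit:
                best, best_merit = subset, merit
    return [name for name, _, _ in best], best_merit
```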
\begin{figure}[t]
\centering
\includegraphics[width=0.65\linewidth]{Figs/parallel_types1.pdf}
\includegraphics[width=0.65\linewidth]{Figs/parallel_types2.pdf}
\vspace{-3.5cm}
\caption{Designs exploiting Basic Block level (BBLP - AccelSeeker \cite{ZacharopoulosNov19}), loop level\ (LLP), task level\ (TLP), and pipeline parallelism\ (PP) in edge detection provided a $40 \times 10^3$ LUTs area budget. The size of the black rectangles represents area usage.}
\label{fig:paralel_types}
\end{figure}
\section{Merit and Cost Models}
\label{sec:models}
As mentioned in the previous section, we introduce novel models for estimation of speedup, which we denote as \emph{Merit}, and an estimation of the area required for every HW accelerator implementation, denoted as \emph{Cost}.
These models, inspired by the respective ones from RegionSeeker \cite{ZacharopoulosApr19} and AccelSeeker \cite{ZacharopoulosNov19}, are introduced to accommodate the estimation of loop level, task level\ and pipeline parallelism\ extracted by HPVM. Having an early estimation of speedup and area budget needs, for every possible design exploiting any, or a combination, of these three types of parallelism can lead to better design choices and significantly less engineering effort.
\subsection{Loop Level Parallelism (LLP)} \label{ssec:llp}
With the aid of the tool described in \ref{sec:ASeekHPVM}, information regarding the DFG nodes' loop-level parallelism is retrieved. As shown in the example of Figure \ref{fig:dfg-parallel}, the marked DFG nodes of edge detection are identified as nodes that contain a fully-parallelizable loop and, thus, are analyzed further so that multiple versions of the same functions are generated with an increasing LLP factor. For each factor, the loop is parallelized by replicating its body, and the corresponding speedup and cost estimates are computed. To simplify the estimation, we assume an equal workload for every iteration of the loop.\par
\emph{LLP Merit and Cost Estimation.}
Let $S = \{ S_1, S_2, \ldots, S_N \}$ be a set of parallelizable loop candidates, with associated SW latency ($SW_i$), HW computation latency ($HWcomp_i$), HW communication latency ($HWcom_i$), invocation overhead ($OVHD_i$) and area cost ($A_i$). Also let the LLP factor $j = 1,\dots,K$, where $K$ is the maximum loop trip count, be the factor by which we parallelize each loop. To simplify the analysis, we assume that the loop is perfectly load-balanced, and communication latency is constant, independent of $j$.
\par
Under these simplifying assumptions, for every loop candidate $\{S_{ij} \mid i=1,\dots,N,\ j=1,\dots,K\}$,
we compute the merit
$M(S_{ij}) = SW_i- {HWcomp_i \over j} - HWcom_i - OVHD_i$,
and the loop candidate area cost
$C(S_{ij}) = A_i \times j$, respectively.
As anticipated, increasing the replication factor improves performance at the cost of a larger required area. LLP, where applicable, can yield tremendous speedup benefits but at a high area budget cost, as
seen in Figure \ref{fig:paralel_types} (LLP vs software-only implementations) and discussed in greater detail in Section \ref{sec:results}.
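The LLP Merit and Cost estimates above can be sketched as follows. All latency and area values in this example are hypothetical, chosen only to illustrate the model.

```python
# Sketch of the LLP Merit/Cost model: M(S_ij) = SW_i - HWcomp_i/j - HWcom_i - OVHD_i
# and C(S_ij) = A_i * j. Latencies are in cycles, area in LUTs (hypothetical values).

def llp_merit(sw, hw_comp, hw_comm, ovhd, j):
    """Merit of a loop candidate accelerated with LLP factor j."""
    return sw - hw_comp / j - hw_comm - ovhd

def llp_cost(area, j):
    """Cost of a loop candidate: the loop body is replicated j times."""
    return area * j

# Example candidate: SW latency 1000, HW compute 400, communication 50,
# invocation overhead 10, area 200 LUTs, max loop trip count K = 4.
K = 4
for j in range(1, K + 1):
    m = llp_merit(1000, 400, 50, 10, j)
    c = llp_cost(200, j)
    print(f"LLP factor {j}: merit = {m:.1f}, cost = {c}")
```

As the factor $j$ grows, merit increases while cost grows linearly, matching the trade-off discussed above.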
\subsection{Task Level Parallelism (TLP)} \label{ssec:tlp}
To compute the potential speedup of a number of tasks that can run in parallel, we first need to extract all possible sets of independent candidates, i.e., all candidates that have no data flow dependencies. As depicted in the example in Figures \ref{fig:dfg-edge} and \ref{fig:dfg-parallel}, edge detection candidates indexed \{2,4\} and \{3,5\} are independent sets and can therefore be invoked in parallel. The same applies to candidates \{2,5\} and \{3,4\}. For this analysis, we use the estimated SW and HW times, as well as the EST provided by the tool described in Section~\ref{sec:ASeekHPVM}, as explained below.\par
\emph{Merit and Cost Definition/Estimation of TLP.}
Let $S = \{ S_1, S_2, \ldots, S_N \}$ be a set of independent candidates (tasks), with associated SW latency ($SW_i$), HW computation latency ($HWcomp_i$), HW communication latency ($HWcom_i$), invocation overhead ($OVHD_i$) and area cost ($A_i$). In the best case, all candidates in the set will be able to start execution at the same time, and the total HW latency of this set of candidates $S$ would be $MAX(S_{H_W}) = \max_{i=1,\dots,N}(HWcomp_i + HWcom_i + OVHD_i)$.
\par
In practice, some candidates may have varying starting times (e.g., \{2,5\}) because of dependencies on previous tasks not in the candidate set (e.g., 5 must wait for 4 to complete). To account for these delays, we add an extra overhead based on the difference of ESTs of the nodes in the candidate set: $EST\_OVHD = \max_{i}(EST_i) - \min_{i}(EST_i)$, $i=1,\dots,N$. Intuitively, the overhead allows us to mark the candidate set \{2,4\} as a better candidate for acceleration compared to \{2,5\}.
\par
We denote the merit of set $S$ by $M(S) = \sum_{i\in [1,N]}{ SW_i} - MAX(S_{H_W}) - EST\_OVHD$ and the cumulative cost of set $S$ in area by $C(S) = \sum_{i\in [1,N]}A_i$.\par
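The TLP Merit and Cost computation above can be sketched as follows. Task latencies, ESTs, and areas in this example are hypothetical.

```python
# Sketch of the TLP Merit/Cost model for a set S of independent tasks:
#   MAX(S_HW) = max_i(HWcomp_i + HWcom_i + OVHD_i)
#   EST_OVHD  = max_i(EST_i) - min_i(EST_i)
#   M(S)      = sum_i SW_i - MAX(S_HW) - EST_OVHD
#   C(S)      = sum_i A_i

def tlp_merit_cost(tasks):
    """tasks: list of dicts with keys sw, hw_comp, hw_comm, ovhd, est, area."""
    max_hw = max(t["hw_comp"] + t["hw_comm"] + t["ovhd"] for t in tasks)
    est_ovhd = max(t["est"] for t in tasks) - min(t["est"] for t in tasks)
    merit = sum(t["sw"] for t in tasks) - max_hw - est_ovhd
    cost = sum(t["area"] for t in tasks)
    return merit, cost

# Two independent tasks that can start at the same time (equal ESTs):
s = [dict(sw=500, hw_comp=100, hw_comm=20, ovhd=5, est=0, area=300),
     dict(sw=400, hw_comp=150, hw_comm=20, ovhd=5, est=0, area=250)]
print(tlp_merit_cost(s))  # prints (725, 550)
```

A nonzero difference in ESTs lowers the merit, which is how a set like \{2,5\} is ranked below \{2,4\}.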
Task level parallelism, in applications that have independent tasks, can offer significant speedup compared to, for instance, sequential accelerators exploiting only Basic Block level parallelism (BBLP) that require the same HW resources. Figure \ref{fig:paralel_types} provides a comparison between TLP and BBLP when accelerating the edge detection application.
\subsection{Pipeline Parallelism (PP)}
We assume that the pipeline has \(K\) stages, \(S_{1}\), \(S_{2}\), ... \(S_{K}\), and the time needed on stage \(i\) is \(T_{i}\). We also assume that the stage that requires the longest time is \(S_{j}\) (i.e., $\forall i \in \{1, 2, ..., K\}, T_{j} \geq T_{i}$). Now we will prove that the total execution time for pipeline parallelism is \(T_{total} = \sum_{i=1}^{K}{T_{i}} + {T_{j}} \times (N - 1)\), where $N$ is the number of iterations.
The first term \(\sum_{i=1}^{K}{T_{i}}\) is the time spent on the first iteration. The second term \(\max_{i}{T_{i}} \times (N - 1)\) is the timing overhead caused by the following \((N-1)\) iterations.
\textit{Step 1.}
Setting aside inter-stage dependencies for the moment (i.e., that \(S_{2}\) cannot start before \(S_{1}\) finishes, etc.), the earliest starting time for each stage is the ending time of the same stage in the previous iteration.
If we start the second iteration after \(T_{j}\), since $T_{j} \geq T_{i}, \forall i \in \{1, 2, ..., K\}$, the starting time of every stage in the second iteration will be no later than the ending time of the same stage in the previous iteration. In other words, there should not be any idle time.
Thus, the ending time for the second iteration is \(T_{j}\) later than the first iteration.
For the third iteration the ending time is \(T_{j}\) later than the second iteration. Thus, based on mathematical
induction, we can prove that at iteration \(n\), execution is completed \(T_{j}\) later than the previous iteration.
\textit{Step 2.} We prove that iteration \(n\) cannot finish at time \(t\) later than the previous iteration, if \(t < T_{j}\).
Provided that the ending time of the second iteration is \(t\) later than the first iteration, the ending time of stage \(S_{K}\) in the second iteration is \(t\) later than the one in the first iteration. Therefore, the starting time of stage \(S_{K}\) in the second iteration is \(t\) later than the one in the first iteration. Due to the inter-stage dependency, the ending time of stage \(S_{K-1}\) in the second iteration should be no more than \(t\) later than the one in the first iteration.
Hence, if we trace back to \(S_{j}\), we can say that the ending time of \(S_{j}\) in the second iteration should be no more than time \(t\) later than the one in the first iteration. However, since \(t < T_{j}\), the starting time of \(S_{j}\) in the second iteration will be \(T_{j} - t\) earlier than the ending time of \(S_{j}\) in the first iteration. In other words, there will be an overlap between two consecutive iterations on stage \(S_{j}\) (Figure~\ref{fig:ppOverlap}).
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{Figs/overlap.pdf}
\caption{Overlap on stage \(S_{j}\).}
\vspace{-0.3cm}
\label{fig:ppOverlap}
\end{figure}
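The closed form proven above can be checked with a small event-driven simulation in which each stage starts only when both its input (the previous stage of the same iteration) and the stage itself (the same stage in the previous iteration) are free. The stage times below are hypothetical.

```python
# Check the closed-form pipeline latency
#   T_total = sum_i T_i + max_i T_i * (N - 1)
# against an event-driven simulation of a linear pipeline.

def simulate_pipeline(stage_times, n_iters):
    """Total completion time: stage s of iteration it starts at the max of
    (end of stage s-1, same iteration) and (end of stage s, previous iteration)."""
    K = len(stage_times)
    end = [[0.0] * K for _ in range(n_iters)]
    for it in range(n_iters):
        for s, t in enumerate(stage_times):
            ready_input = end[it][s - 1] if s > 0 else 0.0
            ready_stage = end[it - 1][s] if it > 0 else 0.0
            end[it][s] = max(ready_input, ready_stage) + t
    return end[-1][-1]

def closed_form(stage_times, n_iters):
    return sum(stage_times) + max(stage_times) * (n_iters - 1)

stages = [3.0, 7.0, 2.0, 5.0]  # unbalanced pipeline, bottleneck T_j = 7
for n in (1, 2, 10):
    assert simulate_pipeline(stages, n) == closed_form(stages, n)
print("simulation matches closed form")
```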
\emph{Merit and Cost Definition/Estimation of PP.}
Based on the previous illustration, let $S = \{ S_1, S_2, \ldots, S_K \}$ be a set of pipelined candidates (tasks) and $N$ be the number of iterations, with associated SW latency ($SW_i$), HW computation latency ($HWcomp_i$), HW communication latency ($HWcom_i$), invocation overhead ($OVHD_i$),
HW latency $HW_i = HWcomp_i+HWcom_i+OVHD_i$
and area cost $(A_i)$.
We compute the HW latency, using the previous proof, as $HW_{TOTAL} = \sum_{i=1}^{K}{HW_i} + \max_{i}{HW_i} \times (N - 1) $. This formula applies to both balanced and unbalanced pipelines.
\par
We denote the merit of set $S$ by $M(S) = \sum_{i\in [1,K]}{ SW_i} - HW_{TOTAL} $ and the cumulative cost of set $S$ in area by $C(S) = \sum_{i\in [1,K]}A_i$.\par
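Putting the pieces together, the PP Merit and Cost estimates can be sketched as follows. Stage latencies and areas in this example are hypothetical.

```python
# Sketch of the PP Merit/Cost model for a pipeline S_1..S_K run for N iterations:
#   HW_i     = HWcomp_i + HWcom_i + OVHD_i
#   HW_TOTAL = sum_i HW_i + max_i HW_i * (N - 1)
#   M(S)     = sum_i SW_i - HW_TOTAL
#   C(S)     = sum_i A_i

def pp_merit_cost(stages, n_iters):
    """stages: list of dicts with keys sw, hw_comp, hw_comm, ovhd, area."""
    hw = [s["hw_comp"] + s["hw_comm"] + s["ovhd"] for s in stages]
    hw_total = sum(hw) + max(hw) * (n_iters - 1)
    merit = sum(s["sw"] for s in stages) - hw_total
    cost = sum(s["area"] for s in stages)
    return merit, cost

pipeline = [dict(sw=2000, hw_comp=40, hw_comm=10, ovhd=5, area=120),
            dict(sw=2000, hw_comp=90, hw_comm=10, ovhd=5, area=200)]
print(pp_merit_cost(pipeline, n_iters=10))  # prints (2895, 320)
```

Note how the slowest stage dominates $HW_{TOTAL}$ for large $N$, which is why unbalanced pipelines (as in \texttt{audio encoder}\ and \texttt{cava}) gain little from PP.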
\section{Experimental Setup}
\label{sec:setup}
For our experiments, we assume a heterogeneous system constituted by a single SW processor and multiple loosely coupled HW accelerators. The processor invokes the accelerators via a memory-mapped interface. DMA is used to transfer data from main memory to the accelerator scratchpads and, conversely, to store the accelerators' output back to main memory, where it is available to the SW processor.
As AccelSeeker, used as a baseline, targets by default an FPGA SoC (Zynq UltraScale SoC), we also use FPGA SoCs in our experiments.
\textbf{Benchmarks.} We evaluated the \texttt{Trireme}\ tool-chain on a variety of applications, ranging from small single-kernel programs to larger and more demanding ones. The type of potential parallelism extracted from every benchmark, as expected, also varies. The kernels from Parboil \cite{stratton2012parboil} and MachSuite \cite{reagen2014machsuite} offer opportunities for loop level parallelism\ only. Medium and large applications from the XR domain include the 3D spatial \texttt{audio encoder}\ from a recently released XR testbed \cite{huzaifa2020exploring} and the Camera Vision Pipeline \texttt{cava} \cite{yaoyuannnn}, where both loop level parallelism\ and pipelining are feasible, as well as visual inertial odometry (VIO), often referred to as \texttt{SLAM}, for which 70\% of the run-time is evaluated and loop level\ and task level parallelism\ opportunities are present. Larger and more complex applications, where all types of parallelism can be retrieved (as well as combinations of them), are also rigorously evaluated. These include the 3D spatial \texttt{audio decoder}\ (XR domain) from the XR testbed \cite{huzaifa2020exploring} and \texttt{edge detection}, a six-stage image processing pipeline used in \cite{kotsifakou2018hpvm}.
\textbf{Parallelism Strategies.} We evaluate and compare the following parallelism strategies for HW acceleration:\\
\textbf{a) Basic Block Level Parallelism (BBLP)}. Function (Task) accelerators that exploit Instruction Level Parallelism within a Basic Block. It corresponds to the accelerators selected by AccelSeeker \cite{ZacharopoulosNov19}.\\
\textbf{b) Loop Level Parallelism (LLP)}. Replication and parallel execution of fully parallelizable loops, represented in HPVM as leaf nodes with multiple dynamic instances.\\
\textbf{c) Task Level Parallelism (TLP)}. Sets of two or more tasks (HPVM leaf nodes) that have no data flow dependencies between them (i.e., no path in the HPVM dataflow graph connecting any pair of nodes in the set) and can therefore all run in parallel with each other.\\
\textbf{d) Pipeline Parallelism (PP)}. Sequences of HPVM nodes (tasks) connected by streaming dataflow edges, and therefore can be pipelined.\\
\textbf{e) Task and Loop Level Parallelism (TLP-LLP)}.
Sets of tasks that can be either executed as parallelizable loops or run as parallel tasks or both. The final design may have any of these forms of parallelism applied.\\
\textbf{f) Pipeline and Task Level Parallelism (PP-TLP)}. Sets of pipelined tasks that can also be run in parallel.
\textbf{Validation.} For the validation of our models, we evaluated HW acceleration with the Aladdin \cite{ShaoJul14} HW accelerator simulator. The run-time of the non-accelerated part was measured using gem5 \cite{BinkertFeb11}. The modelled processor is an in-order ARMv8-A processor with an issue width of 1, an atomic model, and a 100 MHz clock. Additionally, we used Catapult HLS \cite{CatapultHLS} to synthesize the HW accelerators for further validation.
\section{Experimental Results}
\label{sec:results}
In the following subsections, we showcase the speedup achieved from the hierarchical multi-level parallelism strategies explored by our tool-chain.
We group the results by different types of parallelism exploited by \texttt{Trireme}{}.
First, the performance benefits in single-kernel applications that solely exploit LLP are presented. Then, we investigate XR applications with pipelines (\texttt{audio encoder, cava}) and independent tasks (\texttt{SLAM}), where both LLP/PP or LLP/TLP can be applied. Finally, we study larger ones (\texttt{audio decoder, edge detection}), where LLP/TLP/PP and combinations of them can be used, such as TLP-LLP and PP-TLP, as described in the previous section. We evaluate the above against SW-only implementations, and against state-of-the-art{} AccelSeeker. As such, we target FPGA SoCs in all our experiments. \par
We validate the designs selected by our tool, given increasing area constraints, first using Aladdin \cite{ShaoJul14} (for the latency of HW accelerators) and gem5 \cite{BinkertFeb11} (for the software latency), and second using Catapult HLS for real hardware measurements. Finally, we study the effects of varying the bandwidth of data transfers between host and accelerator, and the overhead of accelerator invocation, on the \texttt{audio decoder} and \texttt{edge detection} benchmarks.
\subsection{Loop Level Parallelism}
\label{sec:res_llp}
\texttt{Trireme}, extracting information exposed by HPVM, identifies the application kernels that contain a fully parallelizable loop or loop nest. Subsequently, the Merit/Cost estimation models for loop level parallelism, as described in Section \ref{sec:models}, are used to estimate the speedup and hardware resource utilization for varying LLP factor. Figure \ref{fig:llp_results} shows the speedup obtained on six benchmarks from Parboil (\texttt{sgemm, lbm, spmv}) and MachSuite (\texttt{gemm-blocked, md-grid, stencil}), compared to a SW-only baseline.\par
\begin{figure*}[t]
\centering
\includegraphics[width=0.32\textwidth]{Figs/sgemm.mcifiles.0.1.100.pdf}
\includegraphics[width=0.32\textwidth]{Figs/spmv.mcifiles.0.1.10.pdf}
\includegraphics[width=0.32\textwidth]{Figs/lbm.mcifiles.0.1.100.pdf}
\hspace{-1.5cm}
\includegraphics[width=0.32\textwidth]{Figs/gemm-blocked.mcifiles.0.1.100.pdf}
\includegraphics[width=0.32\textwidth]{Figs/stencil.mcifiles.0.1.10.pdf}
\includegraphics[width=0.32\textwidth]{Figs/md-grid.mcifiles.0.1.100.pdf}
\vspace{-0.2cm}
\caption{Speedup obtained for
applications from Parboil \cite{stratton2012parboil} and MachSuite \cite{reagen2014machsuite} benchmark suites, varying the area budget constraint. We evaluate AccelSeeker \cite{ZacharopoulosNov19} (BBLP) and LLP, while the baseline is a SW-only implementation.}
\vspace{-0.2cm}
\label{fig:llp_results}
\end{figure*}
All applications benefit significantly from replicating their loop-bodies and running them in parallel, and the parallelism enables the designs to take advantage of larger area resources to achieve greater speedups than is possible without loop level parallelism. For an area budget of $3\times 10^3$ LUTs, \texttt{sgemm}\ and\ \texttt{gemm-blocked}\ reach a 16$\times$ and 25$\times$ speedup respectively, compared to the baseline, and a 3$\times$ and \textasciitilde2$\times$ speedup compared to BBLP, which corresponds to state-of-the-art\ AccelSeeker\ selections.\par
Kernels such as \texttt{spmv}\ and \texttt{stencil}{} realize a 4.7$\times$ and 3.4$\times$ speedup compared to a SW-only implementation respectively, for a budget of $5\times 10^3$ LUTs, whereas \texttt{lbm}, having a smaller loop body, i.e., fewer instructions and less computation time within the loop body than the previous kernels, benefits little from extra area resources and LLP. Finally, \texttt{md-grid}\ requires more area than the previous kernels and, having a large potential for loop level parallelism, reaches a 27$\times$ speedup compared to the SW baseline and 5.4$\times$ compared to state-of-the-art\ BBLP accelerators. Overall, \texttt{Trireme}{} is able in many cases to achieve substantial performance improvements for given hardware resources by exploiting loop level parallelism{} alone.
\subsection{Loop vs. Pipeline and Loop vs. Task Parallelism}
\label{sec:res_llp_pp}
Richer applications, such as components from the XR testbed \cite{huzaifa2020exploring}, contain a variety of opportunities to exploit parallelism. For \texttt{audio encoder}\ and \texttt{cava}, in addition to parallelizable loops, the DFG nodes can also be pipelined. For \texttt{SLAM}, apart from LLP, independent tasks are present as well. \texttt{Trireme}\ automatically generates designs exploiting this information.
\par
Figure \ref{fig:enc-cava} shows the speedup obtained from applying LLP and PP on \texttt{audio encoder}\ and \texttt{cava}, for a number of increasing area budgets. For a budget of $5 \times 10^3$ LUTs, \texttt{audio encoder}\ achieves an 8$\times$ (for LLP) and 9$\times$ (for PP) speedup compared to the SW-only baseline, as the entire pipeline fits the budget. Additionally, a slight improvement over BBLP (AccelSeeker selection) is achieved. Nonetheless, more area is required to parallelize the loops within the selected accelerators, which is evident from the increasing trend line for LLP. \par
For the same area budget in \texttt{cava}, the pipeline does not fit. Thus, the speedup gain for PP is the same as for BBLP (10$\times$ over the baseline). LLP on the other hand benefits from loop parallelization and achieves a 20$\times$ speedup.\par
For larger budgets, we can observe significant benefits in speedup for LLP, both in \texttt{audio encoder}\ and \texttt{cava}. With $15 \times 10^3$ LUTs \texttt{audio encoder}\ achieves a \textasciitilde17$\times$ speedup compared to baseline, and with $10 \times 10^3$ LUTs \texttt{cava}\ attains a 33$\times$ speedup.
These are respectively about 2$\times$ and 3$\times$ the speedup achieved with BBLP alone.
Figure \ref{fig:enc-cava} shows that \texttt{SLAM}\ benefits from LLP, reaching up to 7$\times$ speedup, as the area budget allows for more loop level parallelism. On the other hand, since only two tasks --- with small latency relative to the total run-time --- can be parallelized, TLP offers no performance gain.\par
For \texttt{audio encoder}\ and \texttt{cava}, PP produces little improvement in performance. This is due to the unbalanced pipelines in these workloads. One of the functions (DFG nodes) in each application dominates the computation time, therefore applying the PP strategy yields little benefit.
However, as demonstrated in the following round of experiments, this is not the case for the next two applications evaluated: \texttt{audio decoder}\ and \texttt{edge detection}.\par
\begin{figure*}[t]
\centering
\hspace{-0.5cm}
\includegraphics[width=0.33\linewidth]{Figs/audio_encoder_hpvm.pdf}
\includegraphics[width=0.33\linewidth]{Figs/cava_hpvm.mcifiles.0.1.100.pdf}
\includegraphics[width=0.33\linewidth]{Figs/slam.mcifiles.0.1.100.pdf}
\vspace{-0.4cm}
\caption{
Speedup obtained over the entire run-time of audio encoder \cite{huzaifa2020exploring}, cava \cite{yaoyuannnn} and
OpenVINS algorithm for SLAM \cite{huzaifa2020exploring}, varying the area constraint.
We evaluate AccelSeeker \cite{ZacharopoulosNov19} (BBLP),
LLP, PP and TLP, while the baseline is a SW-only implementation.
}
\vspace{-0.2cm}
\label{fig:enc-cava}
\end{figure*}
\subsection{Loop/Task/Pipeline Parallelism}
\label{sec:res_llp_pp_tlp}
In the previous subsection we encountered applications that could only exploit LLP and PP, whereas \texttt{audio decoder}, a state-of-the-art\ XR application component, and \texttt{edge detection}, a six-stage image processing pipeline, can offer LLP, TLP, PP, as well as combinations of them. Such applications are ideal candidates to employ \texttt{Trireme}\ and unlock their full parallelism potential. Figure \ref{fig:audio_decoder} presents the speedup achieved by multiple levels of parallelism explored by our tool-chain, for increasing area budgets.\par
On \texttt{audio decoder}\ (Figure \ref{fig:audio_decoder}, left, and Table \ref{tab:decoder}), for an area budget of $12 \times 10^3$ LUTs, LLP and PP reach a 13.2$\times$ and 13.7$\times$ speedup respectively, compared to a SW-only baseline. This budget is enough to fit one of the two \texttt{audio decoder}\ pipelines, and since the workloads are fairly balanced, we see the benefit obtained from this strategy. TLP and TLP-LLP achieve the same 15.1$\times$ speedup, as not enough area is available to benefit from parallelizing the loops, while the selected independent tasks are accelerated in parallel.\par
Increasing the budget to $14 \times 10^3$ LUTs, almost equivalent to Xilinx Artix Z-7007S PSoC \cite{ZynqMar17}, we can see that LLP and TLP-LLP are making use of the larger area and increase their respective speedups to 14.21$\times$ and 15.74$\times$. Conversely, BBLP, TLP and PP extract no benefit, using only 85\% of the available resources, as their potential candidate choices require more area to be selected (Table \ref{tab:decoder} - row 2). A budget of $15 \times 10^3$ LUTs, however, accommodates all available tasks to be parallelized (TLP-16.7$\times$), as well as the pipelines (PP-16.5$\times$), including the possibility to parallelize the independent pipelines (PP-TLP-18.31$\times$), yielding the maximum possible speedup for these strategies.\par
The latter point can also be seen in the last row of Table~\ref{tab:decoder}. A larger area budget, almost equivalent to the Xilinx Artix Z-7012S PSoC \cite{ZynqMar17}, allows LLP and TLP-LLP to benefit from increased parallelization of the loop bodies of their accelerators. TLP, PP and PP-TLP show no benefit from the doubling of the hardware resources as they have already reached their best-performing designs.
An interesting aspect is that PP-TLP, the strategy that achieves the best speedup, along with TLP and PP, requires fewer hardware resources to reach its maximum speedup than LLP and TLP-LLP, the latter achieving an almost equivalent speedup to PP-TLP but at a much larger area. Moreover, BBLP is consistently outperformed by all the parallelism strategies explored.\par
Similar trends can be seen in \texttt{edge detection}\ while investigating its potential for parallelism (Figure \ref{fig:audio_decoder} -- right). For a $14 \times 10^3$ LUTs area budget, TLP (3.2$\times$), PP (3.4$\times$) and PP-TLP (4.4$\times$) can accommodate all their respective HW/SW designs and reach their top speedups compared to the SW-only baseline. For the same budget, LLP and TLP-LLP achieve 2.5$\times$ and 3.2$\times$ respectively, requiring more area to reach better performance. An area budget of $40 \times 10^3$ LUTs, equivalent to the Artix Z-7014S PSoC, allows for more parallelization of the loop bodies for LLP and TLP-LLP, the latter reaching an equivalent of the PP-TLP maximum speedup (4.4$\times$).\par
For even larger area budgets, such as $100 \times 10^3$ LUTs, we notice that LLP reaches a 4$\times$ speedup and TLP-LLP surpasses the highest-performing PP-TLP design by achieving 4.7$\times$ speedup compared to the baseline. This is because, unlike \texttt{audio decoder}, all of the accelerated functions in \texttt{edge detection}\ have parallelizable loops, which allows for increasing speedup as the area increases.
\begin{figure*}[t]
\centering
\hspace{-1cm}
\includegraphics[width=0.45\linewidth]{Figs/audio_decoder_hpvm.mcifiles.0.1.100.pdf}
\includegraphics[width=0.45\linewidth]{Figs/edge_detection_hpvm.mcifiles.0.1.100.pdf}
\vspace{-0.2cm}
\caption{
Speedup over the entire runtime for different versions of audio decoder (left) and edge detection (right), varying the area constraint. The baseline is a SW-only implementation.
}
\label{fig:audio_decoder}
\end{figure*}
\begin{table}[h]
\footnotesize
\centering
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Benchmark} & \textbf{Parallelism} & \textbf{Area Budget} & \textbf{Area Used} & \textbf{Speedup} \\
& \textbf{Version} & (LUTs) & (LUTs) & \textbf{vs. SW} \\
\hline \hline
audio decoder & BBLP & 12 000 & 11916 (99\%) & 12.65 \\
& LLP & & 11655 (97\%) & 13.2 \\
& TLP & & 11916 (99\%) & 15.1 \\
& TLP-LLP & & 11916 (99\%) & 15.1 \\
& PP & & 11916 (99\%) & 13.7 \\
& PP-TLP & & 11916 (99\%) & 12.65 \\ \hline
& BBLP & 14 000 & 11916 (85\%) & 12.65 \\
& LLP & \textbf{Artix Z-7007S} & 13889 (99\%) & \underline{ 14.21} \\
& TLP & \cite{ZynqMar17} & 11916 (85\%) & 15.1 \\
& TLP-LLP & & 13889 (99\%) & \underline{15.74} \\
& PP & & 11916 (85\%) & 13.7 \\
& PP-TLP & & 13861 (99\%) & \underline{14.09} \\
\hline
& BBLP & 15 000 & 14166 (94\%) & \underline{13.62} \\
& LLP & & 14722 (98\%) & \underline{14.7} \\
& TLP & & 14166 (94\%) & \underline{16.7} \\
& TLP-LLP & & 14471 (96\%) & \underline{16.9} \\
& PP & & 14166 (94\%) & \underline{16.5} \\
& PP-TLP & & 14166 (94\%) & \underline{\textbf{18.31}} \\
\hline
& BBLP & 30 000 & 14166 (\textcolor{applegreen}{47\%}) & 13.62 \\
& LLP & \textbf{Artix Z-7012S } & 29773 (\textcolor{Crimsonglory}{99\%}) & \underline{16.3} \\
& TLP & \cite{ZynqMar17} & 14166 (\textcolor{applegreen}{47\%}) & 16.7 \\
& TLP-LLP & & 29773 (\textcolor{Crimsonglory}{99\%}) & \underline{18.24} \\
& PP & & 14166 (\textcolor{applegreen}{47\%}) & 16.5 \\
& PP-TLP & & 14166 (\textcolor{applegreen}{47\%}) & \textbf{18.31} \\
\hline
\end{tabular}
}
\caption{Area Budget and Area Used for audio decoder.
}
\vspace{-0.2cm}
\label{tab:decoder}
\end{table}
\subsection{Aladdin/gem5 and Catapult HLS}
\begin{figure}[t]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.55\linewidth]{Figs/audio_decoder_hpvm.validation.pdf}
\vspace{-0.3cm}
\caption{
Speedup obtained for audio decoder varying the area constraint using Aladdin \cite{ShaoJul14} for the HW acceleration parts and gem5 \cite{BinkertFeb11} for the SW-only implementation.
}
\vspace{-0.1cm}
\label{fig:audio_decoder_aladdin}
\end{figure}
To validate the selection of the HW/SW designs for every parallelism strategy explored and evaluated by our tool-chain, we use Aladdin \cite{ShaoJul14}, a HW accelerator simulator, and the gem5 \cite{BinkertFeb11} simulator.
Aladdin was chosen as a faster, yet accurate, alternative to commercial HLS tools that offer latency and area results.
For \texttt{audio decoder}, we gather the HW latency and area of the available candidates for acceleration with Aladdin, and their respective SW latency with gem5, as well as the run-time of the application as detailed in Section \ref{sec:setup}.\par
Figure \ref{fig:audio_decoder_aladdin} shows the speedup over increasing area budgets. For every area budget, the outputs of applying the parallelism strategies explored in this work match the ones generated by the Aladdin/gem5 simulations. This reinforces our expectation that our tool-chain selects the most promising designs with respect to performance and area usage.\par
As expected, the absolute speedup values for \texttt{audio decoder}\
(Figures \ref{fig:audio_decoder} and \ref{fig:audio_decoder_aladdin})
differ. This is due to two factors: A) Our performance and area models are not based on cycle-accurate estimations, but aim to enable the selection of high-performance HW/SW choices automatically, and faster than performing demanding simulations or RTL synthesis. B) The latency characterization for Aladdin targets OpenPDK 45nm technology, whereas our tool is characterized for a Zynq Programmable SoC.
To further evaluate our tool flow,
we designed accelerator prototypes using SystemC, guided by \texttt{Trireme}. To gather HW latency and area requirements, the accelerators were synthesized using Catapult HLS \cite{CatapultHLS}. The RTL was then synthesized, placed and routed by ASIC EDA tools using a commercial 12nm FinFET technology. The accelerators were clocked at 500MHz frequency and cycle-accurate Catapult simulations were used to measure the HW latency.\par
Table \ref{tab:catapult}
shows the HW latency comparison of \texttt{Trireme}\ (LLP, TLP, TLP-LLP) to AccelSeeker\ (BBLP).
For \texttt{audio encoder}, LLP designs guided by \texttt{Trireme}\ achieve impressive performance gains at the expense of more HW resources.
In \texttt{audio decoder}, LLP designs achieve smaller speedup and require the same or more resources compared to TLP-LLP. The latter can be up to six times faster than the respective AccelSeeker\ design for a large area budget (\textasciitilde$252 \times 10^3\ \mu m^2$).
A medium area budget of \textasciitilde$126 \times 10^3\ \mu m^2$ can yield significant speedup for TLP and TLP-LLP, where accelerators Rotate 1-3 operate in parallel.
Figure \ref{fig:layout} shows the physical layout of this design for \texttt{audio decoder}.
\begin{table}[h]
\footnotesize
\centering
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Benchmark} & \textbf{Parallelism} & \textbf{Area Used} & \textbf{Speedup vs.} \\
& \textbf{Version} & ($\mu m^2$) & \textbf{ state-of-the-art} \\
& & & \textbf{AccelSeeker\ (BBLP)} \\
\hline \hline
audio encoder & BBLP & 3854 & 1 \\
\hline
& LLP & 5415 & 2 \\
\hline
& LLP & 8578 & 4 \\
\hline
& LLP & 15072 & 8 \\
\hline
& LLP & 27491 & 16 \\
\hline
\hline
audio decoder & BBLP & 92 738 & 1 \\
& LLP & 85 602 & 1.5 \\
& TLP-LLP & 85 602 & 2 \\
\hline
& BBLP & 125 865 & 1 \\
& LLP & 171 385 & 2 \\
& TLP & \textbf{125 865} & \textbf{3} \\
& TLP-LLP & \textbf{125 865} & \textbf{3} \\
& TLP-LLP & 251 641 & 6 \\
\hline
\end{tabular}
}
\caption{
\texttt{Trireme}\ vs. AccelSeeker\ \cite{ZacharopoulosNov19} by Catapult HLS
\cite{CatapultHLS}.
}
\label{tab:catapult}
\end{table}
\begin{figure}[h]
\centering
\vspace{-0.5cm}
\hspace{1.4cm}
\includegraphics[width=0.28\linewidth]{Figs/layout_area2.pdf}
\includegraphics[width=0.15\linewidth]{Figs/area_value.pdf}
\vspace{-0.2cm}
\caption{HLS design of audio decoder guided by \texttt{Trireme}.
}
\label{fig:layout}
\end{figure}
\subsection{Configurations of the Target Platform}
To gain better intuition on how different platform configurations affect potential speedup in HW accelerated systems, we run a round of experiments varying the bandwidth of the data transfers to and from the HW accelerators (affecting memory latency), and the overhead of invoking them. Note that for Subsections \ref{sec:res_llp}, \ref{sec:res_llp_pp} and \ref{sec:res_llp_pp_tlp} we have been assuming a configuration of 1 GBps bandwidth and $1 \mu\text{s}$ overhead per accelerator invocation.\par
Figure \ref{fig:configs} (left) shows the \texttt{audio decoder}\ speedup due to varying the bandwidth over 100 MBps, 1 GBps and 10 GBps, and the area budgets over 12, 15 and 30$ \times 10^3$ LUTs. We observe that low bandwidth (100 MBps), even when the area budget is increased, offers little speedup from BBLP, LLP, TLP, TLP-LLP and PP.
This reveals the limitation of platforms where communication to memory can severely affect the speedup of a HW/SW design.\par
Overall, as expected, all parallelism strategies reach greater speedup when both bandwidth and area are increased. Nonetheless, LLP and TLP-LLP are favored, compared to the rest of the strategies, when bandwidth is increased for a given area budget. This result is even more evident for \texttt{edge detection}\ than for \texttt{audio decoder}, as seen in Figure \ref{fig:configs} (right), as it has more parallelizable loops than the latter. For the largest area budget of $100 \times 10^3$ LUTs, we notice that the second and fourth bars increase sharply as bandwidth increases, reaching 4.2$\times$ and 4.9$\times$ speedup respectively and surpassing PP-TLP, the previously best performing strategy, which peaked at the smaller budget of $15 \times 10^3$ LUTs. We can also notice this for \texttt{audio decoder}\ at the largest area budget of $30 \times 10^3$ LUTs, where TLP-LLP reaches the maximum speedup (20$\times$) compared to the rest of the parallelism approaches.\par
\begin{figure*}[h]
\centering
\hspace{-0.4cm}
\includegraphics[width=0.5\linewidth]{Figs/decoder_speedup.pdf}
\includegraphics[width=0.5\linewidth]{Figs/edge_speedup.pdf}
\caption{Speedup of audio decoder (left) and edge detection (right), for increasing bandwidth and area.
Baseline is SW-only.}
\label{fig:configs}
\end{figure*}
\section{Related Work}\label{sec:related}
We classify related research literature across five dimensions, as shown in Table \ref{tab:taxonomy}. The types of parallelism supported by each piece of research vary from ILP within Basic Block boundaries \cite{ZacharopoulosApr19, ZacharopoulosNov19}, to loop level\ \cite{nardi2019hypermapper, koeplinger2018spatial,peruse, DurstFeldman2020}, task level parallelism\ \cite{ nguyen2016fcuda, margerm2018tapas} and Tensor level \cite{circt}.
Early DSE, one of the most important aspects of \texttt{Trireme}, is in many instances not supported by tools developed to expose and exploit parallelism in HW acceleration \cite{fcuda, peruse, margerm2018tapas, schardl2017tapir, circt}.
FCUDA~\cite{fcuda} is a source-to-source tool that translates CUDA code to FPGA accelerators, but it offers no DSE or estimation of HW acceleration performance. On the other hand, Spatial \cite{koeplinger2018spatial} is an early DSE infrastructure that uses Hypermapper $2.0$~\cite{nardi2019hypermapper} to apply early DSE; however, the parts to be accelerated need to be user-defined, and high level languages are not supported as input. Aetherling \cite{DurstFeldman2020} applies early DSE as well and can be configured onto FPGAs, but it is restricted to loop level parallelism\ only and, like Spatial, does not support high level languages (C/C++).\par
Methodologies that combine static analysis and machine learning have been used in Peruse \cite{peruse} and in \cite{ZacharopoulosJul18} to predict the potential speedup of loop accelerators.
TAPAS~\cite{margerm2018tapas} is a tool-chain focusing on loop and task level parallelism by leveraging the TAPIR~\cite{schardl2017tapir} Parallel IR representation of the code. While TAPIR can express parallelism at arbitrary granularities, HPVM additionally exposes nested parallelism, which \texttt{Trireme}\ leverages.\par
HeteroCL~\cite{lai2019heterocl}, developed within a Python-based domain specific language, performs early DSE and offers estimations on performance and area targeting FPGAs. It uses parallel processing pipelines and shifts towards tensor-related computations, used in Linear Algebra, Computer Vision and Machine Learning. Since HeteroCL is domain specific, it uses the domain expertise to trade accuracy for performance aggressively by reducing the bitwidth for key functional units. \par
High Level Synthesis (HLS) tools have improved substantially in
recent years \cite{MeeusSep12}. Commercial tools like
Xilinx Vivado HLS \cite{VivadoHLSMar17} and Cadence Stratus HLS \cite{StratusHLSApr16}, and academic tools like Bambu \cite{PilatoMar12} and Legup \cite{CanisSep13b},
carry out the design of computation-heavy accelerators from application source code. They achieve performance on a par with that of hand-crafted implementations written in low level hardware description languages like VHDL and Verilog. But these HLS tools provide no DSE or early estimation of accelerator performance; hence, they are complementary to \texttt{Trireme}{} in an application-driven hardware-design workflow. \par
Tools that perform HW acceleration simulation and can be used for DSE such as Aladdin \cite{ShaoJul14}, gem5-aladdin \cite{gem5aladdin} and gem5-SALAM \cite{gem5salam}
can achieve high cycle and power accuracy, comparable to that of commercial HLS tools, and optimizations such as loop unrolling and loop pipelining can be applied. However, a considerable amount of manual work is required, and the simulation process is fairly time-consuming, though still significantly faster than commercial HLS tools.
Finally, frameworks have also been proposed for automatic binary parallelization \cite{zhou2019janus} and for automatic parallelization of non-numerical applications \cite{campanoni2014helix}, the latter decoupling communication from computation to avoid synchronization overhead.
\begin{table}[h]
\Huge
\vspace{0.7cm}
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\textbf{Feature} & \textbf{FCUDA} & \textbf{Spatial} & \textbf{Peruse} & \textbf{TAPAS} & \textbf{CIRCT} & \textbf{Aether} & \textbf{Accel} & \textbf{\texttt{Trireme}} \\
& & \textbf{} & & & & \textbf{ling} & \textbf{Seeker} & \\
& & & & & & & & \\
& \cite{fcuda} & \cite{koeplinger2018spatial, nardi2019hypermapper} & \cite{peruse} & \cite{margerm2018tapas} & \cite{circt} & \cite{DurstFeldman2020} & \cite{ZacharopoulosNov19} & \\
\hline \hline
Levels of & Loop & Loop & Loop &Loop & Tensor & Loop & Intra-BB & Intra-BB\\
Parallelism & Task & Task & & Task & & & ILP & Loop \\
& & & & & & & & Task \\
& & & & & & & & Pipeline \\
\hline
Early & \textcolor{Crimsonglory}{\xmark} & \textcolor{applegreen}{\cmark} & \textcolor{Crimsonglory}{\xmark} & \textcolor{Crimsonglory}{\xmark} & \textcolor{Crimsonglory}{\xmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} \\
DSE & & & & & & & & \\
\hline
Performance & & & & & & & & \\
Estimation & \textcolor{Crimsonglory}{\xmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{Crimsonglory}{\xmark} & N/A & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} \\
\hline
Automated & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} & \textcolor{applegreen}{\cmark} \\ \hline
Configurations & & & & & & & & \\
of Target & & & & & & & &\\
SoCs & \textcolor{Crimsonglory}{\xmark} &
\textcolor{Crimsonglory}{\xmark} &
\textcolor{Crimsonglory}{\xmark} & \textcolor{Crimsonglory}{\xmark} & N/A & \textcolor{Crimsonglory}{\xmark} & \textcolor{Crimsonglory}{\xmark} & \textcolor{applegreen}{\cmark}\\
\hline
\end{tabular}
}
\caption{Taxonomy Table.}
\label{tab:taxonomy}
\end{table}
\section{Conclusions}
Early DSE in modern applications, along with the extraction of critical information about parallelism, can be crucial to the outcome of a final HW/SW design and its respective performance on SoCs.
\texttt{Trireme}\ leverages information automatically retrieved by HPVM and applies it to accelerators automatically identified and evaluated by AccelSeeker. Using novel performance models, \texttt{Trireme}{} is able to thoroughly explore a variety of parallelism strategies and select the highest performing HW/SW design as output for area budgets of increasing size. We have explored multiple SoC configurations, varying the data transfer bandwidth between memory and accelerators, as well as accelerator invocation overhead. Application of \texttt{Trireme}\ to the XR domain yields substantial speedup gain with fixed resources when compared with
state-of-the-art tools (e.g., AccelSeeker\ \cite{ZacharopoulosNov19})
that do not consider loop level parallelism, task level parallelism{} and pipeline parallelism.
\bibliographystyle{ACM-Reference-Format}
\section*{Acknowledgments}
\par
The author is grateful to Prof. A. Parola and Prof. L. Reatto
for stimulating discussions.
\end{document}
\section{Introduction}
\vspace*{-0.4em}
Of all the clustering algorithms in use today, among the simplest and most
utilized is the venerated $k$-means clustering algorithm, usually implemented
via Lloyd's algorithm: given a dataset $S$, repeat the following two steps (a
`Lloyd iteration') until the centroids of each of the $k$ clusters converge:
\vspace*{-1.0em}
\begin{enumerate} \itemsep -2pt
\item Assign each point $p_i \in S$ to the cluster with nearest centroid.
\item Recalculate the centroids for each cluster using the assignments of each
point in $S$.
\end{enumerate}
\vspace*{-1.0em}
Clearly, a simple implementation of this algorithm will take $O(kN)$ time per
iteration, where $N = |S|$. However, the number of iterations is not bounded unless the
practitioner manually sets a maximum, and $k$-means is not guaranteed to
converge to the global best clustering. Despite these shortcomings, in practice
$k$-means tends to quickly converge to reasonable solutions. Even so, there is
no shortage of techniques for improving the clusters $k$-means converges to:
refinement of initial centroids \cite{bradley1998refining} and weighted
sampling of initial centroids \cite{arthur2007k} are just two of many popular
existing strategies.
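Since the rest of the paper measures against this baseline, a minimal Python sketch of a single naive Lloyd iteration may be a useful reference point (illustrative only; all names are ours, and no particular library implementation is implied):

```python
import math

def lloyd_iteration(points, centroids):
    """One naive Lloyd iteration: assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster.
    Runs in O(kN) time for N points and k centroids."""
    k = len(centroids)
    assignments = []
    for p in points:
        # Step 1: brute-force nearest-centroid assignment.
        dists = [math.dist(p, c) for c in centroids]
        assignments.append(dists.index(min(dists)))
    # Step 2: recompute centroids from the assignments.
    new_centroids = []
    for j in range(k):
        members = [p for p, a in zip(points, assignments) if a == j]
        if members:
            dim = len(members[0])
            new_centroids.append(tuple(
                sum(m[d] for m in members) / len(members)
                for d in range(dim)))
        else:
            # An empty cluster keeps its old centroid.
            new_centroids.append(centroids[j])
    return assignments, new_centroids
```

The accelerated algorithms discussed below all compute exactly these assignments and centroids, but avoid most of the $kN$ distance evaluations.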
There are also a number of methods for accelerating the runtime of a single
iteration of $k$-means. In general, these ideas use the triangle inequality to
prune work during the assignments step. Algorithms of this sort include the
work of Pelleg and Moore \yrcite{pelleg1999accelerating}, Elkan
\yrcite{elkan2003using}, Hamerly \yrcite{hamerly2010making}, and Ding et~al.
\yrcite{ding2015yinyang}. However, the scaling of these algorithms can make
them problematic for the case of large $k$ and large $N$.
\setlength{\textfloatsep}{0.4em}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
{\bf Algorithm} & {\bf Setup} & {\bf Worst-case} & {\bf Memory} \\
\hline
naive & n/a & $O(kN)$ & $O(k + N)$ \\
blacklist & $O(N \log N)$ & $O(kN)$ & $O(k \log N + N)$ \\
elkan & n/a & $O(k^2 + kN)$ & $O(k^2 + kN)$ \\
hamerly & n/a & $O(k^2 + kN)$ & $O(k + N)$ \\
yinyang & $O(k^2 + kN)$ & $O(kN)$ & $O(kN)$ \\
{\bf dualtree} & $O(N \log N)$ & $O(k \log k + N)^1$ & $O(k + N)$ \\
\hline
\end{tabular}
\end{center}
\vspace*{-1.0em}
\caption{Runtime and memory bounds for $k$-means algorithms.}
\label{tab:runtimes}
\vspace*{-1.0em}
\end{table*}
In this paper, we describe a dual-tree $k$-means algorithm tailored to the large
$k$ and large $N$ case that outperforms all competing algorithms in that
setting; this dual-tree algorithm also has bounded single-iteration runtime in
some situations (see Section \ref{sec:theory}). This algorithm, which is our
main contribution, has several appealing aspects:
\vspace*{-0.5em}
\begin{itemize} \itemsep -1pt
\item {\bf Empirical efficiency}. In the large $k$ and large $N$ setting for
which this algorithm is designed, it outperforms all other alternatives, and
scales better to larger datasets. The algorithm is especially efficient in
low dimensionality.
\item {\bf Runtime guarantees}. Using adaptive runtime analysis
techniques, we bound the single-iteration runtime of our algorithm with respect
to the intrinsic dimensionality of the centroids and data, when cover trees are
used. This gives theoretical support for the use of our algorithm in large data
settings. In addition, the bound is dependent on the intrinsic dimensionality,
{\it not} the extrinsic dimensionality.
\item {\bf Generalizability}. We develop our algorithm using a
tree-independent dual-tree algorithm abstraction \cite{curtin2013tree}; this
means that our algorithm may be used with {\it any} type of valid tree. This
includes not just $kd$-trees but also metric trees, cone trees,
octrees, and others. Different trees may be suited to different types of data,
and since our algorithm is general, one may use any type of tree as a
plug-and-play parameter.
\item {\bf Separation of concerns}. The abstraction we use to develop our
algorithm allows us to focus on and formalize each of the pruning rules
individually (Section \ref{sec:strategies}). This aids understanding of the
algorithm and eases insertion of future improvements and better pruning rules.
\end{itemize}
\vspace*{-0.8em}
Section \ref{sec:scaling} shows the relevance of the large $k$ case; then, in
Section \ref{sec:trees}, we show that we can build a tree on the $k$ clusters,
and then a dual-tree algorithm \cite{curtin2013tree} can be used to efficiently
perform an exact single iteration of $k$-means clustering. Section
\ref{sec:strategies} details the four pruning strategies used in our algorithm,
and Section \ref{sec:algorithm} introduces the algorithm itself. Sections
\ref{sec:theory} and \ref{sec:empirical} show the theoretical and empirical
results for the algorithm, and finally Section \ref{sec:conclusion} concludes
the paper and outlines directions for future improvements.
\vspace*{-0.2em}
\section{Scaling $k$-means}
\label{sec:scaling}
\vspace*{-0.1em}
Although the original publications on $k$-means only applied the algorithm to a
maximum dataset size of 760 points, the half-century of relentless progress
since then has seen dataset sizes scale into billions. Due to its
simplicity, though, $k$-means has remained relevant, and is still applied in
many large-scale applications.
In cases where $N$ scales but $k$ remains small, a good choice of algorithm is a
sampling algorithm, which will return an approximate clustering. One sampling
technique, coresets, can produce good clusterings for $n$ in the millions using
several hundred or a few thousand points \cite{coresets}. However, for large
$k$, the number of samples required to produce good clusterings can become
prohibitive.
For large $k$, then, we turn to an alternative approach: accelerating exact
Lloyd iterations. Existing techniques include the brute-force
implementation, the {\it blacklist} algorithm
\cite{pelleg1999accelerating}, Elkan's algorithm \yrcite{elkan2003using}, and
Hamerly's algorithm \yrcite{hamerly2010making}, as well as the recent Yinyang
$k$-means algorithm \cite{ding2015yinyang}. The blacklist algorithm builds a
$kd$-tree on the dataset and, while the tree is traversed, blacklists individual
clusters that cannot be the closest cluster (the {\it owner}) of any descendant
points of a node. Elkan's algorithm maintains an upper bound and a lower bound
on the distance between each point and centroid; Hamerly's algorithm is a
memory-efficient
simplification of this technique. The Yinyang algorithm
organizes the centroids into groups of about 10 (depending on algorithm
parameters) using 5 iterations of $k$-means on the centroids followed by a
single iteration of standard $k$-means on the points. Once groups are built,
the Yinyang algorithm attempts to prune groups of centroids at a time using
rules similar to Elkan and Hamerly's algorithms.
Of these algorithms, only Yinyang $k$-means considers centroids in groups at
all, but it does not consider points in groups. On the other hand, the
blacklist algorithm is the only algorithm that builds a tree on the points and
is able to assign multiple points to a single cluster at once. So, although
each algorithm has its own useful region, none of the four we have considered
here are particularly suited to the case of large $N$ {\bf and} large $k$.
Table \ref{tab:runtimes} shows setup costs,
worst-case per-iteration runtimes, and memory usage of each of these algorithms
as well as the proposed dual-tree algorithm\footnote{The dual-tree algorithm
worst-case runtime bound also depends on some assumptions on dataset-dependent
constants. This is detailed further in Section \ref{sec:theory}.}. The
expected runtime of the blacklist algorithm is, under some assumptions,
$O(k + k \log N + N)$ per iteration. The expected runtime of Hamerly's and
Elkan's algorithm is $O(k^2 + \alpha N)$ time, where $\alpha$ is the expected
number of clusters visited by each point (in both Elkan and Hamerly's results,
$\alpha$ seems to be small).
However, none of these algorithms are specifically tailored to the large $k$
case, and the large $k$ case is common. Pelleg and Moore
\yrcite{pelleg1999accelerating} report several hundred clusters in a subset of
800k objects from the SDSS dataset. Clusterings for $n$-body simulations on
astronomical data often involve several thousand clusters
\cite{kwon2010scalable}. Csurka
et~al. \yrcite{csurka} extract vocabularies from image sets using $k$-means with
$k \sim 1000$. Coates et~al. \yrcite{coates} show that $k$-means can work
surprisingly well for unsupervised feature learning for images, using $k$ as
large as 4000 on 50000 images. Also, in text mining, datasets can have up to
18000 unique labels \cite{bengio2010label}. Can and Ozkarahan
\yrcite{can1990concepts} suggest that the number of clusters in text data is
directly related to the size of the vocabulary, suggesting $k \sim mN/t$ where
$m$ is the vocabulary size, $N$ is the number of documents, and $t$ is the
number of nonzero entries in the term matrix.
Thus, it is important to have an algorithm with favorable scaling properties for
both large $k$ and $N$.
\vspace*{-0.4em}
\section{Tree-based algorithms}
\label{sec:trees}
\vspace*{-0.2em}
The blacklist algorithm is an example of a {\it single-tree algorithm}: one tree
(the {\it reference tree}) is built on the dataset, and then that tree is
traversed. This approach is applicable to a surprising variety of
other problems, too \cite{bentley1975multidimensional, moore1998very,
curtin2013fast}. Following the blacklist algorithm, then, it is only
natural to build a tree on the data points. Tree-building is (generally) a
one-time $O(N \log N)$ cost and for large $N$ or $k$, the cost of tree
building is often negligible compared to the time it takes to perform the
clustering.
\setlength{\textfloatsep}{0.4em}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\begin{tikzpicture}
\filldraw [lightgray!60!blue] (0.0, 0.0) circle (0.6) { };
\draw [thin] (0.0, 0.0) circle (0.6) { };
\node [ ] at (0.0, 0.0) { $\mathscr{N}_q$ };
\filldraw [lightgray!60!red] (-0.6, 0.2) circle (0.4) { };
\draw [thin] (-0.6, 0.2) circle (0.4) { };
\node [ ] at (-0.6, 0.2) { $\mathscr{N}_{r2}$ };
\filldraw [lightgray!60!red] (2.1, 0.1) circle (0.4) { };
\draw [thin] (2.1, 0.1) circle (0.4) { };
\node [ ] at (2.1, 0.1) { $\mathscr{N}_r$ };
\draw [black,dashed,domain=-30:30] plot ({1.63246*cos(\x)},
{1.63246*sin(\x)});
\draw [black,dashed,domain=150:210] plot ({1.63246*cos(\x)},
{1.63246*sin(\x)});
\draw (0.6, 0.0) -- (1.63246, 0.0) { };
\draw (0.6, 0.0) -- (0.7, 0.1) { };
\draw (0.6, 0.0) -- (0.7, -0.1) { };
\draw (1.63246, 0.0) -- (1.53246, 0.1) { };
\draw (1.63246, 0.0) -- (1.53246, -0.1) { };
\node [ ] at (1.1, 0.3) { $\scriptstyle{\operatorname{ub}(\mathscr{N}_q)}$ };
\end{tikzpicture}
\caption{$\mathscr{N}_r$ can be pruned.}
\label{fig:prune-1}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\begin{tikzpicture}
\draw [gray,dashed] (-0.65, -0.5) -- (-0.65, 1.1) { };
\filldraw [lightgray!60!blue] (0.0, 0.0) circle (0.05) { };
\draw [thin] (0.0, 0.0) circle (0.05) { };
\node [ ] at (-0.26, 0.0) { $p_q$ };
\filldraw [lightgray!60!red] (0.4, 0.2) circle (0.05) { };
\draw [thin] (0.4, 0.2) circle (0.05) { };
\node [ ] at (0.3, 0.42) { $c_j$ };
\draw [gray] (0.44, 0.23) -- (1.2, 0.6) { };
\draw [gray] (0.44, 0.23) -- (0.47, 0.3) { };
\draw [gray] (0.44, 0.23) -- (0.51, 0.2) { };
\draw [gray] (1.2, 0.6) -- (1.17, 0.53) { };
\draw [gray] (1.2, 0.6) -- (1.13, 0.63) { };
\node [ ] at (0.9, 0.1) { $m_j$ };
\draw [black,dashed,domain=-30:50] plot ({1.3416*cos(\x)},
{1.3416*sin(\x)});
\draw [gray] (0.045, -0.005) -- (1.3212, -0.23297) { };
\draw [gray] (0.045, -0.005) -- (0.1, 0.03) { };
\draw [gray] (0.045, -0.005) -- (0.095, -0.065) { };
\draw [gray] (1.3212, -0.23297) -- (1.25, -0.26) { };
\draw [gray] (1.3212, -0.23297) -- (1.27, -0.17) { };
\node [ ] at (0.5, -0.4) { $\scriptstyle{\operatorname{ub}(p_q) + m_j}$ };
\filldraw [lightgray!60!red] (2.4, 0.3) circle (0.05) { };
\draw [thin] (2.4, 0.3) circle (0.05) { };
\node [ ] at (2.4, 0.5) { $c_k$ };
\draw [gray] (2.45, 0.3) -- (3.3, 0.3) { };
\draw [gray] (2.45, 0.3) -- (2.52, 0.27) { };
\draw [gray] (2.45, 0.3) -- (2.52, 0.33) { };
\draw [gray] (3.3, 0.3) -- (3.23, 0.27) { };
\draw [gray] (3.3, 0.3) -- (3.23, 0.33) { };
\node [ ] at (2.8, 0.1) { $\scriptstyle{\min_k m_k}$ };
\draw [black,dotted] (2.4, 0.3) circle (0.9) { };
\end{tikzpicture}
\caption{$p_q$'s owner cannot change.}
\label{fig:prune-2}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\begin{tikzpicture}
\draw [gray,dashed] (-0.9, -0.5) -- (-0.9, 1.1) { };
\filldraw [lightgray!60!blue] (0.0, 0.0) circle (0.05) { };
\draw [thin] (0.0, 0.0) circle (0.05) { };
\node [ ] at (-0.26, 0.0) { $p_q$ };
\filldraw [lightgray!60!red] (0.4, 0.2) circle (0.05) { };
\draw [thin] (0.4, 0.2) circle (0.05) { };
\node [ ] at (0.3, 0.42) { $c_j$ };
\draw [gray] (0.44, 0.23) -- (1.2, 0.6) { };
\draw [gray] (0.44, 0.23) -- (0.47, 0.3) { };
\draw [gray] (0.44, 0.23) -- (0.51, 0.2) { };
\draw [gray] (1.2, 0.6) -- (1.17, 0.53) { };
\draw [gray] (1.2, 0.6) -- (1.13, 0.63) { };
\node [ ] at (0.9, 0.1) { $m_j$ };
\draw [black,dashed,domain=-30:50] plot ({1.3416*cos(\x)},
{1.3416*sin(\x)});
\draw [gray] (0.045, -0.005) -- (1.3212, -0.23297) { };
\draw [gray] (0.045, -0.005) -- (0.1, 0.03) { };
\draw [gray] (0.045, -0.005) -- (0.095, -0.065) { };
\draw [gray] (1.3212, -0.23297) -- (1.25, -0.26) { };
\draw [gray] (1.3212, -0.23297) -- (1.27, -0.17) { };
\node [ ] at (0.5, -0.4) { $\scriptstyle{\operatorname{ub}(p_q) + m_j}$ };
\filldraw [lightgray!60!red] (2.0, 0.3) circle (0.05) { };
\draw [thin] (2.0, 0.3) circle (0.05) { };
\node [ ] at (2.0, 0.5) { $c_k$ };
\draw [gray] (2.05, 0.3) -- (2.9, 0.3) { };
\draw [gray] (2.05, 0.3) -- (2.12, 0.33) { };
\draw [gray] (2.05, 0.3) -- (2.12, 0.27) { };
\draw [gray] (2.9, 0.3) -- (2.83, 0.33) { };
\draw [gray] (2.9, 0.3) -- (2.83, 0.27) { };
\node [ ] at (2.4, 0.1) { $\scriptstyle{\min_k m_k}$ };
\draw [black,dotted] (2.0, 0.3) circle (0.9) { };
\end{tikzpicture}
\caption{$p_q$'s owner can change.}
\label{fig:prune-3}
\end{subfigure}
\vspace*{-0.7em}
\caption{Different pruning situations.}
\label{fig:prune_one}
\vspace*{-1.0em}
\end{figure*}
The speedup of the blacklist algorithm comes from the hierarchical nature of
trees: during the algorithm, we may rule out a cluster centroid for {\it many
points at once}. The same reason is responsible for the impressive speedups
obtained for other single-tree algorithms, such as nearest neighbor search
\cite{bentley1975multidimensional, liu2004investigation}. But
for nearest neighbor search, the nearest neighbor is often required not just for
a query point but instead a {\it query set}. This observation
motivated the development of {\it dual-tree algorithms}, which also build a tree
on the query set (the {\it query tree}) in order to share work across query
points. Both trees are recursed in such a way that combinations of query
nodes and reference nodes are visited. Pruning criteria are applied to
these node combinations, and if a combination may be pruned, then the
recursion does not continue in that direction.
This approach
is applicable to $k$-means with large $k$: we may build a tree on the
$k$ cluster centroids, as well as a tree on the data points, and then we may
rule out {\it many} centroids for {\it many} points at once.
A recent result generalizes the class of dual-tree
algorithms, simplifying their expression and development
\cite{curtin2013tree}. Any dual-tree algorithm can be decomposed into three
parts: a type of space tree, a pruning dual-tree traversal, and a point-to-point
\texttt{BaseCase()} function and node-to-node \texttt{Score()} function that
determines when pruning is possible. Precise definitions and details of the
abstraction are given by \citet{curtin2013tree}, but for our purposes, this means
that we can describe a dual-tree $k$-means algorithm entirely with a
straightforward \texttt{BaseCase()} function and \texttt{Score()} function. Any
tree and any traversal can then be used to create a working dual-tree algorithm.
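To make the abstraction concrete, the two callbacks can be sketched as follows (a schematic Python sketch under our own naming, not mlpack's or any cited implementation's actual interface):

```python
import math

INF = float("inf")

def score(min_dist_qr, ub_q):
    """Schematic Score(): given d_min(N_q, N_r) and the cached upper
    bound ub(N_q) on the distance from any descendant point of N_q to
    its nearest centroid, return INF to prune the node combination, or
    a priority value used to order further recursion."""
    return INF if min_dist_qr > ub_q else min_dist_qr

def base_case(point, centroid, best_dist, best_centroid):
    """Schematic BaseCase(): compare one point with one centroid and
    keep the running nearest centroid for that point."""
    d = math.dist(point, centroid)
    if d < best_dist:
        return d, centroid
    return best_dist, best_centroid
```

The traversal itself is supplied by the tree type; only these two functions encode the $k$-means-specific logic.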
The two types of trees we will explicitly consider in this paper are the
$kd$-tree and the cover tree \cite{langford2006}, but it should be remembered
that the algorithm as provided is sufficiently general to work with any other
type of tree. Therefore, we standardize notation for trees: a tree is denoted
with $\mathscr{T}$, and a node in the tree is denoted by $\mathscr{N}$. Each
node in a tree may have children; the set of children of $\mathscr{N}_i$ is
denoted $\mathscr{C}_i$. In addition, each node may hold some points; this set
of points is denoted $\mathscr{P}_i$. Lastly, the set of {\it descendant}
points of a node $\mathscr{N}_i$ is denoted $\mathscr{D}^p_i$. The descendant
points are all points held by descendant nodes, and it is important to note that
the set $\mathscr{P}_i$ is {\it not} equivalent to $\mathscr{D}^p_i$. This
notation is taken from \citet{curtin2013tree} and is detailed more
comprehensively there. Finally, we say that a centroid $c$ {\it owns} a point
$p$ if $c$ is the closest centroid to $p$.
\vspace*{-0.5em}
\section{Pruning strategies}
\label{sec:strategies}
\vspace*{-0.2em}
All of the existing accelerated $k$-means algorithms operate by avoiding
unnecessary work via the use of pruning strategies. Thus, we will pursue four
pruning strategies, each based on or related to earlier work
\cite{pelleg1999accelerating, elkan2003using, hamerly2010making}.
These pruning strategies are meant to be used during the dual-tree traversal,
for which we have built a query tree $\mathscr{T}_q$ on the points and
a reference tree $\mathscr{T}_r$ on the centroids. Therefore, these pruning
strategies consider not just combinations of single points and centroid
$p_q$ and $c_i$, but the combination of sets of points and sets of centroids,
represented by a query tree node $\mathscr{N}_q$ and a centroid tree node $\mathscr{N}_r$. This
allows us to prune many centroids for many points simultaneously.
{\bf Strategy one.} When visiting a particular combination $(\mathscr{N}_q, \mathscr{N}_r)$
(with $\mathscr{N}_q$ holding points in the dataset and $\mathscr{N}_r$ holding
centroids), the combination should be pruned if every descendant centroid in
$\mathscr{N}_r$ can be shown to
own none of the points in $\mathscr{N}_q$. If we have cached an upper bound $\operatorname{ub}(\mathscr{N}_q)$
on the distance between any descendant point of $\mathscr{N}_q$ and its nearest cluster
centroid that satisfies
\vspace*{-1.0em}
\begin{equation}
\operatorname{ub}(\mathscr{N}_q) \ge \max_{p_q \in \mathscr{D}^p_q} d(p_q,
c_q)
\end{equation}
\vspace*{-1.0em}
\noindent where $c_q$ is the cluster centroid nearest to point $p_q$, then the
node $\mathscr{N}_r$ can contain no centroids that own any descendant points of $\mathscr{N}_q$ if
\vspace*{-1.0em}
\begin{equation}
d_{\min}(\mathscr{N}_q, \mathscr{N}_r) > \operatorname{ub}(\mathscr{N}_q).
\label{eqn:prune}
\end{equation}
\vspace*{-1.5em}
This relation bears similarity to the pruning rules for nearest neighbor search
\cite{curtin2013tree} and max-kernel search \cite{curtin2014dual}. Figure
\ref{fig:prune-1} shows a situation where $\mathscr{N}_r$ can be pruned; in this case,
ball-shaped tree nodes are used, and the upper bound $\operatorname{ub}(\mathscr{N}_q)$ is set to
$d_{\max}(\mathscr{N}_q, \mathscr{N}_{r2})$.
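For ball-shaped nodes as in Figure \ref{fig:prune-1}, the bound $d_{\min}(\mathscr{N}_q, \mathscr{N}_r)$ and the prune test of Equation \ref{eqn:prune} reduce to simple arithmetic on centers and radii. A hedged sketch, assuming each node is represented as a (center, radius) pair:

```python
import math

def d_min(node_a, node_b):
    """Lower bound on the distance between any descendant point of
    node_a and any descendant point of node_b, for ball-shaped nodes."""
    (ca, ra), (cb, rb) = node_a, node_b
    return max(0.0, math.dist(ca, cb) - ra - rb)

def d_max(node_a, node_b):
    """Upper bound on that same distance (used e.g. to set ub(N_q))."""
    (ca, ra), (cb, rb) = node_a, node_b
    return math.dist(ca, cb) + ra + rb

def can_prune(node_q, node_r, ub_q):
    """Strategy one: prune (N_q, N_r) when d_min(N_q, N_r) > ub(N_q)."""
    return d_min(node_q, node_r) > ub_q
```

Other node shapes (e.g. the hyperrectangles of a $kd$-tree) need only their own $d_{\min}$ and $d_{\max}$; the prune test itself is unchanged.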
{\bf Strategy two.} The recursion down a particular branch of the query tree
should
terminate early if we can determine that only one cluster can possibly own all
of the descendant points of that branch. This is related to the first strategy.
If we have been caching the number of pruned centroids (call this
$\operatorname{pruned}(\mathscr{N}_q)$), as well as the identity of any
arbitrary non-pruned centroid (call this $\operatorname{closest}(\mathscr{N}_q)$), then if
$\operatorname{pruned}(\mathscr{N}_q) = k - 1$, we may conclude that the
centroid $\operatorname{closest}(\mathscr{N}_q)$ is the owner of all descendant
points of $\mathscr{N}_q$, and there is no need for further recursion in
$\mathscr{N}_q$.
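This strategy amounts to a constant-time check maintained during traversal; a sketch (names are illustrative):

```python
def owner_if_decided(pruned_count, closest, k):
    """Strategy two: if k - 1 centroids have been pruned for N_q, the
    single remaining centroid (tracked as closest(N_q)) owns every
    descendant point, and the recursion below N_q can stop."""
    return closest if pruned_count == k - 1 else None
```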
{\bf Strategy three.} The traversal should not visit nodes whose owner could not
have possibly changed between iterations; that is, the tree should be coalesced
to include only nodes whose owners may have changed.
There are two easy ways to use the triangle inequality to show that the owner of
a point cannot change between iterations. Figures \ref{fig:prune-2} and
\ref{fig:prune-3} show the
first: we have a point $p_q$ with owner $c_j$ and second-closest
centroid $c_k$. Between iterations, each centroid will move when it is
recalculated; define the distance that centroid $c_i$ has moved as
$m_i$. Then we bound the distances for the next
iteration: $d(p_q, c_j) + m_j$ is an upper bound on the distance from $p_q$
to its owner next iteration, and $d(p_q, c_k) - \max_i m_i$ is a lower bound on
the distance from $p_q$ to its second closest centroid next iteration. We
may use these bounds to conclude that if
\vspace*{-1.2em}
\begin{equation}
d(p_q, c_j) + m_j < d(p_q, c_k) - \max_i m_i,
\end{equation}
\vspace*{-1.6em}
\noindent then the owner of $p_q$ next iteration must be $c_j$. Generalizing
from individual points $p_q$ to tree nodes $\mathscr{N}_q$ is easy.
This pruning strategy can only be used when all descendant points of
$\mathscr{N}_q$ are owned by a single centroid, and in order to perform the
prune, we need to establish a lower bound on the distance between any
descendant point of the node $\mathscr{N}_q$ and the second closest centroid.
Call this bound $\operatorname{lb}(\mathscr{N}_q)$. Remember that
$\operatorname{ub}(\mathscr{N}_q)$ provides an upper bound on the distance
between any descendant point of $\mathscr{N}_q$ and its nearest centroid. Then,
if all descendant points of $\mathscr{N}_q$ are owned by some cluster $c_j$ in
one iteration, and
\vspace*{-1.4em}
\begin{equation}
\operatorname{ub}(\mathscr{N}_q) + m_j < \operatorname{lb}(\mathscr{N}_q) -
\max_i m_i,
\label{eqn:static-1}
\end{equation}
\vspace*{-1.4em}
\noindent then $\mathscr{N}_q$ is owned by cluster $c_j$ in the next iteration.
Implementationally, it is convenient to have $\operatorname{lb}(\mathscr{N}_q)$ store a
lower bound on the distance between any descendant point of $\mathscr{N}_q$ and
the nearest pruned centroid. Then, if $\mathscr{N}_r$ is entirely owned by one
cluster, all other centroids are pruned, and $\operatorname{lb}(\mathscr{N}_q)$
holds the necessary lower bound for pruning according to the rule above.
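The rule of Equation \ref{eqn:static-1} is likewise a constant-time test once the bounds have been cached; a sketch with illustrative names:

```python
def owner_cannot_change(ub_q, lb_q, m_owner, m_max):
    """Strategy three (first rule): if all descendant points of N_q are
    owned by cluster c_j this iteration, and
        ub(N_q) + m_j < lb(N_q) - max_i m_i,
    then c_j still owns every descendant point next iteration, so N_q
    can be removed from the coalesced tree for the next traversal."""
    return ub_q + m_owner < lb_q - m_max
```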
The second way to use the triangle inequality to show that an owner cannot
change depends on the distances between centroids. Suppose that $p_q$ is
owned by $c_j$ at the current iteration; then, if
\vspace*{-1.3em}
\begin{equation}
d(p_q, c_j) + m_j < 2 \left( \min_{c_i \in C, c_i \ne c_j} d(c_i, c_j) \right)
\end{equation}
\vspace*{-1.3em}
\noindent then $c_j$ will own $p_q$ next iteration \cite{elkan2003using}. We
may adapt this rule to tree nodes $\mathscr{N}_q$ in the same way as the previous rule;
if $\mathscr{N}_q$ is owned by cluster $c_j$ during this iteration and
\vspace*{-1.3em}
\begin{equation}
\operatorname{ub}(\mathscr{N}_q) + m_j < 2 \left( \min_{c_i \in C, c_i \ne c_j}
d(c_i, c_j) \right)
\label{eqn:static-2}
\end{equation}
\vspace*{-1.3em}
\noindent then $\mathscr{N}_q$ is owned by cluster $c_j$ in the next iteration.
Note that the above rules do work with individual points $p_q$ instead of nodes
$\mathscr{N}_q$ if we have a valid upper bound $\operatorname{ub}(p_q)$ and a
valid lower bound $\operatorname{lb}(p_q)$. Any nodes or points that satisfy
the above conditions do not need to be visited during the next iteration, and
can be removed from the tree for the next iteration.
{\bf Strategy four.} The traversal should use bounding information from previous
iterations; for instance, $\operatorname{ub}(\mathscr{N}_q)$ should not be reset
to $\infty$ at the beginning of each iteration. Between iterations, we may
update $\operatorname{ub}(\mathscr{N}_q)$, $\operatorname{ub}(p_q)$,
$\operatorname{lb}(\mathscr{N}_q)$, and $\operatorname{lb}(p_q)$ according to
the following rules:
\vspace*{-1.5em}
\begin{eqnarray}
\operatorname{ub}(\mathscr{N}_q) &\gets&
\begin{cases}
\operatorname{ub}(\mathscr{N}_q) + m_j & \text{if } \mathscr{N}_q \text{ is owned by a single cluster } c_j \\
\operatorname{ub}(\mathscr{N}_q) + \max_i m_i & \text{otherwise,}
\end{cases} \label{eqn:special} \\
\operatorname{ub}(p_q) &\gets& \operatorname{ub}(p_q) + m_j, \\
\operatorname{lb}(\mathscr{N}_q) &\gets& \operatorname{lb}(\mathscr{N}_q) -
\max_i m_i, \\
\operatorname{lb}(p_q) &\gets& \operatorname{lb}(p_q) - \max_i m_i.
\end{eqnarray}
\vspace*{-1.0em}
Special handling is required when descendant points of $\mathscr{N}_q$
are not owned by a single centroid (Equation \ref{eqn:special}). It is also
true that for a child node $\mathscr{N}_c$ of $\mathscr{N}_q$, $\operatorname{ub}(\mathscr{N}_q)$ is a valid upper bound
for $\mathscr{N}_c$ and $\operatorname{lb}(\mathscr{N}_q)$ is a valid lower bound for $\mathscr{N}_c$: that is, the upper
and lower bounds may be taken from a parent, and they are still valid.
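A minimal sketch of these inter-iteration updates, assuming a node object with \texttt{ub} and \texttt{lb} fields (our own naming):

```python
def update_node_bounds(node, m, owner_j=None):
    """Apply the bound-update rules between iterations.

    m: list of per-centroid movement distances m_i.
    owner_j: index of the single owning centroid, or None if the
    node's descendant points are not owned by a single cluster."""
    max_m = max(m)
    if owner_j is not None:
        node.ub += m[owner_j]   # owner moved at most m_j
    else:
        node.ub += max_m        # mixed ownership: worst-case drift
    node.lb -= max_m            # lower bound is always loosened
```

Point-level bounds are updated identically, with the point's own owner index.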
\vspace*{-0.6em}
\section{The dual-tree $k$-means algorithm}
\label{sec:algorithm}
\vspace*{-0.3em}
These four pruning strategies lead to a high-level $k$-means algorithm,
described in Algorithm \ref{alg:high_level}. During the course of this
algorithm, to implement each of our pruning strategies, we will need to maintain
the following quantities:
\vspace*{-1.0em}
\begin{itemize} \itemsep -1.5pt
\item $\operatorname{ub}(\mathscr{N}_q)$: an upper bound on the distance
between any descendant point of a node $\mathscr{N}_q$ and the nearest centroid
to that point.
\item $\operatorname{lb}(\mathscr{N}_q)$: a lower bound on the distance
between any descendant point of a node $\mathscr{N}_q$ and the nearest pruned
centroid.
\item $\operatorname{pruned}(\mathscr{N}_q)$: the number of centroids pruned
during traversal for $\mathscr{N}_q$.
\item $\operatorname{closest}(\mathscr{N}_q)$: if $\operatorname{pruned}(\mathscr{N}_q) = k - 1$, this
holds the owner of all descendant points of $\mathscr{N}_q$.
\item $\operatorname{canchange}(\mathscr{N}_q)$: whether or not
$\mathscr{N}_q$ can change owners next iteration.
\item $\operatorname{ub}(p_q)$: an upper bound on the distance between point
$p_q$ and its nearest centroid.
\item $\operatorname{lb}(p_q)$: a lower bound on the distance between point
$p_q$ and its second nearest centroid.
\item $\operatorname{closest}(p_q)$: the closest centroid to $p_q$ (this is
also the owner of $p_q$).
\item $\operatorname{canchange}(p_q)$: whether or not $p_q$ can change owners
next iteration.
\end{itemize}
\vspace*{-0.8em}
At the beginning of the algorithm, each upper bound is initialized to $\infty$,
each lower bound is initialized to $\infty$, $\operatorname{pruned}(\cdot)$
is initialized to $0$ for each node, and
$\operatorname{closest}(\cdot)$ is initialized to an invalid centroid for each
node and point. $\operatorname{canchange}(\cdot)$ is set to {\tt
true} for each node and point. Thus line
6 does nothing on the first iteration.
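For concreteness, the per-node state described above can be kept in a small record, initialized exactly as stated (the class and field names here are ours):

```python
import math
from dataclasses import dataclass

@dataclass
class NodeState:
    ub: float = math.inf    # upper bound: descendant point to nearest centroid
    lb: float = math.inf    # lower bound: descendant point to nearest pruned centroid
    pruned: int = 0         # number of centroids pruned for this node
    closest: int = -1       # owner index; -1 stands for an invalid centroid
    canchange: bool = True  # may this node change owners next iteration?
```

Per-point state is the same minus the \texttt{pruned} counter.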
\setlength{\textfloatsep}{0.4em}
\begin{algorithm}[t!]
\begin{algorithmic}[1]
\STATE {\bf Input:} dataset $S \in \mathcal{R}^{N \times d}$, initial
centroids $C \in \mathcal{R}^{k \times d}$.
\STATE {\bf Output:} converged centroids $C$.
\medskip
\STATE $\mathscr{T} \gets$ a tree built on $S$
\WHILE{centroids $C$ not converged}
\STATE \COMMENT{Remove nodes in the tree if possible.}
\STATE $\mathscr{T} \gets \mathtt{CoalesceTree(}\mathscr{T}\mathtt{)}$
\STATE $\mathscr{T}_c \gets$ a tree built on $C$
\medskip
\STATE \COMMENT{Call dual-tree algorithm.}
\STATE Perform a dual-tree recursion with $\mathscr{T}$, $\mathscr{T}_c$,
\texttt{BaseCase()}, and \texttt{Score()}.
\medskip
\STATE \COMMENT{Restore the tree to its non-coalesced form.}
\STATE $\mathscr{T} \gets \mathtt{DecoalesceTree(}\mathscr{T}\mathtt{)}$
\medskip
\STATE \COMMENT{Update centroids and bounding information.}
\STATE $C \gets \mathtt{UpdateCentroids(}\mathscr{T}\mathtt{)}$
\STATE $\mathscr{T} \gets \mathtt{UpdateTree(}\mathscr{T}\mathtt{)}$
\ENDWHILE
\STATE {\bf return} $C$
\end{algorithmic}
\caption{High-level outline of dual-tree $k$-means.}
\label{alg:high_level}
\end{algorithm}
First, consider the dual-tree algorithm called on line
9. As detailed earlier, we can describe a dual-tree
algorithm as a combination of tree type, traversal, and point-to-point
\texttt{BaseCase()} and node-to-node \texttt{Score()} functions. Thus, we
need only present \texttt{BaseCase()} (Algorithm \ref{alg:base_case}) and
\texttt{Score()} (Algorithm \ref{alg:score})\footnote{In these algorithms, we
assume that any point present in a node $\mathscr{N}_i$ will also be present in at least
one child $\mathscr{N}_c \in \mathscr{C}_i$. It is possible to fully
generalize to any tree type, but the exposition is significantly more complex,
and our assumption covers most standard tree types anyway.}.
The \texttt{BaseCase()} function is simple: given a point $p_q$ and a
centroid $c_r$, the distance $d(p_q, c_r)$ is calculated; $\operatorname{ub}(p_q)$,
$\operatorname{lb}(p_q)$, and $\operatorname{closest}(p_q)$ are updated if needed.
\texttt{Score()} is more complex. The first stanza (lines 4--6) takes the
values of $\operatorname{pruned}(\cdot)$ and $\operatorname{lb}(\cdot)$ from the parent node of $\mathscr{N}_q$; this
is necessary to prevent $\operatorname{pruned}(\cdot)$ from undercounting. Next, we prune if
the owner of $\mathscr{N}_q$ is already
known (line 7). If the minimum distance between any descendant point of $\mathscr{N}_q$
and any descendant centroid of $\mathscr{N}_r$ is greater than $\operatorname{ub}(\mathscr{N}_q)$,
then we may prune the combination (line 16). In that case we may also improve
the lower bound (line 14). Note the special handling in line 15: our definition
of tree allows points to be held in more than one node; thus, we must avoid
double-counting clusters that we prune.\footnote{For trees like the $kd$-tree
and the metric tree, which do not hold points in more than one node, no special
handling is required: we will never prune a cluster twice for a given query node
$\mathscr{N}_q$.} If the node combination cannot be pruned in this way, an attempt is
made to update the upper bound (lines 17--20). Instead of using $d_{\max}(\mathscr{N}_q,
\mathscr{N}_r)$, we may use a tighter upper bound: select any
descendant centroid $c$ from $\mathscr{N}_r$ and use $d_{\max}(\mathscr{N}_q, c)$. This still
provides a valid upper bound, and in practice is generally smaller than
$d_{\max}(\mathscr{N}_q, \mathscr{N}_r)$. We simply set $\operatorname{closest}(\mathscr{N}_q)$ to $c$ (line 20);
$\operatorname{closest}(\mathscr{N}_q)$ only holds the owner of $\mathscr{N}_q$ if all centroids
except one are pruned---in which case the owner {\it must} be $c$.
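The \texttt{BaseCase()} logic translates almost line for line into Python; here \texttt{info} is a bookkeeping object for $p_q$ with fields \texttt{ub}, \texttt{lb}, and \texttt{closest} (a sketch, not mlpack's implementation):

```python
def base_case(p_q, c_r, d, info):
    """Python rendering of BaseCase() (Algorithm 2).

    p_q: query point; c_r: a centroid; d: the metric;
    info: bookkeeping for p_q with fields ub, lb, closest."""
    dist = d(p_q, c_r)
    if dist < info.ub:
        info.lb = info.ub       # previous best becomes the second-best bound
        info.ub = dist
        info.closest = c_r
    elif dist < info.lb:
        info.lb = dist
    return dist
```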
\setlength{\textfloatsep}{0.4em}
\begin{algorithm}[t!]
\begin{algorithmic}[1]
\STATE {\bf Input:} query point $p_q$, reference centroid $c_r$
\STATE {\bf Output:} distance between $p_q$ and $c_r$
\medskip
\IF{$d(p_q, c_r) < \operatorname{ub}(p_q)$}
\STATE $\operatorname{lb}(p_q) \gets \operatorname{ub}(p_q)$
\STATE $\operatorname{ub}(p_q) \gets d(p_q, c_r)$
\STATE $\operatorname{closest}(p_q) \gets c_r$
\ELSIF{$d(p_q, c_r) < \operatorname{lb}(p_q)$}
\STATE $\operatorname{lb}(p_q) \gets d(p_q, c_r)$
\ENDIF
\medskip
\STATE {\bf return} $d(p_q, c_r)$
\end{algorithmic}
\caption{\texttt{BaseCase()} for dual-tree $k$-means.}
\label{alg:base_case}
\end{algorithm}
\begin{algorithm}[t!]
\begin{algorithmic}[1]
\STATE {\bf Input:} query node $\mathscr{N}_q$, reference node $\mathscr{N}_r$
\STATE {\bf Output:} score for node combination $(\mathscr{N}_q,
\mathscr{N}_r)$, or $\infty$ if the combination can be pruned
\medskip
\STATE \COMMENT{Update the number of pruned nodes, if needed.}
\IF{$\mathscr{N}_q$ not yet visited and is not the root node}
\STATE $\operatorname{pruned}(\mathscr{N}_q) \gets
\operatorname{pruned}(\operatorname{parent}(\mathscr{N}_q))$
\STATE $\operatorname{lb}(\mathscr{N}_q) \gets
\operatorname{lb}(\operatorname{parent}(\mathscr{N}_q))$
\ENDIF
\STATE{{\bf if} $\operatorname{pruned}(\mathscr{N}_q) = k - 1$ {\bf then return} $\infty$}
\medskip
\STATE $s \gets d_{\min}(\mathscr{N}_q, \mathscr{N}_r)$
\STATE $c \gets$ any descendant cluster centroid of $\mathscr{N}_r$
\IF{$d_{\min}(\mathscr{N}_q, \mathscr{N}_r) >
\operatorname{ub}(\mathscr{N}_q)$}
\STATE \COMMENT{This cluster node owns no descendant points.}
\IF{$d_{\min}(\mathscr{N}_q, \mathscr{N}_r) <
\operatorname{lb}(\mathscr{N}_q)$}
\STATE \COMMENT{Improve the lower bound for pruned nodes.}
\STATE $\operatorname{lb}(\mathscr{N}_q) \gets d_{\min}(\mathscr{N}_q,
\mathscr{N}_r)$
\ENDIF
\STATE $\operatorname{pruned}(\mathscr{N}_q) \mathrel{+}= |\mathscr{D}^p_r \setminus \{ \textrm{clusters
already pruned} \}|$
\STATE $s \gets \infty$
\medskip
\ELSIF{$d_{\max}(\mathscr{N}_q, c) <
\operatorname{ub}(\mathscr{N}_q)$}
\STATE \COMMENT{We may improve the upper bound.}
\STATE $\operatorname{ub}(\mathscr{N}_q) \gets d_{\max}(\mathscr{N}_q, c)$
\STATE $\operatorname{closest}(\mathscr{N}_q) \gets c$
\ENDIF
\medskip
\STATE \COMMENT{Check if all clusters (except one) are pruned.}
\STATE {\bf if} $\operatorname{pruned}(\mathscr{N}_q) = k - 1$ {\bf then return} $\infty$
\medskip
\STATE {\bf return} $s$
\end{algorithmic}
\caption{\texttt{Score()} for dual-tree $k$-means.}
\label{alg:score}
\end{algorithm}
\begin{algorithm}[t!]
\begin{algorithmic}[1]
\STATE {\bf Input:} tree $\mathscr{T}$ built on dataset $S$
\STATE {\bf Output:} new centroids $C$
\medskip
\STATE $C := \{ c_0, \ldots, c_{k - 1} \} \gets \bm{0}^{k \times d}$; \ $n
\gets \bm{0}^k$
\medskip
\STATE \COMMENT{$s$ is a stack.}
\STATE $s \gets \{ \operatorname{root}(\mathscr{T}) \}$
\WHILE{$|s| > 0$}
\STATE $\mathscr{N}_i \gets s\mathtt{.pop()}$
\IF{$\operatorname{pruned}(\mathscr{N}_i) = k - 1$}
\STATE \COMMENT{The node is entirely owned by a cluster.}
\STATE $j \gets$ index of $\operatorname{closest}(\mathscr{N}_i)$
\STATE $c_j \gets c_j + |\mathscr{D}^p_i|
\operatorname{centroid}(\mathscr{N}_i)$
\STATE $n_j \gets n_j + |\mathscr{D}^p_i|$
\ELSE
\STATE \COMMENT{The node is not entirely owned by a cluster.}
\STATE {{\bf if} $|\mathscr{C}_i| > 0$ {\bf then}
$s\mathtt{.push(}\mathscr{C}_i\mathtt{)}$}
\STATE {\bf else}
\STATE {\ \ \ \ {\bf for} $p_i \in \mathscr{P}_i$ not yet considered}
\STATE \ \ \ \ \ \ \ $j \gets$ index of $\operatorname{closest}(p_i)$
\STATE \ \ \ \ \ \ \ $c_j \gets c_j + p_i$; \ \ $n_j \gets n_j + 1$
\ENDIF
\ENDWHILE
\medskip
\STATE{{\bf for} $c_i \in C${\bf, if} $n_i > 0$ {\bf then} $c_i \gets c_i /
n_i$}
\STATE {\bf return} $C$
\end{algorithmic}
\caption{\texttt{UpdateCentroids()}.}
\label{alg:update_centroids}
\end{algorithm}
Thus, at the end of the dual-tree algorithm, we know the owner of every node (if
it exists) via $\operatorname{closest}(\cdot)$ and $\operatorname{pruned}(\cdot)$, and we know the owner of
every point via $\operatorname{closest}(\cdot)$. With the owners known, the
centroids for the next iteration can be recomputed; a simple algorithm for this
is given here as Algorithm \ref{alg:update_centroids}
(\texttt{UpdateCentroids()}); it
is a depth-first recursion through the tree that terminates a branch when a node
is owned by a single cluster.
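A compact Python analogue of \texttt{UpdateCentroids()} is sketched below; for simplicity it assumes each point lives in exactly one leaf, so Algorithm \ref{alg:update_centroids}'s ``not yet considered'' bookkeeping is omitted:

```python
import numpy as np

def update_centroids(root, k, dim):
    """Recompute centroids by a depth-first pass, terminating a branch
    when a node is wholly owned by one cluster (pruned == k - 1)."""
    centroids = np.zeros((k, dim))
    counts = np.zeros(k)
    stack = [root]
    while stack:
        node = stack.pop()
        if node.pruned == k - 1:          # node entirely owned by one cluster
            j = node.closest
            centroids[j] += node.num_descendants * node.mean
            counts[j] += node.num_descendants
        elif node.children:               # recurse into children
            stack.extend(node.children)
        else:                             # leaf: accumulate point by point
            for p, owner in zip(node.points, node.owners):
                centroids[owner] += p
                counts[owner] += 1
    mask = counts > 0
    centroids[mask] /= counts[mask][:, None]
    return centroids
```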
Next is updating the bounds in the tree and determining if nodes and
points can change owners next iteration; this work is encapsulated in the
\texttt{UpdateTree()} algorithm, which is an implementation of strategies 3 and
4 (see the appendix for details). Once
\texttt{UpdateTree()} sets the correct value of $\operatorname{canchange}(\cdot)$ for every
point and node, we coalesce the tree for the next iteration with the
\texttt{CoalesceTree()} function. Coalescing the tree is straightforward:
we simply remove from the tree any node where $\operatorname{canchange}(\cdot)$
is \texttt{false}, leaving a smaller tree that contains only nodes whose
owners may still change.
Decoalescing the tree (\texttt{DecoalesceTree()}) is done by restoring
the tree to its original state. See the appendix for more details.
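Coalescing and decoalescing can be sketched as follows, where each node carries a \texttt{canchange} flag and a \texttt{children} list (the \texttt{saved\_children} attribute is our own device for restoring the tree):

```python
class TreeNode:
    """Minimal stand-in for a tree node (our own scaffolding)."""
    def __init__(self, canchange, children=None):
        self.canchange = canchange
        self.children = children if children is not None else []

def coalesce(node):
    """Sketch of CoalesceTree(): drop every subtree whose root has
    canchange == False; the original child list is saved so that
    decoalesce() can restore it."""
    if not node.canchange:
        return None
    node.saved_children = node.children
    node.children = [c for c in (coalesce(kid) for kid in node.children)
                     if c is not None]
    return node

def decoalesce(node):
    """Sketch of DecoalesceTree(): restore the saved child lists."""
    if hasattr(node, "saved_children"):
        node.children = node.saved_children
        for child in node.children:
            decoalesce(child)
    return node
```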
\vspace*{-0.7em}
\section{Theoretical results}
\label{sec:theory}
\vspace*{-0.4em}
Space constraints allow us to only provide proof sketches for the first two
theorems here. Detailed proofs are given in the appendix.
\begin{thm}
A single iteration of dual-tree $k$-means as given in Algorithm
\ref{alg:high_level} will produce exactly the same results as the
brute-force $O(kN)$ implementation.
\end{thm}
\vspace*{-1.7em}
\begin{proof}
(Sketch.) First, we show that the dual-tree algorithm (line 9) produces correct
results for $\operatorname{ub}(\cdot)$, $\operatorname{lb}(\cdot)$, $\operatorname{pruned}(\cdot)$, and $\operatorname{closest}(\cdot)$
for every point and node. Next, we show that \texttt{UpdateTree()} maintains
the correctness of those four quantities and only marks $\operatorname{canchange}(\cdot)$ to
\texttt{false} when the node or point truly cannot change owner. Next, it is
easily shown that \texttt{CoalesceTree()} and \texttt{DecoalesceTree()} do not
affect the results of the dual-tree algorithm because the only nodes and points
removed are those where $\operatorname{canchange}(\cdot) = \mathtt{false}$. Lastly, we show
that \texttt{UpdateCentroids()} produces centroids correctly.
\end{proof}
\vspace*{-1.0em}
Next, we consider the runtime of the algorithm. Our results are with respect to
the {\it expansion constant} $c_k$ of the centroids \cite{langford2006}, which
is a measure of intrinsic dimension. $c_{qk}$ is a related quantity: the
largest expansion constant of $C \cup \{ p_q \}$ over any point $p_q$ in the dataset. Our
results also depend on the imbalance of the tree $i_t(\mathscr{T})$, which in
practice generally scales linearly in $N$ \cite{curtin2015plug}. As with the
other theoretical results, more detail on each of these quantities is available
in the appendix.
\begin{thm}
When cover trees are used, a single iteration of dual-tree $k$-means as in
Algorithm \ref{alg:high_level} can be performed in $O(c_k^4 c_{qk}^5 (N +
i_t(\mathscr{T})) + c_k^9 k \log k)$ time.
\end{thm}
\vspace*{-1.0em}
\begin{proof}
(Sketch.) Cover trees have $O(N)$ nodes \cite{langford2006}; because
\texttt{CoalesceTree()}, \texttt{DecoalesceTree()}, \texttt{UpdateCentroids()},
and \texttt{UpdateTree()} can be performed in one pass of the tree, these steps
may each be completed in $O(N)$ time. Building a tree on the centroids takes
$O(c_k^6 k \log k)$ time, where $c_k$ is the expansion constant of the
centroids. Recent results show that dual-tree algorithms that use the cover
tree may have their runtime easily bounded \cite{curtin2015plug}. We may
observe that our pruning rules are at least as tight as nearest neighbor search;
this means that the dual-tree algorithm (line 9) may be performed in
$O(c_{qk}^9 (N + i_t(\mathscr{T})))$ time. Also, we must perform nearest
neighbor search on the centroids, which costs $O(c_k^9 (k +
i_t(\mathscr{T_c})))$ time. This gives a total per-iteration runtime of
$O(c_{qk}^9 (N + i_t(\mathscr{T})) + c_k^6 k \log k + c_k^9
i_t(\mathscr{T}_c))$.
\end{proof}
\vspace*{-1.0em}
This result holds intuitively. By building a tree on the centroids, we are able
to prune many centroids at once, and as a result the amortized cost of finding
the nearest centroid to a point is $O(1)$. This meshes with earlier theoretical
results \cite{langford2006, curtin2015plug, ram2009} and earlier empirical
results \cite{gray2003nonparametric, gray2001nbody} that suggest that an answer
can be obtained for a single query point in $O(1)$ time. Note that this
worst-case bound depends on the intrinsic dimension (the expansion constant)
of the centroids, $c_k$, and the related quantity $c_{qk}$. If the intrinsic
dimension of the centroids is low---that is, if the centroids are distributed
favorably---the dual-tree algorithm will be more efficient.
However, this bound is generally quite loose in practice. First, runtime bounds
for cover trees are known to be loose \cite{curtin2015plug}. Second, this
particular bound does not consider the effect of coalescing the tree.
In any given iteration, especially toward the end of the $k$-means
clustering, most points will have $\operatorname{canchange}(\cdot) =
\mathtt{false}$ and thus the coalesced tree
will be far smaller than the full tree built on all $N$ points.
\begin{thm}
Algorithm \ref{alg:high_level} uses no more than $O(N + k)$ memory when cover
trees are used.
\end{thm}
\vspace*{-1.0em}
\begin{proof}
This proof is straightforward. A cover tree on $N$ points takes $O(N)$
space. So the trees and associated bounds take $O(N)$ and $O(k)$ space. Also,
the dataset and centroids take $O(N)$ and $O(k)$ space.
\end{proof}
\vspace*{-1.3em}
\section{Experiments}
\label{sec:empirical}
\vspace*{-0.3em}
\setlength{\textfloatsep}{1.2em}
\begin{table}[t!]
{\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& & & \multicolumn{2}{|c|}{\bf tree build time} \\
{\bf Dataset} & $N$ & $d$ & $kd$-tree & cover tree \\
\hline
cloud & 2048 & 10 & 0.001s & 0.005s \\
cup98b & 95413 & 56 & 1.640s & 32.41s \\
birch3 & 100000 & 2 & 0.037s & 2.125s \\
phy & 150000 & 78 & 4.138s & 22.99s \\
power & 2075259 & 7 & 7.342s & 1388s \\
lcdm & 6000000 & 3 & 4.345s & 6214s \\
\hline
\end{tabular}
\end{center}
}
\vspace*{-1.0em}
\caption{Dataset information.}
\label{tab:datasets}
\end{table}
\begin{table*}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|r|l|l|l|l|l|l|}
\hline
& & & \multicolumn{6}{|c|}{\bf avg. per-iteration runtime (distance
calculations)} \\
{\bf dataset} & $k$ & {\bf iter.} & {\tt elkan} & {\tt hamerly} & {\tt yinyang} & {\tt blacklist} & {\tt dualtree-kd} & {\tt dualtree-ct}
\\
\hline
cloud & 3 & 8 & 1.50e-4s (867) & 1.11e-4s (1.01k) & 1.11e-1s (2.00k) & {\bf 4.68e-5s} (302) & 1.27e-4s ({\bf 278}) & 2.77e-4s (443) \\
cloud & 10 & 14 & 2.09e-4s ({\bf 1.52k}) & 1.92e-4s (4.32k) & 7.66e-2s (9.55k) & {\bf 1.55e-4s} (2.02k) & 3.69e-4s (1.72k) & 5.36e-4s (2.90k) \\
cloud & 50 & 19 & 5.87e-4s ({\bf 2.57k}) & {\bf 5.30e-4s} (21.8k) & 9.66e-3s (15.6k) & 8.20e-4s (12.6k) & 1.23e-3s (5.02k) & 1.09e-3s (9.84k) \\
\hline
cup98b & 50 & 224 & 0.0445s ({\bf 25.9k}) & 0.0557s (962k) & 0.0465s (313k) & {\bf 0.0409s} (277k) & 0.0955s (254k) & 0.1089s (436k) \\
cup98b & 250 & 168 & 0.1972s ({\bf 96.8k}) & 0.4448s (8.40M) & {\bf 0.1417s} (898k) & 0.2033s (1.36M) & 0.4585s (1.38M) & 0.3237s (2.73M) \\
cup98b & 750 & 116 & 1.1719s ({\bf 373k}) & 1.8778s (36.2M) & {\bf 0.2653s} (1.26M) & 0.6365s (4.11M) & 1.2847s (4.16M) & 0.8056s (81.4M) \\
\hline
birch3 & 50 & 129 & 0.0194s ({\bf 24.2k}) & 0.0093s (566k) & 0.0378s (399k) & {\bf 0.0030s} (42.7k) & 0.0082s (37.4k) & 0.0378s (67.9k) \\
birch3 & 250 & 812 & 0.0895s ({\bf 42.8k}) & 0.0314s (2.59M) & 0.0711s (239k) & {\bf 0.0164s} (165k) & 0.0183s (79.7k) & 0.0485s (140k) \\
birch3 & 750 & 373 & 0.3253s (292k) & 0.0972s (8.58M) & 0.1423s (476k) & 0.0554s (450k) & {\bf 0.02989s} ({\bf 126k}) & 0.0581s (235k) \\
\hline
phy & 50 & 34 & 0.0668s (82.3k) & 0.1064s (1.38M) & 0.1072s (808k) & {\bf 0.0081s} ({\bf 33.0k}) & 0.02689s (67.8k) & 0.0945s (188k) \\
phy & 250 & 38 & 0.1627s (121k) & 0.4634s (6.83M) & 0.2469s (2.39M) & {\bf 0.0249s} (104k) & 0.0398s ({\bf 90.4k}) & 0.1023s (168k) \\
phy & 750 & 35 & 0.7760s ({\bf 410k}) & 2.9192s (43.8M) & 0.6418s (5.61M) & {\bf 0.2478s} (1.19M) & 0.2939s (1.10M) & 0.3330s (1.84M) \\
\hline
power & 25 & 4 & 0.3872s (2.98M) & 0.2880s (12.9M) & 1.1257s (33.5M) & {\bf 0.0301s} (216k) & 0.0950s ({\bf 87.4k}) & 0.6658s (179k) \\
power & 250 & 101 & 2.6532s (425k) & 0.1868s (7.83M) & 1.2684s (10.3M) & 0.1504s (1.13M) & {\bf 0.1354s} ({\bf 192k}) & 0.6405s (263k) \\
power & 1000& 870 & {\it out of memory} & 6.2407s (389M) & 4.4261s (9.41M) & 0.6657s (2.98M) & {\bf 0.4115s} ({\bf 1.57M}) & 1.1799s (4.81M) \\
power & 5000& 504 & {\it out of memory} & 29.816s (1.87B) & 22.7550s (58.6M) & 4.1597s (11.7M) & {\bf 1.0580s} ({\bf 3.85M}) & 1.7070s (12.3M) \\
power & 15000& 301 & {\it out of memory} & 111.74s (6.99B) & {\it out of memory} & {\it out of memory} & {\bf 2.3708s} ({\bf 8.65M}) & 2.9472s (30.9M) \\
\hline
lcdm & 500 & 507 & {\it out of memory} & 6.4084s (536M) & 8.8926s (44.5M) & 0.9347s (4.20M) & {\bf 0.7574s} ({\bf 3.68M}) & 2.9428s (7.03M) \\
lcdm & 1000& 537 & {\it out of memory} & 16.071s (1.31B) & 18.004s (74.7M) & 2.0345s (5.93M) & {\bf 0.9827s} ({\bf 5.11M}) & 3.3482s (10.0M) \\
lcdm & 5000& 218 & {\it out of memory} & 64.895s (5.38B) & {\it out of memory} & 12.909s (16.2M) & {\bf 1.8972s} ({\bf 8.54M}) & 3.9110s (19.0M) \\
lcdm &20000& 108 & {\it out of memory} & 298.55s (24.7B) & {\it out of memory} & {\it out of memory} & {\bf 4.1911s} ({\bf 17.8M}) & 5.5771s (43.2M) \\
\hline
\end{tabular}
}
\end{center}
\vspace*{-1.0em}
\caption{Empirical results for $k$-means.}
\label{tab:runtime}
\vspace*{-1.0em}
\end{table*}
The next thing to consider is the empirical performance of the algorithm. We
use the publicly available \texttt{kmeans} program in {\bf mlpack}
\cite{mlpack2013}; in our experiments, we run it as follows:
\vspace*{-0.5em}
\begin{verbatim}
$ kmeans -i dataset.csv -I centroids.csv \
    -c $k -v -e -a $algorithm
\end{verbatim}
\vspace*{-0.5em}
\noindent where \texttt{\$k} is the number of clusters and \texttt{\$algorithm}
is the algorithm to be used. Each algorithm is implemented in C++. For the
{\tt yinyang} algorithm, we use the authors' implementation. We use a variety
of $k$ values on mostly real-world datasets; details are shown in Table
\ref{tab:datasets} \cite{uci, birch3, lcdm}. The table also contains the time
taken to build a $kd$-tree (for \texttt{blacklist} and \texttt{dualtree-kd}) and
a cover tree (for \texttt{dualtree-ct}). Cover trees are far more complex to
build than $kd$-trees; this explains the long cover tree build time. Even so,
the tree only needs to be built once during the $k$-means run. If results are
required for multiple values of $k$---such as in the X-means algorithm
\cite{pelleg2000x}---then the tree built on the points may be re-used.
Clusters were initialized using the Bradley-Fayyad refined start procedure
\yrcite{bradley1998refining}; however, this was too slow for the very large
datasets, so in those cases points were randomly sampled as the initial
centroids. $k$-means was then run until convergence on each dataset. These
simulations were performed on a modest consumer desktop with an
Intel i5 with 16GB RAM, using {\bf mlpack}'s benchmarking system
\cite{edel2014automatic}.
Average runtime per iteration results are shown in Table \ref{tab:runtime}.
The amount of work that is being pruned away is somewhat unclear from the
runtime results, because the \texttt{elkan} and \texttt{hamerly} algorithms
access points linearly and thus benefit from cache effects; this is not true of
the tree-based algorithms. Therefore, the average number of distance
calculations per iteration are also included in the results.
It is immediately clear that for large datasets, \texttt{dualtree-kd} is
fastest, and \texttt{dualtree-ct} is almost as fast.
The \texttt{elkan} algorithm, because it
holds $kN$ bounds, is able to prune away a huge amount of work and is very fast
for small datasets; however,
maintaining all of these bounds becomes prohibitive with large $k$ and the
algorithm exhausts all
available memory. The \texttt{blacklist} algorithm has the same issue: on the
largest datasets, with the largest $k$ values, the space required to maintain
all the blacklists is too much. This is also true of the \texttt{yinyang}
algorithm, which must maintain bounds between each point and each group of
centroids. For large $k$, this burden becomes too much and the algorithm fails.
The \texttt{hamerly} and dual-tree algorithms, on the other hand, are the
best-behaved with memory usage and do not have any issues with large $N$ or
large $k$; however, the \texttt{hamerly} algorithm is very slow on large
datasets because it is not able to prune many points at once.
Similar to the observations about the \texttt{blacklist} algorithm, the
tree-based approaches are less effective in higher dimensions
\cite{pelleg1999accelerating}. This is an important point: the performance of
tree-based approaches suffer in high dimensions in part because the bound
$d_{\min}(\cdot, \cdot)$ generally becomes looser as dimension increases.
This is partly because the volume of nodes in high dimensions is much higher;
consider that a ball has volume that is exponential in the dimension.
Even so, in our results, we see speedup in reasonable dimensions (for example,
the {\tt phy} dataset has 78 dimensions). Further, because our algorithm is
tree-independent, we may use tree structures that are tailored to
high-dimensional data \cite{arya1998optimal}---including ones that
have not yet been developed. From our results we believe
as a rule of thumb that the dual-tree $k$-means algorithm can be effective up to
a hundred dimensions or more.
Another clear observation is that when $k$ is scaled on a single dataset, the
\texttt{dualtree-kd} and \texttt{dualtree-ct} algorithms nearly always scale
better (in terms of runtime) than the other algorithms. These results show that
our algorithm satisfies its original goals: to be able to scale effectively to
large $k$ and $N$.
\vspace*{-0.6em}
\section{Conclusion and future directions}
\label{sec:conclusion}
\vspace*{-0.2em}
Using four pruning strategies, we have developed a flexible,
tree-independent dual-tree $k$-means algorithm that is the best-performing
algorithm for large datasets and large $k$ in small-to-medium dimensions. It
is theoretically favorable, has a small memory footprint, and may be used in
conjunction with initial point selection and approximation schemes for
additional speedup.
There are still interesting future directions to pursue, though. The first
direction is parallelism: because our dual-tree algorithm is agnostic to the
type of traversal used, we may use a parallel traversal \cite{curtin2013tree},
such as an adapted version of a recent parallel dual-tree algorithm
\cite{lee2012distributed}. The second direction is kernel $k$-means and other
spectral clustering techniques: our algorithm may be merged with the
ideas of \citet{curtin2014dual} to perform kernel $k$-means. The third
direction is theoretical. Recently, more general notions of intrinsic
dimensionality have been proposed \cite{houle2013dimensionality,
amsaleg2015estimating}; these may enable tighter and more descriptive runtime
bounds. Our work thus provides a useful and fast $k$-means algorithm and also
opens promising avenues to further accelerated clustering algorithms.
\nocite{ram2009rank}
\nocite{march2010euclidean}
\bibliographystyle{icml2016}
\section{Introduction}
Recently, model combination has gained much attention in many real-world tasks, and has served as the winning solution in numerous data science competitions such as those hosted on Kaggle \shortcite{bell2007lessons}. It is considered a sub-field of ensemble learning, aiming to achieve better prediction performance \shortcite{zhou2012ensemble}. Despite that, its use often extends beyond machine learning---it has been applied in other domains such as experimental design in clinical trials. Generally speaking, model combination has two key usages: stability improvement and performance boost. For instance, practitioners run independent trials and then average the results to eliminate built-in randomness and uncertainty, which may yield more reliable results. Additionally, even in a non-ideal scenario, base models may make independent but complementary errors; the combined model can therefore yield better performance than any constituent one.
Although model combination is crucial for all sorts of learning tasks, dedicated Python libraries are absent. A few packages partly fulfill this purpose, but they either exist as single-purpose tools like PyOD \shortcite{zhao2019pyod} and pycobra \shortcite{guedj2018pycobra}, or as part of general-purpose libraries like scikit-learn \shortcite{pedregosa2011scikit}.
\texttt{combo} can fill this gap with four key advantages. Firstly, \texttt{combo} contains more than 15 combination algorithms, including both classical algorithms like dynamic classifier selection (DCS) \shortcite{woods1997combination} and recent advancements like LSCP \shortcite{zhao2019lscp}. It can handle the combination operation for all sorts of tasks, including classification, clustering, and anomaly detection.
Secondly, \texttt{combo} works with both raw and pretrained learning models from major libraries like scikit-learn, XGBoost, and LightGBM, given certain conditions are met. Thirdly, the models in \texttt{combo} are designed with unified APIs, detailed documentation\footnote{\url{https://pycombo.readthedocs.io}}, and interactive examples\footnote{\url{https://mybinder.org/v2/gh/yzhao062/combo/master}} for ease of use. Lastly, all \texttt{combo} models come with unit tests and are checked by continuous integration tools for robustness; code coverage and maintainability checks are also enabled for performance and sustainability. To the best of our knowledge, this is the first comprehensive framework for combining learning models and scores in Python, which is valuable for data practitioners, machine learning researchers, and data competition participants.
\begin{lstlisting}[title={Code Snippet 1: Demo of \texttt{combo} API with DCS},captionpos=b, float=tp]
>>> from combo.models.classifier_dcs import DCS
# initialize a group of classifiers
>>> classifiers = [
DecisionTreeClassifier(),
LogisticRegression(),
KNeighborsClassifier()]
>>> # initialize/fit the combination model
>>> clf = DCS(base_estimators=classifiers)
>>> clf.fit(X_train)
>>> # fit and make prediction
>>> y_test_pred = clf.predict(X_test)
>>> y_test_proba =
clf.predict_proba(X_test)
>>> # fit and predict on the same dataset
>>> y_train_pred = clf.fit_predict(X_train)
\end{lstlisting}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{compare_selected_classifiers.png}
\caption{Comparison of Selected Classifier Combination on Simulated Data}
\label{classifier_comparison}
\end{figure*}
\section{Core Scenarios}
\texttt{combo} models for classification, clustering, and anomaly detection share unified APIs. Inspired by scikit-learn's API design, the models in \texttt{combo} all come with the following key methods: (i) \texttt{fit} processes the training data and gets the model ready for prediction; (ii) \texttt{predict} generates labels for the unknown test data once the model is fitted; (iii) \texttt{predict\_proba} generates predictions as probabilities instead of the discrete labels returned by \texttt{predict}; and (iv) \texttt{fit\_predict} calls \texttt{fit} on the input data and then predicts on it (applicable to unsupervised models only). Code Snippet 1 shows the use of the above APIs with DCS. Notably, fitted (pretrained) models can be used directly by setting the \texttt{pre\_fitted} flag, in which case the \texttt{fit} process is skipped.
\textbf{\textit{Classifier Combination}} aims to aggregate multiple base supervised classifiers in either a parallel or sequential manner. Selected classifier combination methods implemented in \texttt{combo} include stacking (meta-learning), dynamic classifier selection, dynamic ensemble selection, and a group of heuristic aggregation methods like averaging and majority vote. Fig. \ref{classifier_comparison} shows how different frameworks behave on a simulated dataset with 300 points. The leftmost one is a simple \textit{k}NN model (\textit{k}=15), and the other three are combinations of five \textit{k}NN models with \textit{k} in the range $[5,10,15,20,25]$. Different from classifier combination, \textbf{\textit{Cluster Combination}} is usually done in an unsupervised manner. The focus is on how to align the predicted labels generated by base clusterings, as cluster labels are categorical instead of ordinal. For instance, $[0,1,1,0,2]$ and $[1,0,0,1,2]$ are equivalent with appropriate alignment. Two classical clustering combination methods are therefore implemented to handle this---clustering combination using evidence accumulation (EAC) \shortcite{fred2005combining} and Clusterer Ensemble \shortcite{zhou2006clusterer}. \textbf{\textit{Anomaly Detection}} concentrates on identifying anomalous objects that deviate from the general data distribution \shortcite{zhao2019pyod}. The challenges of combining multiple outlier detectors lie in the task's unsupervised nature and extreme data imbalance. Two recent combination frameworks, LSCP \shortcite{zhao2019lscp} and XGBOD \shortcite{zhao2018xgbod}, are included in \texttt{combo} for unsupervised and semi-supervised detector combination. \textbf{\textit{Score Combination}} comes with more flexibility than the above tasks, as it only asks for the output of multiple models, whether they come from a group of classifiers or outlier detectors.
As a general purpose task, score combination methods are easy to use without the need of initializing a dedicated class. Each aggregation method, e.g., average of maximum (AOM), can be invoked directly.
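As an illustration of one such aggregation heuristic, the following is a minimal NumPy sketch of average-of-maximum (AOM): split the base detectors into subgroups, take the per-sample maximum within each subgroup, and average the maxima. The function name, signature, and bucketing scheme here are illustrative, independent of \texttt{combo}'s own implementation:

```python
import numpy as np

def aom(scores, n_buckets=5, rng=None):
    """Average of Maximum (AOM): split the detectors (columns of
    `scores`) into n_buckets random subgroups, take the per-sample
    maximum within each subgroup, then average those maxima."""
    rng = np.random.default_rng(rng)
    n_samples, n_detectors = scores.shape
    # Random partition of detector indices into n_buckets subgroups.
    order = rng.permutation(n_detectors)
    buckets = np.array_split(order, n_buckets)
    # Per-sample maximum within each subgroup, then the average.
    maxima = np.stack([scores[:, b].max(axis=1) for b in buckets], axis=1)
    return maxima.mean(axis=1)
```

With a single bucket, AOM reduces to plain maximization; with one bucket per detector, it reduces to plain averaging, so `n_buckets` interpolates between the two extremes.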
\section{Conclusion and Future Directions}
\texttt{combo} is a comprehensive Python library to combine the models from major machine learning libraries. It supports four types of combination scenarios (classification, clustering, anomaly detection, and raw scores) with unified APIs, detailed documentation, and interactive examples. As avenues for future work, we will add the combination frameworks for customized deep learning models (from TensorFlow, PyTorch, and MXNet), enable GPU acceleration and parallelization for scalability, and expand to more task scenarios such as imbalanced learning and regression.
\bibliographystyle{aaai}
\section{Introduction}
In this paper, we consider the problem of reconstructing a
$C^{1}$ \emph{figure} -- that is, a family of curves ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$ -- from a finite
set of data. More precisely, we assume we are given
an unorganized set of points ${ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$, as well as \emph{unit} tangents to the points ${ \{ \vm_{i}\, |\, i = 0,\dots,N-1 \} }$. Note that the tangents have no particular orientation; making the change $\vm_{i} \rightarrow -\vm_{i}$ destroys no information.
\begin{definition}
\label{def:polygonalization}
A polygonalization of a figure ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$ is a planar graph
$\Gamma = (V,E)$ with the property that each vertex $p \in V$ is a point on some $\gamma_{i}(t)$, and each edge connects points which are adjacent samples of some curve $\gamma_{i}$.
\end{definition}
Our goal here is to construct an algorithm which reconstructs the
polygonalization of a figure from the data defined above.
An example of a polygonalization is given in Figure \ref{fig:polygonalization}.
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{polygonalization_example.eps}
\caption{A figure and its polygonalization, c.f. Definition \ref{def:polygonalization}. }
\label{fig:polygonalization}
\end{figure}
The topic of reconstructing figures solely from point data ${ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$ has been the subject of considerable attention \cite{amenta98crust,amenta98new,dey99curve,hoppe92surface,amenta02simple, dey01reconstructing, edelsbrunner}. This is actually a more difficult problem, and only weaker results are possible. The main difficulty is the following; if the distance between two separate curves $\gamma_{i}$ and $\gamma_{j}$ is smaller than the sample spacing, then it is difficult to determine which points are associated to which curve. Thus, sample spacing must be $O({\delta})$, with ${\delta}$ the distance between different curves.
Tangential information makes this task easier; in essence, if two points are nearby (say $\vp_{1}$ and $\vp_{2}$), but $\pm \vm_{1}$ does not point (roughly) in the direction $\vp_{2}-\vp_{1}$, then $\vp_{1}$ and $\vp_{2}$ should not be connected. This fact allows us to reduce the sample spacing to $O({\delta}^{1/2})$, rather than $O({\delta})$. This is to be expected by analogy to interpolation; knowledge of a function and its derivatives yields quadratic accuracy.
We should mention at this point related work on \emph{Surfels} (short for \emph{Surface Elements}). A surfel is a point, together with information characterizing the tangent plane to a surface at that point (and perhaps other information such as texture). They have become somewhat popular in computer graphics recently, mainly for rendering objects characterized by point clouds
\cite{882320,1103907,598521,1018057,344936,383300}.
In this work, we present an algorithm which allows us to reconstruct a curve from ${ \{ \vp_{i}, \vm_{i}\, |\, i = 0,\dots,N-1 \} }$. We make two assumptions, under which the algorithm is provably correct.
\begin{assumption}
\label{ass:curvature}
We assume each curve $\gamma_{i}(t) = (x_i(t),y_i(t))$ has bounded curvature:
\begin{equation}
\label{eq:curvatureAssumption}
\forall i = \Oto{M}, ~ \frac{
\abs{x_{i}'(t) y_{i}''(t) - y_{i}'(t) x_{i}''(t)}
} {
(x_{i}'(t)^{2}+y_{i}'(t)^{2})^{3/2}
} \leq {\kappa_{m}}
\end{equation}
\end{assumption}
This assumption is necessary to prevent the curves from oscillating too much between samples.
\begin{assumption}
\label{ass:separation}
We assume the curves $\gamma_{i}$ and $\gamma_{j}$ are uniformly separated from each other, i.e.:
\begin{subequations}
\begin{equation}
\label{eq:separationAssumption}
\inf_{t,t'} \abs{ \gamma_{i}(t) - \gamma_{j}(t')} \geq {\delta} \textrm{~for~} i \neq j
\end{equation}
We also assume that different areas of the same curve are separated
from each other:
\begin{equation}
\label{eq:separationAssumptionSameCurve}
\inf_{\abs{t-t'} > {\curvemax^{-1}}\pi/2 } \abs{ \gamma_{i}(t) - \gamma_{i}(t')} \geq {\delta}
\end{equation}
\end{subequations}
(assuming the curve $\gamma_{i}(t)$ proceeds with unit speed).
\end{assumption}
Assumption \ref{ass:separation} thus ensures both that two distinct curves
do not come too close together (condition \eqref{eq:separationAssumption})
and that separate regions of the same curve do not come
arbitrarily close (condition \eqref{eq:separationAssumptionSameCurve}).
This is illustrated in Figure \ref{fig:separationBetweenCurves}.
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{assumption_two.eps}
\caption{An illustration of Assumption \ref{ass:separation}. The black arrow illustrates the condition \eqref{eq:separationAssumption}, while the red arrow illustrates the condition \eqref{eq:separationAssumptionSameCurve}.}
\label{fig:separationBetweenCurves}
\end{figure}
\section{The Reconstruction Algorithm}
Before we begin, we require some notation.
\begin{definition}
\label{def:perp}
For a vector $\vv$, let $\vv^{\perp}$ denote the vector $\vv$
rotated clockwise by an angle $\pi/2$.
\end{definition}
\begin{definition}
\label{def:metric}
Let $d(\vp,\vq)$ denote the usual Euclidean metric, $d(\vp,\vq) = \abs{\vp - \vq}$. Let $d_{\vm}(\vp,\vq)$ denote the distance in the $\vm$ direction between $\vp$ and $\vq$, i.e. $d_{\vm}(\vp,\vq) = \abs{ (\vp - \vq) \cdot \vm}$.
\end{definition}
\begin{definition}
For a point $\vp$ and a curve $\gamma_{i}(t)$, we say that $\vp \in \gamma_{i}(t)$ if $\exists t$ such that $\gamma_{i}(t)=\vp$.
\end{definition}
\subsection{The Forbidden Zone}
Before explaining the algorithm which constructs the polygonalization of
a figure (the set of curves ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$) from discrete data ${ \{ \vp_{i}, \vm_{i}\, |\, i = 0,\dots,N-1 \} }$, we
prove a basic lemma which forms the foundation of our method.
We assume for the remainder of this section that the figure
satisfies Assumption \ref{ass:curvature}.
\begin{definition}
For a point $\vp_{i}$, we refer to the set
$\cup_{\pm} \ball{{\curvemax^{-1}}}{\vp_{i} \pm \vm_{i}^{\perp} {\curvemax^{-1}}}$
as its \emph{forbidden zone},
illustrated in Fig. \ref{fig:forbiddenZone}.
Here, $\ball{r}{\vp}$ is the usual ball of radius $r$ about $\vp$.
\end{definition}
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{forbidden_zone.eps}
\caption{The forbidden zones, as described in Lemma \ref{lem:forbiddenZone}.
The orange (darker region) is the forbidden zone, and the blue (lighter region)
is the set of points a distance $\pi {\curvemax^{-1}}/2$ away from $p_{i}$.}
\label{fig:forbiddenZone}
\end{figure}
\begin{lemma}
\label{lem:forbiddenZone}
For every $i \neq j$, if $\vp_{j}$ is in the forbidden zone of $\vp_i$,
then $(\vp_{i},\vp_{j})$ is not an edge in ${\Gamma}$ assuming that the
sample spacing is less than ${\curvemax^{-1}} \pi/2$.
\end{lemma}
\begin{proof}
Suppose for simplicity that $\vp_{i}=(0,0)$ and $\vm_{i}=(1,0)$. Now, consider a curve $\tau(t)$ of maximal curvature. The curve of maximal curvature, with $\tau_{y}'(t) > 0$ and proceeding at speed ${\curvemax^{-1}}$ is $\tau^{+}(t)=({\curvemax^{-1}} \sin(t), {\curvemax^{-1}} (1-\cos(t)))$, while the curve with $\tau_{y}'(t) < 0$ is $\tau^{-}(t)=({\curvemax^{-1}} \sin(t), {\curvemax^{-1}} (\cos(t)-1))$.
By Assumption \ref{ass:curvature}, the curve $\gamma(t)$ containing $\vp_i$
must lie between these curves (the near boundaries of the forbidden zone
in Fig.\ \ref{fig:forbiddenZone}). Thus, it is confined to the blue (lighter)
region while its arc length is less than ${\curvemax^{-1}} \pi/2$.
If $\vp_{j}$ is in the forbidden zone and
$\gamma(t)$ connects $\vp_{i}$ to $\vp_{j}$, then it must do so after travelling a distance greater than ${\curvemax^{-1}} \pi/2$.
\end{proof}
In short, the extra information provided by the tangents
allows us to exclude edges from the polygonalization if they point too far away
from the tangent, resulting in higher fidelity (c.f. Fig. \ref{fig:proximityVsTangentBased}).
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{tangents_good_for.eps}
\caption{A naive proximity-based reconstruction algorithm (shown),
or even a $\beta$-crust type algorithm, will introduce edges
between different curves. Knowledge of the forbidden zone allows us to
remove such edges.}
\label{fig:proximityVsTangentBased}
\end{figure}
\begin{definition}
\label{def:AllowedRegion}
For a point $\vp$, we define the \emph{allowed zone} or
\emph{allowed region} $\allowed{\epsilon}{\vp}$ by
\begin{equation}
\label{eq:allowedRegion}
\allowed{\epsilon}{\vp}=\ball{\epsilon}{\vp_{i}} \setminus \left[ \cup_{\pm} \ball{{\curvemax^{-1}}}{\vp_{i} \pm \vm_{i}^{\perp} {\curvemax^{-1}}} \right]
\end{equation}
That is, $\allowed{\epsilon}{\vp}$ is the ball of radius $\epsilon$ about $p$ excluding the forbidden zone.
\end{definition}
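In code, membership in the allowed zone of Definition \ref{def:AllowedRegion} reduces to one ball-inclusion test and two ball-exclusion tests. The following NumPy sketch is illustrative (the function name and argument conventions are ours):

```python
import numpy as np

def in_allowed_region(p, m, q, eps, kappa):
    """Test whether q lies in the allowed region of p: inside the ball
    B(p, eps), but outside the two forbidden balls of radius 1/kappa
    centered at p +- (1/kappa) m_perp.  p, q are points in R^2 and m
    is the unit tangent at p (orientation irrelevant)."""
    p, m, q = map(np.asarray, (p, m, q))
    if np.linalg.norm(q - p) >= eps:
        return False                      # outside the eps-ball
    m_perp = np.array([m[1], -m[0]])      # m rotated clockwise by pi/2
    r = 1.0 / kappa
    for sign in (+1.0, -1.0):
        center = p + sign * r * m_perp
        if np.linalg.norm(q - center) < r:
            return False                  # inside a forbidden ball
    return True
```

For example, with $\vp=(0,0)$, $\vm=(1,0)$, $\kappa_m=1$ and $\epsilon=0.5$, a point straight ahead along the tangent at distance $0.25$ is allowed, while a point at distance $0.25$ in the normal direction falls inside a forbidden ball.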
Clearly, any edge in the polygonalization starting at $\vp$, with length shorter than $\epsilon$, must connect to another point $\vq \in \allowed{\epsilon}{\vp}$. We are now ready to describe the polygonalization algorithm.
\vspace{.2in}
\hrule
\begin{algo}
\label{algo:polygonalization}
\begin{center} {\bf (Noise-Free Polygonalization)}
\end{center}
\vspace{.1in}
\hrule
\vspace{.2in}
\noindent { \bf Input: }
[ We assume we are given
the dataset ${ \{ \vp_{i}, \vm_{i}\, |\, i = 0,\dots,N-1 \} }$,
the maximal curvature ${\kappa_{m}}$, and
a parameter $\epsilon$ satisfying both
$\epsilon {\kappa_{m}} < 1/\sqrt{2}$ and $2 {\kappa_{m}} \epsilon^{2} < {\delta}$.
We assume that adjacent points on a given curve
are less than a distance $\epsilon$ apart, i.e. the curve is
$\epsilon$-sampled. ]
\vspace{.1in}
\begin{enumerate}
\item Compute the graph $G = ({ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E)$ with edge set:
\begin{equation*}
E = \{ (\vp_{i},\vp_{j}) : \vp_{i} \in \allowed{\epsilon}{\vp_{j}} \textrm{~and~} \vp_{j} \in \allowed{\epsilon}{\vp_{i}}\}
\end{equation*}
\item For each vertex $\vp_{i} \in { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$:
\begin{enumerate}[a.]
\item Compute the set of vertices
\begin{equation*}
R^{\pm}_{i} = \{ \vp_{j} : (\vp_{i}, \vp_{j}) \in E \textrm{~and~} \pm (\vp_{j}-\vp_{i}) \cdot \vm_{i} > 0 \}
\end{equation*}
\item Find the nearest tangential neighbors, i.e.
\begin{equation*}
\vr^{\pm}_{i} = \textrm{argmin}_{\vq \in R^{\pm}_{i}} d_{\vm_{i}}(\vq, \vp_{i})
\end{equation*}
\end{enumerate}
\item Output the graph $\Gamma = ( { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E')$ with
\begin{equation*}
E' = \{ (\vp_{i}, \vr^{+}_{i}) \} \cup \{ (\vp_{i}, \vr^{-}_{i}) \}
\end{equation*}
This graph is the polygonalization of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$.
\end{enumerate}
\end{algo}
\begin{remark}
As presented, the complexity of Algorithm \ref{algo:polygonalization} is $O(N^{2})$, due to both step 1 and step 2. (Step 2 can be slow if $O(N)$ points are within the allowed region of some particular point). The complexity can be reduced to $O(N \log N)$ using quadtrees if we assume a minimal sampling rate (see Appendix \ref{sec:quadTreeSection}).
\end{remark}
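A direct $O(N^{2})$ implementation of Algorithm \ref{algo:polygonalization} can be sketched as follows; the data layout and helper names are ours, and the allowed-region test of Definition \ref{def:AllowedRegion} is inlined:

```python
import numpy as np

def polygonalize(points, tangents, eps, kappa):
    """Sketch of the noise-free polygonalization algorithm.
    points: (N, 2) array; tangents: (N, 2) array of unit tangents
    (orientation irrelevant).  Returns the edge set E' as a set of
    index pairs.  O(N^2); a quadtree would reduce this to O(N log N)."""
    N = len(points)
    r = 1.0 / kappa

    def allowed(i, q):
        # q in the allowed region of points[i] (eps-ball minus the
        # two forbidden balls of radius 1/kappa).
        p, m = points[i], tangents[i]
        if np.linalg.norm(q - p) >= eps:
            return False
        m_perp = np.array([m[1], -m[0]])
        return all(np.linalg.norm(q - (p + s * r * m_perp)) >= r
                   for s in (+1.0, -1.0))

    # Step 1: mutual-allowedness graph G.
    G = [[j for j in range(N) if j != i
          and allowed(i, points[j]) and allowed(j, points[i])]
         for i in range(N)]

    # Steps 2-3: keep only the nearest tangential neighbor on each side.
    edges = set()
    for i in range(N):
        for sign in (+1.0, -1.0):
            side = [j for j in G[i]
                    if sign * np.dot(points[j] - points[i], tangents[i]) > 0]
            if side:
                j = min(side, key=lambda k: abs(
                    np.dot(points[k] - points[i], tangents[i])))
                edges.add((min(i, j), max(i, j)))
    return edges
```

On four evenly spaced samples of a straight line this returns exactly the three consecutive edges, as expected.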
The following theorem guarantees the correctness of
Algorithm \ref{algo:polygonalization}. Its proof is presented
in the next section.
\begin{theorem}
\label{thm:proofOfAlgo}
Suppose that:
\begin{subequations}
\label{eq:separationCondition}
\begin{equation}
\label{eq:constraintOnSeparationSampling}
{\delta} > 2{\kappa_{m}} \epsilon^{2}
\end{equation}
where ${\delta}$ is as in Assumption \ref{ass:separation} and also
\begin{equation}
\label{eq:constraintOnkmaxEpsilon}
\epsilon < \frac{1}{ {\kappa_{m}} \sqrt{2}}
\end{equation}
\end{subequations}
Suppose also that the distance between adjacent samples in the polygonalization is bounded by $\epsilon$, i.e. the curve is $\epsilon$-sampled. Then graph $\Gamma$ returned by Algorithm \ref{algo:polygonalization} is the polygonalization of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$.
\end{theorem}
\subsection{Proof of Theorem \ref{thm:proofOfAlgo}}
\begin{lemma}
\label{lem:separationAllowedRegions}
Suppose $i \neq j$ and that Assumption \ref{ass:separation} holds.
Then for all $t, t'$, if \eqref{eq:separationCondition} holds, then
\begin{equation}
\label{eq:connectionsBetweenDifferentCurvesNotAllowed}
\gamma_{j}(t') \not\in \allowed{\epsilon}{\gamma_{i}(t)}.
\end{equation}
Similarly, if $i=j$ and $\abs{t-t'} \geq {\kappa_{m}}^{-1} \pi/2$, then \eqref{eq:connectionsBetweenDifferentCurvesNotAllowed} holds.
\end{lemma}
\begin{proof}
Fix $t$, and define $\vp=\gamma_{i}(t)$ and $\vm=\gamma_{i}'(t) / \abs{\gamma_{i}'(t)}$. Define $L$ to be the line segment $L= \{ \vp+\vm {\curvemax^{-1}} \sin(\theta) : \theta \in [-\arcsin(\epsilon {\kappa_{m}}),\arcsin(\epsilon {\kappa_{m}})] \}$. The boundaries of $\allowed{\epsilon}{\vp}$ are given by
\begin{equation*}
\vp + \vm {\curvemax^{-1}} \sin(\theta) \pm \vm^{\perp} {\curvemax^{-1}} (1-\cos(\theta)).
\end{equation*}
Now, for any $\vq \in \gamma_{i} \cap \allowed{\epsilon}{\vp}$, the distance between $\vq$ and $L$ is the normal distance to $L$. This distance is bounded by:
\begin{multline}
\label{eq:1}
d(\vq,L) \leq
\sup_{\theta} {\curvemax^{-1}} \abs{(1-\cos(\theta)) }\\
\leq
\sup_{\theta} {\curvemax^{-1}} 2 \sin^{2}(\theta/2) =
2{\curvemax^{-1}} \sin^{2}( \arcsin(\epsilon {\kappa_{m}})/2)
\end{multline}
The mean value theorem implies $\arcsin( x) \leq \arcsin'(\zeta) x=(1-\zeta^{2})^{-1/2} x$ for some $\zeta \in [0,x]$; since $\epsilon {\kappa_{m}} < 2^{-1/2}$ (by \eqref{eq:constraintOnkmaxEpsilon}), we find that:
\begin{equation*}
\arcsin(\epsilon {\kappa_{m}}) \leq (1-(2^{-1/2})^{2})^{-1/2} {\kappa_{m}} \epsilon = \sqrt{2} {\kappa_{m}} \epsilon
\end{equation*}
Substituting this into \eqref{eq:1} yields:
\begin{equation}
\label{eq:4}
d(\vq,L) \leq
2 {\curvemax^{-1}} \sin^{2}( \sqrt{2} {\kappa_{m}} \epsilon/2) \leq {\kappa_{m}} \epsilon^{2}
\end{equation}
Thus, the \emph{normal} distance between any point in $\allowed{\epsilon}{\vp}$ and $L$ is $O({\kappa_{m}} \epsilon^{2})$.
If $\gamma_{j}(t') \not\in L+\vm^{\perp} \RR$,
then clearly $\gamma_{j}(t') \not\in \allowed{\epsilon}{\gamma_{i}(t)}$
so we assume $\gamma_{j}(t') \in L+\vm^{\perp} \RR$. In this case,
$\gamma_{j}(t') = \vp + \vm {\curvemax^{-1}} \sin(\theta_{0}) + \vm^\perp z_j$
for some $\theta_0 \in
[-\arcsin(\epsilon {\kappa_{m}}),\arcsin(\epsilon {\kappa_{m}})]$
and $z_j \in \RR$.
Thus, $|z_j| = d_{\vm^{\perp}}(\gamma_{j}(t'), L)$,
the normal distance to $L$. By construction, there is a unique
value $t_i'$ such that
$\gamma_{i}(t_i') = \vp + \vm {\curvemax^{-1}} \sin(\theta_{0}) + \vm^\perp z_i$.
$|z_i|$ then equals $d_{\vm^{\perp}}(\gamma_{i}(t_i'), L)$.
By the second triangle inequality,
\begin{equation*}
d_{\vm^{\perp}}(\gamma_{j}(t'), L) = |z_j|
\geq \abs{|z_j - z_i| - |z_i|}
\geq \delta - {\kappa_{m}} \epsilon^{2} > {\kappa_{m}} \epsilon^{2}
\end{equation*}
But this implies that $d(\gamma_{j}(t'), L) \geq d_{\vm^{\perp}}(\gamma_{j}(t'), L) \geq {\kappa_{m}} \epsilon^{2}$, and thus $\gamma_{j}(t') \not\in \allowed{\epsilon}{\vp}$.
The proof when $i=j$ is identical.
\end{proof}
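The two elementary inequalities used in this proof, $\arcsin(\epsilon {\kappa_{m}}) \leq \sqrt{2}\, {\kappa_{m}} \epsilon$ and the bound \eqref{eq:4}, can be checked numerically over the admissible range $\epsilon {\kappa_{m}} < 2^{-1/2}$; a quick sketch (the grid resolution is arbitrary):

```python
import numpy as np

# Verify the two elementary bounds over the range eps * kappa < 1/sqrt(2).
kappa = 1.0
x = np.linspace(1e-4, 1.0 / np.sqrt(2.0) - 1e-4, 1000)  # x = eps * kappa

# Bound 1 (mean value theorem): arcsin(x) <= sqrt(2) * x.
assert np.all(np.arcsin(x) <= np.sqrt(2.0) * x)

# Bound 2 (eq. 4): 2 kappa^{-1} sin^2(sqrt(2) kappa eps / 2) <= kappa eps^2.
eps = x / kappa
lhs = 2.0 / kappa * np.sin(np.sqrt(2.0) * kappa * eps / 2.0) ** 2
rhs = kappa * eps ** 2
assert np.all(lhs <= rhs)
```

Both assertions pass, consistent with the analytic argument ($\sin t \leq t$ gives the second bound directly).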
This result shows that the graph $G$, computed in Step 1 of Algorithm \ref{algo:polygonalization}, separates different $\gamma_{i}$ and $\gamma_{j}$ from each other, as well as different parts of the same curve. Thus, after Step 1, we are left with a graph $G$ having edges only between points $\vp_{i}$ and $\vp_{j}$ which are on the same curve $\gamma_{k}$, and which are separated along $\gamma_{k}$ by an arc length no more than ${\curvemax^{-1}} \pi/2$.
We now show that $G$ is a superset of the polygonalization $\Gamma$.
\begin{proposition}
\label{prop:polyIncludesNeighboringPoints}
Suppose the point data ${ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$ is $\epsilon$-sampled, i.e. if two points $\vp_{i}$ and $\vp_{j}$ are adjacent on the curve $\gamma_{k}$, then the \emph{arc length} between $\vp_{i}$ and $\vp_{j}$ is bounded by $\epsilon$. Then $G$ contains the polygonalization of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$.
\end{proposition}
\begin{proof}
If the distance between adjacent points $\vp_{i}$ and $\vp_{j}$ is at most $\epsilon$, then $\vp_{j} \in \ball{\epsilon}{\vp_{i}}$. Since the segment of $\gamma_{k}$ between $\vp_{i}$ and $\vp_{j}$ has arc length less than $\epsilon$, $\vp_{j}$ is not in the forbidden zone of $\vp_{i}$ (by the same argument as in Lemma \ref{lem:forbiddenZone}). Thus, $\vp_{j} \in \allowed{\epsilon}{\vp_{i}}$ (and vice versa), and $(\vp_{i},\vp_{j})$ is an edge in $G$.
\end{proof}
We have now shown that $G$ separates distinct curves, and that $G$ contains the polygonalization $\Gamma$ of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$. It remains to show that $G = \Gamma$.
\begin{lemma}
\label{lem:localGraphParameterization}
A curve $\gamma_{i}(t)$ satisfying \eqref{eq:curvatureAssumption} admits the local parameterization
\begin{equation}
\label{eq:localGraphParameterization}
\gamma_{i}(t) = \gamma(t_{0}) + (t-t_{0})\gamma'(t_{0}) + w(t) \gamma'^{\perp}(t_{0})
\end{equation}
where $w'(t_{0})=0$. The parameterization is valid for $\abs{t-t_{0}} < {\curvemax^{-1}}$. In particular, $w'(t) \leq f^{-1}({\kappa_{m}} t) $ where $f(z)=z / \sqrt{1+z^{2}}$.
\end{lemma}
\begin{proof}
Taylor's theorem shows the parameterization to be valid on an arbitrarily small ball; all we need to show is that it remains valid on a region of size ${\curvemax^{-1}}$.
The parameterization breaks down when $w'(t)$ blows up, so we need to show that this does not happen before $t={\curvemax^{-1}}$. Plugging this parameterization into the curvature bound \eqref{eq:curvatureAssumption} yields:
\begin{equation*}
\frac{ \abs{w''(t)} }{(1+w'(t)^{2})^{3/2}} \leq {\kappa_{m}}
\end{equation*}
Assuming $w''(t)$ is positive, this is a first order nonlinear differential inequality for $w'(t)$. We can integrate both sides (using the hyperbolic trigonometric substitution $w'(t)=\sinh(\theta)$ for the left side) to obtain:
\begin{equation}
\label{eq:2}
\frac{w'(t)}{\sqrt{1+w'(t)^{2}}} \leq {\kappa_{m}} t \, .
\end{equation}
With $f(z)$ defined as in the statement, $f^{-1}(z)$ is singular only at $z=\pm 1$ and regular before that. Solving \eqref{eq:2} for $w'(t)$ shows that:
\begin{equation*}
w'(t) \leq f^{-1}({\kappa_{m}} t)
\end{equation*}
implying that $w'(t)$ is finite for ${\kappa_{m}} t < 1$, or $t < {\curvemax^{-1}}$.
\end{proof}
\begin{lemma}
\label{lem:closestTangentPointInAllowedRegionIsCorrect}
Fix a point $\vp_{i}=\vp \in { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$. Choose a tangent vector $\vm_{0}$ and fix an orientation, say $+$. Consider the set of points $\vp_{j}$ such that $(\vp, \vp_{j})$ is an edge in $G$ and $(\vp_{j} - \vp) \cdot \vm_{0} > 0$. Suppose also that $\epsilon$ satisfies \eqref{eq:constraintOnkmaxEpsilon}.
Then, the only edge in the polygonalization of $\gamma$ is the edge for which $(\vp_{j} - \vp) \cdot \vm_{0}$ is minimal.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:localGraphParameterization}, the curve $\gamma(t)$ can be locally parameterized as a graph near $\vp$, i.e. \eqref{eq:localGraphParameterization}. This is valid up to a distance ${\curvemax^{-1}}$; by \eqref{eq:constraintOnkmaxEpsilon}, it is valid for all points in the graph $G$ connected to $\vp$.
The adjacent points on the graph are the ones for which $\abs{t-t_{0}}$ is minimal. Note that $\vm_{0} \cdot (\vp_{j} - \vp) = t$ (simply plug in \eqref{eq:localGraphParameterization}); thus, minimizing $\vm_{0} \cdot (\vp_{j} - \vp)$ selects the adjacent point on the graph.
\end{proof}
The minimal edge is the edge $\vr^{+}_{i}$ as computed in Step (2b) of Algorithm \ref{algo:polygonalization}.
Thus, we have shown that the graph $\Gamma$ returned in Step 3 of
Algorithm \ref{algo:polygonalization} is the polygonalization
of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$, which proves Theorem \ref{thm:proofOfAlgo}.
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{example1.eps}
\caption{Some unordered points/tangents, and the curves reconstructed from them. In this case, $\epsilon=0.065$, ${\kappa_{m}}=3$ and $\delta=0.015$.}
\label{fig:basicExample}
\end{figure}
\section{Reconstruction in the Presence of Noise}
In practice one rarely has perfect data, so it is important to understand the performance of the approach in the presence of errors.
To that end, we consider the polygonalization problem, but with the point data perturbed by noise smaller than ${\zeta}$ and the tangent data perturbed by noise smaller than ${\xi}$.
By this we mean the following; to each point $\vp_{i} \in { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$, there exists a point $\vp_{i,\ast} = \gamma_{k_{i}}(t_{i})$ such that $\abs{\vp_{i}-\vp_{i,\ast}} \leq {\zeta}$. Similarly, the unit tangent vector $\vm_{i}$ differs from the true tangent $\vm_{i,\ast} = \gamma_{k_{i}}'(t_{i})$ by an angle at most ${\xi}$. By a polygonalization of the noisy data, we mean that $(\vp_{i},\vp_{j})$ is an edge in the noisy polygonalization if $(\vp_{i,\ast},\vp_{j,\ast})$ is an edge in the noise-free polygonalization. In what follows, $\vp_{j}$ refers to a given (noisy) point, while $\vp_{j,\ast}$ refers to the corresponding true point (and similarly for tangents).
Noise, of course, introduces a lower limit on the features we can resolve. At the very least, the curves must be separated by a distance greater than or equal to ${\zeta}$, to prevent noise from actually moving a sample from one curve to another. In addition, noise in the tangent data introduces uncertainty which forces us to increase the sampling rate; in particular, we require $O(\epsilon {\xi} + \epsilon^{2}) < {\delta}$.
The main idea in extending Algorithm \ref{algo:polygonalization} to the noisy case is to expand the allowed regions to encompass all possible points and tangents. Of course, this imposes new constraints on the separation between curves.
We also require a \emph{maximal} sampling rate in order to ensure that the order of points on the curve is not affected by noise.
For work in the context of reconstruction using point samples only, see \cite{chengnoise,mdnoise}.
\begin{assumption}
\label{ass:minSamplingRateNoisy}
We assume that adjacent points $\vp_{i}$ and $\vp_{j}$ on the curve $\gamma_{k}(t)$ are separated by a distance greater
than $[(1+2^{3/2})(2 {\xi} \epsilon + {\zeta})]$.
\end{assumption}
To compensate for noise, we expand the allowed region to account
for uncertainty concerning the actual point locations.
\begin{definition}
\label{def:AllowedRegionNoisy}
The \emph{noisy allowed region} $\nallowed{\epsilon}{\vp_i}$
is the union of the allowed regions of all points/tangents
near $(\vp_i, \vm_i)$:
\begin{equation}
\label{eq:allowedRegionNoisy}
\nallowed{\epsilon}{\vp_i}= \bigcup_{
\substack{
\abs{\vp -\vp_{i}} < {\zeta}\\
\arccos(\vm_{i} \cdot \vm) < {\xi}
}
}
\left(
\ball{\epsilon}{\vp} \setminus
\left[ \cup_{\pm} \ball{{\curvemax^{-1}}}{\vp
\pm \vm^{\perp} {\curvemax^{-1}}} \right]
\right)
\end{equation}
\end{definition}
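Exact membership in $\nallowed{\epsilon}{\vp_i}$ requires a union over a continuum of perturbed pairs $(\vp, \vm)$. In practice one can use a sampled inner approximation (it may miss borderline members); the following sketch is illustrative, and the grid resolution \texttt{n\_grid} is a tuning parameter of ours, not part of the definition:

```python
import numpy as np

def in_noisy_allowed_region(p, m, q, eps, kappa, zeta, xi, n_grid=16):
    """Sampled approximation of the noisy allowed region: accept q if
    it lies in the ordinary allowed region of SOME perturbed pair
    (p', m') with |p' - p| <= zeta and angle(m', m) <= xi, where the
    perturbations are taken on a coarse grid."""
    r = 1.0 / kappa

    def allowed(pp, mm):
        if np.linalg.norm(q - pp) >= eps:
            return False
        mm_perp = np.array([mm[1], -mm[0]])
        return all(np.linalg.norm(q - (pp + s * r * mm_perp)) >= r
                   for s in (+1.0, -1.0))

    phis = np.linspace(-xi, xi, n_grid)               # tangent perturbations
    thetas = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    radii = (0.0, 0.5 * zeta, zeta)                   # point perturbations
    for phi in phis:
        c, s = np.cos(phi), np.sin(phi)
        mm = np.array([c * m[0] - s * m[1], s * m[0] + c * m[1]])
        for rad in radii:
            for th in (thetas if rad > 0 else (0.0,)):
                pp = p + rad * np.array([np.cos(th), np.sin(th)])
                if allowed(pp, mm):
                    return True
    return False
```

With ${\zeta}={\xi}=0$ this reduces to the noise-free allowed-region test; with tangent noise ${\xi}>0$, slightly off-axis points become admissible, as Definition \ref{def:AllowedRegionNoisy} requires.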
\vspace{.2in}
\hrule
\begin{algo}
\label{algo:polygonalizationNoisy}
\begin{center} {\bf (Noisy Polygonalization)}
\end{center}
\vspace{.1in}
\hrule
\vspace{.2in}
\noindent { \bf Input: }
[ We assume we are given the dataset ${ \{ \vp_{i}, \vm_{i}\, |\, i = 0,\dots,N-1 \} }$, the maximal curvature
${\kappa_{m}}$, the noise amplitudes ${\zeta}, {\xi}$, and a
parameter $\epsilon$ satisfying both $\epsilon {\kappa_{m}} < 1/\sqrt{2}$ and
$4 {\zeta} + 4 \epsilon {\xi} + 2.1 {\kappa_{m}} \epsilon^{2} < {\delta}$. We assume that adjacent points on a given curve
are less than a distance $\epsilon$ apart, i.e. the curve is
$\epsilon$-sampled. ]
\vspace{.1in}
\begin{enumerate}
\item Compute the graph $G = ({ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E)$ with edge set:
\begin{equation}
\label{eq:noisyConditionForCheckingIfEdgeConnectionIsPlausible}
E = \{ (\vp_{i},\vp_{j}) : \ball{{\zeta}}{\vp_{i}} \cap \nallowed{\epsilon}{\vp_{j}} \neq \emptyset \textrm{~and~} \ball{{\zeta}}{\vp_{j}} \cap \nallowed{\epsilon}{\vp_{i}} \neq \emptyset \}
\end{equation}
\item For each vertex $\vp_{i} \in { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$:
\begin{enumerate}[a.]
\item Compute the set of vertices
\begin{equation*}
R^{\pm}_{i} = \{ \vp_{j} : (\vp_{i}, \vp_{j}) \in E \textrm{~and~} \pm (\vp_{j}-\vp_{i}) \cdot \vm_{i} > 0 \}
\end{equation*}
\item Find the nearest tangential neighbors, i.e.
\begin{equation*}
\vr^{\pm}_{i} = \textrm{argmin}_{\vq \in R^{\pm}_{i}} d_{\vm_{i}}(\vq, \vp_{i})
\end{equation*}
\end{enumerate}
\item Output the graph $\Gamma = ( { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E')$ with
\begin{equation*}
E' = \{ (\vp_{i}, \vr^{+}_{i}) \} \cup \{ (\vp_{i}, \vr^{-}_{i}) \}
\end{equation*}
This graph is the polygonalization of ${ \{ \gamma_i(t)\, |\, i = 0,\dots,M-1 \}}$.
\end{enumerate}
\end{algo}
The following theorem
guarantees that Algorithm \ref{algo:polygonalizationNoisy} works.
The proof follows that of Theorem \ref{thm:proofOfAlgo}
and is given in Appendix \ref{sec:proofOfNoisyReconstruction}.
An application is shown in Fig. \ref{fig:noisyreconstruction}.
\begin{theorem}
\label{thm:noisyReconstruction}
Suppose that Assumptions \ref{ass:curvature}, \ref{ass:separation} and \ref{ass:minSamplingRateNoisy} hold. Suppose also that
\begin{subequations}
\label{eq:noisySeparationConditions}
\begin{equation}
\label{eq:noisySeparationDistance}
{\delta} > 4 {\zeta} + 4 \epsilon {\xi} + 2.1 {\kappa_{m}} \epsilon^{2} \, ,
\end{equation}
\begin{equation}
\label{eq:noisyConstraintOnkmaxEpsilon}
\epsilon < \frac{1}{ {\kappa_{m}} \sqrt{2}} \, .
\end{equation}
\end{subequations}
Then, Algorithm \ref{algo:polygonalizationNoisy} correctly reconstructs
the figure.
\end{theorem}
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.50]{noisy_reconstruction.eps}
\caption{Noisy sampled points, and the reconstruction by Algorithm \ref{algo:polygonalizationNoisy}. This example takes ${\kappa_{m}}=5$, $\epsilon=0.15$, ${\zeta}={\xi}=0.01$. }
\label{fig:noisyreconstruction}
\end{figure}
\begin{remark}
Consider a point $\vp$, which is a noisy sample from some curve in the figure.
All we can say a priori is that $\vp$ is close to the true
sample $\vp_{\ast}$, i.e. $\vp \in \ball{{\zeta}}{\vp_{\ast}}$.
However, given the knowledge that the polygonalization contains
the edges $(\vq,\vp)$ and $(\vp,\vr)$, we can obtain further information
on $\vp_{\ast}$. Not only does $\vp_{\ast}$ lie in $\ball{{\zeta}}{\vp}$,
but $\vp_{\ast} \in \nallowed{\epsilon}{\vq}$ and
$\vp_{\ast} \in \nallowed{\epsilon}{\vr}$. In short,
\begin{equation}
\label{eq:noisyFilteringFromAllowedRegions}
\vp_{\ast} \in \ball{{\zeta}}{\vp} \cap \nallowed{\epsilon}{\vq} \cap \nallowed{\epsilon}{\vr}
\end{equation}
We can therefore improve our approximation to $\vp_{\ast}$ by minimizing
either the worst case error,
\begin{subequations}
\begin{equation}
\vp^{new} = \textrm{argmin}_{\vp} \sup_{\vx \in A} \abs{\vp - \vx}, ~ A = \ball{{\zeta}}{\vp} \cap \nallowed{\epsilon}{\vq} \cap \nallowed{\epsilon}{\vr}
\end{equation}
or the mean error,
\begin{equation}
\vp^{new} = \textrm{argmin}_{\vp} \int_{A} \abs{\vp - \vx} d\vx
\end{equation}
\end{subequations}
or some application-dependent functional.
Noise in the tangential data can be similarly reduced.
This is a postprocessing matter after polygonalization,
and we will not expand further on this idea in the present paper.
\end{remark}
\section{Examples}
\subsection{Extracting Topology from MRI images}
In its simplest version, Magnetic Resonance Imaging (MRI) is
used to obtain the
two-dimensional Fourier transform of the proton density in a
planar cross-section through the patient's body.
That is, if $\rho(x)$ is
the proton density distribution in the plane $P$, then the MRI device
is able to return the data $\hat{\rho}(k)$ at a selection of
points $k$ in the Fourier transform domain ($k$-space).
The number of sample points available, however, is finite and covers
only the low-frequency range in $k$-space well.
Thus, it is desirable to be able to make use of the limited
information in an optimal fashion.
We are currently exploring methods for MRI based on
exploiting the assumption that
$\rho(x)$ is piecewise smooth (since different tissues have different
densities, and the tissues boundaries tend to be sharp).
Our goal is to carry out reconstruction in three steps.
First, we find the tissue boundaries (the discontinuities). Second,
we subtract the influence of the discontinuities from
the measured $k$-space data, and third, we reconstruct the remainder, which
is now smooth (or smoother). Standard filtered Discrete Fourier Transforms
are easily able to reconstruct the remainder, so the basic
problem is that of reconstructing the edges.
Using directional edge detectors on the $k$-space data, we can extract
a set of point samples from the edges, together with non-oriented normal
directions. By means of
Algorithm \ref{algo:polygonalizationNoisy}, we can
reconstruct the topology of the edge set and carry out the
procedure sketched out above.
The details of the algorithm are beyond the scope of this article,
and will be reported at a later date,
but Figure \ref{fig:mriExample}
illustrates the idea behind the method. Our work on curve reconstruction was,
in fact, motivated by this application.
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{mri_edges.eps}
\caption{A simulated MRI image. The original image was two circles, together with some low frequency ``texture''. The noise level is 5\%.}
\label{fig:mriExample}
\end{figure}
\subsection{Figure detection}
A natural problem in various computer vision applications is that of
recognizing sampled objects that are partially obscured by a
complex foreground.
As a model of this problem, we constructed an (oval)
figure, and obscured it by covering it with a sequence of curves.
Algorithm \ref{algo:polygonalization} successfully reconstructs the
figure, as well as properly connecting points on the
horizontally and vertically oriented covering curves.
The result is shown in
Figure \ref{fig:obscuredExample}. Note that the branches are not
connected to the oval (or each other).
\begin{figure}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\includegraphics[scale=0.5]{obscured_figure.eps}
\caption{A figure which is partially obscured.
Algorithm \ref{algo:polygonalization} correctly computes its
polygonalization, and distinguishes it from the curves in front of it.
To avoid visual clutter, the tangents are not displayed in this figure.}
\label{fig:obscuredExample}
\end{figure}
\subsection{Filtering spurious points}
The method provided here is relatively robust with regard to the
addition of spurious random data points. This is because spurious data
points are highly unlikely to be connected to any other points in the
polygonalization graph. To see this, note
first that for an incorrect data point to be connected to part of the
polygonalization at all, it would need to be located in
$\allowed{\epsilon}{\vp}$ for some $\vp$.
This is a region of length $O(\epsilon)$ and width $O(\epsilon^{2})$.
There are approximately $L/\epsilon$ such regions, where
$L = \sum_{j} \textrm{arclength}(\gamma_{j})$, each of area $O(\epsilon^{3})$,
for a total area of $O(\epsilon^{2} L)$. Thus, the probability
that a spurious point is in \emph{some} allowed region is roughly
$O(L \epsilon^{2})$.
The second reason is that even if a spurious point is in some allowed region,
it is unlikely to point in the correct direction.
If an erroneous point $\vq$ is inside $\allowed{\epsilon}{\vp}$, it is
still not likely that $\vp \in \allowed{\epsilon}{\vq}$, since
the tangent at $\vq$ must point in the direction of $\vp$
(with error proportional to $\epsilon^{2}$, the angular width of
$\allowed{\epsilon}{\vq}$). Thus, the probability that the tangent at
$\vq$ points towards $\vp$ is $O(\epsilon^{2}/2\pi)$.
Combining these arguments, the probability that any \emph{randomly chosen}
spurious point $\vq$ is connected to any other point in the
polygonalization is $O(L \epsilon^{4})$.
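As a rough numerical illustration of this scaling (the function name and the implicit unit-area sampling domain are our own assumptions, not part of the reconstruction algorithm), the two factors can be combined as follows:

```python
import math

def spurious_connection_prob(eps, total_arclength):
    """Heuristic estimate of the probability that a randomly placed
    spurious point gets connected to the polygonalization.

    Combines the two factors discussed in the text:
      - the point must fall in some allowed region: O(L * eps^2)
      - its tangent must point towards the target: O(eps^2 / (2*pi))
    """
    p_region = total_arclength * eps ** 2
    p_angle = eps ** 2 / (2.0 * math.pi)
    return p_region * p_angle  # overall O(L * eps^4)
```

Halving $\epsilon$ divides the estimate by $2^4 = 16$, reflecting the $O(L\epsilon^4)$ behavior.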
\begin{figure}
\includegraphics[scale=0.5]{noisy_example.eps}
\caption{The same example as in Figure \ref{fig:basicExample}, but with
100 additional points (for a total of $196$), placed randomly. }
\label{fig:noisyExample}
\end{figure}
\subsubsection{Filtering the data}
The aforementioned criteria suggest that our reconstruction algorithm
has excellent potential for noise removal: if we
remove points that are not properly connected to other points,
then with high probability we are removing spurious points.
This notion is well supported in practice.
By running Algorithm \ref{algo:polygonalization} on a figure consisting of
$96$ true points, and $100$ randomly placed incorrect points, a nearly
correct polygonalization is calculated (Fig. \ref{fig:noisyExample}).
The original curve is reconstructed with an error at only one point
(the top left corner of the right-hand curve).
Of course, if enough incorrect points are present, some points will
eventually be connected by Algorithm \ref{algo:polygonalization}.
This can be seen in Figure \ref{fig:noisyExample}:
the line segment near $(0.9, -0.2)$ is an edge between two incorrect points.
One hint that an edge is incorrect is that it points to a leaf.
That is, consider a set of vertices $\vp_{0}, \vp_{1}, \ldots, \vp_{n}$
as well as $\vq$. Suppose, after approximately computing the
polygonalization, one finds that the graph contains edges
$e_{0} = (\vp_{0}, \vp_{1}), e_{1} = (\vp_{1}, \vp_{2}), \ldots, e_{n-1} =
(\vp_{n-1}, \vp_{n})$ and $e_{n} = (\vp_{n/2}, \vq)$. The vertex $\vq$ is
a leaf, that is, it is reachable by only one edge. A polygonalization
of a set of closed curves should not have leaves, suggesting that the
edge $e_{n}$ is spurious.
Thus filtering leaves is a very reasonable heuristic for noise filtering.
One final problem with noisy data worth mentioning is that sometimes,
an incorrect point will be present that lies within the allowed
region of a legitimate point, and closer to the legitimate point
than the adjacent points along the curve. This will prevent the
correct edge from being added. This can be remedied by adding not
only $\vr_{i}^{\pm}$ at Step 3 of the algorithm, but also any point
whose tangential distance $d_{\vm_{i}}$ to $\vp_{i}$ is not
much longer than the distance between $\vp_{i}$ and $\vr_{i}^{\pm}$.
With some luck, this procedure combined with filtering out leaves
will approximately reconstruct the correct figure.
\vspace{.2in}
\hrule
\begin{algo}
\label{algo:noisyPolygonalization}
\begin{center}
{\bf (Polygonalization with Noise Removal)}
\end{center}
\vspace{.1in}
\hrule
\vspace{.2in}
\noindent { \bf Input: }
[ We assume we are given the dataset ${ \{ \vp_{i}, \vm_{i}\, |\, i = 0,\dots,N-1 \} }$ (which includes spurious data),
the maximal curvature
${\kappa_{m}}$, the noise amplitudes ${\zeta}, {\xi}$, and a
parameter $\epsilon$ satisfying both $\epsilon {\kappa_{m}} < 1/\sqrt{2}$ and
$2 {\kappa_{m}} \epsilon^{2} < {\delta}$.
We assume that adjacent points on a given curve
are less than a distance $\epsilon$ apart, i.e. the curve is
$\epsilon$-sampled. We also assume we are given the number of
leaf removal sweeps $l \in \mathbb{Z}^{+}$ and a
threshold $\alpha \geq 1$. ]
\vspace{.1in}
\begin{enumerate}
\item Compute the graph $G = ({ \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E)$ with edge set:
\begin{equation*}
E = \{ (\vp_{i},\vp_{j}) : \vp_{i} \in \allowed{\epsilon}{\vp_{j}} \textrm{~and~} \vp_{j} \in \allowed{\epsilon}{\vp_{i}}\}
\end{equation*}
\item For each vertex $\vp_{i} \in { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }$:
\begin{enumerate}[a.]
\item Compute the set of vertices
\begin{equation*}
R^{\pm}_{i} = \{ \vp_{j} : (\vp_{i}, \vp_{j}) \in E \textrm{~and~} \pm (\vp_{j}-\vp_{i}) \cdot \vm_{i} > 0 \}
\end{equation*}
\item Find the nearest tangential neighbors, i.e.
\begin{equation*}
\vr^{\pm}_{i} = \textrm{argmin}_{\vq \in R^{\pm}_{i}} \pm (\vq-\vp_{i}) \cdot \vm_{i}
\end{equation*}
\item Find the set of almost-nearest tangential neighbors:
\begin{equation*}
\mathbf{R}^{\pm}_{i} = \{ \vr \in R^{\pm}_{i} : d_{\vm_{i}}(\vp_{i}, \vr) \leq \alpha\, d_{\vm_{i}}(\vp_{i}, \vr^{\pm}_{i}) \}
\end{equation*}
\end{enumerate}
\item Compute the graph $\Gamma = ( { \{ \vp_{i}\, |\, i = 0,\dots,N-1 \} }, E')$ with
\begin{equation*}
E' = \bigcup_{i} \{ (\vp_{i}, \vr) : \vr \in \mathbf{R}^{+}_{i} \} \cup \{ (\vp_{i}, \vr) : \vr \in \mathbf{R}^{-}_{i} \}
\end{equation*}
\item Search through $\Gamma$ for leaves, and remove edges pointing to the leaves. Repeat this $l$ times.
\item Output $\Gamma$.
\end{enumerate}
\end{algo}
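Step 4 above (the leaf-removal sweeps) can be sketched as follows. This is an illustrative reimplementation on an undirected edge-set representation, not the authors' code:

```python
def remove_leaves(edges, num_sweeps):
    """Repeatedly delete edges incident to leaves (degree-1 vertices).

    edges: iterable of frozenset({i, j}) undirected edges.
    Returns a new edge set after at most `num_sweeps` sweeps.
    """
    edges = set(edges)
    for _ in range(num_sweeps):
        # Recompute vertex degrees for the current edge set.
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # An edge is spurious-looking if one endpoint is a leaf.
        leaf_edges = {e for e in edges if any(degree[v] == 1 for v in e)}
        if not leaf_edges:
            break  # no leaves left, stop early
        edges -= leaf_edges
    return edges
```

On a closed polygon with a dangling two-edge tail, two sweeps peel the tail away while leaving the cycle intact, as expected for a polygonalization of closed curves.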
In practice, we have found that $\alpha=1.1$ and
$l=4$ work reasonably well.
Figure \ref{fig:moreNoisyExample} illustrates the result of Algorithm
\ref{algo:noisyPolygonalization}, both with and without filtering.
\begin{figure}
\includegraphics[scale=0.5]{more_noisy_example.eps}
\caption{The same example as in Figure \ref{fig:basicExample}, but with
2000 additional random points added (for a total of 2096).
The original curve is no longer completely reconstructed, but the
general shape is still roughly visible, along with many more spurious
points. The middle figure shows the reconstruction
without Step 4 of Algorithm \ref{algo:noisyPolygonalization}. Filtering leaves with $l=4$ improves
the situation considerably (bottom figure).
\label{fig:moreNoisyExample}}
\end{figure}
\section{Conclusions}
Standard methods for reconstructing a finite set of
curves from sample data are quite general.
By and large, they assume that only point samples are given.
In some applications, however, additional information is available.
In this paper, we have shown that if both sample location and
tangent information are given, significant improvements can be made
in accuracy. We were motivated by a problem in medical imaging,
but believe that
the methods developed here will be of use in a variety of other applications,
including MR tractography and contour line reconstruction in topographic
maps \cite{GORE,TOPO}.
\section{Introduction}
\subsection{The inviscid surface quasi-geostrophic equation}
We are interested in a model of point-vortices for the inviscid surface quasi-geostrophic equation
\begin{equation}\tag{SQG}\label{eq:SQG}
\left\{\begin{split}
&\partial_t\,\omega+v\cdot\nabla\omega=0,\\
&v=\nabla^\perp(-\Delta)^{-s}\omega,
\end{split}\right.
\end{equation}
where $v:\RR_+\times\RR^2\to\RR^2$ is the fluid velocity and $\omega:\RR_+\times\RR^2\to\RR$ is called the active scalar. The notation $\perp$ refers to the counterclockwise rotation of angle $\frac{\pi}{2}$.
Typically, the surface quasi-geostrophic equation models the dynamics, in a rotating frame, of the potential temperature for a stratified fluid subject to Brunt-Väisälä oscillations.
This is a standard model for geophysical fluids and it is intensively used for weather forecast and climatology.
For more details about this physical model, see e.g.~\cite{Pedlowsky_1987} or~\cite{Vallis_2006}.
Mathematically, the quasi-geostrophic equation has many properties in common with the two-dimensional Euler equation written in terms of vorticity
\begin{equation}\tag{Euler 2D}\label{eq:Euler}
\left\{\begin{split}
&\partial_t\,\omega+v\cdot\nabla\omega=0,\\
&v=\nabla^\perp(-\Delta)^{-1}\omega.
\end{split}\right.
\end{equation}
The two-dimensional Euler equation can be seen as a particular case of the quasi-geostrophic equation where $s$ is equal to $1$.
Local well-posedness of classical solutions for (SQG) was established in~\cite{Constantin_Majda_Tabak_1994}, where the analogies with the two and three-dimensional Euler equations are also studied.
Solutions with arbitrary Sobolev growth were constructed in~\cite{Kiselev_Nazarov_2012} in a periodic setting.
So far, and contrary to the two-dimensional Euler equation, establishing global well-posedness of classical solutions for (SQG) is an open problem.
Note also that the global existence of weak solutions in $L^2(\RR^2)$ was established in~\cite{Resnick_1995}, but below a certain regularity threshold, these weak solutions show dissipative behaviors and non-uniqueness is possible~\cite{Buckmaster_Shkoller_Vicol_2016}.
Exhibiting global smooth solutions or patch solutions is a challenging issue as there is no equivalent of the Yudovich theorem~\cite{Yudovich_1963}.
A first example was recently provided in~\cite{Castro_Cordoba_Gomez_2020} by developing a bifurcation argument from a specific radially symmetric function.
The varia\-tional construction of an alternative example in the form
of a smooth traveling-wave solution was completed
in~\cite{Gravejat_Smets_2019, Godard-Cadillac_2019}.
Corotating patch solutions with two patches~\cite{Hmidi_Mateu_2017} and $N$ patches forming an $N$-fold symmetrical pattern~\cite{Garcia_2020} were recently exhibited with a bifurcation
argument. The $\cC^1$ analogues of these solutions have also been investigated independently in~\cite{Ao_Davila_DelPinto_Musso_Wei_2020}.
Another recent independent result~\cite{Godard-Cadillac_Gravejat_Smets_2020} also builds corotating solutions with $N$ patches using a varia\-tional argument. In this last article, the desingularization of the associated point-vortex problem is achieved.
The present work aims at developing the understanding of the links
between the quasi-geostro\-phic equation and the two-dimensional
Euler equation through the study of the point-vortex model.
In the case of the two-dimensional Euler equation, the
point-vortex model
is a system of differential equations for points on $\RR^2$ that
approximates situations where the vorticity $\omega$ is highly
concentrated around several points.
In such a situation, it is more convenient to see the vorticity as
being a sum of Dirac masses evolving in time.
This model is widely studied in fluid mechanics of the plane. An extensive
presentation of the main results on this system can be found
in~\cite[Part 4]{Marchioro_Pulvirenti_1984},
completed by~\cite{Marchioro_Pulvirenti_1994}.
The desingularization problem, which consists in a rigorous derivation of the point-vortex model, is a classical issue
for the two-dimensional Euler equation~\cite{Smets_VanSchaftingen_2010}; to our
knowledge, it is still open for (SQG) vortices, although recent
results exist~\cite{Geldhauser_Romito_2020, Rosenzweig_2020}.
This article generalizes several existing
results known for the Euler vortices or extends known results for Euler to the quasi-geostrophic case.
The first proposition of this article is the generalization of a uniform bound result, Theorem 2.1 in \cite[Part 4]{Marchioro_Pulvirenti_1984}.
We prove that under the \emph{non-neutral cluster hypothesis}
(defined hereafter) the vortices stay bounded in finite time, and this bound depends neither on the singularity of the kernel nor on the initial position of the
vortices.
We also provide a uniform relative bound for a slight relaxation of the \emph{non-neutral cluster hypothesis}.
In the second part of this work we prove that under the non-neutral cluster hypothesis the trajectories of the vortices for the Euler model converge in finite time, even in the case of collapses. The quasi-geostrophic case is left open.
The third part of this work is devoted to the question of the
improbability of collapses.
This consists in studying the Lebesgue measure of the initial conditions leading to a collapse, which is expected to be equal to $0$.
This question has been successfully answered by Theorem 2.2 in \cite[Part 4]{Marchioro_Pulvirenti_1984} under the \emph{non-neutral cluster hypothesis} for the Euler point-vortices.
The extension to the quasi-geostrophic case was achieved by~\cite{Geldhauser_Romito_2020} in the case $1/2<s<1$.
We generalize this result to (SQG) for all $s\in(0,1]$ and we weaken the \emph{non-neutral cluster hypothesis} since we allow the total sum of the intensities of the vortices to be equal to $0$.
\subsection{Presentation of the point-vortex model}~\label{sec:presentation}
The point-vortex model on the plane $\RR^2$ consists in assuming that at time $t=0$ the vorticity can be written as a sum of Dirac masses,
\begin{equation}\label{eq:vorticity Dirac}
\omega(t=0,x)=\sum_{i=1}^Na_i\delta_{x_i}.
\end{equation}
The points $x_i$ are the respective position of the vortices $a_i\delta_{x_i}$ and the coefficients $a_i\neq0$ are their intensity.
The first equation in~\eqref{eq:SQG} or~\eqref{eq:Euler} is a transport equation on the vorticity $\omega$.
It is expected that the Dirac masses initially located at $x_i$ are left unchanged but transported by the flow.
Formally, if we solve the evolution equations~\eqref{eq:SQG} or~\eqref{eq:Euler} with initial datum~\eqref{eq:vorticity Dirac}, we obtain that the initial speed is given by
\begin{equation}
\label{eq:vorticity Dirac in speed}
v(t=0,x)=\sum_{i=1}^Na_i\nabla^\perp_xG\big(|x_i-x|\big),
\end{equation}
where $G$ is the profile of the Green function of the fractional Laplace operator $(-\Delta)^s$ in the plane $\RR^2$. Here the parameter $s$ is chosen to be in $(0,1]$. In this case, the profiles $G:\RR_+^\ast\to\RR$ are given by:
\begin{equation}\label{def:Green functions}
G_1(r):=\frac{1}{2\pi}\log\Big(\frac{1}{r}\Big)\qquad\mathrm{and}\qquad G_s(r):=\frac{\Gamma(1 - s)}{2^{2s} \pi \Gamma(s)}\,\frac{1}{r^{2(1 - s)}},
\end{equation}
with $\Gamma$ the classical Gamma function.
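For concreteness, these two profiles can be evaluated numerically. The following sketch is our own illustration, using exactly the normalization above:

```python
import math

def green_profile(s, r):
    """Profile of the Green function of (-Delta)^s on the plane R^2.

    s = 1 gives the logarithmic Euler kernel G_1;
    0 < s < 1 gives the power-law quasi-geostrophic kernel G_s.
    """
    if not 0.0 < s <= 1.0:
        raise ValueError("s must lie in (0, 1]")
    if s == 1.0:
        return math.log(1.0 / r) / (2.0 * math.pi)
    coeff = math.gamma(1.0 - s) / (2.0 ** (2.0 * s) * math.pi * math.gamma(s))
    return coeff / r ** (2.0 * (1.0 - s))
```

For $s=1/2$ the coefficient simplifies to $1/(2\pi)$, so $G_{1/2}(r)=1/(2\pi r)$, the usual half-Laplacian kernel.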
Nevertheless, a problem arises from the singularity of the speed in~\eqref{eq:vorticity Dirac in speed}.
Since the vorticity is concentrated at single points, it is usually assumed that a vortex does not interact with itself but only with the vortices that are at a positive distance.
We derive that the differential equation describing the evolution of the position of the vortices is given by
\begin{equation}\label{eq:vortex equation}
\frac{d}{dt}x_i(t)=\sum_{\substack{j=1\\j\neq i}}^Na_j\nabla^\perp G\big(|x_i(t)-x_j(t)|\big).
\end{equation}
In the work~\cite[Part 4]{Marchioro_Pulvirenti_1984}, the authors only consider the case $s=1$ corresponding to the two-dimensional Euler equation.
We are going to generalize some of their results to more general kernel profiles $G$ that include the quasi-geostro\-phic case.
In the sequel, the function $G:\RR_+^\ast\to\RR$ is assumed to be chosen such that~\eqref{eq:vortex equation} satisfies the hypotheses of the Cauchy-Lipschitz theorem (also known as the Picard-Lindelöf theorem) as long as the distances between vortices remain positive.
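As an illustration of the dynamic~\eqref{eq:vortex equation} in the Euler case $s=1$, a minimal numerical sketch (ours, with a naive explicit time stepper; it is not part of the analysis) reads:

```python
import math

def vortex_velocities(positions, intensities):
    """Right-hand side of the point-vortex system for the Euler kernel
    G_1(r) = (1/2pi) log(1/r): each vortex is advected by the
    perpendicular gradients generated by the other vortices."""
    n = len(positions)
    vel = []
    for i in range(n):
        xi, yi = positions[i]
        ux, uy = 0.0, 0.0
        for j in range(n):
            if j == i:
                continue  # no self-interaction
            xj, yj = positions[j]
            rx, ry = xi - xj, yi - yj
            d2 = rx * rx + ry * ry
            # grad G_1 = -r / (2 pi |r|^2); perp rotation (u, v) -> (-v, u)
            gx = -rx / (2.0 * math.pi * d2)
            gy = -ry / (2.0 * math.pi * d2)
            ux += intensities[j] * (-gy)
            uy += intensities[j] * gx
        vel.append((ux, uy))
    return vel

def step_euler(positions, intensities, dt):
    """One explicit Euler time step (illustration only)."""
    vel = vortex_velocities(positions, intensities)
    return [(x + dt * ux, y + dt * uy)
            for (x, y), (ux, uy) in zip(positions, vel)]
```

For a pair with intensities $+1$ and $-1$ at unit distance, both vortices receive the same velocity of magnitude $1/(2\pi)$, recovering the rigidly translating vortex pair mentioned below.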
\begin{definition}[Set of collapses]\label{defi:collisions}
The set of initial data such that two or more vortices collapse on the interval of time $[0,T)$ is defined by
\begin{equation}\label{def:collisions T}
\mC_T:=\Big\{X\in\RR^{2N}\;:\;\exists\;T_X\in[0,T),\;\;\liminf\limits_{t\to T_X}\;\min\limits_{i\neq j}\;|x_i(t)-x_j(t)|=0\Big\}.
\end{equation}
We then set
\begin{equation}
\mC:=\bigcup_{T=1}^{+\infty}\mC_T.
\end{equation}
The set $\mC$ is called the \emph{set of collapses} and $T_X$ is the time of collapse associated to the initial datum $X\in\mC.$ Note that these sets depend on the choice of the kernel $G$.
\end{definition}
The point-vortex differential equation~\eqref{eq:vortex equation} is well-defined for every initial datum $X\in\RR^{2N}\setminus\mC$.
If we restrict the analysis to a bounded interval of time $[0,T)$ then it is well-defined for every initial datum $X\in\RR^{2N}\setminus\mC_T$, which allows us to study a possible vortex collapse at time $t=T.$
Concerning the point-vortex problem, the main feature to point out about this dynamics is its Hamiltonian nature. The Hamiltonian of the point-vortex system is given by
\begin{equation}\label{def:hamiltonnian}
\begin{array}{cccc}
H:&\RR^{2N}&\longrightarrow&\RR,\\\quad &X=(x_1\dots x_N)&\longmapsto&\displaystyle\sum_{i\neq j}a_i\,a_j\, G\big(|x_i-x_j|\big).
\end{array}
\end{equation}
The system~\eqref{eq:vortex equation} can be rewritten as
\begin{equation}
a_i\frac{d}{dt}x_i(t)=\frac{1}{2}\,\nabla^\perp_{x_i}H(X),
\end{equation}
The first consequence of this Hamiltonian reformulation is the preservation of the Hamiltonian $H$ along the flow $S^t$ of~\eqref{eq:vortex equation}:
\begin{equation}\label{lem:preservation Hamiltonien}
\forall\;t\in[0,T),\qquad\frac{d}{dt}H(S^tX)=0.
\end{equation}
We recall that the flow of a differential equation is the function $S^t$ that maps the position $X\in\RR^{2N}$ at time $t=0$ to the position at time $t$. In other words, $S^tX=\big(x_1(t),\dots,x_N(t)\big)$ is the solution to~\eqref{eq:vortex equation} with initial positions $X=(x_1,\dots,x_N)$.
Another consequence of the Hamiltonian structure of the system is the Liouville theorem, which ensures the preservation of the Lebesgue measure by the flow.
More precisely, if $V_0\subseteq\RR^{2N}\setminus\mC_T$ is measurable, then we have
\begin{equation}\label{lem:liouville theorem}
\forall\;t\in[0,T),\qquad\frac{d}{dt}\;\cL^{2N}(S^tV_0)=0,\end{equation}
where $\cL^{2N}$ denotes the Lebesgue measure on $\RR^{2N}$.
For the proof of the Liouville Theorem, we refer to the one given by Arnold in~\cite[Part 3]{Arnold_1978}.
With the Hamiltonian formulation also comes the Noether theorem~\cite{Noether_1918} that provides the quantities left invariant by the flow corresponding to the geometrical invariances of the Hamiltonian $H$.
The \emph{vorticity vector} is defined for all initial datum $X\in\RR^{2N}$ by
\begin{equation}\label{def:vorticity vector}
M(X):=\sum_{i=1}^Na_i\,x_i.
\end{equation}
The translation invariance of $H$ implies the conservation of the vorticity vector:
\begin{equation}\label{lem:vorticity vector}
\forall\;t\in[0,T),\qquad\frac{d}{dt}M(S^tX)=0.\end{equation}
When the system is non-neutral, meaning that $\sum_ia_i\neq0$, this conservation law implies the preservation of the \emph{center of vorticity} of the system defined by
\begin{equation}\label{def:vorticity center}
B(X):=\Big(\sum_{i=1}^Na_i\Big)^{-1}\sum_{i=1}^Na_i\,x_i.
\end{equation}
Similarly, invariance under rotations implies the conservation of the moment of inertia defined by
\begin{equation}\label{def:inertia momentum}
I(X):=\sum_{i=1}^Na_i\,|x_i|^2.
\end{equation}
We have:
\begin{equation}\label{lem:inertia momentum}
\forall\;t\in[0,T),\qquad\frac{d}{dt}I(S^tX)=0.\end{equation}
The combination of these two conservation laws implies the preservation of
\begin{equation}\label{def:collapse constraint}
C(X):=\sum_{i=1}^N\sum_{\substack{j=1\\i\neq j}}^Na_i\,a_j\,|x_i-x_j|^2.
\end{equation}
Indeed, if we expand the square in the right-hand side of~\eqref{def:collapse constraint}, we obtain by a straightforward calculation
\begin{equation}\label{okay}C(X)=2\Big(\sum_{i=1}^Na_i\Big)I(X)-2\big|M(X)\big|^2.
\end{equation}
The preservation of this quantity is referred to as a collapse constraint because it is widely used in the study of vortex collapses for small numbers of vortices~\cite{Novikov_1975, Novikov_Sedov_1979, Aref_1979, Badin_Barry_2018}.
Indeed, a collapse means that $|x_i-x_j|^2$ vanishes for some values of $i$ and $j$.
Combined with the preservation of $C$, this gives a necessary condition for a vortex collapse. For instance, in the case of a collapse for a system of $3$ vortices, this gives the constraint $C=0$.
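The identity $C(X)=2\big(\sum_ia_i\big)I(X)-2\big|M(X)\big|^2$ is also easy to confirm numerically. The following sketch (illustrative, with arbitrary random data) evaluates $M$, $I$ and $C$ directly from their definitions:

```python
import random

def conserved_quantities(positions, intensities):
    """Evaluate the vorticity vector M, the moment of inertia I and the
    collapse constraint C from their definitions."""
    mx = sum(a * x for a, (x, _) in zip(intensities, positions))
    my = sum(a * y for a, (_, y) in zip(intensities, positions))
    inertia = sum(a * (x * x + y * y)
                  for a, (x, y) in zip(intensities, positions))
    c = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = positions[i][0] - positions[j][0]
                dy = positions[i][1] - positions[j][1]
                c += intensities[i] * intensities[j] * (dx * dx + dy * dy)
    return (mx, my), inertia, c
```

For any configuration, $C$ agrees with $2(\sum_ia_i)I-2|M|^2$ up to rounding errors, confirming the expansion of the square.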
\section{Main results}
\subsection{Uniform bound results}
\subsubsection{The uniform bound theorem}
The specific case of the Euler point-vortex system corresponds to the Green function of the Laplacian~\eqref{def:Green functions}.
This particular case is studied in \cite[Part 4]{Marchioro_Pulvirenti_1984}.
More precisely, they focused on a specific situation in which the intensities of the vortices satisfy
\begin{equation}\label{eq:no null partial sum}
\forall\;A\subseteq\{1\dots N\}\; s.t.\;\;A\neq\emptyset,\qquad\sum_{i\in A}a_i\neq0.
\end{equation}
A vortex system such that the sum of all the intensities $a_i$ is equal to $0$ is called in~\cite[Part 4]{Marchioro_Pulvirenti_1984} a ``\emph{neutral system}''.
No name is given to Hypothesis~\eqref{eq:no null partial sum}, and we suggest calling it the ``\emph{non-neutral clusters hypothesis}''.
The main interest of this hypothesis lies in the preservation of the center of vorticity property~\eqref{lem:vorticity vector}.
Under the non-neutral cluster hypothesis, the center of vorticity is well defined, not only for the whole system but also for any subset of vortices.
Intuitively, a vortex cluster is expected to ``\emph{turn around its center of vorticity}''.
More precisely, we provide a bound on the trajectories that is uniform with respect to the initial datum $X\in\RR^{2N}$ but also with respect to the singularity of the kernel profile $G$ near $0$.
\begin{proposition}[Uniform bound on the trajectories]\label{thrm:borne uniforme}
Consider the point-vortex dynamic~\eqref{eq:vortex equation} under the non-neutral clusters hypothesis~\eqref{eq:no null partial sum} with a kernel profile $G\in\cC^{1,1}_{loc}\big(\RR_+^\ast\big)\cap\cC^{1,1}\big([1,+\infty[\big)$.
Then, given any positive time $T>0$, there exists a constant $C$ such that for every initial datum $X\in\RR^{2N}\setminus\mC_T$,
\begin{equation}\label{eq:borne uniforme}
\sup\limits_{t\in[0,T)}\big|X-S_G^tX\big|\leq C,
\end{equation}
where $S^t_G$ is the~\eqref{eq:vortex equation} flow associated to the kernel profile $G$. Moreover, the constant $C$ depends only on $N$, the intensities $a_i$, the final time $T$ and on the supremum of $r\mapsto\big|\frac{dG}{dr}(r)\big|$ for $r\geq 1$. This constant $C$ depends neither on the initial datum $X\in\RR^{2N}$ nor on the singularity of the kernel $r\mapsto G(r)$ when $r\to0$.
\end{proposition}
In~\cite[Part 4]{Marchioro_Pulvirenti_1984}, a weaker version of this result is established only for the Euler case.
We extend this result to a general setting where it holds no matter what the singularity of the kernel $G$ in $0^+$ is, with a proof largely inspired by Theorem $2.1$ in~\cite[Part 4]{Marchioro_Pulvirenti_1984}.
We remark that the non-neutral cluster hypothesis~\eqref{eq:no null partial sum} is essential.
Indeed, the simple situation of a vortex pair with intensities $+1$ and $-1$ gives rise to a translation motion along two parallel lines at a speed that blows up as the initial distance between the two vortices goes to $0^+$.
The trajectories are bounded in finite time but the bound is not uniform, depending on the initial conditions and on the singularity of the kernel near $r=0$.
We underline that Proposition~\ref{thrm:borne uniforme} applies to the quasi-geostrophic case given by
the Green function of the fractional Laplacian~\eqref{def:Green functions}.
In this sense, this proposition is the
extension of Theorem $2.1$ in~\cite[Part 4]{Marchioro_Pulvirenti_1984}
to the quasi-geostrophic case.
\subsubsection{The case of intensities $a_i$ all positive}
As a further consequence of Proposition~\ref{thrm:borne uniforme}, we can show the impossibility of collapses in the case where the intensities $a_i$ are all positive.
\begin{corollary}\label{lem:non collapse}
Let $G$ be a profile such that
\begin{equation}
\big|G(r)\big|\longrightarrow+\infty\qquad\text{as } r\to0.\label{eq:hyp on G 1}
\end{equation}
Assume that the intensities $a_i$ are all positive and consider an
initial datum $X\in\RR^{2N}$ for the point-vortex system~\eqref{eq:vortex equation} such that $x_i\neq x_j$ for all $i\neq j$. Then there is no collapse of vortices at any time.
\end{corollary}
Contrary to the existence result given by Theorem~$2.2$
in~\cite[Part 4]{Marchioro_Pulvirenti_1984} and its generalization to (SQG) in Theorem~\ref{thrm:Improved Marchioro Pulvirenti},
this result is true for every initial datum and not only for almost every one. Hypothesis~\eqref{eq:hyp on G 1} may appear a bit restrictive. Indeed, if we consider the kernels
\begin{equation}
\forall\;r>0,\qquad\frac{dG}{dr}(r)=\frac{1}{r^\alpha},
\end{equation}
with $0<\alpha<1$ then the associated kernel $G$ does not satisfy~\eqref{eq:hyp on G 1}.
The possibility to extend Corollary~\ref{lem:non collapse} to this case is an open problem.
Nevertheless, the physically relevant cases are $\alpha=1$ for the Euler model and $1<\alpha<3$ for the quasi-geostrophic model.
For these values of $\alpha$, Corollary~\ref{lem:non collapse} applies.
\subsubsection{The uniform relative bound theorem}
A natural question concerning Proposition~\ref{thrm:borne uniforme} is to ask what this result becomes when the non-neutral cluster hypothesis~\eqref{eq:no null partial sum} ceases to be satisfied. For instance,
we consider instead of~\eqref{eq:no null partial sum} the following hypothesis:
\begin{equation}\label{eq:no null sub partial sum}
\forall\;A\subseteq\{1\dots N\}\; s.t.\;\;A\neq\emptyset\;\;\text{and}\;\;A\neq\{1\dots N\},\qquad\quad\sum_{i\in A}a_i\neq0.
\end{equation}
In other words, all the strict sub-clusters must have the sum of their
intensities different from $0$, but we allow the
total sum $\sum_{i=1}^Na_i$ to be equal to $0$. This situation is
achieved for instance by the vortex pair of intensities $+1$
and $-1$, which translates at a constant speed.
\begin{proposition}[Uniform relative bound on the trajectories]\label{thrm:uniform relative bound}
For a given set of points noted $X=(x_1\dots x_N)\in\RR^{2N}$, we define the diameter of this set by
\begin{equation}\label{def:diam}
\diam(X)\;:=\;\max\limits_{i\neq j}|x_i-x_j|.
\end{equation}
Consider the point-vortex dynamic~\eqref{eq:vortex equation} under hypothesis~\eqref{eq:no null sub partial sum} with a kernel profile $G\in\cC^{1,1}_{loc}\big(\RR_+^\ast\big)\cap\cC^{1,1}\big([1,+\infty[\big)$.
Let $T>0$ be the final time.
Then for every
initial datum $X\in\RR^{2N}$ that does not lead to a collapse on $[0,T)$,
\begin{equation}\label{eq:borne uniforme relative}
\sup\limits_{t\in[0,T)}\diam\big(S_G^tX\big)\leq \diam(X)+C,
\end{equation}
where $S^t_G$ is the flow associated to~\eqref{eq:vortex equation}
with the kernel profile equal to $G$.
Moreover, the constant $C$ depends only on $N$, the intensities $a_i$, the final time $T$ and on the supremum of $r\mapsto\big|\frac{dG}{dr}(r)\big|$ for $r\geq 1$.
\end{proposition}
This constant $C$ depends neither on the initial datum $X\in\RR^{2N}$ nor on the singularity of the kernel $r\mapsto G(r)$ when $r\to0$. The proof is close to that of Proposition~\ref{thrm:borne uniforme}.
\subsection{Convergence result for Euler point-vortices}\label{sec:convergence result}
The systems of vortices that are studied in more detail
in~\cite[Part 4]{Marchioro_Pulvirenti_1984} are the systems for which
the non-neutral clusters hypothesis~\eqref{eq:no null partial sum}
holds. As stated in the previous section, their result concerning
uniform bounds on the trajectories can be improved to cover a much
wider class of point-vortex systems, including the quasi-geostrophic
case. Nevertheless, in the particular case of the Euler point-vortex
system with the non-neutral clusters hypothesis~\eqref{eq:no null partial sum}, we are also able to prove a convergence result.
When vortices come to collapse, their speed may become infinite as a
consequence of the kernel profile singularity in $0^+$.
But if their speed blows up, then any pathological behavior near the
time of collapse $T_X$ is \emph{a priori} possible.
We prove here that the trajectories are actually convergent in the Euler case.
\begin{theorem}[Convergence for Euler vortices under non-neutral clusters hypothesis]\label{thrm:generalization of no partial sum with ordinary}
Consider the point-vortex model~\eqref{eq:vortex equation} under
hypothesis~\eqref{eq:no null partial sum} with a kernel profile $G_1$
corresponding to the Green function of the
Laplacian~\eqref{def:Green functions}.
Let $X\in\mC$ be an initial datum leading to a collapse at time $T_X$.
Then, for all $i=1\dots N$, there exists an $x_i^\ast\in\RR^{2}$ such that
\begin{equation}\label{eq:continuity of the trajectories 2}
x_i(t)\longrightarrow x_i^\ast\qquad\text{as }t\to T_X^-.
\end{equation}
\end{theorem}
The first step of the proof consists in the following idea:
for a fixed value of $t\in[0,T_X)$, define the distribution
$P_t:=\sum_{i=1}^Na_i\delta_{x_i(t)}.$
The point-vortex equation gives that this distribution converges as $t\to T_X$.
In a second step, it is possible to prove that this convergence is
actually stronger and to obtain~\eqref{eq:continuity of the
trajectories 2} by exploiting the non-neutral cluster hypothesis.
The proof of this result is specific to the Euler case and we do not
know yet whether the conclusion extends to the (SQG) case.
\subsection{Improbability of collapses for point-vortices}\label{sec:improbability collapses}
Understanding the sets of collapses $\mC_T$ is an important issue for the study of point-vortices since these sets give the time of existence of the point-vortex system for a given initial datum.
Marchioro and Pulvirenti, in their study of the point-vortex
problem~\cite{Marchioro_Pulvirenti_1984}, provided the following
improbability result.
\begin{theorem}[Improbability of collapses, Marchioro-Pulvirenti, 1984]~\label{thrm:Marchioro Pulvirenti existence theorem}
Consider the point-vortex problem~\eqref{eq:vortex equation} with
kernel profile $G_1$, the kernel associated to the Green function of
the Laplacian on the plane~\eqref{def:Green functions}. Assume that the intensities of the vortices satisfy
the \emph{non neutral cluster hypothesis}~\eqref{eq:no null partial sum}.
Then, the set of initial data for the dynamic~\eqref{eq:vortex equation} that lead
to collapses in finite time
has Lebesgue measure equal to $0$.
\end{theorem}
The non-neutral cluster hypothesis~\eqref{eq:no null partial sum} is an important hypothesis for this theorem as it allows us to use Proposition~\ref{thrm:borne uniforme}, which provides a uniform bound.
Concerning this result, the most natural question consists in
removing this assumption on the intensities of the vortices, which eventually leads to the
following conjecture.
\begin{conjecture}\label{conj:improbability of collapses}
The set of collapses $\mC$ has a Lebesgue measure $0$.
\end{conjecture}
It is not yet possible to prove such a conjecture.
The main difficulty lies in the understanding of the situations
where some vortices collide in such a way that they go to infinity
in finite time, or show an unbounded pathological behavior. The existence of trajectories that become unbounded in finite time is also an open problem.
Although we are not able to settle Conjecture~\ref{conj:improbability of collapses},
we are able to improve Theorem~\ref{thrm:Marchioro Pulvirenti existence theorem}, as stated in the following theorem.
\begin{theorem}[Improbability of collapses for Euler and SQG vortices]\label{thrm:Improved Marchioro Pulvirenti}
Consider the point-vortex problem~\eqref{eq:vortex equation} with
kernel profile $G_s$, the kernel associated to the Green function of
the Laplacian or the fractional Laplacian on the plane~\eqref{def:Green functions} for $s\in(0,1]$. Assume that the intensities of the vortices satisfy~\eqref{eq:no null sub partial sum}.
Then, the set of initial data leading to collapses
has Lebesgue measure equal to $0$.
\end{theorem}
This theorem is slightly more general than
Theorem~\ref{thrm:Marchioro Pulvirenti existence theorem} for two reasons.
First, it holds both for the Euler and for the quasi-geostrophic point-vortex
models. This aspect was already partially addressed in~\cite{Geldhauser_Romito_2020}, where the result was obtained
for $s>1/2$. Indeed, the value $s=1/2$ appears to be critical for the quasi-geostrophic equations, where
integrability problems arise. Our arguments manage to pass through these difficulties and yield
the result for all $s\in(0,1]$.
The second improvement lies in the fact that we replace the non-neutral cluster
hypothesis~\eqref{eq:no null partial sum} by the weaker hypothesis~\eqref{eq:no null sub partial sum}.
At first sight, this weaker hypothesis may look like a small improvement.
Nevertheless, it makes quite an important difference, because the non-neutral cluster
hypothesis~\eqref{eq:no null partial sum} implies that the trajectories are bounded (Theorem~\ref{thrm:borne uniforme}),
whereas hypothesis~\eqref{eq:no null sub partial sum} only implies a relative bound (Theorem~\ref{thrm:uniform relative bound}).
In other words, this improbability result allows one simple unbounded behavior: the case where the vortices collectively go to infinity.
In the article~\cite{Geldhauser_Romito_2020}, the authors study the evolution in time of the following function:
\begin{equation}
\Phi(X):=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^N\big|x_i-x_j\big|^{-\beta}
\end{equation}
when the $x_i(t)$ evolve according to the point-vortex equation~\eqref{eq:vortex equation}, in order to obtain sufficient conditions for a collapse.
Indeed, this function blows up if $|x_i(t)-x_j(t)|\to0$ as $t\to T$, at a speed depending on the choice of $\beta>0$, so the collapses can be studied as the set of points for which this function becomes unbounded in time.
Unfortunately, the choice of $\beta$ that must be made in this proof (depending on the value of $s$) creates integrability problems when $0<s\leq1/2$.
We overcome these problems, which arise in the proof of~\cite{Geldhauser_Romito_2020} when $s\leq1/2$, by replacing the singularity $r^{-\beta}$ in the definition of $\Phi$ by a regularized version with parameter $\varepsilon>0$.
We then proceed to an argument similar to that of Marchioro and Pulvirenti~\cite[Part 4]{Marchioro_Pulvirenti_1984} and conclude by letting $\varepsilon\to0.$
\section{Outline of the proofs}
To ease the reading and understanding of the article, we only
sketch here the proofs of the main theorems. These longer
proofs are decomposed into smaller lemmas, stated in this section,
which form the main intermediate steps.
The links and articulations between the different lemmas are
developed in this section, but the detailed technical proof of each
lemma is postponed to Section~\ref{sec:proofs}.
Concerning Propositions~\ref{thrm:borne uniforme} and~\ref{thrm:uniform relative bound},
the proofs are shorter and are given directly in Section~\ref{sec:proofs}.
\subsection{Outline of the proof for Theorem~\ref{thrm:generalization of no partial sum with ordinary}}
The first part of the proof consists in considering Dirac masses
located at the vortices and proving that the resulting measure converges in the
distributional sense as $t\to T_X$, the time of collapse.
\begin{lemma}[Convergence of the Dirac measures]\label{lem:Dirac}
Let $X\in\RR^{2N}$. From the vortices $x_i(t)$ evolving according to
the differential equations~\eqref{eq:vortex equation} with initial
datum $X$, define the distribution
\begin{equation}\label{def:Dirac sum}
P_X(t):=\sum_{i=1}^Na_i\,\delta_{x_i(t)},
\end{equation}
where $\delta_x$ denotes the Dirac mass at point $x\in\RR^2$.
Assume now that the evolution problem associated to the initial
datum $X$ is well defined on an interval of time $[0,T)$ for some $T>0$.
Then, there exists $X^\ast\in\RR^{2N}$ and $b\in\{0,1\}^N$ such that
\begin{equation}\label{eq:Dirac limit}
\sum_{i=1}^Na_i\,\delta_{x_i(t)}\longrightarrow\sum_{i=1}^Na_i\,b_i
\,\delta_{x_i^\ast}\qquad\text{in the weak sense of measure as } t\to T^-
\end{equation}
and such that $$b_i=0\qquad\Longrightarrow
\qquad\sup\limits_{t\in[0,T)}|x_i(t)|=+\infty.$$
\end{lemma}
Note that the proof makes no use of the non-neutral cluster
hypothesis~\eqref{eq:no null partial sum}. In the particular case
where this hypothesis is satisfied, Theorem~\ref{thrm:borne uniforme} implies that the vortices stay bounded on bounded
intervals of time. Therefore, in this particular case, all the
coefficients $b_i$ given by this lemma are equal to $1$.
The next step of the proof exploits more precisely Hypothesis~\eqref{eq:no null partial sum} and the continuity of the
trajectories to obtain that a given vortex $x_i(t)$ can only
converge, as $t\to T^-$, to one element of the set $\{x_1^\ast\dots x_N^\ast\}$ given by Lemma~\ref{lem:Dirac}. More precisely, if a
given point $x_i(t)$ has at least two accumulation points, it is
possible to extract a third accumulation point that does not belong to
$\{x_1^\ast\dots x_N^\ast\}$, and this contradicts
Lemma~\ref{lem:Dirac}. See Section~\ref{sec:proofs} for a detailed proof.
\subsection{Outline of the proof for Theorem~\ref{thrm:Improved Marchioro Pulvirenti}}
\subsubsection{The modified system}
Let $i$ be fixed in $\{1\dots N\}.$
The modified system consists in studying the evolution of $y_{ij}:=x_i-x_j$ for $j\in\{1\dots N\}\setminus\{i\}$. The idea is that knowing the relative positions of the vortices (the differences $y_{ij}$) is enough to study the problem of collapses.
The initial problem~\eqref{eq:vortex equation} implies that
\begin{equation}\label{eq:evolution difference}
\frac{d}{dt}(x_i-x_j)(t)=\sum_{k\neq i}a_k\nabla^\perp G_s\big(|x_i-x_k|\big)-\sum_{k\neq j}a_k\nabla^\perp G_s\big(|x_j-x_k|\big).
\end{equation}
Therefore, the evolution of $y_{ij}$ is given by
\begin{equation}
\label{eq:evolution Y}
\frac{d}{dt}y_{ij}=(a_i+a_j)\nabla^\perp G_s\big(|y_{ij}|\big)+
\sum_{k\neq i,j}a_k\Big(\nabla^\perp G_s\big(|y_{ik}|\big)
+\nabla^\perp G_s\big(|y_{ij}-y_{ik}|\big)\Big).
\end{equation}
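For the reader's convenience, let us detail how the first term of~\eqref{eq:evolution Y} arises. The contribution $k=j$ of the first sum in~\eqref{eq:evolution difference} combines with the contribution $k=i$ of the second sum, since the map $x\mapsto\nabla^\perp G_s(|x|)$, understood as evaluated at the difference vector, is odd:

```latex
\[
a_j\,\nabla^\perp G_s\big(|x_i-x_j|\big)-a_i\,\nabla^\perp G_s\big(|x_j-x_i|\big)
  =(a_i+a_j)\,\nabla^\perp G_s\big(|y_{ij}|\big).
\]
```

The remaining terms $k\neq i,j$ are rewritten using $x_i-x_k=y_{ik}$ and $x_j-x_k=-(y_{ij}-y_{ik})$, which produces the sum in~\eqref{eq:evolution Y}.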
The main interest of this new system is that Theorem~\ref{thrm:Improved Marchioro Pulvirenti} can be reformulated using
only the differences $y_{ij}$.
\begin{lemma}[Reformulation of Theorem~\ref{thrm:Improved Marchioro Pulvirenti}]\label{lem:reformulation}
Denote by $Y_i(t):=\big(y_{ij}(t)\big)_{j\neq i}$ the solution
to~\eqref{eq:evolution Y} at time $t$ with initial datum $Y_i\in\RR^{2(N-1)}$.
Assume that for all $i\in\{1\dots N\}$ and for all $T>0$ and $\rho>0$ the set
\begin{equation}\label{eq:the set with measure zero}
\Big\{Y_i:=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}:\exists\;T_X\in[0,T],
\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\Big\}
\end{equation}
has Lebesgue measure $\cL^{2(N-1)}$ equal to $0$. Then the
conclusion of Theorem~\ref{thrm:Improved Marchioro Pulvirenti} holds.
\end{lemma}
The notation $\cB(x_0,\rho)$ refers to the Euclidean ball of $\RR^2$ with center $x_0$ and radius $\rho$.
The rest of the work then consists in studying the
system~\eqref{eq:evolution Y} and in establishing
that the set~\eqref{eq:the set with measure zero} does have measure $0$.
This modified dynamics has many properties in common with the
original one. In particular, this
new dynamics still satisfies the Liouville property. Define the
function $\cH_{ij}:\RR^{2(N-1)}\to\RR$ by
\begin{equation}\label{eq:hamilton Y}
\cH_{ij}\Big[(y_{ik})_{k\neq i}\Big]:=(a_i+a_j)G_s\big(|y_{ij}|\big)
+y_{ij}\cdot\!\sum_{k\neq i,j}a_k\nabla G_s\big(|y_{ik}|\big)+
\sum_{k\neq i,j}a_kG_s\big(|y_{ij}-y_{ik}|\big).
\end{equation}
Combining this equation with~\eqref{eq:evolution Y} gives
\begin{equation}\label{eq:nabla perp Y}
\frac{d}{dt}y_{ij}=\nabla^\perp_{y_{ij}}\cH_{ij}\Big[(y_{ik})_{k\neq i}\Big].
\end{equation}
This equation says that, in a certain sense, the dynamics of the
vector $Y_i:=(y_{ij})_{j\neq i}$ exhibits a Hamiltonian structure. It
is not a Hamiltonian system, because the function $\cH_{ij}$ depends
on $j$, but there is still a structure involving the operator $\nabla^\perp$. Therefore, the Schwarz theorem gives
\begin{equation}\label{eq:Schwarz Y}
\di_{y_{ij}}\Big(\frac{d}{dt}y_{ij}\Big)=\di_{y_{ij}}\bigg(\nabla^\perp_{y_{ij}}\cH_{ij}\Big[(y_{ik})_{k\neq i}\Big]\bigg)=0.
\end{equation}
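Indeed, writing $\nabla^\perp=(-\partial_2,\partial_1)$ in the coordinates $y_{ij}=(y_{ij}^1,y_{ij}^2)$, the cancellation in~\eqref{eq:Schwarz Y} is nothing but the equality of mixed partial derivatives:

```latex
\[
\di_{y_{ij}}\Big(\nabla^\perp_{y_{ij}}\cH_{ij}\Big)
  =\partial_{y_{ij}^1}\big(-\partial_{y_{ij}^2}\cH_{ij}\big)
   +\partial_{y_{ij}^2}\big(\partial_{y_{ij}^1}\cH_{ij}\big)=0.
\]
```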
Since the velocity field is divergence-free, a Liouville theorem holds for this dynamics.
\begin{lemma}[Liouville theorem]\label{lem:Liouville Theorem for the modified dynamics}
The Lebesgue measure on the space $\RR^{2(N-1)}$ given by
\begin{equation}
\prod_{j\neq i}dy_{ij}
\end{equation}
is preserved by the flow $\mS_{i}^t$ associated to~\eqref{eq:nabla perp Y}.
\end{lemma}
A detailed proof of the Liouville theorem in a more general setting
can be found in~\cite[Part 3]{Arnold_1978}, to which we refer for the
proof of Lemma~\ref{lem:Liouville Theorem for the modified dynamics}.
\subsubsection{Estimating the collapses}
From the kernel $G_s$ given at~\eqref{def:Green functions}, we now define
the regularized profiles $G_{s,\varepsilon}$ for
$\varepsilon\in(0,1]$. The objective here is to remove the singularity of
the kernel $G_s$ near $0^+$. We require $G_{s,\varepsilon}$ to be
$\cC^1$ on $\RR_+$ (up to the boundary) and to satisfy
\begin{align}
&\bullet\qquad G_{s,\varepsilon}(q)=G_s(q)\qquad \mathrm{when}\quad \varepsilon\leq q,\label{eq:condition on G epsilon 1}\\
&\bullet\qquad |G_{s,\varepsilon}(q)|\leq |G_s(q)|\qquad\mathrm{for\;all}\quad q\in\RR_+,\label{eq:condition on G epsilon 2}\\
&\bullet\qquad \Big|\frac{d}{dq}G_{s,\varepsilon}(q)\Big|
\leq \Big|\frac{d}{dq}G_s(\varepsilon)\Big|\qquad\mathrm{for\;all}\quad q\leq\varepsilon,\label{eq:condition on G epsilon 3}\\
&\bullet\qquad |G_{s,\varepsilon}(q)|\leq 2|G_s(\varepsilon)|\qquad\mathrm{for\;all}\quad q\leq\varepsilon.\label{eq:condition on G epsilon 4}
\end{align}
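These four conditions leave much freedom in the choice of $G_{s,\varepsilon}$. Purely as an illustration (the explicit formula below is our own choice and is not needed in the proofs), for the Euler profile $G_1(q)=-\log(q)/(2\pi)$ one may replace the kernel on $[0,\varepsilon]$ by the parabola matching $G_1$ and its derivative at $q=\varepsilon$. The following sketch checks the four conditions numerically near the origin, where the bound $2|G_s(\varepsilon)|$ is effective, for a small value of $\varepsilon$.

```python
import math

def G(q):
    # Euler kernel profile G_1(q) = -log(q) / (2*pi).
    return -math.log(q) / (2.0 * math.pi)

def dG(q):
    # Derivative of G_1.
    return -1.0 / (2.0 * math.pi * q)

def G_eps(q, eps):
    # Illustrative C^1 regularization: below eps, the parabola that
    # matches G and dG at q = eps (our own choice of profile).
    if q >= eps:
        return G(q)
    return G(eps) + dG(eps) * (q * q - eps * eps) / (2.0 * eps)

def dG_eps(q, eps):
    # Derivative of the regularized profile; |dG_eps| <= |dG(eps)| below eps.
    if q >= eps:
        return dG(q)
    return dG(eps) * q / eps
```

For this particular profile, the bound $|G_{1,\varepsilon}|\leq2|G_1(\varepsilon)|$ near the origin requires $\varepsilon$ small enough that $\log(1/\varepsilon)\geq1/2$.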
Since $G_{s,\varepsilon}$ is of class $\cC^{1,1}$ on $\RR_+$, the
dynamics defined by~\eqref{eq:vortex equation} for the kernel
$G_{s,\varepsilon}$ is always well-defined and is Hamiltonian.
In the sequel, we denote by $\mS_{i,\varepsilon}^t$ the flow at time $t$
associated with the evolution equation~\eqref{eq:evolution Y}, with the
kernel profile $G_s$ replaced by the regularization $G_{s,\varepsilon}$.
The motion induced by the kernel profile $G_{s,\varepsilon}$
coincides with the original motion as long as the distances
between the vortices remain larger than $\varepsilon$.
Theorem~\ref{thrm:Improved Marchioro Pulvirenti} can be reformulated
as follows.
\begin{lemma}[Reformulation of
Theorem~\ref{thrm:Improved Marchioro Pulvirenti} with $\varepsilon$-regularized dynamics] \label{lem:reformulation convergence}
Assume that for all $T>0$ and for all $\rho>0$ we have the following convergence:
\begin{equation}\label{eq:reformulation}
\cL^{2(N-1)}\Big\{Y_i=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}\;:
\;\min\limits_{j\neq i}
\inf\limits_{t\in[0,T]}\big|y_{ij}^\varepsilon(t)\big|\leq\varepsilon\Big\}
\xrightarrow[\varepsilon\to0^+]{}0.
\end{equation}
Then for all $i\in\{1\dots N\}$, the set~\eqref{eq:the set with measure zero} has Lebesgue measure $0$.
\end{lemma}
This lemma is nothing more than Lemma~\ref{lem:reformulation} with the
additional parameter $\varepsilon>0$, which allows us to regularize
the kernel.
Therefore, combined with Lemma~\ref{lem:reformulation}, this lemma
gives Theorem~\ref{thrm:Improved Marchioro Pulvirenti} provided that
the convergence~\eqref{eq:reformulation} holds.
\begin{lemma}\label{lem:collapses}
Let $i\in\{1\dots N\}$, $\varepsilon>0$ and $\rho>0$. Then
\begin{equation}\label{eq:convergence}
\cL^{2(N-1)}\Big\{Y_i=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}\;:
\;\min\limits_{j\neq i}\inf_{t\in[0,T]}\big|y_{ij}^\varepsilon(t)\big|\leq\varepsilon\Big\}
\leq C\left\{\begin{array}{ll}
\varepsilon&\quad\text{if }s>1/2,\\
\varepsilon\log(1/\varepsilon)&\quad\text{if } s=1/2,\\
\varepsilon^{2s}&\quad\text{if } s<1/2.
\end{array}\right.
\end{equation}
where the constant $C$ only depends on $N$, $|a_i|$, $T$ and $\rho$.
\end{lemma}
The proof of this last lemma is reminiscent of the proof of
Theorem~\ref{thrm:Marchioro Pulvirenti existence theorem}, with some
new arguments. The main idea is to rely on a Bienaymé-Tchebycheff
inequality applied to a well-chosen function.
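Schematically, for a nonnegative observable $\Phi$ (standing for the regularized variant of the function discussed above; the precise choice is given in Section~\ref{sec:proofs}), the inequality in question takes the form

```latex
\[
\cL^{2(N-1)}\Big\{Y\in\cB(0,\rho)^{2(N-1)}\;:\;\Phi(Y)\geq\lambda\Big\}
  \;\leq\;\frac{1}{\lambda}\int_{\cB(0,\rho)^{2(N-1)}}\Phi(Y)\,dY,
\]
```

and the Liouville theorem (Lemma~\ref{lem:Liouville Theorem for the modified dynamics}) allows one to propagate such a bound along the flow.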
This last estimate~\eqref{eq:convergence} with Lemma~\ref{lem:reformulation convergence}
concludes the proof of Theorem~\ref{thrm:Improved Marchioro Pulvirenti}.\qed
\section{Technical proofs}\label{sec:proofs}
\subsection{Proofs of the Lemmas for the Hamiltonian formulation}
\subsubsection{Proof of Lemma~\ref{lem:preservation Hamiltonien}}
Let $X\in\RR^{2N}\setminus\mC_T$. We have
\begin{equation}
\frac{d}{dt}H(S^tX)=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,\nabla G\big(|x_i(t)-x_j(t)|)\cdot\frac{d}{dt}\big(x_i(t)-x_j(t)\big).
\end{equation}
Using the equations of motion~\eqref{eq:vortex equation} gives
\begin{equation}\begin{split}
\frac{d}{dt}H(S^tX)&=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^N\sum_{\substack{k=1\\k\neq i,j}}^Na_i\,a_j\,a_k\nabla G\big(|x_i(t)-x_j(t)|\big)\cdot\nabla^\perp G\big(|x_i(t)-x_k(t)|\big)\\
&\qquad-\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^N\sum_{\substack{k=1\\k\neq i,j}}^Na_i\,a_j\,a_k\nabla G\big(|x_i(t)-x_j(t)|\big)\cdot\nabla^\perp G\big(|x_j(t)-x_k(t)|\big),
\end{split}
\end{equation}
where we used the identity $\nabla G(|x|)\cdot\nabla^\perp G(|x|)=0.$
If we relabel the indices of the second sum above according to the cyclic permutation $i\to k\to j\to i$, we get
\begin{equation}\begin{split}
\frac{d}{dt}H(S^tX)&=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^N\sum_{\substack{k=1\\k\neq i,j}}^Na_i\,a_j\,a_k\nabla G\big(|x_i(t)-x_j(t)|\big)\cdot\nabla^\perp G\big(|x_i(t)-x_k(t)|\big)\\
&\qquad-\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^N\sum_{\substack{k=1\\k\neq i,j}}^Na_i\,a_j\,a_k\nabla G\big(|x_k(t)-x_i(t)|\big)\cdot\nabla^\perp G\big(|x_i(t)-x_j(t)|\big).
\end{split}
\end{equation}
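Before concluding, note that the oddness $\nabla G(|-x|)=-\nabla G(|x|)$ (the gradients being evaluated at the difference vectors) and the elementary relation $u\cdot v^\perp=-u^\perp\cdot v$ yield

```latex
\[
\nabla G\big(|-x|\big)\cdot\nabla^\perp G\big(|y|\big)
  =-\,\nabla G\big(|x|\big)\cdot\nabla^\perp G\big(|y|\big)
  =\nabla^\perp G\big(|x|\big)\cdot\nabla G\big(|y|\big)
  =\nabla G\big(|y|\big)\cdot\nabla^\perp G\big(|x|\big).
\]
```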
Using now the identity $\nabla G(|-x|)\cdot\nabla^\perp G(|y|)=\nabla G(|y|)\cdot\nabla^\perp G(|x|)$, we obtain that the two sums appearing in the expression above are equal, and therefore $$\frac{d}{dt}H(S^tX)=0.$$\qed
\subsubsection{Proof of the conservation of the vorticity vector (Lemma~\ref{lem:vorticity vector})}
Let $X\in\RR^{2N}\setminus\mC_T$. We have
\begin{equation}
\frac{d}{dt}M(S^tX)=\sum_{i=1}^Na_i\,\frac{d}{dt}x_i(t).
\end{equation}
Using the equations of motion~\eqref{eq:vortex equation} gives
\begin{equation}
\frac{d}{dt}M(S^tX)=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,\nabla^\perp G\big(|x_i(t)-x_j(t)|\big).
\end{equation}
If we relabel the sum above by swapping the roles of the indices: $i\leftrightarrow j$, then
\begin{equation}
\frac{d}{dt}M(S^tX)=-\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,\nabla^\perp G\big(|x_i(t)-x_j(t)|\big),
\end{equation}
where we used the identity $\nabla^\perp G(|-x|)=-\nabla^\perp G(|x|).$ Thus, combining these two last equalities gives
\begin{equation}
\frac{d}{dt}M(S^tX)=0.
\end{equation}\qed
\subsubsection{Proof of the conservation of the moment of inertia (Lemma~\ref{lem:inertia momentum})}
Let $X\in\RR^{2N}\setminus\mC_T$. We have
\begin{equation}
\frac{d}{dt}I(S^tX)=2\sum_{i=1}^Na_i\,x_i(t)\cdot\frac{d}{dt}x_i(t).
\end{equation}
Using the equations of motion~\eqref{eq:vortex equation} gives
\begin{equation}
\frac{d}{dt}I(S^tX)=2\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,x_i(t)\cdot\nabla^\perp G\big(|x_i(t)-x_j(t)|\big).
\end{equation}
If we relabel the sum above by swapping the roles of the indices: $i\leftrightarrow j$, then
\begin{equation}
\frac{d}{dt}I(S^tX)=-2\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,x_j(t)\cdot\nabla^\perp G\big(|x_i(t)-x_j(t)|\big),
\end{equation}
where we used the identity $\nabla^\perp G(|-x|)=-\nabla^\perp G(|x|).$ Combining these two equalities leads to
\begin{equation}
\frac{d}{dt}I(S^tX)=\sum_{i=1}^N\sum_{\substack{j=1\\j\neq i}}^Na_i\,a_j\,\big(x_i(t)-x_j(t)\big)\cdot\nabla^\perp G\big(|x_i(t)-x_j(t)|\big).
\end{equation}
Observing now that the vector $\nabla^\perp G(|x|)$ is orthogonal to the vector $x$, we deduce that all the scalar products appearing in the expression above vanish. Thus,
\begin{equation}
\frac{d}{dt}I(S^tX)=0.
\end{equation}\qed
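The three conservation laws proved in this subsection can be observed numerically. The sketch below (an illustration only; the Euler profile $G_1(r)=-\log(r)/(2\pi)$ and the fourth-order Runge--Kutta integrator are our own choices and play no role in the proofs) integrates a small point-vortex system and monitors $H$, $M$ and $I$.

```python
import math

def grad_perp_G(dx, dy):
    # Perpendicular gradient of the Euler kernel G_1(r) = -log(r)/(2*pi),
    # evaluated at the difference vector (dx, dy).
    r2 = dx * dx + dy * dy
    c = 1.0 / (2.0 * math.pi * r2)
    return (dy * c, -dx * c)

def velocity(xs, a):
    # Right-hand side of the point-vortex system: each vortex is advected
    # by the fields generated by all the others.
    vs = []
    for i, (xi, yi) in enumerate(xs):
        vx = vy = 0.0
        for j, (xj, yj) in enumerate(xs):
            if j != i:
                gx, gy = grad_perp_G(xi - xj, yi - yj)
                vx += a[j] * gx
                vy += a[j] * gy
        vs.append((vx, vy))
    return vs

def rk4_step(xs, a, dt):
    # One classical fourth-order Runge-Kutta step.
    def shift(pts, ks, h):
        return [(x + h * kx, y + h * ky) for (x, y), (kx, ky) in zip(pts, ks)]
    k1 = velocity(xs, a)
    k2 = velocity(shift(xs, k1, dt / 2), a)
    k3 = velocity(shift(xs, k2, dt / 2), a)
    k4 = velocity(shift(xs, k3, dt), a)
    return [(x + dt / 6 * (k1[i][0] + 2 * k2[i][0] + 2 * k3[i][0] + k4[i][0]),
             y + dt / 6 * (k1[i][1] + 2 * k2[i][1] + 2 * k3[i][1] + k4[i][1]))
            for i, (x, y) in enumerate(xs)]

def invariants(xs, a):
    # Hamiltonian H, vorticity vector M and moment of inertia I.
    H = sum(a[i] * a[j] *
            (-math.log(math.hypot(xs[i][0] - xs[j][0],
                                  xs[i][1] - xs[j][1])) / (2 * math.pi))
            for i in range(len(xs)) for j in range(len(xs)) if j != i)
    Mx = sum(ai * x for ai, (x, y) in zip(a, xs))
    My = sum(ai * y for ai, (x, y) in zip(a, xs))
    I = sum(ai * (x * x + y * y) for ai, (x, y) in zip(a, xs))
    return H, Mx, My, I
```

With a small enough step, the drifts of $H$, $M$ and $I$ stay within the integrator's accuracy, while the vortex positions themselves move by several orders of magnitude more.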
\subsection{Proofs for the uniform bound results}
\subsubsection{Proof of Propositions~\ref{thrm:borne uniforme} and~\ref{thrm:uniform relative bound}}
The two uniform bounds given by Propositions~\ref{thrm:borne uniforme} and~\ref{thrm:uniform relative bound} are a consequence of the following proposition:
\begin{proposition}\label{prop:borne reformule}
Let $N\in\NN$ with $N\neq0$ and let $a_i\neq0$ for $i=1\dots N$. Let $X\in\RR^{2N}$ be an initial datum for the following differential evolution problem for $t\in[0,T)$:
\begin{equation}\label{eq:vortex perturbed}
\frac{d}{dt}x_i(t)\;=\;\sum_{\substack{j=1\\j\neq i}}^Na_j\,\nabla^\perp G\big(|x_i(t)-x_j(t)|\big)+f\big(t,x_i(t)\big),
\end{equation}
where $G\in\cC^{1,1}_{loc}\big(\RR_+^\ast\big)\cap\cC^{1,1}\big([1,+\infty[\big)$
is the kernel profile and where $f:[0,T)\times\RR^2\to\RR^2$ is a smooth external field.
One assumes that there are no collapses for $t\in[0,T)$, so that the dynamics~\eqref{eq:vortex perturbed} is well-defined.
One sets:
\begin{equation}\label{def:a A0 A}\begin{split}
&a:=\sum_{i=1}^N|a_i|,\\
&A_0:=\min\limits_{\substack{\cP\subseteq\{1\dots N\}\\\cP\neq\emptyset,\,\cP\neq\{1\dots N\}}}\bigg|\sum_{i\in\cP}a_i\bigg|,\\
&A:=\min\limits_{\substack{\cP\subseteq\{1\dots N\}\\\cP\neq\emptyset}}\bigg|\sum_{i\in\cP}a_i\bigg|\;=\min\Big\{A_0;\Big|\sum_{i=1}^Na_i\Big|\Big\}.
\end{split}
\end{equation}
Then there exists a function $C:\NN\times(\RR_+)^5\to\RR_+$ that is non-decreasing with respect to each of its $6$ variables such that:\vspace{0.2cm}
$(i)$ If $A_0\neq0$, then $\forall\;t\in[0,T),\;\forall\, i,j\in\{1\dots N\},$
\begin{equation}\label{eq:announced estimate}
\Big|\big(x_i(t)-x_j(t)\big)-\big(x_i(0)-x_j(0)\big)\Big|\;\leq\;C\Big(N,\,\|f\|_{L^\infty},\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big).
\end{equation}
$(ii)$ If moreover $A\neq 0$, then $\forall\;t\in[0,T),\;\forall\, i\in\{1\dots N\},$
\begin{equation}
\Big|\big(x_i(t)-B(t)\big)-\big(x_i(0)-B(0)\big)\Big|\;\leq\;\frac{a}{A}\,C\Big(N,\,\|f\|_{L^\infty},\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big),
\end{equation}
where $B(t)$ is the center of vorticity of the system~\eqref{def:vorticity center}.
\end{proposition}
The first point of this proposition implies the relative uniform bound stated in Proposition~\ref{thrm:uniform relative bound}, because the non-neutral sub-cluster hypothesis~\eqref{eq:no null sub partial sum} is equivalent to $A_0\neq0$.
Proposition~\ref{thrm:uniform relative bound} is then obtained by choosing $f\equiv0.$
Similarly, the second point of this proposition implies the uniform bound stated in Proposition~\ref{thrm:borne uniforme}, since the non-neutral cluster hypothesis~\eqref{eq:no null partial sum} is equivalent to $A\neq0$. Recall that in the case $f\equiv0$, the center of vorticity $B$ is a constant of the motion (Lemma~\ref{lem:vorticity vector}).
\begin{proof}
To start with, the second point of this proposition is a direct consequence of the first one, because of the following estimate, which holds for all $i=1\dots N$:
\begin{equation}
\big|x_i(t)-B(t)\big|=\bigg|\frac{\sum_{j=1}^Na_j\big(x_i(t)-x_j(t)\big)}{\sum_{j=1}^Na_j}\bigg|\leq\frac{a}{A}\,\max_{j,k=1\dots N}|x_j(t)-x_k(t)|.
\end{equation}
Then, it remains to prove Proposition~\ref{prop:borne reformule}-$(i)$.
The function $C$ is constructed by induction on the number of vortices $N>0$, in a way similar to the proof of the uniform bound in the book of Marchioro and Pulvirenti~\cite[Part 4]{Marchioro_Pulvirenti_1984}.
The base case $N=1$ is straightforward and gives $C(1,\dots)\equiv0$.
Suppose now that the function $C(k,\dots)$ has been constructed for all $k=1\dots N-1$ with $N\geq2$ and satisfies the announced estimate~\eqref{eq:announced estimate}. One now constructs the function $C(N,\dots)$. For that purpose, in view of~\eqref{def:a A0 A}, one defines the following quantities for every nonempty $\cP\subseteq\{1\dots N\}$:
\begin{equation}
\begin{split}
&a(\cP):=\sum_{i\in\cP}|a_i|,\\
&A_0(\cP):=\min\limits_{\substack{\cQ\subseteq\cP\\\cQ\neq\emptyset,\,\cQ\neq\cP}}\bigg|\sum_{i\in\cQ}a_i\bigg|,\\
&A(\cP):=\min\limits_{\substack{\cQ\subseteq\cP\\\cQ\neq\emptyset}}\bigg|\sum_{i\in\cQ}a_i\bigg|\;=\min\Big\{A_0(\cP);\Big|\sum_{i\in\cP}a_i\Big|\Big\}.
\end{split}
\end{equation}
The following monotonicity properties hold:
\begin{equation}\label{eq:aA increasing}\begin{split}
&\cQ\subsetneq\cP\qquad\Longrightarrow\qquad a(\cQ)<a(\cP),\\
&\cQ\subsetneq\cP\qquad\Longrightarrow\qquad A(\cQ)\geq A_0(\cP).
\end{split}
\end{equation}
One also has
\begin{equation}\label{eq:aA and A_0}
\forall\,\cP\subseteq\{1\dots N\},\qquad A_0(\cP)\geq A(\cP).
\end{equation}
Now, let $S>0$ be a large parameter to be fixed later on.
One defines the set
\begin{equation}
\mD_S\;:=\;\big\{t\in[0,T):\max_{j,k=1\dots N}|x_j(t)-x_k(t)|\geq S\big\},
\end{equation}
and $t_0:=\min\,\mD_S\in[0,T]$. Since the largest distance between vortices is greater than or equal to $S$ at time $t_0$, the triangle inequality implies that the vortices are divided into two nonempty clusters $\cP,\cQ\subsetneq\{1\dots N\}$ (with $\cP\cup\cQ=\{1\dots N\}$ and $\cP\cap\cQ=\emptyset$) separated by a distance $d$ that is bounded from below by $S/(N-1).$
Indeed, the least favorable case is the situation where all
the vortices form a rectilinear chain of $N$
points spaced at equal intervals and linking the two vortices that realize the maximal distance.
If the distance $d$ is larger than $1$, then the interaction between two vortices that do not belong to the same cluster is bounded by $\sup_{r\geq1}\big|\frac{dG}{dr}(r)\big|$.
For that purpose, one now assumes that $S\geq N$, so that $d>1$ (this constraint is handled later in the choice of $S$). In that case, the evolution of the points $x_i$ in the cluster $\cP$, using~\eqref{eq:vortex perturbed}, is given by
\begin{equation}\label{eq:vortex cluster evo}
\frac{d}{dt}x_i(t)\;=\;\sum_{\substack{j\in\cP\\j\neq i}}a_j\,\nabla^\perp G\big(|x_i(t)-x_j(t)|\big)+\widetilde{f}\big(t,x_i(t)\big),
\end{equation}
where,
\begin{equation}
\widetilde{f}\big(t,x_i(t)\big)=f\big(t,x_i(t)\big)+\sum_{\substack{j\in\cQ\\j\neq i}}a_j\,\nabla^\perp G\big(|x_i(t)-x_j(t)|\big)
\end{equation}
and
\begin{equation}\label{eq:tilde f}
\|\widetilde{f}\|_{L^\infty}\leq\|f\|_{L^\infty}+a(\cQ)\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|.
\end{equation}
Then, as long as the distance between the two clusters remains larger than $1$, it is possible to apply the result of Proposition~\ref{prop:borne reformule} inductively to~\eqref{eq:vortex cluster evo}. Note that since $A_0=A_0(\{1\dots N\})>0$ by hypothesis, then $A(\cP)>0$ as a consequence of~\eqref{eq:aA increasing}. It is therefore possible to define the center of vorticity of the cluster $\cP$:
\begin{equation}
B_\cP(t):=\Big(\sum_{j\in\cP}a_j\Big)^{-1}\sum_{j\in\cP}a_j\,x_j(t).
\end{equation}
Proposition~\ref{prop:borne reformule}-$(ii)$, applied inductively to the cluster $\cP$ with a number of vortices $\#\cP<N$, gives that for every index $i\in\cP,$
\begin{equation}\label{luth}\begin{split}
\forall\;t\in[t_0,t_1),\qquad\Big|\big(x_i(t)&-B_\cP(t)\big)-\big(x_i(t_0)-B_\cP(t_0)\big)\Big|\\\;&\leq\;\frac{a(\cP)}{A(\cP)}\,C\Big(\#\cP,\,\|\widetilde{f}\|_{L^\infty},\,|t_1-t_0|,\,a(\cP),\,\frac{1}{A_0(\cP)},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big),\end{split}
\end{equation}
where
\begin{equation}
t_1:=\sup\big\{t\in[t_0,T)\;:\;\min_{i\in\cP}\,\min_{j\in\cQ}\,|x_i(t)-x_j(t)|\geq1\big\}.
\end{equation}
Note also that $t_1>t_0$, since $S\geq N$ implies that at time $t_0$ the distance between the clusters is larger than $N/(N-1)>1$.
Since the function $C$ is non-decreasing with respect to each of its variables, the estimate~\eqref{luth} together with the monotonicity properties~\eqref{eq:aA increasing} becomes, for all $i\in\cP$ and for all $t\in[t_0,t_1)$,
\begin{equation}\label{peter pan}\begin{split}
&\qquad\Big|\big(x_i(t)-B_\cP(t)\big)-\big(x_i(t_0)-B_\cP(t_0)\big)\Big|\qquad\\&\leq\;\frac{a}{A_0}C\Big(N-1,\,\|\widetilde{f}\|_{L^\infty},\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big),\\
&\leq\;\frac{a}{A_0}C\Big(N-1,\,\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|,\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big),
\end{split}\end{equation}
where~\eqref{eq:tilde f} is used for the second inequality.
On the other hand, the center of vorticity is preserved by the flow of the point-vortex dynamics (Lemma~\ref{lem:vorticity vector}), and therefore $B_\cP$ is only moved by the external field:
\begin{equation}
\forall\;t\in[t_0,t_1),\qquad\frac{d}{dt}B_\cP(t)=\Big(\sum_{j\in\cP}a_j\Big)^{-1}\sum_{j\in\cP}a_j\widetilde{f}\big(t,x_j(t)\big).
\end{equation}
Thus,
\begin{equation}\label{bambi}
\forall\;t\in[t_0,t_1),\qquad\big|B_\cP(t)-B_\cP(t_0)\big|\leq\frac{a(\cP)}{A(\cP)}T\|\widetilde{f}\|_{L^\infty}\leq\frac{a}{A_0}T\Big(\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big),
\end{equation}
where~\eqref{eq:tilde f} was used for the second inequality.
One now establishes that $t_1=T$ if $S$ is chosen large enough. Let $t\in[t_0,t_1)$ and let $i\in\cP$, $j\in\cQ.$ Equations~\eqref{peter pan} and~\eqref{bambi} together give
\begin{equation}\label{piou piou}\begin{split}
\big|x_i(t)-x_j(t)\big|&\geq \big|x_i(t_0)-x_j(t_0)\big|-\Big|\big(x_i(t)-B_\cP(t)\big)-\big(x_i(t_0)-B_\cP(t_0)\big)\Big|-\Big|B_\cP(t)-B_\cP(t_0)\Big|\\&\qquad-\Big|\big(x_j(t)-B_\cQ(t)\big)-\big(x_j(t_0)-B_\cQ(t_0)\big)\Big|-\Big|B_\cQ(t)-B_\cQ(t_0)\Big|\\
&\geq\frac{S}{N-1}-2\frac{a}{A_0}C\Big(N-1,\,\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|,\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big)\\
&\qquad-2\frac{a}{A_0}T\Big(\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big).
\end{split}
\end{equation}
If one chooses $S$ large enough so that the last expression above is larger than $2$, then the definition of $t_1$ implies $t_1=T$. For that purpose, we set the parameter $S$ equal to
\begin{equation}\label{def:S_N}\begin{split}
S_N:=2(N-1)+2(N-1)\frac{a}{A_0}C\Big(N-1,\,\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|,\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big)\\+2(N-1)\frac{a}{A_0}T\Big(\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big).\end{split}
\end{equation}
Remark that the constraint $S_N\geq N$, previously required in the case $N\geq 2$, is indeed satisfied with such a definition, since $S_N\geq2(N-1)\geq N$.
It is now possible to establish the announced estimate. First, in the case $0\leq t\leq t_0$, by definition of $t_0$,
\begin{equation}\label{chante}
\forall\;i,j=1\dots N,\qquad\big|\big(x_i(t)-x_j(t)\big)-\big(x_i(0)-x_j(0)\big)\big|\leq 2S_N.
\end{equation}
Otherwise if $t_0\leq t<t_1=T$,
\begin{equation}
\begin{split}
&\forall\;i,j=1\dots N,\qquad\Big|\big(x_i(t)-x_j(t)\big)-\big(x_i(0)-x_j(0)\big)\Big|\\&\qquad\leq\Big|\big(x_i(t_0)-x_j(t_0)\big)-\big(x_i(0)-x_j(0)\big)\Big|+\big|x_i(t)-x_i(t_0)\big|+\big|x_j(t)-x_j(t_0)\big|.
\end{split}
\end{equation}
The first term above is estimated as in~\eqref{chante}. For the other terms, suppose for instance that $i\in\cP$; then one uses again~\eqref{peter pan} and~\eqref{bambi} to get
\begin{equation}\label{danse}\begin{split}
\Big|x_i(t)-x_i(t_0)\Big|&\leq\Big|\big(x_i(t)-B_\cP(t)\big)-\big(x_i(t_0)-B_\cP(t_0)\big)\Big|+\Big|B_\cP(t)-B_\cP(t_0)\Big|\\
&\leq\frac{a}{A_0}C\Big(N-1,\,\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|,\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big)\\
&\qquad+\frac{a}{A_0}T\Big(\|f\|_{L^\infty}+a\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big)\\
&=\frac{S_N-2(N-1)}{2(N-1)}\leq \frac{S_N}{2}.
\end{split}
\end{equation}
Thus, gathering the two estimates~\eqref{chante} and~\eqref{danse},
\begin{equation}\label{bio}
\forall\;i,j,\quad\forall\;t\in[0,T),\qquad\big|\big(x_i(t)-x_j(t)\big)-\big(x_i(0)-x_j(0)\big)\big|\leq 3S_N.
\end{equation}
In view of the definition of $S_N$ at~\eqref{def:S_N}, it is possible to define the function $C(N,\dots)$ such that
\begin{equation}
C\Big(N,\,\|f\|_{L^\infty},\,T,\,a,\,\frac{1}{A_0},\,\sup_{r\geq1}\Big|\frac{dG}{dr}(r)\Big|\Big)=3S_N.
\end{equation}
It is a direct computation to check that the function $C$ is non-decreasing with respect to each of its variables, and~\eqref{bio} corresponds exactly to~\eqref{eq:announced estimate}.
\end{proof}
\subsubsection{Proof of Corollary~\ref{lem:non collapse}}
Assume, for the sake of contradiction, that there exists $T>0$ such that
\begin{equation}
\liminf\limits_{t\to T}\;\min\limits_{i\neq j}\;|x_i(t)-x_j(t)|\;=\;0.
\end{equation}
Then it is possible to extract an increasing sequence of times $(t_n)$ and two indices $i_0\neq j_0$ such that
\begin{equation}\label{Scrogneugneu}
\lim\limits_{n\to+\infty}\;|x_{i_0}(t_n)-x_{j_0}(t_n)|\;=\;0.
\end{equation}
Consider two indices $k_0\neq l_0$. Two cases occur:
\begin{equation}\label{Saperlotte}
\text{either}\qquad\liminf\limits_{n\to+\infty}\;|x_{k_0}(t_n)-x_{l_0}(t_n)|\;=\;0\qquad\text{or}\qquad\forall\;n\in\NN,\;|x_{k_0}(t_n)-x_{l_0}(t_n)|\;\geq\;c\;>\;0.
\end{equation}
In the first case, up to a further extraction of the sequence $(t_n)$ (which we omit), it is possible to obtain that
\begin{equation}
\lim\limits_{n\to+\infty}\;|x_{k_0}(t_n)-x_{l_0}(t_n)|\;=\;0.
\end{equation}
We now consider successively all the possible pairs of indices $i\neq j$ and proceed iteratively to such an extraction whenever the first case of~\eqref{Saperlotte} occurs.
We eventually end up with a set of pairs $\cP\subseteq\{(i,j):i, j=1\dots N\;\;\text{with}\;\;i\neq j\}$ and an increasing sequence of times $(t_n)$ such that
\begin{equation}\label{voici}
\forall\;(i,j)\in\cP,\qquad\big|x_i(t_n)-x_j(t_n)\big|
\longrightarrow0^+\qquad\text{as }n\to+\infty
\end{equation}
and
\begin{equation}\label{voila}
\forall\;(i,j)\notin\cP,\;i\neq j,\qquad\big|x_i(t_n)-x_j(t_n)\big|\geq c>0,
\end{equation}
with $c>0$ a constant independent of $n$.
Since the $a_i$ are all positive, the non-neutral cluster
hypothesis is satisfied and Theorem~\ref{thrm:borne uniforme} applies: the trajectories remain bounded by a constant $C$. Combining this with~\eqref{voila} gives
\begin{equation}
\forall\;(i,j)\notin\cP,\;i\neq j,\qquad G\big(|x_i(t_n)-x_j(t_n)|\big)\;\geq\;\min\limits_{r\in[c,C]}G(r).
\end{equation}
Since the $a_i$ are all positive, we deduce that
\begin{equation}\label{foutreDieu}\begin{split}
&\sum_{i\neq j}a_i\,a_j\,G\big(|x_i(t_n)-x_j(t_n)|\big)
\\&\qquad\geq\sum_{\substack{(i,j)\in\cP\\i\neq j}}a_i\,a_j\,G\big(|x_i(t_n)-x_j(t_n)|\big)
+\Big(\min\limits_{r\in[c,C]}G(r)\Big)\sum_{\substack{(i,j)\notin\cP\\i\neq j}}a_i\,a_j.\end{split}
\end{equation}
Using again the positivity of the coefficients $a_i$, we obtain that hypothesis~\eqref{eq:hyp on G 1}, combined with~\eqref{voici}, implies that
\begin{equation}
\sum_{\substack{(i,j)\in\cP\\i\neq j}}a_i\,a_j\,G\big(|x_i(t_n)-x_j(t_n)|\big)\longrightarrow+\infty\qquad\text{as }n\to+\infty.
\end{equation}
Thus, in view of~\eqref{foutreDieu},
\begin{equation}
\sum_{i\neq j}a_i\,a_j\,G\big(|x_i(t_n)-x_j(t_n)|\big)\longrightarrow+\infty\qquad\text{as }n\to+\infty.
\end{equation}
This contradicts the energy preservation stated
in Lemma~\ref{lem:preservation Hamiltonien}.\qed
\subsubsection{Proof of Theorem~\ref{thrm:uniform relative bound}}
Let $S>0$ be a large parameter to be fixed later. Suppose for contradiction
that there exist an initial datum $X\in\RR^{2N}\setminus\mC_T$, two indices $i_0$ and $j_0$ and a time $t_0$ such that
\begin{equation}
|x_{i_0}(t_0)-x_{j_0}(t_0)|\;\geq\;\max\limits_{i,j}|x_i-x_j|+S.
\end{equation}
By continuity of the trajectories, there exists an interval of time $[t_1,t_2]$ such that
\begin{equation}\label{Barcelona}
|x_{i_0}(t_1)-x_{j_0}(t_1)|=\max\limits_{i,j}|x_i-x_j|+S,\qquad|x_{i_0}(t_2)-x_{j_0}(t_2)|=\max\limits_{i,j}|x_i-x_j|+\frac{S}{2},
\end{equation}
and such that
\begin{equation}
\forall\;t\in[t_1,t_2],\quad|x_{i_0}(t)-x_{j_0}(t)|\geq\max\limits_{i,j}|x_i-x_j|+\frac{S}{2}.
\end{equation}
As a consequence of this last property, and arguing as in the proof of Theorem~\ref{thrm:borne uniforme}, we can split the system of vortices into two non-empty
subsets $P$ and $Q$ with $P\cup Q=\{1\dots N\}$ such that for all $t\in[t_1,t_2]$,
\begin{equation}\label{Madrid}
\min\limits_{i\in P}\min\limits_{j\in Q}|x_i(t)-x_j(t)|\geq\frac{S}{2N}.
\end{equation}
It can be assumed for instance that $i_0\in P$ and $j_0\in Q$.
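The existence of such a splitting can be checked mechanically. In the sketch below (Python, purely illustrative; the function name and the sample data are ours, not from the paper), the clusters are the single-linkage components at threshold $S/(2N)$: inside one component, any two points are joined by at most $N-1$ links each shorter than $S/(2N)$, hence lie within $S/2$ of each other; therefore two vortices at distance at least $S/2$ fall into different components, and distinct components are separated by at least $S/(2N)$.

```python
import itertools
import numpy as np

def split_clusters(points, S):
    """Single-linkage components at threshold S/(2N).

    Any two points of one component are joined by a chain of at most
    N-1 links, each shorter than S/(2N), hence lie within S/2 of each
    other; distinct components are separated by at least S/(2N).
    """
    N = len(points)
    thr = S / (2 * N)
    parent = list(range(N))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(N), 2):
        if np.linalg.norm(points[i] - points[j]) < thr:
            parent[find(i)] = find(j)
    return [find(i) for i in range(N)], thr
```

With two groups of vortices separated by more than $S/2$, the component of $i_0$ plays the role of $P$ and its complement the role of $Q$.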
Therefore, the vortices in the set $P$ evolve according to the equation
\begin{equation}\label{Sevilla}
\frac{d}{dt}x_i(t)=\sum_{\substack{j\in P\\j\neq i}}a_j\nabla^\perp G\big(|x_i(t)-x_j(t)|\big)+F_i(x_i(t),t),
\end{equation}
where the external field $F_i$ encodes the interaction with the vortices
that belong to $Q$. As a consequence of~\eqref{Madrid}, and if $S$ is chosen large enough, this field satisfies
\begin{equation}\label{Malaga}
|F_i(x,t)|\;\leq\;\sup_{r\geq 1}\Big|\frac{d}{dr}G(r)\Big|.
\end{equation}
The analogous equation holds for the vortices of the set $Q$.
We now want to apply Theorem~\ref{thrm:borne uniforme} to the dynamics of the
cluster $P$ on $[t_1,t_2]$ given by~\eqref{Sevilla}. The non-neutral
cluster hypothesis is satisfied for the cluster $P$ as a consequence
of~\eqref{eq:no null sub partial sum}, because $P$ is a strict subset of $\{1\dots N\}$. We obtain from Theorem~\ref{thrm:borne uniforme} a constant $C$ such that the dynamics of the cluster $P$ without the external fields $F_i$ are bounded by $C$. Since the bound given by Theorem~\ref{thrm:borne uniforme} is uniform, adding the smooth external fields $F_i(x,t)$ we end up with
\begin{equation}
\sup\limits_{t\in[t_1,t_2]}\;\max\limits_{i\in P}\;|x_i(t)-x_i(t_1)|\leq C+\int_{t_1}^{t_2}\max\limits_{i\in P}\;\sup_{x\in\RR^{2}}|F_i(x,t')|\,dt'.
\end{equation}
Combining this with~\eqref{Malaga} gives
\begin{equation}\label{Zaragossa}
\sup\limits_{t\in[t_1,t_2]}\;\max\limits_{i\in P}\;|x_i(t)-x_i(t_1)|\leq C+T\,\sup_{r\geq 1}\Big|\frac{d}{dr}G(r)\Big|.
\end{equation}
An analogous argument for the cluster $Q$ gives
\begin{equation}\label{Murcia}
\sup\limits_{t\in[t_1,t_2]}\;\max\limits_{i\in Q}|x_{i}(t)-x_{i}(t_1)|\leq C+T\,\sup_{r\geq 1}\Big|\frac{d}{dr}G(r)\Big|.
\end{equation}
On the other hand, the conditions~\eqref{Barcelona} imply that
\begin{equation}\label{Valencia}
|x_{i_0}(t_2)-x_{i_0}(t_1)|\geq\frac{S}{4}\qquad\text{or}\qquad|x_{j_0}(t_2)-x_{j_0}(t_1)|\geq\frac{S}{4}.
\end{equation}
The two bounds~\eqref{Zaragossa} and~\eqref{Murcia} are in contradiction with~\eqref{Valencia} if $S$ is chosen large enough.
\qed
\subsection{Proofs of the lemmas for Theorem~\ref{thrm:generalization of no partial sum with ordinary}}\label{sec:proofs 2}
\subsubsection{Proof of Lemma~\ref{lem:Dirac}}
Let $P_X(t)$ be the distribution defined by~\eqref{def:Dirac sum},
assumed well-defined for all $t\in[0,T[$. Let $\varphi\in\cD(\RR^2)$. Then,
\begin{equation}
\frac{d}{dt}\big<P_X(t),\varphi\big>_{\cD'\cD}
=\frac{d}{dt}\sum_{i=1}^Na_i\,\varphi\big(x_i(t)\big)=\sum_{i=1}^Na_i\nabla\varphi\big(x_i(t)\big)\cdot\frac{d}{dt}x_i(t).
\end{equation}
The equations of motion~\eqref{eq:vortex equation} give
\begin{equation}\begin{split}
&\frac{d}{dt}\big<P_X(t),\varphi\big>_{\cD'\cD}=\sum_{i\neq j}a_ia_j
\nabla\varphi\big(x_i(t)\big)\cdot\frac{\big(x_j(t)-x_i(t)\big)^\perp}{|x_j(t)-x_i(t)|^2}\\
&=\frac{1}{2}\sum_{i\neq j}a_ia_j\bigg(\nabla\varphi\big(x_i(t)\big)
-\nabla\varphi\big(x_j(t)\big)\bigg)\cdot\frac{\big(x_j(t)-x_i(t)\big)^\perp}{|x_j(t)-x_i(t)|^2}.
\end{split}\end{equation}
Therefore,
\begin{equation}
\bigg|\frac{d}{dt}\big<P_X(t),\varphi\big>_{\cD'\cD}\bigg|\leq
\frac{1}{2}\sum_{i\neq j}|a_i a_j|\,\big\|\nabla^2\varphi\big\|_\infty.
\end{equation}
Then, $t\mapsto\big<P_X(t),\varphi\big>_{\cD'\cD}$ is Lipschitz and
converges as $t\to T^-$.
Since this holds for any $\varphi\in\cD(\RR^2)$, this implies that
$P_X(t)$ converges in the sense of distributions towards some $P_X\in\cD'(\RR^2)$ as $t\to T^-$.
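The antisymmetrization used above relies only on the oddness of the kernel, $K(x_j-x_i)=-K(x_i-x_j)$ with $K(d)=d^\perp/|d|^2$: relabelling the ordered pairs and averaging turns the sum into half the sum over gradient differences. A minimal numerical check of this identity (Python; the intensities, positions and test function are arbitrary choices of ours):

```python
import numpy as np

def perp(v):
    # rotation by +pi/2: (v1, v2) -> (-v2, v1)
    return np.array([-v[1], v[0]])

def grad_phi(x):
    # gradient of the smooth test function phi(x) = exp(-|x|^2)
    return -2.0 * x * np.exp(-np.dot(x, x))

def kernel(xi, xj):
    # Biot-Savart-type kernel (x_j - x_i)^perp / |x_j - x_i|^2, odd under i <-> j
    d = xj - xi
    return perp(d) / np.dot(d, d)

rng = np.random.default_rng(1)
N = 6
a = rng.normal(size=N)        # arbitrary real intensities
x = rng.normal(size=(N, 2))   # arbitrary distinct vortex positions

pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
raw = sum(a[i] * a[j] * grad_phi(x[i]) @ kernel(x[i], x[j]) for i, j in pairs)
sym = 0.5 * sum(a[i] * a[j] * (grad_phi(x[i]) - grad_phi(x[j]))
                @ kernel(x[i], x[j]) for i, j in pairs)
```

The two sums agree up to floating-point error, which is exactly what makes the Lipschitz bound on $t\mapsto\langle P_X(t),\varphi\rangle$ possible.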
There remains to prove that $P_X$ is actually a measure that takes
the form given by~\eqref{eq:Dirac limit}.
Consider now an increasing sequence $(t_n)$ converging towards $T^-$.
We remark first that it is always possible, up to a subsequence
extraction (not relabeled), to reduce the problem to
\begin{equation}\label{alternative}
\text{either}\qquad x_i(t_n)\longrightarrow x_i^\ast\qquad\text{or}\qquad\big|x_i(t_n)\big|\longrightarrow+\infty
\end{equation}
for some $X^\ast\in\RR^{2N}$. Indeed, if
$\big|x_i(t_n)\big|\longrightarrow+\infty$ is not satisfied, then
there exists a subsequence along which $x_i(t_n)$ stays bounded, and
a further extraction makes this sequence converge towards some
$x_i^\ast$. Repeating this process for $i$ from
$1$ to $N$ gives~\eqref{alternative}. Now that~\eqref{alternative} holds, define
\begin{equation}
b_i=\left\{\begin{array}{ll}0&\quad\text{ if }\big|x_i(t_n)\big|\longrightarrow+\infty,\\ 1&\quad\text{ otherwise.}\end{array}\right.
\end{equation}
Therefore,
\begin{equation}
\sum_{i=1}^Na_i\,\delta_{x_i(t_n)}\longrightarrow\sum_{i=1}^Na_i\,b_i\,\delta_{x_i^\ast}\qquad\text{as}\;n\to+\infty,
\end{equation}
in the distributional sense.
By uniqueness of the limit, it is possible to identify
\begin{equation}
P_X = \sum_{i=1}^Na_i\,b_i\,\delta_{x_i^\ast}.
\end{equation}
The convergence of $P_X(t)$ towards $P_X$ in $\cD'$ is
actually a convergence in the weak sense of measures, because the
total mass of the measure $P_X(t)$ is bounded by $\sum_i|a_i|$ for all $t$.\qed
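The role of the coefficients $b_i$ can be made concrete: when a vortex escapes to infinity, it eventually leaves the compact support of any test function and drops out of the limiting measure. A small Python illustration (the intensities, positions and bump function below are our own choices, not taken from the model):

```python
import math
import numpy as np

def bump(x, R=1.0):
    # smooth test function supported in the closed ball B(0, R)
    d2 = float(np.dot(x, x))
    if d2 >= R * R:
        return 0.0
    return math.exp(-1.0 / (1.0 - d2 / (R * R)))

a = [1.0, -2.0]                    # intensities a_1, a_2
x_star = np.array([0.0, 0.0])      # limit point of the first vortex

def pairing(n):
    # <a_1 delta_{x_1(t_n)} + a_2 delta_{x_2(t_n)}, phi>
    x1 = x_star                     # x_1(t_n) -> x_star, so b_1 = 1
    x2 = np.array([float(n), 0.0])  # |x_2(t_n)| -> +infinity, so b_2 = 0
    return a[0] * bump(x1) + a[1] * bump(x2)

values = [pairing(n) for n in (10, 100, 1000)]
# the limit pairing keeps only the contribution a_1 * phi(x_star) = exp(-1)
```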
\subsubsection{Proof of Theorem~\ref{thrm:generalization of no partial sum with ordinary}}
\textbf{$\bullet\;$ Step 1: }Consider the $X^\ast\in\RR^{2N}$ given
by Lemma~\ref{lem:Dirac}. Let $z\in\RR^2$ be such that for all $i=1\dots N$, $z\neq x^\ast_i$.
We are going to prove that for all $i=1\dots N$,
\begin{equation}\label{Ra}
\liminf\limits_{t\to T^-}|x_i(t)-z|>0.
\end{equation}
Suppose for contradiction that there exists $A\subseteq\{1\dots N\}$
with $A\neq\emptyset$ such that for all $i\in A$,
\begin{equation}\label{Khepri}
\liminf\limits_{t\to T^-}|x_i(t)-z|=0.
\end{equation}
This set $A$ can be chosen such that for all $i\notin A$,
\begin{equation}\label{Nout}
\liminf\limits_{t\to T^-}|x_i(t)-z|>0.
\end{equation}
Define
\begin{equation}\label{Hapi}\begin{split}
&d_z^1:=\min\big\{|x_i^\ast-z|:i=1\dots N\big\}>0,\\
&d_z^2:=\min\big\{\liminf\limits_{t\to T^-}|x_i(t)-z|:i\notin A\big\}>0,\\
&d^\ast_z:=\min\big\{d_z^1,\;d_z^2\big\}>0,
\end{split}
\end{equation}
where by convention, for the definition of $d_z^2$, the minimum of
the empty set is $+\infty$.
Let $\varphi$ be a $\cC^\infty$ function supported on the ball $\cB(z,d^\ast_z/2)$ and equal to $1$ on the ball $\cB(z,d^\ast_z/4)$.
As a consequence of Lemma~\ref{lem:Dirac} (and Theorem~\ref{thrm:borne uniforme} to obtain that $b_i=1$ for all $i$) and by
definition of $d^\ast_z$ we have,
\begin{equation}\label{Re}
\Big<\sum_{i=1}^Na_i\delta_{x_i(t)},\;\varphi\Big>_{\cD',\cD}\;\longrightarrow
\;\Big<\sum_{i=1}^Na_i\delta_{x_i^\ast},\;\varphi\Big>_{\cD',\cD}= 0\qquad\text{as }t\to T^-.\end{equation}
Using now~\eqref{Khepri} and~\eqref{Nout}, we infer the existence of
an increasing sequence $(t_n)_{n\in\NN}$ converging towards $T^-$
such that for all $n\in\NN$,
\begin{equation}
\forall\;i\in A,\quad x_i(t_n)\in\cB\Big(z,\,\frac{d^\ast_z}{4}\Big)
\qquad\mathrm{and}\qquad\forall\;i\notin A,\quad x_i(t_n)\notin\cB\Big(z,\,\frac{d^\ast_z}{2}\Big).
\end{equation}
With the definition of $\varphi$ and the definition of $d_z^\ast$ given in~\eqref{Hapi}, it holds for all $n\in\NN$ that
\begin{equation}\label{Amon}
\Big<\sum_{i=1}^Na_i\delta_{x_i(t_n)},\;\varphi\Big>_{\cD',\cD}=\sum_{i\in A}a_i.
\end{equation}
As a consequence of the non-neutral clusters hypothesis~\eqref{eq:no null partial sum}
we have $\sum_{i\in A}a_i\neq 0$. Therefore, Equations~\eqref{Re} and~\eqref{Amon} are in contradiction and then~\eqref{Ra} holds.
\textbf{$\bullet\;$ Step 2: }
Assume now that a given vortex $x_i(t)$ has two accumulation points as $t\to T^-$.
By~\eqref{Ra}, these two points must be among the $x_l^\ast$, say
$x_j^\ast$ and $x_k^\ast$ with $x_j^\ast\neq x_k^\ast$. Define the smallest distance between $x_j^\ast$
and any other possible accumulation point
\begin{equation}
r^\ast_j:=\min\Big\{r>0:\exists\;l=1\dots N,\;|x_j^\ast-x_l^\ast|=r\Big\}.
\end{equation}
Consider then the circle
\begin{equation}
\cS:=\Big\{x\in\RR^2:|x-x_j^\ast|=\frac{1}{2}r^\ast_j\Big\}.
\end{equation}
Since $x_j^\ast$ is inside the ball of radius $r^\ast_j/2$ and
$x_k^\ast$ is outside, since these two points are accumulation points
of the dynamics of $x_i(t)$ as $t\to T^-$, and since the
trajectories are continuous, there exists an increasing sequence of times $(t_n)_{n\in\NN}$ converging towards $T^-$ such that
\begin{equation}
\forall\;n\in\NN,\quad x_i(t_n)\in\cS.
\end{equation}
By compactness of $\cS$, it can be assumed that, up to an extraction, $x_i(t_n)\to x^\ast\in\cS$ as $n\to\infty$.
By definition of $r^\ast_j$, for all $l=1\dots N$, $x_l^\ast\notin\cS$.
These two facts together are in contradiction with~\eqref{Ra} and
this concludes the proof of Theorem~\ref{thrm:generalization of no partial sum with ordinary}.
\qed
\subsection{Proofs of the lemmas for Theorem~\ref{thrm:Improved Marchioro Pulvirenti}}\label{sec:proofs 3}
\subsubsection{Proof of Lemma~\ref{lem:reformulation}}
First, the conclusion of Theorem~\ref{thrm:Improved Marchioro Pulvirenti} can be
formulated as follows
\begin{equation}\label{Sekhmet}
\cL^{2N}\Big\{X\in\RR^{2N}:\exists\;T_X\in\RR_+,
\quad\liminf\limits_{t\to T_X^-}\;\min\limits_{i\neq j}\big|x_i(t)-x_j(t)\big|=0\Big\}\;=0,
\end{equation}
where $\cL^d$ refers to the Lebesgue measure of dimension $d$.
It is possible to reduce the problem to bounded intervals of time
and bounded regions of space by rewriting~\eqref{Sekhmet} as follows.
\begin{equation}\label{Sekhmet2}\begin{split}
\cL^{2N}\bigg(\bigcup_{T=1}^{+\infty}\bigcup_{\rho=1}^{+\infty}\Big\{X\in\RR^{2N}:\;&
\exists\;T_X\in[0,T],
\quad\liminf\limits_{t\to T_X^-}\;\min\limits_{i\neq j}\big|x_i(t)-x_j(t)\big|=0\\
&\text{and}\qquad
\;\max_{i\neq j}\big|x_i(t=0)-x_j(t=0)\big|\leq\rho
\Big\}\bigg)=0.
\end{split}
\end{equation}
Indeed, we can directly check that the two sets appearing respectively in~\eqref{Sekhmet} and~\eqref{Sekhmet2} are equal.
Since the union in~\eqref{Sekhmet2} is countable, to conclude that~\eqref{Sekhmet} holds it is enough to prove that for all $T>0$ and $\rho>0$,
\begin{equation}\label{Horus}\begin{split}
\cL^{2N}\Big\{X\in\RR^{2N}:\;&
\exists\;T_X\in[0,T],
\quad\liminf\limits_{t\to T_X^-}\;\min\limits_{i\neq j}\big|x_i(t)-x_j(t)\big|=0\\
&\text{and}\qquad
\;\max_{i\neq j}\big|x_i(t=0)-x_j(t=0)\big|\leq\rho
\Big\}=0.\end{split}
\end{equation}
Now, let $T>0$ and $\rho>0$. For fixed $i\in\{1\dots N\}$, denote
by $\cT_i$ the isomorphism that recovers the positions of the point
vortices $(x_k)_{k=1}^N$ from the position of $x_i$ and
the differences $(y_{ij})_{j\neq i}$. In other words, define
\begin{equation}
\begin{array}{cccc}
\cT_i:&\RR^2\times\RR^{2(N-1)}&\to&\RR^{2N}\\
\;&(x,Y)&\mapsto&(x+y_{i1}\dots x+y_{i(i-1)},\;x,\;x+y_{i(i+1)}\dots x+y_{iN}).
\end{array}
\end{equation}
Thus, the following inclusion holds
\begin{equation}
\begin{split}
&\bigg\{X\in\RR^{2N}:\exists\;T_X\in[0,T],\quad
\liminf\limits_{t\to T_X^-}\;\min\limits_{\substack{j=1\dots N\\j\neq i}}\;\big|x_i(t)-x_j(t)\big|=0\\
&\qquad\qquad\qquad\qquad\qquad\qquad\text{and}\quad\max_{i\neq j}\big|x_i(t=0)-x_j(t=0)\big|\leq\rho\bigg\}\\
&\subseteq\cT_i\bigg[\RR^2\times\Big\{Y_i:=(y_{ij})_{j\neq i}\in\RR^{2(N-1)}:
\exists\;T_X\in[0,T],\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\\
&\qquad\qquad\qquad\qquad\qquad\qquad\text{and}\quad\max_{i\neq j}\big|y_{ij}(t=0)\big|\leq\rho\Big\}\bigg]\\
&=\cT_i\bigg[\RR^2\times\Big\{Y_i:=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}:
\exists\;T_X\in[0,T],\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\Big\}\bigg].
\end{split}
\end{equation}
We conclude that the studied set satisfies
\begin{equation}\label{Isis}
\begin{split}
&\Big\{X\in\RR^{2N}:\exists\;T_X\in[0,T],\quad\liminf\limits_{t\to T_X^-}
\;\min\limits_{i=1\dots N}\;\min\limits_{\substack{j=1\dots N\\j\neq i}}\;\big|x_i(t)-x_j(t)\big|=0\\
&\qquad\qquad\qquad\qquad\qquad\qquad\text{and}\quad\max_{i\neq j}\big|x_i(t=0)-x_j(t=0)\big|\leq\rho\Big\}\\
&=\bigcap_{i=1}^N\bigg\{X\in\RR^{2N}:\exists\;T_X\in[0,T],\quad
\liminf\limits_{t\to T_X^-}\;\min\limits_{\substack{j=1\dots N\\j\neq i}}\;\big|x_i(t)-x_j(t)\big|=0\\
&\qquad\qquad\qquad\qquad\qquad\qquad\text{and}\quad\max_{i\neq j}\big|x_i(t=0)-x_j(t=0)\big|\leq\rho\bigg\}\\
&\subseteq\bigcap_{i=1}^N\cT_i\bigg[\RR^2\times\Big\{Y_i:=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}:
\exists\;T_X\in[0,T],\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\Big\}\bigg].
\end{split}
\end{equation}
Using now hypothesis~\eqref{eq:the set with measure zero} and the Fubini theorem,
\begin{equation}\label{Osiris}
\cL^{2N}\bigg(\RR^2\times\Big\{Y_i:=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}:
\exists\;T_X\in[0,T],\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\Big\}\bigg)=0.
\end{equation}
Since $\cT_i$ is an invertible linear map, it is Lipschitz and
therefore maps any set of Lebesgue measure $0$ to a set of
Lebesgue measure $0$.
Therefore, combining this fact with~\eqref{Isis} and~\eqref{Osiris} gives~\eqref{Horus}.\qed
\subsubsection{Proof of Lemma~\ref{lem:reformulation convergence}}
Let $T>0$, $\rho>0$ and $\varepsilon>0$. For $i\neq j$, we define
the set of initial data for which an $\varepsilon$-collapse occurs
between the two vortices $x_i$ and $x_j$:
\begin{equation}\label{Seth}
\Gamma_{ij}^{\varepsilon,\rho}:=\Big\{Y_i=(y_{il})_{l\neq i}\in\cB(0,\rho)^{2(N-1)}:
\;\exists\;t\in[0,T],\quad\big|y_{ij}(t)\big|\leq\varepsilon\Big\}.
\end{equation}
We also define the time at which the $\varepsilon$-collapse occurs: for $Y_i\in\bigcup_{j\neq i}\Gamma_{ij}^{\varepsilon,\rho}$,
\begin{equation}
T_{Y_i}^\varepsilon:=\inf\Big\{t\in[0,T]:\min\limits_{j\neq i}\;\big|y_{ij}(t)\big|\leq\varepsilon\Big\}.
\end{equation}
We are also interested in the situations where other collapses
occur far from $x_i$. This corresponds to the $\varepsilon$-collapses of the vector $y_{jk}:=x_j-x_k=y_{ij}-y_{ik}$. For $k\neq i,j$, define
\begin{equation}\label{Anubis}
\Gamma_{ijk}^{\varepsilon,\rho}
:=\Big\{Y_i\in\Gamma_{ij}^{\varepsilon,\rho}:\exists\;t<T_{Y_i}^\varepsilon,\quad\big|y_{ij}(t)-y_{ik}(t)\big|\leq\varepsilon\Big\}.
\end{equation}
Since $\Gamma_{ijk}^{\varepsilon,\rho}$ records whether another
$\varepsilon$-collapse, involving $x_j$ but away from $x_i$, occurs
before the expected $\varepsilon$-collapse between $x_i$ and $x_j$,
the following inclusion holds.
\begin{equation}\label{Maat}
\Gamma_{ijk}^{\varepsilon,\rho}\;\subseteq\;\Gamma_{kj}^{\varepsilon,2\rho}\setminus\Gamma_{kji}^{\varepsilon,2\rho}.
\end{equation}
This inclusion must be understood as follows. If an $\varepsilon$-collapse occurs between $x_j$ and $x_k$ before the first $\varepsilon$-collapse between $x_i$ and $x_j$ (left-hand side of the inclusion above), then in particular an $\varepsilon$-collapse occurs between $x_j$ and $x_k$ (the set $\Gamma_{kj}^{\varepsilon,2\rho}$ on the right-hand side). Moreover, since no $\varepsilon$-collapse occurs between $x_j$ and $x_i$ before the $\varepsilon$-collapse between $x_j$ and $x_k$, we can remove the set $\Gamma_{kji}^{\varepsilon,2\rho}$ on the right-hand side of the inclusion.
Another important inclusion is
\begin{equation}\label{Nekhbet}
\Gamma_{ijk}^{\varepsilon,\rho}\;\subseteq\;\Gamma_{ijk}^{\varepsilon,2\rho}.
\end{equation}
The two definitions~\eqref{Seth} and~\eqref{Anubis} concern the
$\varepsilon$-collapses of the exact system with kernel $G_s$. We
need the same definitions for the regularized kernels $G_{s,\varepsilon}$:
\begin{equation}\label{Thot}
\begin{split}
&\widehat{\Gamma}_{ij}^{\varepsilon,\rho}:=\Big\{Y_i=(y_{il})_{l\neq i}\in\cB(0,\rho)^{2(N-1)}:
\;\exists\;t\in[0,T],\quad\big|y_{ij}^\varepsilon(t)\big|\leq\varepsilon\Big\},\\
&\widehat{T}_{Y_i}^\varepsilon:=\inf\Big\{t\in[0,T]:\min\limits_{j\neq i}\;\big|y_{ij}^\varepsilon(t)\big|\leq\varepsilon\Big\},\\
&\widehat{\Gamma}_{ijk}^{\varepsilon,\rho}
:=\Big\{Y_i\in\widehat{\Gamma}_{ij}^{\varepsilon,\rho}:\exists\;t<\widehat{T}_{Y_i}^\varepsilon,\quad\big|y_{ij}^\varepsilon(t)-y_{ik}^\varepsilon(t)\big|\leq\varepsilon\Big\}.
\end{split}
\end{equation}
We now remark that as long as the quantities $\big|y_{ij}\big|$ and
$\big|y_{ij}-y_{ik}\big|$ remain larger than $\varepsilon$ for all
$j\neq i$ and $k\neq i,j$, the dynamics of $y_{ij}$ and $y_{ij}^\varepsilon$
coincide, as a consequence of~\eqref{eq:condition on G epsilon 1}.
This property implies in particular, using the sets defined in~\eqref{Seth},~\eqref{Anubis} and~\eqref{Thot},
\begin{equation}\label{Hator}
\Gamma_{ij}^{\varepsilon,\rho}\setminus\bigg(\bigcup_{k\neq i,j}\Gamma_{ijk}^{\varepsilon,\rho}\bigg)
=\widehat{\Gamma}_{ij}^{\varepsilon,\rho}\setminus\bigg(\bigcup_{k\neq i,j}\widehat{\Gamma}_{ijk}^{\varepsilon,\rho}\bigg).
\end{equation}
The hypothesis of Lemma~\ref{lem:reformulation convergence} can be rephrased as follows: for all $\rho>0$, and for all $i\neq j$,
\begin{equation}\label{Geb}
\cL^{2N}\Big(\widehat{\Gamma}_{ij}^{\varepsilon,\rho}\Big)\longrightarrow 0\qquad\text{as }\varepsilon\to0^+.
\end{equation}
Concerning the conclusion, it is enough to prove that for all $\rho>0$, and for all $i\neq j$,
\begin{equation}\label{Khnoum}
\cL^{2N}\Big(\Gamma_{ij}^{\varepsilon,\rho}\Big)\longrightarrow 0\qquad\text{as }\varepsilon\to0^+,
\end{equation}
because for all $i=1\dots N$ the following equality holds:
\begin{equation}\label{Shou}
\Big\{Y_i:=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{2(N-1)}:\exists\;T_X\in[0,T],
\quad\liminf_{t\to T_X^-}\;\min\limits_{j\neq i}\big|y_{ij}(t)\big|=0\Big\}
=\bigcap_{n=1}^{+\infty}\bigcup_{j\neq i}\Gamma_{ij}^{\frac{1}{n},\rho}.
\end{equation}
That the convergences~\eqref{Geb} imply the
convergences~\eqref{Khnoum} follows from the computations below. First, using~\eqref{Maat} we get
\begin{equation}\label{Papyrus}
\Gamma_{ij}^{\varepsilon,\rho}=\Bigg[\Gamma_{ij}^{\varepsilon,\rho}
\setminus\bigg(\bigcup_{k\neq i,j}\Gamma_{ijk}^{\varepsilon,\rho}\bigg)\Bigg]
\cup\bigg(\bigcup_{k\neq i,j}\Gamma_{ijk}^{\varepsilon,\rho}\bigg)\subseteq\Bigg[\Gamma_{ij}^{\varepsilon,\rho}\setminus
\bigg(\bigcup_{k\neq i,j}\Gamma_{ijk}^{\varepsilon,\rho}\bigg)\Bigg]\cup\bigg(\bigcup_{k\neq i,j}\Gamma_{kj}^{\varepsilon,2\rho}\setminus\Gamma_{kji}^{\varepsilon,2\rho}\bigg).
\end{equation}
The same computation can be applied to the last term on the right-hand side, using~\eqref{Maat} again, which gives
\begin{equation}\begin{split}
\Gamma_{kj}^{\varepsilon,2\rho}\setminus\Gamma_{kji}^{\varepsilon,2\rho}
&=\Bigg[\Gamma_{kj}^{\varepsilon,2\rho}\setminus\bigg(\bigcup_{l\neq k,j}\Gamma_{kjl}^{\varepsilon,2\rho}\bigg)\Bigg]\cup\bigg(\bigcup_{l\neq i,j,k}\Gamma_{kjl}^{\varepsilon,2\rho}\bigg)\\
&\subseteq\Bigg[\Gamma_{kj}^{\varepsilon,2\rho}\setminus\bigg(\bigcup_{l\neq k,j}\Gamma_{kjl}^{\varepsilon,2\rho}\bigg)\Bigg]
\cup\bigg(\bigcup_{l\neq i,j,k}\Gamma_{lj}^{\varepsilon,4\rho}\setminus\Gamma_{ljk}^{\varepsilon,4\rho}\bigg).
\end{split}
\end{equation}
Iterating the same computation until the residual
term is empty and using~\eqref{Nekhbet} transforms~\eqref{Papyrus} into
\begin{equation}
\Gamma_{ij}^{\varepsilon,\rho}\subseteq\bigcup_{k\neq j}
\Bigg[\Gamma_{kj}^{\varepsilon,\,2^N\!\!\rho}\setminus\bigg(\bigcup_{l\neq j,k}\Gamma_{kjl}^{\varepsilon,\,2^N\!\!\rho}\bigg)\Bigg].
\end{equation}
Using now~\eqref{Hator}, we finally get
\begin{equation}
\Gamma_{ij}^{\varepsilon,\rho}\subseteq\bigcup_{k\neq j}
\Bigg[\widehat{\Gamma}_{kj}^{\varepsilon,\,2^N\!\!\rho}\setminus
\bigg(\bigcup_{l\neq j,k}\widehat{\Gamma}_{kjl}^{\varepsilon,\,2^N\!\!\rho}\bigg)\Bigg]
\subseteq\bigcup_{k\neq j}\widehat{\Gamma}_{kj}^{\varepsilon,\,2^N\!\!\rho}.\end{equation}
Thus the convergences~\eqref{Geb} imply the convergences~\eqref{Khnoum} and the lemma is proved.\qed
\subsubsection{Proof of Lemma~\ref{lem:collapses}}
Let $i\in\{1\dots N\}$, $\varepsilon>0$ and $\rho>0$.
\textbf{Step 1.}
Let $a>0$. We define a kernel $L_a$ by
\begin{equation}\label{Poseidon}
L_a(q):=q^{-2-a},
\end{equation}
and we associate with the kernel $L_a$ its $\varepsilon$-regularization
$L_{a,\varepsilon}$, defined as in~\eqref{eq:condition on G epsilon 1}-\eqref{eq:condition on G epsilon 4}. From this we
define the function
\begin{equation}
\Phi(Y_i):=\sum_{j\neq i}L_{a,\varepsilon}\big(|y_{ij}|\big).
\end{equation}
This function takes large values when the system is close to a
collapse involving the vortex $x_i$. Denote by $\mS^t_{i,\varepsilon}$ the flow of the modified
system~\eqref{eq:evolution Y} with the regularized kernel $G_{s,\varepsilon}$. This gives
\begin{equation}\label{Hera}\begin{split}
&\frac{d}{dt}\Phi\Big(\mS^t_{i,\varepsilon} Y_i\Big)=\sum_{j\neq i}\nabla L_{a,\varepsilon}\big(|y_{ij}^\varepsilon(t)|\big)
\cdot\frac{d}{dt}y_{ij}^\varepsilon(t),\\
&=\sum_{j\neq i}\nabla L_{a,\varepsilon}\big(|y_{ij}^\varepsilon(t)|\big)
\cdot\!\bigg[(a_i+a_j)\nabla^\perp G_{s,\varepsilon}\big(|y_{ij}^\varepsilon|\big)+\!\sum_{k\neq i,j}\!a_k\Big(\nabla^\perp G_{s,\varepsilon}\big(|y_{ik}^\varepsilon|\big)
+\nabla^\perp G_{s,\varepsilon}\big(|y_{ij}^\varepsilon-y_{ik}^\varepsilon|\big)\Big)\bigg],\\
&=\sum_{j\neq i}\sum_{k\neq i,j}a_k\nabla L_{a,\varepsilon}\big(|y_{ij}^\varepsilon(t)|\big)
\cdot\Big(\nabla^\perp G_{s,\varepsilon}\big(|y_{ik}^\varepsilon|\big)
+\nabla^\perp G_{s,\varepsilon}\big(|y_{ij}^\varepsilon-y_{ik}^\varepsilon|\big)\Big),
\end{split}
\end{equation}
where for the last equality we used the identity $\nabla f\cdot\nabla^\perp g=0$
that holds for $f$ and $g$ two radial functions. Thus,
\begin{equation}\label{Demeter}
\bigg|\frac{d}{dt}\Phi\Big(\mS^t_{i,\varepsilon} Y_i\Big)\bigg|\leq\Psi\Big(\mS^t_{i,\varepsilon} Y_i\Big),
\end{equation}
where
\begin{equation}\label{Aphrodite}
\Psi\big(Y_i\big):=\sum_{j\neq i}\sum_{k\neq i,j}a_k
\big|\nabla L_{a,\varepsilon}\big(|y_{ij}|\big)\big|\;\Big(\big|\nabla G_{s,\varepsilon}
\big(|y_{ik}|\big)\big|+\big|\nabla G_{s,\varepsilon}\big(|y_{ij}-y_{ik}|\big)\big|\Big).
\end{equation}
We now observe, recalling the definition of $G_s$ given
in~\eqref{def:Green functions}, that the
$\varepsilon$-regularization~\eqref{eq:condition on G epsilon 1}-\eqref{eq:condition on G epsilon 4}
implies, by a direct computation using polar coordinates,
\begin{equation}\label{Artemis}
\int_{\cB(0,\rho)}\big|\nabla G_{s,\varepsilon}\big(|y|\big)|dy\leq C\left\{\begin{array}{ll}
1&\quad\text{if }s>0.5,\\
\log(1/\varepsilon)&\quad\text{if } s=0.5,\\
\varepsilon^{2s-1}&\quad\text{if } s<0.5,
\end{array}\right.
\end{equation}
where $\cB(0,\rho)$ is the Euclidean ball in $\RR^2$. The constant $C$ depends on $\rho$ and $s$. Similarly, with the definition of the kernel $L_a$ in~\eqref{Poseidon} and since $a>0$,
\begin{equation}\label{Apollon}
\int_{\cB(0,\rho)}L_{a,\varepsilon}\big(|y|\big)dy
\leq C\varepsilon^{-a}\qquad\mathrm{and}
\qquad\int_{\cB(0,\rho)}\big|\nabla L_{a,\varepsilon}\big(|y|\big)|dy\leq C\varepsilon^{-1-a}.
\end{equation}
Therefore using~\eqref{Apollon} with the definition of $\Phi$ gives
\begin{equation}\label{Hermes}
\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i\leq C\varepsilon^{-a}.
\end{equation}
Similarly, the definition of $\Psi$ given at~\eqref{Aphrodite} gives
\begin{equation}\begin{split}
&\int_{\cB(0,\rho)^{N-1}}\Psi(Y_i)\,dY_i\\
&=\int_{\cB(0,\rho)^{N-1}}\Bigg[\sum_{j\neq i}\sum_{k\neq i,j}a_k
\big|\nabla L_{a,\varepsilon}\big(|y_{ij}|\big)\big|
\;\Big(\big|\nabla G_{s,\varepsilon}\big(|y_{ik}|\big)\big|
+\big|\nabla G_{s,\varepsilon}\big(|y_{ij}-y_{ik}|\big)\big|\Big)\Bigg]\prod_{\substack{l=1\\l\neq i}}^Ndy_{il}\\
&=2\Big(\sum_{j\neq i}\sum_{k\neq i,j}a_k\Big)\bigg(\int_{\cB(0,\rho)}dy\bigg)^{N-3}
\bigg(\int_{\cB(0,\rho)}\big|\nabla L_{a,\varepsilon}\big(|y|\big)\big|dy\bigg)
\bigg(\int_{\cB(0,\rho)}\big|\nabla G_{s,\varepsilon}\big(|y|\big)\big|dy\bigg),
\end{split}
\end{equation}
and then, using~\eqref{Artemis} and~\eqref{Apollon},
\begin{equation}\label{Dionysos}
\int_{\cB(0,\rho)^{N-1}}\Psi(Y_i)\,dY_i\leq C\varepsilon^{-2-a}\left\{\begin{array}{ll}
\varepsilon&\quad\text{if }s>0.5,\\
\varepsilon\log(1/\varepsilon)&\quad\text{if } s=0.5,\\
\varepsilon^{2s}&\quad\text{if } s<0.5.
\end{array}\right.
\end{equation}
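The $\varepsilon$-scalings in~\eqref{Artemis} and~\eqref{Apollon} can be probed numerically. The sketch below assumes a Riesz-type kernel $G_s(r)=r^{2s-2}$, consistent with the exponents displayed above (the actual definition is the one of~\eqref{def:Green functions}, not reproduced here), and a regularization that freezes the kernels below $\varepsilon$ so that their gradients vanish on the core; the radial integrals are then computed in polar coordinates.

```python
import numpy as np

RHO = 1.0  # radius of the ball B(0, rho)

def trap(y, x):
    # trapezoidal rule on a non-uniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def grad_G_integral(s, eps, n=200_000):
    # 2*pi * int_eps^rho |G_s'(r)| r dr, with G_s(r) = r^(2s-2) (assumed form)
    r = np.geomspace(eps, RHO, n)
    return 2 * np.pi * trap(abs(2 * s - 2) * r ** (2 * s - 3) * r, r)

def grad_L_integral(a, eps, n=200_000):
    # 2*pi * int_eps^rho |L_a'(q)| q dq, with L_a(q) = q^(-2-a)
    q = np.geomspace(eps, RHO, n)
    return 2 * np.pi * trap((2 + a) * q ** (-3 - a) * q, q)
```

For $s<1/2$ the first integral blows up like $\varepsilon^{2s-1}$ while for $s>1/2$ it stays bounded, and the second integral grows like $\varepsilon^{-1-a}$, matching the right-hand sides of~\eqref{Artemis} and~\eqref{Apollon}.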
\textbf{Step 2.} It is now possible to integrate $\Phi$ along the
flow. We obtain
\begin{equation}
\begin{split}
\int_{\cB(0,\rho)^{N-1}}\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\,dY_i
\leq\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i
+\int_{\cB(0,\rho)^{N-1}}\int_0^T\bigg|\frac{d}{dt}\Phi\Big(\mS^t_{i,\varepsilon} Y_i\Big)\bigg|\,dt\,dY_i
\end{split}
\end{equation}
Using~\eqref{Demeter} in the estimate above gives
\begin{equation}\label{Hephaistos}
\int_{\cB(0,\rho)^{N-1}}\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\,dY_i
\leq\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i
+\int_{\cB(0,\rho)^{N-1}}\int_0^T\Psi\Big(\mS^t_{i,\varepsilon} Y_i\Big)\,dt\,dY_i.
\end{equation}
Using the Fubini theorem in~\eqref{Hephaistos} and the Liouville theorem (Lemma~\ref{lem:Liouville Theorem for the modified dynamics}) leads to
\begin{equation}\label{Circee}\begin{split}
\int_{\cB(0,\rho)^{N-1}}\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\,dY_i
&\leq\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i
+\int_0^T\int_{\mS^t_{i,\varepsilon}\cB(0,\rho)^{N-1}}\Psi\big( Y_i\big)\,d\mS^{-t}_{i,\varepsilon}Y_i\,dt\\
&=\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i
+\int_0^T\int_{\mS^t_{i,\varepsilon}\cB(0,\rho)^{N-1}}\Psi\big( Y_i\big)\,dY_i\,dt.
\end{split}
\end{equation}
We now make use of hypothesis~\eqref{eq:no null sub partial sum} on the intensities of the vortices. Indeed, this hypothesis allows us to use Theorem~\ref{thrm:uniform relative bound}, which states the existence of a constant $C'$ independent of $\varepsilon$ (but depending on $\rho$, $T$, $s$ and the $a_i$) such that
\begin{equation}
\mS^t_{i,\varepsilon}\cB(0,\rho)^{N-1}\subseteq\cB(0,C')^{N-1}.
\end{equation}
Thus, the estimate~\eqref{Circee} above becomes
\begin{equation}\label{Hades}\begin{split}
\int_{\cB(0,\rho)^{N-1}}\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\,dY_i
&\leq\int_{\cB(0,\rho)^{N-1}}\Phi(Y_i)\,dY_i+\int_0^T\int_{\cB(0,C')^{N-1}}\Psi\big( Y_i\big)\,dY_i\,dt\\
&\leq C\varepsilon^{-a}+C\,T\varepsilon^{-2-a}\left\{\begin{array}{ll}
\varepsilon&\quad\text{if }s>0.5,\\
\varepsilon\log(1/\varepsilon)&\quad\text{if } s=0.5,\\
\varepsilon^{2s}&\quad\text{if } s<0.5.
\end{array}\right.,
\end{split}
\end{equation}
where for the last estimate we used~\eqref{Hermes} and~\eqref{Dionysos}.
\textbf{Step 3.} By definition of the function $\Phi$, there exists a constant $c>0$ such that
\begin{equation}\begin{split}
&\Big\{Y_i=(y_{ij})_{j\neq i}\in \cB(0,\rho)^{N-1}:\min\limits_{j\neq i}\inf_{t\in[0,T]}\big|y_{ij}^\varepsilon(t)\big|
\leq\varepsilon\Big\}\\&\qquad\qquad\subseteq\Big\{Y_i=(y_{ij})_{j\neq i}\in\cB(0,\rho)^{N-1}
:\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\geq c\,\varepsilon^{-2-a}\Big\}.
\end{split}
\end{equation}
Combining this inclusion with the Bienaymé-Tchebycheff inequality gives
\begin{equation}\label{Hestia}\begin{split}
\cL^{2(N-1)}\Big\{Y_i=(y_{ij})_{j\neq i}\in \cB(0,\rho)^{N-1}:&
\min\limits_{j\neq i}\inf_{t\in[0,T]}\big|y_{ij}^\varepsilon(t)\big|
\leq\varepsilon\Big\}\\&\leq\frac{\varepsilon^{2+a}}{c}
\int_{\cB(0,\rho)^{N-1}}\sup_{t\in[0,T]}\Phi\big(\mS^t_{i,\varepsilon}Y_i\big)\,dY_i.\end{split}
\end{equation}
Using now~\eqref{Hades} in~\eqref{Hestia},
\begin{equation}
\cL^{2(N-1)}\Big\{Y_i=(y_{ij})_{j\neq i}\in \cB(0,\rho)^{N-1}:\min\limits_{j\neq i}\inf_{t\in[0,T]}\big|y_{ij}^\varepsilon(t)\big|
\leq\varepsilon\Big\}\leq\,C\left\{\begin{array}{ll}
\varepsilon&\quad\text{if }s>0.5,\\
\varepsilon\log(1/\varepsilon)&\quad\text{if } s=0.5,\\
\varepsilon^{2s}&\quad\text{if } s<0.5.
\end{array}\right.,
\end{equation}
where $C$ is a constant that depends on $T$, $\rho$, $N$, $s$ and on the $a_i$. The lemma is proved. \qed
\vspace{1cm}
\noindent \textbf{\Large Acknowledgments}\vspace{0.3cm}
I would like to thank my PhD advisors Philippe GRAVEJAT and
Didier SMETS for their confidence, their scientific support, and their
constructive criticism and suggestions during all my work on the point-vortex model. I also thank them for their meticulous re-readings and their advice during the writing of this article.
The author acknowledges support from the \emph{Agence Nationale de la Recherche} (ANR) for the project ``Ondes Dispersives
Aléatoires'' (ANR-18-CE40-0020-01). The problem considered in this
paper was inspired by the workshop of this ANR ODA project held on 27-28 June
2019 at the Laboratoire Paul Painlevé (Lille, France).
\bibliographystyle{plain}
We would like to thank T.\ Banks, M.\ Dine, D.\ Kabat, P.\ Kraus, D.\ Lowe and
V.\ Periwal for helpful conversations and correspondence. The work of
MVR is supported in part by the Natural Sciences and Engineering
Research Council of Canada (NSERC). The work of WT was supported in
part by the National Science Foundation (NSF) under contracts
PHY96-00258 and PHY94-07194. WT would like to thank the
organizers of the Duality 98 program and the ITP in Santa Barbara for
hospitality while this work was being completed.
\section*{Appendix}
In this appendix we describe explicitly the cancellation of the term
calculated in \cite{ffi}\footnote{After this work was completed we
were informed that this cancellation has been independently derived by
Echols and Gray \cite{eg}}. In that paper a single diagram, the
``setting sun'' diagram, is computed. This computation is performed in
two steps: first the very massive 1-2 and 1-3 fields are integrated
out, then the 2-3 fields are integrated out. The cancellation of the
setting-sun diagram occurs already after integrating out the very
massive modes, when the corresponding setting-sun diagram is computed
with fermionic 1-2 and 1-3 fields. This conclusion essentially
follows from the computation in \cite{Dan-Wati}, where the one-loop
calculation was performed for an arbitrary bosonic background. The
connection between these calculations is somewhat subtle, however, as
they are performed using different propagators for the
off-diagonal fields.
In order to make the cancellation mechanism completely clear in the
case discussed in \cite{ffi}, we describe here explicitly the
diagrammatic cancellation using the same propagators as those used in
that paper. We make the discussion slightly more general by
considering $n$ gravitons at positions $R_i$ where $R_1\gg R_i$ for $i
> 1$. We assume that the $(N -1) \times (N -1)$ background matrices
describing gravitons $2-n$ contain arbitrary (but small) off-diagonal
terms, and write these matrices as
\[
X^a_{ij} = R_{ij}^a+K^a_{ij},\; \;\; \;\; \;1 < i,j \leq n
\]
where $R^a = {\rm Diag}(R^a_2, \ldots, R^a_n)$. We wish to integrate
out the off-diagonal bosonic and fermionic fields $x_{i}^a$ ($1 \leq a
\leq 9$) and $\psi_{i}^\alpha$ ($1 \leq \alpha \leq 16$) (where we
have simplified notation by writing, for example, $x_i = x_{1i}$).
We ignore the gauge fluctuations in this calculation as they are not
relevant to terms of the type computed in \cite{ffi}.
The relevant terms in the Lagrangian are
\begin{eqnarray*}
& &\frac{1}{R_c} \left\{
\dot{x}^*_i \dot{x}_i
-x_i^* (r_i^2) x_i +
x_i^* (r_i \cdot K_{ij} + K_{ij} \cdot r_j -(K \cdot K)_{ij}) x_j
-2(x^a_i)^* ([Y_a,Y_b])_{ij} x_j^b\right. \\
& & \hspace{1in}\left.
+ i \psi_i \dot{\psi}_i
+\psi_i \gamma^a r^a_i \psi_i
-\psi_i \gamma^a (K^a)_{ij} \psi_j \right\}
\end{eqnarray*}
where we have defined
\[
(Y^a)_{ij} = (X^a)_{ij} - R^a_1 \delta_{ij}
\]
and
\[
r_i = R_1-R_i,
\]
and we have suppressed some spatial and spinor indices $a$ and
$\alpha$. Treating $r_i$ as the masses of the bosonic and fermionic
fields, and the remaining terms as interactions, we can perform the
one-loop calculation order by order by summing over bosonic diagrams
with insertions of the quadratic vertices
of the forms $r \cdot K,K^2$ and $[Y,Y]$,
and fermionic diagrams with insertions of the quadratic vertex
of the form
$K \cdot \gamma$. Terms with fewer than four
bosonic vertices of the $[Y,Y]$ type are all canceled, as
shown in \cite{Dan-Wati}.
In particular, the diagram calculated in \cite{ffi}
comes from a term of the form
\[
2\langle ((x^a_i)^* ([ Y^a, Y^b])_{ij} x_j^b) \cdot
((x^c_k)^* ([Y^c,Y^d])_{kl} x_l^d)
\rangle .
\]
Using the leading part of the bosonic propagator
\cite{Becker-Becker,ffi}
\[
\Delta ( t_1,t_2 |r^2) = \frac{1}{2r} {\rm e}^{-r | t_1-t_2|}
\]
we find that
this term gives a contribution of
\begin{equation}
\frac{2}{r_i r_j (r_i + r_j)}
\left\{ ([R^a,K^b])_{ij}
([R^b,K^a])_{ji} -
([R^a,K^b])_{ij}
([R^a,K^b])_{ji} \right\}
\label{eq:term}
\end{equation}
as well as terms of the form $[K,K]^2$. The expression (3.12) in
\cite{ffi} is a piece of the term (\ref{eq:term}) in the case $n = 2$.
Now consider the fermionic loop diagram with two insertions of
$K^a \gamma^a$. The leading fermionic propagator contains two terms,
one containing a $\theta$ function and no $\gamma$ matrices, and the
other given by
\[
\frac{r^a \gamma^a}{2 r} {\rm e}^{- r | t_1-t_2|} .
\]
Using only this part of the propagator, we get a contribution
\[
-\frac{1}{4 r_i r_j (r_i + r_j)}
{\rm Tr}\; (\gamma^a \gamma^b \gamma^c \gamma^d
r^a_i K^b_{ij} r^c_j K^d_{ji}).
\]
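To evaluate the spinor trace, one can use the standard quadratic and quartic trace identities for the $16\times 16$ real gamma matrices of $SO(9)$ (we record them here for convenience; they are not stated explicitly in the original text):
\[
{\rm Tr}\,(\gamma^{a}\gamma^{b})=16\,\delta^{ab},\qquad
{\rm Tr}\,(\gamma^{a}\gamma^{b}\gamma^{c}\gamma^{d})=16\left(\delta^{ab}\delta^{cd}-\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc}\right).
\]
Contracting the Kronecker deltas with $r^{a}_{i}K^{b}_{ij}r^{c}_{j}K^{d}_{ji}$ produces the three terms that appear in the rewriting of the trace.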
The trace can be rewritten in the form
\begin{eqnarray*}
{\rm Tr}\; (\cdot) & = &
16 \;{\rm Tr}\; \left( r^aK^br^bK^a-r^a K^br^a K^b + r^a K^ar^b K^b
\right) \\
&= & 8 \left(
([R^a,K^b])_{ij}
([R^b,K^a])_{ji} -
([R^a,K^b])_{ij}
([R^a,K^b])_{ji}\right) \\
& &
+ 8 \left( (r \cdot K + K \cdot r)_{ij}(r \cdot K + K \cdot r)_{ji}
\right)
-16 \;r_j^2 (K \cdot K)_{jj}
\end{eqnarray*}
The first term in this expression precisely cancels (\ref{eq:term}).
The remaining terms are higher moments of canceling diagrams, and are
canceled by other one-loop diagrams.
The second term is canceled by the bosonic diagram with two $(r \cdot
K)$ insertions, and the third term is canceled by a combination of the
bosonic term with a single $K \cdot K$ term and the fermionic diagram
with two $K \cdot \gamma$ insertions and theta functions in the propagators.
We have thus shown explicitly, using the same propagator structure as
in \cite{ffi}, that in general all the bosonic one-loop terms with two
$[Y,Y]$ insertions are canceled by a combination of bosonic and
fermionic diagrams. In particular, this shows explicitly that the
term found in expression (3.12) of \cite{ffi} is canceled simply by
integrating out the very massive 1-2 and 1-3 modes, in accord
with the results of Dine and Rajaraman and the
discussion in the main text of this letter.
\bibliographystyle{plain}
\section{INTRODUCTION}
Understanding how to best utilize resources distributed over a network has been, and remains, an important question across many industries. For the power/energy industry, solving the centralized AC optimal power flow (OPF) problem is NP-hard and has been the focus of much research since the 1960s~\cite{squires_economic_1960,dommel_optimal_1968,carpentier_optimal_1979} and more recently, as optimization solvers matured~\cite{molzahn_survey_2017,molzahn_survey_2019}. In some cases, the OPF problem is cast within the setting of (transmission) expansion planning and considers a large number of scenarios, decade-long prediction horizons, and many possible investment decisions~\cite{ploussard_efficient_2018}. In other cases, the focus of the OPF problem is near-term grid operations to determine active and/or reactive power set-points for PV inverters, batteries, and other controllable assets in the grid to minimize operating costs, line losses, voltage deviations from nominal, or to achieve a desired net-load profile that reflects wholesale energy market conditions. These power set-points can be updated every minute or hour in a microgrid~\cite{dallanese_distributed_2013} or distribution grid applications~\cite{shukla_efficient_2019,almassalkhi_hierarchical_2020} corresponding to the timescales of grid or market conditions, e.g., renewable injections or frequency regulation signals. Thus, many applications of the AC OPF require a mix of long prediction horizons and frequent re-computations, which for large networks are computationally challenging.
To overcome the computational challenges associated with solving practical (large-scale) AC OPF problems to (global) optimality, the power/energy community has often studied approximations of the AC physics, such as the so-called (linear) DC power flow~\cite{stott_dc_2009}, convex relaxations~\cite{lavaei_zero_2012,low_convex_2014}, convex restrictions~\cite{dongchan_robust_2021}, and various distributed implementations~\cite{dallanese_optimal_2018}. However, if the AC network could be made a lot smaller, while representing the physics of the network sufficiently well, the computational challenge would decrease significantly~\cite{rogers_aggregation_1991}. Thus, in this paper, we focus on a novel method for (optimally) reducing the AC network, which could then be employed within an appropriate OPF setting.
Network reductions, not to be confused with model-order reduction from systems theory~\cite{kokotovic_singular_1976,sturk_structured_2012,chevalier_accelerate_2021}, have been studied extensively and employ a variety of methods, such as similarity or (electrical) distance measures for clustering, bus aggregations (e.g., REI), and equivalence techniques (e.g., Ward and Kron). In the case of reducing nodes belonging to an ``external'' area, which are nodes that are geographically or electrically distanced from the ``internal'' area, network reduction via Ward- or Kron-based methods can be readily applied and has been standard practice for decades~\cite{ploussard_efficient_2019}. However, recently techniques have focused inwards on the internal area or so-called ``backbone-type'' network reductions, where any nodes can be reduced in the network rather than just ``external'' nodes. These backbone-type equivalents rely on either an initial clustering approach (e.g., $k$-means clustering) to group nodes together into contiguous zones or a pre-defined set of zones. Once the nodes are assigned to specified zones (or subgrids), a network reduction can be readily applied to said zones (e.g., via Ward and Kron or heuristics) and possibly tuned based on some criteria. For example, network-preserving bus aggregation methods by ~\cite{fortenbacher_transmission_2018} and~\cite{shi_novel_2015} employ nonlinear and quadratic optimization, respectively, to tune (susceptance values in) the reduced admittance matrix so as to minimize tie line flow errors with respect to the full network. In~\cite{fortenbacher_transmission_2018}, the method depends on pre-determined zones and a specific operating point to calculate the full network's power transfer distribution factors (PTDFs). 
The algorithm~\cite{shi_novel_2015} replaces the zonal input requirement with a list of pre-determined salient tie lines and also uses PTDFs, which inform a bus clustering algorithm that defines internal zones that are then subject to bus aggregation. These methods can reduce 60,000-bus networks by up to 100X on the order of minutes (on a supercomputer) with small inter-zonal worst-case flow deviation errors, even under different operating conditions. Other approaches sidestep the dependence on operating points by employing DC load flow analysis in deriving independent PTDF values~\cite{oh_new_2010}. In this case, a 15,000-bus network is reduced by 85X after eight hours with relative line flow errors of less than 30\%. Lastly, some methods are built around multiple clustering objectives and heuristics that preserve physical features and network structure, but are overly conservative (i.e., only reduce by 2-3X while line flow deviation errors are around 5-10\%)~\cite{sistermanns_feature-_2019}.
Kron-based network reductions have been shown to be valuable across numerous applications in power system analysis~\cite{dorfler_kron_2013,ploussard_efficient_2019}. For example, comprehensive transmission planning schemes have been built around Kron-based equivalents that employ various optimization formulations whose solutions serve as seeds to identify a set of salient buses and lines to partition the network~\cite{ploussard_efficient_2019}. To speed up 3-phase distribution grid OPF,~\cite{almassalkhi_hierarchical_2020} presents a Kron-based network reduction, where a desired level of reduction informs a nodal clustering scheme that determines which nodes are reduced. This method was able to achieve 10-50X reductions in realistic distribution feeders with maximum voltage deviation errors (between reduced nodes and their corresponding non-reduced ``super'' node in the same cluster) of less than 0.015pu across a wide range of operating conditions.
Across these different approaches to network reductions in power systems, they all depend on pre-specified salient buses, tie-lines, and/or level of desired reduction as inputs. Clearly, these inputs affect the resulting network reduction and this is what motivates a simple, but interesting question: \textit{is there an optimal Kron-reduction?} More precisely: are there a set of nodes and a level of reduction that is optimal (in some sense) when reducing a network? Thus, as a first step towards answering this question, the paper's key contribution is a novel network reduction methodology that leverages a mixed-integer linear programming (MILP) formulation to determine a Kron-based reduction that is optimal in the sense that it automatically balances the level of reduction (i.e., complexity) with resulting worst-case voltage deviation errors between the reduced and full networks. The method is based on a pre-computed library of AC load flow data (i.e., operating points) and guarantees that any feasible solution is a valid Kron reduction that preserves the network's structure. As far as the authors are aware, there is no other literature that casts a Kron-based network reduction entirely within an efficient MILP optimization formulation. To ensure tractability in the MILP formulation, we constrain nodes to only reduce to a ``super node,'' if they are neighbors (as defined by the graph Laplacian). Then, we successively reduce the network via an iterative scheme to overcome the nodal neighbor limitation. The entire methodology, denoted Opti-KRON, is validated via simulation-based analysis on a 115-node, radial and balanced IEEE test network, which represents a minor contribution as it provides insight on different optimal Kron-based network reductions.
The remaining paper is structured as follows. Section~\ref{sec:model} presents the network model and summarizes Kron reductions. In Section~\ref{sec:method}, the MILP formulation for Kron-based network reduction is presented. Simulation-based analysis is presented in Section~\ref{sec:exampleLarger}. Finally, the paper concludes in Section~\ref{sec:end} with a summary and a brief discussion on future directions and applications.
\section{Network model and Kron-reduction}\label{sec:model}
For the sake of notational simplicity, consider a single-phase power system network whose graph ${\mathcal G}({\mathcal V},{\mathcal E})$ has edge set $\mathcal{E}$, $|\mathcal{E}|=m$, vertex\footnote{In power systems, a vertex in a power network is commonly denoted a node in distribution systems and a busbar (or bus) in transmission systems. Given the general discussion of power networks, we will use node and bus interchangeably.} set $\mathcal{V}$, $|\mathcal{V}|=n$, and signed nodal incidence matrix $E\in{\mathbb R}^{m\times n}$. The complex nodal admittance matrix (i.e., Y-bus matrix) ${Y}_b\in{\mathbb C}^{n\times n}$ associated with this system is constructed via
\begin{equation}\label{eq: Yb}
{Y}_{b}=E^{T}{Y}_{l}E+{Y}_{s},
\end{equation}
where ${Y}_{l}\in{\mathbb C}^{m\times m}$ is the diagonal matrix of complex line admittances and ${Y}_{s}\in{\mathbb C}^{n\times n}$ is the diagonal matrix of complex nodal shunt admittances. In this paper, we generally assume ${Y}_{s}\ne 0$, implying ${Y}_{b}$ is a nonsingular matrix. Leveraging this property, the so-called nodal impedance matrix ${ Z}_b\in{\mathbb C}^{n\times n}$ can be directly computed as the inverse of (\ref{eq: Yb}): ${Z}_{b}={Y}_{b}^{-1}$. The nodal admittance (and impedance) matrices directly relate complex nodal voltages and current injections via ${I}={Y}_{b}{V}$ (and ${Z}_{b}{I}={V}$).
\subsection{The Kron-Reduction Procedure}\label{sec:method}
Without loss of generality, we partition the network via
\begin{align}\label{eq: Y_partition}
{I} & ={Y}_{b}{V}\\
\left[\begin{array}{c}
{I}_{k}\\
\hline {I}_{r}
\end{array}\right] & =\left[\begin{array}{c|c}
{Y}_{b1} & {Y}_{b2}\\
\hline {Y}_{b3} & {Y}_{b4}
\end{array}\right]\left[\begin{array}{c}
{V}_{k}\\
\hline {V}_{r}
\end{array}\right],
\end{align}
where subscripts ``$r$" and ``$k$" denote nodes which are ultimately reduced and kept, respectively. As in~\cite{dorfler_kron_2013}, Gaussian elimination of the nodal voltages ${ V}_r$ is achieved by
\begin{align}
{I}_{k}=\left({Y}_{b1}-{Y}_{b2}{Y}_{b4}^{-1}{Y}_{b3}\right){V}_{k}+\left({Y}_{b2}{Y}_{b4}^{-1}\right){I}_{r}.
\end{align}
The Kron reduction of (\ref{eq: Y_partition}), which is used to ``eliminate" nodes with zero current injection (i.e., $I_r=0$), is canonically given by the following Schur complement:
\begin{align}
{Y}_{K}={Y}_{b1}-{Y}_{b2}{Y}_{b4}^{-1}{Y}_{b3}.
\end{align}
Alternatively, ${Y}_{K}$ can be constructed using the network impedance matrix, whose associated partition is given by
\begin{align}\label{eq: Zb}
\left[\begin{array}{c|c}
{Z}_{b1} & {Z}_{b2}\\
\hline {Z}_{b3} & {Z}_{b4}
\end{array}\right]\left[\begin{array}{c}
{I}_{k}\\
\hline 0
\end{array}\right]=\left[\begin{array}{c}
{V}_{k}\\
\hline {V}_{r}
\end{array}\right].
\end{align}
\begin{remark}
The Kron-reduced admittance ${Y}_{K}$ is equal to the inverse of sub-impedance ${Z}_{b1}$.
\end{remark} \begin{proof} By construction, the Kron admittance relates ${I}_{k}={Y}_{K}{V}_{k}$. From (\ref{eq: Zb}), ${Z}_{b1}{I}_{k}={V}_{k}$. Therefore, ${Z}_{b1}={Y}_{K}^{-1}$.
\end{proof}
In the following, we define the Kron impedance matrix as ${Z}_{K}\triangleq{Z}_{b1}$
from (\ref{eq: Zb}). Note that any removal of rows and columns from ${Z}_{K}$ will result in a valid Kron impedance matrix, in the sense that it will relate nodal voltages and currents. {Thus, if we can optimally select which nodes to reduce and where to assign them, we can effectively choose an optimal set of rows and columns to remove from ${Z}_{K}$. This would then allow us to define an optimal Kron impedance matrix through a set of binary decisions, which is illustrated in Fig.~\ref{fig:explainKron}.} This inspires the following mixed-integer formulation.
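To make the equivalence between the Schur-complement construction of ${Y}_{K}$ and the impedance-submatrix construction of ${Z}_{K}$ concrete, here is a minimal numerical sketch (our own illustration on an assumed 4-node path network; not from the paper):

```python
import numpy as np

# Toy 4-node path network with complex line admittances and nonzero shunts
rng = np.random.default_rng(0)
n = 4
E = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1]])                 # signed incidence matrix (3 lines)
Yl = np.diag(rng.random(3) + 1j * rng.random(3))  # diagonal line admittances
Ys = np.diag((0.1 + 0.05j) * np.ones(n))          # shunts -> Yb nonsingular
Yb = E.T @ Yl @ E + Ys                            # nodal admittance matrix

keep = [0, 1]   # kept nodes
red = [2, 3]    # reduced nodes (zero current injections)

# Kron reduction via the Schur complement of the admittance partition
Yb1 = Yb[np.ix_(keep, keep)]
Yb2 = Yb[np.ix_(keep, red)]
Yb3 = Yb[np.ix_(red, keep)]
Yb4 = Yb[np.ix_(red, red)]
YK = Yb1 - Yb2 @ np.linalg.solve(Yb4, Yb3)

# Equivalent construction: YK is the inverse of the kept block of Zb
Zb = np.linalg.inv(Yb)
ZK = Zb[np.ix_(keep, keep)]
assert np.allclose(YK, np.linalg.inv(ZK))
```

Both constructions agree numerically, as the remark above proves in general.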
\subsection{Mixed-Integer Approach for Constructing Kron Matrices}\label{sec:modelMIP}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figs/cdc_Kron_explainer.png}
\caption{Illustration of how a network can be partitioned in two different ways to yield two different Kron-reduced networks. The partition is based on reduced and kept (super) nodes. The {\color{mygreen}green}, numbered circles represent kept nodes or super nodes, while {\color{myred}red} circles are reduced nodes and eliminated in the reduced network. The dashed (\textbf{-\,-\,-}) ellipses illustrate which reduced nodes are assigned to which super nodes and define how injected currents are assigned to each super node.}
\centering
\label{fig:explainKron}
\end{figure}
In the following, we define binary variable $s_i\in\{0,1\}$, which selects the optimal Kron impedance matrix $Z_k$, where $s_i=0$ or $1$ indicates if the $i^{\rm th}$ node is reduced or kept, respectively. We accordingly define binary vector $s\in \{0,1\}^{n}$ and the associated diagonal selection matrix $S\triangleq{\rm diag}\{s\}$. For any given binary values, the matrix product $S{Z}_{b}S$ thus yields a matrix which we refer to as a generalized Kron impedance, defined as ${Z}_{\hat{K}}\triangleq S{Z}_{b}S$. This generalized Kron impedance has the dimensions of the full nodal impedance matrix ($n\times n$), but a subset of its rows and columns are zeroed-out. As an example, the generalized Kron impedance of (\ref{eq: Zb}) is
\begin{align}
{Z}_{\hat{K}} & =\left[\begin{array}{c|c}
I & 0\\
\hline 0 & 0
\end{array}\right]\!\!\left[\begin{array}{c|c}
{Z}_{b1} & {Z}_{b2}\\
\hline {Z}_{b3} & {Z}_{b4}
\end{array}\right]\!\!\left[\begin{array}{c|c}
I & 0\\
\hline 0 & 0
\end{array}\right]\!=\!\left[\begin{array}{c|c}
{Z}_{b1} & 0\\
\hline 0 & 0
\end{array}\right],
\end{align}
where the diagonal binary values of $S$ ``kept" the top nodes and ``reduced" the bottom ones.
We define an additional binary decision matrix, $A\in \{0,1\}^{n \times n}$, which codifies where currents from reduced nodes are placed. Accordingly, $A_{i,j}=1$ if the nodal current injection from bus $j$ is placed at bus $i$, and $A_{i,j}=0$ otherwise. Since current can only be assigned to a single bus, $\sum_i A_{i,j}=1$ is always enforced. Furthermore, $\sum_j A_{i,j}\le M_b S_{i,i}$, where $M_b$ is a sufficiently large (big-M) constant, ensures that currents cannot be assigned to a reduced bus, and $S_{i,i} = A_{i,i}$ guarantees that each non-reduced bus does not move its own current. Based on these rules, the matrix-vector product ${ I}_K = A{ I}$ naturally and properly aggregates currents at non-reduced nodes (i.e., Kron currents), and the following product allows for the direct computation of Kron voltages:
\begin{align}\label{eq: Vk}
{V}_{K}=S{Z}_{b}SA{I}.
\end{align}
We define the non-reduced nodes as ``super nodes", and the Kron reduced voltages at these super nodes are given by (\ref{eq: Vk}).
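The selection/assignment machinery above can be sketched numerically as follows (our own toy example with assumed values for $Z_b$, $I$, and the big-M constant; not from the paper):

```python
import numpy as np

# Tiny 3-node example: node 2 is reduced; its current is assigned to node 1.
s = np.array([1, 1, 0])            # s_i = 1: keep node i, s_i = 0: reduce it
S = np.diag(s)
A = np.array([[1, 0, 0],
              [0, 1, 1],           # A[1,2] = 1: current of node 2 moved to node 1
              [0, 0, 0]])

# Feasibility rules on A described in the text (M_b = 10 assumed)
assert np.all(A.sum(axis=0) == 1)        # each current assigned exactly once
assert np.all(A.sum(axis=1) <= 10 * s)   # no currents assigned to reduced nodes
assert np.all(np.diag(A) == s)           # kept nodes keep their own current

# Assumed impedance matrix and nodal current injections
Zb = np.linalg.inv(np.array([[ 2.0, -1.0,  0.0],
                             [-1.0,  2.0, -1.0],
                             [ 0.0, -1.0,  2.0]]) + 0.1 * np.eye(3))
I = np.array([0.3, -0.1, 0.2])

VK = S @ Zb @ S @ A @ I            # Kron voltages; zero at the reduced node
assert VK[2] == 0.0
```

Note how left-multiplying by $S$ zeros out the row of the reduced node, so its Kron voltage entry vanishes while the super-node voltages remain.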
In order to compute an \textit{optimal} Kron reduction, we need to define an objective function, which balances the trade off between complexity (i.e., level of detail) and corresponding nodal voltage deviation error (i.e., performance of reduced network). As the number of reduced nodes is a measure of reduction in network complexity, we can capture this by minimizing the number non-zero binary values in the reduction vector $s$. In order to quantify voltage deviation error (which generally increases as network reduction increases), we take the infinity norm of the difference between the Kron voltage at the $i^{\rm th}$ super node (${V}_{K,i}$) and the voltages at all nodes within its cluster $C_{K,i}$, across all potential super nodes. The resulting objective function is then given by
\begin{align}\label{eq: Loss}
\mathcal{L}=\underbrace{\left\Vert {V}_{K,i}-{V}_{j\in C_{K,i}}\right\Vert_{\infty}}_{{\rm error}} - \alpha \underbrace{\frac{1}{n}\sum_{j=1}^n(1-s_{j})}_{{\rm complexity}},
\end{align}
where $\alpha$ balances these two terms. We note that the network currents ${I}$ in (\ref{eq: Vk}) and the cluster voltages ${V}_{j}$ in (\ref{eq: Loss}) are assumed to be given as input data libraries for the optimization problem. Ideally, these data vectors (or matrices) come from representative AC power flow solutions collected on the full network. Thus, an optimal (with respect to (\ref{eq: Loss})), yet, naive MILP-based Kron reduction can be stated as
\begin{subequations}\label{eq: Kron}
\begin{align}
\min_{s,A}\;\; & \left\Vert {V}_{K,i}-{V}_{j\in C_{K,i}}\right\Vert _{\infty}-\frac{\alpha}{n}\sum_{j=1}^{n}(1-s_{j})\label{eq: obj}\\
{\rm s.t.}\;\; & {V}_{K}=S{Z}_{b}SA{I}\label{eq: SZS_opt}\\
& s_{i},\;A_{i,j}\in\{0,1\}\\
& S={\rm diag}(s)\\
& \ensuremath{\ensuremath{\sum_{i}A_{i,j}=1}}\label{eq: Aij1}\\
& \ensuremath{\sum_{j}A_{i,j}\le M_{b}S_{i,i}}\label{eq: Aij}\\
& \ensuremath{S_{i,i}=A_{i,i}}\label{eq: Aijs}.
\end{align}
\end{subequations}
While (\ref{eq: Kron}) will generally compute a valid Kron impedance matrix in (\ref{eq: SZS_opt}), the given formulation presents a variety of challenges. First, it does not formally constrain current injections associated with reduced nodes from being placed on super nodes which are electrically or geographically ``far" from their physical location. Second, the product $S Z_b S AI$ in (\ref{eq: SZS_opt}) contains cubic binary terms. And third, this problem is generally intractable for large-scale systems, since matrix $A$ is a binary matrix which engenders a large branch-and-bound search space for MILP solvers. In the following subsection, we address all three of these challenges to engender a tractable MILP-based Kron reduction.
\subsection{Formulation Improvements }\label{sec:modelMIP_update}
The issue of cubic binary terms in (\ref{eq: SZS_opt}) is sidestepped by first simplifying the product term $SA$.
\begin{lemma}
$SA=A$.
\begin{proof}
Constraint (\ref{eq: Aij}) forces the $j$-th row of $A$ to 0 if $S_{j,j} = 0$; thus, reduced nodes (i) must place their currents somewhere else ($A_{j,j}=0$) and (ii) cannot receive currents from other reduced nodes ($A_{j,i}=0$). Hence, whenever $S_{j,j}=0$, the corresponding row of $A$ is already all zeros, so left-multiplying $A$ by $S$ changes nothing. Thus, $SA=A$.
\end{proof}
\end{lemma}
Using $SA=A$, we now have that ${V}_{K}=S{Z}_{b}A{I}$. However, we cannot apply the same trick again to simplify $SZ_{b}A$ since $Z_{b}$ is generally a dense matrix. This means that each element of $A$ is multiplied by each diagonal element of $S$. Since the product of any two binaries can be reformulated in linear form (thus, ``linearizing'' the expression) by introducing a third auxiliary binary variable, directly linearizing $SZ_{b}A$ will generally require $n^3$ binary auxiliary variables; e.g., binary matrices $B_1 = S_{1,1}\times A$, $B_2= S_{2,2}\times A$, etc. Rather than directly linearizing this expression, however, we can leverage the physically motivated observation that any error accumulated by removing $S$ from the Kron voltage equation (denoted with a tilde: $\tilde{V}_{K}={Z}_{b}A{I}$) can be subsumed into auxiliary big-M slack factors. To do so, we reformulate the infinity norm in the objective function of (\ref{eq: Kron}) with a continuous slack variable $\delta$:
\begin{subequations}\label{eq: Kron_simp}
\begin{align}\min_{s,A,\delta}\;\; & \delta-\frac{\alpha}{n}\sum_{j=1}^{n}(1-s_{j})\\
{\rm s.t.}\;\; & \tilde{V}_{K,i}-V_{j\in C_{K,i}}\le\delta+M_{b}(1-A_{i,j}),\forall i\label{eq: vkt}\\
& V_{j\in C_{K,i}}-\tilde{V}_{K,i}\le\delta+M_{b}(1-A_{i,j}),\forall i\label{eq: -vkt}\\
& \tilde{V}_{K}=Z_{b}AI\label{eq: Vktil}\\
& \eqref{eq: Aij1}-\eqref{eq: Aijs}\nonumber
\end{align}
\end{subequations}
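As a side note (our addition), the exact linearization of a product of two binary variables mentioned above can be written explicitly as
\[
b = x\,y,\;\; x,y\in\{0,1\} \quad\Longleftrightarrow\quad b\le x,\;\; b\le y,\;\; b\ge x+y-1,\;\; b\in\{0,1\},
\]
which is why directly linearizing $SZ_{b}A$ would require on the order of $n^{3}$ auxiliary binary variables.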
\begin{lemma}
Despite the Kron voltage error in (\ref{eq: Vktil}) caused by the elimination of $S$, (\ref{eq: Kron_simp}) and (\ref{eq: Kron}) have identical minimizers.
\begin{proof}
Multiplying ${\tilde V}_K$ by $S$ yields ${V}_K$, so $S$ effectively zeros out non-super-node voltages. However, it does not change the value of the super node voltage itself. $A_{i,j}=1$ indicates that node $j$ is inside the cluster associated with super node $i$. In this case, $M_{b}(1-A_{i,j})=0$, and $\delta$ will be a supremum for the exact intra-cluster voltage deviations (since super node voltages are preserved in ${\tilde V}_K$). However, when $A_{i,j}=0$, node $j$ is not internal to the cluster associated with node $i$, which may or may not be a super node. Therefore, $M_{b}(1-A_{i,j})=M_{b}$ will safely upper bound any voltage deviation between ${\tilde V}_K$ and $V_{j\in C_{K,i}}$, thus leaving $\delta$ unaffected. Since $\delta$ accurately captures the infinity norm value from (\ref{eq: obj}), the programs must have identical minimizers.
\end{proof}
\end{lemma}
In order to avoid allowing the optimizer to add current from reduced nodes far from the super node itself, we employ the graph Laplacian to constrain current aggregation only at neighboring nodes.
To accomplish this, we enforce the binary values in matrix $A$ (which chooses where currents are aggregated) to satisfy
\begin{align}\label{eq: Laplacian}
A_{i,j}\le|E^{T}E|_{i,j},
\end{align}
where $|E^{T}E|_{i,j}$ is the $i,j$-th entry of the absolute value of the graph Laplacian. Therefore, if two nodes are not direct neighbors, then their currents cannot be aggregated together. Not only does this prevent currents from being placed in non-physically meaningful places, it also greatly limits the size of the search space, thus greatly increasing the tractability of (\ref{eq: Kron}).
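The neighbor mask $|E^{T}E|$ and the resulting restriction on $A$ can be computed as in the following small sketch (our own, for an assumed 4-node path graph):

```python
import numpy as np

# Signed incidence matrix of a 4-node path graph (3 edges)
E = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1]])
L = np.abs(E.T @ E)   # |E^T E|: nonzero exactly on the diagonal and for neighbors

# A_{i,j} may equal 1 only where the mask is nonzero
allowed = L != 0
assert allowed[0, 1]        # nodes 0 and 1 are direct neighbors
assert not allowed[0, 2]    # nodes 0 and 2 are not, so A[0,2] is forced to 0
```

In a MILP solver, this mask simply fixes the disallowed entries of $A$ to zero before branching, which is what shrinks the search space.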
\subsection{Successive Enhancement of Reduced Networks }\label{sec:modelMIP_iterate}
While (\ref{eq: Kron_simp}) \& (\ref{eq: Laplacian}) jointly represent a highly tractable mathematical program, the degree of reduction it can achieve is limited by the graph Laplacian constraint in~\eqref{eq: Laplacian}. Since nodes can only aggregate with their neighbors, the algorithm cannot typically achieve more than a 60\% network reduction in a given solve. To overcome this hurdle, we propose an iterative implementation. That is, find an optimal network reduction, construct the reduced network, and then find another optimal network reduction of the pre-reduced network. This procedure is repeated until either (i) the desired level of reduction is achieved, or (ii), zero nodes are reduced. In order to control the maximal size $\beta$ of network reduction (i.e., force optimizer to make small network reductions at each step), or control the maximal acceptable voltage deviations $\gamma$, we can embed associated constraints directly in the program.
\begin{subequations} \label{eq:optiKron}
\begin{align}\min_{A,\delta}\;\; & \delta-\frac{\alpha}{n}\sum_{j=1}^{n}(1-A_{j,j})\\
{\rm s.t.}\;\; & \sum_{j=1}^{n}(1-A_{j,j})\le n\beta\\
& \delta\le\gamma\\
& \eqref{eq: Aij1}-\eqref{eq: Aijs},\eqref{eq: vkt} - \eqref{eq: Vktil},\eqref{eq: Laplacian}\nonumber.
\end{align}
\end{subequations}
This iterative approach is illustrated in Fig.~\ref{fig:algoIter} and described algorithmically in Algorithm~\ref{alg:one}. We refer to the tractable mathematical program given by~\eqref{eq:optiKron} as Opti-KRON. Next, we apply Opti-KRON and Algorithm~\ref{alg:one} to optimally Kron reduce an IEEE test network, which represents a balanced, medium-voltage, radial distribution feeder.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figs/cdc_algo_draw2.png}
\caption{The algorithm for successively enhancing the Kron-reduced network uses Opti-KRON, which is given by~\eqref{eq:optiKron}. The inputs are network and AC load flow data and the parameters that define the MILP formulation's objective function. The output is an optimal Kron-reduced network where kept nodes are denoted super nodes.}
\centering
\label{fig:algoIter}
\end{figure}
\RestyleAlgo{ruled}
\SetKwComment{Comment}{/* }{ */}
\begin{algorithm}
\caption{Opti-KRON Successive Enhancement }\label{alg:one}
\KwData{$Y_b, V, I, \alpha, \beta, \gamma$}
\KwResult{$Z_K$ (Optimal Kron-reduced network)}
$p=0, s^{(p)}=\mathbf{1}_n, \Delta s^{(p)}=n$ \;
${Z}_{b} = {Y}_{b}^{-1}$ \;
\While{$\Delta s^{(p)}>0$}{
$s^{(p+1)} \gets$ Solve Opti-KRON in \eqref{eq:optiKron}\;
$\Delta s^{(p+1)} = \mathbf{1}_n^\top (s^{(p)} - s^{(p+1)})$ \;
$p \gets p+1$ \;
}
$S \gets \text{diag}\{s^{(p)}\}$\;
$Z_K \gets S Z_b S A$ \;
\end{algorithm}
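The loop structure of Algorithm~\ref{alg:one} can be sketched as follows, where the MILP solve of~\eqref{eq:optiKron} is replaced by a hypothetical greedy stand-in (the function `solve_opti_kron_stub` is our invention purely to illustrate the iteration and the convergence test $\Delta s = 0$; it does not minimize~\eqref{eq: Loss}):

```python
import numpy as np

def solve_opti_kron_stub(s, adjacency):
    """Hypothetical stand-in for the Opti-KRON MILP solve: greedily
    reduces one kept node that still has a kept neighbor to absorb it."""
    s = s.copy()
    for i in np.flatnonzero(s):
        neighbors = np.flatnonzero(adjacency[i])
        if any(s[j] == 1 for j in neighbors):
            s[i] = 0          # reduce node i into a neighboring super node
            return s
    return s                  # no further reduction possible

# Assumed 4-node path graph; adjacency from the graph Laplacian mask
E = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]])
adjacency = (np.abs(E.T @ E) != 0) & ~np.eye(4, dtype=bool)

s = np.ones(4, dtype=int)     # iteration 0: all nodes kept
while True:
    s_next = solve_opti_kron_stub(s, adjacency)
    if (s - s_next).sum() == 0:   # Delta s = 0: converged
        break
    s = s_next
```

With a real MILP in place of the stub, each pass returns a valid Kron reduction of the previous iterate, so the loop successively enhances the reduction until no node can be eliminated.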
\section{Experimental results for 115-node radial network} \label{sec:exampleLarger}
The 115-node radial, balanced IEEE distribution test feeder from~\cite{schneider_ieee_2017} provides $Y_b$ and is used herein to illustrate Algorithm~\ref{alg:one}. To balance complexity and error, $\alpha= 0.002$, while the maximum reduction in complexity for a single iteration of~\eqref{eq: Kron_simp} is limited initially to 25\% (i.e., $\beta = 0.25$). The worst-case voltage deviation error is effectively unconstrained by setting $\gamma = 1.0$pu. Finally, two distinct nodal net-injection profiles are applied to the network to generate the network's necessary data scenarios on complex branch currents, $I$, and nodal voltages, $V$. The corresponding voltage profiles, $|V_i|\,\, \forall i=1,\hdots, n$ are shown in Fig.~\ref{fig:voltProfiles}.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figs/cdc_voltprofile.pdf}
\caption{The voltage profile resulting from two distinct net-load injections. The red line represents a heavily loaded scenario, while the higher voltage for the blue line represents a lightly loaded scenario with more solar PV injections.}
\label{fig:voltProfiles}
\centering
\end{figure}
With all input data now available, Algorithm~\ref{alg:one} can be executed and converges in eight iterations and under five seconds total, which highlights tractability of Opti-KRON. The resulting optimal Kron-based network reduction has eliminated 85\% of nodes, yet embodies a worst-case intra-cluster voltage deviation error across both load scenarios of less than $0.007$pu. To investigate the accuracy of Opti-KRON, we subject the optimal Kron reduction \textit{at each iteration} to operating conditions that sweep from low-load to high-load conditions (via a convex combination of the initial injection data). Then, we record the maximum intra-cluster (super node) voltage deviation errors, which are illustrated in Fig.~\ref{fig:volterrorsIters}. These results clearly show that despite subjecting the optimal Kron reduction to a wide range of operating conditions, the worst-case voltage deviation errors are still very small across all super nodes and loading conditions (i.e., all super node clusters deviate from their corresponding reduced nodes by less than 0.0065pu). The fact that errors do not increase away from known input data scenarios $(V,I)$ (which are at either end in Fig.~\ref{fig:volterrorsIters}) may seem surprising. However, AC load flows are nonlinear, the optimal Kron-reduction minimizes the worst-case voltage errors, and the two load scenarios were low- and high-load conditions. This means that away from high-load conditions (which was in our initial set of data), the voltages at each node will become closer to 1.00pu and, thus, closer to each other, which reduces voltage deviation errors. Thus, including high net-load demand profiles to generate initial input data that has large voltage deviations may help find an optimal network reduction that captures the full system behavior accurately. In addition, the structure-preserving nature of the optimal Kron reduction appears quite valuable to represent a wide range of operating conditions.
Lastly, to understand the effects of constraining the complexity at each iteration, we explored different upper bounds, $\beta = \{0.10, 0.25, 0.50, 0.75\}$. Then, we looked at the number of iterations required to achieve a converged optimal Kron-based network reduction, the level of the reduction, and the corresponding worst-case voltage errors. Results are summarized in Table~\ref{tab:compareBeta} and show that smaller bounds can reduce overall errors, but at the cost of the reduction itself. The best overall point is $\beta=0.25$, with high level of reduction and reasonably small voltage error ($<0.01$pu).
\begin{table}[h]
\caption{Different upper bounds on complexity ($\beta$)}
\label{tab:compareBeta}
\small
\centering
\begin{tabular}{lcccc}
\toprule\toprule
\textbf{item} & \textbf{$\beta = 0.10$} & \textbf{$0.25$} & \textbf{$0.50$} & \textbf{$0.75$} \\
\midrule
Iterations (\#) & 17 & 8 & 7 & 7 \\
Reduction (\%) & 75 & 85 & 83 & 83.5 \\
Voltage error (pu) & 0.0030 & 0.0065 & 0.0035 & 0.005 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figs/Voltage_error.pdf}
\caption{Worst-case super node intra-cluster voltage errors for all iterate Kron-reduced versions of the 115-node test feeder, which is optimally reduced in eight iterations of Algorithm~\ref{alg:one}.
}
\label{fig:volterrorsIters}
\centering
\end{figure}
\section{Conclusion and Future work} \label{sec:end}
This paper develops a novel and efficient mixed-integer linear optimization-based methodology for generating structure-preserving network reductions of electric power networks. The MILP formulation enables trading off complexity (in the number of reduced nodes) and errors (in terms of worst-case voltage deviations across all super node clusters) and uses the network's graph Laplacian to restrict nodal eliminations to only include neighbors of chosen super nodes. By leveraging the efficient MILP formulation, an iterative scheme is employed to successively enhance the network reduction while ensuring that each iterate is a valid Kron reduction of the full network. Furthermore, simulation-based analysis is used to numerically explore the formulation and characterize and compare the optimal Kron reductions. The computational results illustrate that Opti-KRON can reduce full networks of more than 100 nodes by 25-90\% at optimality and within seconds. These optimal network reductions engender worst-case intra-cluster voltage magnitude deviations of less than 0.01pu.
Future work will pursue a number of open questions resulting from discoveries herein. For example, while the iterative scheme is guaranteed to converge to a Kron-reduced network, we have not established global optimality guarantees at convergence. However, for radial networks, it may be possible to prove that the successive iterations will yield the globally optimal Kron reduced network~\cite{yuan_inverse_2021}. Furthermore, we are interested in using the optimal Kron-reduced networks in OPF problems and want to incorporate the corresponding worst-case intra-cluster voltage deviations to yield robust OPF formulations (e.g., via tightened voltage bounds) whose solutions guarantee admissibility in the underlying full network~\cite{almassalkhi_hierarchical_2020}. Similarly, solving the OPF on a reduced network will require a disaggregation policy to lift the optimal solution on the reduced network (e.g., a dispatch of aggregated resources) to the full network's (individual) resources. This lifting may not be unique and can be carried out in numerous ways, which results in loss of optimality relative to solving the OPF on the full network~\cite{rogers_aggregation_1991}. Thus, developing disaggregation policies with optimality guarantees is of interest.
\section{Introduction}
Cosmic microwave background (CMB) temperature fluctuations provide invaluable information about our Universe, and can give extremely tight constraints on cosmological parameters \citep{Kofman1993, Hinshaw2012, Planck2015-params, Planck2018-params}. The primary CMB anisotropy encodes information about the primordial universe, measured at $z \approx 1100$. However, since the discovery of the CMB, a lot of progress has also been made on the secondary CMB anisotropies, such as gravitational lensing \citep{Smith2007, Hirata2008, Lewis2006}, the thermal and kinetic Sunyaev-Zel'dovich (tSZ, kSZ) effects \citep{Sunyaev1980}, and the integrated Sachs-Wolfe effect (ISW) \citep{Sachs1967}. These effects can act as foregrounds for the primary CMB, but they also encode information about the growth of structure at lower redshifts, a powerful probe of Dark Energy, Modified Gravity and neutrino masses \citep{Lewis2006}.
Secondary CMB anisotropies are produced by large scale structures (LSS) in the late-time universe \citep{Aghanim2008} and can be easily detected through cross-correlation with the LSS. In particular, CMB lensing traces the matter density field at intermediate redshifts ($z \lesssim 1100$). As CMB photons travel to the observer, they are gravitationally deflected by the matter, leaving an imprint on the observed CMB temperature and polarization fluctuations. Weak lensing of the CMB introduces off-diagonal correlations between Fourier modes, allowing the CMB lensing deflection field $\bf{d}$ to be estimated \citep{Hu2002}.
We use the CMB lensing convergence field as a tracer of the dark matter field, and thus of the large scale structure of the Universe. More specifically, we use the CMB lensing convergence field to study properties of quasars, which trace the large scale structure at intermediate redshifts. Quasars are thought to be luminous accreting supermassive black holes at the centers of distant galaxies \citep{Salpeter1964}. Like galaxies, they are tracers of the 3D distribution of dark matter at different redshifts. With the understanding that almost every galaxy hosts a supermassive black hole at its center \citep{Kormendy1995}, quasars can be thought of as a phase in galaxy evolution. Properties of quasars, such as the characteristic mass of their host halos \citep{Tinker2010, DiPompeo2014}, can be inferred by studying the relationship between the dark matter distribution and quasar clustering. The information about quasar properties can reveal much about the growth of structure over the history of the universe \citep{Marziani2014, Mortlock2015}.
Both CMB lensing and the observed quasar overdensity depend on the projected matter overdensity, and the quasar redshift distribution matches well with the CMB lensing kernel, so the CMB lensing convergence and the quasar overdensity should have a relatively strong correlation \citep{Peiris2000}. Quasars are biased tracers of the underlying matter density field \citep{Kaiser1984}, meaning that the observed cross power spectrum is proportional to the quasar bias. This factor parametrizes the properties of the clustering of quasars and encapsulates the information about the processes of galaxy formation and evolution that are currently not very well understood \citep{Amendola2017}. Measuring this bias factor would be crucial to the understanding of galaxy formation and the evolution of supermassive black holes within the standard structural formation framework \citep{Shen2009}.
\citet{Laurent2017} have analyzed the auto-correlation of the eBOSS quasars, and put constraints on the quasar bias, as well as the corresponding host halo mass. In this paper, we use an alternative way to constrain the quasar bias, by cross-correlating the CMB lensing map from Planck and quasar overdensities drawn from eBOSS Data Release 14 \citep{Dawson2016}. We measure a quasar bias that is consistent with the auto-correlation result. We then calculate the corresponding characteristic host halo mass of the quasars. All calculations assume a cosmology with the Planck TT+lowP+lensing parameters \citep{Planck2015-params}.
The remainder of the paper is organized as follows: in Section \ref{sec:background} we present the theoretical background of quasar linear bias and the CMB lensing-quasar angular cross power spectrum. Section \ref{sec:methods} includes the data samples and estimators we used to evaluate the observed power spectrum. In Section \ref{sec:results}, we estimate the cross power spectrum, the quasar bias, and the characteristic host halo mass. In Section \ref{sec:error}, we discuss errors, systematic effects and a null test performed on the data. We draw our conclusions in Section \ref{sec:conclusions}.
\begin{figure}
\includegraphics[width=\columnwidth]{fz}
\caption{The redshift distributions of the selected quasars in the range $0.9 < z < 2.2$ in the North and South Galactic Caps (NGC, in blue; and SGC, in orange). The normalized redshift distribution is plotted on the y-axis.}
\label{fig:fz}
\end{figure}
\section{Theoretical Background}
\label{sec:background}
\subsection{Overview}
Quasars reside in the nuclei of distant galaxies and are expected to be biased tracers of the matter overdensity on large scales. In other words, the number density of quasars is related to the dark matter overdensity by a bias factor, i.e. $\delta_q = b_q \ \delta_m$, where $b_q$ can be a function of scale, redshift, formation history or other environment related factors \citep{White1978}. The amplitude of the deflection by CMB lensing in a given direction depends on the projected matter density in that direction. Thus we expect the quasar number density to be correlated with CMB lensing convergence \citep{Peiris2000}.
\subsection{Angular cross power spectrum}
\label{sec:powspec}
To relate the CMB lensing to the matter overdensity field, we define the lensing convergence, $\kappa \equiv -\frac{1}{2} \nabla \cdot \bf{d}$, where $\bf{d}$ is the lensing deflection field. The lensing convergence is a weighted projection of the matter overdensity in direction $\hat{n}$ along the line of sight \citep{Lewis2006}:
\begin{ceqn}
\begin{equation}
\label{eq:kappa}
\kappa(\hat{n}) = \int_0^{z_{\textrm{\tiny CMB}}} dz W(z) \delta_m(\chi(z) \hat{n}, z)
\end{equation}
\end{ceqn}
where $z_{\textrm{\tiny CMB}} \approx 1100$ is the redshift at the last scattering surface, $W(z)$ is the CMB lensing kernel, $\delta_m(\chi(z) \hat{n}, z)$ is the matter overdensity at redshift $z$ in the direction $\hat{n}$, and $\chi(z)$ is the comoving distance at redshift $z$. Assuming a flat universe, $W(z)$ is given by
\begin{ceqn}
\begin{equation}
\label{eq:wz}
W(z)= \frac{3 H_0^2 \Omega_{m,0}}{2 c H(z)} (1+z) \chi(z)\left( 1 - \frac{\chi(z)}{\chi_{\textrm{\tiny CMB}}}\right)
\end{equation}
\end{ceqn}
where $\chi_{\textrm{\tiny CMB}}$ is the comoving distance to the last scattering surface, $H_0$ is the current Hubble parameter, $H(z)$ is the Hubble parameter at redshift $z$, and $\Omega_{m, 0}$ is the current matter density parameter. Since the lensing potential $\phi$ is a 2D projection of the gravitational potential, we can treat CMB lensing as an unbiased tracer of the underlying matter overdensity field \citep{Lewis2006}.
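As an illustration, the kernel of Equation~\ref{eq:wz} can be tabulated directly for a flat $\Lambda$CDM background. The following pure-Python sketch uses illustrative parameter values, not the exact Planck best fit adopted in the paper:

```python
import math

# Tabulating the CMB lensing kernel W(z) of Eq. (2) for a flat LCDM background.
# Parameter values are illustrative, not the paper's exact Planck best fit.
H0 = 67.7          # Hubble constant, km/s/Mpc
Om = 0.31          # matter density parameter
c = 299792.458     # speed of light, km/s
z_cmb = 1100.0     # redshift of last scattering

def H(z):
    return H0 * math.sqrt(Om * (1 + z)**3 + (1 - Om))

def chi(z, n=2000):
    # comoving distance in Mpc: trapezoidal integration of c/H(z')
    if z <= 0:
        return 0.0
    zs = [z * i / n for i in range(n + 1)]
    f = [c / H(zz) for zz in zs]
    return (z / n) * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

chi_cmb = chi(z_cmb)

def W(z):
    # Eq. (2): the kernel vanishes at z = 0 and at the last scattering surface
    return 1.5 * H0**2 * Om / (c * H(z)) * (1 + z) * chi(z) * (1 - chi(z) / chi_cmb)
```

The kernel vanishes at $z = 0$ and at the last scattering surface, and is broad at intermediate redshifts, which is why it overlaps well with the quasar redshift distribution.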
The quasar overdensity field is related to the matter overdensity field by a window function $f(z)$, such that the projected surface density is $q(\hat{n}) = \int_0^{z_{\textrm{\tiny CMB}}} dz f(z) \delta_m(\chi(z)\hat{n}, z)$ \citep{Peiris2000}:
\begin{ceqn}
\begin{equation}
\label{eq:fz}
f(z)= \frac{b(z) dN/dz}{\int dz' \frac{dN}{dz'}} + \frac{3}{2 H(z)} \Omega_0 H_0^2 (1+z) g(z) (5s - 2).
\end{equation}
\end{ceqn}
In the previous equation, the first term is the normalized, bias-weighted redshift distribution of the quasars. The second term is the magnification bias, which accounts for the change in the density of the sources due to lensing magnification \citep{Moessner1997, Scranton2005}. This term is negligible compared to the intrinsic clustering of the quasars, and for this reason we ignore it for simplicity. For a full expression of $g(z)$, see \citet{Peiris2000, Sherwin2012}.
On large scales, we expect the quasar bias to be a constant. On smaller scales, however, the scale-dependence of the bias has been supported by many measurements and it is predicted by theory \citep{Amendola2017, Giusarma2018}.
Many bias models have been proposed. We will consider an effective power law parametrization of the scale dependence of the bias:
\begin{ceqn}
\begin{equation}
\label{eq:scale-de}
b (k) = b_1 + b_2 \left( \frac{k}{k_0} \right)^n
\end{equation}
\end{ceqn}
where $k_0$ is an arbitrary reference scale we set to be $1 h \textrm{ Mpc}^{-1}$, such that $b_2$ is a dimensionless parameter \citep{Amendola2017}. The case $n = 0$ corresponds to a scale-independent bias.
\citet{Desjacques2016} and \citet{Modi2017} reported an $n = 2$ behavior at scales $0.1 \lesssim k \lesssim 0.5h~\textrm{Mpc}^{-1}$ for the linear halo bias, based on results from N-body simulations. We will test this form for the scale-dependent quasar bias.
If the selection functions of the dark matter tracers are slowly varying compared to the scale we are probing, the Limber approximation \citep{Limber1954, Lewis2006} is expected to be valid at $\ell \gtrsim 30$. Assuming a flat universe, the quasar-CMB lensing convergence angular cross-power spectrum is given by:
\begin{ceqn}
\begin{equation}
\label{eq:cl}
C_l^{\kappa q} = \int \frac{dz}{c} \frac{H(z)}{\chi^2(z)} W(z) f(z) P_{mm}\left(k=\frac{l}{\chi(z)}, z\right)
\end{equation}
\end{ceqn}
where $f(z)$ is the bias-weighted redshift distribution, and $P_{mm}(k, z)$ is the 3D matter power spectrum.
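The structure of the Limber integral in Equation~\ref{eq:cl} can be sketched as follows. The matter power spectrum here is a crude broken power law standing in for CAMB/HALOFIT, and $dN/dz$ is a Gaussian mock of Fig.~\ref{fig:fz}; both are illustrative placeholders, so only the form of the integral (and the linear dependence on $b_q$) follows the text:

```python
import math

# Toy evaluation of the Limber integral, Eq. (5). P_mm and dN/dz below are
# illustrative placeholders (not CAMB/HALOFIT or the real selection function).
H0, Om, c = 67.7, 0.31, 299792.458   # km/s/Mpc, -, km/s (illustrative values)
z_cmb = 1100.0

def H(z):
    return H0 * math.sqrt(Om * (1 + z)**3 + (1 - Om))

def chi(z, n=400):
    # comoving distance in Mpc (trapezoidal rule)
    if z <= 0:
        return 0.0
    zs = [z * i / n for i in range(n + 1)]
    f = [c / H(zz) for zz in zs]
    return (z / n) * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

chi_cmb = chi(z_cmb, n=4000)

def W(z):
    # CMB lensing kernel, Eq. (2)
    return 1.5 * H0**2 * Om / (c * H(z)) * (1 + z) * chi(z) * (1 - chi(z) / chi_cmb)

def dNdz(z):
    # mock quasar selection function peaked near z ~ 1.5
    return math.exp(-0.5 * ((z - 1.5) / 0.4)**2)

def P_mm(k, z):
    # placeholder spectrum: rises ~k below k_eq, falls ~k^-2 above (arbitrary amplitude)
    keq = 0.015
    return 2e4 * (k / keq) / (1 + (k / keq)**3)

def cl_kq(l, b_q=2.4, nz=60):
    # C_l^{kq} = int dz (H/c) W(z) f(z) P(l/chi, z) / chi^2, with
    # f(z) = b_q dN/dz / int dN/dz (magnification bias neglected, as in the text)
    z_lo, z_hi = 0.9, 2.2
    dz = (z_hi - z_lo) / nz
    zs = [z_lo + dz * (i + 0.5) for i in range(nz)]
    norm = sum(dNdz(z) for z in zs) * dz
    total = 0.0
    for z in zs:
        x = chi(z)
        total += dz * (H(z) / c) / x**2 * W(z) * (b_q * dNdz(z) / norm) * P_mm(l / x, z)
    return total
```

Because the bias enters only through $f(z)$, doubling $b_q$ doubles the predicted cross-spectrum, which is the property exploited in the amplitude fit later in the paper.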
An advantage of the cross-correlation between CMB lensing and quasars over the quasar auto-correlation is that the quasar clustering--matter cross power spectrum depends linearly on the quasar bias, through the bias-weighted redshift distribution. Moreover, measuring this cross-correlation in addition to the auto-correlation of quasars helps break the degeneracy between the quasar bias $b_q$ and the amplitude of fluctuations\footnote{This is because the auto-correlation measures $b_q^2 \sigma_8^2$, while the cross-correlation is proportional to $b_q \sigma_8^2$.} $\sigma_8$, thus improving our constraints on $\sigma_8$. The cross-correlation is also less likely to be affected by systematics in the quasar sample \citep{Sherwin2012, Geach2013, 2012ApJ...753L...9B}.
\begin{figure}
\includegraphics[width=\columnwidth]{cl}
\caption{The CMB lensing-quasar overdensity angular cross-power spectrum. The data points are in orange, and the blue solid curve is the calculated theory curve. The significance of the cross-power spectrum signal is 5.4$\sigma$.}
\label{fig:cl}
\end{figure}
\section{Data and Methods}
\label{sec:methods}
\subsection{CMB lensing map}
We use the CMB lensing convergence map published by the Planck Collaboration \citep{Planck2015-lensing}. The \textit{Planck} satellite, which was launched in 2009, observed the temperature and polarization fields of the cosmic background radiation over the whole sky at various frequencies.
Maps of the temperature and polarization fields of the CMB covering 70\% of the sky are produced \citep{Planck2015-overview}. The Planck minimum-variance CMB lensing potential field is reconstructed using the CMB maps produced by the SMICA code, and combines the five quadratic estimators of the correlations of the CMB temperature ($T$) and polarizations ($E, B$). The map underwent several systematic and null tests, which showed that any contamination is small compared to the statistical errors.
\subsection{Quasar map}
We use quasars from the extended-Baryon Oscillation Spectroscopic Survey (eBOSS, \citet{Dawson2016, Zhao2016}), which started in July 2014, as an extension to the Baryon Oscillation Spectroscopic Survey (BOSS) \citep{Dawson2013}. BOSS probed the BAO at a scale of roughly 100 $h^{-1}$ Mpc, using mostly galaxies at $z < 0.7$ and neutral hydrogen clouds in the Lyman-$\alpha$ forest at $z > 2.1$.
eBOSS aims to probe four different dark matter tracers at redshift ranges that are not covered in previous surveys, and to map the large scale structures over the redshift range $0.6 < z < 2.2$, which was left unconstrained by BOSS. The full eBOSS quasar catalog \citep{Myers2015} is expected to contain 500,000 spectroscopically-confirmed quasars over an area of 7500 deg$^2$ by the end of the survey and provide the first BAO distance measurement over the range $0.9 < z < 2.2$. The eBOSS quasars will also provide tests of General Relativity on cosmological scales through measurements of the redshift-space distortion, and new constraints on the summed mass of all known neutrino species.
We use the quasars from the eBOSS DR14 LSS catalog\footnote{\url{https://data.sdss.org/sas/ebosswork/eboss/lss/catalogs/catalogs-DR14/}} \citep{Myers2015}, which contains 142,017 quasars between $0.9 < z < 2.2$ and has an effective redshift of 1.51. The redshift distribution of the selected quasars is shown in Fig.~\ref{fig:fz}. We construct an overdensity map ($q_i = \frac{n_i - \bar{n}}{\bar{n}}$, where $i$ is the pixel number) of these quasars. The map is converted into HEALPix format with $N_{\textrm{side}} = 2048$ to match the resolution of the CMB lensing convergence map. We find the quasar footprint by downgrading the resolution of the quasar map to $N_{\textrm{side}} = 32$ and identifying the empty pixels in the map.
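The overdensity-map construction $q_i = (n_i - \bar{n})/\bar{n}$ can be sketched as follows. A real analysis would use healpy pixel indices at $N_{\textrm{side}} = 2048$; here a flat array of pixel counts with a boolean footprint mask stands in for the HEALPix map:

```python
import numpy as np

# Sketch of the overdensity map q_i = (n_i - nbar)/nbar. A flat array of pixel
# counts with a boolean footprint mask stands in for the HEALPix map; the mock
# counts are Poisson draws with an arbitrary mean density.
rng = np.random.default_rng(0)
npix = 10_000
footprint = np.zeros(npix, dtype=bool)
footprint[: npix // 4] = True                    # "survey" covers 25% of the sky
counts = np.zeros(npix)
counts[footprint] = rng.poisson(5.0, footprint.sum())

nbar = counts[footprint].mean()                  # mean density inside the footprint
q = np.zeros(npix)
q[footprint] = counts[footprint] / nbar - 1.0    # overdensity inside the footprint
```

By construction the overdensity averages to zero inside the footprint and is left at zero outside it, which is what the downgraded-resolution footprint identification in the text provides.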
\begin{figure}
\includegraphics[width=\columnwidth]{fid}
\caption{The fiducial bias-redshift model used in the calculation, obtained by interpolating the data points in \citet{Shen2009}. The paper also provides estimates of the error in the quasar bias, which are shown as error bars in the plot. The dashed line in orange is the interpolated result.}
\label{fig:fid}
\end{figure}
\subsection{Estimator for the angular power spectrum}
We use a pseudo-$C_l$ estimator \citep{Wandelt2000} to calculate the angular cross-power spectrum from the data:
\begin{ceqn}
\begin{equation}
\hat{C}_l^{\kappa q}=\frac{1}{f_{\textrm{sky}}^{\kappa q}(2l + 1)}\sum_{m=-l}^{l}{\kappa_{lm}^* q_{lm}}
\end{equation}
\end{ceqn}
where $f_{\textrm{sky}}^{\kappa q}$ is the fraction of the sky shared by the quasar map and the CMB lensing convergence map. $\kappa_{lm}$ is the spherical harmonic transform of the CMB lensing convergence map, and $q_{lm}$ is the spherical harmonic transform of the quasar overdensity map.
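The role of the $f_{\textrm{sky}}$ factor in the pseudo-$C_l$ estimator can be illustrated on a 1D periodic Fourier analogue: masking a field suppresses its raw power by roughly the observed fraction, and dividing by the sky fraction restores the broadband amplitude. This sketch only illustrates the normalization; the actual estimator acts on spherical-harmonic coefficients (e.g. via healpy):

```python
import numpy as np

# 1D Fourier analogue of the pseudo-C_l f_sky correction. The "sky" is a
# periodic line, the "mask" observes half of it; this is an illustration of the
# normalization only, not the spherical-harmonic estimator itself.
rng = np.random.default_rng(1)
n = 4096
field = rng.normal(size=n)        # white field: true power ~ 1 per mode
mask = np.zeros(n)
mask[: n // 2] = 1.0              # the "survey" observes half the sky
f_sky = mask.mean()

def power(x):
    # per-mode power of a real field via the real FFT
    fk = np.fft.rfft(x)
    return np.abs(fk)**2 / len(x)

full_p = power(field).mean()                 # full-sky broadband power
cut_p = power(field * mask).mean() / f_sky   # pseudo-C_l style f_sky correction
```

After the $1/f_{\textrm{sky}}$ correction, the cut-sky broadband power agrees with the full-sky value to within the expected sample scatter.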
In the Fisher approximation, the theoretical error in each bin $A$ of $\hat{C_l}^{\kappa q}$ can be estimated using \citep{Cabre2008}
\begin{ceqn}
\begin{equation}
\label{eq:error}
\frac{1}{\sigma^2(A)}= \sum_{l_{\textrm{min}}(A) < l < l_{\textrm{max}}(A)} \frac{f_{\textrm{sky}}^{\kappa q}(2l + 1)}{(C_l^{\kappa q})^2+C_l^{\kappa \kappa} C_l^{q q}}
\end{equation}
\end{ceqn}
where $C_l^{\kappa \kappa}$ and $C_l^{q q}$ are the CMB lensing and quasar auto-power spectra, including both signal and noise. The contribution of error from the $C_l^{\kappa \kappa} C_l^{qq}$ term should dominate the contribution from the cross term. The auto-spectra can be estimated similarly:
\begin{ceqn}
\begin{equation}
\hat{C}_l^{\kappa \kappa}=\frac{1}{f_{\textrm{sky}}^{\kappa}(2l + 1)}\sum_{m=-l}^{l}{|\kappa_{lm}|^2}
\end{equation}
\end{ceqn}
and
\begin{ceqn}
\begin{equation}
\hat{C}_l^{q q}=\frac{1}{f_{\textrm{sky}}^{q}(2l + 1)}\sum_{m=-l}^{l}{|q_{lm}|^2}
\end{equation}
\end{ceqn}
where $f_{\textrm{sky}}^{\kappa}$ is the sky fraction of the CMB lensing convergence map, and $f_{\textrm{sky}}^{q}$ is the sky fraction of the quasar overdensity map. We bin the cross-power spectrum into 15 bands in the range $30 \le \ell < 1200$. We choose $\ell_{\textrm{min}} = 30$ because the Limber approximation breaks down on larger scales. We choose $\ell_{\textrm{max}} = 1200$ because of the uncertainty on modeling the bias and power spectrum on smaller scales. The signal-to-noise also drops significantly for $\ell > \ell_\textrm{max}$ \citep{Lewis2006,Kirk2015}.
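The per-band error bars of Equation~\ref{eq:error} and the $30 \le \ell < 1200$ banding can be sketched as follows, with smooth toy spectra (made-up amplitudes) standing in for the measured auto- and cross-spectra:

```python
import math

# Per-band error bars from the Fisher expression of Eq. (7). The three spectra
# below are smooth toys with made-up amplitudes, not the measured C_l's.
f_sky = 0.15                                   # shared sky fraction (illustrative)

def cl_kq(l): return 5e-8 * (100.0 / l)        # toy cross spectrum
def cl_kk(l): return 2e-7 * (100.0 / l)**0.5   # toy lensing auto (signal + noise)
def cl_qq(l): return 1e-5                      # toy quasar auto, shot-noise dominated

def band_sigma(l_min, l_max):
    # inverse-variance sum over multipoles in the band, as in Eq. (7)
    inv_var = sum(
        f_sky * (2 * l + 1) / (cl_kq(l)**2 + cl_kk(l) * cl_qq(l))
        for l in range(l_min, l_max)
    )
    return 1.0 / math.sqrt(inv_var)

edges = [30 + 78 * i for i in range(16)]       # 15 equal bands over 30 <= l < 1200
sigmas = [band_sigma(a, b) for a, b in zip(edges[:-1], edges[1:])]
```

With these toy spectra the band errors shrink toward higher $\ell$, reflecting the growing number of modes per band; in the real data the lensing reconstruction noise eventually reverses this trend, which motivates the $\ell_{\textrm{max}} = 1200$ cut.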
\section{Results}
\label{sec:results}
\subsection{Cross-correlation}
The cross-correlation results are shown in Fig.~\ref{fig:cl}. The theoretical curve is calculated using Equation~\ref{eq:cl}. We use the redshift distribution in Fig.~\ref{fig:fz} and the CMB lensing kernel in Equation~\ref{eq:wz}. The sample variance fluctuations are of order $\sim10\%$ per bin in redshift, due to the large number of quasars in the bin. We use the full $dN/dz$ from Figure \ref{fig:fz} in the theory calculation, which yields a smooth result since it is integrated over a broad kernel. The theory curve should not be sensitive to binning and interpolation, since the weighting functions are slowly varying with redshift. We use CAMB \citep{Lewis2000} to compute the matter power spectrum. The nonlinear matter power spectrum (HALOFIT, \citet{Smith2003, Takahashi2012}) is used in this calculation. The linear matter power spectrum produces similar results because the signal mainly comes from angular scales ($\ell < 600$) corresponding to linear scales.
We assume a fiducial bias-redshift model from \citet{Shen2009} in the theory calculation, shown in Fig.~\ref{fig:fid}. The fiducial bias model is based on the amplitude of the quasar correlation function from the SDSS DR5 quasar sample. We fit a scaled version of the fiducial bias-redshift relation to the data to find the best-fit scaling parameter ($b_q/b_\textrm{fid}$), and find that the scaled theory curve fits the data well. With 14 degrees of freedom, the chi-squared value for the best-fit theory curve is $\chi^2_{\textrm{th}} = 12.9$. The significance of the cross-correlation is $\sqrt{\chi^2_0 - \chi^2_{\textrm{th}}} = 5.4\sigma$, where $\chi^2_0$ is the chi-squared value for the null hypothesis. For this one-parameter fit, the detection significance equals the best-fit scaling parameter divided by its error.
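The amplitude fit and the significance identity quoted above can be checked on mock data: for a one-parameter linear model, $\sqrt{\chi^2_0 - \chi^2_{\textrm{th}}}$ equals the best-fit amplitude divided by its error in closed form. A sketch with illustrative numbers (15 bands, diagonal errors):

```python
import numpy as np

# One-parameter amplitude fit, as used for b_q/b_fid. For d = A*t + noise with
# diagonal errors, A, sigma(A), and the identity
# sqrt(chi2_null - chi2_bestfit) = |A|/sigma(A) follow in closed form.
# Template, errors, and true amplitude below are all illustrative.
rng = np.random.default_rng(2)
t = np.linspace(1.0, 0.1, 15)        # toy theory template per band
sigma = np.full(15, 0.12)            # toy per-band errors
d = 1.0 * t + rng.normal(0, sigma)   # mock data with true amplitude A = 1

w = 1.0 / sigma**2                   # inverse-variance weights
A = np.sum(w * t * d) / np.sum(w * t**2)
sigma_A = 1.0 / np.sqrt(np.sum(w * t**2))

chi2_null = np.sum(w * d**2)
chi2_best = np.sum(w * (d - A * t)**2)
snr = np.sqrt(chi2_null - chi2_best)  # equals |A| / sigma_A for a linear model
```

The identity holds because $\chi^2_0 - \chi^2_{\textrm{th}} = A^2 \sum w t^2 = (A/\sigma_A)^2$ for a linear one-parameter model, which is why the paper can quote the detection significance either way.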
All points are included in the model fits to the data. Despite the theory curve being a good fit to the data points, the first bin is more than $1 \sigma$ from the theoretical prediction and shows an anti-correlation between the CMB lensing map and the quasar overdensity, albeit having a large uncertainty. \citet{Giannantonio2016}, which uses the CMB lensing data from the South Pole Telescope \citep{Story2015}, and \citet{Pullen2016} also reported a deficit of power in the low $\ell$ region of the CMB lensing-galaxy angular cross power spectrum. We do not have an explanation for the cause of this deficit of power. However, we can rule out a list of systematics, described in detail later in Section \ref{sec:error}, as causes of this deficit.
\subsection{Quasar bias}
The fiducial bias-redshift model used in the calculation is obtained by interpolating the data in \citet{Shen2009}. Although it uses a different quasar catalog than the one in our analysis, we choose this as a convenient model because the theoretical cross-power spectrum does not have a strong dependence on the detailed form of the bias model \citep{Sherwin2012}. The fiducial model is shown in Fig.~\ref{fig:fid}. From this model we find $b_q/b_{\textrm{fid}} = 1.01 \pm 0.19$. At the effective redshift of our quasar sample ($z \approx 1.51$), the fiducial model gives a bias $b_{\textrm{fid}} = 2.4$. Combining these results we get a quasar linear bias of $b_q = 2.43 \pm 0.45$, with $5.4 \sigma$ significance.
We also fit for the scale-dependent bias in Equation~\ref{eq:scale-de}, by fixing $n$ at various values. Table~\ref{tab:scale-dep} shows some of the results. In the $n = 2$ case, we have $b_1 = 2.26 \pm 0.59$ and $b_2 = 13.0 \pm 33.3$. We conclude that the data does not yield a strong constraint on the scale dependent bias. This, however, could be due to the low number density of quasars in the survey. We expect that the better sensitivity and resolution of future surveys will allow better constraints on the scale dependence of both the bias and the matter power spectrum \citep{Abazajian2016}.
\begin{table}
\centering
\begin{tabular}{lccccr}\hline
$n$ & $b_1$ & $\sigma(b_1)$ & $b_2$ & $\sigma(b_2)$ & $\chi^2$ \\\hline
-2 & 2.80 & 0.50 & -0.0010 & 0.0006 & 11.0 \\
-1 & 3.53 & 0.81 & -0.067 & 0.041 & 11.2 \\
0 & 2.43 & 0.45 & - & - & 12.9 \\
1 & 1.85 & 0.89 & 6.37 & 8.71 & 13.3 \\
2 & 2.26 & 0.59 & 13.0 & 33.3 & 13.7 \\
\end{tabular}
\caption{\label{tab:scale-dep}Selected results for the scale-dependent bias fit. In the third row, $n = 0$ corresponds to a scale-independent bias.}
\end{table}
\subsection{Quasar host halo mass}
As shown in Fig.~\ref{fig:fid}, the quasar bias generally increases with redshift, and the bias is expected to increase with halo mass. However, at higher redshifts, the halos also have less time to grow. Therefore, we would expect a roughly constant halo mass-redshift relation.
We use the bias model provided in \citet{Tinker2010} to relate the scale-independent quasar bias to the peak height of the linear density field, $\nu = \frac{\delta_c}{\sigma(M)}$, where $\delta_c = 1.686$ is the critical overdensity for collapse, and calculate a corresponding characteristic host halo mass. We assume the ratio between the halo mass density and the average matter density of universe is $\Delta = 200$. We find the characteristic host halo mass to be $\log_{10}\left( \frac{M}{ h^{-1} M_\odot} \right) = 12.54^{+0.25}_{-0.36}$. This is consistent with previous estimates for BOSS/eBOSS quasars at similar redshifts \citep{White2012, Laurent2017}.
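The step from the measured bias to the peak height can be sketched by numerically inverting a $b(\nu)$ fitting function. The coefficients below are transcribed from \citet{Tinker2010} for $\Delta = 200$ and should be checked against that paper before use; the final conversion $\sigma(M) = \delta_c/\nu$ to a halo mass additionally requires $\sigma(M)$ from the linear power spectrum, which is not computed here:

```python
import math

# Inverting a halo bias relation b(nu) to recover the peak height nu from the
# measured quasar bias. The fitting form and Delta = 200 coefficients are
# transcribed from Tinker et al. (2010) and should be verified against that
# paper; the last step, solving sigma(M) = delta_c/nu for M, needs sigma(M)
# from the linear power spectrum and is not done here.
delta_c = 1.686
y = math.log10(200.0)
A_ = 1.0 + 0.24 * y * math.exp(-(4.0 / y)**4)
a_ = 0.44 * y - 0.88
B_, b_ = 0.183, 1.5
C_ = 0.019 + 0.107 * y + 0.19 * math.exp(-(4.0 / y)**4)
c_ = 2.4

def bias(nu):
    return 1.0 - A_ * nu**a_ / (nu**a_ + delta_c**a_) + B_ * nu**b_ + C_ * nu**c_

def nu_from_bias(b_target, lo=0.5, hi=10.0):
    # bias(nu) crosses b_target once on this interval, so bisection suffices
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if bias(mid) < b_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu_q = nu_from_bias(2.43)   # peak height matching the measured b_q
# sigma(M) = delta_c / nu_q would then be solved for the halo mass M
```

With these coefficients a bias of $b_q \approx 2.4$ corresponds to a peak height of order $\nu \approx 2$, in line with the $\sim 10^{12.5}\,h^{-1}M_\odot$ characteristic mass quoted above.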
\begin{figure}
\includegraphics[width=\columnwidth]{sys}
\caption{Check for possible systematic effects on the cross power spectrum due to contaminants. Here we show the right hand side of Equation \ref{eq:systematics} for different foregrounds. The dust plot is the bias due to dust emission. The SZ plot is the bias due to the Planck SZ catalog. The 100, 143, and 217 plots are the biases from the \textit{Planck} Catalog of Compact Sources, corresponding to the labeled frequency. The GCC plot is the bias from the Planck Galactic cold clumps.}
\label{fig:sys}
\end{figure}
\section{Measurement systematics and uncertainties}
\label{sec:error}
\subsection{Systematic effects}
Residual foregrounds in the CMB map that are correlated with the large scale structure probed by the eBOSS quasars can lead to biases to the CMB lensing cross-correlation \citep{2014ApJ...786...13V, 2014JCAP...03..024O, 2018PhRvD..97b3512F,Pullen2016}. Mitigation strategies have been proposed \citep{2018arXiv180208230M, 2018arXiv180406403S}, and based on previous work we expect the bias to cross-correlations with Planck lensing to be at most a few percent, considerably smaller than our statistical significance.
Nonetheless, we check for contamination from galactic dust emission, point sources, and SZ effect. We use the Second Planck SZ Catalog \citep{Planck2015-sz}, which includes sources detected through the SZ effect \citep{Sunyaev1980}, the \citet{Schlegel1998} dust infrared emission map for estimation of CMB radiation foregrounds, the \textit{Planck} Catalog of Galactic cold clumps \citep{Planck2015-gcc}, and the overdensity maps constructed from the Second \textit{Planck} Catalog of Compact Sources \citep{Planck2015-ccs} at frequencies 100 GHz, 143 GHz, and 217 GHz.
If systematic effects were added linearly to the observed CMB lensing map and quasar map \citep{Ross2011, Ho2012}, the bias to the cross correlation would be given by \citep{Giannantonio2016}:
\begin{ceqn}
\begin{equation}
\label{eq:systematics}
\Delta \hat{C}_{l}^{\kappa q} = \sum_s \frac{\hat{C}_{l}^{\kappa s} \hat{C}_{l}^{q s}}{\hat{C}_{l}^{s s}}
\end{equation}
\end{ceqn}
where $s$ is the map for the systematics. In Equation \ref{eq:systematics}, we estimate the amplitude of the systematic $s$ for each data set by cross-correlating the data and the systematic template, and propagate these to the bias in the observed cross power spectrum. Although the lensing map is obtained though non-linear operations on the CMB map, and therefore the assumption of linearity is not satisfied, estimating the quantity above is still a powerful null-test. If significant contamination was found, Equation \ref{eq:systematics} should not be used to correct for the bias, but more sophisticated mitigation techniques should be employed \citep{2018arXiv180406403S, 2018arXiv180208230M, 2014JCAP...03..024O}.
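Equation~\ref{eq:systematics} amounts to accumulating, for each template $s$, the product of its measured cross-spectra with the two maps divided by its auto-spectrum; the overall systematic error can then be summarized as an inverse-variance-weighted fraction of the signal, as in the text. A sketch with toy arrays (all amplitudes made up):

```python
import numpy as np

# Accumulating the linear-contamination bias of Eq. (11) over templates, then
# summarizing it as an inverse-variance-weighted fraction of the signal.
# Every spectrum here is a toy array with made-up amplitudes.
ells = np.arange(30, 1200, dtype=float)
cl_kq = 5e-8 * (100.0 / ells)            # toy measured cross spectrum
var = (2e-8 * (100.0 / ells)**0.5)**2    # toy per-ell variance of cl_kq

templates = {                            # (C^{kappa s}, C^{q s}, C^{s s}) per template
    "dust": (1e-9 / ells, 2e-8 / ells, 1e-6 / ells),
    "sz":   (5e-10 / ells, 1e-8 / ells, 2e-6 / ells),
}
delta_cl = sum(ks * qs / ss for ks, qs, ss in templates.values())

w = 1.0 / var                            # inverse-variance weights per ell
sys_fraction = np.sum(w * np.abs(delta_cl)) / np.sum(w * np.abs(cl_kq))
```

In the paper the analogous weighted summary comes out below 7\% of the signal, small compared to the statistical errors.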
Fig.~\ref{fig:sys} shows the right hand side of Equation \ref{eq:systematics}. The effects are consistent with null at most scales and we conclude that there are no significant systematic effects due to the contaminants considered above. We calculate the overall systematic error by adding the average absolute biases at each angular scale, weighted by the inverse variance, and find it to be less than 7\% of the signal.
\subsection{Null test}
We use a simple null test \citep{Sherwin2012, Geach2013} on the CMB lensing-quasar overdensity cross power spectrum to check our result and procedure by cross-correlating the CMB lensing convergence map on one part of the sky with the quasar map on another part of the sky. The result of the null test is shown in Fig.~\ref{fig:null}. Most bins of the null cross-spectrum fall within $1\sigma$ of null, and fitting the theoretical spectrum to the null result yields a bias measurement of $b / b_\textrm{fid} = -0.005 \pm 0.003$. The best fit chi-square value for the null hypothesis is 11.23, with 14 degrees of freedom. The distribution of the points is consistent with a Gaussian centered at zero.
\begin{figure}
\includegraphics[width=\columnwidth]{null}
\caption{The cross-power spectrum from the null test. The error bars are obtained in the same way as before (Equation \ref{eq:error}). The blue curve is the best-fit theoretical cross-power spectrum. The zoomed-in subplots show points more than $1 \sigma$ from null. The result is consistent with zero correlation.}
\label{fig:null}
\end{figure}
\subsection{Covariance matrix}
\label{cov-matrix}
The theoretical error bar for each bin is calculated using Equation~\ref{eq:error}, which assumes the bins are independent. Limited sky fraction may induce correlation between $\ell$ bins, and this assumption is only valid when the bins are relatively large ($\Delta\ell \gtrsim 2/f_{\textrm{sky}} $) \citep{Gaztanaga2012, Cabre2008}. In our case, the bins are large and should be roughly independent in the limit of large $\Delta \ell$.
To compute the full covariance matrix of $\hat{C}_l^{\kappa q}$, we use quasar mocks and CMB lensing simulations. The quasar mocks are taken from the QSO EZmocks (effective Zel'dovich approximation mock catalogs) \citep{Chuang2014}, which include 1000 realizations of the quasar map with the same number of randomly distributed sources. The CMB lensing simulations include 100 realizations of simulated lensing convergence maps \citep{Planck2015-lensing} containing both signal and noise. We cross correlate 100 pairs of the quasar mocks and lensing simulations, and calculate the average of the cross power spectra, to estimate the covariance matrix $\textrm{cov}[i, j] = \langle (C(i) - E(C(i)))(C(j) - E(C(j))) \rangle$. Note that the covariance estimated via this route does not include the $C_l^{\kappa q}$ part in Equation \ref{eq:error}, because the quasar mocks are not correlated with the CMB lensing simulations.
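The mock-based covariance estimate can be sketched as follows: cross-spectra of independent realizations (so the $C_l^{\kappa q}$ signal term is absent, as noted above) are collected into a matrix, and their scatter gives $\textrm{cov}[i,j]$. The band-power scatter below is a toy stand-in for the 100 mock cross-spectra:

```python
import numpy as np

# Covariance of the band powers estimated from mock realizations. Since the
# mock "quasar" and "lensing" fields are independent, the cross-spectra scatter
# around zero; the per-band scatter below is a toy model of Eq. (7) behavior.
rng = np.random.default_rng(3)
n_real, n_bands = 100, 15
scatter = 1.0 / np.sqrt(np.arange(1, n_bands + 1))   # errors shrink with ell
spectra = rng.normal(0.0, scatter, size=(n_real, n_bands))

cov = np.cov(spectra, rowvar=False)                  # (15, 15) covariance estimate
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
```

With 100 realizations the off-diagonal correlation estimates carry a sampling scatter of roughly $1/\sqrt{100} = 0.1$, so small off-diagonal entries are expected even for truly independent bands.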
The off-diagonal elements of the covariance matrix are small compared to the on-diagonal elements (Fig.~\ref{fig:cov}), and the diagonal elements mostly agree with the theoretical values, calculated using Equation~\ref{eq:error}. In both the theoretically predicted error and the covariance matrix, the error in the cross power spectrum decreases with increasing $\ell$ for $\ell < 1200$. The shot noise of the quasars contributes a constant amount to the power spectrum error at all scales. On smaller scales, the error increases again, due to reconstruction noise in the lensing map.
The central value and the uncertainty of the bias estimate change slightly when we use the full covariance matrix, which gives a bias of $2.42 \pm 0.44$ with a significance of $5.4 \sigma$ and $\chi^2_{\textrm{th}} = 13.9$ for 14 degrees of freedom.
\section{Conclusions}
\label{sec:conclusions}
We studied the cross-correlation between the Planck CMB lensing convergence map and the eBOSS DR14 quasar map in the redshift range $0.9 < z < 2.2$, with an effective redshift of $z_{\textrm{eff}} \approx 1.51$, and measured the quasar bias. We found a correlation between CMB lensing and the eBOSS quasars, and a quasar bias $b_q = 2.43 \pm 0.45$ at $5.4\sigma$ significance, using the theoretically calculated covariance matrix. This is consistent with the result in \citet{Laurent2017}. We obtained the covariance matrix from the quasar mocks and lensing simulations, and found it consistent with the theoretical covariance matrix. While the theory curve is a good fit on most scales, the first bin shows a low cross-correlation between CMB lensing and quasar clustering. The origin of this deficit of power at low $\ell$ is not known at present.
We performed a simple null test for the cross power spectrum, and the result is mostly consistent with null, with the exception of two low-$\ell$ bins and one near $\ell_{\textrm{max}}$. We checked for several systematics and found no significant contributions from the considered contaminants.
Using the \citet{Tinker2010} model of the relation between the halo mass function and clustering, we calculated a characteristic host halo mass for the eBOSS DR14 quasar catalog: $\log_{10}\left( \frac{M_{200}}{1 h^{-1} M_\odot} \right) = 12.54^{+0.25}_{-0.36}$. This is consistent with previous estimates of the quasar host halo mass at similar redshifts \citep{White2012, Laurent2017}. We also attempted to fit for a scale-dependent bias, but did not find evidence for a scale-dependent term.
The significance and accuracy of the quasar bias measurement depend on the sample size and number density of the quasar survey \citep{Seljak2009}, so we would expect the signal-to-noise ratio of the detection using this method to improve as eBOSS continues to expand its sample size \citep{Dawson2016} and new surveys such as DESI \citep{DESI2016} and Euclid \citep{Euclid2011} become operational. This will provide more precise measurements of the quasar bias as a function of redshift, scale, etc., and open paths to a better understanding of the various properties of quasars, including the host halo mass and duty cycle \citep{Martini2001}. The improved bias measurements could also provide good constraints on galaxy formation models \citep{Contreras2013}, general relativity and modified gravity \citep{Acquaviva2008}, and the properties of dark matter and dark energy \citep{Das2009}.
\section*{Acknowledgements}
We thank Siyu He, Anthony Pullen, Emmanuel Schaan, Blake Sherwin, Jeremy Tinker and Michael Wilson for helpful discussions. J.H. would like to thank Stephen Ebert and Pavel Motloch for helpful comments on the paper.
This work is based on observations made by the Planck satellite and the Apache Point Observatory. The Planck project (\url{http://www.esa.int/Planck}) is funded by the member states of ESA, and NASA. The SDSS-IV project (\url{http://www.sdss.org/}) is funded by the participating institutions, the National Science Foundation, the United States Department of Energy, and the Alfred P. Sloan Foundation. S.F. is funded by a Miller Fellowship at the University of California, Berkeley.
S.H. thanks NASA for their support in grant number: NASA grant 15-WFIRST15-0008 and NASA ROSES grant 12-EUCLID12-0004. E.G. is supported by NSF grant AST1412966 and by the Simons Foundation through the Flatiron Institute.
\begin{figure}
\includegraphics[width=\columnwidth]{cov}
\caption{The normalized covariance matrix of the angular cross power spectrum, $\textrm{cov}[i, j] / \sqrt{\textrm{cov}[i, i]\,\textrm{cov}[j, j]}$, where $i$ and $j$ label the bins.}
\label{fig:cov}
\end{figure}
\section*{Affiliation notes}
$^{\rm I}$ Deceased\\
$^{\rm II}$ Also at: Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Bologna, Italy\\
$^{\rm III}$ Also at: Dipartimento DET del Politecnico di Torino, Turin, Italy\\
$^{\rm IV}$ Also at: M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia\\
$^{\rm V}$ Also at: Institute of Theoretical Physics, University of Wroclaw, Poland\\
\section*{Collaboration Institutes}
$^{1}$ A.I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia\\
$^{2}$ AGH University of Science and Technology, Cracow, Poland\\
$^{3}$ Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, Kiev, Ukraine\\
$^{4}$ Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India\\
$^{5}$ Budker Institute for Nuclear Physics, Novosibirsk, Russia\\
$^{6}$ California Polytechnic State University, San Luis Obispo, California, United States\\
$^{7}$ Central China Normal University, Wuhan, China\\
$^{8}$ Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba\\
$^{9}$ Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico\\
$^{10}$ Chicago State University, Chicago, Illinois, United States\\
$^{11}$ China Institute of Atomic Energy, Beijing, China\\
$^{12}$ Chungbuk National University, Cheongju, Republic of Korea\\
$^{13}$ Comenius University Bratislava, Faculty of Mathematics, Physics and Informatics, Bratislava, Slovakia\\
$^{14}$ COMSATS University Islamabad, Islamabad, Pakistan\\
$^{15}$ Creighton University, Omaha, Nebraska, United States\\
$^{16}$ Department of Physics, Aligarh Muslim University, Aligarh, India\\
$^{17}$ Department of Physics, Pusan National University, Pusan, Republic of Korea\\
$^{18}$ Department of Physics, Sejong University, Seoul, Republic of Korea\\
$^{19}$ Department of Physics, University of California, Berkeley, California, United States\\
$^{20}$ Department of Physics, University of Oslo, Oslo, Norway\\
$^{21}$ Department of Physics and Technology, University of Bergen, Bergen, Norway\\
$^{22}$ Dipartimento di Fisica dell'Universit\`{a} 'La Sapienza' and Sezione INFN, Rome, Italy\\
$^{23}$ Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Cagliari, Italy\\
$^{24}$ Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Trieste, Italy\\
$^{25}$ Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Turin, Italy\\
$^{26}$ Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Bologna, Italy\\
$^{27}$ Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Catania, Italy\\
$^{28}$ Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Padova, Italy\\
$^{29}$ Dipartimento di Fisica e Nucleare e Teorica, Universit\`{a} di Pavia and Sezione INFN, Pavia, Italy\\
$^{30}$ Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a} and Gruppo Collegato INFN, Salerno, Italy\\
$^{31}$ Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy\\
$^{32}$ Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and INFN Sezione di Torino, Alessandria, Italy\\
$^{33}$ Dipartimento di Scienze MIFT, Universit\`{a} di Messina, Messina, Italy\\
$^{34}$ Dipartimento Interateneo di Fisica `M.~Merlin' and Sezione INFN, Bari, Italy\\
$^{35}$ European Organization for Nuclear Research (CERN), Geneva, Switzerland\\
$^{36}$ Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Split, Croatia\\
$^{37}$ Faculty of Engineering and Science, Western Norway University of Applied Sciences, Bergen, Norway\\
$^{38}$ Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic\\
$^{39}$ Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia\\
$^{40}$ Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany\\
$^{41}$ Fudan University, Shanghai, China\\
$^{42}$ Gangneung-Wonju National University, Gangneung, Republic of Korea\\
$^{43}$ Gauhati University, Department of Physics, Guwahati, India\\
$^{44}$ Helmholtz-Institut f\"{u}r Strahlen- und Kernphysik, Rheinische Friedrich-Wilhelms-Universit\"{a}t Bonn, Bonn, Germany\\
$^{45}$ Helsinki Institute of Physics (HIP), Helsinki, Finland\\
$^{46}$ High Energy Physics Group, Universidad Aut\'{o}noma de Puebla, Puebla, Mexico\\
$^{47}$ Hiroshima University, Hiroshima, Japan\\
$^{48}$ Hochschule Worms, Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Worms, Germany\\
$^{49}$ Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest, Romania\\
$^{50}$ Indian Institute of Technology Bombay (IIT), Mumbai, India\\
$^{51}$ Indian Institute of Technology Indore, Indore, India\\
$^{52}$ Indonesian Institute of Sciences, Jakarta, Indonesia\\
$^{53}$ INFN, Laboratori Nazionali di Frascati, Frascati, Italy\\
$^{54}$ INFN, Sezione di Bari, Bari, Italy\\
$^{55}$ INFN, Sezione di Bologna, Bologna, Italy\\
$^{56}$ INFN, Sezione di Cagliari, Cagliari, Italy\\
$^{57}$ INFN, Sezione di Catania, Catania, Italy\\
$^{58}$ INFN, Sezione di Padova, Padova, Italy\\
$^{59}$ INFN, Sezione di Roma, Rome, Italy\\
$^{60}$ INFN, Sezione di Torino, Turin, Italy\\
$^{61}$ INFN, Sezione di Trieste, Trieste, Italy\\
$^{62}$ Inha University, Incheon, Republic of Korea\\
$^{63}$ Institute for Gravitational and Subatomic Physics (GRASP), Utrecht University/Nikhef, Utrecht, Netherlands\\
$^{64}$ Institute for Nuclear Research, Academy of Sciences, Moscow, Russia\\
$^{65}$ Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia\\
$^{66}$ Institute of Physics, Homi Bhabha National Institute, Bhubaneswar, India\\
$^{67}$ Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic\\
$^{68}$ Institute of Space Science (ISS), Bucharest, Romania\\
$^{69}$ Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany\\
$^{70}$ Instituto de Ciencias Nucleares, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico\\
$^{71}$ Instituto de F\'{i}sica, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil\\
$^{72}$ Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico\\
$^{73}$ iThemba LABS, National Research Foundation, Somerset West, South Africa\\
$^{74}$ Jeonbuk National University, Jeonju, Republic of Korea\\
$^{75}$ Johann-Wolfgang-Goethe Universit\"{a}t Frankfurt Institut f\"{u}r Informatik, Fachbereich Informatik und Mathematik, Frankfurt, Germany\\
$^{76}$ Joint Institute for Nuclear Research (JINR), Dubna, Russia\\
$^{77}$ Korea Institute of Science and Technology Information, Daejeon, Republic of Korea\\
$^{78}$ KTO Karatay University, Konya, Turkey\\
$^{79}$ Laboratoire de Physique des 2 Infinis, Ir\`{e}ne Joliot-Curie, Orsay, France\\
$^{80}$ Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France\\
$^{81}$ Lawrence Berkeley National Laboratory, Berkeley, California, United States\\
$^{82}$ Lund University Department of Physics, Division of Particle Physics, Lund, Sweden\\
$^{83}$ Moscow Institute for Physics and Technology, Moscow, Russia\\
$^{84}$ Nagasaki Institute of Applied Science, Nagasaki, Japan\\
$^{85}$ Nara Women{'}s University (NWU), Nara, Japan\\
$^{86}$ National and Kapodistrian University of Athens, School of Science, Department of Physics, Athens, Greece\\
$^{87}$ National Centre for Nuclear Research, Warsaw, Poland\\
$^{88}$ National Institute of Science Education and Research, Homi Bhabha National Institute, Jatni, India\\
$^{89}$ National Nuclear Research Center, Baku, Azerbaijan\\
$^{90}$ National Research Centre Kurchatov Institute, Moscow, Russia\\
$^{91}$ Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark\\
$^{92}$ Nikhef, National institute for subatomic physics, Amsterdam, Netherlands\\
$^{93}$ NRC Kurchatov Institute IHEP, Protvino, Russia\\
$^{94}$ NRC \guillemotleft Kurchatov\guillemotright Institute - ITEP, Moscow, Russia\\
$^{95}$ NRNU Moscow Engineering Physics Institute, Moscow, Russia\\
$^{96}$ Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom\\
$^{97}$ Nuclear Physics Institute of the Czech Academy of Sciences, \v{R}e\v{z} u Prahy, Czech Republic\\
$^{98}$ Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States\\
$^{99}$ Ohio State University, Columbus, Ohio, United States\\
$^{100}$ Petersburg Nuclear Physics Institute, Gatchina, Russia\\
$^{101}$ Physics department, Faculty of science, University of Zagreb, Zagreb, Croatia\\
$^{102}$ Physics Department, Panjab University, Chandigarh, India\\
$^{103}$ Physics Department, University of Jammu, Jammu, India\\
$^{104}$ Physics Department, University of Rajasthan, Jaipur, India\\
$^{105}$ Physikalisches Institut, Eberhard-Karls-Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany\\
$^{106}$ Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany\\
$^{107}$ Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany\\
$^{108}$ Politecnico di Bari and Sezione INFN, Bari, Italy\\
$^{109}$ Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung GmbH, Darmstadt, Germany\\
$^{110}$ Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia\\
$^{111}$ Russian Federal Nuclear Center (VNIIEF), Sarov, Russia\\
$^{112}$ Saha Institute of Nuclear Physics, Homi Bhabha National Institute, Kolkata, India\\
$^{113}$ School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom\\
$^{114}$ Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru\\
$^{115}$ St. Petersburg State University, St. Petersburg, Russia\\
$^{116}$ Stefan Meyer Institut f\"{u}r Subatomare Physik (SMI), Vienna, Austria\\
$^{117}$ SUBATECH, IMT Atlantique, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France\\
$^{118}$ Suranaree University of Technology, Nakhon Ratchasima, Thailand\\
$^{119}$ Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia\\
$^{120}$ The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland\\
$^{121}$ The University of Texas at Austin, Austin, Texas, United States\\
$^{122}$ Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico\\
$^{123}$ Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil\\
$^{124}$ Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil\\
$^{125}$ Universidade Federal do ABC, Santo Andre, Brazil\\
$^{126}$ University of Cape Town, Cape Town, South Africa\\
$^{127}$ University of Houston, Houston, Texas, United States\\
$^{128}$ University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland\\
$^{129}$ University of Liverpool, Liverpool, United Kingdom\\
$^{130}$ University of Science and Technology of China, Hefei, China\\
$^{131}$ University of South-Eastern Norway, Tonsberg, Norway\\
$^{132}$ University of Tennessee, Knoxville, Tennessee, United States\\
$^{133}$ University of the Witwatersrand, Johannesburg, South Africa\\
$^{134}$ University of Tokyo, Tokyo, Japan\\
$^{135}$ University of Tsukuba, Tsukuba, Japan\\
$^{136}$ Universit\'{e} Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France\\
$^{137}$ Universit\'{e} de Lyon, CNRS/IN2P3, Institut de Physique des 2 Infinis de Lyon, Lyon, France\\
$^{138}$ Universit\'{e} de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, France\\
$^{139}$ Universit\'{e} Paris-Saclay Centre d'Etudes de Saclay (CEA), IRFU, D\'{e}partment de Physique Nucl\'{e}aire (DPhN), Saclay, France\\
$^{140}$ Universit\`{a} degli Studi di Foggia, Foggia, Italy\\
$^{141}$ Universit\`{a} di Brescia and Sezione INFN, Brescia, Italy\\
$^{142}$ Variable Energy Cyclotron Centre, Homi Bhabha National Institute, Kolkata, India\\
$^{143}$ Warsaw University of Technology, Warsaw, Poland\\
$^{144}$ Wayne State University, Detroit, Michigan, United States\\
$^{145}$ Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, Institut f\"{u}r Kernphysik, M\"{u}nster, Germany\\
$^{146}$ Wigner Research Centre for Physics, Budapest, Hungary\\
$^{147}$ Yale University, New Haven, Connecticut, United States\\
$^{148}$ Yonsei University, Seoul, Republic of Korea\\
\bigskip
\end{flushleft}
\endgroup
\section{Introduction}
Photonuclear reactions can be studied in ultra-peripheral collisions (UPCs) of heavy ions where the two projectiles pass each other with an impact parameter larger than the sum of their radii. In this case, purely hadronic interactions are suppressed and electromagnetically induced processes occur via photons with typically very small virtualities, of the order of tens of MeV$^2$. The intensity of the photon flux is proportional to the square of the electric charge of the nuclei, resulting in large cross sections for the coherent photoproduction of a vector meson in UPCs of Pb ions at the LHC. This process has a clear experimental signature: the decay products of the vector meson are the only particles detected in an otherwise empty detector.
The physics of vector meson photoproduction is described, e.g., in Refs.~\cite{Bertulani:2005ru, Baltz:2007kq,Contreras:2015dqa,Klein:2019qfb}. Two vector meson photoproduction processes, coherent and incoherent, are relevant for the results presented here. In the former, the photon interacts with all nucleons in a nucleus, while in the latter it interacts with a single nucleon. In both cases a single vector meson is produced. Experimentally, one can distinguish between these two production types through the transverse momentum \pt of the vector meson which is related to the transverse size of the target. While coherent photoproduction is characterised by an average transverse momentum $ \left<\pt\right> \sim$~60~MeV/$c$, incoherent production leads to higher average transverse momenta:~$ \left<\pt\right> \sim$~500~MeV/$c$. Incoherent photoproduction can also be accompanied by the excitation and dissociation of the target nucleon resulting in an even higher transverse momentum of the produced vector meson~\cite{Guzey:2018tlk}.
Shadowing, the observation that the structure of a nucleon inside nuclear matter is different from that of a free nucleon~\cite{Armesto:2006ph}, is not yet completely understood and several processes may have a role in different kinematic regions. In this context, coherent heavy vector meson photoproduction is of particular interest, because it is especially sensitive to the gluon distribution in the target, and thus to gluon shadowing effects at low Bjorken-$x$~\cite{Ryskin:1992ui,Rebyakova:2011vf}. One of the effects expected to contribute to shadowing in this kinematic region is saturation, a dynamic equilibrium between gluon radiation and recombination~\cite{Albacete:2014fwa}. The momentum scale of the interaction ($Q^{2}$) is related to the mass $m_V$ of the vector meson as $Q^{2}~\sim~m^{2}_{V}/4$, corresponding to the perturbative regime of quantum chromodynamics (QCD) in the case of charmonium states. The rapidity of the coherently produced ${\rm c} \bar{\rm c}$ states is related to the Bjorken-$x$ of the gluonic exchange as $x~=~\left(m_V/\sqrt{s_{\rm NN}}\right)\exp\left(\pm~y\right)$, where the two signs indicate that either of the incoming ions can be the source of the photon. Thus, the charmonium photoproduction cross section at midrapidity in \PbPb UPCs at the LHC Run~2 centre-of-mass energy per nucleon pair of \fivenn is sensitive to $x\in (0.3,1.4)\times10^{-3}$ at ALICE. It thereby provides information on the gluon distribution in nuclei in a kinematic region where shadowing could be present and saturation effects may be important~\cite{Guzey:2016qwo,Bendova:2020hbb}.
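As a quick check of the quoted kinematic reach, the Bjorken-$x$ relation above can be evaluated directly for the \Jpsi at \fivenn (the PDG mass value is assumed):

```python
import math

# x = (m_V / sqrt(s_NN)) * exp(+/- y): either incoming ion can emit
# the photon, hence the two signs.
def bjorken_x_range(m_v, sqrt_s_nn, y_max):
    """Interval of Bjorken-x probed for vector-meson rapidities |y| < y_max."""
    x0 = m_v / sqrt_s_nn
    return x0 * math.exp(-y_max), x0 * math.exp(+y_max)

# J/psi mass 3.0969 GeV/c^2, sqrt(s_NN) = 5.02 TeV, |y| < 0.8 as in this analysis:
x_lo, x_hi = bjorken_x_range(3.0969, 5020.0, 0.8)
# roughly (0.28, 1.37) x 10^-3, i.e. the x in (0.3, 1.4) x 10^-3 quoted above
```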
Charmonium photoproduction in ultra-peripheral \PbPb collisions was previously studied by the ALICE Collaboration at $\sqrt{s_{\mathrm{NN}}}~=~2.76$~Te\kern-.1emV\xspace~\cite{Abelev:2012ba, Abbas:2013oua,Adam:2015sia}. The coherent \Jpsi photoproduction cross section was measured both at midrapidity $|y|<0.9$ and at forward rapidity $-3.6 < y < -2.6$. Recently, a measurement of the rapidity dependence of coherent \Jpsi photoproduction at forward rapidity at the higher energy of \fivenn was also published by the ALICE Collaboration~\cite{Acharya:2019vlb}. In addition, the CMS Collaboration studied the coherent \Jpsi\ photoproduction accompanied by neutron emission at semi-forward rapidity $1.8 < |y| < 2.3$ at $\sqrt{s_{\mathrm{NN}}}~=~2.76$~Te\kern-.1emV\xspace~\cite{Khachatryan:2016qhq}. These measurements allow for a deeper insight into the rapidity dependence of gluon shadowing, but do not give information on the behaviour of gluons in the impact-parameter plane.
The square of the momentum transferred to the target nucleus, \mant, is related through a two-dimensional Fourier transform to the gluon distribution in the plane transverse to the interaction~\cite{Bartels:2003yj}; thus the study of the \mant-dependence of coherent \Jpsi\ photoproduction provides information about the spatial distribution of gluons as a function of the impact parameter.
Thus far, the only measurements in this direction were performed recently by the STAR Collaboration for the case of the $\rho^{0}$ vector meson~\cite{Adamczyk:2017vfu} and for the yield of \Jpsi in semi-central Au--Au collisions~\cite{STAR:2019yox}.
In this Letter, the first measurement of the \mant-dependence of the coherent \Jpsi\ photoproduction cross section at midrapidity in \PbPb UPCs at \fivenn is presented. The \Jpsi vector mesons were reconstructed in the rapidity range $|y|<0.8$ through their decay into \mumu, taking advantage of the better mass and momentum resolution of this channel with respect to the $e^+e^-$ channel. The data sample, recorded in 2018, is approximately 10 times larger than that used in previous ALICE measurements at midrapidity at the lower energy of $\sqrt{s_{\mathrm{NN}}}~=~2.76$~Te\kern-.1emV\xspace \cite{Adam:2015sia}. Cross sections are reported for six \mant intervals and compared with theoretical predictions.
\section{Detector description}
The ALICE detector and its performance are described in Refs.~\cite{Aamodt:2008zz, Abelev:2014ffa}. Three central barrel detectors, the Inner Tracking System (\ITS), the Time Projection Chamber (\TPC), and the Time-of-Flight (\TOF), in addition to two forward detectors, \VZERO and the ALICE Diffractive (\AD) arrays, are used in this analysis. The central barrel detectors are surrounded by a large solenoid magnet producing a magnetic field of $B = 0.5$~T. The \VZERO, \AD, \ITS, and \TOF detectors are used for triggering, the \ITS and the \TPC for particle tracking, and the \TPC for particle identification.
The \VZERO is a scintillator detector made of two counters, V0A and V0C, installed on both sides of the interaction point. The V0A and V0C cover the pseudorapidity ranges $2.8< \eta <5.1$ and $-3.7< \eta <-1.7$, respectively. Both counters are segmented in four rings in the radial direction, with each ring divided into 8 sections in azimuth.
The \AD consists of two scintillator stations, ADA and ADC, located at 16 and $-19$ m along the beam line with respect to the nominal interaction point and covering the pseudorapidity ranges $4.8< \eta <6.3$ and $-7.0 < \eta <-4.9$, respectively~\cite{Akiba:2016ofq,Broz:2020ejr}.
The \ITS is a silicon-based detector made of six cylindrical layers using three different technologies. The Silicon Pixel Detector (\SPD) forms the two innermost layers of the \ITS; they cover $|\eta|<2$ and $|\eta|<1.4$, respectively. Apart from tracking, the \SPD is also used for triggering purposes and to reconstruct the primary vertex.
The \ITS is cylindrically surrounded by the \TPC, whose main purpose is to track particles and provide charged-particle momentum measurements with good two-track separation and particle identification. The \TPC coverage in pseudorapidity is $|\eta|<0.9$ for tracks with full radial length, with full coverage in azimuth. It offers good momentum resolution over a large range of track transverse momentum, from 0.1~GeV/$c$ to 100~GeV/$c$.
The \TOF is a large cylindrical gaseous detector based on multi-gap resistive-plate chambers. It covers the pseudorapidity region $|\eta|<0.8$. The \TOF readout channels are arranged into 18 azimuthal sectors which can provide topological trigger decisions.
\section{Data analysis}
\subsection{Event selection} \label{sec_event_selection}
The online event selection was based on a dedicated UPC trigger which selected back-to-back tracks in an otherwise empty detector. This selection required ($i$) that nothing above the trigger threshold was detected in the \VZERO and \AD detectors, ($ii$) a topological trigger requiring less than eight \SPD chips with trigger signal, forming at least two pairs; each pair was required to have an \SPD chip fired in each of the two layers and to be in compatible azimuthal sectors, with an opening angle in azimuth between the two pairs larger than 144\degree, ($iii$) a topological trigger in the \TOF requiring more than one and less than seven \TOF sectors to register a signal; at least two of these sectors should have an opening angle in azimuth larger than 150\degree.
The integrated luminosity of the analysed sample is 233~$\mu\text{b}^{-1}$. The luminosity is determined from the counts of a reference trigger based on a multiplicity selection in the \VZERO detector, with the corresponding cross section estimated from a van der Meer scan; this procedure has an uncertainty of 2.2$\%$~\cite{ALICE-PUBLIC-2021-001}. The determination of the live-time of the UPC trigger introduces an additional uncertainty of 1.5$\%$. The total relative systematic uncertainty of the integrated luminosity is thus 2.7$\%$.
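The total of 2.7$\%$ follows from adding the two independent contributions in quadrature, which is the assumption made in this small sketch:

```python
import math

def combine_in_quadrature(*rel_unc_percent):
    """Combine independent relative uncertainties, given in percent."""
    return math.sqrt(sum(u * u for u in rel_unc_percent))

# 2.2% (van der Meer cross section) and 1.5% (trigger live-time):
total = combine_in_quadrature(2.2, 1.5)  # ~2.66%, quoted as 2.7%
```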
Additional offline \VZERO and \AD veto decisions were applied in the analysis. The offline veto algorithm improved the signal-to-background ratio, because it integrated the signal over a larger timing window than its online counterpart. Some good events were lost due to this selection; the loss was taken into account with the correction for the veto trigger inefficiency discussed in Sec.~\ref{sec_axe}. The systematic uncertainty from the \VZERO and \AD vetoes was estimated as the relative change in the measured \Jpsi cross section before and after imposing them and correcting for the losses; it amounts to 3\%.
Each event was required to have a reconstructed primary vertex within 15~cm of the nominal interaction point along the beam direction, $z$, and exactly two tracks. These tracks were reconstructed using combined tracking in the \ITS and \TPC. Tracks were required to have at least 70 (out of 159) \TPC space points and a hit in each of the two layers of the \SPD. Each track had to have a distance of closest approach to the event interaction vertex of less than 2~cm along the $z$-axis direction, and was required to have $|\eta|<0.9$. The relative systematic uncertainty from tracking, which takes into account the track quality selection and the track propagation from the \TPC to the \ITS, was estimated from a comparison of data and Monte Carlo simulation. The combined uncertainty to reconstruct both tracks is 2.8\%.
The particle identification (PID) was provided by the specific ionisation losses in the \TPC, which offer a large separation power between muons and electrons from the leptonic decays of the \Jpsi in the momentum range $(1.0,2.0)$~\GeVc, relevant for this analysis. The effect of a possible misidentification was found to be negligible.
An offline \SPD decision was also applied in the analysis. The offline topological \SPD algorithm ensured that the selected tracks crossed the \SPD chips used in the trigger decision. The relative systematic uncertainty from the \SPD and \TOF trigger amounts to 1.3$\%$, which was estimated using a data-driven method by changing the requirements on the probe tracks.
The selected events were required to have tracks with opposite electric charge, the rapidity of the dimuon candidate was restricted to $|y|<0.8$ and its \pt had to be less than 0.11~\GeVc, in order to obtain a sample dominated by coherent interactions with just a small contamination from incoherent processes. The measurement was initially carried out in \pttwo intervals, because for collider kinematics \mant$\approx~$\pttwo. The corrections needed to obtain the \mant-dependence are discussed in Sec.~\ref{sec:pt2tot}.
\subsection{Signal extraction}
As a first step in extracting the coherent \Jpsi\ signal, a fit to the opposite sign dimuon invariant mass distribution was performed. The model used to fit the data consists of three templates: one Crystal Ball function~\cite{Oreglia:1980cs} (CB) to describe the \Jpsi resonance, a second CB function to describe the \Ppsi resonance, and an exponential function to describe the continuum production of muon pairs, $\gamma\gamma\to\mu^+\mu^-$.
The parameters of the exponential function were left free. The integral of this exponential in the mass range $(3.0,3.2)$ \GeVmass was used to determine the number of events from the continuum production in this interval.
The CB parameters describing the tails of the measured distribution in data, commonly known as $\alpha$ and $n$, were fixed to the values obtained while fitting the dimuon invariant mass distribution in an associated Monte Carlo simulation, which is described in Sec.~\ref{sec_axe}. These settings were employed for both CB functions.
The number of \Jpsi candidates in each \pttwo interval was obtained from an extended maximum likelihood fit to the unbinned invariant mass distribution of all $\mu^+\mu^-$ pairs which survived the selection criteria described in Sec.~\ref{sec_event_selection}. Results of the fits for the six \pttwo intervals are shown in Fig.~\ref{fig_massNbins}. In all cases a very clear \Jpsi resonance is seen over a fairly small background. Note that the effect on the kinematics from a potential dimuon decay including bremsstrahlung is negligible.
The relative systematic uncertainty from the signal extraction was calculated by repeating the fit over different invariant mass ranges, and modifying the CB $\alpha$ and $n$ parameters accordingly. These uncertainties vary in the interval (0.7,2.2)\%.
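The Crystal Ball line shape used for the two charmonium peaks is a Gaussian core matched continuously to a power-law low-mass tail that accounts for radiative energy loss. A minimal, unnormalised sketch (the parameter values in the comments are illustrative; in the analysis $\alpha$ and $n$ are fixed from Monte Carlo):

```python
import math

def crystal_ball(x, mu, sigma, alpha, n):
    """Unnormalised Crystal Ball line shape: Gaussian core for
    (x - mu)/sigma > -alpha, power-law tail of slope n below it.
    alpha > 0 sets where the tail starts.
    """
    t = (x - mu) / sigma
    if t > -alpha:
        return math.exp(-0.5 * t * t)            # Gaussian core
    # tail coefficients chosen so the function and its derivative
    # are continuous at t = -alpha
    a = (n / alpha) ** n * math.exp(-0.5 * alpha * alpha)
    b = n / alpha - alpha
    return a * (b - t) ** (-n)                   # power-law tail
```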
\begin{figure}[!th]
\begin{center}
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin1}.eps}
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin2}.eps}\\
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin3}.eps}
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin4}.eps}\\
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin5}.eps}
\includegraphics[width=0.46\linewidth]{{figures/mass/mass6Bin6}.eps}
\end{center}
\caption{Invariant-mass distributions for different \pttwo intervals with the global fit described in the text shown with the blue line. The exponential part of the fit model, representing the $\gamma\gamma\to\mu^+\mu^-$ background, is shown in red.}
\label{fig_massNbins}
\end{figure}
\subsection{Corrections for irreducible backgrounds}
The selection criteria described above are not sensitive to events which mimic the signature of coherent \Jpsi production, but are coming from feed-down of \Ppsi or incoherent production. The contribution of these events was taken into account with the \fD and \fI factors, respectively, entering Eq.~(\ref{eq_yieldCorrection}),
\begin{equation} \label{eq_yieldCorrection}
N^{\text{coh}}_{\Jpsi} = \frac{N^{\text{fit}}}{1 + \fI + \fD}\times\frac{1}{(\axe)^{\text{coh}}_{\Jpsi}},
\end{equation}
where $N^{\text{fit}}$, the yield of \Jpsi candidates, is the integral of the CB describing the \Jpsi signal in the fit of the dimuon invariant mass spectrum, and $(\axe)^{\text{coh}}_{\Jpsi}$ is the acceptance and efficiency correction factor described in Sec.~\ref{sec_axe}.
Feed-down refers to the decay of a \Ppsi to a \Jpsi plus anything else, where these additional particles were not detected for some reason. The correction for these events, \fD, was estimated with Monte Carlo simulations describing the apparatus (\axe) factor for the following channels: \Jpsi$\rightarrow$\mumu, \Ppsi$\rightarrow$\mumu, and \Ppsi$\rightarrow \Jpsi + X$; and the measured ratio of \Ppsi to \Jpsi production cross sections. The details of the method are described in Ref.~\cite{Acharya:2019vlb}. The results for each \pttwo interval are summarised in Table~\ref{tab_yieldcorrection}. Relative systematic uncertainties, estimated by using different cross section ratios, are \pttwo-correlated. Their relative effect on the final cross section can be found in Table~\ref{tab_syserrcorrelation}; it is well below 1\%.
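Eq.~(\ref{eq_yieldCorrection}) is a simple multiplicative correction; as an illustration (the numbers in the comment are placeholders, not the measured values):

```python
def coherent_yield(n_fit, f_i, f_d, acc_eff):
    """Corrected coherent J/psi yield:
    N_coh = N_fit / (1 + f_I + f_D) / (A x eps), where
    n_fit   -- J/psi yield from the invariant-mass fit,
    f_i     -- remaining incoherent fraction,
    f_d     -- psi(2S) feed-down fraction,
    acc_eff -- acceptance-times-efficiency for coherent J/psi.
    """
    return n_fit / (1.0 + f_i + f_d) / acc_eff

# e.g. coherent_yield(107.0, 0.05, 0.02, 1.0) is ~100 (placeholder inputs)
```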
Most of the incoherent production of \Jpsi off nucleons was rejected by the restriction of the phase space in \pt, as mentioned in Sec.~\ref{sec_event_selection}. However, around 5\% of all incoherent events remained in the region where the measurement was performed. To estimate the \fI factor correcting for the remaining incoherent events, a fit to the measured \Jpsi \pt distribution of data in the invariant mass range $(3.0,3.2)$~\GeVmass was used. The model fitted to the data consists of six templates: coherent~\Jpsi photoproduction, incoherent~\Jpsi photoproduction, incoherent~\Jpsi photoproduction with nucleon dissociation, coherent~\Ppsi photoproduction, incoherent~\Ppsi photoproduction, and continuum production from $\gamma\gamma\to\mu^+\mu^-$. The templates of all processes except the dissociative \Jpsi and the continuum were taken from Monte Carlo simulations. In the fit, the fractions of both \Ppsi photoproduction processes were fixed to values calculated as described above, with the modifications that the \pt restriction was released and that the invariant mass was required to be in the range $(3.6,3.8)$~\GeVmass. The other fractions were left free in the fit. The normalisation of the continuum was constrained by the invariant mass fit to match the sum of background events in the mass range of the \Jpsi. The shape of the continuum was taken from the dimuon \pt distribution in the invariant mass range between the \Jpsi and the \Ppsi, while the shape for the nucleon dissociation process was based on the H1 parameterisation~\cite{Alexa:2013xxa}. The global template was fitted to data using an extended unbinned maximum likelihood fit. The results for each \pttwo interval are reported in Table~\ref{tab_yieldcorrection}. The systematic uncertainties, estimated from a combination of the fit uncertainty and a modification of the coherent template used in the fitting model, are \pt-correlated.
Their relative effect on the final cross section can be found in Table~\ref{tab_syserrcorrelation}.
\subsection{Acceptance, efficiency and pile-up corrections
\label{sec_axe}}
The STARlight 2.2.0 MC generator~\cite{Klein:2016yzr} was used to generate samples of coherent and incoherent events for the production of \Jpsi$\rightarrow$\mumu and \Ppsi$\rightarrow$\mmpp. GEANT 3.21~\cite{Brun:1082634} was used to reproduce the response of the detector. The simulated data were reconstructed with the same software as the real ones, accounting for actual data-taking conditions. Values of the acceptance and efficiency, $(\axe)^{\text{coh}}_{\Jpsi}$, are shown in Table~\ref{tab_yieldcorrection} for the different \pttwo intervals used in this analysis.
\AD and \VZERO were used to veto activity at forward rapidity. These detectors were sensitive to signals coming from independent interactions (pile-up), which resulted in the rejection of potentially interesting events. The correction factor for this effect was obtained using a control sample of events collected with an unbiased trigger. These were then used to compute the probability of having a veto from \AD or \VZERO in otherwise empty events. The total veto trigger efficiency $\epsilon^{\text{VETO}}$ used in Eq.~(\ref{eq_expcohCS}) was determined to be 0.94. The corresponding systematic uncertainty is included in the \AD and \VZERO value of 3\% mentioned in Sec.~\ref{sec_event_selection}.
Electromagnetic dissociation (EMD) is another process which may cause the rejection of a good event due to the veto from the forward detectors. EMD can occur when photons excite one or both interacting nuclei. Upon de-excitation, neutrons and sometimes other charged particles are emitted at forward rapidities~\cite{Pshenichnov:1999hw} and can trigger a \VZERO or \AD veto. Such loss of events was quantified from data gathered with a specialised EMD trigger; the efficiency correction factor to take into account these losses amounts to $\epsilon^{\text{EMD}}~=~0.92$ with a relative systematic uncertainty of 2\% given by the statistical uncertainty from the control sample.
\begin{table*}[t]
\caption{Incoherent correction \fI, feed-down correction \fD and the $(\axe)^{\text{coh}}_{\Jpsi}$ correction factor for each \pttwo interval. See Eq.~(\ref{eq_yieldCorrection}).}
\centering
\begin{tabular}{lccc}
\toprule
\pttwo interval (\GeVtwo) & \fI & \fD & $(\axe)^{\text{coh}}_{\Jpsi}$\\
\midrule
$\left(0,0.00072\right)$ & 0.0045 & 0.0039 & 0.0348\\
$\left(0.00072,0.0016\right)$ & 0.0047 & 0.0046 & 0.0352\\
$\left(0.0016,0.0026\right)$ & 0.0047 & 0.0058 & 0.0358\\
$\left(0.0026,0.004\right)$ & 0.0072 & 0.0072 & 0.0365\\
$\left(0.004,0.0062\right)$ & 0.0120 & 0.011 & 0.0379\\
$\left(0.0062,0.0121\right)$ & 0.0300 & 0.028 & 0.0412\\
\bottomrule
\end{tabular}
\label{tab_yieldcorrection}
\end{table*}
\subsection{Unfolding of the $\mathbf{\textit p^{2}_{\rm T}}$ distribution}
Cross sections were measured in different \pttwo intervals. In order to account for the migration of about 45\% of the events across \pttwo intervals due to the finite resolution of the detector, an unfolding procedure was used. The effect of migrations is much more important than the small difference between the data and MC \pttwo spectra, so no re-weighting was performed prior to unfolding.
Amongst many available methods, unfolding based on Bayes' theorem~\cite{DAgostini:1994fjx} was chosen to perform the unfolding, while the singular-value decomposition (SVD) method~\cite{Hocker:1995kb} served to study potential systematic effects. The implementations of these methods as provided by RooUnfold~\cite{Adye:2011gm} were used in this analysis.
Bayesian unfolding is an iterative method, so the result depends on the number of iterations. The size of the data sample is large enough to investigate different numbers of \pttwo intervals. These two parameters, that is, the number of iterations and the number of intervals, were tuned using Monte Carlo simulations by studying the evolution of the statistical uncertainty in each interval as a function of the number of iterations, and by using the relative difference between the results of adjacent iterations. It was found that the best combination for this analysis is Bayes' unfolding with three iterations applied to the \pttwo distribution split into six regions. The widths of the \pttwo intervals were chosen to have similar statistical uncertainties in each region.
The Monte Carlo sample used for unfolding contained 600 000 events. An 80\% fraction of them was used to train the response matrix which is used to unfold the true distribution from the measured distribution. This matrix was tested on the remaining 20\% of the events. The unfolding matrix was able to correct the smeared distribution with high precision. Comparison with results using the SVD method revealed a \pt-correlated relative systematic uncertainty with values in the interval (0.6,2.3)\%.
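For reference, the core of the iterative Bayesian (D'Agostini) method can be written compactly. The sketch below assumes unit efficiency and a flat starting prior; it illustrates the algorithm only and is not the RooUnfold implementation used in the analysis:

```python
import numpy as np

def bayes_unfold(response, measured, n_iter=3):
    """Iterative Bayesian (D'Agostini) unfolding sketch.
    response[j, i] = P(measured in bin j | true in bin i); columns are
    assumed to sum to 1 (unit efficiency)."""
    n_true = response.shape[1]
    prior = np.full(n_true, measured.sum() / n_true)  # flat starting prior
    for _ in range(n_iter):
        folded = response @ prior                      # expected measured bins
        # Bayes' theorem: P(true bin i | measured bin j) for every bin pair
        posterior = response * prior / folded[:, None]
        prior = posterior.T @ measured                 # updated truth estimate
    return prior
```

With unit efficiency the total number of events is conserved by construction, which is a useful sanity check.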
\subsection{Cross section for coherent $\mathbf{\rm{J}/\psi}$ photoproduction in UPCs}
The differential cross section for coherent \Jpsi photoproduction in a given \pttwo interval and a given rapidity range $\Delta y$ in Pb--Pb UPCs is
\begin{equation} \label{eq_expcohCS}
\frac{{\rm d}^{2}\sigma^{\text{coh}}_{\Jpsi}}{{\rm d}y{\rm d}\pttwo} = \frac{^{\text{unf}}N^{\text{coh}}_{\Jpsi}}{\epsilon^{\text{VETO}}\times\epsilon^{\text{EMD}}\times\BR(\Jpsi\rightarrow\mu^{+}\mu^{-})\times\lumi_{\text{int}}\times\Delta \pttwo\times\Delta y},
\end{equation}
where the correction factors $\epsilon^{\text{VETO}}$ and $\epsilon^{\text{EMD}}$ are introduced in Sec.~\ref{sec_axe}, $\BR(\Jpsi\rightarrow\mu^{+}\mu^{-})$ is the branching ratio ($5.961 \pm 0.033$)$\%$~\cite{Zyla:2020}, $\lumi_{\text{int}}$ is the total integrated luminosity of the data sample, $\Delta \pttwo$ is the size of the interval where the measurement was performed, and finally, $^{\text{unf}}N^{\text{coh}}_{\Jpsi}$ is the number of coherent \Jpsi candidates after unfolding the results given by Eq.~(\ref{eq_yieldCorrection}). The corresponding systematic uncertainties are summarised in the upper part of Table~\ref{tab_syserrcorrelation}.
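As an illustration, Eq.~(\ref{eq_expcohCS}) is a straightforward normalisation of the unfolded yield. In the sketch below, $\epsilon^{\text{VETO}}$, $\epsilon^{\text{EMD}}$, the branching ratio and $\Delta y = 1.6$ (for $|y|<0.8$) are taken from the text, while the yield, luminosity and interval width are placeholders to be supplied in consistent units:

```python
def d2sigma_coh(n_unf, lumi, dpt2, eps_veto=0.94, eps_emd=0.92,
                br=0.05961, dy=1.6):
    """Differential UPC cross section of Eq. (eq_expcohCS): the unfolded
    coherent yield normalised by the veto and EMD efficiencies, the
    branching ratio, the integrated luminosity and the interval sizes."""
    return n_unf / (eps_veto * eps_emd * br * lumi * dpt2 * dy)
```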
With the exception of signal extraction, all other systematic uncertainties mentioned up to here are correlated across \pttwo intervals.
\subsection{Corrections for the photonuclear cross section
\label{sec:pt2tot}}
The cross section described by Eq.~(\ref{eq_expcohCS}) is the one measured by ALICE. The main theoretical interest is in the photonuclear process at a fixed energy. To obtain the corresponding cross section, one has to account for several effects. None of these effects depends on the ALICE detector; they depend only on the kinematics and the quantum nature of the process. This means that the uncertainties involved in going from the UPC to the photonuclear cross sections are of a purely theoretical nature.
At midrapidity, the UPC cross section corresponds to the $\gamma$Pb cross section multiplied by twice the photon flux averaged over the impact parameter, $n_{\gamma{\rm Pb}} (y)$,
\begin{equation}
\label{eq_upc_gPb}
\left. \frac{{\rm d}^{2}\sigma^{\text{coh}}_{\Jpsi}}{{\rm d}y{\rm d}\pttwo}
\right|_{y=0}= 2n_{\gamma{\rm Pb}} (y=0)
\frac{{\rm d}\sigma_{\gamma{\rm Pb}}}{{\rm d}|t|}.
\end{equation}
Since the rapidity dependence of the UPC cross section in the rapidity range studied here is fairly flat, the measurements are taken to represent the value at $y=0$. In UPCs, there are two potential photon sources, so in principle both amplitudes have to be added and their interference needs to be accounted for. This was studied for the first time in Ref.~\cite{Klein:1999gv} and later measured for the case of $\rho^0$ coherent photoproduction by the STAR Collaboration~\cite{Abelev:2008ew}. The interference is important only at very small values of \mant (see for example~\cite{Zha:2017jch}). To account for this effect, the STARlight program, which includes the interference of both amplitudes, was used. It was found that this is an 11.6$\%$ effect in the smallest \mant interval, where the effect is concentrated. To estimate the potential uncertainty of this procedure, the interference effects with the nominal strength were compared to those with a 25\% reduction of the strength. The relative change in the photonuclear cross section varied from 0.3 to 1.2\%, with the largest uncertainty being assigned to the smallest \mant interval.
The photon flux was computed in the semiclassical formalism following the prescription detailed in Ref.~\cite{Contreras:2016pkc} and cross checked with that of Ref.~\cite{Broz:2019kpl}. The flux amounts to 84.9 with an uncertainty of 2\% coming from variations of the geometry of the Pb ions.
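The flux part of this conversion is a single division by twice the impact-parameter-averaged photon flux, inverting Eq.~(\ref{eq_upc_gPb}) at $y=0$. Note that the measured photonuclear values in Table~\ref{tab_cs} additionally include the interference and \pttwo$\rightarrow$\mant corrections described in this section, which the sketch below deliberately omits:

```python
def photonuclear_cs(d2sigma_upc, photon_flux=84.9):
    """Flux part of the UPC -> gamma-Pb conversion of Eq. (eq_upc_gPb):
    divide the UPC cross section by twice the averaged photon flux
    (84.9, as quoted in the text). Interference and pT^2 -> |t|
    corrections are not included here."""
    return d2sigma_upc / (2.0 * photon_flux)
```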
Although the value of \pttwo is a good approximation to that of \mant, it is not exact due to the fact that the photon also has a transverse momentum in the laboratory frame. To account for this effect, the cross section was unfolded with a response matrix built from \pttwo- and \mant-distributions. Two sources for the distributions were used: ($i$) the STARlight generator which includes the transverse momenta of the photons, but does not describe so well the shape of the measured \pttwo distribution in data, and ($ii$) measured \pttwo values coupled to photon momenta randomly generated using the transverse momentum distribution of photons from Refs.~\cite{Vidovic:1992ik,Hencken:1995me}. The average of the corresponding unfolded results was used for the cross section, while half their difference was taken as a systematic uncertainty which varied between 0.1\% and 5.7\%, with this last value corresponding to the largest \mant interval.
These three uncertainties are reported in the lower part of Table~\ref{tab_syserrcorrelation}. The uncertainty on the value of the photon flux at $y=0$ is correlated across \mant, the uncertainty on the \pttwo$\rightarrow$\mant unfolding is partially correlated and the uncertainty on the variation of the interference term is anti-correlated in the lowest \mant region and correlated in the other \mant regions. They are added in quadrature for the final result shown in Sec.~\ref{sec_res} and Table~\ref{tab_cs} below.
\begin{table}[t]
\caption{Summary of the identified systematic uncertainties on the coherent \Jpsi photoproduction and photonuclear cross sections. The uncertainties to go from the measured cross section in UPCs to the photonuclear process are listed after the line in the middle of the table and their origin depends on the modeling of the photon flux and interference effects. The correlation across \pttwo intervals is discussed in the text.}
\begin{center}
\begin{tabular}{l c}
\toprule
Source & Uncertainty ($\%$) \\
\midrule
Signal extraction & (0.7,2.2)\\
$f_{\rm D}$ & (0.1,0.5) \\
$f_{\rm I}$ &(1.1,2.3) \\
\pttwo migration unfolding & (0.6,2.3)\\
Luminosity & 2.7\\
V0 and AD veto & 3\\
EM dissociation & 2\\
ITS-TPC tracking & 2.8\\
SPD and TOF efficiency & 1.3\\
Branching ratio & 0.5\\
\midrule
Variations in interference strength & (0.3,1.2)\\
Value of the photon flux at $y=0$ & 2\\
\pttwo$\rightarrow$\mant unfolding & (0.1,5.7)\\
\bottomrule
\end{tabular}
\label{tab_syserrcorrelation}
\end{center}
\end{table}
\section{Results
\label{sec_res}}
The final result for the cross section measured in each \pttwo interval is reported in Table~\ref{tab_cs}. The statistical uncertainty originates from the error obtained in the fit to the dimuon invariant-mass distribution, propagating the uncertainties of the \fI and \fD corrections, see Eq.~(\ref{eq_yieldCorrection}), and the uncertainty related to the unfolding process. The uncorrelated systematic uncertainty from signal extraction and the quadratic sum of correlated systematic uncertainties are shown in Table~\ref{tab_cs}.
\begin{table*}[t]
\caption{Measured coherent \Jpsi photoproduction cross section in UPCs in different $\pttwo$ intervals as well as the photonuclear cross section in \mant-intervals. The first uncertainty is statistical, the second and third systematic, uncorrelated and correlated, respectively. The fourth uncertainty, for the photonuclear cross section case, is the systematic uncertainty on the correction to go from the UPC to the photonuclear cross section. The mean value of \mant in each interval is also shown.
}
\centering
\begin{tabular}{cccc}
\toprule
Interval (GeV$^2 c^{-2}$) & $\left<|t|\right>$ (GeV$^2 c^{-2}$) &
$\frac{{\rm d}^{2}\sigma^{\text{coh}}_{\Jpsi}}{{\rm d}y{\rm d}\pttwo}$ ($\frac{{\rm mb} c^2}{{\rm GeV}^2}$) &
$\frac{{\rm d}\sigma_{\gamma{\rm Pb}}}{{\rm d}|t|}$ ($\frac{{\rm mb} c^2}{{\rm GeV}^2}$) \\
\midrule
$\left(0,0.72\right)\times10^{-3}$ & 0.00032 & 1290 $\pm 74 \pm 29 \pm 73$ & 8.15 $\pm 0.50 \pm 0.18 \pm 0.46 \pm 0.20$\\
$\left(0.72,1.6\right)\times10^{-3}$ & 0.00113 & 1035 $\pm 47 \pm 10 \pm 60$ & 5.75 $\pm 0.27 \pm 0.06 \pm 0.34 \pm 0.16$\\
$\left(1.6,2.6\right)\times10^{-3}$ & 0.00207 & 743 $\pm 34 \pm 6 \pm 43$ & 4.23 $\pm 0.20 \pm 0.03 \pm 0.25 \pm 0.11$\\
$\left(2.6,4.0\right)\times10^{-3}$ & 0.00328 & 465 $\pm 24 \pm 6 \pm 27$ & 2.87 $\pm 0.15 \pm 0.04 \pm 0.17 \pm 0.08$\\
$\left(4.0,6.2\right)\times10^{-3}$ & 0.00498 & 229 $\pm 14 \pm 3 \pm 14$ & 1.48 $\pm 0.09 \pm 0.02 \pm 0.09 \pm 0.04$\\
$\left(6.2,12.1\right)\times10^{-3}$ & 0.00833 & 51 $\pm 5 \pm 1 \pm 4$ & 0.40 $\pm 0.04 \pm 0.01 \pm 0.03 \pm 0.03$\\
\bottomrule
\end{tabular}
\label{tab_cs}
\end{table*}
The results for the photonuclear cross section are listed in Table~\ref{tab_cs} and shown in Fig.~\ref{fig_cs}, where the measurement is compared with several theoretical predictions. The average \mant ($\left<|t|\right>$) quoted in Table~\ref{tab_cs} was estimated from the \mant-distribution used in the response matrix based on measured data (see above). The mean of the ensuing distribution in a given \pttwo interval was taken to be $\left<|t|\right>$.
STARlight utilises the vector meson dominance model and a parameterisation of the existing data on exclusive photoproduction of $\Jpsi$ off protons, coupled with a Glauber-like formalism, to obtain the photonuclear cross section. Since the \mant-dependence in this model comes from the Glauber calculation, meaning that it does not explicitly include gluon shadowing effects, it is an interesting baseline for comparisons; this approach is quite similar to the impulse approximation used in~\cite{Guzey:2013qza}. STARlight overestimates the measured cross section, and the shape of the distribution appears to be wider than that of the measured data.
The LTA prediction by Guzey, Strikman and Zhalov~\cite{Guzey:2016qwo} relies on the leading-twist approximation (LTA) of nuclear shadowing, which combines the Gribov--Glauber theory with inclusive diffractive data from HERA~\cite{Frankfurt:2011cs}. There are two LTA predictions: one called {\em high shadowing} and the other {\em low shadowing}. The low shadowing prediction is shown in Fig.~\ref{fig_cs}. The shape obtained from this model is similar to that of the data and describes the cross section within experimental uncertainties. As shown in Fig.~3 of~\cite{Guzey:2016qwo}, the high-shadowing version of the model has a similar shape, but the overall normalisation is smaller by a factor of around 1.7.
The b-BK model by Bendova et al.~\cite{Cepila:2018faq,Bendova:2019psy,Bendova:2020hbb} is based on the colour dipole approach, where the scattering amplitude is obtained from the impact-parameter-dependent solution of the Balitsky--Kovchegov equation coupled to a nuclear-like initial condition~\cite{Balitsky:1995ub,Kovchegov:1999yj} which incorporates saturation effects. This model also predicts the behaviour of the data quite well.
The different predictions of the STARlight and LTA or b-BK models reflect the effects of QCD dynamics (shadowing in LTA, saturation in b-BK) at small values of $x\sim10^{-3}$ and highlight the importance of measuring the \mant-dependence of the photonuclear cross section.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\linewidth]{{figures/cs/csont_ratio}.eps}
\end{center}
\caption{Dependence on \mant of the photonuclear cross section for the coherent photoproduction of $\Jpsi$ off Pb compared with model predictions~\cite{Klein:2016yzr,Guzey:2016qwo,Bendova:2020hbb} (top panel), where for LTA the {\em low shadowing} case is shown (see text). The bottom panel shows the model-to-data ratio for each prediction at each measured point. The uncertainties are split into those originating from the experiment and those originating from the correction to go from the UPC to the photonuclear cross section.}
\label{fig_cs}
\end{figure}
\section{Conclusions}
\enlargethispage{\baselineskip}
The first measurement of the \mant-dependence of coherent \Jpsi photonuclear production off Pb nuclei in UPCs is presented. The measurement was carried out with the ALICE detector at midrapidity, $|y|<0.8$, in ultra-peripheral \PbPb collisions at \fivenn and covers the small-$x$ range $(0.3-1.4)\times10^{-3}$. Photonuclear cross sections in six different intervals of \mant are reported and compared with theoretical predictions. The measured cross section shows a \mant-dependent shape different from a model based on the Pb nuclear form factor and closer to the shape predicted by models including QCD dynamical effects in the form of shadowing (LTA) or saturation (b-BK). The difference in shape and magnitude between the LTA and b-BK models is of the same order as the current measurement uncertainties, but the large data sample expected in the LHC Run 3~\cite{Citron:2018lsq} and the improvement in tracking from the upgrades of the ALICE detector~\cite{Abelevetal:2014cna} promise a much improved accuracy. These results highlight the importance of observables sensitive to the transverse gluonic structure of particles for extending the understanding of the high-energy limit of QCD.
\newenvironment{acknowledgement}{\relax}{\relax}
\begin{acknowledgement}
\section*{Acknowledgements}
\input{fa_2020-12-22.tex}
\end{acknowledgement}
\bibliographystyle{utphys}
\section{Introduction}
Reachability and safety specification for robotic and autonomous systems
is a fundamental problem in the verification of such systems.
It is difficult to imagine deploying robots in practical environments without (safety) verification, given possible critical issues such as collisions and malfunctions.
Several reachability analysis techniques have been developed for the safe operation of various types of systems (e.g.,~\cite{abate2008probabilistic, Majumdar2014, Chen2018}) and applied to quadrotor control~\cite{gillula2011applications}, legged locomotion~\cite{Piovan2015}, obstacle avoidance~\cite{Malone2017}, among others.
However, the practicality of these tools is often limited because
they require knowledge of system models.
The focus of this work is to develop a model-free reinforcement learning method for specifying reachability and safety in a probabilistic manner.
Several learning-based safety specification methods have recently been proposed for \emph{deterministic} dynamical systems without needing complete information about system models.
To learn backward reachable sets, Hamilton--Jacobi reachability-based tools were used in conjunction with Gaussian process regression~\cite{Fisac2018} and reinforcement learning~\cite{Fisac2019}.
As another safety certificate, a region of attraction was estimated using Lyapunov-based reinforcement learning~\cite{Berkenkamp2017} and
a neural network Lyapunov function~\cite{Richards2018}.
Forward invariance has also been exploited for safety verification by learning control barrier functions~\cite{Wang2018, Taylor2019}.
Departing from these tools for deterministic systems,
we propose a model-free safety specification method for stochastic systems by carefully combining probabilistic reachability analysis and reinforcement learning.
Specifically, our method aims to learn the maximal probability of avoiding the set of unsafe states.
Several methods have been developed for computing the probability of safety in various cases
via dynamic programming when the system model is known~\cite{abate2008probabilistic, Summers2010, Lesser2016, Yang2018}.
To overcome this limitation, our tool uses model-free reinforcement learning for estimating the probability of safety.
We further consider safety guarantees during the learning process so that our scheme runs without frequent intervention of a human supervisor who takes care of safety.
To attain this property, we employ the Lyapunov-based RL framework proposed in \cite{chow2018lyapunov}, where the Lyapunov function takes the form of value functions, and thus safety is preserved in a probabilistic manner through the Bellman recursion.
We revise this safe RL method to enhance its exploration capability.
Note that the purpose of exploration in our method is to enlarge or confirm knowledge about safety, while most safe RL schemes encourage exploration to find reward-maximizing policies within verified safe regions~\cite{turchetta2016safe,alshiekh2018safe,wachi2018safe}.
The main contributions of this work can be summarized as follows.
First, we propose a safe RL method that specifies the probabilistic safety of a given Markov control system without prior information about the system dynamics.
Our approach yields a sequence of safe and improving policies by imposing the Lyapunov constraint in its policy improvement stage and establishing a Lyapunov function in the policy evaluation stage.
If there is no approximation error, our RL-based safety specification algorithm is guaranteed to run safely throughout the learning process.
In such a case, the safe region determined by our approach also monotonically expands in a stable manner, and eventually converges to the maximal safe set.
Second, we develop an efficient safe exploration scheme to learn safe or reachable sets in a sample-efficient manner.
Safe policies tend to avoid reaching the borders of safe regions, so the ``learned'' probability of safety at and beyond those borders is likely to be less accurate than elsewhere.
To mitigate the imbalance of knowledge, we select the least-safe policy to encourage exploration.
This exploratory policy visits less-safe states so that the safe set becomes more accurate or grows faster.
Third, we implement our approach with deep neural networks to alleviate the scalability issue that arises in high-dimensional systems.
Converting the Lyapunov constraints to a regularization term, our approach can be implemented in conventional actor-critic algorithms for deep RL.
We further show that our method outperforms other baseline methods through simulation studies.
\section{Background}\label{sec:setup}
We consider an MDP, defined as a tuple $\left( \mathcal{S}, \mathcal{A}, p \right)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, and $p: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$ is the transition probability function.
We also use the notation $\mathcal{S}_{\mathrm{term}}$ and $\mathcal{S}'$ to represent the sets of terminal and non-terminal states, respectively.
Moreover, a (stochastic) Markov policy, $\pi : \mathcal{S} \times \mathcal{A} \to [0,1]$, is a measurable function, and $\pi ( \bm{a} | \bm{s} )$ represents the probability of executing action $\bm{a}$ given state $\bm{s}$.
We also let $\Pi$ denote the set of stochastic Markov policies.
\subsection{Probabilistic Reachability and Safety Specifications}
We consider the problem of specifying the probability that the state of an MDP will not visit a pre-specified \emph{target set} $\mathcal{G} \subseteq S$ before arriving at a terminal state given an initial state $\bm{s}$ and a Markov policy $\pi$.
For our purpose of safety specification, the target set represents the set of \emph{unsafe} states.
The probability of safety given a policy $\pi$ and an initial state $\bm{s}$ is denoted by $P_{\bm{s}}^{\mathrm{safe}} (\pi)$.
To compute it, we consider the problem of evaluating $1- P_{\bm{s}}^{\mathrm{safe}} (\pi)$, which represents the probability of visiting the target set at least once given an initial state $\bm{s}$ and a Markov policy $\pi$:
\[
P_{\bm{s}}^{\mathrm{reach}}(\pi) := \mathbb{P}^{\pi}\left( \exists t\in \left\{ 0,\dots,T^{\ast}-1 \right\} \;\mathrm{s.t.}\; s_{t}\in\mathcal{G} | s_{0}=\bm{s} \right),
\]
where $T^{\ast}$ is the first time to arrive at a terminal state.
Note that $P_{\bm{s}}^{\mathrm{reach}}(\pi)$ represents the \emph{probability of unsafety}.
Our goal is to compute the minimal probability of unsafety and specify the following \emph{maximal probabilistic safe set} with tolerance $\alpha \in (0,1)$:
\[
S^* (\alpha) := \{ \bm{s} \in \mathcal{S} \mid \inf_\pi P_{\bm{s}}^{\mathrm{reach}}(\pi) \leq \alpha \}.
\]
This set can be used for \emph{safety verification}: If the agent is initialized within $S^* (\alpha)$, we can guarantee safety with probability $1 - \alpha$ by carefully steering the agent; otherwise, it is impossible to do so.
We now express the probability of unsafety as an expected sum of stage-wise costs by using the technique proposed in \cite{Summers2010}.
Let $\mathbf{1}_{C}:\mathcal{S}\mapsto\{0,1\}$ denote the indicator function of set $C \subseteq \mathcal{S}$ so that its value is 1 if $s \in C$; otherwise, 0.
Given a sequence of states $\{s_{0},\dots,s_{t}\}$, we observe that
\begin{align*}
\prod_{k=0}^{t-1}\mathbf{1}_{\mathcal{G}^{c}}(s_{k}) \mathbf{1}_{\mathcal{G}}(s_{t}) =
\begin{cases}
1\quad\mathrm{if} \; s_{0},\dots,s_{t-1}\in\mathcal{G}^{c}, s_{t}\in\mathcal{G}
\\
0\quad\mathrm{otherwise}.
\end{cases}
\end{align*}
It is easily seen that the sum of $\prod_{k=0}^{t-1}\mathbf{1}_{\mathcal{G}^{c}}(s_{k}) \mathbf{1}_{\mathcal{G}}(s_{t})$ over the trajectory equals 0 if the trajectory never visits $\mathcal{G}$, and 1 if there exists at least one state $s_{t}$ that is in $\mathcal{G}$.
The probability of unsafety under $\pi$ is then given by
\[
P_{\bm{s}}^{\mathrm{reach}}(\pi) =
\mathbb{E}^{\pi} \left[ \sum_{t=0}^{T^{\ast}-1} \prod_{k=0}^{t-1}\mathbf{1}_{\mathcal{G}^{c}}(s_{k}) \mathbf{1}_{\mathcal{G}}(s_{t}) \mid s_{0}=\bm{s} \right].
\]
We introduce an auxiliary state $x_{t}$, which is an indicator of whether a trajectory $\{s_{0},\cdots,s_{t-1}\}$ is fully safe or not.
It is defined as
\[
\begin{aligned}
&x_{0} = 1, \quad x_{t} = \prod_{k=0}^{t-1} \mathbf{1}_{\mathcal{G}^{c}}(s_{k}),\quad t \geq 1.
\end{aligned}
\]
Since
$x_{t+1}=x_{t}\mathbf{1}_{\mathcal{G}^{c}}(s_{t})$,
$x_{t+1}$ depends solely on $(s_{t}, x_{t})$ and $a_t$, so the Markov property holds with respect to the state pair $(s_{t},x_{t})$.
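Because $x_{t+1}$ is a deterministic function of $(s_t, x_t)$, the probability of unsafety can be estimated by plain Monte Carlo rollouts that accumulate the stage cost $x_t \mathbf{1}_{\mathcal{G}}(s_t)$. The sketch below assumes user-supplied callables for the dynamics, the target-set indicator, the terminal-state indicator and the policy; this interface is our own illustration, not part of the formulation:

```python
def estimate_p_reach(step, in_target, is_terminal, policy, s0,
                     n_episodes=1000, horizon=200):
    """Monte Carlo estimate of P_s^reach(pi). Each episode contributes
    sum_t x_t * 1_G(s_t), which is 1 exactly when the target set G is hit
    before termination and 0 otherwise."""
    hits = 0.0
    for _ in range(n_episodes):
        s, x = s0, 1
        for _ in range(horizon):
            if is_terminal(s):
                break
            hits += x * in_target(s)       # stage cost d(s_t, x_t)
            x = x * (1 - in_target(s))     # x_{t+1} = x_t * 1_{G^c}(s_t)
            s = step(s, policy(s))
    return hits / n_episodes
```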
The problem of computing the minimal probability of unsafety can be formulated as
\begin{equation}\label{opt}
\inf_{\pi \in \Pi} P_{\bm{s}}^{\mathrm{reach}}(\pi) = \mathbb{E}^{\pi} \left[ \sum_{t=0}^{T^{\ast}-1} x_{t}\mathbf{1}_{\mathcal{G}}(s_{t}) \mid (s_{0},x_{0})=(\bm{s},1) \right ],
\end{equation}
which is in the form of the standard optimal control problem.
Let $V^{\ast}: \mathcal{S} \times \{0, 1\} \to \mathbb{R}$ denote the optimal value function of this problem, that is, $V^* (\bm{s}, \bm{x}) := \inf_{\pi \in \Pi} \mathbb{E}^{\pi} [ \sum_{t=0}^{T^{\ast}-1} x_{t}\mathbf{1}_{\mathcal{G}}(s_{t}) \mid (s_{0},x_{0})=(\bm{s},\bm{x}) ]$.
After computing the optimal value function, we can obtain the maximal probabilistic safe set by simple thresholding:
\[
S^* (\alpha) = \{ \bm{s} \in \mathcal{S} \mid V^* (\bm{s}, 1) \leq \alpha \}.
\]
Note that this set is a superset of
$S^\pi (\alpha):= \{ \bm{s} \in \mathcal{S} \mid P_{\bm{s}}^{\mathrm{reach}}(\pi) \leq \alpha \} = \{\bm{s} \in \mathcal{S} \mid V^\pi (\bm{s}, 1)\leq \alpha \}$
for any Markov policy $\pi$, where $V^\pi: \mathcal{S} \times \{0,1\} \to \mathbb{R}$ denotes the value function of $\pi$ defined by
$V^\pi (\bm{s}, \bm{x}) := \mathbb{E}^{\pi} [ \sum_{t=0}^{T^{\ast}-1} x_{t}\mathbf{1}_{\mathcal{G}}(s_{t}) \mid (s_{0},x_{0})=(\bm{s},\bm{x}) ]$.
To distinguish $S^\pi (\alpha)$ from $S^* (\alpha)$, we refer to the former as the (probabilistic) safe set under $\pi$.
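Both $S^\pi(\alpha)$ and $S^*(\alpha)$ reduce to simple thresholding once the corresponding value function is available. A minimal sketch, where the list-based state indexing is our own convention:

```python
def safe_set(v_unsafe, alpha):
    """S^pi(alpha) by thresholding: v_unsafe[s] holds V^pi(s, 1), the
    probability of unsafety from state s with a clean history (x = 1)."""
    return {s for s, v in enumerate(v_unsafe) if v <= alpha}
```

Using $V^*(\cdot, 1)$ in place of $V^\pi(\cdot, 1)$ yields $S^*(\alpha)$ instead.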
\subsection{Safe Reinforcement Learning}
Our goal is to compute the minimal probability of unsafety and the maximal probabilistic safe set without the knowledge of state transition probabilities in a \emph{safety-preserving} manner.
We propose an RL algorithm that guarantees the safety of the agent during the learning process for safety specification.
More specifically, the sequence $\{\pi_k\}_{k=0, 1, \ldots}$ generated by the proposed RL algorithm satisfies
\begin{equation}\label{const}
P_{\bm{s}}^{\mathrm{reach}} (\pi_{k+1}) \leq \alpha \quad \forall \bm{s} \in S^{\pi_k} (\alpha)
\end{equation}
for $k=0, 1, \ldots$.
This constraint ensures that
\[
S^{\pi_k} (\alpha) \subseteq S^{\pi_{k+1}} (\alpha),
\]
that is, the probabilistic safe set (given $\alpha$) monotonically expands. We also use the constraint~\eqref{const} to perform \emph{safe exploration} to collect sample data by preserving safety in a probabilistic manner.
\section{Lyapunov-Based Safe Reinforcement Learning for Safety Specification}\label{sec:method}
To determine the set of safe policies that satisfy \eqref{const}, we adopt the Lyapunov function proposed in \cite{chow2018lyapunov} and enhance the approach to incentivize the agent to explore the state space efficiently.
Throughout the section, we assume that every terminal state lies in $S^{\ast}(\alpha)$ and that the agent arrives at a terminal state within a finite period.
Thus, there exists an integer $m$ such that $\mathbb{P}^\pi (s_{m}\in \mathcal{S}_{\mathrm{term}} \mid s_{0}=\bm{s}) > 0$ $\forall \bm{s}\in \mathcal{S}, \forall \pi\in\Pi$.
In Section~\ref{sec:lyapunov_approach} and~\ref{sec:ess}, the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are assumed to be finite. This assumption will be relaxed when discussing the deep RL version in Section~\ref{sec:deeprl}.
Let $\mathcal{T}_{d}^{\pi}$ denote the stationary Bellman operator for the cost function $d( \bm{s}, \bm{x}) := \bm{x}\mathbf{1}_{\mathcal{G}}(\bm{s})$
\begin{equation} \nonumber
\begin{split}
&(\mathcal{T}_{d}^{\pi} V)( \bm{s}, \bm{x}) := d(\bm{s}, \bm{x}) \\
&+ \sum_{\bm{a} \in A}\pi( \bm{a}| \bm{s}, \bm{x})\sum_{\bm{s}' \in S} p(\bm{s}'| \bm{s}, \bm{a}) V(\bm{s}',\bm{x}\mathbf{1}_{\mathcal{G}^{c}}(\bm{s}))
\end{split}
\end{equation}
for all $(\bm{s}, \bm{x}) \in \mathcal{S}' \times \{0,1\}$, and
\[
(\mathcal{T}_{d}^{\pi}V)( \bm{s}, \bm{x}) := 0
\]
for all $(\bm{s}, \bm{x}) \in \mathcal{S}_{\mathrm{term}} \times \{0,1\}$.
Note that $\mathcal{T}_d^\pi$ is an $m$-stage contraction with respect to $\| \cdot \|_\infty$ on the space of functions over $\mathcal{S}' \times \{0,1\}$.
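To make the fixed-point behavior of $\mathcal{T}_{d}^{\pi}$ concrete, the following sketch iterates the operator on a toy four-state chain. The MDP, its transition kernel, and the policy are our own illustrative assumptions, not taken from this work, and $\bm{x}$ is fixed to 1 so that the iterate recovers the probability of unsafety.

```python
import numpy as np

# Toy four-state chain (illustrative assumption, not from the paper):
# state 0 is terminal, state 3 is the target (unsafe) set G, and a fixed
# policy induces the transition kernel P. We keep x fixed at 1.
G, TERM = {3}, {0}
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # terminal state absorbs
    [0.7, 0.0, 0.3, 0.0],   # interior: mostly drifts to the terminal state
    [0.0, 0.4, 0.0, 0.6],   # interior: risky, often pushed into G
    [0.0, 0.0, 0.0, 1.0],   # target set absorbs
])

def bellman_reach(V):
    """One application of T_d^pi with cost d(s) = 1_G(s) and x = 1."""
    out = np.empty_like(V)
    for s in range(len(V)):
        if s in TERM:
            out[s] = 0.0            # zero cost after termination
        elif s in G:
            out[s] = 1.0            # unit cost incurred on hitting G
        else:
            out[s] = P[s] @ V       # expected continuation value
    return out

V = np.ones(4)                      # pessimistic initialization
for _ in range(200):                # m-stage contraction -> convergence
    V = bellman_reach(V)
# V[s] now equals the probability of unsafety under the fixed policy
```

Iterating from any bounded initialization converges to the same fixed point by the contraction property.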
\subsection{Lyapunov Safety Specification}\label{sec:lyapunov_approach}
We adopt the following definition of Lyapunov functions, proposed in~\cite{chow2018lyapunov}:
\begin{definition}
A function $L:S\times\{0,1\}\mapsto[0,1]$ is said to be a \emph{Lyapunov function} with respect to a Markov policy $\pi$ if it satisfies the following conditions:
\begin{subequations}
\begin{align}
( \mathcal{T}_{d}^{\pi} L)( \bm{s}, \bm{x}) &\leq L(\bm{s}, \bm{x}) \quad
\forall (\bm{s}, \bm{x}) \in \mathcal{S}\times\{0,1\} \label{condition:lyapunov}
\\
L(\bm{s},1) &\leq \alpha
\quad \forall \bm{s} \in S_{0}, \label{condition:safety}
\end{align}
\end{subequations}
where $S_{0}$ is a given subset of $S^{\ast}(\alpha)$ and $d(\bm{s},\bm{x}) := \bm{x} \mathbf{1}_{\mathcal{G}}(\bm{s})$.
\end{definition}
Inequalities (\ref{condition:lyapunov}) and (\ref{condition:safety}) are called the \emph{Lyapunov condition} and the \emph{safety condition}, respectively.
We can show that if an arbitrary policy $\tilde{\pi}$ satisfies the Lyapunov condition, then the probability of unsafety at $S_{0}$ does not exceed the threshold $\alpha$.
To see this, we recursively apply $\mathcal{T}_d^{\tilde{\pi}}$ on both sides of \eqref{condition:safety} and use \eqref{condition:lyapunov} and the monotonicity of $\mathcal{T}_d^{\tilde{\pi}}$ to obtain that, for any $\bm{s}\in S_{0}$,
\begin{align}\label{eqn:monotonicity}
&\alpha \geq L(\bm{s},1) \geq ( \mathcal{T}_{d}^{\tilde{\pi}} L)(\bm{s},1) \geq ( ( \mathcal{T}_{d}^{\tilde{\pi}} )^{2} L)(\bm{s},1) \geq \cdots.
\end{align}
Due to the $m$-stage contraction property, $\left( \mathcal{T}_{d}^{\tilde{\pi}} \right)^{m}$ has a unique fixed point that corresponds to the probability of unsafety, $P_{\bm{s}}^{\mathrm{reach}} (\tilde{\pi}) = V^{\tilde{\pi}} (\bm{s}, 1)$, under $\tilde{\pi}$.
Therefore, by the Banach fixed point theorem, we have
\begin{align}\label{eqn:contractivity}
&\alpha \geq \lim_{k\rightarrow\infty} \left( ( \mathcal{T}_{d}^{\tilde{\pi}})^{km} L \right )(\bm{s},1) = V^{\tilde{\pi}}(\bm{s},1)
\quad \forall \bm{s}\in S_{0}.
\end{align}
Given a Lyapunov function $L$, consider the set $\{\tilde{\pi} \mid (\mathcal{T}_{d}^{\tilde{\pi}} L)(\bm{s},1) \leq \alpha \; \forall \bm{s} \in S_0 \}$.
Then, any policy $\tilde{\pi}$ in this set satisfies the probabilistic safety condition $P_{\bm{s}}^{\mathrm{reach}}(\tilde{\pi})\leq\alpha$ for all $\bm{s}\in S_{0}$ by \eqref{eqn:contractivity}.
Thus, when $S_0$ is chosen as ${S}^{\pi_k} (\alpha)$, the safety constraint \eqref{const} is satisfied.
This set of safe policies is called the \emph{L-induced policy set}.
We can now introduce the Lyapunov safety specification method.
For iteration $k$, we construct the Lyapunov function $L_{k}$ by using the current policy $\pi_{k}$ and update the policy to $\pi_{k+1}$ taken from the $L_{k}$-induced policy set.
Specifically, we set
\begin{equation*}
L_{k}(\bm{s}, \bm{x}) := \mathbb{E}^{\pi_{k}} \left[ \sum_{t=0}^{T^{\ast}-1} (d+\epsilon_{k})(s_{t},x_{t}) \mid (s_{0},x_{0})=(\bm{s}, \bm{x}) \right],
\end{equation*}
where $\epsilon_{k}:\mathcal{S}\times\{0,1\}\mapsto\mathbb{R}_{\geq 0}$ is an auxiliary cost function.
Following the cost-shaping method of \cite{chow2018lyapunov}, we define the auxiliary cost as the function
\[
\epsilon_{k}(\bm{x}) := \bm{x} \cdot \min_{\bm{s} \in S_{0}} \; \frac{\alpha - V^{\pi_{k}}(\bm{s},1)}{T^{\pi_{k}}(\bm{s},1)},
\]
where $T^{\pi_{k}}(\bm{s},\bm{x})$ is the expected time for an agent to reach $\mathcal{G}$ or $\mathcal{S}_{\mathrm{term}}$ for the first time under policy $\pi_{k}$ and initial state $(\bm{s},\bm{x})$.
We refer to $T^{\pi_{k}}(\bm{s},1)$ as the \emph{first-hitting time} for the rest of this article.
It is straightforward to check that the Lyapunov condition~\eqref{condition:lyapunov} is satisfied with $L_k$.
Furthermore, the function $L_{k}$ satisfies the safety condition \eqref{condition:safety} because, for all $\bm{s} \in S_{0}$,
\begin{equation*}
\begin{aligned}
L_{k}(\bm{s},1) &\leq V^{\pi_{k}}(\bm{s},1) + \epsilon_{k}(1)T^{\pi_k}(\bm{s},1)
\\
&\leq V^{\pi_{k}}(\bm{s},1) + T^{\pi_k}(\bm{s},1)\cdot\frac{\alpha - V^{\pi_{k}}(\bm{s},1)}{T^{\pi_k}(\bm{s},1)} \leq \alpha.
\end{aligned}
\end{equation*}
Therefore, $L_k$ is a Lyapunov function.
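As a sanity check on the cost-shaping construction above, the following sketch solves for $V^{\pi_k}$ and the first-hitting time $T^{\pi_k}$ on a toy chain (the kernel, $\alpha$, and $S_0$ are our own placeholder assumptions) and verifies that $L_k = V^{\pi_k} + \epsilon_k T^{\pi_k}$ satisfies the safety condition on $S_0$.

```python
import numpy as np

# Toy chain (illustrative assumption): state 0 terminal, state 3 in G.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.7, 0.0, 0.3, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.0, 0.0, 1.0],
])
interior = [1, 2]
A = P[np.ix_(interior, interior)]

# Probability of unsafety: V = A V + P(. -> G) on interior states.
V = np.linalg.solve(np.eye(2) - A, P[np.ix_(interior, [3])].ravel())
# Expected first-hitting time of G or the terminal set: T = 1 + A T.
T = np.linalg.solve(np.eye(2) - A, np.ones(2))

alpha, S0 = 0.3, [0]            # S0 = {state 1}, index 0 within `interior`
eps = min((alpha - V[i]) / T[i] for i in S0)   # cost-shaping term
L = V + eps * T                 # L_k on the interior states
# By construction, L[i] = V[i] + eps * T[i] <= alpha for every i in S0.
```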
In the policy improvement step, we select $\pi_{k+1}$ from the $L_k$-induced policy set so the updated policy is both safe and has an expanded probabilistic safe set.
\begin{proposition}\label{prop1}
Suppose that $\pi_{k+1}$ is chosen from $\{ {\pi} \mid (\mathcal{T}_{d}^{{\pi}} L_{k})(\bm{s},1) \leq \alpha \; \forall \bm{s} \in S^{\pi_k} (\alpha) \}$.
Then, we have
\[
P_{\bm{s}}^{\mathrm{reach}} (\pi_{k+1}) \leq \alpha \quad \forall \bm{s} \in S^{\pi_k} (\alpha),
\]
and
\[
S^{\pi_k} (\alpha) \subseteq S^{\pi_{k+1}} (\alpha).
\]
\end{proposition}
\begin{proof}
The probabilistic safety of $\pi_{k+1}$ follows from \eqref{eqn:contractivity}.
This also implies that for an arbitrary $\bm{s} \in S^{\pi_k} (\alpha)$, we have $\bm{s} \in S^{\pi_{k+1}} (\alpha)$. Therefore, the result follows.
\end{proof}
To achieve the minimal probability of unsafety, we choose $\pi_{k+1}$ as the ``safest'' one in the $L_{k}$-induced policy set, that is,
\begin{equation}\label{eqn:lyapunov_policy_improvement}
\begin{split}
&\pi_{k+1}(\cdot|
\bm{s})\\
& \in \argmin_{\pi(\cdot| \bm{s} )} \{ ( \mathcal{T}_{d}^{\pi} V_{k} )(\bm{s}, 1) \mid (\mathcal{T}_{d}^{\pi} L_{k} ) (\bm{s}, 1) \leq L_{k}(\bm{s}, 1) \}.
\end{split}
\end{equation}
Note that the value of the Lyapunov function is 0 at $\bm{x}=0$, so we need not compute a policy for $\bm{x} = 0$.
As the MDP model is unknown, we approximate the value function of a policy using sample trajectories.
We also use Q-learning to obtain a reliable estimate of state-action value functions.
Let $Q_{V}$ and $Q_{T}$ denote the Q-functions for the probability of unsafety and a first-hitting time, respectively.
Given $(s_t, a_t, s_{t+1})$ obtained by executing $\pi_k$, the Q-functions are updated as follows:
\begin{equation}\label{eqn:qlearning_value_computation}
\begin{split}
&\begin{aligned}
Q_{V}(s_{t},a_{t}) \leftarrow \mathbf{1}_{\mathcal{G}}(s_{t}) + \mathbf{1}_{\mathcal{G}^{c}}(s_{t}) \bigg [ (1-\tau_{l}) Q_{V}(s_{t},a_{t})&
\\
+ \tau_{l} \sum_{\bm{a} \in A}\pi_{k}( \bm{a}|s_{t+1})Q_{V}(s_{t+1}, \bm{a}) \bigg ]&
\end{aligned}
\\
&\begin{aligned}
Q_{T}(s_{t},a_{t}) \leftarrow \mathbf{1}_{\mathcal{G}^c}(s_{t}) \bigg[&
\tau_{l} \bigg( 1 + \sum_{\bm{a} \in A}\pi_{k}(\bm{a}|s_{t+1})Q_{T}(s_{t+1}, \bm{a}) \bigg)
\\
&+ (1-\tau_{l}) Q_{T}(s_{t},a_{t}) \bigg ],
\end{aligned}
\end{split}
\end{equation}
where $\tau_{l}(\bm{s}, \bm{a})$ is the learning rate satisfying $\sum_{l}\tau_{l}(\bm{s}, \bm{a})=\infty$ and $\sum_{l}\tau_{l}^{2}(\bm{s}, \bm{a})<\infty$.
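The tabular updates above can be sketched as a single-transition routine; the table layout and the uniform next-state policy in the usage example are our own placeholder assumptions.

```python
import numpy as np

def update_q(QV, QT, s, a, s_next, in_G, pi_next, tau):
    """One (s_t, a_t, s_{t+1}) update of the unsafety Q-table QV and the
    first-hitting-time Q-table QT, following the sketched update rules."""
    v_next = pi_next @ QV[s_next]      # E_{a' ~ pi_k}[Q_V(s', a')]
    t_next = pi_next @ QT[s_next]      # E_{a' ~ pi_k}[Q_T(s', a')]
    if in_G:                           # s_t is in the target set
        QV[s, a] = 1.0                 # unit cost, episode effectively ends
        QT[s, a] = 0.0                 # the first-hitting time has elapsed
    else:
        QV[s, a] = (1 - tau) * QV[s, a] + tau * v_next
        QT[s, a] = (1 - tau) * QT[s, a] + tau * (1.0 + t_next)
```

For example, a transition starting inside the target set pins $Q_{V}$ to 1 and $Q_{T}$ to 0 regardless of the learning rate, while interior transitions move both tables toward their bootstrapped targets at rate $\tau$.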
We can also rewrite \eqref{eqn:lyapunov_policy_improvement} as the following linear program associated with Q-functions:
\begin{equation}\label{eqn:qlearning_policy_improvement}
\begin{aligned}
\min_{\pi(\cdot | \bm{s})} \;& \sum_{\bm{a} \in \mathcal{A}} \pi(\bm{a} | \bm{s}) Q_{V,k}(\bm{s}, \bm{a})
\\
\mathrm{s.t.} \; & \sum_{\bm{a} \in \mathcal{A}} Q_{L,k}(\bm{s}, \bm{a}) (\pi(\bm{a} | \bm{s}) - \pi_{k}(\bm{a} | \bm{s})) \leq \epsilon_{k},
\end{aligned}
\end{equation}
where $Q_{L,k}$ is the Q-value of the Lyapunov function, given by $Q_{L,k}(\bm{s}, \bm{a})=Q_{V,k}(\bm{s}, \bm{a}) + \epsilon_{k}(1) Q_{T,k}(\bm{s}, \bm{a})$, and $\epsilon_{k}$ is shorthand for $\epsilon_{k}(1)$.
The policy $\pi_{k+1}(\cdot|\bm{s})$ is then updated as the optimal solution of the linear program \eqref{eqn:qlearning_policy_improvement}.
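Because the decision variable is a distribution over finitely many actions subject to one extra linear constraint, an optimum of this linear program puts positive mass on at most two actions. A standalone sketch (the Q-values in the usage example are placeholders, not from any experiment) can therefore enumerate the vertices of the feasible polytope directly instead of calling a general LP solver.

```python
import itertools
import numpy as np

def safe_policy_lp(qV, qL, pi_k, eps):
    """Solve min pi . qV  s.t.  pi in the simplex and qL . (pi - pi_k) <= eps,
    by enumerating the vertices of the feasible polytope: single-action
    vertices inside the halfspace, plus two-action mixtures where the
    safety constraint is tight."""
    n = len(qV)
    budget = qL @ pi_k + eps
    best, best_pi = None, None
    # single-action vertices of the simplex that satisfy the constraint
    for a in range(n):
        if qL[a] <= budget + 1e-12:
            if best is None or qV[a] < best:
                best, best_pi = qV[a], np.eye(n)[a]
    # two-action vertices where the safety constraint holds with equality
    for a, b in itertools.combinations(range(n), 2):
        denom = qL[a] - qL[b]
        if abs(denom) < 1e-12:
            continue
        lam = (budget - qL[b]) / denom       # mass placed on action a
        if 0.0 <= lam <= 1.0:
            val = lam * qV[a] + (1 - lam) * qV[b]
            if best is None or val < best - 1e-12:
                pi = np.zeros(n)
                pi[a], pi[b] = lam, 1 - lam
                best, best_pi = val, pi
    return best_pi
```

Since $\pi_{k}$ itself is always feasible (the budget includes $\epsilon_{k} \geq 0$), the polytope is nonempty and this enumeration always returns a solution.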
Combining the policy evaluation and the policy improvement steps of Q-functions, we construct the \emph{Lyapunov safety specification} (LSS) as described in Algorithm~\ref{alg:tabular_lss}.
The convergence property of Q-learning in finite-state, finite-action space is well studied in \cite{tsitsiklis1994asynchronous}, so we omit the theoretical details here.
Under the standard convergence condition for Q-learning, the algorithm obtains a sequence of policies that satisfy Proposition \ref{prop1}.
\begin{algorithm}[t]
\caption{LSS Q-Learning}
\label{alg:tabular_lss}
\begin{algorithmic}[1]
\REQUIRE Tolerance for unsafety $\alpha \in (0,1)$,\\ baseline policy $\pi_{\mathrm{base}}$;
\STATE Set initial policy $\pi_{0}$ as $\pi_{\mathrm{base}}$;
\FOR{each iteration $k$}
\FOR{each environment step $l$}
\STATE $a_{t} \sim \pi_{k}(\cdot|s_{t})$
\STATE Get $s_{t+1} \sim p(\cdot|s_{t},a_{t})$ and $\mathbf{1}_{\mathcal{G}}(s_{t})$;
\STATE Update $Q_{V}(s_{t},a_{t})$, $Q_{T}(s_{t},a_{t})$ as (\ref{eqn:qlearning_value_computation});
\STATE Reset the environment if $\mathbf{1}_{\mathcal{G}}(s_{t})=1$;
\ENDFOR
\STATE Update $\pi_{k+1}( \cdot | \bm{s})$ by solving \eqref{eqn:qlearning_policy_improvement} for each $\bm{s}$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Efficient Safe Exploration}\label{sec:ess}
In this subsection, we develop a novel method for safe exploration to efficiently solve a probabilistic safety specification problem.
We can utilize the Lyapunov constraint to construct a policy that takes potentially dangerous actions with adequate probability while still assuring safe navigation.
Our motivation comes from the observation that if a state is falsely assumed to have a high probability of unsafety, the misconception is unlikely to be corrected without taking exploratory actions.
Consider the table of Q-value estimates used in the LSS algorithm.
The Q-learning agent is initialized from a blank slate, so it is a safe choice to assume that all unvisited states evolve into the target set with high probability.
As a result, the safe policy derived from the algorithm tends to confine an agent inside the current safe set.
With enough time, the Q-value table becomes accurate at all states, but this is unattainable in practice.
Therefore, it is crucial to explore the unidentified states, and this process involves visiting the exterior of the safe set.
In this regard, we choose the exploratory policy to be the most aggressive among the set of policies that guarantee safety in the safe set.
In other words, the probability of unsafety of the exploratory policy in the safe set is only marginally below the tolerance.
As $S^{\pi_{s}}(\alpha)$ contains no element of $\mathcal{G}$, such a policy is likely to bring an agent outside the safe set.
The exploratory policy is particularly effective when used with an experience replay, whose state distribution may diverge from the true distribution due to the scarcity of samples obtained in the exterior of the safe set.
Our exploratory policy can mitigate the approximation error due to the discrepancy.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.9\columnwidth}
\centering
\includegraphics[width=\linewidth]{safe-exploration-example-1.pdf}
\end{subfigure}
\hfil
\centering
\begin{subfigure}[b]{.9\columnwidth}
\centering
\includegraphics[width=\linewidth]{safe-exploration-example-2.pdf}
\end{subfigure}
\caption{An example of safe exploration on a one-dimensional grid world. The confidence level is set to $0.9$. Boxes represent states, and arrows toward the left or right symbolize the policies at each state. Unexamined states are shaded; the gray one is not in the target set, but it is considered unsafe. Choosing the policy at $\bm{s}_{c}$ allows an agent to explore toward $\bm{s}_{d}$ (top). As the RL agent successfully returns to the safe set after visiting $\bm{s}_{d}$ with high probability, $\bm{s}_{d}$ is added to the safe set (bottom).}
\label{fig:ess_example}
\end{figure}
To illustrate our idea, we show a one-dimensional (1D) grid world consisting of five states $\bm{s}_{a}, \dots, \bm{s}_{e}$ and two actions $(\mathrm{left}, \mathrm{right})$ as in Fig. \ref{fig:ess_example}.
We know from experience that moving to the left at $\bm{s}_{a},\dots, \bm{s}_{c}$ guarantees 100\% safety.
The states $\bm{s}_{d}$ and $\bm{s}_{e}$ have not been visited yet, so the probabilities of unsafety at those states are estimated as 1.
Suppose the agent at $\bm{s}_{c}$ follows a policy $\pi$ that moves left or right with probabilities $(1-\alpha, \alpha)$.
The probability of unsafety of $\pi$ is then no more than $\alpha$ because an agent never reaches $\bm{s}_{d}$ or $\bm{s}_{e}$ with probability $1-\alpha$.
Also, if an agent successfully reaches $\bm{s}_{d}$ or $\bm{s}_{e}$ and returns safely, we obtain an accurate estimate of the probability of unsafety and expand the safe set.
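The arithmetic behind this example is a one-liner; the sketch below uses the pessimistic value estimates assumed in the figure to confirm that the mixing policy at $\bm{s}_{c}$ meets the tolerance.

```python
# Pessimistic value estimates from the 1D grid world example: moving left
# from s_a..s_c is known to be 100% safe, while the unvisited states
# s_d, s_e are assigned unsafety probability 1.
alpha = 0.2
V = {'a': 0.0, 'b': 0.0, 'd': 1.0, 'e': 1.0}
# At s_c: move left (safe) w.p. 1 - alpha, move right (toward s_d) w.p. alpha.
V['c'] = (1 - alpha) * V['b'] + alpha * V['d']
# V['c'] equals alpha exactly, so the mixing policy meets the tolerance.
```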
\begin{algorithm}[!t]
\caption{ESS Q-Learning}
\label{alg:tabular_ess}
\begin{algorithmic}
\REQUIRE Tolerance for unsafety $\alpha \in (0,1)$, \\baseline policy $\pi_{\mathrm{base}}$;
\STATE Set $\pi_{s,0} \leftarrow \pi_{\mathrm{base}}$ and $\pi_{e,0}\leftarrow \pi_{\mathrm{base}}$;
\FOR{each iteration $k$}
\FOR{each environment step $l$}
\STATE $a_{t} \sim \pi_{e,k}(\cdot|s_{t})$;
\STATE Get $s_{t+1} \sim p(\cdot|s_{t},a_{t})$ and $\mathbf{1}_{\mathcal{G}}(s_{t})$;
\STATE Update $Q_{V}(s_{t},a_{t})$, $Q_{T}(s_{t},a_{t})$ as \eqref{eqn:qlearning_value_computation};
\STATE Reset environment if $\mathbf{1}_{\mathcal{G}}(s_{t})=1$;
\ENDFOR
\STATE Set $\pi_{s,k+1}(\cdot| \bm{s})$ by solving \eqref{eqn:qlearning_policy_improvement} for each $\bm{s}$;
\STATE Set $\pi_{e,k+1}(\cdot| \bm{s})$ by solving \eqref{eqn:exploratory_policy_improvement} for each $\bm{s}$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
A policy suitable for exploration is not usually the safest policy; therefore, we separate the \emph{exploratory policy} $\pi_{e}$ from the policy that constructs the safe set, which we call the \emph{safety specification policy} (SS-policy) $\pi_{s}$.
Unlike the SS-policy, the exploratory policy drives an agent around the boundary of the safe set.
To construct $\pi_e$ in a formal way, we exploit a given $\pi_s$ and the Lyapunov function $L$ defined as in Section \ref{sec:lyapunov_approach}.
First, consider the following policy optimization problem:
\begin{equation}\label{def:exploratory_policy_optimization}
\begin{aligned}
\max_{\pi\in\Pi}~ & V^{\pi}(s_0,1)\\
\mathrm{s.t.}~ & (\mathcal{T}_{d}^{\pi} L)(\bm{s}, \bm{x}) \leq L(\bm{s}, \bm{x}) \quad \forall (\bm{s}, \bm{x}) \in \mathcal{S}\times\{0,1\},
\end{aligned}
\end{equation}
where $s_0$ is an initial state.
Note that this is the auxiliary problem merely to construct the exploratory policy with no connection to the original problem \eqref{opt}.
As stated above, the exploratory policy should preserve safety confidence in the safe set under the SS-policy, that is, $V^{\pi_{e}}(\bm{s},1) \leq \alpha,~\forall \bm{s} \in S^{\pi_{s}}(\alpha)$.
The solution of \eqref{def:exploratory_policy_optimization} satisfies this condition because of the Lyapunov constraint, but it can be suboptimal because the constraint in \eqref{def:exploratory_policy_optimization} is stronger than the original.
However, by using the Lyapunov constraints, we can enjoy the benefit of using dynamic programming to solve \eqref{def:exploratory_policy_optimization}.
\begin{proposition}\label{prop2}
Let $L$ be the Lyapunov function stated in \eqref{def:exploratory_policy_optimization}.
An optimal solution of \eqref{def:exploratory_policy_optimization} can be obtained by the value iteration using the Bellman operator
\begin{equation}\nonumber
\begin{split}
&(\mathcal{T}_{\mathrm{exp}} V)(\bm{s}, \bm{x})\\
& := \max_{\pi(\cdot|\bm{s})} \{ (\mathcal{T}_{d}^{\pi} V)(\bm{s}, \bm{x}) \mid (\mathcal{T}_{d}^{\pi} L)(\bm{s}, \bm{x}) \leq L(\bm{s}, \bm{x}) \}.
\end{split}
\end{equation}
Specifically, the value function that satisfies $\mathcal{T}_{\mathrm{exp}} V = V$ is the probability of unsafety under such a policy.
\end{proposition}
\begin{proof}
The operator $\mathcal{T}_{\mathrm{exp}}$ is a special form of the safe Bellman operator defined in \cite{chow2018lyapunov}, which is a monotone contraction mapping by Proposition 3 in \cite{chow2018lyapunov}.
Thus, there exists a unique fixed point of $\mathcal{T}_{\mathrm{exp}}$.
By the definition of the operator, the fixed point corresponds to a policy that solves problem \eqref{def:exploratory_policy_optimization}.
\end{proof}
As Proposition \ref{prop2} certifies, we can perform the Bellman operation on $V^{\pi_{s}}$ iteratively to obtain $\pi_{e}$, which is the solution of \eqref{def:exploratory_policy_optimization}.
However, in the RL domain, it is difficult to reproduce the whole dynamic programming procedure, since each Bellman operation corresponds to a time-consuming Q-value computation.
We thus apply the Bellman operation once to obtain $\pi_{e}(\cdot|\bm{s})$ at iteration number $k$ as
\begin{equation}\label{eqn:exploratory_policy_improvement}
\mathrm{arg}\max_{\pi(\cdot| \bm{s})} \{ (\mathcal{T}_{d}^{\pi} V_{k})(\bm{s},1) \mid (\mathcal{T}_{d}^{\pi} L_{k})(\bm{s}, \bm{x}) \leq L_{k}(\bm{s}, \bm{x}) \}.
\end{equation}
To sum up, we add an exploratory policy to LSS to obtain the \emph{exploratory LSS} (ESS), as Algorithm~\ref{alg:tabular_ess}.
\subsection{Deep RL Implementation}\label{sec:deeprl}
Each policy improvement stage in Algorithm \ref{alg:tabular_lss} or \ref{alg:tabular_ess} solves a linear program.
This operation is not straightforward for nontabular implementations. Thus, we provide adaptations of the LSS and ESS for parametrized policies, such as neural networks.
To apply our approach to high-dimensional environments in this section, we assume that the state and action spaces are continuous, which is the general setup in policy gradient (PG) algorithms.
Suppose a generic policy is parameterized with $\theta$, and we rewrite the policy improvement step of the LSS as
\begin{equation}\label{eqn:parameterized_policy_improvement}
\begin{aligned}
&\max_{\theta} \int_{\mathcal{A}} - Q_{V}(\bm{s}, \bm{a}) \pi_{\theta}(\bm{a} |\bm{s}) \: \mathrm{d}\bm{a} \quad \mathrm{subject~to}\\
&\int_{\mathcal{A}} Q_{L}(\bm{s}, \bm{a}) \left( \pi_{\theta}(\bm{a}| \bm{s}) - \pi_{s}(\bm{a}| \bm{s})\right) \mathrm{d}\bm{a} \leq \epsilon \quad \forall \bm{s}\in \mathcal{S},
\end{aligned}
\end{equation}
where $\pi_{s}$ is the current SS-policy and $Q_{V}$, $Q_{L}$, and $\epsilon$ are the values defined as in the previous section with respect to $\pi_{s}$.
We use Lagrangian relaxation \cite{Bertsekas1999} to form an unconstrained problem.
Ignoring the constraints, the PG minimizes a single objective $\mathbb{E}_{s,a\sim\pi}[Q_{V}(s,a)]$.
The Lyapunov condition is state-wise, so the number of constraints is the same as $|\mathcal{S}|$.
We can replace the constraints with a single one $\max_{\bm{s} \in \mathcal{S}} \int_{\mathcal{A}} Q_{L}(\bm{s},\bm{a}) \left( \pi_{\theta}(\bm{a}|\bm{s}) - \pi_{s}(\bm{a}|\bm{s})\right) \mathrm{d}\bm{a} - \epsilon \leq 0$.
However, one drawback of this formulation is that the Lagrangian multiplier of the max-constraint places excessive weight on the constraint.
In practice, the LHS of this max-constraint is likely greater than 0 due to the parameterization errors, resulting in the monotonic increase of the Lagrangian multiplier throughout learning.
Therefore, we adopt state-dependent Lagrangian multipliers to have
\begin{equation}\label{eqn:unconstrained_problem}
\begin{aligned}
&\min_{\lambda \geq 0} \max_{\theta} \mathbb{E}_{s\sim\rho_{\theta}} \big[ \mathbb{E}_{a\sim\pi_{\theta}}[-Q_{V}(s,a)]\\
& - \lambda(s) \left(\mathbb{E}_{a\sim\pi_{\theta}}[Q_{L}(s,a)] - \mathbb{E}_{a\sim\pi_{s}}[Q_{L}(s,a)] - \epsilon \right) \big],
\end{aligned}
\end{equation}
where $\lambda(s)$ is the Lagrangian multiplier at state $s$, and $\rho_{\theta}$ is the discounted state-visiting probability of $\pi_{\theta}$.
We can assume that nearby states have similar $\lambda(s)$. Thus, we can parameterize $\lambda(s)$ as a critic model, as in \cite{bohez2019value}.
Throughout this section, we denote by $\omega$ the parameter of $\lambda$.
Our goal is to find the saddle point of \eqref{eqn:unconstrained_problem}, which is a feasible solution of the original problem \eqref{eqn:parameterized_policy_improvement}.
We apply the gradient descent (ascent) to optimize $\theta$ and $\omega$.
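To illustrate the descent-ascent mechanics in isolation, consider a one-dimensional toy saddle problem (entirely our own example, unrelated to the RL objective): maximize $-\theta^{2}$ subject to $\theta \geq 1$, whose saddle point is $\theta^{\ast}=1$, $\lambda^{\ast}=2$.

```python
# Primal-dual gradient iteration on L(theta, lam) = -theta^2 - lam * (1 - theta):
# gradient ascent in the primal variable theta, and ascent on the constraint
# violation for the multiplier, projected to stay nonnegative.
theta, lam, eta = 0.0, 0.0, 0.05
for _ in range(5000):
    theta += eta * (-2.0 * theta + lam)          # d/dtheta of the Lagrangian
    lam = max(0.0, lam + eta * (1.0 - theta))    # multiplier tracks violation
# the iterates converge to the saddle point (theta, lam) = (1, 2)
```

The multiplier grows while the constraint is violated and settles once the primal iterate reaches the constraint boundary, which is the same mechanism driving the state-dependent multipliers $\lambda_{\omega}(s)$.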
The Q-values that comprise the Lagrangian are, by definition, the functions of the policy parameter $\theta$, but since we incorporate the actor-critic framework, the Q-functions are approximated with critics independent of $\theta$.
In this regard, we obtain the update rules for the safety specification-actor (SS-actor) and the Lagrangian multiplier associated with it as follows:
\begin{subequations}
\begin{align}
&\theta_{s} \leftarrow \theta_{s} - \eta_{\theta}\nabla_{\theta} \left( Q_{V}(s_{t}, {a}_{t}) + \lambda_{\omega_{s}}(s_{t}) Q_{L}(s_{t}, {a}_{t}) \right),
\label{eqn:actor_update}
\\
&\begin{aligned}
\omega_{s} \leftarrow \omega_{s} + \eta_{\omega} \nabla_{\omega}\lambda_{\omega_{s}}(s_{t}) \big( & Q_{L}(s_{t}, {a}_{t}) - \epsilon \\ &- Q_{L}(s_{t}, {a}_{\mathrm{old},t}) \big),
\end{aligned}
\label{eqn:lagrangian_update}
\end{align}
\end{subequations}
where ${a}_{t}\sim\pi_{\theta_{s}}(s_{t})$ and ${a}_{\mathrm{old},t}$ denotes the sampled action from the policy parametrized with the old $\theta_{s}$.
We apply the same approach to improve the exploratory actor.
The unconstrained problem is similar to \eqref{eqn:unconstrained_problem} except for the opposite sign of the primal objective, so we have
\begin{subequations}
\begin{align}
&\theta_{e} \leftarrow \theta_{e} + \eta_{\theta}\nabla_{\theta} \left( Q_{V}(s_{t}, {a}_{t}) - \lambda_{\omega_{e}}(s_{t}) Q_{L}(s_{t}, {a}_{t}) \right)
\label{eqn:exploratory_actor_update}
\\
&\begin{aligned}
\omega_{e} \leftarrow \omega_{e} + \eta_{\omega} \nabla_{\omega}\lambda_{\omega_{e}}(s_{t}) \big( & Q_{L}(s_{t}, {a}_{\mathrm{exp},t}) - \epsilon \\ &- Q_{L}(s_{t}, {a}_{t}) \big),
\end{aligned}
\label{eqn:exploratory_lagrangian_update}
\end{align}
\end{subequations}
where ${a}_{\mathrm{exp},t}\sim\pi_{\theta_{e}}(s_{t})$, ${a}_{t}\sim\pi_{\theta_{s}}(s_{t})$.
In addition, the critic parameters are optimized to minimize the Bellman residual.
The scheme is analogous to the Q-learning version in \eqref{eqn:qlearning_value_computation}, but in this case, we introduce a discount factor $\gamma$.
Recall that the Lyapunov Q-function is a weighted sum of the two Q-functions $Q_{V}$ and $Q_{T}$, one for a probability of unsafety and the other for a first-hitting time, respectively.
Letting $\phi$ and $\psi$ represent the parameters of $Q_{V}$ and $Q_{T}$, the targets for the critics $Q_{\phi}$ and $Q_{\psi}$ are defined as
\begin{align*}
&y_{V} := \mathbf{1}_{\mathcal{G}}(s_{t}) + \mathbf{1}_{\mathcal{G}^{c}}(s_{t})\gamma Q_{\phi'}(s_{t+1}, {a}_{t+1})
\\
&y_{T} := \mathbf{1}_{\mathcal{G}^{c}}(s_{t})(1 + \gamma Q_{\psi'}(s_{t+1}, {a}_{t+1})),
\end{align*}
where ${a}_{t+1}$ is the action sampled from $\pi_{\theta_{s}'}(s_{t+1})$.
The proposed actor-critic algorithm is summarized in Algorithm~\ref{alg:actor_critic_lyapunov}.
In our experiments, we use the double Q-learning technique in \cite{hasselt2010double} to prevent the target $y_{V}$ from substantially overestimating the true probability of unsafety.
In this case, two critics have independent weights $\phi_{1}$, $\phi_{2}$, with two target critics corresponding to the respective critics.
That is, $Q_{\phi'}(s_{t+1}, {a}_{t+1})$ in $y_{V}$ is replaced with $\min_{j=1,2} Q_{\phi'_{j}}(s_{t+1},{a}_{t+1})$,
where ${a}_{t+1} \sim \pi_{\theta_{s}'}(s_{t+1})$.
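The per-transition target computation, including the double-Q minimum for $y_{V}$, can be sketched as follows (the function layout is our own assumption):

```python
def critic_targets(in_G, q1_next, q2_next, t_next, gamma=0.99):
    """Targets for the unsafety critic (with the double-Q minimum) and the
    first-hitting-time critic, for one transition (s_t, a_t, s_{t+1})."""
    ind_G = 1.0 if in_G else 0.0
    # taking the min over the two target critics keeps y_V from
    # overestimating the true probability of unsafety
    y_V = ind_G + (1.0 - ind_G) * gamma * min(q1_next, q2_next)
    y_T = (1.0 - ind_G) * (1.0 + gamma * t_next)
    return y_V, y_T
```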
Moreover, we adjust the experience replay to alleviate errors in $Q_{V}$.
Catastrophic forgetting is the primary concern, since the target set should be precisely specified to obtain safe policies.
We fix the ratio of safe samples (\textit{i.e.}, $s_{t} \notin \mathcal{G}$) and unsafe samples (\textit{i.e.}, $s_{t} \in \mathcal{G}$) in a minibatch so that the value of $Q_{V}$ is 1 in the identified states of the target set.
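A minimal sketch of such a ratio-preserving sampler follows; the buffer layout and the 25\% unsafe fraction are our own assumptions.

```python
import random

def sample_fixed_ratio(safe_buf, unsafe_buf, batch_size, unsafe_frac=0.25):
    """Draw a minibatch with a fixed fraction of unsafe transitions
    (s_t in G) so the target set is not forgotten by the critic."""
    n_unsafe = min(int(batch_size * unsafe_frac), len(unsafe_buf))
    n_safe = min(batch_size - n_unsafe, len(safe_buf))
    batch = random.sample(unsafe_buf, n_unsafe) + random.sample(safe_buf, n_safe)
    random.shuffle(batch)
    return batch
```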
We explain the ancillary techniques in Section \ref{sec:deeprl_result}.
\begin{algorithm}[!t]
\caption{Actor-critic LSS (ESS)}
\label{alg:actor_critic_lyapunov}
\begin{algorithmic}
\REQUIRE Tolerance for unsafety $\alpha \in (0,1)$;
\STATE Initialize SS-actor/critics $\pi_{\theta_{s}}, Q_{\phi}, Q_{\psi}$ and Lagrangian multiplier $\lambda_{\omega_{s}}$;
\IF {ESS}
\STATE Initialize exploratory actor $\pi_{\theta_{e}}$ and Lagrangian multiplier $\lambda_{\omega_{e}}$;
\ENDIF
\STATE Initialize target networks:
$\theta_{s}'\leftarrow\theta_{s}$, $\psi'\leftarrow\psi$, $\phi'\leftarrow\phi$;
\FOR{each iteration $t$}
\FOR{each environment step}
\STATE $a_{t} \sim \pi_{\theta_{s}}(\cdot|s_{t})$ (Use $\pi_{\theta_{e}}$ if ESS);
\STATE $s_{t+1} \sim p(\cdot|s_{t},a_{t})$;
\STATE $\mathcal{D} \leftarrow \mathcal{D} \cup \{s_{t},a_{t},\mathbf{1}_{\mathcal{G}}(s_{t}),s_{t+1}\}$;
\STATE Reset environment if $\mathbf{1}_{\mathcal{G}}(s_{t})=1$;
\ENDFOR
\FOR{each gradient step}
\STATE Update $\phi$ by minimizing $\left(y_{V} - Q_{\phi}(s_{t}, a_{t}) \right)^{2}$;
\STATE Update $\psi$ by minimizing $\left(y_{T} - Q_{\psi}(s_{t}, a_{t}) \right)^{2}$;
\STATE Update $\theta_{s}$ as the solution of \eqref{eqn:actor_update};
\STATE Update $\omega_{s}$ as the solution of \eqref{eqn:lagrangian_update};
\IF {ESS}
\STATE Update $\theta_{e}$ as the solution of \eqref{eqn:exploratory_actor_update};
\STATE Update $\omega_{e}$ as the solution of \eqref{eqn:exploratory_lagrangian_update};
\ENDIF
\ENDFOR
\STATE Soft target update for SS-actor/critic: $\theta_{s}' \leftarrow \tau\theta_{s} + (1-\tau)\theta_{s}'$, $\phi' \leftarrow \tau \phi + (1-\tau)\phi'$, $\psi' \leftarrow \tau\psi+(1-\tau)\psi'$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Simulation Studies}
In this section, we demonstrate our safe learning and safety specification methods using simulated control tasks.
We test the validity of our approach in a simple double integrator and further verify our deep RL algorithms with the high-dimensional dynamic system introduced in \cite{duan2016benchmarking}, both of which have a tuple of positions and velocities as a state.
To make the environments congruous with our problem setting, a target set is defined as the exterior of a certain bounded region of the state space, a setup that enables the implementation of tabular Q-learning.
The description of environments, including the definition of the target sets, can be found in Appendix B.
In Section \ref{sec:method}, we stated the theoretical guarantees as follows.
First, Lyapunov-based methods obtain a subset of $S^{\ast}(\alpha)$.
Second, the improved safe set includes the current safe set.
Third, the agent ensures safety while running in the environment if the initial state is safe.
However, in practice, these guarantees cannot be strictly satisfied, since we determine a safe set with the approximated probability of unsafety.
To distinguish the obtainable safe set from the ideal one derived from the true MDP, we represent the estimate of the safe set under $\pi$ as
\[
\hat{S}^{\pi}(\alpha) := \Big\{ \bm{s} \in \mathcal{S} : \sum_{\bm{a} \in \mathcal{A}} \pi_{s}(\bm{a} | \bm{s})\hat{Q}_{V}(\bm{s},\bm{a}) \leq \alpha \Big\}.
\]
We introduce two metrics to quantify how close well-trained RL agents are to such guarantees.
Regarding the accuracy of safety specification, we inspect whether a safe set contains the elements of $S^{\ast}(\alpha)$ and whether it excludes the unreliable states in $S^{\ast}(\alpha)^{c}$.
We thus consider the \emph{ratio of correct specification}
\[
r_{\mathrm{c}} = |\hat{S}^{\pi}(\alpha) \cap S^{\ast}(\alpha)| / |S^{\ast}(\alpha)|,
\]
and the \emph{ratio of false-positive specification}
\[
r_{\mathrm{fp}} = |\hat{S}^{\pi}(\alpha) \cap S^{\ast}(\alpha)^{c}| / |\mathcal{S}|.
\]
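These two ratios reduce to simple set operations; a sketch follows, with states represented as hashable identifiers (an implementation assumption).

```python
def specification_metrics(S_hat, S_star, S_all):
    """Ratio of correct specification r_c and of false-positive
    specification r_fp, with the arguments given as Python sets."""
    r_c = len(S_hat & S_star) / len(S_star)
    r_fp = len(S_hat - S_star) / len(S_all)
    return r_c, r_fp
```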
We also verify safe exploration throughout learning by tracking the proportion of safely terminated episodes among the 100 latest episodes, which is denoted by the \emph{average episode safety} (AES).
A trajectory is considered safe if an agent reaches terminal states without visiting $\mathcal{G}$ or stays in $\mathcal{G}^{c}$ until the time limit.
Throughout our experiments, we set $\alpha = 0.2$, so the AES should be no less than $0.8$ to guarantee safe navigation. We also improve learning speed by introducing
a discount factor $\gamma < 1$, which is equivalent to assuming a per-step termination probability $p(s_{t+1}\in \mathcal{S}_{\mathrm{term}}|s_{t},a_{t}) = 1-\gamma$. As the key idea of our approach is the separation of the exploratory policy from the SS-policy, we set an unmodified RL method as the baseline;
that is, the baseline agents are trained to minimize the expected sum of $x_{t}\mathbf{1}_{\mathcal{G}}(s_{t})$.
\subsection{Tabular Q-Learning}
First, we evaluate our Lyapunov-based safety specification methods with tabular implementations.
For tabular Q-learning, we discretize a continuous action $a=(a_{1},\cdots,a_{\dim{\mathcal{A}}})$ into $A_{1},\cdots,A_{\dim{\mathcal{A}}}$ equal intervals for each element.
In other words, applying the $n$th action for $a_{m}$ is interpreted as $a_{m}=(a_{m,\mathrm{max}} - a_{m,\mathrm{min}})\frac{n}{A_{m}-1} + a_{m,\mathrm{min}}$.
Likewise, state space is represented as a finite-dimensional grid.
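The index-to-action mapping can be written directly; the bounds in the usage example are placeholders.

```python
def decode_action(n, A_m, a_min, a_max):
    """Map the n-th discrete action (0 <= n < A_m) back to a continuous
    value on [a_min, a_max], matching the grid discretization."""
    return (a_max - a_min) * n / (A_m - 1) + a_min
```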
Based on the MDP quantized as above, the true probability of safety is computed via dynamic programming.
We use a double integrator environment to test the tabular cases.
To reduce the size of $S^{\ast}(\alpha)$, we modify the integrator to perturb the input acceleration with a certain probability.
We compare LSS, ESS, and a baseline Q-learning with no extra techniques to shield unsafe actions.
We initialize the Q-function tables with random values uniformly sampled from the interval $[0.99,1]$; that is, the probability of unsafety is estimated as nearly 1 in all states.
Therefore, in the tabular setting, we impose the assumption that all unvisited states are unsafe; that is, their estimated probabilities of unsafety exceed the threshold.
We then perform the policy improvement $10^{2}$ times, each of which comes after $10^{6}$ environment steps.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{integrator1.pdf}
\caption{Integrator, $r_{\mathrm{c}}$}
\end{subfigure}%
\hfil
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{integrator2.pdf}
\caption{Integrator, $r_{\mathrm{fp}}$}
\end{subfigure}
\caption{Safety specification via tabular Q-learning tested on the double integrator. The solid line denotes the average, and the shaded area indicates the confidence interval of 20 random seeds. The baseline, LSS, and ESS are denoted by teal, orange, and blue, respectively.}
\label{fig:speculation_tab}
\end{figure}
Fig. \ref{fig:speculation_tab} summarizes the specification result averaged across 20 random seeds.
Both LSS and ESS show monotonic improvement of the approximated safe set $\hat{S}^{\pi}(\alpha)$.
Notably, we find evidence of ESS taking advantage of exploration.
The $r_{\mathrm{c}}$ of ESS increases faster than that of LSS or the baseline, while the excess of $r_{\mathrm{fp}}$ of ESS is negligible.
The average value of $r_{\mathrm{c}}$ is $44\%$ for ESS, surpassing the baseline of $34\%$.
The effect of ESS culminates at the beginning of the learning process and then dwindles because the boundary of $\hat{S}^{\pi}(\alpha)$ becomes harder to reach as the set inflates, so the chance of exploration decreases.
Ideally, with the appropriate choice of $\gamma \approx 1$ and the learning rate, $r_{\mathrm{fp}}$ is nearly 0.
We omit the AES in Fig. \ref{fig:speculation_tab}, since no agent lacks safety confidence.
However, without the time limit, the AES might decline, since an episode is configured to terminate after 200 steps, which restricts the chance of reaching the target set.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.27\columnwidth}
\centering
\includegraphics[width=\linewidth]{integratorA.pdf}
\caption{Ground truth}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{integratorB.pdf}
\caption{Baseline}
\end{subfigure}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{integratorC.pdf}
\caption{LSS}
\end{subfigure}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{integratorD.pdf}
\caption{ESS}
\end{subfigure}
\caption{Safe sets for the integrator problem with $\alpha = 0.2$. Each grid point denotes a state $(\mathrm{position}, \mathrm{velocity})$. The ground truth $S^{\ast}(\alpha)$ is denoted by yellow in (a). The other figures show the safe set estimated by (b) the baseline, (c) LSS, and (d) ESS. The shaded region represents $\hat{S}^{\pi}(\alpha)$: correctly specified states are marked yellow, and unsafe states misclassified as safe are marked red. }
\label{fig:visualize_tab}
\end{figure}
We illustrate the safety analysis results of respective methods and the ground-truth probabilistic safe set in Fig.~\ref{fig:visualize_tab}.
Each approximated safe set is established from the Q-learning table of an agent with the highest rate of correct specification among the 20 random seeds analyzed in Fig.~\ref{fig:speculation_tab}.
A grid map represents the whole non-target set except for the grid points on the sides, and the approximated safe set is the set of red and yellow points.
The size of $\hat{S}^{\pi}(\alpha)$ for ESS is notably larger than that of the baseline or LSS in the cases of both correctly specified states (yellow) and misclassified states (red).
However, the false positives in the safe set estimated by ESS are hardly attributable to the ESS method itself; they stem from a universal limitation of tabular Q-learning.
This is supported by the observation that the ratio of misclassified states over the whole $\hat{S}^{\pi}(\alpha)$ of ESS is only $5\%$ greater than that of the baseline;
that is, ESS does not particularly overestimate the probability of safety in unsafe states.
The ESS Q-learning is expected to obtain an accurate estimate of $S^{\ast}$ if the implementation of Q-learning is improved.
\subsection{Deep RL}\label{sec:deeprl_result}
We present the experimental results in Algorithm \ref{alg:actor_critic_lyapunov} using a realistic robotic simulation.
We demonstrate that our approach can be coupled with well-established deep RL methods to perform safety specifications efficiently in the continuous state and action space.
Details about our deep RL implementation can be found in Appendix~A.
We consider a \emph{Reacher} system for safety analysis.
In the Reacher, safety constraints are set on the position of the end effector (see Appendix B for details).
We implement the LSS and ESS actor-critics on top of DDPG \cite{lillicrap2015continuous}, together with the baseline.
For the sake of fairness, all the algorithms use the same actor network weight and the same replay memory at the start of learning.
The critics are initialized randomly, but the bias value for each layer of $Q_{V}$ is set to 1 so that $Q_{V}(\bm{s}, \bm{a}) = 1$ for almost all $(\bm{s}, \bm{a}) \in \mathcal{S} \times \mathcal{A}$.
This ensures that the ratio of correct specification is 0 at the very beginning.
We also optimize only the critics for the first $10^{5}$ steps to reduce the discrepancies between critics and actors.
The techniques mentioned in Section \ref{sec:deeprl} are also applied: we fill $20\%$ of each minibatch with the unsafe samples and use double $Q_{V}$ networks for critic update.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacher1}
\caption{Reacher, $r_{\mathrm{c}}$ (average)}
\label{fig:reacher_spec_ddpg_correct_average}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacher2}
\caption{Reacher, $r_{\mathrm{fp}}$ (average)}
\label{fig:reacher_spec_ddpg_falsepositive_average}
\end{subfigure}
\hfil
\centering
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacher3.pdf}
\caption{Reacher, $r_{\mathrm{c}}$ (best)}
\label{fig:reacher_spec_ddpg_correct_best}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[b]{.48\columnwidth}
\includegraphics[width=\linewidth]{reacher4.pdf}
\caption{Reacher, $r_{\mathrm{fp}}$ (best)}
\label{fig:reacher_spec_ddpg_falsepositive_best}
\end{subfigure}
\hfil
\centering
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacher5.pdf}
\caption{Reacher, AES}
\label{fig:reacher_spec_ddpg_aesafety}
\end{subfigure}
\caption{Safety specification via deep RL tested on the Reacher. (a-b) are the results averaged across 10 random seeds, and (c-d) are the best results for various methods. (e) displays the average episode safety swept across all seeds. Color schemes are equivalent to Fig. \ref{fig:speculation_tab}.}
\label{fig:speculation_ddpg}
\end{figure}
The Lyapunov-based RL agents require auxiliary cost~$\epsilon$, as in Section \ref{sec:setup}.
For the case of a continuous state space, the safe set is not explicitly defined, so $\epsilon$ should be approximated.
We first set the denominator of $\epsilon$ to the upper bound $T^{\pi}(\bm{s}) \approx (1 - \gamma)^{-1}$, which prevents $\epsilon$ from being larger than its true value.
To estimate $\min_{\bm{s} \in S^{\ast}(\alpha)}\{ \alpha - V^{\pi}(\bm{s},1)\}$, we use supplementary memory that remembers the value of $\{\alpha - V^{\pi}(\bm{s},1)\}^{+}$ for $\bm{s}$ such that $V^{\pi}(\bm{s},1) \leq \alpha$.
When an episode is terminated, the agent computes $V^{\pi}$ for all the states in the trajectory and finds the maximum among the values that satisfy $V^{\pi}(\bm{s},1) \leq \alpha$.
The memory stores the result for the 100 latest trajectories.
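The estimation procedure above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the class and method names are ours, and the interpretation that $\epsilon$ is formed from the smallest recent gap divided by $T \approx (1-\gamma)^{-1}$ is our reading of the description.

```python
from collections import deque

class EpsilonEstimator:
    """Sketch of the auxiliary-cost estimator described above.

    For each finished trajectory, it records the gap {alpha - V(s)}^+ at the
    "least safe" state satisfying V(s) <= alpha, keeping only the 100 latest
    trajectories. All identifiers here are illustrative assumptions.
    """

    def __init__(self, alpha, gamma, maxlen=100):
        self.alpha = alpha
        self.gamma = gamma              # epsilon is scaled by (1 - gamma) ~ 1/T
        self.memory = deque(maxlen=maxlen)

    def end_episode(self, v_values):
        # Keep states whose safety value does not exceed the threshold,
        # then record the gap at the maximal such value.
        candidates = [v for v in v_values if v <= self.alpha]
        if candidates:
            self.memory.append(self.alpha - max(candidates))

    def epsilon(self):
        # Minimum gap over the recent trajectories, divided by T ~ (1-gamma)^{-1}.
        if not self.memory:
            return 0.0
        return (1.0 - self.gamma) * min(self.memory)
```

The bounded `deque` implements the "100 latest trajectories" window directly: older entries are discarded automatically.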
We also exploit the two actors of the ESS actor-critic to ensure safe operation.
Since it takes time to construct a stable exploratory actor, the agent makes stochastic choices between the two actors in the early stages.
The probability of an SS-actor being chosen is 1 at the first gradient step and declines linearly until it becomes 0 after the first half of the learning process.
The SS-actor is also utilized as the backup policy; that is, the agent takes the action using $\pi_{s}$ if the AES is less than the threshold $1-\alpha$, regardless of the policy choice scheme described above.
To reduce computation time, $\lambda_{\omega_{s}}$ is fixed to 0 for the ESS actor-critic.
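The policy-choice scheme with the safety backup can be sketched as below. This is a minimal reconstruction under our reading of the text: function names are ours, and the linear schedule (probability 1 at the first gradient step, 0 after the first half of learning) is taken literally.

```python
import random

def ss_actor_probability(step, total_steps):
    """Probability of picking the SS-actor: 1 at the first gradient step,
    declining linearly to 0 after the first half of the learning process."""
    half = total_steps / 2.0
    return max(0.0, 1.0 - step / half)

def choose_actor(step, total_steps, aes, alpha, rng=random):
    """Sketch of the stochastic choice between the two actors.

    If the average episode safety (AES) drops below 1 - alpha, the SS-actor
    is used as a backup policy regardless of the schedule."""
    if aes < 1.0 - alpha:
        return "ss"                  # backup policy
    if rng.random() < ss_actor_probability(step, total_steps):
        return "ss"
    return "exploratory"
```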
Fig. \ref{fig:speculation_ddpg} summarizes the experimental result.
We perform tests on 10 random seeds to take an average (\ref{fig:reacher_spec_ddpg_correct_average}, \ref{fig:reacher_spec_ddpg_falsepositive_average}) and to display the ones that attain the greatest $r_{\mathrm{c}}$ among various methods (\ref{fig:reacher_spec_ddpg_correct_best}, \ref{fig:reacher_spec_ddpg_falsepositive_best}).
Comparing the average cases, the ESS actor-critic improves both specification criteria, most noticeably the false-positive rate.
ESS consistently reduces $r_{\mathrm{fp}}$ except for the first $3\times10^5$ steps and then achieves $4.10\%$, while the baseline and LSS settle at $7.30\%$ and $5.22\%$, respectively.
The learning curves of ESS and the baseline are similar at the very start, since ESS rarely uses the exploratory policy at that stage.
The exploratory policy in ESS supplies novel information about states that are typically elements of the target set, and the safe set thus becomes more accurate.
The curves of the baseline, on the other hand, stagnate because the agent hardly ever falls into an unusual trajectory under the SS-policy.
Regarding LSS, we observe that the regularization term in its update rule degrades the overall performance.
As seen from the large confidence interval of ESS in Fig. \ref{fig:reacher_spec_ddpg_correct_average}, the effect of the exploratory policy varies.
In the best cases, ESS performs as described in Section \ref{sec:ess}:
it attains $77.7\%$ for the correct specification, which is $13.4\%$ above the baseline.
In other cases, the exploratory policies converge quickly and become indistinguishable from the SS-policies in terms of exploration, resulting in poor performance.
Note that this variability in ESS behavior is governed by the approximation error of the critic $Q_{V}$.
Although it is difficult to control the parametrized critic, we can exploit the potential of ESS by running on multiple seeds and selecting the best among them.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.27\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacherA.pdf}
\caption{Ground truth}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacherB.pdf}
\caption{Baseline}
\end{subfigure}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacherC.pdf}
\caption{LSS}
\end{subfigure}
\begin{subfigure}[b]{.23\columnwidth}
\centering
\includegraphics[width=\linewidth]{reacherD.pdf}
\caption{ESS}
\end{subfigure}
\caption{Safe sets in the state space of the Reacher. Each grid point denotes a state of the end effector whose position is determined by the angles of the two joints and whose velocity is 0. Given $\alpha = 0.2$, the ground truth $S^{\ast}(\alpha)$ is denoted by yellow in (a). The other figures show the estimated safe set obtained by (b) the baseline, (c) LSS, and (d) ESS. Color schemes are equivalent to Fig. \ref{fig:visualize_tab}.}
\label{fig:visualize_ddpg}
\end{figure}
In Fig. \ref{fig:visualize_ddpg}, we further visualize a relevant part of the state space and the safe sets in it.
Each grid map displays $\hat{S}^{\pi}(\alpha)$ of the agent whose $r_{\mathrm{c}}$ is the greatest among the 10 random seeds discussed above.
The safe set obtained by ESS clearly resembles the true safe set better than the others.
\section{Conclusion}
We have proposed a model-free safety specification tool that incorporates a Lyapunov-based safe RL approach with probabilistic reachability analysis.
Our method exploits the Lyapunov constraint to construct an exploratory policy that mitigates the discrepancy between state distributions of the experience replay (or the tabular Q-function) and the environment.
Another salient feature of
the proposed method is that it can be implemented on generic, model-free deep RL algorithms, particularly in continuous state and action spaces through Lagrangian relaxation.
The results of our experiments demonstrate that our method encourages visiting the unspecified states, thereby improving the accuracy of specification.
By bridging probabilistic reachability analysis and reinforcement learning, this work can provide an exciting avenue for future research in terms of extensions to partially observable MDPs, and model-based exploration and its regret analysis, among others.
\section*{Appendix}
\subsection{Deep RL Implementation}\label{appendix:deeprl}
In this section, we provide a specific description of the deep RL agents used in our experiments.
Table \ref{tab:ddpg_top_layer} displays the basic architecture of the neural networks; all of them are fully connected and consist of two hidden layers with ReLU as the activation function, unless the network is an estimator of $Q_{V}$.
The first and second hidden layers have 400 and 300 nodes, respectively.
Adam optimizer \cite{kingma2015adam} is used to apply gradient descent.
Aside from the techniques stated in Section \ref{sec:deeprl_result}, an action is perturbed with Ornstein-Uhlenbeck noise with parameters $\mu = 0$, $\theta = 0.1$, and $\sigma = 0.05$.
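A discrete Ornstein-Uhlenbeck perturbation with these parameters can be sketched as follows. The class name and the unit step size are our assumptions; only the update rule $x \leftarrow x + \theta(\mu - x) + \sigma\,\mathcal{N}(0,1)$ and the parameter values come from the text.

```python
import random

class OUNoise:
    """Discrete Ornstein-Uhlenbeck noise (sketch):
    x <- x + theta * (mu - x) + sigma * N(0, 1),
    with the parameters used in our experiments."""

    def __init__(self, mu=0.0, theta=0.1, sigma=0.05, seed=None):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.x = mu
        self.rng = random.Random(seed)

    def reset(self):
        # Restart the process at its long-run mean between episodes.
        self.x = self.mu

    def sample(self):
        self.x += self.theta * (self.mu - self.x) \
            + self.sigma * self.rng.gauss(0.0, 1.0)
        return self.x
```

With $\sigma = 0$ the state decays geometrically toward $\mu$ at rate $1 - \theta$ per step, which makes the mean-reverting behavior easy to verify.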
\begin{table}[!h]
\centering
\begin{tabular}{ llll }
\toprule
Type & Output size & Activation & Learning rate\\
\midrule
Critic & 1 & $\mathrm{clamp}(0,1)$ & $10^{-4}$\\
Actor & $\mathrm{dim}(a)$ & tanh & $10^{-5}$\\
$\log\lambda$ & 1 & $\mathrm{clamp}(-10,6)$ & $10^{-6}$\\
\bottomrule
\end{tabular}
\caption{The top layers of respective networks in DDPG.}
\label{tab:ddpg_top_layer}
\end{table}
\subsection{Environments}\label{appendix:envs}
An environment provides a Boolean \texttt{done} signal that declares the termination of an episode and is strictly equivalent to $\bm{1}_{\mathcal{S}_{\mathrm{term}}}$.
When its value is 1, both $Q_V$ and $Q_T$ at that state are set to 0. If the length of an episode exceeds the time limit before a terminal state is reached, the environment resets itself, but \texttt{done} remains 0 at that moment.
Refer to Table \ref{tab:env_params} for the time limit and the discount factor settings.
\textbf{Randomized integrator.}~
A vanilla double integrator is a system with a 2D state $(x_{1},x_{2})$ and a scalar control $u$:
$x_{1}$ and $x_{2}$ represent the position and velocity on a 1D line, respectively, and the control is an acceleration.
We add a few features to construct a safety specification problem in this environment.
First, we set the terminal states as the points near the origin $(x_{1},x_{2}) \in [-0.2,0.2] \times [-3.75 \times 10^{-3}, 3.75 \times 10^{-3}]$.
Next, the target set is defined as the set of all states
$(x_1, x_2) \notin [-1,1] \times [-0.5, 0.5]$.
Finally, we restrict admissible action to the range $[-0.5,0.5]$, and adjust the dynamics so that the acceleration is scaled to $0.5u/|u|$ with probability $1/2$.
Due to the introduction of stochastic behavior, it becomes more difficult to reach the terminal states safely than in the original environment.
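A one-step transition of this randomized integrator can be sketched as below. The step size `dt` is our assumption (the paper does not state the discretization), as are the function names; the clipping, the random sign-scaled acceleration, and the target set come from the description above.

```python
import random

def integrator_step(x1, x2, u, dt=0.05, rng=random):
    """One Euler step of the randomized double integrator (sketch).

    The admissible action is clipped to [-0.5, 0.5]; with probability 1/2
    the acceleration is replaced by 0.5 * u / |u|, i.e. scaled to full
    magnitude with the sign of u."""
    u = max(-0.5, min(0.5, u))
    a = u
    if u != 0.0 and rng.random() < 0.5:
        a = 0.5 * u / abs(u)
    x1 += dt * x2
    x2 += dt * a
    return x1, x2

def in_target_set(x1, x2):
    # The target (unsafe) set: every state outside [-1, 1] x [-0.5, 0.5].
    return not (-1.0 <= x1 <= 1.0 and -0.5 <= x2 <= 0.5)
```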
\begin{figure}[!t]
\centering
\includegraphics[width=0.85\columnwidth]{reacher.pdf}
\caption{Description of the Reacher environment.}
\label{fig:reacher}
\end{figure}
\textbf{Reacher.}~
Reacher is a simulated planar 2-DOF robot with two arms attached to joints, implemented with the Mujoco engine \cite{mujoco}.
The joint of the first arm is fixed on the center of the plane, and the joint of the second is connected to the movable end of the first.
The objective of the robot is to touch a motionless goal point with its end effector.
An observation is thus defined as a vector that contains the angular positions and the angular velocities of the joints as well as the position of the goal.
The action is defined as the torques on the joints, each of which is bounded in the range $[-1,1]$.
Let the coordinates be defined as in Fig. \ref{fig:reacher}.
Specifically, the goal is placed randomly in the shaded area $\{(x,y) | \sqrt{x^{2}+y^{2}} \leq \sqrt{2}l, |\arctan{y/x}| \leq \pi/4 \}$, where $l$ is the length of one arm.
The exact position changes for each reset.
We define the target set as $\{(x,y) | |y| > l\}$, where $(x,y)$ is the coordinate of the tip.
We derive the probabilistic safe set in Fig. \ref{fig:visualize_ddpg} under the assumption of no friction.
This does not hold in a Mujoco-based simulation, but the effect of the assumption is minor.
Recall that the states displayed in Fig. \ref{fig:visualize_ddpg} stand for an end effector with zero velocity.
If appropriate control is applied, the robot can avoid reaching the target set by moving toward an arbitrary position near the goal, unless it was launched from the target set at the beginning.
In our simulation studies, we assess the agents only on the states where the goal point is given by $(-2l,0)$ and the angular velocity is $(\dot{\theta}_{1},\dot{\theta}_{2}) = (0,0)$.
We use the Reacher configuration provided by Gym \cite{openaigym}.
\begin{table}[!h]
\centering
\begin{tabular}{ lll }
\toprule
Environment & Time limit & $\gamma$\\
\midrule
Integrator & $1000$ & $1 - 10^{-4}$\\
Reacher & $300$ & $1 - 10^{-3}$\\
\bottomrule
\end{tabular}
\caption{The environment-specific parameters.}
\label{tab:env_params}
\end{table}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{intro}
Let $(B_t,~t\geq 0)$ be a standard Brownian motion, $T_1$ its first hitting time of level one, and
$U$ a uniform random variable on $[0,1]$, independent of $B$. In \cite{elie2013expectation}, it is first shown that the random variable $\alpha$ defined by
\begin{equation}\label{variable}
\alpha=\frac{B_{UT_1}}{\sqrt{T_1}}
\end{equation}
is centered. Intrigued by this property, we determined the distribution of this variable, which is expressed in \cite{rosenbaum2013law} under the following form, where $\underset{\mathcal{L}}{=}$ denotes equality in law:
\begin{equation}\label{law}
\alpha\underset{\mathcal{L}}{=}\Lambda L_1-\frac{1}{2}|B_1|,\end{equation}
with $L_t$ the local time at point $0$ of $B$ at time $t$ and $\Lambda$ a uniform random variable on $[0,1]$, independent of $(|B_1|,L_1)$. The centering property is easily recovered from \eqref{law} since
$$\mathbb{E}[\Lambda L_1-\frac{1}{2}|B_1|]=\frac{1}{2}\mathbb{E}[L_1-|B_1|]=0.$$
\noindent In fact, in \cite{rosenbaum2013law}, a preliminary to the proof of \eqref{law} is to obtain the law of a triplet of random variables defined in terms of the pseudo-Brownian bridge introduced in \cite{biane1987processus}, see Section \ref{sec_prelim} below. In this paper, we show that the law of this triplet enables us to derive several unexpected simple formulas for various quantities related to some very classical Brownian type processes, namely the Brownian bridge, the Brownian meander and the three dimensional Bessel process. More precisely, we focus on distributional properties of these processes when sampled with an independent uniform random variable. Thus, this work can be viewed as a modest complement to the seminal paper by Pitman, see \cite{pitman1999brownian}, where the laws of these processes when sampled with (several) independent uniform random variables are already studied.\\
\noindent The paper is organized as follows. In Section \ref{sec_prelim}, we give some preliminary results related to the law of $(|B_1|,L_1)$. Indeed, they play an important role in the proofs. Distributional properties for the Brownian bridge are established in Section \ref{sec_bri}, whereas the Brownian meander and the three dimensional Bessel process are investigated in Section \ref{sec_mea}. Finally, in Section \ref{sec_filt}, we reinterpret the fact that $\alpha$ is centered through the lens of an enlargement formula for the Brownian motion with the time $T_1$ due to Jeulin, see \cite{jeulin1979grossissement}. In particular, we show that this centering property can be translated in terms of the expectation of the random variable
$U/(R_UR_1^2)$, where $R$ is a three dimensional Bessel process and $U$ a uniform random variable on $[0,1]$ independent of $R$.
\section{Some preliminary results about the law of $(|B_1|,L_1)$}\label{sec_prelim}
Before dealing with the Brownian bridge, the Brownian meander and the three dimensional Bessel process, we give here some preliminary
results related to the distribution of the couple $(|B_1|,L_1)$. These results will play an important role in the proofs of our main theorems.\\
\noindent It is well known that the law of $(B_1,L_1)$ admits a density on $\mathbb{R}\times\mathbb{R}^+$. Its value at point $(x,l)$ is given by
\begin{equation}\label{dens}
\frac{1}{\sqrt{2\pi}}(|x|+l)\text{exp}\big(-\frac{(l+|x|)^2}{2}\big).
\end{equation}
For $l\geq 0$, we set
$$H(l)=\text{e}^{l^2/2}\int_l^{+\infty}dx\text{e}^{-x^2/2}.$$
The following consequences of \eqref{dens} shall be useful in the sequel.
\begin{proposition}\label{prop_prelim}
Let $l\geq 0$. We have the double identity
\begin{equation}\label{double}
\mathbb{E}[L_1||B_1|=l]=\mathbb{E}[|B_1||L_1=l]=H(l).
\end{equation}
Furthermore, one has
\begin{equation}\label{double2}
H(l)=l\mathbb{E}[\frac{1}{N^2+l^2}],\end{equation}
where $N$ denotes a standard Gaussian random variable.
\end{proposition}
\begin{proof}
We start with the proof of \eqref{double}. Of course, it can be deduced from \eqref{dens} at the cost of some integrations. We
prefer the following arguments. First, the equality on the left hand side of \eqref{double} stems from the symmetry of the law of
$(|B_1|,L_1)$, which is obvious from \eqref{dens}. Thus, we now have to show the second equality in \eqref{double}. This easily follows from the identity
\begin{equation}\label{balayage}
\mathbb{E}[\phi(L_1)|B_1|]=\mathbb{E}[\int_0^{L_1}dx\phi(x)],
\end{equation}
which is valid for any bounded measurable function $\phi$. Indeed, assuming \eqref{balayage} for a moment, using the fact that
$$L_1\underset{\mathcal{L}}{=}|B_1|,$$
we may write \eqref{balayage} as
$$\int_0^{+\infty}dl\text{e}^{-l^2/2}\phi(l)\mathbb{E}[|B_1||L_1=l]=\int_0^{+\infty}dl\phi(l)\int_l^{+\infty}dx\text{e}^{-x^2/2}.$$
Hence, since this is true for every bounded measurable function $\phi$, we get
$$\text{e}^{-l^2/2}\mathbb{E}[|B_1||L_1=l]=\int_l^{+\infty}dx\text{e}^{-x^2/2},$$
which is the desired result for \eqref{double}.\\
\noindent It remains to prove \eqref{balayage} for a generic bounded measurable function $\phi$. Remark that the formula
$$\phi(L_t)|B_t|=\int_0^tdB_s\phi(L_s)\text{sign}(B_s)+\int_0^tdL_s\phi(L_s)$$
is a very particular case of the balayage formula, see \cite{revuz1999continuous}, page 261. It now suffices to take expectation on both sides of this last equality to obtain \eqref{balayage}.\\
\noindent We now give the proof of the second part of Proposition \ref{prop_prelim}. First, note that
$$\mathbb{E}[\frac{1}{N^2+l^2}]=\int_0^{+\infty}dv\text{e}^{-vl^2}\mathbb{E}[\text{e}^{-vN^2}].$$
Using the Laplace transform of $N^2$, namely $\mathbb{E}[\text{e}^{-vN^2}]=(1+2v)^{-1/2}$, followed by the change of variable $v\mapsto v/2$, we obtain
\begin{equation}\label{form}
\mathbb{E}[\frac{1}{N^2+l^2}]=\int_0^{+\infty}dv\frac{\text{e}^{-vl^2/2}}{2\sqrt{1+v}}.
\end{equation}
Then remark that thanks to the change of variable $x^2=(1+v)l^2$, we get
$$H(l)=\text{e}^{l^2/2}\int_l^{+\infty}dx\text{e}^{-x^2/2}=\int_0^{+\infty}dv\frac{l}{2\sqrt{1+v}}\text{e}^{-vl^2/2}.$$
This together with $\eqref{form}$ gives the second part of Proposition \ref{prop_prelim}.
\end{proof}
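The identity \eqref{double2} lends itself to a direct numerical sanity check, which we sketch below; this is our illustration, not part of the argument. It uses the closed form $H(l)=\sqrt{\pi/2}\,\text{e}^{l^2/2}\,\mathrm{erfc}(l/\sqrt{2})$, obtained by rewriting the Gaussian tail integral through the complementary error function, and compares it with a Monte Carlo estimate of $l\,\mathbb{E}[1/(N^2+l^2)]$.

```python
import math
import random

def H(l):
    # H(l) = exp(l^2/2) * \int_l^{+inf} exp(-x^2/2) dx,
    # expressed via the complementary error function.
    return math.sqrt(math.pi / 2.0) * math.exp(l * l / 2.0) \
        * math.erfc(l / math.sqrt(2.0))

def mc_side(l, n=200_000, seed=0):
    # Monte Carlo estimate of l * E[1 / (N^2 + l^2)], N standard Gaussian.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        total += 1.0 / (z * z + l * l)
    return l * total / n
```

Since $1/(N^2+l^2)$ is bounded by $1/l^2$, the Monte Carlo average has small variance and the two sides agree to a few decimal places already with a modest sample size.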
\section{The Brownian bridge under uniform sampling}\label{sec_bri}
Before giving our theorem on the uniformly sampled Brownian bridge, we recall a result related to the pseudo-Brownian bridge established in \cite{rosenbaum2013law}, and which is the key to most of the proofs in this paper. The pseudo-Brownian bridge was introduced in \cite{biane1987processus} and is defined by
$$(\frac{B_{u\tau_1}}{\sqrt{\tau_1}},~u\leq 1),$$ with
$(\tau_l,~l>0)$ the inverse local time process:
$$\tau_l=\text{inf}\{t,~L_t>l\}.$$
This pseudo-Brownian bridge is equal to $0$ at time $0$ and time $1$ and has the same quadratic variation as the Brownian motion. Thus, it shares some similarities with the Brownian bridge, which explains its name. Let $U$ be a uniform random variable on $[0,1]$ independent of $B$. The following theorem is proved in \cite{rosenbaum2013law}.
\begin{theorem}\label{theo1}
There is the identity in law
$$(\frac{B_{U\tau_1}}{\sqrt{\tau_1}},\frac{1}{\sqrt{\tau_1}},L_{U\tau_1})\underset{\mathcal{L}}{=}(\frac{1}{2}B_1,L_1,\Lambda),$$
with $\Lambda$ a uniform random variable on $[0,1]$, independent of $(B_1,L_1)$.
\end{theorem}
\noindent In other words, $L_{U\tau_1}$ is a uniform random variable on $[0,1]$, independent of the pair $$(\frac{B_{U\tau_1}}{\sqrt{\tau_1}},\frac{1}{\sqrt{\tau_1}}),$$ which is distributed as $(\frac{1}{2}B_1,L_1)$.\\
\noindent To deduce some properties of the Brownian bridge from Theorem \ref{theo1}, the idea is to use an absolute continuity relationship between the law of the pseudo-Brownian bridge and that of the Brownian bridge shown by Biane, Le Gall and Yor in \cite{biane1987processus}. More precisely, for $F$ a non negative measurable function on $\mathbb{C}([0,1],\mathbb{R})$, we have
\begin{equation}\label{bly}
\mathbb{E}\big[F\big(\frac{B_{u\tau_1}}{\sqrt{\tau_1}},~u\leq 1\big)\big]=\sqrt{\frac{2}{\pi}}\mathbb{E}\big[F\big(b(u),~u\leq 1\big)\frac{1}{\lambda_1^0}\big],\end{equation}
where $\big(b(u),~u\leq 1\big)$ denotes the Brownian bridge and $(\lambda_u^x,~u\leq 1,~x\in \mathbb{R})$ its family of local times.
Let $U$ again be a uniform random variable on $[0,1]$, independent of $b$. The following theorem is easily deduced from Theorem \ref{theo1} together with Equation \eqref{bly}.
\begin{theorem}\label{theo2}
For any non negative measurable functions $f$ and $g$, we have
\begin{equation}\label{pont}
\mathbb{E}\big[f\big(b(U),\lambda_1^0\big)g\big(\frac{\lambda_U^0}{\lambda_1^0}\big)\big]=\sqrt{\frac{\pi}{2}}\mathbb{E}\big[f(\frac{1}{2}B_1,L_1)L_1\big]\mathbb{E}[g(\Lambda)],\end{equation}
with $\Lambda$ a uniform random variable on $[0,1]$, independent of $(B_1,L_1)$.
\end{theorem}
\noindent Thus, $\lambda_U^0/\lambda_1^0$ is a uniform random variable on $[0,1]$, independent of the pair $\big(b(U),\lambda_1^0\big)$ which is distributed according to \eqref{pont} with $g=1$.\\
\noindent The following corollary of Theorem \ref{theo2} provides some surprisingly simple expressions for some densities and (conditional) expectations of quantities related to the Brownian bridge.
\begin{corollary}\label{cortheo2}
The following properties hold:\\
\noindent $\bullet~$The variable $\lambda_1^0$ admits a density on $\mathbb{R}^+$. Its value at point $l\geq 0$ is given by
$$l\emph{exp}(-l^2/2).$$
Hence, $\lambda_1^0$ has the same law as $\sqrt{2\mathcal{E}}$, with $\mathcal{E}$ an exponential random variable. Therefore,
$\lambda_1^0$ is Rayleigh distributed.\\
\noindent $\bullet~$The density of $b(U)$ at point $y$ given $\lambda_1^0=l$ is given by
$$\mathbb{E}[\lambda_1^y|\lambda_1^0=l]=(2|y|+l)\emph{exp}\big(-(2y^2+2|y|l)\big).$$
Consequently, there is the formula
$$\mathbb{E}\big[\frac{\lambda_1^y}{\lambda_1^0}\big]=\emph{exp}(-2y^2).$$
\noindent $\bullet~$The density of $b(U)$ at point $y$ is given by
$$\mathbb{E}[\lambda_1^y]=\int_{2|y|}^{+\infty}dz\emph{exp}(-z^2/2).$$
Thus, we have $b(U)\underset{\mathcal{L}}{=}\sqrt{2\mathcal{E}}(V/2)$, with $\mathcal{E}$ an exponential variable independent of $V$ which is uniformly distributed on $[-1,1]$.
\end{corollary}
\noindent The first part of Corollary \ref{cortheo2} is obviously deduced from Theorem \ref{theo2}
and is in fact a very classical result, see \cite{biane1987processus,biane1988quelques,imhof1984density,revuz1999continuous}. We now prove the second part.
\begin{proof}
\noindent Let $f$ be a non negative measurable function. First note that
$$\mathbb{E}\big[f\big(b(U)\big)|\lambda_1^0=l\big]=\mathbb{E}\big[\int_0^1 du f\big(b(u)\big)|\lambda_1^0=l\big]=\int_{\mathbb{R}} dy f(y)\mathbb{E}[\lambda_1^y|\lambda_1^0=l].$$
Hence the density of $b(U)$ at point $y$ given $\lambda_1^0=l$ is equal to
$$\mathbb{E}[\lambda_1^y|\lambda_1^0=l].$$
Now, let $h$ denote the density of the couple $(B_1,L_1)$ given in Equation \eqref{dens}. From Theorem \ref{theo2}, we easily get that the density of $b(U)$ at point $y$ given $\lambda_1^0=l$ is equal to
$$2\sqrt{\frac{\pi}{2}}\frac{lh(2y,l)}{l\text{exp}(-l^2/2)}=\sqrt{2\pi}h(2y,l)\text{exp}(l^2/2).$$
The first statement in the second part of Corollary \ref{cortheo2} readily follows from Equation \eqref{dens}. For the second statement, we use the fact that
$$
\mathbb{E}\big[\frac{\lambda_1^y}{\lambda_1^0}\big]=\int_0^{+\infty}dl\,\frac{1}{l}\,\mathbb{E}[\lambda_1^y|\lambda_1^0=l]\,l\,\text{exp}(-l^2/2)=\sqrt{2\pi}\int_0^{+\infty}dl\,h(2|y|,l).
$$
Using the definition of $h$, this last expression is equal to
$$\text{exp}(-2y^2).$$
\end{proof}
\noindent The last identity in Corollary \ref{cortheo2} is easily deduced from Theorem \ref{theo2} together with Proposition \ref{prop_prelim}. Note that this formula can also be found in \cite{shorack2009empirical}, page 400.
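The identity in law $b(U)\underset{\mathcal{L}}{=}\sqrt{2\mathcal{E}}(V/2)$ can be sanity-checked numerically against the classical second moment $\mathbb{E}[b(U)^2]=\int_0^1 u(1-u)du=1/6$. The following sketch is our illustration and not part of the proofs.

```python
import random

def sample_bU(rng):
    # b(U) =_L sqrt(2*E) * V / 2, with E exponential(1) and V uniform on [-1, 1].
    e = rng.expovariate(1.0)
    v = rng.uniform(-1.0, 1.0)
    return (2.0 * e) ** 0.5 * v / 2.0

def second_moment(n=300_000, seed=1):
    # Monte Carlo estimate of E[b(U)^2]; the target value is 1/6.
    rng = random.Random(seed)
    return sum(sample_bU(rng) ** 2 for _ in range(n)) / n
```

Indeed, $\mathbb{E}[2\mathcal{E}]\,\mathbb{E}[(V/2)^2]=2\cdot\frac{1}{12}=\frac16$, matching the Fubini computation $\int_0^1\mathbb{E}[b(u)^2]du$.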
\section{The Brownian meander and the three dimensional Bessel process under uniform sampling}\label{sec_mea}
In this section, we reinterpret Theorem \ref{theo1} in terms of the Brownian meander and the three dimensional Bessel process.
\subsection{The Brownian meander}
We first turn to the translation of Theorem \ref{theo1} in terms of the Brownian meander, denoted by $\big(m(u),~u\leq 1\big)$. To do so, we use an equality in law shown by Biane and Yor in \cite{biane1988quelques}. More precisely we have
$$\big((m(u),i_u),~u\leq 1\big)\underset{\mathcal{L}}{=}\big((|b(u)|+\lambda_u^0,\lambda_u^0),~u\leq 1\big),$$
where $$i_u=\underset{u\leq t\leq 1}{\text{inf}}\, m(t).$$ Thus, we can reinterpret Theorem \ref{theo2} as follows.
\begin{theorem}\label{theo3}
For any non negative measurable functions $f$ and $g$, we have
\begin{equation*}
\mathbb{E}\big[f\big(m(U),m(1)\big)g\big(\frac{i_U}{m(1)}\big)\big]=\sqrt{\frac{\pi}{2}}\mathbb{E}\big[f(\frac{1}{2}|B_1|+\Lambda L_1,L_1)L_1g(\Lambda)\big],\end{equation*}
with $\Lambda$ a uniform random variable on $[0,1]$, independent of $(B_1,L_1)$.
\end{theorem}
\noindent Let $(\tilde{\lambda}_1^y,~y\geq 0)$ denote the family of local times of $m$ at time $1$. Similarly to what we have done for the Brownian bridge, we are able to retrieve from Theorem \ref{theo3} simple expressions for the laws of $m(1)$ and $m(U)$. We state these results in the following corollary.
\begin{corollary}\label{cortheo3}
The following properties hold:\\
\noindent $\bullet~$The variable $m(1)$ is Rayleigh distributed.\\
\noindent $\bullet~$The density of $m(U)$ at point $y\geq 0$ is given by
$$\mathbb{E}[\tilde{\lambda}_1^y]=2\int_y^{2y}\emph{exp}(-z^2/2)dz.$$
Thus, we have $m(U)\underset{\mathcal{L}}{=}\sqrt{2\mathcal{E}}W$, with $\mathcal{E}$ an exponential variable independent of $W$ which is uniformly distributed on $[1/2,1]$.
\end{corollary}
\begin{proof}
The proof of the first part of Corollary \ref{cortheo3} is obvious from Theorem \ref{theo3}. We now consider the second part. Let $f$ be a non negative measurable function. Using Theorem \ref{theo3} together with Equation \eqref{dens}, we get that
$$\mathbb{E}\big[f\big(m(U)\big)\big]=\int_0^{+\infty}dx\int_0^{+\infty}dl l(x+l)\text{e}^{-(x+l)^2/2}\mathbb{E}\big[f(\frac{x}{2}+\Lambda l)\big].$$
Now remark that
$$\mathbb{E}\big[f(\frac{x}{2}+\Lambda l)\big]=\frac{1}{l}\int_{x/2}^{x/2+l}d\nu f(\nu).$$
Therefore, by Fubini's theorem, we get
\begin{align*}
\mathbb{E}\big[f\big(m(U)\big)\big]&=\int_0^{+\infty}dx\int_0^{+\infty}dl (x+l)\text{e}^{-(x+l)^2/2}\int_{x/2}^{x/2+l}d\nu f(\nu)\\
&=\int d\nu f(\nu)\int_0^{2\nu}dx\int_{\nu-x/2}^{+\infty}dl (x+l)\text{e}^{-(x+l)^2/2}.
\end{align*}
Thus, the density of $m(U)$ at point $\nu$ is given by
\begin{align*}
\int_0^{2\nu}dx\int_{\nu-x/2}^{+\infty}dl (x+l)\text{e}^{-(x+l)^2/2}&=\int_0^{2\nu}dx\text{exp}\big(-\frac{(x/2+\nu)^2}{2}\big)\\
&=2\int_{\nu}^{2\nu}dz\text{e}^{-z^2/2}.
\end{align*}
This ends the proof.
\end{proof}
\noindent In fact, as it is the case for the Brownian bridge, we can give explicit formulas for several other quantities related to the Brownian meander, for example the law of $m(U)$ given $m(1)$. However, these expressions are not so simple and therefore probably less interesting than those obtained for the Brownian bridge.
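As for the bridge, the representation $m(U)\underset{\mathcal{L}}{=}\sqrt{2\mathcal{E}}W$ admits a quick moment check, which we sketch below as an illustration (not part of the proofs): since $W$ is uniform on $[1/2,1]$, $\mathbb{E}[m(U)^2]=\mathbb{E}[2\mathcal{E}]\,\mathbb{E}[W^2]=2\cdot\frac{7}{12}=\frac{7}{6}$, in agreement with integrating $y^2$ against the density $2\int_y^{2y}\text{e}^{-z^2/2}dz$.

```python
import random

def sample_mU(rng):
    # m(U) =_L sqrt(2*E) * W, with E exponential(1) and W uniform on [1/2, 1].
    e = rng.expovariate(1.0)
    w = rng.uniform(0.5, 1.0)
    return (2.0 * e) ** 0.5 * w

def second_moment(n=300_000, seed=2):
    # Monte Carlo estimate of E[m(U)^2]; the target value is 7/6.
    rng = random.Random(seed)
    return sum(sample_mU(rng) ** 2 for _ in range(n)) / n
```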
\subsection{The three dimensional Bessel process}
Finally, let $(R_t,~t\geq 0)$ be a three dimensional Bessel process starting from $0$ and
$$J_u=\underset{u\leq t\leq 1}{\text{inf}}R_t.$$
Using Imhof's absolute continuity relationship between the law of the meander and that of the three dimensional Bessel process, see \cite{biane1987processus,imhof1984density}, we may rewrite Theorem \ref{theo3} as follows.
\begin{theorem}\label{theo4}
For any non negative measurable functions $f$ and $g$, we have
\begin{equation*}
\mathbb{E}\big[f\big(R(U),R(1)\big)g\big(\frac{J_U}{R(1)}\big)\big]=\mathbb{E}\big[f(\frac{1}{2}|B_1|+\Lambda L_1,L_1)L_1^2g(\Lambda)\big],\end{equation*}
with $\Lambda$ a uniform random variable on $[0,1]$, independent of $(B_1,L_1)$.
\end{theorem}
\noindent We finally give the following corollary.
\begin{corollary}\label{cortheo4}
The following properties hold:\\
\noindent $\bullet~$The density of $R(1)$ at point $y\geq 0$ is given by
$$\sqrt{\frac{2}{\pi}}y^2\emph{exp}(-y^2/2).$$
\noindent $\bullet~$ $R(U)\underset{\mathcal{L}}{=}\sqrt{U}R(1)$ and its density at point $y\geq 0$ is given by
$$2\sqrt{\frac{2}{\pi}}y\int_y^{+\infty}\emph{exp}(-z^2/2)dz.$$
\noindent $\bullet~$The law of $R(U)$ given $R(1)$ is the same as the law of $m(U)$ given $m(1)$.
\end{corollary}
\noindent The first two parts of Corollary \ref{cortheo4} are in fact easily deduced from basic properties of the three dimensional Bessel process. The last part is a consequence of Imhof's relation.
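Indeed, the first two bullets of Corollary \ref{cortheo4} can be checked directly by Monte Carlo, using the representation $R(1)=|B_1|$ for a three dimensional Brownian motion started at $0$ (a chi variable with $3$ degrees of freedom). The following sketch (assuming NumPy; sample sizes and seeds are ours) reproduces the first moments implied by the stated densities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# R(1) = |B_1| for a 3d Brownian motion started at 0: a chi(3) variable,
# whose mean 2*sqrt(2/pi) follows from the stated density.
R1 = np.linalg.norm(rng.standard_normal((n, 3)), axis=1)
U = rng.random(n)
RU = np.sqrt(U) * R1            # second bullet: R(U) =_L sqrt(U) R(1)

print(R1.mean())                # ~ 2*sqrt(2/pi)     ~ 1.5958
print(RU.mean())                # ~ (4/3)*sqrt(2/pi) ~ 1.0638
print((RU**2).mean())           # ~ E[U] E[R(1)^2] = 3/2
```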
\section{The centering property of $\alpha$ revisited through an enlargement of filtration formula}\label{sec_filt}
In this last section, we revisit the centering property of the variable
$$\alpha=\frac{B_{UT_1}}{\sqrt{T_1}},$$ which is proved in \cite{elie2013expectation} and leads to various developments in \cite{rosenbaum2013law}.
Our goal here is to show that this result can be recovered from simple properties of the three dimensional Bessel process sampled at uniform time, together with an enlargement of filtration formula for the Brownian motion with the time $T_1$ due to Jeulin, see \cite{jeulin1979grossissement}.
\subsection{Some preliminary remarks on the uniformly sampled Bessel process}
Let $U$ be a uniform random variable on $[0,1]$, independent of the considered Bessel process $R$. We start with the following two lemmas on the conditional expectation of the uniformly sampled Bessel process.
\begin{lemma}\label{lem1}
We have
\begin{equation}\label{lem1_1}
\mathbb{E}[R_U|R_1=r]=\frac{1}{2}\big(r+\mathbb{E}[\frac{U}{R_U}|R_1=r]\big).
\end{equation}
Consequently,
$$\mathbb{E}\big[\frac{R_U}{R_1^2}\big]=\frac{1}{2}\big(\sqrt{\frac{2}{\pi}}+\mathbb{E}[\frac{U}{R_UR_1^2}]\big).$$
\end{lemma}
\begin{lemma}\label{lem2}
We have
$$\mathbb{E}[\frac{U}{R_U}|R_1=r]=H(r).$$
Consequently,
$$\mathbb{E}\big[\frac{R_U}{R_1^2}\big]=\mathbb{E}[\frac{U}{R_UR_1^2}]=\sqrt{\frac{2}{\pi}}.$$
\end{lemma}
\noindent Remark that we already proved the equality
$$\mathbb{E}\big[\frac{R_U}{R_1^2}\big]=\sqrt{\frac{2}{\pi}}$$
in \cite{elie2013expectation}. This was in fact the cornerstone of our first proof of the centering property of $\alpha$. In the enlargement of filtration approach used here, we will see that instead of $R_U/R_1^2$, the random variable $U/(R_UR_1^2)$ appears naturally.
\subsection{Proofs of Lemma \ref{lem1} and Lemma \ref{lem2}}
We now give the proofs of Lemma \ref{lem1} and Lemma \ref{lem2}.
\subsubsection{Proof of Lemma \ref{lem1}}
The first part of Lemma \ref{lem1} follows from the identity
\begin{equation}\label{returnbes}
\mathbb{E}[\frac{R_t}{t}|R_1]=R_1+\mathbb{E}[\int_t^1\frac{dv}{vR_v}|R_1],~t\leq 1,
\end{equation}
after multiplying both sides by $t$ and integrating in $t$ from $0$ to $1$. To show \eqref{returnbes}, we use time inversion with $t=1/w$ and $$R'_w=wR_{1/w},$$ another three dimensional Bessel process. With this notation, using the Ito representation of the Bessel process, we get
$$\mathbb{E}[R'_w-R'_1|R'_1]=\mathbb{E}[\int_1^w\frac{dt}{R'_t}|R'_1],$$ from which \eqref{returnbes} is easily obtained. The second statement in Lemma \ref{lem1} readily follows.
\subsubsection{Proof of Lemma \ref{lem2}}
We start with the proof of the first part of Lemma \ref{lem2}. Let
$$\rho=\mathbb{E}[\int_0^1dv\frac{v}{R_v}|R_1=r].$$
Using the same time inversion trick as in the proof of Lemma \ref{lem1}, together with the Markov property, we get
$$\rho=\int_1^{+\infty}\frac{dw}{w^2}\mathbb{E}[\frac{1}{R_w'}|R_1'=r]=\int_0^{+\infty}\frac{dt}{(1+t)^2}\mathbb{E}_r[\frac{1}{R'_t}],$$
where $\mathbb{P}_r$ denotes the law of a Bessel process $R'$ starting from $r$. We then use Doob's absolute continuity relationship, that is
$$\mathbb{P}_r\big|_{\mathcal{F}_t}=\frac{X_{t\wedge T_0}}{r}W_r\big|_{\mathcal{F}_t},$$
where $W_r$ is the Wiener measure associated to a Brownian motion starting at point $r$, $X$ is the canonical process and $T_0$
is the first hitting time of $0$ by $X$, see for example \cite{revuz1999continuous}, Chapter XI. This together with the fact that
$$\frac{X_{t\wedge T_0}}{X_t}=\mathrm{1}_{T_0>t}$$
gives
$$\mathbb{E}_r[\frac{1}{R'_t}]=\frac{1}{r}W_r[T_0>t]=\frac{1}{r}W_0[T_r>t].$$
Therefore,
$$\rho=\frac{1}{r}\mathbb{E}^{W_0}\big[\int_0^{T_r}\frac{dt}{(1+t)^2}\big]=\frac{1}{r}\mathbb{E}^{W_0}[\frac{T_r}{1+T_r}]=r\mathbb{E}[\frac{1}{N^2+r^2}].$$
Using Equation \eqref{double2}, this is equal to $H(r)$. This ends the proof of the first part of Lemma \ref{lem2}. Using the expression for the density of $R_1$ given in Corollary \ref{cortheo4}, the proof of the second part readily follows by remarking that
$$\mathbb{E}\big[\frac{U}{R_UR_1^2}\big]=\sqrt{\frac{2}{\pi}}\int_{0}^{+\infty}dr\int_{r}^{+\infty}dx\text{e}^{-x^2/2}=\sqrt{\frac{2}{\pi}}\int_{0}^{+\infty}dx x\text{e}^{-x^2/2}=\sqrt{\frac{2}{\pi}}.$$
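The chain of equalities in the last display can be verified symbolically: by Fubini, for fixed $x$ the variable $r$ runs over $(0,x)$, so the double integral collapses to a one dimensional Gaussian integral. A sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Fubini on the double integral above: for fixed x, r runs over (0, x), so
#   int_0^oo dr int_r^oo dx e^{-x^2/2} = int_0^oo x e^{-x^2/2} dx.
double_integral = sp.integrate(x * sp.exp(-x**2 / 2), (x, 0, sp.oo))
print(double_integral)  # 1, so the whole expression equals sqrt(2/pi)
```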
\subsection{An enlargement of filtration approach to the centering property of $\alpha$}
We now revisit the centering property of $\alpha$ through an enlargement of filtration formula. Let $(\mathcal{F}_t)$ denote the filtration of the Brownian motion $(B_t)$ and $(\mathcal{F}'_t)$ the filtration obtained by initially enlarging $(\mathcal{F}_t)$ with $T_1$. It is shown in \cite{jeulin1979grossissement} that $(B_t)$ is a $(\mathcal{F}'_t)$ semi-martingale. More precisely,
\begin{equation}\label{dec}
B_t=\beta_t-\int_0^{t\wedge T_1}\frac{ds}{1-B_s}+\int_0^{t\wedge T_1}ds\frac{1-B_s}{T_1-s},
\end{equation}
where $(\beta_t)$ is a $(\mathcal{F}'_t)$ Brownian motion (in particular it is independent of $T_1$).
Taking expectation on both sides of \eqref{dec} at time $t=UT_1$, we get
$$\mathbb{E}[\alpha]=-\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\int_0^{UT_1}\frac{ds}{1-B_s}\big]+\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\int_0^{UT_1}ds\frac{1-B_s}{T_1-s}\big].$$
Using the change of variable $s=uT_1$ in both integrals, we get
$$\mathbb{E}[\alpha]=-\mathbb{E}\big[\sqrt{T_1}\int_0^{U}\frac{du}{1-B_{uT_1}}\big]+\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\int_0^{U}du\frac{1-B_{uT_1}}{1-u}\big].$$
Since $U$ is independent of $B$ and uniformly distributed on $[0,1]$, we get
$$\mathbb{E}[\alpha]=-\mathbb{E}\big[\sqrt{T_1}\int_0^{1}du\frac{(1-u)}{1-B_{uT_1}}\big]+\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\int_0^{1}du(1-B_{uT_1})\big].$$
Thus,
$$2\mathbb{E}[\alpha]=-\mathbb{E}\big[\sqrt{T_1}\int_0^{1}dv\frac{v}{1-B_{T_1(1-v)}}\big]+\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\big].$$
We now use Williams' time reversal result:
$$\Big(T_1,\big(1-B_{T_1(1-v)},~v\leq 1\big)\Big)\underset{\mathcal{L}}{=}\Big(\gamma_1,\big(R_{v\gamma_1},~v\leq 1\big)\Big),$$
where
$$\gamma_1=\text{sup}\{s>0,~R_s=1\}.$$
Hence we obtain
$$2\mathbb{E}[\alpha]=-\mathbb{E}\big[\frac{V}{R_{V\gamma_1}/\sqrt{\gamma_1}}\big]+\mathbb{E}\big[\frac{1}{\sqrt{T_1}}\big],$$ with $V$ a uniform random variable on $[0,1]$, independent of $R$. From the absolute continuity relationship between the laws of
$$\big(R_{v\gamma_1}/\sqrt{\gamma_1},~v\leq 1\big)$$ and $(R_{v},~v\leq 1),$ see \cite{biane1987processus}, we get
$$2\mathbb{E}[\alpha]=\sqrt{\frac{2}{\pi}}-\mathbb{E}[\frac{V}{R_VR_1^2}].$$
Hence $\mathbb{E}[\alpha]=0$ if and only if
$$\mathbb{E}[\frac{V}{R_VR_1^2}]=\sqrt{\frac{2}{\pi}}.$$ From Lemma \ref{lem2}, the last equality holds. Moreover, it was obtained without using our previous results \cite{elie2013expectation,rosenbaum2013law}. Thus, the use of the enlargement formula of \cite{jeulin1979grossissement} provides an alternative proof of the centering property of $\alpha$.
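The sampling identity used when the uniform time is integrated out above is simply $\mathbb{E}[\int_0^U f(u)du]=\int_0^1(1-u)f(u)du$, since $\mathbb{P}(U>u)=1-u$. A short numerical illustration (assuming NumPy; the deterministic stand-in $f(u)=1/(1-u/2)$, mimicking the random integrand $1/(1-B_{uT_1})$, is our choice):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.random(10**6)

# E[ int_0^U f(u) du ] = int_0^1 (1-u) f(u) du for U uniform on [0,1],
# independent of the integrand.  Here f(u) = 1/(1 - u/2), so that
# int_0^U f(u) du = -2 log(1 - U/2).
lhs = np.mean(2.0 * np.log(1.0 / (1.0 - U / 2.0)))
rhs = 2.0 - 2.0 * np.log(2.0)   # exact value of int_0^1 (1-u)/(1-u/2) du
print(lhs, rhs)                 # both ~ 0.6137
```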
\section{A few words of conclusion}
Together with \cite{elie2013expectation} and \cite{rosenbaum2013law}, this paper is our third work where various aspects of the law of
\begin{equation*}
\alpha=\frac{B_{UT_1}}{\sqrt{T_1}}
\end{equation*} are investigated.
For example, we have considered its centering property, the explicit form of its density, which may be directly deduced from Equation \eqref{law} and Equation \eqref{dens}, and its Mellin transform. In the present paper, starting from the pseudo-Brownian bridge, we obtain some results relative to the Brownian bridge, the Brownian meander and the three dimensional Bessel process. Imhof type relations between these processes allow one to pass from one to another.
\bibliographystyle{abbrv}
\section{Introduction}
The study of the wall--crossing phenomenon in 4--dimensional ${\cal N} = 2$ supersymmetric quantum field theories \cite{SW1,SW2,KS,DG,DGS,GMN:2008,GMN:2009} is unravelling deep relations \cite{CV09,CVN,CDZ} between BPS spectroscopy and quantum cluster algebras \cite{Cluster} for the class of ${\cal N} = 2$ theories that admit a BPS quiver \cite{CV11}. In particular, the fundamental wall--crossing invariant, the $4d$ quantum monodromy $\mathbb{M}(q)$, admits various distinct factorizations into basic quantum mutations, each corresponding to a \emph{formal} stable BPS spectrum --- see \cite{CVN,CDZ,Keller:2011} and references therein. BPS quiver theories divide into two classes: complete theories \cite{CV11,CVal:complete}, for which all possible BPS chambers are physically realized, and non--complete ones, for which the physical submanifold is a proper subset of the space of stability conditions --- a fact that leads to the so-called \emph{quantum Schottky problem}.
\medskip
In ref.~\cite{CDZ}, an extension of the CNV strategy to find such factorizations was obtained to handle models more general than the square tensor ones \cite{CVN,Keller:DynkinPairs}, and it was used to study the non--square tensor models from Arnold's list of exceptional unimodal singularities \cite{AGV,EMS,Ebeling1}. Unimodal ${\cal N} = 2$ superconformal theories are the simplest non--complete ones, the physical submanifold having codimension one. The purpose of this note is to study, along the lines of \cite{CDZ}, the next layer of non--completeness: the models obtained by type IIB engineering on quasi--homogeneous bimodal singularities.
\medskip
The structure of this paper is the following: In section 2 we discuss the quasi--homogeneous elements of Arnold's bimodal singularities list. In section 3 we discuss (mostly) the information about the models coming from the $2d$ theory: the issue of non--completeness and its relation with the modality of a singularity \cite{AGV,EMS,Ebeling1,Ebeling2,Gabrielov2,Gabrielov3,GabrielovKushnirenko} are discussed in \S\,3.1; the r\^ole of the $2d$ wall--crossing group \cite{CV92,CV92bis,WClectures,HIV} is presented in \S\,3.2, manifesting itself for the even elements of the $Q$ series; in \S\,3.3 we discuss, by the RG argument of \cite{CDZ}, the issue of the physical realizability of the `strongly coupled' BPS chambers we have obtained; in \S\,3.4 we compute the number of flavor charges of the bimodal superconformal theories. In section 4, we consider the BPS spectroscopy at `strong coupling'. We refer to \cite{CDZ} for the explanation of the (extended) CNV strategy --- see in particular \S\S\,4,5 --- and we limit ourselves to stating our results, referring to appendix A for the details of the computations. The delicate interplay between cluster algebra theory and the thermodynamic Bethe ansatz --- see \cite{Keller:DynkinPairs,Keller:Triang,FZ:Ys} and references therein --- leads, as a side result of this work, to the prediction of 11 new periodic $Y$--systems; we briefly discuss this fact in \S\,5. All Coxeter--Dynkin diagrams we have used here were obtained by A.~M. Gabrielov in \cite{Gabrielov, Gabrielov2} --- see also \cite{Ebeling2}.
\section{Bimodal singularities}
\begin{table}
\caption{Exceptional bimodal singularities that are not direct sum of simple ones.}\label{ArnB1}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
name & $W(x,y,z)$ & Coxeter--Dynkin diagram\\
\hline
$\begin{matrix}\\ E_{19}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3+xy^7+z^2\\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d]\ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dr]\ar@{-}[r]&\bullet\ar@{-}[d]\ar@{..}[dr]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet }\end{gathered}$}\\\hline
$\begin{matrix}\\ Z_{17}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3y+y^8+z^2\\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dl]&\bullet \ar@{-}[d]\ar@{..}[dl] & &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ Z_{18}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3y+xy^6+z^2\\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dl]&\bullet \ar@{-}[d]\ar@{..}[dl] & &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dr]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ Z_{19}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3y+y^9+z^2\\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dl]&\bullet \ar@{-}[d]\ar@{..}[dl] & &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d] \ar@{..}[dr]\ar@{-}[r]&\bullet \ar@{-}[d]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ W_{17}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^4+x y^5+z^2\\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dl]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]& \bullet \ar@{-}[d]\ar@{..}[dl]\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
$\begin{matrix}\\ Q_{16}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3+yz^2+y^7\\ \phantom{a}\end{matrix}$ &{\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]& &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet\ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ Q_{17}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3+yz^2+xy^5\\ \phantom{a}\end{matrix}$ &{\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]& &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet\ar@{-}[d]\ar@{..}[dr] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ Q_{18}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^3+yz^2+y^8\\ \phantom{a}\end{matrix}$ &{\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]& &\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet\ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ S_{16}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^2z+yz^2+xy^4\\ \phantom{a}\end{matrix}$ & {\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]&\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ S_{17}\\ \phantom{a}\end{matrix}$ & $\begin{matrix}\\ x^2z+yz^2+y^6\\ \phantom{a}\end{matrix}$ & {\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Quasi--homogeneous elements of the 8 infinite series of bimodal singularities that are not direct sum of simple ones. We indicate the corresponding Milnor numbers in parenthesis.}\label{ArnB2}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
name & $W(x,y,z)$ & Coxeter--Dynkin diagram\\
\hline
$\begin{matrix}\\ Z_{1,0} \\ \phantom{a}\end{matrix}$ {\footnotesize (15)} & $\begin{matrix}\\ x^3 y+y^7+z^2 \\ \phantom{a}\end{matrix}$ & {\scriptsize $\begin{gathered}\xymatrix{
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dl] &\bullet \ar@{-}[d]\ar@{..}[dl]&\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d]\\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ Q_{2,0} \\ \phantom{a}\end{matrix}$ {\footnotesize (14)} & $\begin{matrix}\\ x^3 +y z^2 + x y^4 \\ \phantom{a}\end{matrix}$ & {\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]& \\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet\ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ S_{1,0} \\ \phantom{a}\end{matrix}$ {\footnotesize (14)}& $\begin{matrix}\\ x^2 z + y z^2 + y^5 \\ \phantom{a}\end{matrix}$ &{\scriptsize$\begin{gathered}\xymatrix{\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]&\\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
$\begin{matrix}\\ U_{1,0} \\ \phantom{a}\end{matrix}$ {\footnotesize (14)} & $\begin{matrix}\\ x^3 + x z^2 + x y^3 \\ \phantom{a}\end{matrix}$ & {\scriptsize$\begin{gathered}\xymatrix{
\bullet\ar@{-}[r]\ar@{-}@/_/[dd]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]\ar@{-}[r]&\bullet \ar@{..}[ddl]\ar@{-}@/_/[dd]&\\
\bullet \ar@{-}[r]\ar@{-}[d]&\bullet \ar@{-}[d]\ar@{..}[dl]\ar@{-}[r]&\bullet \ar@{-}[d]\ar@{..}[dl]& \\
\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet \ar@{-}[r]\ar@{-}[d]\ar@{..}[dr]&\bullet\ar@{-}[d] \\
\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet \ar@{-}[r]&\bullet\\
}\end{gathered}$}\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
& $q_x, q_y, q_z$ & $\hat{c}$ & $\ell$ \\\hline
$E_{19}$ & 1/3, 2/21, 1/2 & 8/7 & 18 \\
$Z_{17}$ & 7/24, 1/8, 1/2 & 7/6 & 10 \\
$Z_{18}$ & 5/17, 2/17, 1/2 & 20/17 & 14 \\
$Z_{19}$ & 8/27, 1/9, 1/2 & 32/27 & 22 \\
$W_{17}$ & 1/4, 3/20, 1/2 & 6/5 & 8 \\
$Q_{16}$ & 1/3, 1/7, 3/7 & 25/21 & 17 \\
$Q_{17}$ & 1/3, 2/15, 13/30 & 6/5 & 12 \\
$Q_{18}$ & 1/3, 1/8, 7/16 & 29/24 & 19 \\
$S_{16}$ & 5/17, 3/17, 7/17 & 21/17 & 13 \\
$S_{17}$ & 7/24, 1/6, 5/12 & 5/4 & 9 \\\hline
$Z_{1,0}$ & 2/7, 1/7, 1/2 & 8/7 & 6 \\
$Q_{2,0}$& 1/3, 1/6, 5/12 & 7/6 & 5 \\
$S_{1,0}$& 3/10, 1/5, 2/5 & 6/5 & 4 \\
$U_{1,0}$& 1/3, 2/9, 1/3 & 11/9 & 9 \\\hline
\end{tabular}
\end{center}
\caption{Chiral charges $q_i$, central charges $\hat{c}$ and orders $\ell$ of the quantum monodromy $\mathbb{M}(q)$ for the quasi--homogeneous bimodal singularities that are not square tensor models. }\label{numerology}
\end{table}
Bimodal singularities are fully classified \cite{AGV, EMS}: they are organized in 8 infinite series and 14 exceptional families. All 14 exceptional families have a quasi--homogeneous point in their moduli and, among the 8 infinite series, there are 6 families that admit one. With an abuse of language, we will refer to this set as the set of quasi--homogeneous bimodal singularities. The quasi--homogeneous potentials, $W(x,y,z)$, that correspond to the elements of this set lead to non--degenerate $2d$ ${\cal N}=(2,2)$ Landau--Ginzburg superconformal field theories that have central charge $\hat{c}<2$; therefore, according to \cite{GVW,SV}, these singularities are all at finite distance in Calabi--Yau moduli space: the local CY 3--fold $\mathscr{H}$, given by the hypersurface in $\mathbb{C}^4$
\begin{equation}\label{IRCY}
\mathscr{H} \colon W(x,y,z) + u^2 + \textrm{ lower terms} = 0.
\end{equation}
is a good candidate for the compactification of type IIB superstring that leads to the engineering of an ${\cal N} = 2$ superconformal $4d$ theory.
Among the theories obtained by engineering Type IIB superstring on bimodal singularities are the following superconformal square tensor models \cite{CVN}:
\begin{equation}
\begin{gathered}
\begin{tabular}{|c|c|c|cc|c|c|c|}\cline{1-3}\cline{6-8}
$E_{18}$ & $x^3 + y^{10} + z^2$ &$ A_2 \square A_9$ &&& $E_{20}$& $x^3 + y^{11} + z^2 $ & $A_2 \square A_{10}$\\
$W_{18}$& $x^4 + y^7 + z^2 $ & $A_3 \square A_6$ &&& $U_{16}$ & $x^3 + xz^2 + y^5$ & $D_4 \square A_4$\\
$J_{3,0}$ & $x^3 + y^9 + z^2 $ & $A_2 \square A_8$ &&& $W_{1,0} $&$ x^4 + y^6 + z^2 $ & $A_3 \square A_5$\\\cline{1-3}\cline{6-8}
\end{tabular}
\end{gathered}
\end{equation}
In the present work we will focus on the bimodal singularities that are not in this list --- see tables \ref{ArnB1}, \ref{ArnB2}, \ref{numerology}. We remark that the even rank elements of the $Q$ series are also square tensor models; indeed,
\begin{equation}\label{Qs}
\begin{gathered}
\begin{tabular}{|c|c|c|cc|c|c|c|}\cline{1-3}\cline{6-8}
$Q_{10}$ &$ x^2 z + y^3 + z^4$ & $A_2 \square D_5 $&&& $Q_{12}$&$ x^2 z + y^3 + z^5 $ &$ A_2 \square D_6$\\
$Q_{16}$ &$ x^3 + yz^2 + y^7 $&$ A_2 \square D_8$&&&$ Q_{18}$&$ x^3 + yz^2 + y^8 $ &$ A_2 \square D_9$\\\cline{1-3}\cline{6-8}
\end{tabular}
\end{gathered}
\end{equation}
we will discuss this point in \S\,\ref{2dWC}.
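The numerology of table \ref{numerology} follows from quasi--homogeneity alone: the weights solve one linear equation per monomial of $W$, the central charge is $\hat{c}=\sum_i(1-2q_i)$, and the Milnor number is $\mu=\prod_i(q_i^{-1}-1)$. The following sketch (assuming SymPy; the helper name is ours) reproduces, e.g., the $Q_{2,0}$ and $E_{19}$ entries:

```python
import sympy as sp

def weights_chat_mu(monomials):
    """Weights, central charge c_hat and Milnor number mu of a
    quasi-homogeneous W(x,y,z), given the exponent triples of its monomials."""
    qx, qy, qz = sp.symbols('qx qy qz')
    sol = sp.solve([a*qx + b*qy + c*qz - 1 for (a, b, c) in monomials],
                   (qx, qy, qz), dict=True)[0]
    q = [sol[s] for s in (qx, qy, qz)]
    chat = sum(1 - 2*qi for qi in q)                      # c_hat
    mu = (1/q[0] - 1) * (1/q[1] - 1) * (1/q[2] - 1)       # Milnor number
    return q, chat, mu

# Q_{2,0}: W = x^3 + y z^2 + x y^4
print(weights_chat_mu([(3, 0, 0), (0, 1, 2), (1, 4, 0)]))
# ([1/3, 1/6, 5/12], 7/6, 14)
# E_19: W = x^3 + x y^7 + z^2
print(weights_chat_mu([(3, 0, 0), (1, 7, 0), (0, 0, 2)]))
# ([1/3, 2/21, 1/2], 8/7, 19)
```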
\section{${\cal N} = 2$ superconformal models}
\subsection{Modality and completeness}\label{ModComp}
\begin{table}
\begin{center}
\begin{tabular}{|cc||cc|}\hline
$Z_{1,0}$&$x^2y^3(1), xy^6(\frac{8}{7})$&$Q_{2,0}$&$x^2y^2(1), x^2y^3(\frac{7}{6})$\\\hline
$S_{1,0}$&$zy^3(1), zy^4(\frac{6}{5})$&$U_{1,0}$&$zy^3(1), zy^4(\frac{11}{9})$\\\hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|cc|cc|cc|}\hline
$E_{19}$&$y^{11}(\frac{22}{21}), y^{12}(\frac{8}{7})$&$W_{17}$&$y^7(\frac{21}{20}), y^8(\frac{6}{5})$&&\\\hline
$Z_{17}$&$xy^6(\frac{25}{24}), xy^7(\frac{7}{6}) $&$Z_{18}$&$y^9(\frac{18}{17}), y^{10}(\frac{20}{17})$&$Z_{19}$&$xy^7(\frac{29}{27}), xy^8(\frac{32}{27})$\\\hline
$Q_{16}$&$xy^5(\frac{22}{21}), xy^6(\frac{25}{21})$&$Q_{17}$&$y^8(\frac{16}{15}), y^9(\frac{6}{5})$&$Q_{18}$&$xy^6(\frac{13}{12}), xy^7(\frac{29}{24})$\\\hline
$S_{16}$&$y^6(\frac{18}{17}), y^7(\frac{21}{17})$&$S_{17}$&$zy^4(\frac{13}{12}), zy^5(\frac{5}{4})$&&\\\hline
\end{tabular}
\end{center}
\caption{Primary deformations of dimension $\geq 1$ of the quasi--homogeneous bimodal singularities that are not square tensor models. The number in parenthesis is the dimension of the deformation.}\label{deform}
\end{table}
The Landau--Ginzburg models we are considering here have chiral ring of primary operators \cite{VW,LVW}
\begin{equation}\label{chiralring}
\mathscr{R} \simeq \mathbb{C}[x,y,z] / J_W,
\end{equation}
where $J_W$ is the jacobian ideal of $W$, \emph{i.e.} the ideal of $\mathbb{C}[x,y,z]$ generated by the partials $\partial_i W$. The theories being non--degenerate, the ring is finite--dimensional as a $\mathbb{C}$--algebra and its dimension, $\mu$, is called the Milnor number (or multiplicity) of the singularity $W(x,y,z)$,
\begin{equation}\label{Milnornumber}
\mu \equiv \text{dim}_{\mathbb{C}} \mathscr{R}.
\end{equation}
This number equals the Witten--index $\text{tr}(-)^F$ of the theory, by the spectral flow isomorphism \cite{LVW,C91}. Moreover, by the $2d/4d$ correspondence, $\mu$ equals the rank of the charge lattice $\Gamma$ of the $4d$ theory. $\text{tr}(-)^F$ jumps at the infrared fixed point obtained by perturbing the theory away from criticality with primary relevant perturbations: this corresponds to the fact that taking this infrared limit we are projecting onto a proper subalgebra of $\mathscr{R}$. \emph{Modality} is the cardinality of the set of perturbations that generate renormalization flows asymptotically preserving the Witten--index \cite{Gabrielov3}. From this definition, it follows that it can be computed as the number of marginal and irrelevant primary operators in a monomial basis of the chiral ring\footnote{A result known in the mathematical literature as the fact that inner modality and modality coincide \cite{EMS, GabrielovKushnirenko}.}.
\medskip
The $4d$ theories obtained by geometrically engineering Type IIB on $\mathscr{H}$ are \emph{non}--complete quiver quantum field theories in the sense of \cite{CV11}. This means \cite{CDZ} that the image in $\mathbb{C}^{\mu}$, under the holomorphic map $\varpi : \mathcal{D} \rightarrow \mathbb{C}^{\mu}$ defined by the central charge function, of the domain $\mathcal{D}$ in parameter space that corresponds to consistent QFT's has nonzero codimension. Variations of the central charge $Z_i \rightarrow Z_i + \delta Z_i$ correspond to infinitesimal deformations of the periods of the holomorphic top form $\Omega$ associated to deformations $\delta t_{\alpha}$ of the complex structure of $\mathscr{H}$ of the form
\begin{equation}
W(x,y,z) + u^2 + \sum_{\alpha} \delta t_{\alpha} \phi_{\alpha} = 0
\end{equation}
where $\{ \phi_{\alpha} \}$ is a basis of chiral primaries for the chiral ring. The quantum--obstructed variations $(\delta Z_i)_{\textrm{obs}}$ are the ones normal to the physical submanifold $\varpi (\mathcal{D}) \subset \mathbb{C}^{\mu}$. The $2d$ renormalization group allows us to identify such directions. Indeed, the $2d$ theory has an infrared conformal fixed point dictated by Zamolodchikov's $c$--theorem \cite{Zamolod}. The infrared fixed point is stable under perturbation by irrelevant operators. Correspondingly, variations of the periods of $\Omega$ along these directions of the basis of the Milnor fibration \cite{Milnor} are forbidden physically, since these $2d$ deformations renormalize away. Thus, the unphysical deformations of the theory $(\delta Z_i)_{\textrm{obs}}$ are precisely those corresponding to the irrelevant primary perturbations. In computing the BPS spectra, the combinatorics of the quantum clusters is not sensitive to this fact: one can compute a mathematically consistent spectrum that corresponds to a BPS chamber that cannot be realized physically due to the above phenomenon --- this is the root of the \emph{quantum Schottky problem}.
\medskip
In the case of the exceptional bimodal singularities, there are two quantum obstructed deformations; therefore the codimension of $\varpi (\mathcal{D}) \subset \mathbb{C}^{\mu}$ is 2, while for the non--exceptional bimodals there is only one such deformation, the other modulus being a marginal deformation --- see table \ref{deform}. As an example, consider the $Q_{2,0}$ theory. Its marginal deformation is $x^2 y^2$, an operator equivalent, in the chiral ring, to $y^6$. By deforming $Q_{2,0}$ with $y^6$, we obtain the equivalence $Q_{2,0} \sim A_2 \square D_7$.
\subsection{$2d$ wall--crossings and Coxeter--Dynkin graphs}\label{2dWC}
As already mentioned in \cite{CDZ}, Coxeter--Dynkin graphs are \emph{not} unique, depending on the choice of a distinguished homology basis. Equivalent distinguished bases are related by braid group (Picard--Lefschetz) transformations, \emph{i.e.} by the $2d$ wall--crossing group. Whenever we switch the position of two {\sc susy} vacua, say $| i \rangle$ and $| i+1 \rangle$, in the $W$ plane, we cross a $2d$ wall of marginal stability and this has the effect of a phase transition in the $2d$ BPS spectrum. Accordingly, there is a mapping, $\alpha_i$, of the strongly distinguished basis $\{ \delta_k \}$ into a new one $\{ \delta^{\prime} _k \}$ as follows:
\begin{equation}\label{2dWX}
\alpha_i \colon \begin{cases}
\delta^{\prime} _j = \delta_j \text{ for } j \neq i, i+1\\
\delta^{\prime} _{i+1} = \delta_i\\
\delta^{\prime} _i = \delta_{i+1} + (\delta_{i+1} \cdot \delta_i ) \, \delta_i.
\end{cases}
\end{equation}
If the singularity has Milnor number $\mu$, there are $\mu - 1$ such operations: $\alpha_i$, $1 \leq i < \mu$. It is easy to check that they satisfy the braid group relations
\begin{equation}\label{notKeller}
\begin{gathered}
\begin{tabular}{rcl}
$\alpha_i \alpha_{i+1} \alpha_i = \alpha_{i+1} \alpha_i \alpha_{i+1}$ && for $1 \leq i \leq \mu - 2$ \\
$\alpha_i \alpha_j = \alpha_j \alpha_i $ && for $|i-j| \geq 2$.
\end{tabular}
\end{gathered}
\end{equation}
so that this is a presentation of $B_{\mu}$, the (Artin) braid group with $\mu$ strands\footnote{We stress that the $2d$ wall--crossing group is not, in principle --- according to proposition 9 in I.7.6. of \cite{Bourbaki:Algebra} --- equivalent to the braid group generated by Seidel--Thomas twists, ${\tt Braid}(Q)$ \cite{Keller:2011}, that has generators $\sigma_i$, $i \in Q_0$ and relations:
$$
{\tt Braid}(Q) \colon \begin{cases}
\s_i \s_j = \s_j \s_i \text{ if } i \text{ and } j \text{ are not linked by any arrow}\\
\s_i \s_j \s_i = \s_j \s_i \s_j \text{ if there is one arrow between } i \text{ and } j\\
\text{no relation if there is more than one arrow between } i \text{ and } j.
\end{cases}
$$
If one considers, say, the Dynkin $A_m$ quiver, ${\tt Braid} (A_m)$ is the $B_{m+1}$ braid group \cite{SeidelThomas}, while the $2d$ wall--crossing group is $B_m$. This is the situation whenever the graph is Dynkin: the $2d$ wall--crossing group embeds in ${\tt Braid}(Q)$; the situation gets more complicated whenever the quiver is not a tree.}. Moreover we are allowed by PCT to reverse the orientation of the cycles:
\begin{equation}
r_i \colon \begin{cases}
\delta^{\prime} _j = \delta_j \text{ for } i \neq j \\
\delta^{\prime} _{i} = - \delta_i.
\end{cases}
\end{equation}
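As a check of the braid relations \eqref{notKeller}, the action \eqref{2dWX} can be implemented mechanically. The following Python sketch (ours, purely illustrative) realizes $\alpha_i$ on a tuple of ambient lattice vectors, assuming a \emph{symmetric} intersection form with $\delta_i\cdot\delta_i = -2$, as appropriate for vanishing cycles of singularities in three variables:

```python
import random

def pairing(P, u, v):
    # intersection number u . v with respect to the ambient form P
    return sum(ui * sum(pij * vj for pij, vj in zip(Pi, v)) for ui, Pi in zip(u, P))

def alpha(i, basis, P):
    # 2d wall-crossing move: delta'_i = delta_{i+1} + (delta_{i+1} . delta_i) delta_i,
    # delta'_{i+1} = delta_i, all other entries unchanged
    b = list(basis)
    di, dj = b[i], b[i + 1]
    c = pairing(P, dj, di)
    b[i] = tuple(x + c * y for x, y in zip(dj, di))
    b[i + 1] = di
    return b

def random_form(n, seed=1):
    # random symmetric integer form with diagonal -2 (vanishing-cycle convention)
    rng = random.Random(seed)
    P = [[0] * n for _ in range(n)]
    for i in range(n):
        P[i][i] = -2
        for j in range(i + 1, n):
            P[i][j] = P[j][i] = rng.randint(-2, 2)
    return P

n = 5
P = random_form(n)
e = [tuple(int(k == i) for k in range(n)) for i in range(n)]
lhs = alpha(0, alpha(1, alpha(0, e, P), P), P)   # a_0 a_1 a_0
rhs = alpha(1, alpha(0, alpha(1, e, P), P), P)   # a_1 a_0 a_1
```

Under these assumptions one finds `lhs == rhs`, and moves with $|i-j|\geq 2$ commute, in agreement with \eqref{notKeller}; note also that each $\alpha_i$ is a Picard--Lefschetz reflection followed by a transposition, so the self-intersections $-2$ are preserved along the orbit.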
Consider the $2d$ quantum monodromy, $(S^{-1})^t S$, where
\begin{equation}
S_{ij} = \delta_{ij} - \begin{cases}
\delta_i \cdot \delta_j \text{ for } i < j\\
0 \qquad \text{otherwise}.
\end{cases}
\end{equation}
The spectrum of this operator is physical:
\begin{equation}\label{2dcheck}
{\tt Eigenvalues}[ (S^{-1})^t S] = \{ \texttt{exp}[2 \pi i ( q_i - \hat{c}/2)] \}
\end{equation}
where $q_i$ are the chiral charges of a basis of chiral primaries at the $\hat{c}$ conformal fixed point. Being physical, the spectrum of the $2d$ quantum monodromy is conserved along the orbits of the $2d$ wall--crossing group \cite{CV92}.
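Eq.\,\eqref{2dcheck} is easily tested in the $A_n$ minimal models $W = x^{n+1}$ (plus quadratic terms), where the chiral primaries $x^k$ have $q_k = k/(n+1)$ and $\hat c = (n-1)/(n+1)$, while the linear $A_n$ Coxeter--Dynkin diagram gives $\delta_i \cdot \delta_{i+1} = 1$. In the sketch below (ours, for illustration) we check that $\prod_k \big[(S^{-1})^t S - e^{2\pi i (q_k - \hat c/2)}\big]$ vanishes; since the predicted phases are distinct, this pins down the spectrum:

```python
import cmath

def stokes(n):
    # S_ij = delta_ij - (delta_i . delta_j) for i < j; linear A_n diagram
    return [[1 if i == j else (-1 if j == i + 1 else 0) for j in range(n)]
            for i in range(n)]

def monodromy(n):
    # H = (S^{-1})^t S; for the bidiagonal S above, S^{-1} is the upper
    # triangular matrix of ones
    S = stokes(n)
    Sinv = [[1 if j >= i else 0 for j in range(n)] for i in range(n)]
    return [[sum(Sinv[k][i] * S[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_defect(n):
    # max entry of prod_k (H - exp(2 pi i (q_k - chat/2))), q_k = k/(n+1)
    chat = (n - 1) / (n + 1)
    H = [[complex(x) for x in row] for row in monodromy(n)]
    A = [[complex(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        lam = cmath.exp(2j * cmath.pi * (k / (n + 1) - chat / 2))
        B = [[H[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
        A = [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
    return max(abs(A[i][j]) for i in range(n) for j in range(n))
```

For instance, for $A_2$ one has $H = \begin{smallmatrix}1 & -1\\ 1 & 0\end{smallmatrix}$ with eigenvalues $e^{\pm i\pi/3}$, matching $q_k - \hat c/2 = \pm 1/6$.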
\medskip
If the superpotential of the ${\cal N} = (2,2)$ $2d$ Landau--Ginzburg superconformal theory has nonzero modality, then there are directions of the $2d$ renormalization group flow along which the Witten--index is conserved also asymptotically. The behaviour of the Coxeter--Dynkin graph under these deformations is encoded in the following proposition:
\smallskip
{\bf Proposition} (1 of \cite{Gabrielov2}): all the irrelevant and marginal deformations of an ${\cal N} = (2,2)$ $2d$ Landau--Ginzburg superconformal theory with $\mu < \infty$ lead to equivalent configurations of vacua and interpolating BPS solitons, where, by equivalent, we mean that they are in the same $2d$ wall--crossing group orbit up to PCT.
\smallskip
This proposition is the key to understanding the phenomenon we encountered with the even elements of the $Q$ series: it is just the $2d$ wall--crossing in action. The diagrams we reported in \cite{CDZ} and in table \ref{ArnB1} refer to an irrelevant deformation, subsequently tuned to zero for consistency of the quantum theory, while the ones in which the square tensor form is explicit are obtained directly from the undeformed theory: since both sets of diagrams belong to a $\mu = \text{const.}$ stratum of $Q_{2k}$, they are equivalent.
\medskip
Indeed, for all $k$'s the result is in perfect agreement with \cite{CVN}:
$$Q_{2k} \colon
\begin{gathered}
\underbrace{\xymatrix{
\bullet\ar@{-}[d]\ar@{-}@/_/[rr]&\bullet\ar@{-}[d]\ar@{-}[r]&\bullet\ar@{..}[dr]\ar@{-}[d]\ar@{-}[r]&\bullet\ar@{-}[d]& \dots &\bullet\ar@{-}[d]\ar@{-}[r]\ar@{..}[dr]&\bullet\ar@{-}[d]\\
\bullet\ar@{-}@/_/[rr]&\bullet\ar@{-}[r]&\bullet\ar@{..}@/^/[ull]\ar@{..}[ul]\ar@{-}[r]&\bullet& \dots &\bullet\ar@{-}[r]&\bullet\\
}}_{k \text{ elements }}
\end{gathered}
$$
\smallskip
We remark that $Q_{14}$ above is just $Q_{2,0}$.
\subsection{$2d$ Renormalization group flows}\label{RGflows}
In \cite{CDZ} both a mathematical and a physical argument in favour of the physical realizability of the BPS chamber in which we will compute the spectra were given. The same argument carries over to the theories obtained from quasi--homogeneous bimodal singularities, so let us briefly review the physical (and more stringent) one here. The idea is that the 14 superconformal models which are not already of the form $G\square G^{\prime}$ can be obtained each from an appropriate square tensor model of type $A_n \square G$ by perturbing it with a suitable IR--relevant operator\footnote{IR--relevant at the UV--fixed point described by the $A_n \square G$ theory.} $\phi_{\star}$. The IR--relevant perturbation is chosen in such a way that the corresponding ${\cal N} = 2$ theory will flow in the IR to the given Arnold superconformal theory.
\medskip
The trivial instances of this RG process are the following theories (we indicate in parenthesis the dimension of $\phi_{\star}$):
\begin{equation}
\begin{gathered}
\begin{tabular}{c}
$A_2 \square A_{10} \colon x^3 + y^{11} + z^2 \xrightarrow{\ xy^7 \ (32/33)\ } E_{19}$\\
$A_3 \square A_7 \colon x^4 + y^8 + z^2 \xrightarrow{\ x^3 y \ (7/8)\ } Z_{17}$\\
$A_3 \square A_6 \colon x^4 + y^7 + z^2 \xrightarrow{\ x y^5 \ (27/28)\ } W_{17}$\\
$A_3 \square A_6 \colon x^4 + y^7 + z^2 \xrightarrow{\ x^3 y \ (25/28)\ } Z_{1,0}$\\
$A_4 \square D_4 \colon x^2 z + x^3 + y^5 \xrightarrow{\ x y^3 \ (14/15)\ } U_{1,0}$.\\
$A_4 \square D_4 \colon x^2 z + z^3 + y^5 \xrightarrow{\ y z^2 \ (13/15)\ } S_{1,0}$\\
\end{tabular}
\end{gathered}
\end{equation}
The theories $Z_{18}, S_{16}$ and the ones of the $Q$ series are better described as the final IR points of RG `cascades'
\begin{equation}
\begin{gathered}
\begin{tabular}{c}
$A_3 \square A_8 \colon x^4 + y^9 + z^2 \xrightarrow{\ x^3 y \ (31/36)\ } Z_{19} \xrightarrow{\ x y^6 \ (26/27)\ } Z_{18} $\\
$A_5 \square D_4 \colon x^2 z + z^3 + y^6 \xrightarrow{\ y z^2 \ (5/6)\ } S_{17} \xrightarrow{\ x y^4 \ (23/24)\ } S_{16}$\\
$A_6 \square D_4 \colon x^3 + z^3 + y^7 \xrightarrow{\ y z^2 \ (17/21)\ } Q_{16} \xrightarrow{\ x y^4 \ (19/21)\ } Q_{2,0}$\\
$A_7 \square D_4 \colon x^3 + z^3 + y^8 \xrightarrow{\ y z^2 \ (19/24)\ } Q_{18} \xrightarrow{\ x y^5 \ (23/24)\ } Q_{17}$.\\
\end{tabular}
\end{gathered}
\end{equation}
As explained in \cite{CDZ}, this RG argument applies directly to the theories at their superconformal point, \emph{i.e.} when all relevant deformations are switched off. We can give volumes to the (special lagrangian) 3--cycles $\gamma_i$ in the third homology group of the Calabi--Yau 3--fold
$$ y^{n+1} + W_{G}(x,z) + u^2 = 0,$$
by the primary deformation of this singularity. The D3--branes that wrap around these 3--cycles, therefore, get central charges
$$ Z(\gamma_i) = \int_{\gamma_i} \Omega \, ,$$
becoming the BPS particles of the massive deformation of the corresponding 4 dimensional superconformal theory. If we now deform this theory with $\phi_{\star}$ and with chiral primaries of dimension $q$ less than $q(\phi_{\star})$, along the 2 dimensional RG flow some of the above 3--cycles start increasing their volume, which becomes bigger and bigger the closer we get to the IR fixed point. Accordingly, the corresponding BPS particle masses increase. Thus, these particles decouple and are absent from the BPS spectrum of the 4 dimensional theory obtained by engineering type IIB on the corresponding primary deformation of $\mathscr{H}$, the Calabi--Yau 3--fold associated to the IR theory. Obviously, the two spectra are comparable if, along the flow line of the 2 dimensional RG, we do not cross any wall of marginal stability in 4 dimensions. Since checking this fact may be difficult in practice\footnote{One has to check the existence of a `tuning' of the complex deformation $\lambda \phi_{\star}$ such that it gives rise to a path that avoids the wall--crossings, while keeping control of the possible mixing between the conserved quantum currents.}, we use the above idea in the weak sense of \cite{CDZ}: whenever a (mathematically correct) BPS spectrum can be naturally interpreted as obtained from a physically realized one by the decoupling of some heavy states along the 2 dimensional RG flow above described, we take this fact just as circumstantial evidence for the corresponding BPS chamber to be physically realizable.
\subsection{Flavor charges}
An important invariant of the theory, \emph{i.e.} a quantity that remains uniformly constant over $\mathcal{D}$, is the number of flavor charges, $n_f$, the dimension of the Cartan subalgebra of the flavor symmetry group. At a generic point of $\mathcal{D}$ the theory has flavor group $U(1)^{n_f}$, while at certain points of parameter space this symmetry can enhance to a non--Abelian one $G_f$. From the point of view of BPS quiver theory, $n_f$ is just the number of zero eigenvalues of the exchange matrix of the quiver, indeed, a charge $\gamma_f$ is flavor if and only if
\begin{equation}\label{flatt}
\langle \gamma , \gamma_f \rangle_D = 0 \qquad \forall \, \gamma \in \Gamma
\end{equation}
where $\Gamma$ is the charge lattice of the theory. In particular, $n_f = \textrm{rank } \Gamma \, \textrm{ mod } 2$. A general consequence of $2d/4d$ correspondence \cite{CVN, CV11, CDZ} is that $n_f$ is equal to the number of chiral primaries of dimension $\hat{c}/2$. As explained in \cite{CDZ}, for the non--exceptional theories one can read off this number from table 2 of ref.~\cite{lenzing}: it is just the number of $\Phi_2$ factors in the factorization of the characteristic polynomial of the strong monodromy $H$ into cyclotomic polynomials\footnote{Although, there is a misprint there: the correct flavor charge of $Z_{1,0}$ is 3.}. For the other singularities, the degeneracies of the chiral ring elements are captured by the Poincar\'e polynomial:
\begin{equation}
\sum_\alpha t^{q_{\alpha}} = \prod_i \frac{(1- t^{1-q_i})}{1-t^{q_i}}
\end{equation}
where the $q_i$ are the charges of table \ref{numerology} and the sum is over all the elements of a monomial basis of the chiral ring. Expanding the RHS, $n_f$ is the (positive or zero) \emph{integer} coefficient of $t^{\hat{c}/2}$ in the series. So,
\begin{equation}\label{flavorch}
\begin{gathered}
n_f = \begin{cases}
3 \textrm{ for } Z_{1,0}\\
2 \textrm{ for } Q_{2,0}, S_{1,0}, J_{3,0}, Z_{18} \\
1 \textrm{ for odd rank exceptionals and } W_{1,0}\\
0 \textrm{ otherwise.}
\end{cases}
\end{gathered}
\end{equation}
\subsection{A remark about weak coupling}
Let us end this section with a remark about the consequences of the above analysis on the weak coupling limit of these theories, although we will not discuss it in the present paper. Consider the possibility that one of the above theories admits in some corner of its parameter space the structure of a $G$ SYM weakly coupled to some other sector. Assume, momentarily, that such a description is purely lagrangian. The dimension of the parameter space, in this case, can be computed as
\begin{equation}
\begin{gathered}
\text{dim }(\mathcal{D}) = \# (\text{gauge couplings}) + \text{ dim }(\text{Coulomb branch}) + \#(\text{masses})\\
= \# (\text{simple factors of G}) + \text{rk}(G) + n_f
\end{gathered}
\end{equation}
By the $4d/2d$ correspondence, $\mu$ is equal to the rank of the charge lattice $\Gamma$ that, in this case, is given by \cite{CV11}:
\begin{equation}\label{rkGamma}
\text{rk}(\Gamma) = 2 \text{ rk}(G) + n_f.
\end{equation}
Thus
\begin{equation}\label{groupmagic}
\mu - \text{dim }(\mathcal{D}) = \text{rk}(G) - \# (\text{simple factors of G}) = \text{codim} (\mathcal{D}).
\end{equation}
This equality holds in a \emph{lagrangian} corner of the parameter space: for a non--lagrangian one we expect that it becomes an inequality,
\begin{equation}
\text{rk}(G) - \# (\text{simple factors of G}) \leq \text{codim} (\mathcal{D}),
\end{equation}
since there could be more complicated mechanisms that lead to forbidden directions. From \S\,3.1 --- see table 4 --- we are able to compute the codimension of $\mathcal{D}$ in $\mathbb{C}^{\mu}$
\begin{equation*}
\text{codim} (\mathcal{D}) = \begin{cases} 2 \text{ for exceptional bimodals;}\\
1 \text{ otherwise.}
\end{cases}
\end{equation*}
Therefore, if one of the theories we are considering has the structure of a $G$ SYM weakly coupled to some other sector, which may be non--lagrangian, just by counting dimensions we are able to constrain the possible gauge groups $G$: for non-exceptional bimodals the possibilities are
\begin{equation}
SU(2)^k,\quad SU(2)^k\times SU(3),\quad SU(2)^k\times SO(5),\quad SU(2)^k\times G_2,
\end{equation}
while for exceptional bimodals we have the above cases and the following ones
\begin{equation}
\begin{gathered}
SU(2)^k\times SU(3)\times SO(5),\quad SU(2)^k\times SU(3)\times G_2, \\
SU(2)^k\times SO(5)\times G_2, \quad SU(2)^k\times SU(4),\\
SU(2)^k\times SO(6),\quad SU(2)^k\times SO(7)
\end{gathered}
\end{equation}
for some $k \in \mathbb{N}$. Since $\mu = \text{rk}(\Gamma)$, the possible number of $SU(2)$ factors appearing here is constrained via \eqref{rkGamma} by the Witten--index of the corresponding 2d theory.
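The enumeration behind these lists is elementary and can be scripted. The sketch below (ours) lists the sets of \emph{distinct} non-$SU(2)$ simple factors of rank $\leq 3$ compatible with $\text{rk}(G) - \#(\text{simple factors of } G) \leq \text{codim}(\mathcal{D})$; the $SU(2)^k$ part contributes zero to the left-hand side and is constrained only via \eqref{rkGamma}. As in the lists above, $SU(4)$ and $SO(6)$ are kept as distinct entries:

```python
from itertools import combinations

# Lie-algebra ranks of the candidate non-SU(2) simple factors
RANK = {"SU(3)": 2, "SO(5)": 2, "G2": 2, "SU(4)": 3, "SO(6)": 3, "SO(7)": 3}

def allowed(codim):
    # sets of distinct non-SU(2) factors with sum over factors of (rank - 1) <= codim
    out = set()
    for r in range(len(RANK) + 1):
        for combo in combinations(sorted(RANK), r):
            if sum(RANK[f] - 1 for f in combo) <= codim:
                out.add(combo)
    return out

non_exceptional = allowed(1)   # codim(D) = 1
exceptional = allowed(2)       # codim(D) = 2
```

`non_exceptional` consists of the empty product plus $SU(3)$, $SO(5)$, $G_2$, while `exceptional` adds the six further possibilities listed above.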
\section{BPS spectra at strong coupling}
\subsection{Quivers and potentials}\label{CDZMethod}
In ref.\cite{CDZ}, by the $2d/4d$ correspondence, a method for obtaining a $4d$ BPS quiver with potential from a Coxeter--Dynkin graph of the corresponding $2d$ SCFT was outlined. The method carries over in all cases for which the resulting basic algebra of step 2, $\mathscr{A}$, is such that $\texttt{gl.dim.}\mathscr{A} \leq 2$. It consists of the following four steps:
\smallskip
\emph{Step 0:} find an appropriate $2d$ configuration of vacua and BPS solitons in the $2d$ wall--crossing group orbit of the theory.
\smallskip
\emph{Step 1:} choose an orientation of the solid edges of the Coxeter--Dynkin graph such that the dashed edges make sense as relations in the path algebra of the quiver $Q$ so obtained.
\smallskip
\emph{Step 2:} the dashed edges are interpreted as generating an ideal $J$ in the path algebra $\mathbb{C} Q$. $\mathscr{A}$ is the basic algebra $\mathbb{C}Q / J$.
\smallskip
\emph{Step 3:} interpret the ideal $J$ as the Jacobian ideal of a $1d$ superpotential $\mathcal{W}$. Add to $Q$ the arrows that allow one to interpret the relations as $F$--term flatness conditions. This way the completed quiver $\widetilde{Q}$ is obtained. The relevant algebra --- \emph{i.e.} the one whose stable representations are related to BPS spectra --- is the 3--CY completion of $\mathscr{A}$, $\Pi_3 (\mathscr{A}) \simeq \mathbb{C} \widetilde{Q} / \partial \mathcal{W} $.
\smallskip
Let us notice that this superpotential is interpreted as describing the supersymmetric quantum mechanics that governs the dynamics of the worldline of the D--brane system used in the engineering of the theory. With the above method we have obtained quivers with potentials ($\widetilde{Q}$,$\mathcal{W}$) for all the theories in the present paper. For each theory, starting from this representative of the quiver with potential, one can easily obtain, by repeated mutations, a \emph{square form} representative --- \emph{i.e.} the one obtained by eliminating all the dashed arrows from the Coxeter--Dynkin diagram and orienting all the squares; the superpotential of this quiver is given by the traces of the cycles corresponding to the oriented squares.
\subsection{Finite BPS chambers}\label{BPSspectra}
All the square form representatives of the BPS quivers of the ${\cal N} = 2$ theories that we are considering admit decompositions into complete families of $ADE$ Dynkin subquivers \cite{CDZ} $\{ G_a \}_{a \in A}$; this means that the charge lattice $\Gamma$ is isomorphic to the following direct sum of root lattices of Lie algebras:
\begin{equation}
\Gamma \simeq \bigoplus_{a \in A} \Gamma_{G_a}.
\end{equation}
To these decompositions, moreover, there correspond Weyl--factorized sink--sequences of Coxeter--type and, therefore, there are some algebraically obvious finite BPS chambers. In these chambers, the BPS spectra consist of one hypermultiplet per charge vector of the direct--sum form
\begin{equation}
0\oplus ... \oplus \alpha^{(a)} \oplus 0 \oplus ... \oplus 0, \qquad \alpha^{(a)} \in \Delta_+(G_a),
\end{equation}
having only one non--zero component (equal to a positive root of the corresponding Lie algebra $G_a$). In these cases the consistency of the mass spectrum follows from comparison with the (obviously consistent) mass spectrum of the $G_a$--type Argyres--Douglas models in the maximal chamber \cite{CVN}. Our results are the following\footnote{The notation $(..., G \times N, ... )$ means that the Dynkin graph $G$ appears $N$ times in the family.}:
\begin{itemize}
\item $ { \bf E_{19} }$ : this theory is a one--point extension of $A_2 \square A_9$. We have four algebraically trivial finite chambers:
\begin{equation}\label{E19ch}
\begin{tabular}{c|c}
$( A_2 \times 8 , A_3 )$ & $(A_2 \times 9, A_1)$\\\hline
$(A_{10} , A_9)$ & $(A_9 \times 2, A_1) $
\end{tabular}
\end{equation}
\item ${ \bf Z_{17}}$ : we have two algebraically trivial finite chambers:
\begin{equation}\label{Z17ch}
\begin{tabular}{c|c}
$( A_3, A_7 , A_7 )$ & $(A_3 \times 3, A_2 \times 4)$\\
\end{tabular}
\end{equation}
\item ${\bf Z_{18}}$ : this is the one point extension of the previous one:
\begin{equation}\label{Z18ch}
\begin{tabular}{c|c}
$( A_3, A_7 , A_8)$ & $( A_3, A_7 , A_7, A_1)$\\\hline
$(A_3 \times 3, A_2 \times 3, A_3)$ & $(A_3 \times 3, A_2 \times 4, A_1)$
\end{tabular}
\end{equation}
\item $ {\bf Q_{2k}} $ : the canonical chambers of $A_2 \square D_k$ --- see eq. \eqref{Qs} --- and the following two algebraically trivial finite chambers:
\begin{align}
&{\bf Q_{2,0}} \colon \begin{tabular}{c|c}
$( D_4 \times 2, A_2 \times 3)$ & $(A_2 \times 2, A_5 \times 2)$\\
\end{tabular}\\
&{\bf Q_{16}} \colon \begin{tabular}{c|c}
$( D_4 \times 2, A_2 \times 4)$ & $(A_2 \times 2, A_6 \times 2)$ \label{Q16ch}\\
\end{tabular}\\
&{\bf Q_{18}} \colon \begin{tabular}{c|c}
$( D_4 \times 2, A_2 \times 5)$ & $(A_2 \times 2, A_7 \times 2)$\label{Q18ch}\\
\end{tabular}
\end{align}
\item ${\bf Q_{17}}$ : this is a one point extension of $Q_{16}$:
\begin{equation}\label{Q17ch}
\begin{tabular}{c|c}
$( D_4 \times 2, A_2 \times 4,A_1)$ & $(A_2 \times 2, A_6 \times 2,A_1)$\\\hline
$( D_4 \times 2, A_2 \times 3,A_3)$ & $(A_2 \times 2, A_6, A_7)$
\end{tabular}
\end{equation}
\item For all the others we have two algebraically trivial finite chambers:
\begin{align}
&{\bf Z_{19}} \colon
\begin{tabular}{c|c}
$( A_3, A_8 , A_8 )$ & $(A_3 \times 3, A_2 \times 5)$\\
\end{tabular}\\
&{\bf W_{17}} \colon
\begin{tabular}{c|c}
$( A_5, A_6 , A_6 )$ & $(A_3 \times 5, A_2 )$\label{W17ch} \\
\end{tabular}\\
&{\bf S_{16}} \colon
\begin{tabular}{c|c}
$( D_4 \times 2, A_3 \times 2, A_2)$ & $(A_2, A_4, A_5 \times 2)$\\
\end{tabular}\\\label{S17ch}
&{\bf S_{17}} \colon
\begin{tabular}{c|c}
$( D_4 \times 2, A_3 \times 3)$ & $(A_2, A_5 \times 3)$\\
\end{tabular}\\
&{\bf Z_{1,0}} \colon
\begin{tabular}{c|c}
$( A_3 , A_6 \times 2)$ & $(A_3 \times 3, A_2 \times 3)$\\
\end{tabular}\\\label{S10ch}
&{\bf S_{1,0}} \colon \begin{tabular}{c|c}
$( D_4 \times 2, A_3 \times 2)$ & $(A_2, A_4 \times 3)$\\
\end{tabular}\\
&{\bf U_{1,0}} \colon
\begin{tabular}{c|c}
$( D_4 \times 3, A_2)$ & $(A_3 \times 2, A_4 \times 2)$\\
\end{tabular}
\end{align}
\end{itemize}
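A quick consistency check on the chambers listed above: in each decomposition the ranks of the subquivers must add up to $\mu = \text{rk}\,\Gamma$, and the number of hypermultiplets equals the total number of positive roots $\sum_a |\Delta_+(G_a)|$. A short script (ours; the $\mu$'s are the Milnor numbers of the corresponding singularities):

```python
def rk(g):
    # rank of an ADE diagram given as e.g. "A7" or "D4"
    return int(g[1:])

def npos(g):
    # number of positive roots: |Delta_+(A_n)| = n(n+1)/2, |Delta_+(D_n)| = n(n-1)
    n = int(g[1:])
    return n * (n + 1) // 2 if g[0] == "A" else n * (n - 1)

chambers = {  # theory: (mu, list of chamber decompositions)
    "E19":  (19, [["A2"] * 8 + ["A3"], ["A2"] * 9 + ["A1"],
                  ["A10", "A9"], ["A9"] * 2 + ["A1"]]),
    "Z17":  (17, [["A3", "A7", "A7"], ["A3"] * 3 + ["A2"] * 4]),
    "Z18":  (18, [["A3", "A7", "A8"], ["A3", "A7", "A7", "A1"],
                  ["A3"] * 4 + ["A2"] * 3, ["A3"] * 3 + ["A2"] * 4 + ["A1"]]),
    "Q2,0": (14, [["D4"] * 2 + ["A2"] * 3, ["A2"] * 2 + ["A5"] * 2]),
    "Q17":  (17, [["D4"] * 2 + ["A2"] * 4 + ["A1"],
                  ["A2"] * 2 + ["A6"] * 2 + ["A1"],
                  ["D4"] * 2 + ["A2"] * 3 + ["A3"], ["A2", "A2", "A6", "A7"]]),
    "W17":  (17, [["A5", "A6", "A6"], ["A3"] * 5 + ["A2"]]),
    "S16":  (16, [["D4"] * 2 + ["A3"] * 2 + ["A2"], ["A2", "A4", "A5", "A5"]]),
    "U1,0": (14, [["D4"] * 3 + ["A2"], ["A3"] * 2 + ["A4"] * 2]),
}
```

For instance, the two $Q_{2,0}$ chambers contain $2\cdot 12 + 3\cdot 3 = 33$ and $2\cdot 3 + 2\cdot 15 = 36$ hypermultiplets respectively.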
We stress that all these finite BPS chambers have natural physical interpretations as the decoupling of some heavy hypermultiplet from the physical BPS spectrum of a canonical chamber of a square tensor model \cite{CVN}, as we showed in \S\,\ref{RGflows}. As already remarked in \cite{CDZ}, in general it is difficult to understand whether a given chamber is physical or not, even at the heuristic level. This is one of the reasons why we have limited ourselves to the study of those chambers that are canonically related to the analysis of \cite{CVN}.
\section{More periodic $Y$--systems}
According to our analysis we are predicting the existence of 11 new periodic $Y$--systems, which can be straightforwardly generated with the help of Keller's mutation applet \cite{applet} using the Weyl--factorized sequences that correspond to the BPS chambers we listed in \S\,\ref{BPSspectra} --- see appendix \ref{sequences}. Indeed, BPS spectroscopy provides expressions for the quantum monodromy $\mathbb{M}(q)$. The action of $\mathbb{M}(q)$ on the quantum torus algebra $\mathbb{T}_Q$ is specified by its action on the set of generators $\{Y_i\}_{i \in Q_0}$, where $Q_0$ denotes the set of nodes of $Q$,
\begin{equation}\label{Mon}
Y_i \rightarrow Y_i^{\prime} \equiv \text{Ad}(\mathbb{M}(q)^{-1})Y_i \equiv N[R_i (Y_j)],
\end{equation}
where $N[...]$ is the normal--ordering \cite{CVN}. The classical limit of \eqref{Mon}, is a rational map $R\colon Y_i \rightarrow R_i (Y_j)$, the iteration of which is the $Y$--system:
\begin{equation}
Y_i(s+1) = R_i (Y_j(s)), \qquad s\in \mathbb{Z}.
\end{equation}
The $Y$--systems so obtained are \emph{periodic} since,
\begin{equation}
\text{Ad} \big[ \mathbb{M}(q)^{\ell} \big] = \text{ Id} \Longleftrightarrow Y_j(s+\ell) = Y_j(s), \forall \, j \in Q_0, s \in \mathbb{Z},
\end{equation}
and string theory predicts \cite{CDZ,CVN} the values of the orders $\ell$ for the models we have studied in this paper --- see the list in table \ref{numerology}. Moreover, we have checked the RHS with the help of the computer procedure described in \cite{CDZ}. From our analysis, the periodic $Y$--systems associated to the $Q$--series of unimodal and bimodal singularities should be interpreted as new forms of the $A_2 \square D_k$ ones.
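The simplest instance of this phenomenon, useful as a sanity check of the numerical setup (it is the classic $A_2$ case, not one of the new systems above), is the Lyness recursion $Y(s+1)\,Y(s-1) = 1 + Y(s)$, which is $5$--periodic for any positive initial data:

```python
def y_orbit(y0, y1, steps):
    # iterate Y(s+1) = (1 + Y(s)) / Y(s-1); the A2 Y-system, 5-periodic
    orbit = [y0, y1]
    for _ in range(steps):
        orbit.append((1 + orbit[-1]) / orbit[-2])
    return orbit

orb = y_orbit(0.7, 2.3, 10)
```

Iterating from arbitrary positive seeds, the orbit returns to its starting values after five steps, as the periodicity statement requires.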
\medskip
As remarked in \cite{CDZ}, it should be possible to give an interpretation of these new periodic $Y$--systems in terms of exactly solvable $2d$ theories in analogy with the $(G,G^{\prime})$ ones \cite{Yrefs}.
\section*{Acknowledgments}
The author thanks Sergio Cecotti for his enlightening teachings. Moreover, we acknowledge Wolfgang Ebeling for clarifying a point of fundamental importance for our analysis.
\section{\label{sec:intro}Introduction}
Densification of wireless cellular networks, by overlaying smaller cells over the traditional macrocell, is seen as an inevitable step in enabling future networks to support the expected increase in data rate demand. As we move towards 5G, networks will become more heterogeneous as services will be offered via various types of points of access (PoAs). Indeed, besides the traditional macro base station, it is expected that users will be able to access the network through WiFi access points, small cell (i.e., micro, pico and femto) base stations, or even other users when device-to-device communications are supported. This approach will improve both the capacity and the coverage of current cellular networks, however, since the different PoAs are expected to fully share the available radio resources, inter-cell interference as well as the interference between the different tiers will pose a significant challenge \cite{andrews-5g}.
Future networks are also expected to support carrier aggregation (CA), which allows the simultaneous use of several component carriers (CCs), in order to guarantee higher data rates for end users. Downlink transmissions over the CCs will be characterized by different values of maximum output power depending on the type of PoA, and each carrier will have an independent power budget \cite{3gpp-trca}. Thus, since CCs may belong to different frequency bands, they may have also very different coverage areas and impact in terms of interference, due to both their different transmit power level and their propagation characteristics.
Currently, three main approaches have been proposed to address the interference problem in dense networks: per-tier assignment of carriers, Enhanced Inter Cell Interference Coordination (eICIC), which has been adopted in LTE-A systems, and downlink power control. Per-tier assignment of carriers simply implies that in CA-enabled networks, each tier is assigned a different CC so as to nullify inter-tier interference \cite{lp-abs}. eICIC includes techniques such as Cell Range Expansion (CRE) to incentivize users to associate with micro base stations, and Almost Blank Subframes (ABS), i.e., subframes during which macrocells mute their transmissions to alleviate the interference caused to microcells. Algorithms to optimize biasing coefficients and ABS patterns in LTE heterogeneous networks have been studied in, e.g., \cite{eicic-alg}, however they do not address CA. Also, modifications to the eICIC techniques that allow macro base stations to transmit at reduced power during ABS subframes have been proposed in \cite{lp-abs}. In this paper we do not consider a solution within the framework of eICIC or its modifications, rather we use them as comparison benchmarks for the solutions we propose.
We adopt instead the third approach, which consists in properly setting the downlink transmit power of the different CA-enabled PoAs so as to avoid interference between different tiers. We propose to leverage the diversity in the component carrier coverage areas to mitigate inter-tier interference by varying their downlink transmit power. Thus, we enable a wide range of network configurations which reduce power consumption, provide high throughput and ensure a high level of coverage to network users. This type of configurations have also been envisioned by 3GPP \cite{3gpp-ca}, however, unlike the current specifications, we aim at reaching such solutions dynamically and in-response to real traffic demand.
As envisioned in LTE-A systems and unlike most of previous work, we consider that each CC at each PoA has an independent power budget, and that PoAs can choose the transmit power on each carrier from a discrete set of values. Therefore, our goal is to adequately choose a power level from a range of choices to ensure optimal network performance. It is easy to see that the complexity of the problem increases exponentially with the number of cells, CCs and the granularity of the power levels available to the PoAs. In addition, if one of the objectives is to maximize the network throughput, the problem becomes non linear since transmission data rates depend on the signal-to-interference-plus-noise ratio (SINR) experienced by the users. It follows that an optimal solution requiring a centralized approach would be both unfeasible and unrealistic, given the large number of cells in the network.
We therefore study the above problem through the lens of game theory, which is an excellent mathematical tool to obtain a multi-objective, distributed solution in a scenario with entities (PoAs) sharing the same pool of resources (available CCs). We model each group of PoAs in the coverage area of a macrocell as a team so that we can capture both (i) cooperation between the macrocell and the small cells with overlapping coverage areas, and (ii) the competitive interests of different macrocells. The framework we provide however allows for straightforward extension to teams that include several macrocells. We prove that the game we model belongs to the class of {\em pseudo-potential} games, which are known to admit pure Nash Equilibria (NE) \cite{pa-potential}. This allows us to propose a distributed algorithm based on best-reply dynamics that enables the network to dynamically reach an NE representing the preferred solution in terms of throughput, user coverage and power consumption. As shown by simulation results, our scheme outperforms fixed transmit power strategies, even when advanced interference mitigation techniques such as eICIC are employed.
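To fix ideas, the skeleton of such best-reply dynamics over discrete power levels can be sketched as follows (a toy illustration only: the utility below is a placeholder, not the team utility defined later in the paper):

```python
import random

def best_reply_dynamics(n_poas, levels, utility, max_rounds=100, seed=0):
    # Round-robin best replies: each PoA in turn switches to the power level
    # maximizing its own utility, given the current choices of the others.
    # For (pseudo-)potential games this converges to a pure Nash equilibrium.
    rng = random.Random(seed)
    profile = [rng.choice(levels) for _ in range(n_poas)]
    for _ in range(max_rounds):
        changed = False
        for i in range(n_poas):
            best = max(levels,
                       key=lambda p: utility(i, profile[:i] + [p] + profile[i + 1:]))
            if best != profile[i]:
                profile[i], changed = best, True
        if not changed:
            return profile   # no PoA gains by deviating: a pure NE
    return profile

def toy_utility(i, profile):
    # placeholder: every PoA prefers power level 2 (e.g., an index into dBm values)
    return -abs(profile[i] - 2)

ne = best_reply_dynamics(3, [0, 1, 2, 3], toy_utility)
```

With the decoupled toy utility, every PoA converges to its preferred level in one round; the interest of the scheme lies in coupled utilities, where convergence follows from the pseudo-potential structure.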
\section{\label{sec:rel-work}Related work}
While many papers have appeared in the literature on uplink power control, fewer exist on downlink power setting.
Among these,
\cite{coalitions_overlap} uses coalitional games to investigate power and resource allocation in heterogeneous networks where cooperation between players is allowed. Downlink power allocation in cellular networks
is modeled in \cite{hierarchical-competition} as a Stackelberg game, with macro and femto base stations competing to maximize their individual capacities under power constraints. Resource allocation in heterogeneous networks is also addressed in \cite{diaz-geometric} where the authors propose two possible solutions, a heuristic approach using simulated annealing and geometric optimization, while taking into account both the geometry of the network and load fluctuations. Interference in densely deployed femtocell networks is addressed in \cite{lin-powadj} through proper power adjustment and user scheduling. The authors propose a heuristic distributed algorithm that adjusts the coverage radius of the femtocells and then schedules the users in a fair manner. However, the algorithm applies only to femtocells, thereby missing out on many possible solutions offering both better energy efficiency and network throughput. A backhaul-aware approach is taken by the authors in \cite{sapountzis-downlink} where they propose an optimal user association scheme to mitigate interference, which takes into account the base station load, the backhaul load as well as backhaul topology.
An energy efficient approach is instead proposed in \cite{hetnet-eff}. There, base stations do not select transmit power levels as we do in our work, rather they can only choose between on and off states.
Maximizing energy efficiency is also the goal of \cite{yang-eeff}, which however is limited to the study of resource allocation and downlink transmit power in a two-tier LTE single cell. In \cite{udn-saad}, in order to improve the energy efficiency of ultra-dense networks, the authors frame the problem of joint power control and user scheduling as a mean-field game and solve it using the drift plus penalty (DPP) approach in the framework of Lyapunov optimization. Mean-field games are also used in \cite{zahrani} where the interference problem (both inter-tier and inter-cell interference) is formulated as a two-nested problem: an overlay problem at the macrocell level and an underlay problem at the small-cel level. In the overlay problem, the macrocell selects the optimal action first, to provide minimum service, while the underlay problem is then formulated as a non-cooperative game among the small cells. The mean-field theory is exploited to help decouple a complex large-scale optimization problem into a family of localized optimization problems.
We remark that the above papers address heterogeneous dense networks but, unlike our work, they do not consider CA support, which will be a fundamental feature of future cellular networks and significantly changes the problem settings. Also, \cite{yang-eeff, coalitions_overlap, hierarchical-competition} formulate a resource allocation problem that aims at distributing the transmit power among the available resources under overall power constraints. In our work, instead,
we do not formulate the problem as a downlink power allocation problem, rather as a power setting problem at carrier level, assuming {\em each carrier has an independent power budget}. Additionally, while most of the previous work \cite{hetnet-eff, discrete-eeff, yang-eeff, diaz-geometric,lin-powadj} focus on the heterogeneous network interference problem only, using game theory concepts we jointly address interference mitigation, power consumption and user coverage by taking advantage of the diversity and flexibility provided by the availability of multiple component carriers. Finally, we propose a
solution that enables the PoAs to dynamically change their power strategies based on user distribution, propagation conditions and traffic patterns.
To our knowledge, the only existing work that investigates downlink power setting in cellular networks with CA support is \cite{joint-ra-ca}. There, Yu et al. formulate an optimization problem that aims at maximizing the system energy efficiency by optimizing power allocation and user association. However, interference issues, which are one of the main challenges we address, are largely ignored in \cite{joint-ra-ca} as the authors consider a non-heterogeneous, single cell scenario.
\section{System model and assumptions\label{sec:system}}
We consider a CA-enabled two-tier dense network composed of macro and microcells, each controlled by different types of PoAs. The network serves a large number of CA-enabled user equipments (UEs), which may move at low-speed (pedestrian) or high-speed (vehicles).
To make the problem tractable, we partition the entire network area into a set of identically-sized square-shaped tiles, or zones, denoted by $\mathcal{Z}$. From the perspective of downlink power setting, the propagation conditions within a tile from a specific PoA represent averages of the conditions experienced by the UEs within the tile. Note that the tile size can be arbitrarily set, and represents a trade-off between complexity and realism. The choice, however, must be such that the number of users falling within a tile is not too high and the assumption that they experience similar channel conditions holds. We will assume that tiles (i.e., the UEs therein) are associated with the PoA providing the strongest received reference power, although the extension to other, dynamic association schemes as well as to the case where a tile is served by multiple PoAs can be easily obtained. For simplicity, all UEs in the network area are assumed to be CA enabled.
Note, however, that the extension to
a higher number of tiers as well as to the case where there is a mix of CA-enabled and non CA-enabled UEs is straightforward.
All cells share the same radio resources. In particular, a comprehensive set of component carriers (CC), indicated by $\mathcal{C}$, is available simultaneously at all PoAs (PoAs having at their disposal a subset of CCs is a sub-case of this scenario).
Each CC is defined by a central frequency
and a certain bandwidth. The central frequency affects the carrier's coverage area, as the propagation conditions deteriorate greatly with increasing frequency.
The level of transmit power irradiated by each PoA on the available CCs can be updated periodically depending on the traffic and propagation conditions in the served tiles, or it can be triggered by changes in UE distribution or traffic demand. The update time interval, however, is expected to be substantially longer than a resource block (RB)\footnote{A resource block (RB) is the smallest resource unit that can be allocated to a UE in LTE. It is 180 kHz wide and 1~ms long.} allocation period, e.g., order of hundreds of subframes.
Indeed, since downlink power setting is based on the values of the channel state information (CSI) reported over the tile and averaged, it is not imperative for a power setting scheme to constantly have accurate CSI for each user; additionally, it is not necessary for the update period to be aligned with the coherence time of the channel.
The PoAs can choose from a discrete set of available power levels, including 0 that corresponds to switching off the CC. The possible power values are expressed as fractions of the maximum transmit power, which may vary depending on the type of PoA, i.e., $\boldsymbol{P}=\{0.1, 0.2,...,1\}$. As noted before, each CC at each PoA has an independent power budget.
\section{Game theory framework\label{sec:game}}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{fig/Fig1.pdf}
\caption{\label{fig:net-model}Network model and teams.
Team locations are denoted by $l_1, l_2, l_3$. Solid red lines represent team boundaries, while black solid lines represent coverage areas. Tiles are represented by grey squares.}
\vspace{-5mm}
\end{figure}
As mentioned before, game theory is an excellent tool to address complex problems, for which an optimal centralized solution might not be feasible. In game theory, solutions to complex problems are usually reached by identifying the Nash Equilibria (NE) of the game; these are strategy profiles in which every player plays its best strategy, considering all other players' strategies fixed. Since none of the players has an incentive to unilaterally move from an NE strategy, such outcomes of the game are desirable; indeed, they represent stable solutions that can be reached in a distributed manner.
Considering that the complexity of the carrier power setting problem increases exponentially with the number of PoAs, CCs and the granularity of the transmit power levels, we adopt a game theoretic approach in order to derive low-complexity, distributed solutions that lead to NEs and are applicable in practice.
Specifically, we formulate the problem of power setting in dense CA-enabled networks as a competitive game between {\em teams} of PoAs (see Fig.~\ref{fig:net-model}), where each team wants to maximize its own payoff. Indeed, given the network architecture at hand, PoAs within an overlapping geographical area have the common objective to provide the UEs under their coverage high data throughput. Thus, they may choose to cooperate with each other in order to improve their individual payoffs as well as contribute to the ``social welfare'' of the team. Cooperation among such PoAs is beneficial especially since the inter-tier interference is most significant within the cell.
It follows that teams will compete with each other for the same resources, each aiming at maximizing its own benefit. The game we model and its analysis are detailed below. We note that the formulation can be easily extended to accommodate various team configurations and clusters of teams, each controlled by a central controller.
\subsection{Game model\label{subsec:game-definition}}
Let $G=\{\mathcal{T},\mathcal{S},\mathcal{W}\}$ be a competitive game between the set of players $\mathcal{T}$, where $\mathcal{S}$ is the comprehensive set of strategies available to the players and $\mathcal{W}$ is the set of payoff functions. The objective of each player in the game is to choose a strategy such that it maximizes its payoff. The payoff function, in general, depends also on the strategies of the other players, thus a player must make decisions
accounting for the strategies
the other players have selected.
We now proceed to define the {\em players}, {\em strategies} and {\em payoff} functions in our scenario.
\subsubsection{Players}
As we mentioned in the previous section, we formulate the carrier transmit power setting as a competitive game between teams of PoAs. Hence the players in our competitive game are the {\em teams}, each comprising a macro PoA and the micro PoAs whose coverage areas geographically overlap with that of the macrocell. The terms {\em team} and {\em player} are used interchangeably throughout the paper. We denote the set of teams in our network as $\mathcal{T}=\{t_1,...,t_{T}\}$, where $T$ is the number of teams. We assume that the team members exchange information between each other, and that the macro PoA plays the role of team leader, i.e., it makes the decisions for all team members in a way that maximizes the overall team benefits. Furthermore, we will refer to the PoAs forming a team $t$ as the {\em locations} of the team, $\mathcal{L}_t=\{l_1,l_2,...,l_L\}$ where, for simplicity of notation, the number of locations within a team is assumed to be constant and equal to $L$. Such a generalization is particularly useful since the interference caused within the team depends also on the relative position between the different PoAs. We indicate the set of tiles under the coverage area of a particular location $l$ by $\mathcal{Z}_l$, and their union, denoting the comprehensive set of tiles of the team, by $\mathcal{Z}_t$. In addition, we use $E_l$, $E_z$ and $E_t$ to denote the number of UEs under the coverage of location $l$, tile $z$, and team $t$, respectively, with $E_t=\sum_{l\in\mathcal{L}_t}E_l=\sum_{z\in\mathcal{Z}_t}E_z$.
\subsubsection{Strategies}
Each team, comprising a set of locations,
has to decide which transmit power level to use (out of the possible values in $\boldsymbol{P}$), at each one of those locations and for each of the available carriers $\mathcal{C}=\{c_1,c_2,...,c_C\}$.
It follows that the strategy selected by a team $t$, $\boldsymbol{s^t}$, is an $L\times C$ matrix, where each $(l,c)$ entry indicates the power level chosen from set $\boldsymbol{P}$,
to be used at location $l$ on carrier $c$. Consequently, the strategy set available to a team will be composed of all possible combinations of power levels, locations and carriers.
\subsubsection{Payoff functions} In game theory, payoff functions are used to model the objectives of the players, usually expressed in terms of utility and cost, when choosing between different available strategies. Since network throughput is an important performance metric in cellular networks, it is natural that the utility of each team in our scenario is defined as a function of the data rates it can serve to its UEs. The data rate a UE obtains is closely linked to the SINR it experiences, which depends on the transmit power chosen by the serving location (PoA), the CC that is used and the transmit power levels chosen by neighboring locations. Assuming that all UEs within the same tile experience the same amount of interference, for each team we can first define an interference matrix of size $|\mathcal{Z}_t| \times C$, denoted by $\boldsymbol{I^t}$. Each entry in the matrix indicates the interference experienced by UEs in tile $z$ on carrier $c$, which is caused by other teams:
\vspace{-2mm}
\begin{equation}
I^t_{z,c}(\boldsymbol{s^{-t}}) = \sum_{t'\in\mathcal{T} \wedge t'\neq t}\sum_{l'\in\mathcal{L}_{t'}}s^{t'}_{l',c}a_{l',z,c}
\label{eq:interference}
\end{equation}
where $\boldsymbol{s^{-t}}$ represents the strategies adopted by all teams other than $t$, $s^{t'}_{l',c}$ is the power level (the strategy) of team $t'$ for location $l'$ on carrier $c$, and $a_{l',z,c}$ is the attenuation factor ($0\leq a_{l',z,c}\leq1$) related to the signal transmitted from location $l'$ on $c$ and received by the UEs in tile $z$. The attenuation values are pre-calculated using the urban propagation models specified in \cite{itu}.
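For concreteness, Eq.~(\ref{eq:interference}) can be evaluated for all tiles and carriers of a team at once. The following Python sketch is purely illustrative; the dictionary-of-arrays data layout and all names are our assumptions, not part of the system:

```python
# Illustrative sketch of Eq. (1): interference experienced by the tiles of
# team t, caused by all other teams. Assumed layout: strategies[t'] is the
# (L x C) power matrix s^{t'}; atten[t'] is an (L x Z x C) array of
# attenuation factors a_{l',z,c} over all Z tiles in the network.
import numpy as np

def interference(t, strategies, atten, team_tiles):
    Z, C = atten[t].shape[1], atten[t].shape[2]
    I = np.zeros((Z, C))
    for tp, s in strategies.items():
        if tp == t:
            continue                       # only teams other than t count
        # sum over locations l' of s^{t'}_{l',c} * a_{l',z,c}
        I += np.einsum('lc,lzc->zc', s, atten[tp])
    return I[team_tiles[t], :]             # restrict to the tiles Z_t

# toy instance: two single-location teams, two tiles, one carrier
strategies = {0: np.array([[0.5]]), 1: np.array([[1.0]])}
atten = {0: np.full((1, 2, 1), 0.1), 1: np.full((1, 2, 1), 0.2)}
I0 = interference(0, strategies, atten, {0: [0], 1: [1]})
```

In the toy instance, the only interferer at team 0's tile is team 1's location transmitting at power $1.0$ with attenuation $0.2$.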
The SINR at tile $z$, when served by location $l$ in team $t$, is:
\vspace{-2mm}
\begin{equation}
\gamma_{z,c}^t=\frac{s^{t}_{l,c}a_{l,z,c}}{N+\sum_{l'\in\mathcal{L}_t \wedge l'\neq l}a_{l',z,c}s^t_{l',c}+I^t_{z,c}}
\label{eq:SINR}
\end{equation}
where $N$ represents the average noise power level. Note that, besides $N$ and $I^t_{z,c}$, we have an additional term at the denominator, which stands for the intra-team interference and indicates the sum of all power received from the locations within the same team, other than location $l$.
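The SINR of Eq.~(\ref{eq:SINR}), with its separate intra-team interference term, can be sketched as follows (again a hypothetical illustration; array layout and names are assumptions):

```python
# Illustrative sketch of Eq. (2): SINR of tile z when served by location l
# of team t. Assumed layout: s_t is the (L x C) team strategy,
# a_t[l, z, c] = a_{l,z,c}, I_t[z, c] is the inter-team term of Eq. (1).
import numpy as np

def sinr(s_t, a_t, I_t, l, z, c, noise):
    signal = s_t[l, c] * a_t[l, z, c]
    # intra-team interference: every team location other than l
    intra = sum(s_t[lp, c] * a_t[lp, z, c]
                for lp in range(s_t.shape[0]) if lp != l)
    return signal / (noise + intra + I_t[z, c])

# toy instance: two locations, one tile, one carrier
s_t = np.array([[0.5], [1.0]])
a_t = np.array([[[0.4]], [[0.1]]])
gamma = sinr(s_t, a_t, np.array([[0.05]]), l=0, z=0, c=0, noise=0.01)
```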
Then the utility of each team can be defined as a function of the individual tiles' SINR values.
In particular, the sigmoid-like function has been often used for this purpose in uplink power control \cite{pa-sigmoid}. We note that this function is suited to capture also the utility in downlink power setting, as it has features that closely resemble the realistic relationship between the SINR and the data rate.
We therefore adopt the sigmoid function proposed in \cite{pa-sigmoid}, as the utility function of each (tile, carrier) pair in the team, and write the team utility as:
\vspace{-2mm}
\begin{equation}
u^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) = \sum_{l\in\mathcal{L}_t} \sum_{z\in\mathcal{Z}_l}\sum_{c\in\mathcal{C}}\frac{E_z}{E_t \left(1+e^{-\alpha(\gamma^t_{z,c}-\beta)} \right)} \,. \label{eq:team-utility-sigmoid}
\end{equation}
The sigmoid function in Eq.~(\ref{eq:team-utility-sigmoid}) has two tunable parameters, $\alpha$, which controls the steepness of the function, and $\beta$, which controls its centre. They can be tweaked to best meet the scenario of interest. In particular, the higher the $\alpha$, the more closely the function resembles a step function, i.e., the more abruptly the utility changes as the SINR crosses $\beta$. The higher the $\beta$, the larger the SINR required for a tile to obtain a non-negligible utility. In our scenario, $\alpha$ and $\beta$ are set so that the resulting sigmoid-like function captures the relationship between SINR and throughput.
In addition,
the individual utility of each tile $z$ in team $t$ is weighted by the fraction of UEs
covered by the team in the tile ($E_z/E_t$) so as to give more weight to more populated tiles. This enables us to account for the user spatial distribution whenever this is not uniform over the network area.
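A minimal sketch of the weighted sigmoid utility in Eq.~(\ref{eq:team-utility-sigmoid}), assuming the per-tile SINRs have already been computed; the parameter values below are purely illustrative:

```python
# Illustrative sketch of Eq. (3): team utility as the E_z/E_t-weighted sum
# of per-(tile, carrier) sigmoids. The alpha/beta values are assumptions.
import math

def team_utility(gammas, users, alpha=1.0, beta=5.0):
    """gammas[z][c]: SINR of tile z on carrier c, over the team's tiles;
    users[z]: number of UEs E_z in tile z."""
    E_t = sum(users)
    u = 0.0
    for z, row in enumerate(gammas):
        for g in row:
            u += users[z] / (E_t * (1.0 + math.exp(-alpha * (g - beta))))
    return u
```

At the sigmoid centre ($\gamma = \beta$) a single tile holding all of the team's UEs contributes exactly $1/2$, and its contribution saturates to $1$ for large SINR.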
Next, we introduce a cost function to account for the interference and its detrimental effect, as well as for fairness in the service level to users. We define a first cost component that aims at penalizing teams who choose high power strategies, as:
$ \sum_{l\in\mathcal{L}_t}\sum_{c\in\mathcal{C}}\xi^t_{l,c}\bar{a}_{l,c}s^t_{l,c}$
where $\bar{a}_{l,c}$ is the link quality (i.e., attenuation) on carrier $c$ averaged over all tiles served by location $l$, and $\xi^t_{l,c}$ is the price per received power unit for location $l$ and carrier $c$. This cost component increases with the increase in the chosen level of transmit power, however it also accounts for the propagation conditions of the users served by the location.
In other words, locations that have to serve UEs experiencing poor channel quality will incur a lower cost, which ensures some level of fairness. The way the unit price, $\xi$, should be set is investigated in Sec. \ref{sec:price-set}.
The second term of the cost function further provides fairness in the network
by penalizing those strategies that leave UEs without coverage. It is defined as
$\delta e_t$,
where $\delta$ is a unit price paid for each unserved user and $e_t$ is the fraction of UEs within the team area that experience SINR levels below a certain threshold. We remark that since a macro PoA can communicate with the micro PoAs in the macrocell, the team leader has knowledge of the UE density under the coverage of its locations. Thus, it can easily estimate the fraction of users, $e_t$, depending on the strategy chosen for each of its locations ($\boldsymbol{s^t}$) as well as on all other teams' strategies ($\boldsymbol{s^{-t}}$).
The total cost function is then given by:
\vspace{-2mm}
\begin{align}
\pi^t(\boldsymbol{s^t},\boldsymbol{s^{-t}})= \sum_{l\in\mathcal{L}_t}\sum_{c\in\mathcal{C}}\xi^t_{l,c}\bar{a}_{l,c}s^t_{l,c}+\delta e_t \label{eq:fullcost}
\end{align}
where $\xi$ and $\delta$ weight the two components of the cost function.
Finally, we define the payoff of each team $t$ as the utility minus the cost paid:
\vspace{-2mm}
\begin{align}
w^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) = u^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) -\pi^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) \,.\label{eq:teampayoff}
\end{align}
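The cost of Eq.~(\ref{eq:fullcost}) and the payoff of Eq.~(\ref{eq:teampayoff}) can be sketched as below; the nested-list layout and all numeric values are assumptions:

```python
# Illustrative sketch of Eqs. (4)-(5): power-price cost plus the
# unserved-UE penalty, and the resulting team payoff.
def team_cost(s_t, abar, xi, delta, e_t):
    """s_t[l][c]: chosen power; abar[l][c]: mean attenuation over the
    tiles of location l; xi[l][c]: unit price; e_t: fraction of team UEs
    whose SINR is below the threshold."""
    power_cost = sum(xi[l][c] * abar[l][c] * s_t[l][c]
                     for l in range(len(s_t)) for c in range(len(s_t[0])))
    return power_cost + delta * e_t

def team_payoff(utility, cost):
    return utility - cost          # Eq. (5)

# toy instance: one location, one carrier
cost = team_cost([[0.5]], [[0.2]], [[2.0]], delta=1.0, e_t=0.1)
```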
As mentioned, a team's goal is to maximize its payoff. Provided that the team is aware of the strategies selected by the other teams, it can choose, among its available strategies, the one that maximizes the payoff function. We will refer to this strategy as the {\em{best reply}}.
Moreover, to reduce both power consumption and the interference towards other teams, a team will select its best reply among strategies that maximize its payoff, as follows.\\
\noindent
{\em (i)}~Between strategies that are equivalent in terms of payoff, it will choose the one with the lowest total power, to reduce the overall power consumption. \\
\noindent
{\em (ii)}~When indifferent between strategies with equal total power but assigned to different locations, it will select the strategy that assigns higher power levels to micro PoAs that are closer to the centre of the cell, to minimize interference. \\
\noindent
{\em (iii)}~When indifferent with respect to the two above criteria,
it will choose the strategy that assigns higher power levels to higher frequency carriers, again, to minimize interference.
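One possible realization of the preference order (i)-(iii) is a lexicographic sort key; the closeness and frequency rankings used below to score ``closer to the centre'' and ``higher frequency'' are illustrative assumptions:

```python
# Hypothetical realization of the tie-break rules (i)-(iii): maximize the
# payoff first, then prefer lower total power, then more power at PoAs
# near the cell centre, then more power on higher-frequency carriers.
def best_reply(strategies, payoff, closeness, freq_rank):
    """strategies: candidate (L x C) power matrices as nested lists;
    payoff: callable returning the team payoff of a strategy;
    closeness[l]: larger for PoAs nearer the centre;
    freq_rank[c]: larger for higher-frequency carriers."""
    def key(s):
        total = sum(sum(row) for row in s)
        near = sum(closeness[l] * sum(s[l]) for l in range(len(s)))
        high = sum(freq_rank[c] * s[l][c]
                   for l in range(len(s)) for c in range(len(s[0])))
        return (payoff(s), -total, near, high)
    return max(strategies, key=key)
```

Python compares the key tuples element by element, which mirrors the stated order of preferences exactly.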
\vspace{-3mm}
\subsection{Price setting}\label{sec:price-set}
The price parameter $\xi^t_{l,c}$ introduced in Eq.~(\ref{eq:fullcost}) is an important parameter which affects the nature of the game. To gain some insight into the possible values of $\xi$, we can start by considering a single carrier, single location scenario.
We further simplify the scenario to consider one tile per location; dropping the superfluous notation, the team payoff becomes:
\vspace{-2mm}
\begin{align}
w^t
&= \frac{1}{\left(1+e^{-\alpha(\frac{as^t}{\mathcal{I}^t+N}-\beta)} \right)} - \xi^tas^t
\end{align}
where $\mathcal{I}^t(\boldsymbol{s^{-t}})$ indicates the interference determined by the other teams' strategies. We also set $\delta=0$: since the two cost components are independent of each other, the second component has no effect on the analysis of the first. Differentiating with respect to the team's chosen strategy, $s^t$, which now is a scalar, and equating the derivative to $0$, we get:
\vspace{-0mm}
\begin{align}
e^{-2\alpha(\frac{as^t}{\mathcal{I}^t+N}-\beta)}-\left(\frac{\alpha }{\xi^t(\mathcal{I}^t+N)}-2\right)e^{-\alpha(\frac{as^t}{\mathcal{I}^t+N}-\beta)}+1 = 0 \,.
\end{align}
From the above expression, we can derive the strategy that maximizes the payoff, which turns out to be a real and positive value only if the following condition is satisfied:
\vspace{-2mm}
\begin{align}
\xi^t\leq \frac{\alpha}{4\left(\mathcal{I}^t+N\right)} \,.\label{eq-priceubound}
\end{align}
The last expression indicates that the price parameter $\xi^t$ is inversely proportional to the interference experienced by the UEs served by the team location. If the interference experienced by the UEs in the tiles served by the location increases, it is clear that the value of $\xi^t$ needs to be lowered in order to ensure that the chosen power is a positive value.
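The condition in Eq.~(\ref{eq-priceubound}) can also be checked numerically: the quadratic in $y=e^{-\alpha(as^t/(\mathcal{I}^t+N)-\beta)}$ admits a real positive root exactly when $\xi^t$ is below the bound. A small sketch (all numeric values are arbitrary):

```python
# Numeric check of Eq. (8): the first-order condition reduces to the
# quadratic y^2 - b y + 1 = 0 with b = alpha/(xi (I + N)) - 2, which has
# a real positive root iff xi <= alpha / (4 (I + N)).
import math

def has_interior_optimum(xi, alpha, I, N):
    b = alpha / (xi * (I + N)) - 2.0
    disc = b * b - 4.0
    return disc >= 0 and (b + math.sqrt(disc)) / 2.0 > 0

alpha, I, N = 2.0, 0.3, 0.1
bound = alpha / (4.0 * (I + N))    # upper bound on xi from Eq. (8)
```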
This suggests that, in order to achieve high-performing operational points for our network, a dynamic price setting is required, so that the teams can adapt to the changing interference as other teams change their strategies. Also, note that the interference experienced by users in a tile does
not affect the interference experienced by users in other tiles;
the same holds for the interference experienced by users in the
same tile, but using different carriers. This implies that the
relationship between the price parameter and interference remains
the same, even when more
carriers and more
tiles are considered.
We further remark that aside from being dynamically updated depending on the value of the interference, the price must also be tailored individually for each team location. Indeed, the interference experienced by UEs served by a specific location $l$ depends not only on the strategies selected by other teams, but also on the topology of the network, i.e., the relative position and distance between the interfering locations and said UEs. A team leader can leverage the knowledge it has about its team topology to adjust the price parameter, according to each location's expected external interference coming from other teams, and the expected intra-team interference.
\begin{algorithm}
\begin{algorithmic}[1]
\Require $c$, $\boldsymbol{s_c}$, $t$ \label{prs-input}
\ForAll{$l\in\mathcal{L}_t$ }
\State $\bar{I}^{t}_{l,c}=0$
\ForAll{$z\in \mathcal{Z}_l$}
\State Compute $I^t_{z,c}$ by using Eq.~(\ref{eq:interference}) \label{line:prs-extint}
\State $I^{int}_{z,c}=\sum_{l'\in\mathcal{L}_t\wedge l'\neq l}s^t_{l',c}a_{l',z,c}$\label{line:prs-tiint}
\State $\bar{I}^{t}_{l,c}=\bar{I}^{t}_{l,c}+\frac{E_z}{E_l}\left(I^t_{z,c}+I^{int}_{z,c}\right)$\label{line:prs-ovint}
\EndFor
\State $\xi^t_{l,c}=\frac{k\alpha}{\bar{I}^t_{l,c}}$\label{line:prs-price}
\EndFor
\end{algorithmic}
\caption{\label{alg:price-setting}Dynamic team price setting}
\end{algorithm}
How to dynamically update the price for each team location under general settings is shown in Alg.~\ref{alg:price-setting}.
The procedure takes into account both the external interference coming from the other competing teams, calculated in line \ref{line:prs-extint}, as well as the internal interference coming from the other locations of the team, calculated in line \ref{line:prs-tiint}. Once these values are obtained, the price parameter $\xi^t_{l,c}$ is updated in line \ref{line:prs-price} using $\xi^t_{l,c}=\frac{k\alpha}{\bar{I}^t_{l,c}}$, where $k$ is a weight factor used to indicate the importance we place on the cost function; higher $k$ values indicate that consuming less power will be given more consideration when selecting the best response. As a result, for higher $k$ we obtain overall lower best response values, and vice versa. Note that $k\leq1/4$ must hold in order to satisfy Eq.~(\ref{eq-priceubound}). Note that the initial price for each team location is determined given an initial strategy, $\boldsymbol{s_c}$, which can be any of the fixed strategies, and then updated every iteration or game. Although more frequent updates can be implemented, our results show that it is sufficient to update the price parameters once for each game run.
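A hypothetical Python rendering of Alg.~\ref{alg:price-setting} is given below; the dictionary-of-arrays data layout and all names are our assumptions:

```python
# Illustrative sketch of Alg. 1: per-location price setting for team t on
# carrier c. Assumed layout: strategies[tp] is the (L x C) power matrix;
# atten[tp][l, z, c] is the attenuation over all tiles; loc_tiles[l] lists
# the tiles Z_l served by location l of team t; users[z] = E_z.
import numpy as np

def team_prices(t, c, strategies, atten, loc_tiles, users, alpha, k=0.25):
    L = strategies[t].shape[0]
    xi = {}
    for l in range(L):
        E_l = sum(users[z] for z in loc_tiles[l])
        Ibar = 0.0
        for z in loc_tiles[l]:
            # external interference I^t_{z,c} from the other teams, Eq. (1)
            ext = sum(strategies[tp][lp, c] * atten[tp][lp, z, c]
                      for tp in strategies if tp != t
                      for lp in range(strategies[tp].shape[0]))
            # intra-team interference from the other team locations
            intra = sum(strategies[t][lp, c] * atten[t][lp, z, c]
                        for lp in range(L) if lp != l)
            Ibar += users[z] / E_l * (ext + intra)
        xi[l] = k * alpha / Ibar           # price update of Alg. 1
    return xi

# toy instance: team 0 with two locations, one competing team, one tile
strategies = {0: np.array([[0.5], [0.2]]), 1: np.array([[1.0]])}
atten = {0: np.full((2, 1, 1), 0.1), 1: np.full((1, 1, 1), 0.3)}
prices = team_prices(0, 0, strategies, atten, {0: [0], 1: [0]}, {0: 4}, alpha=2.0)
```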
\subsection{Game analysis\label{subsec:game-analysis}}
To analyse the behaviour of the above-defined game, and discuss the existence of NEs, we rely on the definition of games of {\em strategic complements/substitutes with aggregation} as provided in~\cite{pa-potential,pa-strategic}.
A game $\Gamma=\{\mathcal{P},\mathcal{S},\mathcal{W}\}$, where $\mathcal{P}$ is the set of players, and $\mathcal{S}$ and $\mathcal{W}$ are defined as above, is a game of {\bf{strategic substitutes}} with aggregation if for each player
$p\in \mathcal{P}$ there exists a best-reply function $\theta_p:\boldsymbol{S^{-p}} \to \boldsymbol{S^p}$ such that:
\vspace{-2mm}
\begin{align}
1)&\quad \theta_p(I^p)\in \Theta(I^p)\label{eq:cond-1}\\
2)&\quad \theta_p\text{ is continuous in } \boldsymbol{S^{-p}}\label{eq:cond-2} \\
3)& \quad \theta_p(\hat{I}^p) \leq \theta_p(I^p), \quad \forall \hat{I}^p>I^p \,.\label{eq:cond-3}
\end{align}
$\Theta(I^p)$ is the set of best replies for player $p$ and $\boldsymbol{S^{-p}}$ is the Cartesian product of the strategy sets of all participating players other than $p$. $I^p$ is an additive function of all other players' strategies, also referred to as the {\em aggregator} \cite{pa-strategic}:
\vspace{-2mm}
\begin{equation}
I^p(\boldsymbol{s^{-p}}) =\sum_{p'\in\mathcal{P}, p' \neq p} b_{p'}s_{p'}\label{eq:aggregator}
\end{equation}
where $b_{p'}$ are scalar values.
Condition 1) is fulfilled whenever the dependence of the payoff function on the other players' strategies can be completely encompassed by the aggregator. Condition 2), also known as the {\em continuity} condition, implies that for each possible value of $I^p$, the best reply function $\theta_p$ provides unique best replies. Condition 3) implies that the best reply of the team decreases with the value of the aggregator.
A game of {\bf{strategic complements}} with aggregation is identical, except for condition 3), which changes into:
\vspace{-2mm}
\begin{align}
\theta_p(\hat{I}^p) \leq \theta_p(I^p), \quad \forall \hat{I}^p<I^p \,,\label{eq:cond-4}
\end{align}
i.e., in the case of games of strategic complements, the best reply of the team increases with the value of the aggregator.
Next, we show the following important result.
\begin{theorem}
{\em Our competitive team-based game $G$
is a game of {\bf strategic complements/substitutes with aggregation}. }
\end{theorem}
\IEEEproof
See Appendix A.
\endIEEEproof
As a further remark to the above result, it is worth stressing that the cost introduced in Eq.~(\ref{eq:fullcost}) is an important function that determines whether the game is of strategic complements or substitutes. Indeed, if we consider the payoff to coincide with the utility function (i.e., $\xi=\delta= 0$), a team's best reply will consist in increasing its transmit power as the interference grows, implying that the game is of strategic complements. This would lead to an NE in which all teams transmit at maximum power level, without consideration for the interference caused.
Instead, imposing some $\xi>0$, the game turns into a game of strategic substitutes. This is because the first term of the cost function is linear in the received power, and hence increasing with the chosen strategies. Therefore, the payoff function will start decreasing once the increase in the chosen transmit powers does not justify the price the team has to pay. Note that, throughout the paper, we will consider $\xi>0$, therefore our game is of strategic substitutes.
Imposing some $\delta>0$ (i.e., activating the second cost component), the relationship between transmit power and cost becomes more complicated but it does not change the nature of the game: the fraction of unserved UEs within the team will be high
for
very low power strategies, then it will decrease as the transmit power is increased, and increase again as the strategies chosen cause high intra-team interference. In other words, the second cost component strengthens the trend in the payoff function imposed by the utility for increasing interference in presence of low power strategies. For those mid-level strategies that ensure good coverage, it does not affect the cost function. Instead, it resembles the behavior of the first cost component for high power strategies, as it is still able to discriminate against high power strategies that may harm the system performance.
Main results from~\cite{pa-potential,pa-strategic} and references therein show that games of strategic complements/substitutes with aggregation belong to the class
of {\em{pseudo-potential games}}, which are known to admit pure Nash Equilibria.
Another important result that holds for such games with a discrete set of strategies is that, thanks to the continuity condition in Eq.~(\ref{eq:cond-2}), convergence to an NE is ensured by best reply dynamics \cite{pa-strategic,pa-potential}.
\section{The power setting algorithm\label{sec:algo}}
We now use the above model and results to build a distributed, low-complexity scheme
that enables efficient downlink power setting on each CC.
We first consider a single carrier and show that the scheme converges to the best NE among the possible ones, in terms of payoff. We aim for an NE because it is the only solution of the game that the participating teams can reach independently, although it may not be the optimal one in terms of utility. We then extend the algorithm to the multiple-carrier case and discuss its complexity.
\subsection{Single-carrier scenario\label{subsec:single-carrier}}
Let us first focus on a single carrier and consider two possible borderline strategies that a team may adopt: the {\em max-power} strategy, in which all locations transmit at the highest power level, and the {\em min-power} strategy, in which all locations transmit at the lowest available power level greater than 0. As shown in our previous study \cite{us-wowmom}, the {\em min-power} strategy always outperforms the {\em max-power} one in a multi-tier dense scenario.
We therefore devise a procedure that should be executed by each team leader (macro PoA), in order to update the locations' downlink power setting, either periodically or upon changes in the user traffic or propagation conditions. It is based on the intuition that if teams are to start from the lowest possible strategy (i.e., zero transmit power), then the overall transmit power and interference would increase incrementally as teams play their best replies sequentially. Thus the game would converge to the NE with the lowest overall transmit power, which, as we argue below, would be preferred in terms of social welfare. To do that, all teams initialize their transmit power to zero, and sequentially run the Best-reply Power Setting (BPS) algorithm reported in Alg.~\ref{alg:single-cc-br}.
We refer to the single execution of the BPS algorithm by any of the teams as an iteration. Note that the order in which teams play does not affect the convergence or the outcome of the game, since all teams start from the zero-power strategy. At each iteration, the leader of the team that is playing
determines the strategy (i.e., the power level to be used at each location in the team) that represents the best reply to the strategies selected so far by the other teams. The team leader will then notify it to the neighboring team leaders that can be affected by this choice. BPS will be run by the teams till convergence is reached,
which, as shown in \cite{us-wowmom}, occurs very swiftly. Also, we remark that the strategies identified over the different iterations are not actually implemented by the PoAs. Only the strategies representing the game outcome will be implemented by the PoAs, which will set their downlink power accordingly for the current time period.
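The sequential play described above can be sketched as follows, where \texttt{best\_reply\_fn} stands for a single execution of the BPS algorithm by one team and is an assumed callback:

```python
# Illustrative best-reply dynamics: all teams start from the zero-power
# strategy and take turns playing their best reply until no strategy
# changes, i.e., a Nash Equilibrium is reached.
def play_until_convergence(teams, best_reply_fn, zero_strategy, max_rounds=100):
    state = {t: zero_strategy for t in teams}     # zero-power initialization
    for _ in range(max_rounds):
        changed = False
        for t in teams:                            # one BPS run per team
            s_new = best_reply_fn(t, state)
            if s_new != state[t]:
                state[t] = s_new
                changed = True
        if not changed:
            return state                           # NE: no team deviates
    return state

# toy substitutes game on an integer power grid: each team's best reply
# is max(0, 2 - other team's power)
toy_ne = play_until_convergence([0, 1], lambda t, st: max(0, 2 - st[1 - t]), 0)
```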
In order to detail how the BPS algorithm (Alg.~\ref{alg:single-cc-br}) works, let us consider the generic $i+1$-th iteration and denote the team that is currently playing by $t$. The algorithm requires as input the carrier $c$ at disposal of the PoAs and the strategies selected so far by the other teams, $\boldsymbol{s^{-t}_c}(i)$. Additionally, it requires the cost components weights $\xi$ and $\delta$, the SINR threshold $\gamma_{min}$, used to qualify unserved users, and the utility function parameters $\alpha$ and $\beta$.
This latter set of parameters is computed offline and provided to the teams by the network operator.
The algorithm loops over all possible strategies in the strategy set of team $t$, $\boldsymbol{S^t_c}$. For each possible strategy, $\boldsymbol{s}$, and each location $l$ within the team, it evaluates the interference experienced by the tiles within the location area (line~\ref{line:scc-interference}). This value is used to calculate the SINR and the utility (lines~\ref{line:scc-sinr}-\ref{line:scc-util}), then the first cost component is updated (line~\ref{line:power-cost}).
In line~\ref{line:quality-cost1}, it is verified whether UEs in tile $z$ achieve the minimum SINR value.
If not, the cost component $e_t$ is amended to include the affected UEs.
The overall team utility for each potential strategy
$\boldsymbol{s}$ is obtained by summing over the individual tile utilities weighted by the fraction of UEs present in each tile. We recall that such a weighting ensures that the UE distribution affects the outcome of the game accordingly. Once the utility and cost are obtained, the team payoff corresponding to strategy $\boldsymbol{s}$ is calculated (line~\ref{line:payoff}). After this is done for all possible strategies, the leader chooses the strategy $\boldsymbol{s^t}(i+1)$ that maximizes the team payoff. Note that, according to our game model, $\arg\max^{\star}$ in line~\ref{line:max} denotes the following operation: it applies the $\arg\max$ function and, if more than one strategy is returned, the best strategy is selected by applying the list of preferences in Sec.~\ref{subsec:game-definition}.
\begin{algorithm}
\begin{algorithmic}[1]
\Require $c$, $\boldsymbol{s^{-t}_c}(i)$, $\xi,\delta,\alpha,\beta,\gamma_{min}$ \label{line:scc-input}
\ForAll{$\boldsymbol{s}\in\boldsymbol{S^t_c}$}\label{line:scc-str}
\State Set $u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $e_t$ \hspace{-1mm} to \hspace{-1mm} 0
\ForAll{$l\in\mathcal{L}_t$ {\bf and} $z\in \mathcal{Z}_l$}
\State Compute $I^t_{z,c}$ by using Eq.~(\ref{eq:interference}) \label{line:scc-interference}
\State Compute $\gamma^t_{z,c}$ by using Eq.~(\ref{eq:SINR}) \label{line:scc-sinr}
\State $u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i)) \hspace{-1mm}\gets u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\hspace{-1mm}\frac{E_z}{E_t\left(1+e^{-\alpha(\gamma^t_{z,c}-\beta)}\right)}$ \label{line:scc-util}
\State $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets \pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\xi \bar{a}_{l,c}s_{l,c}$ \label{line:power-cost}
\If{$\gamma^t_{z,c}\leq \gamma_{min} $} \label{line:quality-cost1}
\State $e_t\gets e_t+\frac{E_z}{E_t}$ \label{line:quality-cost2}
\EndIf
\EndFor
\State $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets \pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\delta e_t$ \label{line:power-cost2}
\State $w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))-\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$ \label{line:payoff}
\EndFor \label{line:scc-strend}
\State $\boldsymbol{s^{t}_c}(i+1)\gets \arg\max^{\star}_{\boldsymbol{s}}w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$\label{line:max}
\end{algorithmic}
\caption{\label{alg:single-cc-br}BPS Algorithm run by team $t$ at iteration $i+1$}
\end{algorithm}
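To make the inner loop concrete, the following Python sketch reproduces the payoff evaluation and the leader's best response for one team. The toy SINR model \texttt{sinr\_of}, the UE fractions and all numeric inputs are illustrative placeholders rather than values from the paper, and the quality cost $\delta e_t$ is applied once per strategy, i.e., with the value of $e_t$ accumulated over all zones.

```python
import math
from itertools import product

def team_payoff(s, sinr_of, frac_ue, a_bar, xi, delta, alpha, beta, gamma_min):
    """Payoff w^t of one candidate strategy s (one power level per location).

    sinr_of(s)    -- hypothetical SINR model: per-(location, tile) SINR under s
    frac_ue[l][z] -- fraction of the team's UEs in tile z of location l (E_z/E_t)
    a_bar[l]      -- per-location scaling factor in the linear power cost
    """
    sinr = sinr_of(s)
    utility = cost = e_t = 0.0
    for l, s_l in enumerate(s):
        cost += xi * a_bar[l] * s_l                       # power cost
        for z, g in enumerate(sinr[l]):
            # sigmoid utility, weighted by the UE fraction in the tile
            utility += frac_ue[l][z] / (1.0 + math.exp(-alpha * (g - beta)))
            if g <= gamma_min:                            # tile below SINR threshold
                e_t += frac_ue[l][z]
    return utility - cost - delta * e_t                   # quality cost added once

def best_response(power_levels, n_locations, *payoff_args):
    """Leader's move: enumerate all |P|^L strategies, keep the best one."""
    return max(product(power_levels, repeat=n_locations),
               key=lambda s: team_payoff(s, *payoff_args))
```

Note that the tie-breaking preferences encoded in $\arg\max^{\star}$ are not modeled here; \texttt{max} simply keeps the first maximizer it encounters.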
\begin{theorem}
{\em When the NE is not unique, the BPS algorithm reaches the NE that maximizes the social welfare, i.e., the sum of the individual payoffs.}
\end{theorem}
\IEEEproof
See Appendix B.
\endIEEEproof
\subsection{Multi-carrier scenario}
We now extend the BPS algorithm to the multi-carrier case. As mentioned before, the team leader has to decide on the power level to be used at each available carrier, at each location within the team. Thus the team strategy is no longer a vector, but an $L\times C$ matrix, each entry $(l,c)$ indicating the power level to be used for carrier $c$ at location $l$.
A straightforward extension of Alg.~\ref{alg:single-cc-br} would imply that lines \ref{line:scc-str}--\ref{line:scc-strend} are executed for each element in the new extended strategy set. However, the new strategy set, depending on the number of carriers, may become too large and therefore make the algorithm impractical to use in realistic scenarios.
Analyzing the utility expression obtained in Eq.~(\ref{eq:team-utility-sigmoid}), we can note that, since the carriers are in different frequency bands and have separate power budgets (as foreseen in LTE-A), the utilities secured at each carrier are independent of each other. In other words, the utility a team will get at one of the carriers is not affected by the strategy chosen at another carrier. The same holds for the first cost component in Eq.~(\ref{eq:fullcost}). However, the overall payoff value is dependent on the interaction between carriers due to the second cost component.
Indeed, in networks with CA support, a UE can be considered unserved only if the SINR it experiences is below the threshold in all carriers.
In order to obtain a practical and effective solution in the multi-carrier scenario, we take advantage of the partial independence between the carriers, and run Alg.~\ref{alg:single-cc-br} independently for each carrier, keeping the size of the strategy set the same as in the single-carrier scenario. Then, to account for the dependence exhibited by the second cost component, we set the order in which the per-carrier games are played, using the order of preferences listed in Sec.~\ref{subsec:game-definition}. Since the teams prefer to use high-frequency carriers over low-frequency ones, due to their smaller interference impact, it is logical that the game is played starting from the highest-frequency carrier. It follows that low-frequency carriers will likely be used to ensure coverage to UEs not served otherwise.
Importantly, our algorithm is still able to converge to an NE, since none of the teams will deviate from the strategies they chose at each carrier. Also, since the game for the lowest-frequency carrier is played last, the number of served UEs cannot be further improved without increasing the power level on the other carriers, which we already know is not a preferable move, as it was not selected earlier. Thus, although it does not search the entire solution space as in the single-carrier scenario, the procedure is still able to converge to an NE that provides a close-to-optimum tradeoff among throughput, user coverage and power consumption. The results presented in \cite{us-wowmom}, obtained for toy scenarios, confirm that our scheme provides performance as good as that achieved by an exhaustive search in the strategy space.
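The per-carrier decomposition itself is a short loop. In the sketch below, \texttt{play\_single\_cc} is a hypothetical stand-in for one full run of the single-carrier BPS; it receives the outcomes of the carriers already played, which determine the still-unserved UEs and hence the second cost component.

```python
def multi_carrier_bps(carriers, play_single_cc):
    """Run the single-carrier game once per CC, highest frequency first.

    carriers       -- list of (name, centre_frequency_GHz) pairs
    play_single_cc -- hypothetical stand-in for one run of the single-carrier
                      BPS; it gets the outcomes of the CCs played so far
    """
    outcome = {}
    for name, _freq in sorted(carriers, key=lambda c: c[1], reverse=True):
        outcome[name] = play_single_cc(name, dict(outcome))
    return outcome
```

Playing, e.g., a 2.6~GHz carrier before a 1.8~GHz and an 800~MHz one matches the stated preference for high-frequency carriers, so the low-frequency carriers end up covering the UEs left unserved.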
\subsection{Complexity and overhead}
The complexity of the algorithm depends largely on the size of the strategy sets that are available to the teams, $\boldsymbol{S^t}$, since each team has to find the strategy which maximizes its payoff value by searching throughout the entire set. The set size depends on the number of discrete power levels available to the PoAs ($|\boldsymbol{P}|$), the number of locations in the team ($L$) and the number of CCs available at each location ($C$). In the single-carrier scenario, we have $|\boldsymbol{S^t}|=|\boldsymbol{P}|^L$, while in the multi-carrier scenario the size grows exponentially to $|\boldsymbol{S^t}|=|\boldsymbol{P}|^{LC}$, which is reduced to $|\boldsymbol{S^t}|=C|\boldsymbol{P}|^L$ by our approach.
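As a quick sanity check on these set sizes, with illustrative values $|\boldsymbol{P}|=4$, $L=5$ and $C=3$:

```python
P, L, C = 4, 5, 3              # illustrative: 4 power levels, 5 locations, 3 CCs

single_carrier = P ** L        # |S^t| for one carrier
multi_joint    = P ** (L * C)  # exhaustive multi-carrier strategy set
multi_per_cc   = C * P ** L    # per-carrier decomposition used here

print(single_carrier, multi_joint, multi_per_cc)  # 1024 1073741824 3072
```

The joint multi-carrier set is over $10^9$ strategies, while the per-carrier decomposition keeps the search linear in $C$.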
In order to determine the downlink power setting, PoAs leverage the feedback they receive from their associated users on the channel quality that they experience with respect to all PoAs within reference signal range.
These reports, which are supported by current standards~\cite{3gpp-36.331}, occur approximately every 5~s. Each location is expected to send these values, once per BPS update period, to the team leader, which will in turn run the BPS algorithm. We assume that PoAs within the same macrocell are interconnected, or at least connected to the macro PoA, via, e.g., optical fiber connections, which allows for swift communication between them. Thus, the overhead of control information flowing between PoAs within a team and their team leader is very limited and can be considered negligible. This is a reasonable assumption since it is expected that the architecture foreseen for future networks will allow PoAs that are geographically close to share a common baseband~\cite{andrews-5g}. In addition, team leaders, i.e., macro PoAs, also need to exchange their respective BPS outcomes at each iteration. Recall that, at each iteration, BPS produces the selected power level at each location and each carrier, which means that the team leaders need to exchange $L\times C$ integer values. In order to avoid additional overhead, team leaders can stop broadcasting their BPS outcome as soon as it is unchanged from the previous iteration.
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{fig/Fig2}
\vspace{-3mm}
\caption{\label{fig:newsce}The network scenario and the different types of urban areas.}
\vspace{-3mm}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.25\textwidth]{fig/Fig3a}
\hspace{5mm}
\includegraphics[width=0.25\textwidth]{fig/Fig3b}
\hspace{5mm}
\includegraphics[width=0.25\textwidth]{fig/Fig3c}
\vspace{-3mm}
\caption{\label{fig:usdist}Snapshots of user distribution. The red dots represent pedestrian UEs, while the blue dots represent vehicle UEs. Left: Morning; Middle: Afternoon; Right: Evening.}
\vspace{-3mm}
\end{figure*}
\section{Performance evaluation\label{sec:peva}}
\begin{figure*}
\centering
\includegraphics[width=0.25\textwidth]{fig/Fig4a.pdf}
\hspace{5mm}
\includegraphics[width=0.25\textwidth]{fig/Fig4b.pdf}
\hspace{5mm}
\includegraphics[width=0.25\textwidth]{fig/Fig4c.pdf}
\vspace{-3mm}
\caption{\label{fig:necc}BPS achieved power strategy for the morning scenario. Left: CC1; Middle: CC2; Right: CC3. }\vspace{-3mm}
\end{figure*}
We consider the realistic two-tier network scenario that is used within 3GPP for evaluating LTE networks~\cite{scenario}. The network is composed of 57 macrocells and 228 microcells. Macrocells are controlled by 19 three-sector macro PoAs, while micro PoAs are deployed randomly over the coverage area so that there are 4 non-overlapping microcells per macrocell. The inter-site distance is set to 500~m. The overall network area is divided into 4,560 square tiles of equal size. The tile size was set so that an average of 2.5 and a maximum of 10 users fall within any tile, while ensuring that users within a tile experience similar channel conditions. The PoAs are grouped into 57 five-location teams, each consisting of 1 macro PoA and 4 micro PoAs within its macrocell, unless stated otherwise. Specifically, to make the scenario more realistic and comparable to an actual urban scenario, we divide the network coverage area into five types of urban areas: city centre, residential area, commercial area, parks and school area, as shown in Fig.~\ref{fig:newsce}. The UEs are also randomly dropped with varying density depending on the population density of the area type as well as time of the day (morning, afternoon or evening). Reference values for UE density were obtained using official population statistics of the city of Rome (Italy)~\cite{rome}, and then scaled to represent realistic values for cellular users of a single network provider. The UE densities were further scaled for the different urban areas and times of the day, using weights extracted from the data provided in the MIT Senseable City Lab project~\cite{mitlab}. Note that, in addition, user density around micro PoAs is four times higher than over the rest of the macrocell. The mobility of pedestrian UEs was modeled using the random walk model, while the mobility of vehicular UEs was modelled using real mobility traces collected from taxi cabs in Rome \cite{trace-taxi}, assuming an average velocity of $30$~km/h. 
Snapshots of the user distribution at different times of the day are shown in Fig.~\ref{fig:usdist}. The data traffic is simulated by generating download requests, whereby a random user requests to download a file, which can be either a video (file size: 1~Mb) or a generic file (file size: 500~kb), with equal probability. The number of requests per cell follows a Poisson distribution with arrival rate $\lambda$, which varies depending on the urban area and the time of the day. The final values obtained for the user densities and $\lambda$ are shown in Table~\ref{table:ue_dense}.
\begin{table*}[t]
\caption{UE densities and cell request arrival rates\vspace{-2mm}}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|}
\hline
&City centre & Commercial area & School & Park & Residential area \\ \hline
Baseline UE density [UE/msq] & 0.0245 & 0.0147 & 0.0074 & 0.0009 & 0.0009 \\ \hline
Percentage of vehicles & 30\%& 5\%&5\%& 5\%&50\% \\ \hline
\multicolumn{6}{|l|}{\bf Density weights} \\ \hline
Morning (7-9 AM)&0.5&0.6&0.6&0.8&0.8\\ \hline
Afternoon (3-5 PM)&1&0.95&0.95&0.7&1\\ \hline
Evening (10 PM-12 AM)&0.08&0.5&0.01&0.5&0.6\\ \hline
\multicolumn{6}{|l|}{\bf Cell request arrival rates $\lambda$} \\ \hline
Morning (7-9 AM)&0.75&0.54&0.27&0.04&0.04\\ \hline
Afternoon (3-5 PM)&1.5&0.9&0.4&0.03&0.05\\ \hline
Evening (10 PM-12 AM)&0.12&0.45&0.005&0.02&0.03\\ \hline
\end{tabular}
\end{center}
\label{table:ue_dense}
\vspace{-4mm}
\end{table*}%
All UEs are assumed to be CA enabled. PoAs can use three CCs, each $10$ MHz wide, with the following central frequencies: 2.6 GHz (CC1), 1.8 GHz (CC2) and 800 MHz (CC3).
We apply the ITU Urban Macro (UMa) model to calculate channel coefficients between macro PoAs and UEs, and the ITU Urban Micro (UMi) model for the channel between micro PoAs and users~\cite{itu}. In addition to path loss, we also consider shadowing effects, and, in the case of vehicular users, fast fading caused by the mobility. SINR values are mapped on throughput using the look-up table in \cite{sinr-map}. The maximum transmit powers for macro and micro PoAs are set at $20$~W and $1$~W, respectively~\cite{itu}.
The game is played by all teams using the BPS algorithm for the multiple-carrier scenario. The power consumed by the network nodes is calculated using the power consumption model provided in \cite{earth}.
The sigmoid function parameters are $\alpha=1$ and $\beta=1$, which were selected as the most appropriate to model the relationship between the selected strategy and final user rate.
The SINR threshold is set at $\gamma_{min}=-10$~dB, based on
\cite{sinr-map}.
The value of the cost parameter $\xi$ is calculated before running the BPS, using the dynamic pricing algorithm in Sec.~\ref{sec:price-set}, with $k=0.25$. The power setting update period is set at 100~ms, which is considered sufficient from a practical perspective. Shorter update periods, as low as 10~ms, can also be implemented, provided that the delay incurred by the communication between macro PoAs is reasonable.
However, while such short update time might make the algorithm more responsive to channel dynamics, we
consider that longer update periods, such as 100 ms, perform excellently as confirmed by our results, while incurring significantly less signaling overhead. Unless otherwise specified, the weight factor for the second cost component is set at
$\delta=0.6$. Note that the values for $k$ and $\delta$ were chosen based on a numerical sensitivity analysis provided in our previous study \cite{us-wowmom}.
The performance of our algorithm is compared to the fixed strategy in which all PoAs transmit at the highest power, coupled with the eICIC technique, denoted as {\em eICIC} in the results. This combination was shown to perform best in our previous work \cite{us-wowmom} and is widely used in the literature and in practice. eICIC is applied with CRE for microcells set at $8$~dB and
macro PoAs downlink transmissions muted in 25\% of subframes (ABS). These values were chosen to represent the mid-range of those applied in the surveyed literature \cite{eicic-alg}. The underlying resource allocation is performed using the Proportional Fair (PF) algorithm.
First, we take a look at the power setting strategies that the BPS algorithm produces. In Fig.~\ref{fig:necc}, we depict the averaged strategies reached through the BPS algorithm during the simulation period for the morning scenario. The strategies chosen by the teams for each CC are differentiated using different shades, from white ({\em zero} power) to black ({\em maximum} power). Recall that the maximum power varies depending on the type of PoA. Hexagons represent the macro PoAs, while circles represent the micro PoAs. The figure shows that CC1, i.e., the high frequency carrier, allows for higher transmit power levels to be used by both macro and micro PoAs, due to its low interference impact. CC2 and, especially, CC3 are used to complement each other to ensure overall coverage. In general we see that low transmit power levels are preferred for macro PoA across all CCs, while for micro PoAs the chosen transmit power levels tend to be higher for higher frequency CCs such as CC1. It can also be noted that in highly concentrated areas such as the city centre and commercial areas, the micro PoAs tend to transmit at higher power levels, while macro PoAs at lower power levels. Such a strategy enables micro PoAs in these areas, which support most of the traffic demand, to transmit using a higher modulation coding scheme (MCS), which in turn implies higher bit rate and, hence, throughput. In residential areas instead, traffic demand is lower and more spatially spread; thus, it is the macro PoAs that serve most of the traffic demand and therefore need to use higher power.
In the following plots we show how the dynamically obtained power strategies outperform eICIC in some of the main performance metrics. Fig.~\ref{fig:tot_content} (left) shows that when BPS is employed the amount of data downloaded over the network is always higher, especially during high intensity periods like morning and afternoon. BPS also improves the service experienced by the UEs in terms of demand met and percentage of failed downloads, as shown in Fig.~\ref{fig:tot_content} (middle) and (right). Note that for each type of file we have set a specific deadline (0.5 seconds for videos and 1 second for generic files), within which we expect the download to complete, otherwise it is considered a failed download. In Fig.~\ref{fig:tot_content} (middle), we show that during intensive periods, BPS improves the percentage of demand met across the entire network by around 10\%, and reduces the rate of failed downloads by approximately 15\%. It is clear that, as the traffic load intensifies in certain areas, which is the case in the afternoon scenario, both approaches have difficulties in managing the demand, however BPS does ensure an improvement, especially for video content, without applying any intelligent content-aware resource allocation techniques.
While the difference is smaller in the evening when the traffic load decreases significantly, BPS still retains a considerable edge in energy efficiency (see Fig.~\ref{fig:eneff}). This is because BPS is able to serve higher amounts of data, while consuming significantly less power. From Fig.~\ref{fig:eneff} (left) it is clear that BPS improves the energy efficiency for both macro PoAs and micro PoAs, however the effect is more significant for the latter: the gain in energy efficiency for macro PoAs varies between 15 and 20\%, while for micro PoAs it can be as high as 100\% during the morning and it drops to around 60\% in the evening. BPS tends to choose lower transmit powers for macro PoAs, especially for dense areas with heavy traffic load, which significantly reduces the interference experienced by micro PoAs who are responsible for serving the bulk of the data. Indeed, if we look at the energy efficiency values for the different areas, shown in Fig.~\ref{fig:eneff} (middle) and (right) for the morning scenario, it is clear that energy efficiency is highest in the city centre, commercial and school areas where the traffic load is more intense in the morning. Again, this is true for both macro and micro PoAs, but it is more significant for the latter.
In Fig.~\ref{fig:cdf_veh_ped} we look at the cumulative distribution function (CDF) of the achieved average user throughput at the different times of the day, differentiated for vehicular (circle) and pedestrian (cross) UEs. Note that, in general, BPS offers higher average user throughput for both types of traffic, but the improvement compared to eICIC is more significant during peak hours. This is true especially for pedestrian UEs, who are concentrated in the high density areas with heavy traffic load. While it may look counterintuitive, vehicular UEs tend to have better average throughput. The reason is that most of the vehicle UEs tend to be spread in the residential area where the traffic demand is lower, and they tend to be situated in well covered areas. These two factors influence the performance more than the fast-fading effects.
Fig.~\ref{fig:rbusage} shows the RB usage efficiency for macro (left) and micro PoAs (middle) calculated in terms of kilobits transmitted per number of RBs used. Note that this metric takes into account only those RBs allocated to UEs, not the overall number of RBs available. Again, BPS improves the performance of the network for all types of PoAs, but more significantly for micro PoAs. eICIC alone introduces important improvement in this metric, especially for macro PoAs, by offloading some of their UEs to the micro PoAs; however, BPS provides an additional edge while lowering the overall power consumption, as seen in the previous figures. For micro PoAs, BPS improves this metric significantly by strategically varying the transmit power of the different macro PoAs to reduce the overall interference. The performance of micro PoAs is further improved by the fact that the power setting of the micro PoAs within the same cell is decided at the team level, ensuring optimal coordination in terms of interference. It is worth noting that BPS could also be applied jointly with eICIC, especially to take advantage of the CRE feature.
Fig.~\ref{fig:rbusage} (right) shows the level of fairness between inner and edge UEs in terms of average user throughput, by calculating the Jain fairness index. While the level of fairness for inner and edge UEs tends to be the same, there is a modest improvement for both categories when BPS is applied. Note that, in multi-tier networks with high density of small cells, the line between inner and edge UEs tends to blur, as edge UEs under the coverage of a micro PoA may experience even better conditions than inner UEs; as a result, the average throughput may vary greatly between UEs of the same category. BPS, however is able to improve the fairness by limiting the overall interference. Fig.~\ref{fig:variations} (left) depicts the gains obtained by using BPS, compared to eICIC, in terms of average user throughput in the different urban areas; again, significant gains are shown, especially during morning and afternoon.
Finally, in Fig.~\ref{fig:variations} (middle), we look at the improvement obtained by applying BPS when compared to eICIC alone, for different network configurations with a varying number of microcells within each cell. The improvement in the three core metrics: energy efficiency, average user throughput and RB usage efficiency, tends to be significant and consistent as the number of microcells is increased. In Fig.~\ref{fig:variations} (right), we also show the gains achieved in the same core metrics, when we consider a higher maximum power for macro PoAs, i.e., 46~dBm (40~W), which is foreseen for 5G systems \cite{metis}, instead of 43~dBm (20~W), which we typically assume. As expected, the effect of BPS is increased when the maximum power of the macro PoAs is elevated, since the effects of interference, which BPS effectively mitigates, are even more pronounced.
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth]{fig/Fig5a}
\hspace{2mm}
\includegraphics[width=0.3\textwidth]{fig/Fig5b.png}
\hspace{2mm}
\includegraphics[width=0.3\textwidth]{fig/Fig5c.png}
\vspace{-3mm}
\caption{\label{fig:tot_content}Left: Total amount of downloaded content. Middle: Demand met. Right: Failed downloads per content type.} \vspace{-5mm}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth]{fig/Fig6a}
\hspace{2mm}
\includegraphics[width=0.3\textwidth]{fig/Fig6b.png}
\hspace{2mm}
\includegraphics[width=0.3\textwidth]{fig/Fig6c.png}
\vspace{-3mm}
\caption{\label{fig:eneff} Energy efficiency in bits transmitted per joule consumed, at different times of the day (Left), and for different areas in the morning scenario, for macro PoAs (Middle) and micro PoAs (Right).}\vspace{-5mm}
\end{figure*}
\vspace{-5mm}
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth]{fig/Fig7a}
\includegraphics[width=0.3\textwidth]{fig/Fig7b}
\includegraphics[width=0.3\textwidth]{fig/Fig7c}
\vspace{-3mm}
\caption{\label{fig:cdf_veh_ped}CDF of the average user throughput achieved by pedestrian and vehicular UEs. Left: Morning; Middle: Afternoon; Right: Evening. }\vspace{-5mm}
\end{figure*}
\vspace{2mm}
\section{Conclusions\label{sec:concl}}
Given the devastating effects interference will have in future networks as they become more dense and heterogeneous, effective means to contain and mitigate it will be key to enabling the optimal use of resources. In this paper, we proposed a novel solution for downlink power setting in dense networks with carrier aggregation, which aims to reduce interference and power consumption, and to provide high quality of service to users. Our approach leverages the different propagation conditions of the carriers and the different transmit powers that the various types of PoAs in the network can use for each carrier.
Applying game theory, we framed the problem as a competitive game among teams of macro and micro PoAs, and identified it as a game of strategic substitutes/complements with aggregation.
We then introduced a distributed algorithm that enables the teams to reach a desirable NE
in very few iterations. Simulation results, obtained in a realistic large-scale scenario, show that our solution greatly outperforms the existing strategies in the main performance metrics, such as energy efficiency, user throughput and spectral efficiency, while consuming little power.
Finally, we remark that, while in this paper we focused on downlink power setting, our approach could be applied to uplink power control as well. In particular, in future ultra-dense cellular networks, the set of users accessing a small cell PoA could be modelled as a team whose leader is the PoA itself. The goal would then be to set the uplink transmit power so as to mitigate the interference that such users may cause at the neighboring PoAs providing service to other sets of users.
\begin{figure*}
\centering
\hspace{-5mm}
\includegraphics[width=0.33\textwidth]{fig/Fig8a}
\hspace{-2mm}
\includegraphics[width=0.33\textwidth]{fig/Fig8b}
\hspace{-1mm}
\includegraphics[width=0.33\textwidth]{fig/Fig8c}
\vspace{-5mm}
\caption{\label{fig:rbusage} RB usage efficiency expressed in Kb transmitted per RB. Left: Macro PoAs; Middle: Micro PoAs; Right: Fairness among inner and edge UEs (Jain's index).}\vspace{-5mm}
\end{figure*}
\begin{figure*}
\centering
\hspace{-5mm}
\includegraphics[width=0.33\textwidth]{fig/Fig9a.png}
\hspace{-5mm}
\includegraphics[width=0.33\textwidth]{fig/Fig9b}
\hspace{-5mm}
\includegraphics[width=0.33\textwidth]{fig/Fig9c}
\vspace{-5mm}
\caption{\label{fig:variations}Left: BPS gains in average user throughput in the different urban areas and at different times of the day. Middle and Right: Improvement due to BPS over eICIC in energy efficiency, RB usage efficiency, and average user throughput, for a varying number of micro PoAs within a cell (Middle) and different maximum transmit power for macro PoAs (Right). }\vspace{-5mm}
\end{figure*}
\section*{Acknowledgment}
This work has received funding from the 5G-Crosshaul project (H2020-671598).
\section*{Acknowledgments}
We thank Pablo Jarillo-Herrero, Xiaomeng Liu, Yuval Ronen and Onder Gul for fruitful discussions. The major experimental work is supported by NSF (DMR-1922172). P.K. acknowledges support from the DoD Vannevar Bush Faculty Fellowship N00014-18-1-2877. Z.H. is supported by ARO MURI (W911NF-14-1-0247). AV and EK were supported by a Simons Investigator award (AV) and by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from
the Simons Foundation (651440, A.V.). PL was supported by the Department of Defense (DoD) through the National Defense Science \& Engineering Graduate Fellowship (NDSEG) Program.
K.W. and T.T. acknowledge support from the Elemental Strategy Initiative conducted by the MEXT, Japan, Grant Number JPMXP0112101001, JSPS KAKENHI Grant Number JP20H00354 and the CREST(JPMJCR15F3), JST.
\subsection*{S1: Device fabrication and characterization}
Our twisted trilayer graphene (TTG) device TMB (device names used in Fig.~1) and the twisted bilayer graphene (TBG) device TM have both top and bottom graphite gates. The TBG device MB is controlled by a top graphite gate and a silicon back gate. The van der Waals heterostructure stack for making the devices consists of 8 layers of two-dimensional materials in the order of hBN, few-layer graphite, hBN, mono-graphene, mono-graphene twisted with angle $\theta$, mono-graphene twisted with angle $-\theta$, hBN and few-layer graphite. The stack was prepared using the dry transfer method, similar to the procedures introduced in most published literature on twisted graphene devices. We make stamps consisting of polycarbonate (PC) polymer and polydimethylsiloxane and pick up each layer sequentially. The temperature is kept under 180~$^\circ$C throughout the transfer process. We find that generally graphene flakes with a large area (e.g., 70~$\mu$m by 70~$\mu$m) give a higher yield in making twisted graphene samples. In order to minimize the movement of graphene flakes during transfer processes, we use an atomic force microscope (Asylum Cypher S) to precut the graphene flakes. For this, we follow the general procedure described in reference \cite{Sai.20c}, using a platinum-doped AFM cantilever and contact mode. A 100~kHz AC bias of 30~V is applied to the cantilever during cutting. We find that this AC bias is critical but its exact role in cutting is currently unknown. The stack is deposited on top of a 300-nm SiO$_2$/Si substrate that has evaporated gold alignment marks on it. Alignment marks are made beforehand so the twisted sample is not subject to the high temperature of the evaporation process before being etched. Three Hall bar devices were fabricated in the regions of TMB, TM and MB following the standard e-beam lithography and dry etch procedures.
The transport data was measured at 17.7~Hz using the standard lock-in technique, with a 0.5--1~mV voltage bias and a current-limiting resistor of 180~k$\Omega$ connected in series with the sample, which limits the current in the sample to an upper bound of 5--10~nA. The sample is connected to the cryostat probe through an RC filter to reduce noise.
We calculate twist angles using two independent methods. The first is to use the geometric capacitance between the twisted samples and gates. The carrier density is determined by the top gate voltage $V_t$ and the bottom gate voltage $V_b$ through $n=c_tV_t+c_bV_b$, where $c_t$ ($c_b$) is the capacitance between the top (bottom) gate and the sample, and it can be directly calculated as $c_{t(b)}=\kappa\epsilon_0/d_{t(b)}$. Here $\kappa$ is the dielectric constant for hBN and is usually taken as 3.9, $\epsilon_0$ is the vacuum permittivity, and $d_{t(b)}$ is the top (bottom) hBN thickness. Using the resistivity, $\rho$, versus gate voltage $V_{t(b)}$ at zero magnetic field, we can associate the resistive peaks with integer fillings of the moir\'e bands, and therefore obtain the gate voltage, or equivalently, using the above formulae, the carrier density $n_s$ for full filling at $\nu=4$. This carrier density corresponds to 4 electrons per moir\'e unit cell, $n_s=4/A_m$, where $A_m$ is the moir\'e unit cell area and is connected to the small twist angle by $A_m=\frac{\sqrt{3}a^2}{2\theta^2}$, where $a$ is the lattice constant for graphene. The main uncertainty in this method comes from the uncertainty in the value of the dielectric constant $\kappa$ and the finite width of the integer-filling resistive features. The second method uses the Landau fan diagrams shown in magnetotransport data. By comparing the longitudinal resistivity data with the Hall conductance, we can assign to each line in the Landau fans a Chern number $C$ such that $\sigma_{xy}=Ce^2/h$, where $e$ is the electron charge and $h$ is Planck's constant. The slopes of the lines in Landau fans are connected to the Chern numbers through $BA_m/\phi_0=Cn/n_s+s$, where $B$ is the magnetic field, $\phi_0=h/e$ is the magnetic flux quantum, and $s$ is the filling fraction from which the Landau fan emanates. The main uncertainty in this method comes from how well the slopes can be fitted.
This gives an uncertainty of $\pm0.02^\circ$ in calculating the angle.
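The capacitance-based method reduces to a one-line inversion of the relations above. The sketch below is illustrative only and assumes nothing beyond the standard graphene lattice constant; the example density in the test is a hypothetical value, not a measurement.

```python
import math

a = 0.246e-9  # graphene lattice constant [m]

def density_from_gates(c_t, c_b, V_t, V_b):
    """Carrier density n = c_t*V_t + c_b*V_b (gate capacitances per unit area, per e)."""
    return c_t * V_t + c_b * V_b

def twist_angle_deg(n_s):
    """Twist angle from the full-filling density: invert n_s = 4/A_m with
    A_m = sqrt(3)*a^2/(2*theta^2), i.e. theta = sqrt(sqrt(3)*a^2*n_s/8)."""
    theta = math.sqrt(math.sqrt(3) * a**2 * n_s / 8.0)  # radians, small-angle
    return math.degrees(theta)
```

For example, a full-filling density of roughly $5.6\times10^{12}\,\mathrm{cm}^{-2}$ maps to $\theta\approx1.55^\circ$ under this small-angle formula.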
\subsection*{S2: Band structure calculation and DOS}
In this section we discuss the single-particle band structure of magic-angle twisted trilayer graphene, shown in Figs.~1c and 4e,f. The density of states is also plotted in Fig.~4g. The band structure was computed from the trilayer analogue \cite{Kha.19} of the Bistritzer--MacDonald model \cite{Bis.11} of twisted bilayer graphene. In this case, however, the in-plane displacement between layers matters. As shown in Ref.~\cite{Kha.19}, the Hamiltonian can be brought to a form where only the relative displacement between the top and bottom layers appears. We denote this displacement ${\boldsymbol{d}}$. For a single spin and graphene valley, the Hamiltonian is
\begin{equation}
H({\boldsymbol{d}}) = \begin{pmatrix} -i v {\boldsymbol{\sigma}}_{\theta/2} \cdot {\boldsymbol{\nabla}} & T({\boldsymbol{r}} - {\boldsymbol{d}}/2) & 0 \\
T^\dag ({\boldsymbol{r}} - {\boldsymbol{d}}/2) & -iv {\boldsymbol{\sigma}}_{-\theta/2} \cdot {\boldsymbol{\nabla}} & T^\dag({\boldsymbol{r}} + {\boldsymbol{d}}/2) \\
0 & T({\boldsymbol{r}} + {\boldsymbol{d}}/2) & -i v {\boldsymbol{\sigma}}_{\theta/2} \cdot {\boldsymbol{\nabla}} \end{pmatrix}.
\label{ham}
\end{equation}
Here, ${\boldsymbol{\sigma}}_{\theta/2} = e^{-\frac{i}{4} \theta \sigma_z} (\sigma_x, \sigma_y ) e^{\frac{i}{4} \theta \sigma_z}$, $v$ is the graphene Fermi velocity, and
\begin{equation}
\begin{aligned}
T({\boldsymbol{r}}) & = \begin{pmatrix} w_0 U_0({\boldsymbol{r}}) & w_1 U_1({\boldsymbol{r}}) \\ w_1 U^*_1(-{\boldsymbol{r}}) & w_0 U_0({\boldsymbol{r}}) \end{pmatrix}, \\
U_0({\boldsymbol{r}}) & = e^{-i {\boldsymbol{q}}_1 \cdot {\boldsymbol{r}} } + e^{-i {\boldsymbol{q}}_2 \cdot {\boldsymbol{r}} } + e^{-i {\boldsymbol{q}}_3 \cdot {\boldsymbol{r}} }, \\
U_1({\boldsymbol{r}}) & = e^{-i {\boldsymbol{q}}_1 \cdot {\boldsymbol{r}} } + e^{i \phi}e^{-i {\boldsymbol{q}}_2 \cdot {\boldsymbol{r}} } + e^{-i \phi}e^{-i {\boldsymbol{q}}_3 \cdot {\boldsymbol{r}} },
\label{tunnel}
\end{aligned}
\end{equation}
with $\phi = 2\pi/3$. The vectors ${\boldsymbol{q}}_i$ are ${\boldsymbol{q}}_1 = k_\theta(0,-1)$ and ${\boldsymbol{q}}_{2,3} =k_\theta(\pm \sqrt{3}/2,1/2)$. The wavevector $k_\theta = 2k_D\sin \frac{\theta}{2}$ is the moir\'{e} version of the Dirac wavevector $k_D= 4\pi/3a_0$, where $a_0$ is the graphene lattice constant. For the other graphene valley, the Hamiltonian is the complex conjugate of \eqref{ham}.
The spectrum of $H({\boldsymbol{d}})$ depends strongly on ${\boldsymbol{d}}$. However, Ref. \cite{Car.20} finds that ${\boldsymbol{d}} = 0$ has the lowest energy due to relaxation effects, and that the system is likely to slide into this configuration naturally. We therefore focus on ${\boldsymbol{d}} = 0$ which corresponds to AA stacking between the top and bottom layers.
For ${\boldsymbol{d}} = 0$, the Hamiltonian has a symmetry under exchanging the top and bottom layers, represented by the mirror operator
\begin{equation}
M_z = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
\end{equation}
We may then consider separately the Hamiltonian in the $M_z = \pm1$ sectors. For $M_z = +1$ we find a TBG Hamiltonian
\begin{equation}
H_+ = \begin{pmatrix} -i v {\boldsymbol{\sigma}}_{\theta/2} \cdot {\boldsymbol{\nabla}} & \sqrt{2} T({\boldsymbol{r}}) \\
\sqrt{2} T^\dag({\boldsymbol{r}}) & -iv {\boldsymbol{\sigma}}_{-\theta/2} \cdot {\boldsymbol{\nabla}} \end{pmatrix},
\label{tbgsector}
\end{equation}
where the tunneling is $\sqrt{2}$ times stronger. On the other hand for $M_z = -1$ we obtain ordinary graphene
\begin{equation}
H_- = -iv {\boldsymbol{\sigma}}_{+\theta/2} \cdot {\boldsymbol{\nabla}}.
\label{graphenesector}
\end{equation}
Here, the ordinary graphene electrons come from the top and bottom layers only and the Dirac cone is centered around the moir\'{e} $K$ point. Similarly, in the other graphene valley the Dirac cone is centered at the moir\'{e} $K'$ point. Thus, for this system we expect that when the angle is $\sqrt{2}$ times the TBG magic angle, we will obtain flat bands from \eqref{tbgsector} together with a Dirac cone from \eqref{graphenesector}. This band structure is depicted in Fig.~1c, with parameters $\theta = 1.55^\circ$, $w_1 = 110~\mathrm{meV}$, and $\kappa = w_0/w_1$.
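The block-diagonalization above can be checked numerically at the level of the layer structure alone. In the sketch below (an illustration, not the actual band-structure code) the intralayer Dirac blocks and the tunneling matrix are replaced by generic $2\times2$ matrices; rotating to the mirror-even/odd basis reproduces a TBG-like block with tunneling enhanced by $\sqrt{2}$ and a decoupled graphene-like block.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_c():
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Generic stand-ins for the intralayer Dirac blocks (top and bottom equal at d = 0)
D_outer = rand_c(); D_outer = D_outer + D_outer.conj().T
D_mid = rand_c(); D_mid = D_mid + D_mid.conj().T
T = rand_c()  # generic interlayer tunneling block

Z = np.zeros((2, 2), dtype=complex)
# Layer order: top, middle, bottom, as in the Hamiltonian of the text
H = np.block([[D_outer,    T,     Z         ],
              [T.conj().T, D_mid, T.conj().T],
              [Z,          T,     D_outer   ]])

s = 1 / np.sqrt(2)
# Rows: mirror-even (top+bottom)/sqrt(2), middle, mirror-odd (top-bottom)/sqrt(2)
U = np.block([[s * np.eye(2), Z,         s * np.eye(2)],
              [Z,             np.eye(2), Z            ],
              [s * np.eye(2), Z,        -s * np.eye(2)]])
Hrot = U @ H @ U.conj().T

H_plus = Hrot[:4, :4]   # TBG-like sector with sqrt(2)-enhanced tunneling
H_minus = Hrot[4:, 4:]  # decoupled graphene-like sector
```

The mixing blocks between the two sectors vanish identically, and the even sector indeed carries the tunneling $\sqrt{2}\,T$.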
A nonzero displacement field mixes the TBG and graphene sectors by breaking $M_z$; its effect is largest at the K point where the bands intersect. There, the two Dirac points near charge neutrality, one from each of the graphene and TBG subsystems, split and hybridize so that there is one above zero energy and one below zero energy. These Dirac points are still protected by inversion combined with time reversal which acts as $H({\boldsymbol{r}}) \to \sigma_x H^*(-{\boldsymbol{r}}) \sigma_x$ and is a symmetry when ${\boldsymbol{d}} = 0$. Band structures with nonzero displacement fields are shown in Fig.~4e,f. with the same parameters as Fig.~1c.
The density of states is shown in Fig.~4g in the main text. It is obtained from the band structure of the Hamiltonian \eqref{ham} by a Gaussian smoothing of the energy levels with standard deviation $0.03$ meV. Here we also include the density of states plotted versus energy instead of filling, see Fig.~S\ref{fig:dosenergy}.
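The Gaussian smoothing used for the density of states can be sketched as follows (a minimal illustration; the actual energy levels come from diagonalizing \eqref{ham} and are not reproduced here):

```python
import numpy as np

def dos(energies, grid, sigma=0.03):
    """Gaussian-smoothed density of states: each level E_n contributes a
    normalized Gaussian of standard deviation sigma (meV) centred at E_n,
    evaluated on the energy grid."""
    E = np.asarray(energies, dtype=float)[None, :]  # (1, n_levels)
    g = np.asarray(grid, dtype=float)[:, None]      # (n_grid, 1)
    w = np.exp(-0.5 * ((g - E) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return w.sum(axis=1)
```

Since each level carries unit weight, the smoothed curve integrates to the number of levels, which provides a quick sanity check of the normalization.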
\begin{figure}
\centering
\includegraphics[width = 0.9\textwidth]{dosenergy}
\caption{ \textbf{Density of states versus energy and displacement field.} Similar to Figure 4g in the main text, one sees two flat bands that spread out after a sufficiently large displacement field is applied. Prominent van Hove singularities are visible in white and spread out with increasing displacement field.}
\label{fig:dosenergy}
\end{figure}
\subsection*{S3: Sample homogeneity}
Fig.~S2 shows a comparison of $\rho$ versus $n$ measured with $V_{\text{BG}}=0$ for different pairs of contacts. From Fig.~S2A to E, the blue circles in the device image illustrate the pair of contacts, labeled P1 - P5, used for measuring the data on the right. Red dashed lines label the resistive states at integer fillings $\nu=$-2, 0, 1, 2, 3. It can be seen that for different contacts, the red dashed lines are slightly misaligned with respect to each other, indicating that the regions between the contacts have different angles. We calculated the angle for P2 using quantum oscillations. Then, using the relative ratios of the full-filling densities, we obtained the angle for each pair of contacts. The measured angles are $\theta_1=1.552^\circ$, $\theta_2=1.567^\circ$, $\theta_3=1.567^\circ$, $\theta_4=1.572^\circ$, and $\theta_5=1.572^\circ$, all with uncertainty $\pm0.02^\circ$. Although the third digit in the angles may seem meaningless given the magnitude of the uncertainty, it indicates the relative angle difference between different pairs of contacts, which has a smaller uncertainty. There are double-peak features in P5 indicating a region of $\theta=1.61^\circ$. None of the presented data was taken in this more disordered region. We note that over the majority of the sample the angle is extremely uniform, changing less than $0.2^\circ$. The angle gradually becomes larger from the left to the right, and there is more angle disorder on the right side of the sample. The superconductivity is strongest at the left end of the sample with $\theta_1=1.552^\circ$, and the majority of the presented data was taken in this region.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{contact_compare.pdf}
\caption{\textbf{Angle inhomogeneity.} The blue circles mark which pair of contacts are measured for the data in each figure. Red dashed lines label integer fillings from left to right $\nu=$-2, 0, 1, 2 and 3.}
\end{figure}
\subsection*{S4: Fan diagram comparison}
Fig.~S3A shows the inverse Hall resistivity $1/\rho_{xy}$ as a function of $\nu$ and B at $V_{\text{BG}}=0$ and T=340~mK. This is complementary to the $\rho(B,n)$ data shown in Fig.~1f. To better illustrate the quantization values, in Fig.~S3B we overlay the $1/\rho_{xy}(B,n)$ data with contours that have values $(l\pm0.5)e^2/h$, where $l$ is an integer between 0 and 15. We can see large areas of $1/\rho_{xy}=-2e^2/h$ near $\nu=-4$ and $2e^2/h$ near $\nu=4$. From the single-particle band calculation we know that when a displacement field is applied, the Dirac cone and the flat bands, which are independent at zero displacement field, mix, resulting in two Dirac cones splitting to higher and lower energy, respectively. The large regions of quantized inverse Hall resistivity are likely from the quantum Hall states of these Dirac cones. Fig.~S3C is a schematic of the quantum Hall structure observed in Fig.~1f. Emanating from charge neutrality, the main sequences are $C=-2, -6, -10,\dots$ on the hole-doped side and $C=2, 6, 10,\dots$ on the electron-doped side. At higher magnetic field, between $C=-2$ and $C=-6$, symmetry-breaking states with $C=-3, -4$ and $-5$ emerge, and the sequence $C=-14,-18,-22$ transitions into $C=-12, -16, -20$.
Fig.~S4 shows the fan diagram at zero displacement field. The Landau fan sequences emerging from neutrality and $\nu=\pm2$ are similar to those observed in the $V_{\text{BG}}=0$ fan. Interestingly, in the low-field range we observe ``arc-like'' features, a coexisting quantum oscillation structure distinct from those of the flat bands. We argue these are in fact the quantum Hall states of the additional Dirac cone. Fig.~S5A shows $\rho$ and the Hall conductance $\sigma_{xy}$ as a function of inverse magnetic field at $\nu=-3.82$, where the arcs are prominent. We see clear quantum oscillations, displayed as the equally spaced minima in $\rho$. The $\sigma_{xy}$ values corresponding to the first two minima are quantized to $-2e^2/h$ and $-6e^2/h$, the same as the Dirac Landau level sequence. $\sigma_{xy}$ for the other minima is not well quantized, most likely because they appear at lower magnetic fields and are not well developed. Fig.~S5B shows $\frac{d\rho}{dB}$ to make these quantum oscillations more prominent.
The quantum Hall states of the Dirac cone appear as arcs instead of the usual straight lines emanating from $\nu=0$ because the flat bands and the Dirac cones are filled simultaneously. Kinks in the Dirac cone states correspond to changes in the flat band chemical potential due to strong-interaction-induced symmetry breaking, as has been observed in TBG \cite{Zon.20,Won.20,park2020flavour}. To confirm that these structures are the Landau fan of the additional Dirac cone, in Fig.~S5C and E we trace out $\rho$ minima for the visible states, which we assume are the $C=6$, 10 and 14 states of the Dirac cone, labeled by the triangle symbols with different colors. With these traces, we obtain the positions in magnetic field $B_6$, $B_{10}$, and $B_{14}$ as a function of $\nu$ for the $C= 6$, 10 and 14 states, respectively. According to the Diophantine equation $\nu-s=C\phi/\phi_0$ (where $s$ is the filling from which the Landau fan emanates and $\phi=BA_m$ is the magnetic flux through a moir\'e unit cell), if these structures are quantum Hall states with $C=6$, 10, 14, we expect that $6B_6=10B_{10}=14B_{14}$ even with charge carriers split between the Dirac cone and the flat bands. Fig.~S5D and F show the normalized ratios between $6B_6$, $10B_{10}$ and $14B_{14}$, and indeed they are roughly one, confirming that the states originate from the Dirac cone. We also note that these arcs are not present when the fan diagram is measured with a finite $D$, as shown in Fig.~1 of the main text. This is consistent with the theoretical prediction that a gap opens at the Dirac cone when a displacement field is applied, causing the Dirac cones to not fill until after the flat bands.
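The consistency check $6B_6=10B_{10}=14B_{14}$ amounts to normalizing $C\,B_C$ by a reference trace. The sketch below (with synthetic field positions, not the measured ones) implements it; for genuine quantum Hall states of a single Dirac cone all ratios should be close to one.

```python
import numpy as np

def normalized_ratios(b_by_chern, ref_chern=6):
    """b_by_chern maps a Chern number C to the array of magnetic-field
    positions B_C(nu) of the rho minima, sampled at common filling values.
    By the Diophantine equation, C * B_C at fixed nu is independent of C
    for states of one Fermi surface, so every normalized ratio should be ~1."""
    ref = ref_chern * np.asarray(b_by_chern[ref_chern], dtype=float)
    return {c: c * np.asarray(b, dtype=float) / ref
            for c, b in b_by_chern.items()}
```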
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{fanBG0.pdf}
\caption{\textbf{TTG Hall resistivity at $V_{\text{BG}}=0$ and T=340mK}. \textbf{(A)} $1/\rho_{xy}$ as a function of $\nu$ and B. \textbf{(B)} Contour plot of $1/\rho_{xy}$. Contour values are taken as $(l\pm0.5)e^2/h$, where $l$ is an integer between 0 and 15. Large regions of $1/\rho_{xy}=\pm2e^2/h$ are marked. \textbf{(C)} Schematic of the quantum Hall structure in the Landau Fan diagram at $V_{\text{BG}}=0$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{fanD0.pdf}
\caption{\textbf{TTG Landau fan diagrams at $D=0$ and T=340mK.} \textbf{(A)} and \textbf{(B)} $\rho$ and $1/\rho_{xy}$ as a function of $\nu$ and $B$. \textbf{(C)} Schematic of the quantum Hall structure.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Dirac_evidence.pdf}
\caption{\textbf{Evidence of the additional Dirac cone.} \textbf{(A)} Quantum oscillations at $\nu=-3.82$. The blue curve is the longitudinal resistivity $\rho$ and the orange curve is the Hall conductance $\sigma_{xy}$. Black dashed lines mark quantized conductance values corresponding to the sequence -2, -6, -10, \dots \textbf{(B)} $\frac{d\rho}{dB}$ of the $\rho$ data in (A) below 2~T. \textbf{(C)} and \textbf{(E)} Zoom-ins of the oscillations on the hole side and the electron side. Triangles trace $\rho$ minima. $B_{6}$, $B_{10}$ and $B_{14}$ correspond to the Dirac cone Landau fan sequence 6, 10 and 14. \textbf{(D)} and \textbf{(F)} Normalized ratios between the magnetic field values multiplied by the sequence numbers of the different traces. The ratios are all approximately 1.}
\label{dirac_evidence}
\end{figure}
\subsection*{S5: Critical temperature and GL coherence length}
We extract $T_c$ from $\rho(T)$ by extrapolating the normal-state resistivity $\rho_N$ to low temperature, fitting a line to the high-temperature linear $\rho$ in the normal state, and finding the temperature where $\rho(T)=x\rho_N(T)$, where $x$ is a percentage. An example of this linear fit for data at 2~T is shown in Fig.~S\ref{fig.Tcextract}, as well as $\rho(T)$ taken at several different magnetic fields. We note that a dip in $\rho$ at low temperature is evident in the data even at high $B$, although $\rho$ does not go to zero. A similar tail of low $\rho$ extending to high field is evident in the $dV/dI$ data and in $\rho(\nu, B)$, shown in Fig.~S\ref{fig.Bfigs}. It is possible that this dip is due to a vortex phase with non-zero $\rho$.
This dip in resistivity leads to very different values of $T_c$ depending on the choice of $x$. Fig.~S\ref{fig.coh} shows this difference for $x=0.1$ and $x=0.5$. For $x=0.1$ we find a linear relationship as described by the Ginzburg-Landau (GL) theory for a two-dimensional superconductor: $B_c=[\Phi_0/(2\pi\xi^2_{\text{GL}})](1-T/T_c)$ where $\xi_{\text{GL}}$ is the GL coherence length \cite{Tin.04}. For $x=0.1$, $\xi_{\text{GL}}=61$~nm. For $x=0.5$ the resulting $T_c$ is much higher and remains above 1.5~K to fields larger than 2~T. It is also very non-linear, although fitting the low-field portion gives $\xi_{\text{GL}}=13.4$~nm. We have chosen $x=0.1$ as the standard for this paper as it more clearly defines the region where we observe $\rho=0$ at low temperature, and agrees well with the measured BKT transition temperature.
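The two-step procedure (a linear fit to the normal state, a threshold crossing for $T_c$, then a linear GL fit of $B_c(T)$) can be sketched as follows. This is an illustration on synthetic data, taking the superconducting flux quantum $\Phi_0=h/2e$; the fitting window \texttt{t\_fit\_min} is a hypothetical choice, not the one used for the actual data.

```python
import numpy as np

PHI0 = 2.067833848e-15  # superconducting flux quantum h/2e (Wb)

def extract_tc(T, rho, x=0.1, t_fit_min=4.0):
    """T_c at threshold x: fit a line to the high-temperature (normal-state)
    part of rho(T), then return the largest T where rho has dropped below
    x * rho_N(T)."""
    T, rho = np.asarray(T, dtype=float), np.asarray(rho, dtype=float)
    mask = T >= t_fit_min
    slope, intercept = np.polyfit(T[mask], rho[mask], 1)
    rho_n = slope * T + intercept
    below = np.where(rho <= x * rho_n)[0]
    return T[below].max() if below.size else np.nan

def gl_coherence_length(tc_vals, bc_vals):
    """Linear GL fit B_c(T) = B_c0 (1 - T/T_c0); the zero-temperature
    intercept B_c0 gives xi_GL = sqrt(Phi0 / (2 pi B_c0))."""
    _, bc0 = np.polyfit(np.asarray(tc_vals), np.asarray(bc_vals), 1)
    return np.sqrt(PHI0 / (2 * np.pi * bc0))
```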
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{BSweeps.pdf}
\caption{\textbf{$\mathbf{T_c}$ Extraction} $\rho(T)$ measured at several different fields at $\nu=-2.3$ and $D/\epsilon_0=0.3$~V/nm. The dashed line is a fit to the normal state resistivity at 2~T.}
\label{fig.Tcextract}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Bfigs.pdf}
\caption{\textbf{B dependence} \textbf{a} Differential resistance as a function of DC bias current, showing a long low-resistance tail extending to large $B$. \textbf{b} Resistivity as a function of $\nu$ and $B$, also with low resistivity at larger $B$. }
\label{fig.Bfigs}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{coherence_length.pdf}
\caption{\textbf{Coherence Length} $T_c$ determined using $x=0.1$ \textbf{a} and $x=0.5$ \textbf{b}. Dashed lines are fits to GL theory with different $\xi_{\text{GL}}$.}
\label{fig.coh}
\end{figure}
\subsection*{S6: Strong-coupling superconductivity}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{FitExamples.pdf}
\caption{\textbf{Strong coupling fit} Examples of typical fits to extract the slope of $T_c$ for comparison with a strong-coupling model of superconductivity. The black dashed lines show fits to $T_c$ near the optimal point on the superconducting domes constrained to pass through $\nu=\pm2$. }
\label{fig.FitEx}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{JandTc.pdf}
\caption{\textbf{Extracted pairing scale} Extracted values of $J$ from the fits to the strong-coupling model in the hole (\textbf{a}) and electron (\textbf{b}) regions of superconductivity compared with maximum $T_c$ at each $D$ from Fig. 3 of the main text. We see $J$ and the $T_c$ are correlated.}
\label{fig.J}
\end{figure}
The rapid increase in $T_c$ with doping, in addition to the suppression of superconductivity due to the van Hove singularity, points towards a strong-coupling (BEC) scenario for superconductivity in which preformed bosonic charge $2e$ objects are condensed. One such model is the skyrmion model of superconductivity of Ref.~\cite{Kha.20}, in which these bosonic charge $2e$ objects are topological skyrmion textures in a pseudospin variable. Regardless of the actual mechanism, a strong-coupling BEC superconductor obtained by condensing charge $2e$ objects whose density is $\nu_{2e}$ and mass is $M_{2e}$ is characterized by the critical temperature \cite{NelsonKosterlitz}
\begin{equation}
k_B T_c = \frac{\nu_{2e} \pi\hbar^2}{2A_M M_{2e}}=\frac{\nu_{2e} J}{2}.
\end{equation}
Here, we take the filling fraction of the charge $2e$ objects $\nu_{2e}$ to be equal to half the filling fraction measured relative to half-filling $\nu = \pm 2$. $A_M$ denotes the area of the moir\'e unit cell and $J$ is an effective pairing scale. We can use this formula to extract $J$ from our data by fitting $T_c(\nu)$ near its maximum, where we expect this formula to apply. Fig.~S\ref{fig.FitEx} shows examples of these fits in the electron and hole superconducting domes, superimposed onto the dome resistivity. Since the superconductivity appears only in the flavour-symmetry-broken regions where $n_H-\nu=2$, in which the relevant carrier filling fraction is measured relative to $\nu=\pm2$, we constrain the fits so that $T_c(\nu=\pm2)=0$. The resulting fits agree reasonably well with our data and correspond to values of $J$ between 2.5 and 3.5~meV on the hole side and 0.6 and 1.2~meV on the electron side. This is roughly of the same order as the coupling scale predicted theoretically \cite{Kha.20}. Moreover, as shown in Fig.~S\ref{fig.J}, we find that $J$ is correlated with the maximum $T_c$ at a given $D$, as we would expect if $J$ is a measure of the pairing strength. We note that based on our extracted values of $J$ we calculate $M_{2e}\sim m_e$ for the hole superconductivity and $M_{2e}\sim 3 m_e - 5 m_e$ for the electron superconductivity.
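The quoted boson masses follow directly from the extracted $J$: rearranging the formula above gives $M_{2e}=\pi\hbar^2/(A_M J)$. A small numerical sketch (an illustration, with $\theta=1.55^\circ$ assumed for $A_M$):

```python
import math

HBAR = 1.054571817e-34  # J s
M_E = 9.1093837015e-31  # electron mass (kg)
MEV = 1.602176634e-22   # 1 meV in J

def moire_cell_area(theta_deg, a=0.246e-9):
    """A_M = sqrt(3) a^2 / (2 theta^2) for small twist angle theta."""
    th = math.radians(theta_deg)
    return math.sqrt(3) * a**2 / (2 * th**2)

def pair_mass(J_mev, theta_deg=1.55):
    """From k_B Tc = nu_2e pi hbar^2 / (2 A_M M_2e) = nu_2e J / 2,
    the charge-2e boson mass is M_2e = pi hbar^2 / (A_M J)."""
    A_M = moire_cell_area(theta_deg)
    return math.pi * HBAR**2 / (A_M * J_mev * MEV)
```

With $J\approx3$~meV (hole side) this gives $M_{2e}$ of order $m_e$, and with $J\approx0.9$~meV (electron side) a few $m_e$, consistent with the values quoted above.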
\pagebreak
In this paper we consider the Dirichlet problem for the constant mean curvature equation on a domain of a horosphere in three-dimensional hyperbolic space $\mathbb{H}^3$. In order to fix the terminology, we consider the upper half-space model of ${\mathbb H}^3$, that is, $\mathbb{R}^3_{+}=\{(x_1,x_2,x_3)\in\mathbb{R}^3:x_3>0\}$ endowed with the hyperbolic metric $g=g_0/x_3^2$, $g_0$ being the Euclidean metric. After a rigid motion of $\mathbb{H}^3$, a horosphere can be expressed as a horizontal plane $P_a$ of equation $x_3=a$, $a>0$. Let $\Omega$ be a domain of $P_a$, where we identify $\Omega$ with its orthogonal projection $\Omega\times\{0\}$ on the plane $x_3=0$. We study the Dirichlet problem
\begin{eqnarray}
&&\mbox{div}\left(\dfrac{Du}{\sqrt{1+|Du|^2}}\right)=-\frac{2}{u}\left(\frac{1}{\sqrt{1+|Du|^2}}-H\right)\ \mbox{in $\Omega$}\label{eq1}\\
&&u=a>0\ \mbox{on $\partial\Omega,$}\label{eq2}
\end{eqnarray}
where $u>0$ is a smooth function in $\Omega$, $H\in\mathbb{R}$ is a constant and $D$ and $\mbox{div}$ denote the gradient and the divergence operators in the Euclidean plane $\mathbb{R}^2$. The graph $\Sigma_u=\{(x,u(x)):x\in\Omega\}$, $x=(x_1,x_2)$, represents a surface in $\mathbb{H}^3$ with constant mean curvature $H$ computed with respect to the upwards orientation. The study of the solutions of the Dirichlet problem (\ref{eq1})-(\ref{eq2}) depends strongly on the relation between $H$ and the value $1$, the modulus of the sectional curvature $-1$ of ${\mathbb H}^3$. For example, if $H<1$ ($H>1$), then $\Sigma_u$ lies above the horosphere $P_a$ (respectively below $P_a$), and the geometric behaviour of $\Sigma_u$ in both cases is completely different: let us observe that in hyperbolic geometry, the Euclidean translations along the $x_3$-coordinate are not isometries of ${\mathbb H}^3$.
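A quick consistency check of (\ref{eq1}): in the upper half-space model, Euclidean hemispheres centered at a point of the ideal boundary $x_3=0$ are totally geodesic, hence minimal ($H=0$). The following sketch (a numerical illustration, not part of the argument) verifies by central finite differences that $u=\sqrt{R^2-x_1^2-x_2^2}$ satisfies (\ref{eq1}) with $H=0$; in fact both sides equal $-2/R$.

```python
import math

R = 2.0   # hemisphere radius (illustrative choice)
h = 1e-4  # finite-difference step

def u(x, y):
    # upper Euclidean hemisphere over the disk of radius R
    return math.sqrt(R**2 - x**2 - y**2)

def flux(x, y):
    # Du / sqrt(1 + |Du|^2), with Du computed by central differences
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    W = math.sqrt(1 + ux**2 + uy**2)
    return ux / W, uy / W

def lhs(x, y):
    # div(Du / sqrt(1 + |Du|^2))
    fx1, _ = flux(x + h, y)
    fx0, _ = flux(x - h, y)
    _, fy1 = flux(x, y + h)
    _, fy0 = flux(x, y - h)
    return (fx1 - fx0) / (2 * h) + (fy1 - fy0) / (2 * h)

def rhs(x, y, H=0.0):
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    W = math.sqrt(1 + ux**2 + uy**2)
    return -(2 / u(x, y)) * (1 / W - H)
```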
In this article, we use the theory of maximum principles developed by Payne and Philippin to obtain gradient estimates for solutions of (\ref{eq1})-(\ref{eq2}); we derive estimates of $|Du|$ in terms of $C^0$ bounds of $u$.
\begin{theorem}\label{t-du}
Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain. Let $u$ be a solution of (\ref{eq1})-(\ref{eq2}) and denote $u_M=\sup_\Omega u$ and
$$C=\frac{1}{1-H}\frac{u_M^2}{a^2}\cdot$$
If $0\leq H<1$ or if $H<0$ with
\begin{equation}\label{um}
u_M<\sqrt{\frac{H-1}{H}}a,
\end{equation}
then
\begin{equation}\label{duu}
|Du| \leq \frac{\sqrt{C^2-(1+HC)^2}}{1+HC}\quad\mbox{in $\Omega$.}
\end{equation}
\end{theorem}
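The bound (\ref{duu}) is straightforward to evaluate. The sketch below (an illustration with hypothetical values of $H$, $a$ and $u_M$) also checks the hypothesis (\ref{um}) when $H<0$; note that $u_M\geq a$, which holds for any solution, guarantees that the radicand is nonnegative.

```python
import math

def gradient_bound(H, a, u_max):
    """Evaluate the bound (duu): with C = u_max^2 / ((1 - H) a^2),
    |Du| <= sqrt(C^2 - (1 + H C)^2) / (1 + H C).
    For H < 0, the hypothesis u_max < sqrt((H - 1)/H) * a must hold; together
    with u_max >= a it keeps 1 + H*C > 0 and the radicand nonnegative."""
    if H < 0:
        assert u_max < math.sqrt((H - 1) / H) * a, "hypothesis (um) violated"
    C = u_max**2 / ((1 - H) * a**2)
    return math.sqrt(C**2 - (1 + H * C)**2) / (1 + H * C)
```

For example, for $H=0$, $a=1$ and $u_M=2$ one gets $C=4$ and the bound $\sqrt{15}$.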
If we have estimates for the gradient of solutions of (\ref{eq1})-(\ref{eq2}), it is natural to address the problem of the existence of solutions of the Dirichlet problem. In the context of the hyperbolic space, the results of existence require some assumption on the convexity of the domain $\Omega$. If $0\leq H<1$, the convexity of $\partial\Omega$ is enough to ensure the existence of a solution of (\ref{eq1})-(\ref{eq2}): see \cite{lo0,ns}. However, if $H<0$, the mere convexity of $\Omega$ does not ensure the existence of solutions and stronger convexity is required. More exactly, the solvability of the Dirichlet problem (\ref{eq1})-(\ref{eq2}) was proved if the curvature $\kappa$ of $\partial\Omega$ satisfies $-\kappa<H<0$ (\cite[Th. 1.1]{lm}). For other existence results, see \cite{dl,li,lo1,st}. As a consequence of Theorem \ref{t-du}, we establish the following existence result.
\begin{theorem}\label{t-ex}Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain. Let $2R$ be the diameter of $\partial\Omega$. If $-1\leq H<0$ satisfies
\begin{equation}\label{hhh}
R^2<-2-\frac{1}{H}+2\sqrt{\frac{H}{H-1}},
\end{equation}
then there exists a unique solution of (\ref{eq1})-(\ref{eq2}).
\end{theorem}
We notice that we need to assume that the diameter of $\Omega$ is small in relation to the value of $H$; in contrast, strong convexity of $\partial\Omega$ is not necessary, and we allow the existence of regions of $\partial\Omega$ whose curvature is close to $0$.
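Condition (\ref{hhh}) translates into an explicit admissible diameter. For instance (a numerical illustration), for $H=-1$ one needs $2R<2\sqrt{\sqrt{2}-1}\approx1.29$, and the restriction relaxes as $H\nearrow0$:

```python
import math

def max_diameter(H):
    """Largest diameter 2R compatible with condition (hhh) for -1 <= H < 0:
    R^2 < -2 - 1/H + 2*sqrt(H/(H-1))."""
    assert -1 <= H < 0
    r2 = -2 - 1 / H + 2 * math.sqrt(H / (H - 1))
    return 2 * math.sqrt(r2) if r2 > 0 else 0.0
```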
Theorems \ref{t-du} and \ref{t-ex} will be proved in $\S$ \ref{sec3}. In the proof of these results, we need to show the uniqueness of critical points of solutions of (\ref{eq1})-(\ref{eq2}). Although this may be expected because of the resemblance of (\ref{eq1}) to other quasilinear elliptic equations, as for example, the capillary equation (\cite{ba,be,el}) or the singular minimal surface equation (\cite{lo2}), we prove this uniqueness only in the range of values $H<1$, which is enough for our purposes: see Theorem \ref{t1} in $\S$ \ref{sec2}. Finally we prove in $\S$ \ref{sec4} an estimate from below of the global maximum $u_M$ of a solution of (\ref{eq1})-(\ref{eq2}) when $H<1$. In general, estimates of $u$ are obtained by comparing $u$ with known solutions of (\ref{eq1}), as for example, radial solutions. However, our result establishes a lower estimate of the value of $u$ at the critical point in terms of the curvature of $\partial\Omega$ and $H$: see Theorem \ref{t3}.
\section{Uniqueness of critical points}\label{sec2}
The first result in this paper establishes, under some hypotheses, the uniqueness of critical points of a solution of the Dirichlet problem. The number of critical points of solutions of elliptic equations is a subject of great interest, and the literature is very extensive, especially in relation to the convexity of level sets of solutions. In the context of the constant mean curvature equation in Euclidean space, and if the domain is convex, Sakaguchi proved the existence of a unique critical point assuming Dirichlet or Neumann boundary conditions (\cite{sa}). In this paper we address this problem for the constant mean curvature equation in hyperbolic space when $H<1$.
\begin{theorem}\label{t1}
Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain and let $H\in\mathbb{R}$. If $H<1$, then a solution $u$ of (\ref{eq1})-(\ref{eq2}) has exactly one critical point, which coincides with the point where $u$ attains its global maximum.
\end{theorem}
We prove this result as a consequence of the following arguments.
A first step consists in proving the existence of at least one critical point of a solution $u$ of (\ref{eq1})-(\ref{eq2}). When $H\leq 0$, this is achieved by the Hopf maximum principle. Indeed, because the right-hand side of (\ref{eq1}) is non positive, the minimum of $u$ is attained at some boundary point, proving $u>a$ in $\Omega$. Since $\Omega$ is bounded, the function $u$ has a global maximum at some interior point.
This argument fails if $0<H<1$. For this range of values of $H$ (also if $H\leq 0$) we will use a {\it comparison principle} based on the standard theory of quasilinear elliptic equations (\cite[Th. 10.1]{gt}). In our context, it can be formulated as follows: if two surfaces $\Sigma_1$ and $\Sigma_2$ have a common interior point $p$ and constant mean curvatures $H_1$ and $H_2$, respectively, with respect to the orientations that coincide at $p$, and if $\Sigma_1$ lies above $\Sigma_2$ around $p$, then $H_1\geq H_2$ (the same conclusion holds if $p$ is a common boundary point with tangent boundaries at $p$): see \cite[p. 194]{lo}.
\begin{lemma}\label{l1}
Suppose $\Omega\subset\mathbb{R}^2$ is a bounded domain. If $H<1$, then a solution of (\ref{eq1})-(\ref{eq2}) satisfies $u>a$ in $\Omega$.
\end{lemma}
\begin{proof} The proof is by contradiction. Suppose that there exists $x_0\in \Omega$ where $u$ attains its minimum value, with $u(x_0)\leq a$. Let $p=(x_0,u(x_0))$. For $b<u(x_0)$, consider the horosphere $P_b$ of equation $x_3=b$, whose mean curvature is $1$ with respect to the upwards orientation. Then we move $P_b$ upwards by increasing $b$ until the first touching point with $\Sigma_u$, which occurs at $b_1=u(x_0)$. Then the horosphere $P_{b_1}$ touches $\Sigma_u$ at $p$, which is an interior point of both $\Sigma_u$ and $P_{b_1}$. As $\Sigma_u$ lies above $P_{b_1}$ and $H<1$, we arrive at a contradiction with the comparison principle.
\end{proof}
Having proved this lemma, we continue with the proof of Theorem \ref{t1}. Denote $u_k=\partial u/\partial x_k$, $k=1,2$, and consider the summation convention of repeated indices. Equation (\ref{eq1}) can be expressed as
$$(1+|Du|^2)\Delta u-u_iu_ju_{ij}+\frac{2(1+|Du|^2)}{u}-\frac{2H(1+|Du|^2)^{3/2}}{u}=0.$$
Denote $v^k=u_k$, $k=1,2$, and we differentiate the above identity with respect to $x_k$ obtaining:
\begin{eqnarray}
&&\left((1+|Du|^2)\delta_{ij}-u_iu_j\right)v_{ij}^k+2\left(u_i\Delta u-u_ju_{ij}+\frac{2 u_i}{u}-\frac{3Hu_i}{u}(1+|Du|^2)^{1/2}\right)v_i^k\nonumber\\
&&-\frac{2(1+|Du|^2)}{u^2}(1-H\sqrt{1+|Du|^2})v^k=0\label{eq33}
\end{eqnarray}
for $k=1,2$ and where $\delta_{ij}$ is the Kronecker delta. Equation (\ref{eq33}) is a linear elliptic equation in $v^k$. We need to apply the Hopf Maximum Principle (\cite[Th. 3.5]{gt}) to this equation. Then we have to check that the coefficient of $v^k$ is non-positive, or equivalently,
\begin{equation}\label{ineh}
1-H\sqrt{1+|Du|^2} \geq 0\quad \mbox{in $\Omega$.}
\end{equation}
If $H\leq 0$, this inequality is clear. If $0<H< 1$, one needs to estimate $|Du|$ in terms of $H$. For this, we prove the next lemma, which is implicitly contained in the proof of the main result in \cite{lm}.
\begin{lemma}\label{l2}
Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain of $\mathbb{R}^2$ and let $0<H< 1$. If $u$ satisfies (\ref{eq1})-(\ref{eq2}), then
\begin{equation}\label{esh}
|Du|^2\leq\frac{1-H^2}{H^2}\cdot
\end{equation}
\end{lemma}
\begin{proof}
Consider the Minkowski model for $\mathbb{H}^3$ (see notation and details in \cite{lm}). It is proved in \cite[Theorem 4.1]{lm} that under the assumptions of Lemma \ref{l2},
\begin{equation}\label{hna}
H\langle p,a\rangle+\langle N'(p),a\rangle\leq 0,\quad \mbox{ $p\in \Sigma_u$},
\end{equation}
where $N'$ is the Gauss map of $\Sigma_u$. We write the inequality (\ref{hna}) in the upper half-space model of $\mathbb{H}^3$. The relation between both models establishes
$$ \langle p,a\rangle=\frac{1}{u},\quad \langle N',a\rangle=-\frac{\langle N,(0,0,1)\rangle}{u},$$ where here $N$ is the Gauss map of $\Sigma_u$ as surface in Euclidean space $\mathbb{R}^3_+$. Thus (\ref{hna}) becomes
$H-\langle N,(0,0,1)\rangle \leq 0$, that is,
$$H-\frac{1}{\sqrt{1+|Du|^2}}\leq 0,$$
which yields (\ref{esh}).
\end{proof}
As a consequence of Lemma \ref{l2}, the Hopf Maximum Principle for equation (\ref{eq33}) implies that if $v^k$ takes a non-negative maximum in $\Omega$ or a non-positive minimum in $\Omega$, then $v^k$ must be a constant function (\cite[Th. 3.5]{gt}). We also point out that the function $u$ is analytic by standard theory (\cite{bers,ni}), and the same holds for the functions $v^k$.
For each $\theta\in\mathbb{R}$, let $(\cos\theta,\sin\theta)$ be a vector of the unit circle ${\mathbb S}^1$. Since (\ref{eq33}) is a linear equation on $v^k$, the function
\begin{equation}\label{vv}
v(\theta)= Du\cdot (\cos\theta,\sin\theta)=v^1\cos\theta+v^2\sin\theta
\end{equation}
also satisfies (\ref{eq33}). Denote by $\mathbf{n}$ the outward unit normal vector of $\partial\Omega$. Because $u$ is constant along $\partial\Omega$, we have $(v^1,v^2)=Du= ( Du\cdot\mathbf{n} ) \mathbf{n}$ along $\partial\Omega$, that is,
$$(v^1,v^2)=\frac{\partial u}{\partial n}\mathbf{n}.$$
From (\ref{vv}),
$$v(\theta)=\frac{\partial u}{\partial\mathbf{n}} \mathbf{n}\cdot (\cos\theta,\sin\theta)\quad \mbox{ along }\partial\Omega.$$
On the other hand, since $u$ is constant along $\partial \Omega$, the Strong Maximum Principle of Hopf (\cite[Th. 3.6]{gt}) implies that any outward
directional derivative on $\partial\Omega$ is negative and thus,
$$\frac{\partial u}{\partial \mathbf{n}}<0\quad \mbox{along $\partial\Omega$}.$$
Fix $\theta\in\mathbb{R}$. Since $\partial\Omega$ is strictly convex, the map $\mathbf{n}:\partial\Omega\rightarrow{\mathbb S}^1$ is one-to-one. It follows that there exist exactly two points of $\partial\Omega$ where $\mathbf{n}$ is orthogonal to the fixed direction $(\cos\theta,\sin\theta)$. By the definition of $v(\theta)$, the function $v(\theta)$ vanishes along $\partial\Omega$ at exactly these two points.
We now follow the argument given by Philippin in \cite{ph} to prove the uniqueness of the critical points. For completeness, we give it briefly. The proof is by contradiction: suppose that there are at least two critical points of $u$ in $\Omega$. Let $P_1$ and $P_2$ be two critical points, which are fixed in the sequel. The argument consists of the following steps.
\begin{enumerate}
\item The function $v(\theta)$ is not constant in $\Omega$ because $v(\theta)$ only has two zeroes along $\partial\Omega$.
\item As a consequence, the critical points of $v(\theta)$ are isolated points of $\Omega$ because $v(\theta)$ is analytic.
\item Let $\mathcal{N}_\theta=v(\theta)^{-1}(\{0\})$ be the nodal set of $v(\theta)$. Because $v(\theta)$ is analytic, standard theory asserts that near a critical point of $v(\theta)$, the function $v(\theta)$ is asymptotic to a harmonic homogeneous polynomial (\cite{bers}). Following Cheng \cite{ch}, $\mathcal{N}_\theta$ is diffeomorphic to the nodal set of the homogeneous polynomial that approximates it; in particular, $\mathcal{N}_\theta$ is formed by a set of regular analytic curves at regular points, the so-called nodal lines. On the other hand, in a neighbourhood of a critical point, the nodal lines form an equiangular system.
We claim that there does not exist a closed component of $\mathcal{N}_\theta$ contained in $\Omega$. This is because if $\mathcal{N}_\theta$ encloses a subdomain $\Omega'$ of $\Omega$ where $v(\theta)=0$ along $\partial\Omega'$, the maximum principle implies that $v(\theta)$ is identically $0$ in $\Omega'$, a contradiction.
\item We prove that $\mathcal{N}_\theta$ is formed by exactly one nodal line. Suppose by contradiction that there are two nodal lines $L_1$ and $L_2$. Because neither $L_1$ nor $L_2$ is closed, the arcs $L_1$ and $L_2$ end at the boundary points where $v(\theta)$ vanishes, these two boundary points being the end-points of each $L_i$. Since $\Omega$ is simply connected, the two arcs $L_1$ and $L_2$ enclose at least one subdomain $\Omega'\subset\Omega$ where $v(\theta)$ vanishes along $\partial\Omega'$. This is impossible by the maximum principle.
\item As a conclusion, the nodal set $\mathcal{N}_\theta$ is formed by exactly one arc. We now give an orientation to the arc $\mathcal{N}_\theta$ for all $\theta$: the chosen orientation in $\mathcal{N}_\theta$ is the one in which we first pass through $P_1$ and then through $P_2$. With respect to this orientation, we are ordering the two boundary points where $v(\theta)$ vanishes. More precisely, denote by $P(\theta)$ the initial point of $\mathcal{N}_\theta$, which, after passing through $P_1$ and then $P_2$, finishes at the other boundary point, denoted by $Q(\theta)$.
\item Let us consider $\theta$ varying in the interval $[0,\pi]$. We observe that, by the definition of $v(\theta)$ in (\ref{vv}), the functions $v(0)$ and $v(\pi)$ coincide up to sign, that is, $v(0)=-v(\pi)$, and thus the nodal lines $\mathcal{N}_0$ and $\mathcal{N}_\pi$ coincide as sets of points. However, as $\theta$ runs in $[0,\pi]$, the end-points of $\mathcal{N}_0$ interchange their positions when $\theta$ reaches the value $\theta=\pi$, that is, in the nodal line $\mathcal{N}_\pi$. Consequently, and according to the chosen orientation in $\mathcal{N}_\theta$, $P(0)=Q(\pi)$ and $P(\pi)=Q(0)$. Because all the arcs $\mathcal{N}_\theta$ pass first through $P_1$ and then through $P_2$, this interchange of the end-points between $\mathcal{N}_0$ and $\mathcal{N}_\pi$ implies the existence of another nodal line for $v(\pi)$. This is impossible by item 4: this contradiction completes the proof of Theorem \ref{t1}.
\end{enumerate}
We extend Theorem \ref{t1} in the limit case $u=0$ along $\partial\Omega$.
\begin{corollary} Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain. Let $H$ be a real number with $H<1$. If $u$ is a solution of (\ref{eq1}) and $u=0$ along $\partial\Omega$, then $u$ has a unique critical point.
\end{corollary}
\begin{proof}
We consider positive values $a$ sufficiently close to $0$ so that the set $\Omega_a=\{x\in\Omega: u(x)> a\}$ is strictly convex. Then Theorem \ref{t1} asserts the existence of a unique critical point, which must coincide for all $a$ because $\Omega_{a'}\subset\Omega_a$ if $a<a'$. The argument finishes by letting $a\rightarrow 0$.
\end{proof}
\section{Proof of Theorems \ref{t-du} and \ref{t-ex}}\label{sec3}
In this section we apply the theory of the maximum principle developed by Payne and Philippin in \cite{pp} for some $\Phi$-functions associated to equation (\ref{eq1}). We introduce the notation employed there. Consider an equation of type
\begin{equation}\label{ep}
\mbox{div}(g(q^2)Du)+\rho(q^2)f(u)=0,
\end{equation}
where $\rho, g>0$, $g$ is a $C^2$ function of its argument and $\rho$ and $f$ are $C^1$ functions. Here $q=|Du|$. We also assume that (\ref{ep}) satisfies the elliptic condition $g(\xi)+2\xi g'(\xi)>0$ for all $\xi>0$. We define the $\Phi$-function
$$\Phi(x;\alpha)=\int_{c_1}^{q^2}\frac{g(\xi)+2\xi g'(\xi)}{\rho(\xi)}\, d\xi+\alpha\int_{c_2}^u f(\eta)\,d\eta,$$
where $\alpha$ is a real parameter and $c_1,c_2\in\mathbb{R}$. Here the functions $g$ and $\rho$ are evaluated at $q^2$.
We now prove Theorem \ref{t-du}.
\begin{proof}[Proof of Theorem \ref{t-du}]
For equation (\ref{eq1}), we take $c_1=0$, $c_2=1$ and the functions $g$, $\rho$ and $f$ are
\begin{equation}\label{cho}
g(\xi)=\frac{1}{\sqrt{1+\xi}},\ \rho(\xi)=\frac{1}{\sqrt{1+\xi}}-H,\ f(u)=\frac{2}{u}\cdot
\end{equation}
Following the theory of Payne and Philippin, we require that $\rho$ be positive, which is clear if $H\leq 0$. On the other hand, in the range $0<H<1$, the evaluation of $\rho$ at $q^2$ is non-negative by Lemma \ref{l2}. A straightforward computation of $\Phi(x;\alpha)$ gives
$$\Phi(x;\alpha)=\log\left(\frac{(1+q^2)}{(1-H\sqrt{1+q^2})^2} u^{2\alpha}\right),\quad x\in\Omega.$$
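For the reader's convenience, the computation behind this expression can be sketched as follows (our addition; the additive constant $\log(1-H)^2$ is irrelevant for the maximum principle):

```latex
% With g(\xi)=(1+\xi)^{-1/2} and \rho(\xi)=(1+\xi)^{-1/2}-H one computes
g(\xi)+2\xi g'(\xi)=\frac{1}{\sqrt{1+\xi}}-\frac{\xi}{(1+\xi)^{3/2}}
 =\frac{1}{(1+\xi)^{3/2}},\qquad
\frac{g(\xi)+2\xi g'(\xi)}{\rho(\xi)}
 =\frac{1}{(1+\xi)\,(1-H\sqrt{1+\xi})}.
% The substitution s=\sqrt{1+\xi}, d\xi=2s\,ds, and partial fractions give
\int_0^{q^2}\frac{g+2\xi g'}{\rho}\,d\xi
 =\int_1^{\sqrt{1+q^2}}\frac{2\,ds}{s(1-Hs)}
 =\log\frac{1+q^2}{(1-H\sqrt{1+q^2})^2}+\log(1-H)^2,
% while \alpha\int_1^u f(\eta)\,d\eta=2\alpha\log u=\log u^{2\alpha}.
```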
When $\Omega$ is strictly convex, it is proved in \cite[Corollary 1]{pp} that $\Phi(x;2)$ attains its maximum at a critical point of $u$. By Theorem \ref{t1}, we know that the function $u$ has exactly one critical point, which we denote by $\mathbf{O}$, and we let $u_M=u(\mathbf{O})$, which coincides with the maximum value of $u$ in $\Omega$. Then we find
$$\frac{1+|Du|^2}{(1-H\sqrt{1+|Du|^2})^2}u^4 \leq \frac{1}{(1-H)^2}u_M^4,$$
that is,
\begin{equation}\label{du1}
\frac{1+|Du|^2}{(1-H\sqrt{1+|Du|^2})^2} \leq \frac{1}{(1-H)^2}\left(\frac{u_M}{u}\right)^{4}\leq \frac{1}{(1-H)^2}\left(\frac{u_M}{a}\right)^{4}.
\end{equation}
Recall the value $C=u_M^2/((1-H)a^2)$. It follows from (\ref{du1}) that
$$(1+HC)\sqrt{1+|Du|^2}\leq C.$$
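Spelled out, the passage from (\ref{du1}) to this inequality is the following elementary manipulation (our addition; taking square roots is legitimate since $1-H\sqrt{1+|Du|^2}>0$, trivially for $H\leq 0$ and by Lemma \ref{l2} for $0<H<1$):

```latex
% From (\ref{du1}), taking square roots:
\frac{\sqrt{1+|Du|^2}}{1-H\sqrt{1+|Du|^2}}
 \leq \frac{1}{1-H}\Big(\frac{u_M}{a}\Big)^{2}=C,
% hence
\sqrt{1+|Du|^2}\leq C\big(1-H\sqrt{1+|Du|^2}\big)=C-CH\sqrt{1+|Du|^2},
% and moving the last term to the left-hand side gives
(1+HC)\sqrt{1+|Du|^2}\leq C.
```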
The inequality (\ref{duu}) is shown provided $1+HC>0$. This inequality is immediate if $0\leq H<1$. In case $H<0$, the inequality $1+HC>0$ is equivalent to (\ref{um}).
\end{proof}
We continue by focusing on Theorem 4 of \cite{pp}. The inequality (2.39) in \cite{pp} can be written for our functions defined in (\ref{cho}) as
\begin{equation}\label{ff1}
\left(\delta_{ij}-\frac{u_iu_j}{1+|Du|^2}\right)\Phi_{ij}+{W}_i\Phi_i\geq \frac{2 (\alpha -1) \left(2 H \sqrt{1+q^2}+(\alpha -2) q^2-2\right)}{u^2 (1+q^2)},
\end{equation}
where ${W}_i$ is a vector function uniformly bounded in $\Omega$. In order to apply the First Hopf Maximum Principle, we require that the right-hand side in (\ref{ff1}) be non-negative. If $\alpha\in [0,1]$, it suffices that the expression in the second parentheses in (\ref{ff1}) is non-positive. This is clear if $H\leq 0$, independently of whether $\Omega$ is convex or not. If $0<H<1$ and $\Omega$ is convex, we deduce from (\ref{esh}) that
$$2 H \sqrt{1+q^2}+(\alpha -2)q^2-2\leq (\alpha-2)q^2\leq 0.$$
Following \cite{pp}, we deduce that $\Phi(x;\alpha)$ attains its maximum at some boundary point for all $\alpha\in [0,1]$.
In the particular case $\alpha=0$, we deduce the following result.
\begin{corollary}\label{t2} Let $\Omega\subset\mathbb{R}^2$ be a bounded domain and let $H\leq 0$. If $u$ is a solution of (\ref{eq1}),
$$\max_{\overline{\Omega}}|Du|=\max_{\partial\Omega}|Du|.$$
The same holds when $0<H<1$ if, in addition, $\Omega$ is strictly convex, and $u=a>0$ on $\partial\Omega$.
\end{corollary}
\begin{proof} If we take $\alpha=0$ in (\ref{ff1}), then there exists a boundary point $P\in\partial\Omega$ such that
$$\frac{1+|Du|^2}{(1-H\sqrt{1+|Du|^2})^2}\leq \frac{1+q_M^2}{(1-H\sqrt{1+q_M^2})^2},$$
where $q_M=|Du|(P)$. It follows directly that $|Du|\leq q_M$, proving the result.
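The final step uses that the function $s\mapsto s^2/(1-Hs)^2$ is increasing for $s\geq 1$ as long as $1-Hs>0$ (automatic for $H\leq 0$, and guaranteed by Lemma \ref{l2} for $0<H<1$); a one-line computation, added here for the reader's convenience, shows this:

```latex
\frac{d}{ds}\,\frac{s^2}{(1-Hs)^2}
 =\frac{2s(1-Hs)^2+2Hs^2(1-Hs)}{(1-Hs)^4}
 =\frac{2s}{(1-Hs)^3}>0,
% so s\mapsto s^2/(1-Hs)^2 is increasing in s=\sqrt{1+|Du|^2}\geq 1,
% and the displayed bound forces |Du|\leq q_M.
```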
\end{proof}
From Theorem \ref{t-du} and Corollary \ref{t2}, we prove the existence result of Theorem \ref{t-ex}.
\begin{proof}[Proof of Theorem \ref{t-ex}]
The uniqueness of solutions is a consequence of the fact that the right-hand side of (\ref{eq1}) is non-increasing in $u$ by Lemma \ref{l2} (\cite[Th. 10.2]{gt}).
For the existence of a solution $u$ of (\ref{eq1})-(\ref{eq2}), we apply a modified version of the continuity method to the family of Dirichlet problems parametrized by $\tau\in [0,1]$
$$\mathcal{P}_\tau: \left\{\begin{array}{cll}
Q_\tau[u]&=&0 \mbox{ in $\Omega$}\\
u&=&a \mbox{ on $\partial\Omega,$}
\end{array}\right.$$
where
$$Q_\tau[u]= \mbox{div}\left(\dfrac{Du}{\sqrt{1+|Du|^2}}\right)+\frac{2}{u}\left(\frac{1}{\sqrt{1+|Du|^2}}- \tau H\right).$$
See \cite[Th. 11.4]{gt}. The graph $\Sigma_{u_\tau}$ of a solution $u_\tau$ of $\mathcal{P}_\tau$ is a graph on $P_a$ with constant mean curvature $\tau H$ and boundary $\partial\Omega$. Let us observe that for the value $\tau=0$, there is a solution of $\mathcal{P}_0$ because $\partial\Omega$ is convex (\cite{lm,ns}). As usual, let $\mathcal{A}$ be the subset of $[0,1]$ consisting of all $\tau$ for which the Dirichlet problem $\mathcal{P}_\tau$ has a $C^{2,\alpha}$ solution. The proof consists in showing that $1\in \mathcal{A}$; standard PDE regularity results then guarantee that any solution of $Q_\tau[u]=0$ is smooth in $\Omega$.
First observe that $\mathcal{A}\not=\emptyset$ because $0\in\mathcal{A}$. On the other hand, the set $\mathcal{A}$ is open in $[0,1]$ because
$$\frac{\partial Q_\tau[u]}{\partial u}=-\frac{2}{u^2}\left(\frac{1}{\sqrt{1+|Du|^2}}- \tau H\right)\leq 0,$$
since $H<0$.
Finally, the main difficulty lies in proving that $\mathcal{A}$ is closed. This follows if we are able to derive a priori $C^0$ and $C^1$ estimates of $u_\tau$ for every $\tau\in [0,1]$, depending only on the initial data. In other words, we have to find a constant $M$, depending only on $H$, $a$ and $\Omega$, such that if $u_\tau$ is a solution of $\mathcal{P}_\tau$, then
\begin{equation}\label{sup}
\sup_\Omega |u_\tau|+\sup_\Omega|Du_\tau|\leq M.
\end{equation}
See \cite[Th. 13.8]{gt}. However, by using Theorem \ref{t-du}, it suffices to find an upper bound for $\sup_\Omega |u_\tau|$. We now adopt a geometric viewpoint of the solutions of $\mathcal{P}_\tau$.
Fix $H\in\mathbb{R}$. After a dilation from the origin of $\mathbb{R}^3_+$, which is an isometry of $\mathbb{H}^3$, we may suppose $a=1$. Then the diameter of $\partial\Omega$ coincides with the Euclidean one. Let $C_R\subset P_1$ be the circumscribed circle of $\partial\Omega$, which has radius $R$. After a horizontal translation in $\mathbb{R}^3_+$, if necessary, we may suppose that the centre of $C_R$ is $(0,0,1)$, and we denote by $D_R\subset P_1$ the disc bounded by $C_R$, which contains $\Omega$ in its interior. We know that $\Sigma_u$ lies above the plane $x_3=1$. On the disc $D_R$, we are going to place an umbilical surface $\Sigma_w$ with the same mean curvature $H$ which is a graph on $D_R$. Indeed, from the Euclidean viewpoint, $\Sigma_w$ is a spherical cap which is the graph of a function $w$ on the disc $D_R$. Then we prove that $\Sigma_u$ lies in the interior of the domain determined by $\Sigma_w$ and the plane $P_1$, or in other words, that $u<w$ in $\Omega$. This will be proved by performing dilations $p\rightarrow tp$, $p\in\mathbb{R}^3_+$, from the origin $O$ of $\mathbb{R}^3$. Once this is done, we have $u_M<w_M$, where $u_M$ and $w_M$ are the global maxima of $u$ and $w$ respectively. But now we notice that $w_M$ depends only on the initial data, that is, on $\Omega$, $a$ and $H$.
The first step is to show the existence of the surface $\Sigma_w$. Consider (part of) the Euclidean sphere in $\mathbb{R}^3_+$ of radius $m>0$ given by
$$w(r)=c_0+\sqrt{m^2-r^2},\quad 0\leq r\leq R,$$
where
\begin{equation}\label{eqmh}
c_0=-mH,\quad m^2=(1-c_0)^2+R^2,
\end{equation}
$0<c_0<1$ and $w(R)=1$. The mean curvature of $\Sigma_w$ is $H$ with respect to the upwards orientation. If we view $c_0$ as a parameter varying from $0$ to $1$, the value of the mean curvature of $\Sigma_w$ goes from $0$ to $-1/R$. It is not difficult to see that the right-hand side of (\ref{hhh}) is less than $1/H^2$. Thus $R^2<1/H^2$, that is, $-1/R<H$. In conclusion, given $H$ under the hypothesis of Theorem \ref{t-ex}, the existence of $\Sigma_w$ is assured.
We now carry out the comparison argument between $\Sigma_u$ and $\Sigma_w$ by dilations. Consider dilations of $\Sigma_w$ with respect to the origin $O$ of $\mathbb{R}^3_+$, namely $t\Sigma_w$ with $t>1$, and take $t$ sufficiently large so that $t\Sigma_w$ does not meet $\Sigma_u$. Then let $t\searrow 1$ until the first touching point between $t\Sigma_w$ and $\Sigma_u$. Since an interior touching point is not possible, because both surfaces have the same (constant) mean curvature, the first touching point occurs at $t=1$, that is, when $\Sigma_w$ comes back to its initial position and $\Sigma_w$ touches $\Sigma_u$ only at some boundary point of $\Omega$. In particular, $\Sigma_u$ is contained inside the domain determined by $\Sigma_w$ and the plane $x_3=1$. Therefore, we deduce that the global maximum $u_M$ is less than the height of the highest point of $\Sigma_w$, namely $w_M=c_0+m=m(1-H)$, and
$$u_M<m(1-H).$$
The above argument has been carried out for the value $H$, but it also holds for $\tau H$, $\tau\in [0,1]$: indeed, we simply replace $H$ by $\tau H$. We now prove the $C^0$ estimates for the problems $\mathcal{P}_\tau$. Fix $-1\leq H<0$ and let $u_\tau$ be the solution of $\mathcal{P}_\tau$, $\tau\in [0,1]$. Let us observe that the mean curvature of $\Sigma_{u_\tau}$ is $\tau H$ and $\tau H>H$ for $\tau\in [0,1)$. Then the same process of dilations, together with the comparison principle, proves that for each $\tau \in [0,1]$ we find
\begin{equation}\label{m1}
u_\tau< w_M=m (1-H).
\end{equation}
In order to use Theorem \ref{t-du}, and because $H<0$ and $a=1$, it suffices to prove
\begin{equation}\label{m1h}
m(1-H)<\sqrt{\frac{H-1}{H}},
\end{equation}
that is,
\begin{equation}\label{inedsi}
m<\frac{1}{\sqrt{H^2-H}}\cdot
\end{equation}
However, from (\ref{eqmh}), we deduce $m^2=(1+mH)^2+R^2$, that is, $(1-H^2)m^2-2Hm-(1+R^2)=0$, which leads to
$$m=\frac{H+\sqrt{1+(1-H^2)R^2}}{1-H^2}\cdot$$
By using (\ref{hhh}), we conclude the desired inequality (\ref{inedsi}). Once we have obtained (\ref{m1h}), Theorem \ref{t-du} applies, yielding an a priori estimate for $|Du_\tau|$. Together with (\ref{m1}), this proves the existence of $M$ in (\ref{sup}). This completes the proof of Theorem \ref{t-ex}.
\end{proof}
\begin{remark} We compare this result with Theorem 1.1 in \cite{lm}. In \cite{lm}, the hypothesis requires that $\Omega$ be strongly convex in terms of the boundary data $H$, namely $\kappa>|H|$. In Theorem \ref{t-ex}, by contrast, we need the domain $\Omega$ to be strictly convex, but it may contain regions where the curvature $\kappa$ of $\partial\Omega$ is close to $0$; on the other hand, the size of the domain must be small in relation to the value of $H$.
\end{remark}
\section{A lower estimate of the critical point }\label{sec4}
In this section, for $H<1$, we prove an estimate from below of the global maximum of a solution of (\ref{eq1})-(\ref{eq2}).
\begin{theorem}\label{t3} Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain with curvature $\kappa>0$. If $H<1$ and $u$ is a solution of (\ref{eq1})-(\ref{eq2}), then
\begin{equation}\label{um2}
u_M\geq \frac{1-H}{\kappa_0},
\end{equation}
where $\kappa_0=\max_{\partial\Omega}\kappa$.
\end{theorem}
Firstly, we need to prove a minimum principle for the function $\Phi(x;1)$. The next result is inspired by a similar one for the torsional creep problem (\cite{ph}).
\begin{proposition}\label{t-min}
Let $\Omega\subset\mathbb{R}^2$ be a bounded strictly convex domain. Let $H$ be a real number with $H<1$. If $u$ is a non-radial solution of (\ref{eq1})-(\ref{eq2}), then the function $\Phi(x;\alpha)$ attains its minimum value on $\partial\Omega$ for any $\alpha\in [1,2]$.
\end{proposition}
\begin{proof} Following \cite[inequality (2.15)]{pp}, if $u$ is a solution of (\ref{eq1})-(\ref{eq2}), then $\Phi(x;\alpha)$ satisfies the following elliptic differential equation
\begin{equation}\label{pp2}
\left(\delta_{ij}-\frac{u_iu_j}{1+|Du|^2}\right)\Phi_{ij}+\tilde{W}_i\Phi_i=\frac{2 (\alpha -2) (\alpha -1) \left(2(1- H \sqrt{q^2+1})+q^2\right)}{\left(q^2+1\right) u^2},
\end{equation}
where $\tilde{W}_i$ is a vector function which is singular at the critical point of $u$. It is not difficult to see that if $\alpha\in [1,2]$, the right-hand side of (\ref{pp2}) is non-positive because $(\alpha-2)(\alpha-1)\leq 0 $ and the expression in parentheses $2(1-H\sqrt{1+q^2})+q^2$ is always non-negative: this is immediate for $H\leq 0$ and if $0<H<1$, we use Lemma \ref{l2}.
By the Hopf Maximum Principle, and since the vector functions $\tilde{W}_i$ are singular at the critical points of $u$, we conclude that $\Phi(x;\alpha)$ attains its minimum at the unique critical point of $u$ or at a boundary point. Recall that by Theorem \ref{t1}, the function $u$ has exactly one critical point $\mathbf{O}$.
The proof of Proposition \ref{t-min} finishes if we discard the case that the minimum occurs at the critical point. The proof now proceeds in the following steps.
\begin{enumerate}
\item The function $\Phi(x;\alpha)$ is not constant in $\Omega$. The proof is by contradiction. If $\Phi$ is constant, then the left-hand side of (\ref{pp2}) is $0$. Looking at the right-hand side of (\ref{pp2}), the only possibility for it to vanish is that $\alpha$ is $1$ or $2$. We prove that this is not possible. We consider the case $\alpha=1$ because the argument for $\alpha=2$ is similar. By the expression of $\Phi(x;1)$, we find that
$$\frac{1+|Du|^2}{(1-H\sqrt{1+|Du|^2})^2}u^{2}$$
is constant; in particular, $|Du|$ is constant along $\partial\Omega$. Since $u=a$ along $\partial\Omega$, then $\partial u/\partial\mathbf{n}$ is constant along $\partial\Omega$. Then $u$ is a solution of the Dirichlet problem (\ref{eq1})-(\ref{eq2}) together with the Neumann condition $\partial u/\partial\mathbf{n}=\mbox{const}$ along $\partial\Omega$. A result of Serrin establishes that $\Omega$ is then a round disc and $u$ is a radial function $u=u(r)$ (\cite{se}). This contradicts the hypothesis that $u$ is non-radial.
\item After a change of coordinates, suppose that the critical point is $\mathbf{O}=(0,0)$. Then we deduce $u_1(\mathbf{O})=u_2(\mathbf{O})=0$. A new change of coordinates allows us to assume $u_{12}(\mathbf{O})=0$. Since $\mathbf{O}$ is a maximum of $u$, we have $u_{11}(\mathbf{O})\leq 0$ and $u_{22}(\mathbf{O})\leq 0$.
{\it Claim:} $u_{11}(\mathbf{O})< 0$ and $u_{22}(\mathbf{O})< 0$.
The proof is by contradiction: suppose that $u_{11}(\mathbf{O})=0$ (the argument is the same if $u_{22}(\mathbf{O})=0$). Here we follow the same notation as in the proof of Theorem \ref{t1}. If the function $v^1=u_1$ were constant in $\Omega$, then $u$ would depend only on the variable $x_2$ and the boundary condition (\ref{eq2}) would be impossible. Thus $v^1$ is a non-constant analytic function. Since $v^1$ vanishes at $\mathbf{O}$, as well as $v^1_1$ and $v^1_2$, the function $v^1$ vanishes at $\mathbf{O}$ with a finite order $m\geq 2$. Thus there exist at least two nodal lines of $v^1$ which form an equiangular system in a neighbourhood of $\mathbf{O}$. We have proved in Theorem \ref{t1} that this is impossible because there exists exactly one nodal line of $v^1$.
\item Finally we prove that $\Phi(x;\alpha)$ cannot attain its minimum at $\mathbf{O}$. We know $u_1(\mathbf{O})=u_2(\mathbf{O})=u_{12}(\mathbf{O})=0$. We need the first and second partial derivatives of $\Phi$ at $\mathbf{O}\in\Omega$. Following the notation employed in \cite[p. 197]{pp}, at the critical point $\mathbf{O}$ we have
$$ \Phi_i(\mathbf{O};\alpha)=0,\quad \Phi_{ij}(\mathbf{O};\alpha)=2\frac{g+2q^2g'}{\rho}u_{ik}u_{jk}+\alpha fu_{ij}.
$$
Hence, and from (\ref{eq1}),
\begin{eqnarray*}
\Phi_{11}(\mathbf{O};\alpha)&=&\frac{2}{1-H}u_{11}(\mathbf{O})^2+\frac{2\alpha}{u(\mathbf{O})}u_{11}(\mathbf{O})\\
\Phi_{12}(\mathbf{O};\alpha)&=&0\\
\Phi_{22}(\mathbf{O};\alpha)&=&\frac{2}{1-H}u_{22}(\mathbf{O})^2+\frac{2\alpha}{u(\mathbf{O})}u_{22}(\mathbf{O}).
\end{eqnarray*}
Because $\mathbf{O}$ is a minimum of $\Phi(x;\alpha)$, we find that $\Phi_{11}(\mathbf{O};\alpha)\geq 0$ and $\Phi_{22}(\mathbf{O};\alpha)\geq 0$. Since $u_{11}(\mathbf{O}), u_{22}(\mathbf{O})<0$ by the previous item, and $1-H>0$,
$$\frac{2}{1-H}u_{11}(\mathbf{O})+\frac{2\alpha}{u(\mathbf{O})}\leq 0$$
$$ \frac{2}{1-H}u_{22}(\mathbf{O})+\frac{2\alpha}{u(\mathbf{O})}\leq 0.$$
Then
\begin{equation}\label{delta1}
u_{11}(\mathbf{O})+u_{22}(\mathbf{O})=\Delta u(\mathbf{O})\leq -\frac{2\alpha(1-H)}{u(\mathbf{O})}\cdot
\end{equation}
Finally, equation (\ref{eq1}) at $\mathbf{O}$ yields
\begin{equation}\label{delta2}
\Delta u(\mathbf{O})=\frac{-2(1-H)}{u(\mathbf{O})}\cdot
\end{equation}
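This follows because $Du(\mathbf{O})=0$, so the divergence term in (\ref{eq1}) reduces to the Laplacian at $\mathbf{O}$ (a short verification, added for the reader's convenience):

```latex
\partial_i\!\left(\frac{u_i}{\sqrt{1+|Du|^2}}\right)\Big|_{\mathbf{O}}
=\frac{u_{ii}}{\sqrt{1+|Du|^2}}\Big|_{\mathbf{O}}
-\frac{u_i u_k u_{ki}}{(1+|Du|^2)^{3/2}}\Big|_{\mathbf{O}}
=\Delta u(\mathbf{O}),
% since u_1(\mathbf{O})=u_2(\mathbf{O})=0; hence (\ref{eq1}) at \mathbf{O}
% reads \Delta u(\mathbf{O})+\frac{2}{u(\mathbf{O})}(1-H)=0.
```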
Comparing (\ref{delta1}) and (\ref{delta2}), we conclude that $\alpha\leq 1$. Thus if $\alpha\in (1,2]$, we arrive at a contradiction and the result is proved for these values of $\alpha$.
We now analyse the case $\alpha=1$. Denote by $\mathbf{O}_\alpha$ the minimum point of $\Phi(x;\alpha)$. We have proved that $\mathbf{O}_\alpha$ lies in $\partial\Omega$ for all $\alpha\in (1,2]$. By continuity, the point $\mathbf{O}_1$ must be a boundary point, since otherwise $\Phi(x;\alpha)$ would be constant for some parameter $\alpha\in (1,2]$. This proves the result for $\alpha=1$, and the proof of Proposition \ref{t-min} is completed.
\end{enumerate}
\end{proof}
\begin{remark} If $u$ is a radial solution, then $u$ can be expressed as
$$u(r)=-Hm+\sqrt{m^2-r^2},\quad 0\leq r\leq R.$$
It is not difficult to see that, writing $u'=u'(r)$, the functional
$$\Phi(x;\alpha)=\frac{1+u'^2}{(1-H\sqrt{1+u'^2})^2}u^{2\alpha}$$
is constant only for the parameter value $\alpha=1$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{t3}]
First suppose that $u$ is not a radial solution. By the proof of Proposition \ref{t-min}, we know that $\Phi(x;1)$ attains its minimum at some point $Q\in\partial\Omega$. Then if $q_M=|Du|(Q)$, we have
$$\frac{1+|Du|^2}{(1-H\sqrt{1+|Du|^2})^2}u^2\geq \frac{1+q_M^2}{(1-H\sqrt{1+q_M^2})^2}a^2.$$
We evaluate this inequality at the only critical point $\mathbf{O}$, obtaining
\begin{equation}\label{uo}
\left(\frac{u_M}{a(1-H)}\right)^{2}\geq \frac{1+q_M^2}{(1-H\sqrt{1+q_M^2})^2}\cdot
\end{equation}
On the other hand, $\partial\Phi(Q;1)/\partial\mathbf{n}\leq 0$ because $Q$ is the minimum of $\Phi(x;1)$. If $u_n$ and $u_{nn}$ denote the first and second outward normal derivatives of $u$ along $\partial\Omega$, then by the expression of $\Phi_i$ (see \cite[p. 197]{pp}), we deduce
\begin{equation}\label{uo2}
\frac{u_nu_{nn}}{(1+u_n^2)(1-H\sqrt{1+u_n^2})}+\frac{ u_n}{u}\leq 0\ \mbox{at $Q$}\cdot
\end{equation}
In normal coordinates, and taking into account that $u$ is constant along $\partial\Omega$, equation (\ref{eq1}) along $\partial\Omega$ becomes
$$\frac{u_{nn}}{(1+u_n^2)^{3/2}}+\frac{\kappa u_n}{\sqrt{1+u_n^2}}=\frac{-2}{u}\left(\frac{1}{\sqrt{1+u_n^2}}-H\right).$$
Combining this equation at $Q$ with (\ref{uo2}) and using that $u_n\leq 0$,
$$\frac{-1}{a}\geq \frac{\kappa(Q)u_n(Q)}{1-H\sqrt{1+u_n^2(Q)}}\cdot$$
Hence, and as $\kappa(Q)\leq\kappa_0$,
$$\frac{1}{a^2\kappa_0^2}\leq \frac{u_n^2(Q)}{(1-H\sqrt{1+u_n^2(Q)})^2}\cdot$$
As $|Du|^2=u_n^2$ at $Q$, we obtain from (\ref{uo})
$$\left(\frac{u_M}{a(1-H)}\right)^{2}\geq \frac{u_n^2(Q)}{(1-H\sqrt{1+u_n^2(Q)})^2}\geq \frac{1}{a^2\kappa_0^2},$$
proving the result.
Suppose now that $u$ is a radial solution. Then $u(r)=c_0+\sqrt{m^2-r^2}$, where $m>0$, $c_0=-Hm$ and $0\leq r\leq R$. Since $m>R$,
$$u_M=u(0)=(1-H) m>(1-H)R=\frac{1-H}{\kappa_0}\cdot$$
This proves the inequality (\ref{um2}) and completes the proof of Theorem \ref{t3}.
\end{proof}
\section*{Acknowledgements} The author has been partially
supported by MEC-FEDER
grant no. MTM2017-89677-P.
\label{intr}
Almost all HgMn stars exhibit a strong absorption feature at
3984~\AA, which has been identified as a line of \hbox{{\rm Hg}~{$\scriptstyle {\rm II}$}}.
The wavelength at
which this line is observed depends on the isotopic mix of Hg
(White et al. 1976), which
ranges from the terrestrial mix to nearly pure $^{204}\!$Hg.
Mercury is not the only very heavy element observed in HgMn stars.
Lines of \hbox{{\rm Pt}~{$\scriptstyle {\rm II}$}}\ (Dworetsky \& Vaughan 1973) and \hbox{{\rm Au}~{$\scriptstyle {\rm II}$}}\ (Guthrie 1985) are
also observed.
Dworetsky \& Vaughan (1973) studied the \hbox{{\rm Pt}~{$\scriptstyle {\rm II}$}}\ $\lambda\,4046$ line in a
sample of nine HgMn stars. This line is the strongest Pt line at optical
wavelengths, and in the nine stars studied it is shifted toward
longer wavelengths by 0.04 to 0.09~\AA, with respect to the centroid of
the terrestrial platinum line. These shifts are interpreted as an
isotopic effect. The corresponding anomalies are
analogous to those found for Hg, in the sense that the
heavier isotopes tend to dominate in cooler stars.
Neither radiatively driven diffusion nor any other theory has so far been
able to account satisfactorily for the variations in the Hg and Pt isotope
mix among the HgMn stars (Leckrone et al. 1993).
The main purpose of the work reported here was to provide additional
observational constraints to guide the theorists in the understanding
of the isotopic anomalies in HgMn stars, improving upon previous studies
through the much better data quality obtainable now.
Thanks to the availability of new laboratory
measurements of isotope shifts in \hbox{{\rm Pt}~{$\scriptstyle {\rm II}$}}\ (Engleman 1989) it became possible
to identify more definitely the \hbox{{\rm Pt}~{$\scriptstyle {\rm II}$}}\ isotopes.
\begin{table}[t]
\small
\begin{center}
\caption{Isotopic compositions}
\label{t1}
\begin{tabular}{lrrrrrr}
\hline\hline
Star & {Terrestrial} & {$\chi$~Lup} & {HR~7775} & {HR~1800} &
{74 Aqr} & {HR 6520}\\
&{abundance}\\
\hline\hline
$T_{\rm eff}$ (K) &&10680 &10830 &11070 &11880 &13250\\
$\log g$ &&3.99 &4.11 &3.75 &4.03 &4.17\\
\hline\hline
{[Hg]} && +5.45 & +5.65 & +5.25 & & +5.10\\
\hline
&\multicolumn{6}{c}{Hg isotopic structure (\%)}\\
\hline
196 & 0.15 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
198 & 9.97 & 0.00 & 0.00 & 0.16 & 7.02 & 4.10 \\
199a & 7.14 & 0.00 & 0.00 & 0.32 & 6.19 & 10.99 \\
199b & 9.71 & 0.00 & 0.00 & 0.44 & 9.29 & 14.00 \\
200 & 23.09 & 0.00 & 0.00 & 2.97 & 26.79 & 28.97 \\
201a & 4.80 & 0.00 & 0.10 & 1.77 & 6.01 & 8.61 \\
201b & 8.30 & 0.00 & 0.20 & 3.06 & 9.10 & 8.93 \\
202 & 29.80 & 1.00 & 49.70 & 50.58 & 26.79 & 22.49 \\
204 & 6.86 & 99.00 & 49.70 & 40.74 & 8.81 & 1.80\\
\hline\hline
{[Pt]} && +4.00 & +4.69 & +3.30\\
\hline
&\multicolumn{4}{c}{Pt isotopic structure (\%)}\\
\hline
194 &32.90& 0.00& 0.00& 0.00 \\
195b&18.78& 0.00& 7.50& 0.00 \\
195a&13.15& 0.00&10.00& 0.00 \\
196 &25.20&10.00&55.00& 0.00 \\
198 & 7.19&90.00&27.50&100.00 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Observations and spectrum synthesis}
Spectra were obtained with the ESO 1.4~m Coud\'e Auxiliary Telescope and
the Coud\'e Echelle Spectrograph Long Camera at a resolving power
R = $\lambda/\Delta
\lambda=118\,000$ and ${\rm S/N}\geq250$. The observed wavelength ranges
are 3965--4000~\AA\ and 4018--4035~\AA.
Synthetic spectra and model atmospheres were computed with the SYNTHE
and ATLAS9 codes (Kurucz 1997), respectively.
A code similar to the
TEFFLOGG code of Moon \& Dworetsky (1985), but based on newly
computed uvby$\beta$ indices, was used to obtain the stellar parameters
(Castelli \&\ Kurucz 1994).
Observed indices were taken from the Mermilliod, Mermilliod,
\& Hauck catalogue (1997) and were dereddened using the
UVBYBETA code of Moon (1985).
For all the stars we assumed zero microturbulent velocity, while the
rotational velocity was derived from the comparison of the observed and
computed spectra, after having degraded the computed spectra for the
broadening due to the instrumental profile.
For the whole transition of \hbox{{\rm Hg}~{$\scriptstyle {\rm II}$}}\ 3983~\AA\ we adopted $\log gf=-1.73$
(Dworetsky 1980). For each isotopic and hyperfine component this
value was scaled according to the corresponding observed relative intensity.
For reference, the terrestrial intensities from Kurucz (1993) and
from Smith (1997) were adopted.
For the transitions of \hbox{{\rm Pt}~{$\scriptstyle {\rm II}$}}\ 4023.8, 4034.2, and 4046.4~\AA\ we
adopted log $gf=-2.61, -2.09$, and $-1.19$, respectively
(Dworetsky \& Vaughan 1973). The isotopic and hyperfine
shifts and intensities were either taken directly
from Engleman (1989) or were derived from Engleman (1989)
and Kalus et al. (1997).
One of the programme stars with very sharp lines is the double-lined
spectroscopic binary $\chi$~Lup. An updated version
of the BINARY code of Kurucz (1993, CD-ROM 18) was used to obtain a final
computed spectrum combining the contributions of both components.
The atmospheric parameters of the secondary star and ratio of
the radii of the primary to the secondary stars in $\chi$~Lup were taken
from Wahlgren et al. (1994).
For five programme stars with extremely sharp spectral lines
($v\,\sin i < 3$~km/s)
the isotopic composition could be studied in greater detail.
The results are summarized in Table~1.
All stars show a Hg overabundance of more than 5~dex compared with the solar
abundance. The largest overabundance of Pt (4.69~dex) was found in
the star HR~7775.
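Since the abundances $[\mathrm{Hg}]$ and $[\mathrm{Pt}]$ are quoted in dex (powers of ten relative to the solar value), the corresponding linear enhancement factors follow immediately; the short calculation below (our illustration, with values read off Table~1) makes this explicit.

```python
# Convert logarithmic overabundances [X] (in dex) to linear factors 10**[X].
hg_dex = {"chi Lup": 5.45, "HR 7775": 5.65, "HR 1800": 5.25, "HR 6520": 5.10}
pt_dex = {"chi Lup": 4.00, "HR 7775": 4.69, "HR 1800": 3.30}

hg_factor = {star: 10**d for star, d in hg_dex.items()}
pt_factor = {star: 10**d for star, d in pt_dex.items()}

# e.g. [Hg] = +5.65 for HR 7775 means Hg is several hundred thousand
# times more abundant than in the Sun.
for star, f in hg_factor.items():
    print(f"{star}: Hg enhanced by a factor {f:.2e}")
```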
No star shows terrestrial isotopic proportions. The most pronounced
deviation from the terrestrial composition is found in the stars $\chi$~Lup
and HR~7775, which are the coolest ones in our sample.
The large overabundances of Hg and Pt and the star-to-star variations in their
isotopic composition clearly pose a challenge to
any theory aimed at explaining the origin of chemical peculiarities.
Traces on an algebra are important linear functionals which come up in
various incarnations in various branches of mathematics, e.g. group
characters, norm and trace in field extensions, many trace formulas,
to mention just a few.
On a separable Hilbert space $\cH$ there is a canonical trace (tracial
weight, see Section \plref{s:HST}) $\Tr$ defined on non--negative
operators by
\begin{equation}
\Tr(T):=\sum_{j=0}^\infty \scalar{T e_j}{e_j},
\end{equation}
where $(e_j)_{j\ge 0}$ is an orthonormal basis.
This is the unique semifinite normal trace on the algebra $\cB(\cH)$ of
bounded operators on $\cH$.
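As a quick finite-dimensional sanity check (our own illustration, not part of the original argument), one can verify numerically that the sum $\sum_j \scalar{T e_j}{e_j}$ is independent of the chosen orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
T = rng.standard_normal((n, n))   # plays the role of a bounded operator on R^n

# Trace in the standard orthonormal basis (e_j):
standard = sum(T[j, j] for j in range(n))

# Trace in a second orthonormal basis: the columns of an orthogonal matrix Q
# obtained from a QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
rotated = sum(Q[:, j] @ (T @ Q[:, j]) for j in range(n))

print(standard, rotated, np.trace(T))
```

The agreement reflects the identity $\mathrm{tr}(Q^{T}TQ)=\mathrm{tr}(T)$ for orthogonal $Q$.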
In the 1930's \textsc{Murray} and \textsc{von Neumann} \cite{MurNeu:ROI}, \cite{MurNeu:ROII},
\cite{Neu:ROIII}, \cite{MurNeu:ROIV} studied traces on weakly closed
$*$--subalgebras (now known as von Neumann algebras) of $\cB(\cH)$.
They showed that on a von Neumann \emph{factor} there is up to a
normalization a unique semifinite normal trace.
\begin{sloppy}
\textsc{Guillemin} \cite{Gui:NPW} and \textsc{Wodzicki}
\cite{Wod:LIS}, \cite{Wod:NRF} discovered independently
that a similar uniqueness statement holds for the algebra
of pseudodifferential operators on a compact manifold. The \emph{residue
trace}, however, has nothing to do with the Hilbert space trace: it vanishes
on trace class operators.
\end{sloppy}
In the 60s \textsc{Dixmier} \cite{Dix:ETN} had already proved that the uniqueness
statement for the Hilbert space trace fails if one gives up the assumption
that the trace is normal.
In the late 80's and early 90's then the Dixmier trace had a
celebrated comeback when \textsc{Alain Connes} \cite{Con:AFN}
proved that in important cases the residue trace coincides with a Dixmier trace.
The aim of this note is to survey some of these results. We will not touch
von Neumann algebras, however, any further.
The paper is organized as follows:
In Section \plref{s:HST} our point of departure is the classical Hilbert space
trace. We give a short proof that it is up to a factor
the unique normal tracial weight on the algebra $\cB(\cH)$
of bounded operators on a separable Hilbert space $\cH$.
Then we reproduce Dixmier's very elegant construction which
shows that non--normal tracial weights are abundant. We do
confine ourselves however to those Dixmier traces which will
later turn out to be related to the residue trace.
Section \plref{s:POP} presents the basic calculus of
pseudodifferential operators with parameter on a closed manifold.
In Section \plref{s:EHS} we pause the discussion of pseudodifferential operators
and look at the problem of extending the Hilbert space trace to
pseudodifferential operators of higher order. A pseudodifferential operator
$A$ of order $<-\dim M$ on a closed manifold $M$ is of trace class
and its trace is given by integrating its Schwartz kernel $k_A(x,y)$ over
the diagonal
\begin{equation}\label{intro-2}
\Tr(A)=\int_M k_A(x,x) dx.
\end{equation}
We will show that the classical Hadamard partie finie regularization of
integrals allows one to extend Eq. \eqref{intro-2} to all pseudodifferential
operators of non--integral order. This is the celebrated Kontsevich-Vishik
canonical trace.
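Formula \eqref{intro-2} can be checked numerically in a toy one-dimensional situation (an illustration of ours, with an invented smooth kernel on $[0,1]$ rather than an operator on a closed manifold): discretizing the integral operator with kernel $k$ as the matrix $A_{ij}=k(x_i,x_j)h$, the matrix trace approximates the integral of the kernel over the diagonal.

```python
import numpy as np

# Smooth kernel on [0,1]x[0,1]; on the diagonal k(x,x) = exp(-2x),
# whose integral over [0,1] equals (1 - e^{-2})/2.
k = lambda x, y: np.exp(-(x + y))

n = 2000
h = 1.0 / n
x = (np.arange(n) + 0.5) * h              # midpoint grid on [0,1]

A = k(x[:, None], x[None, :]) * h         # discretization of (Af)(x)=∫k(x,y)f(y)dy
matrix_trace = np.trace(A)                # ≈ ∫_0^1 k(x,x) dx
diagonal_integral = (1 - np.exp(-2.0)) / 2.0

print(matrix_trace, diagonal_integral)
```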
Section \plref{s:POPAE} on asymptotic analysis
then shows how the parameter dependent pseudodifferential
calculus leads naturally to the asymptotic expansion of the resolvent trace
of an elliptic differential operator. For the resolvent of elliptic pseudodifferential operators a refinement,
due to Grubb and Seeley, of the parametric calculus is necessary. Without going into the details of this refined calculus
we will explain why additional $\log \lambda$ terms appear in the asymptotic expansion of $\Tr(B (P-\gl)^{-N})$ if $B$ or
$P$ are pseudodifferential rather than differential operators. These $\log \lambda$ terms are at the heart of
the noncommutative residue trace.
The straightforward relations between the resolvent expansion, the heat
trace expansion and the meromorphic continuation of the $\zeta$--function, which are
based on the Mellin transform and a contour integral method, respectively, are
also briefly discussed.
In Section \plref{s:RT} we state the main result about the existence and
uniqueness of the residue trace. We present it in a slightly generalized form
due to the author for $\log$--polyhomogeneous pseudodifferential operators.
A formula for the relation between the residue trace of a power of the
Laplacian and the Einstein--Hilbert action due to \textsc{Kalau--Walze} \cite{KalWal:GNC}
and \textsc{Kastler} \cite{Kas:DOG} is proved in an example.
Then we give a proof of Connes' Trace Theorem which states that on
pseudodifferential operators of order minus $\dim M$ on a closed manifold $M$
the residue trace is proportional to the Dixmier trace.
Having seen the significance of the parameter dependent calculus
it is natural to ask whether the algebras of parameter dependent
pseudodifferential operators have an analogue of the residue trace.
Somewhat surprisingly the results for these algebras are quite different:
there are many traces on this algebra; however, there is a unique symbol--valued trace from which many other traces
can be derived. This result closely resembles the
center--valued trace in von Neumann algebra theory.
Furthermore, in contrast to the non--parametric case the
$L^2$--Hilbert space trace extends to a trace on the whole algebra.
This part of the paper surveys results from a joint paper with
\textsc{Markus J. Pflaum} \cite{LesPfl:TAP}.
Finally, in the short Section \plref{s:DFC} we will discuss
the analogue of the regularized traces on the
symbolic level and announce a generalization of a recent result of S.
Paycha concerning the characterization of the Hadamard partie finie
integral and the residue integral in light of the Stokes property.
The result presented here allows one to calculate de Rham cohomology
groups of forms on $\R^n$ whose coefficients lie in a certain symbol space.
We will show that both the Hadamard partie finie integral and the residue
integral provide an integration along the fiber on the cone
$\R_+^*\times M$ and as a consequence there is an analogue of
the Thom isomorphism.
\textsc{Acknowledgments.}
I would like to thank the organizers of the conference on Motives, Quantum
Field Theory and Pseudodifferential Operators for inviting me to contribute these notes.
Also I would like to thank the anonymous referee
for taking his job very seriously and for making very detailed remarks
on how to improve the paper. I think the paper has benefited considerably
from those remarks.
\section{The Hilbert space trace (tracial weight)}
\label{s:HST}
\subsection{Basic definitions}
Let $\cH$ be a separable Hilbert space. Denote by $\cB(\cH)$ the algebra
of bounded operators on $\cH$. Let $\cA$ be a $C^*$--subalgebra, that is,
a norm closed self--adjoint ($a\in\cA\Rightarrow a^*\in\cA$) subalgebra.
It follows that $\cA$ is invariant under continuous functional calculus,
e.g. if $a\in\cA$ is non--negative then $\sqrt{a}\in\cA$.
Denote by $\cA_+\subset \cA$ the set of non--negative elements.
$\cA_+$ is a cone in the following sense:
\begin{enumerate}
\item $T\in\cA_+, \gl\in\R_+ \Rightarrow \gl T\in\cA_+,$
\item $S,T\in\cA_+, \gl,\mu\in\R_+\Rightarrow \gl S+\mu T\in\cA_+$.
\end{enumerate}
A \emph{weight} on $\cA$ is a map
\begin{equation}
\tau: \cA_+\longrightarrow \R_+\cup \{\infty\},\quad \R_+:=[0,\infty),
\end{equation}
such that
\begin{equation}\label{eq:def-weight}
\tau (\gl S+\mu T)= \gl \tau(S)+\mu \tau(T),\quad \gl,\mu\ge 0,\;
S,T\in\cA_+.
\end{equation}
A weight is called \emph{tracial} if
\begin{equation}\label{eq:def-tracial-weight}
\tau(TT^*)=\tau(T^* T), \quad T\in\cA.
\end{equation}
It follows from \eqref{eq:def-tracial-weight} that for a unitary $U\in\cA$ and $T\in\cA_+$
\begin{equation}
\begin{split}
\tau(UTU^*)=
\tau((UT^{1/2})(UT^{1/2})^*)=\tau((UT^{1/2})^*(UT^{1/2}))=\tau(T).
\end{split}
\end{equation}
\eqref{eq:def-weight} implies that $\tau$ is monotone in the sense that if $0\le S\le T$ then
\begin{equation}
\tau(T)=\tau(S)+\tau(T-S)\ge \tau(S).
\end{equation}
\begin{remark}
In the literature tracial weights are often just called traces.
We adopt here the convention of \textsc{Kadison} and \textsc{Ringrose}
\cite[Chap. 8]{KadRin:FTOII}.
We reserve the word trace for a linear functional $\tau:\cR\longrightarrow \C$
on a $\C$--algebra $\cR$ which satisfies $\tau(AB)=\tau(BA)$ for
$A,B\in\cR$.
A priori a tracial weight $\tau$ is only defined on the positive cone
of $\cA$ and it may take the value $\infty$. Below we will see that
there is a natural ideal in $\cA$ on which $\tau$ is a trace.
\end{remark}
\subsubsection{The canonical tracial weight on bounded operators on a
Hilbert space}
Let $(e_j)_{j\in \Z_+}$ be an orthonormal basis of the Hilbert space $\cH$;
$\Z_+:=\{0,1,2,\ldots\}$. For $T\in\cB_+(\cH)$
put
\begin{equation}\label{eq:def-tr}
\Tr(T):=\sum_{j=0}^\infty \scalar{T e_j}{e_j}.
\end{equation}
$\Tr(T)$ is indeed independent of the choice of the orthonormal basis
and it is a tracial weight on $\cB(\cH)$ (\textsc{Pedersen} \cite[Sec.
3.4]{Ped:AN}).
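As a quick numerical illustration of this basis independence (a Python sketch; a random positive $4\times 4$ matrix stands in for an element of $\cB_+(\cH)$, and a QR factorization supplies a second orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small positive operator T = B^* B on C^4, a stand-in for T in B_+(H).
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = B.conj().T @ B

def tr_in_basis(T, basis):
    # Tr(T) = sum_j <T e_j, e_j> over the orthonormal columns e_j of `basis`.
    return sum(np.vdot(e, T @ e) for e in basis.T).real

E = np.eye(4)                                     # standard basis
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a second orthonormal basis

t1, t2 = tr_in_basis(T, E), tr_in_basis(T, Q)
assert abs(t1 - t2) < 1e-10                       # independent of the basis
assert abs(t1 - np.trace(T).real) < 1e-10
```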
\subsubsection{Trace ideals}\label{ss:trace-ideals} We return to the general set--up of
a tracial weight on a $C^*$--subalgebra $\cA\subset\cB(\cH)$.
Put
\begin{equation}\label{eq:trace-ideal-1}
\cL_+^1(\cA,\tau):=\bigsetdef{T\in\cA_+}{ \tau(T)<\infty}
\end{equation}
and denote by $\cL^1(\cA,\tau)$ the linear span of $\cL^1_+(\cA,\tau)$.
Furthermore, let
\begin{equation}\label{eq:trace-ideal-2}
\cL^2(\cA,\tau):=\bigsetdef{T\in\cA}{\tau(T^*T)<\infty}.
\end{equation}
Using the inequality
\begin{equation}\label{eq:20090508-3}
\begin{split}
(S+T)^*(S+T)&\le (S+T)^*(S+T)+(S-T)^*(S-T)\\
&=2 (S^*S+T^*T)
\end{split}
\end{equation}
and the polarization identity
\begin{equation}\label{eq:polarization}
4 T^* S=\sum_{k=0}^3 i^k (S+i^k T)^*(S+ i^k T)
\end{equation}
one proves exactly as for the tracial weight $\Tr$ in \cite[Sec. 3.4]{Ped:AN}:
\begin{prop}\label{p:20090511-1} $\cL^1(\cA,\tau)$ and $\cL^2(\cA,\tau)$ are two--sided
self--adjoint ideals in $\cA$.
Moreover for $T,S\in \cL^2(\cA,\tau)$ one has $TS,ST\in\cL^1(\cA,\tau)$
and
\[
\tau(ST)=\tau(TS).\]
The same formula holds for $T\in\cL^1(\cA,\tau)$ and $S\in\cB(\cH)$.
In particular $\tau\restriction \cL^p(\cA,\tau), p=1,2,$ is a trace.
\end{prop}
\subsection{Uniqueness of $\Tr$ on $\cB(\cH)$}
As for finite--dimensional matrix algebras one now shows
that up to a normalization there is a unique trace on the ideal
of finite rank operators.
\begin{lemma}\label{l:Tr-uniqueness-FR} Let $\FRH$ be the ideal of finite rank operators on $\cH$.
Any trace $\tau:\FRH\longrightarrow \C$ is proportional to
$\Tr\restriction\FRH$.
\end{lemma}
\begin{proof}
Let $P,Q\in\cB(\cH)$ be rank one orthogonal projections.
Choose $v\in\im P, w\in \im Q$ with $\|v\|=\|w\|=1$ and put
\begin{equation}
T:= \scalar{v}{\cdot}\; w.
\end{equation}
Then $T\in\FRH$ and $T^*T=P, TT^*=Q$, hence
$\tau(P)=\tau(T^*T)=\tau(TT^*)=\tau(Q)$. Consequently $\tau$
takes the same value $\gl_\tau\in\C$ on all orthogonal projections of
rank one.
If $T\in\FRH$ is self--adjoint then $T=\sum_{j=1}^N \mu_j P_j$
with rank one orthogonal projections $P_j$. Thus
\begin{equation}
\tau(T)=\gl_\tau \sum_{j=1}^N\mu_j=\gl_\tau \Tr(T).
\end{equation}
Since each $T\in\FRH$ is a linear combination of self--adjoint
elements of $\FRH$ we reach the conclusion.
\end{proof}
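The pivotal step of the proof, that the rank one operator $T=\scalar{v}{\cdot}\,w$ satisfies $T^*T=P$ and $TT^*=Q$, is easy to confirm numerically (a Python sketch with randomly chosen unit vectors in $\C^5$):

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(x):
    return x / np.linalg.norm(x)

# Unit vectors v, w and the rank one operator T x = <v, x> w.
v = unit(rng.standard_normal(5) + 1j * rng.standard_normal(5))
w = unit(rng.standard_normal(5) + 1j * rng.standard_normal(5))
T = np.outer(w, v.conj())

P = np.outer(v, v.conj())   # orthogonal projection onto C v
Q = np.outer(w, w.conj())   # orthogonal projection onto C w

# T^* T = P and T T^* = Q, so a trace takes the same value on P and Q.
assert np.allclose(T.conj().T @ T, P)
assert np.allclose(T @ T.conj().T, Q)
```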
The properties of $\Tr$ we have mentioned so far are not sufficient
to show that a tracial weight on $\cB(\cH)$ is proportional to $\Tr$.
The property which implies this is \emph{normality}:
\begin{prop}\label{p:tr-normal}
\textup{1.} $\Tr$ is \emph{normal}, that is, if $(T_n)_{n\in\Z_+}\subset\cB_+(\cH)$ is
an increasing sequence with $T_n\to T\in\cB_+(\cH)$ strongly then
$\Tr(T)=\sup_{n\in\Z_+} \Tr(T_n)$.
\textup{2.} Let $\tau$ be a normal tracial weight on $\cB(\cH)$. Then there
is a constant $\gl_\tau\in \R_+\cup \{\infty\}$ such that for $T\in\cB_+(\cH)$
we have $\tau(T)=\gl_\tau \Tr(T)$.
\end{prop}
\begin{remark} In the somewhat pathological case $\gl=\infty$ the tracial
weight $\tau_\infty$ is given by
\begin{equation}
\tau_\infty(T)=\begin{cases} \infty,& T\in\cB_+(\cH)\setminus \{0\},\\
0,& T=0.
\end{cases}
\end{equation}
In all other cases $\tau$ is \emph{semifinite}, that means for
$T\in\cB_+(\cH)$ there is an increasing sequence $(T_n)_{n\in \Z_+}$
with $\tau(T_n)<\infty$ and $T_n\nearrow T$ strongly.
Here, $T_n$ may be chosen of finite rank.
\end{remark}
\begin{proof}
1. Let $(e_k)_{k\in\Z_+}$ be an orthonormal basis of $\cH$. Since $T_n \to
T$ strongly we have $\scalar{T_n e_k}{e_k}\nearrow \scalar{T e_k}{e_k}$.
The Monotone Convergence Theorem for the counting measure on $\Z_+$ then
implies
\begin{equation}
\Tr(T)=\sum_{k=0}^ \infty \scalar{Te_k}{e_k}=\sup_{n\in\Z_+} \sum_{k=0}^\infty
\scalar{T_ne_k}{e_k}=\sup_{n\in\Z_+} \Tr(T_n).
\end{equation}
2. Let $\tau:\cB_+(\cH)\longrightarrow \R_+\cup \{\infty\}$ be a normal
tracial weight. As in the proof of Lemma \ref{l:Tr-uniqueness-FR} one shows
that $\tau\restriction \FRH=\gl_\tau \Tr\restriction\FRH$ for some
$\gl_\tau\in
\R_+\cup \{\infty\}$.
Choose an increasing sequence of orthogonal projections $(P_n)_{n\in
\Z_+}$, $\rank P_n=n$. Given $T\in\cB_+(\cH)$ the sequence of finite rank operators
$(T^{1/2}P_nT^{1/2})_{n\in\Z_+}$ is increasing and it converges strongly to
$T$. Since $\tau$ is assumed to be normal we thus find
\begin{equation}\begin{split}
\tau(T)&=\sup_{n\in\Z_+} \tau(T^{1/2}P_nT^{1/2})\\
&=\sup_{n\in\Z_+} \gl_\tau \Tr(T^{1/2}P_nT^{1/2})=\gl_\tau\Tr(T).\qedhere
\end{split}
\end{equation}
\end{proof}
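In finite dimensions the normality mechanism of the proof can be watched numerically (a Python sketch; the $P_n$ are the projections onto the span of the first $n$ standard basis vectors):

```python
import numpy as np

rng = np.random.default_rng(3)

# A positive operator T on R^6 and its square root via diagonalization.
B = rng.standard_normal((6, 6))
T = B @ B.T
w, V = np.linalg.eigh(T)
T_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

# Tr(T^{1/2} P_n T^{1/2}) for the increasing projections P_n.
traces = []
for k in range(7):
    P = np.diag([1.0] * k + [0.0] * (6 - k))
    traces.append(np.trace(T_half @ P @ T_half))

assert all(traces[i] <= traces[i + 1] + 1e-12 for i in range(6))
assert abs(traces[-1] - np.trace(T)) < 1e-10   # increases to Tr(T)
```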
\begin{remark}
The uniqueness of the trace $\Tr$ we presented here is in fact a special
case of a rich theory of traces for weakly closed self--adjoint
subalgebras of $\cB(\cH)$ (von Neumann algebras) due to \textsc{Murray}
and \textsc{von Neumann} \cite{MurNeu:ROI}, \cite{MurNeu:ROII},
\cite{Neu:ROIII}, \cite{MurNeu:ROIV}.
\end{remark}
\subsection{The Dixmier Trace}
\label{ss:Dixmier-trace}
In view of Proposition \plref{p:tr-normal} it is natural to ask whether
there exist non--normal tracial weights on $\cB(\cH)$.
A cheap answer to this question would be to define for $T\in\cB_+(\cH)$
\begin{equation}
\tau(T):=\begin{cases} \Tr(T),& T\in\FRH,\\
\infty, & T\not\in\FRH.
\end{cases}
\end{equation}
Then $\tau$ is certainly a non--trivial non--normal tracial weight on
$\cB(\cH)$.
To make the problem non--trivial, one should ask whether there exists a
non--trivial non--normal tracial weight on $\cB(\cH)$ which vanishes
on trace class operators.
This was answered affirmatively by \textsc{J. Dixmier} in the short note \cite{Dix:ETN}.
We briefly describe Dixmier's very elegant argument.
Denote by $\cK(\cH)$ the ideal
of compact operators. We abbreviate
\begin{equation}
\cL^p(\cH):=\cL^p(\cB(\cH),\Tr),
\end{equation}
see Section \plref{ss:trace-ideals}.
A compact operator $T$ is in $\cL^1(\cH)$
if and only if $\sum\limits_{j=1}^\infty \mu_j(T)<\infty$. Here $\mu_j(T), j\ge 1,$
denotes the decreasing sequence of eigenvalues of $|T|$ counted with multiplicity.
By $\cL^{(1,\infty)}(\cH)\supset\cL^1(\cH)$ one denotes the space of
$T\in\cK(\cH)$ for which
\[\sum\limits_{j=1}^{N}\mu_j(T)=O(\log N),\quad N\to\infty.\]
For an operator $T\in\cL^{(1,\infty)}(\cH)$ the sequence
\[
\ga_N(T):=\frac{1}{\log (N+1)}\sum_{j=1}^{N} \mu_j(T),\quad N\ge 1,\]
is thus bounded.
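The prototype of an element of $\cL^{(1,\infty)}(\cH)\setminus\cL^1(\cH)$ has eigenvalue sequence $\mu_j=1/j$. For this sequence $(\ga_N)_{N\ge 1}$ is not only bounded but convergent to $1$, so every Dixmier trace constructed below assigns the value $1$ to such an operator. A Python sketch:

```python
import math

# Eigenvalue sequence mu_j = 1/j of the prototype operator in L^{(1,infty)}.
def alpha(N):
    # alpha_N = (mu_1 + ... + mu_N) / log(N+1)
    return sum(1.0 / j for j in range(1, N + 1)) / math.log(N + 1)

vals = [alpha(10 ** k) for k in range(1, 7)]
assert all(v < 2.0 for v in vals)       # (alpha_N) is bounded
assert abs(vals[-1] - 1.0) < 0.1        # and converges (slowly) to 1
```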
\begin{prop}[\textsc{J. Dixmier} {\cite{Dix:ETN}}]\label{p:Dixmier-Connes}
Let $\go\in l^\infty(\Z_+\setminus\{0\})^*$ be
a linear functional satisfying
\begin{enumerate}
\item[\textup{(1)}] $\go$ is a \emph{state}, that is, a
positive linear functional with\\ $\go(1,1,\dots)=1$.
\item[\textup{(2)}] $\go((\ga_N)_{N\ge 1})=0$ if $\lim\limits_{N\to\infty} \ga_N=0$.
\item[\textup{(3)}]
\begin{equation}\label{eq:20081114-3}
\go(\ga_1,\ga_2,\ga_3,\dots)=\go(\ga_1,\ga_1,\ga_2,\ga_2,\dots).
\end{equation}
\end{enumerate}
Put for non--negative $T\in\cL^{(1,\infty)}(\cH)$
\begin{equation}
\begin{split}
\Tr_\go(T)&:=\go\Bigl(\bigl(\frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T)\bigr)_{N\ge 1}\Bigr)\\
&=:\lim_\go \frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T).
\end{split}
\end{equation}
Then $\Tr_\go$ extends by linearity to a trace on $\cL^{(1,\infty)}(\cH)$.
If $T\in\cL^1(\cH)$ is of trace class then $\Tr_\go(T)=0$. Furthermore,
\begin{equation}
\Tr_\go(T)=\lim_{N\to\infty}\frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T),
\end{equation}
if the limit on the right hand side exists.
Finally, by putting $\Tr_\go(T)=\infty$
if $T\in\cB_+(\cH)\setminus \cL^{(1,\infty)}(\cH)$
one extends $\Tr_\go$ to $\cB_+(\cH)$ and hence one
obtains a non--normal
tracial weight on $\cB(\cH)$.
\end{prop}
\begin{proof}
Let us make a few comments on how this result is proved:
First the existence of a state $\go$ with the properties (1), (2), and (3)
can be shown by a fixed point argument; in this simple case even Schauder's
Fixed Point Theorem would suffice. Alternatively, the theory of Ces{\`a}ro
means leads to a more constructive proof of the existence of $\go$,
\textsc{Connes} \cite[Sec. 4.2.$\gamma$]{Con:NG}.
Next we note that (1) and (2) imply
that if $(\ga_N)_{N\ge 1}$ is convergent then
$\go((\ga_N)_{N\ge 1})=\lim\limits_{N\to\infty} \ga_N$.
Thus changing finitely many terms of $(\ga_N)_{N\ge 1}$
(i.e. adding a sequence of limit $0$) does not change
its $\go$--limit. Together with the positivity of
$\go$ this implies
\begin{equation}\label{eq:20090511-2}
\text{if $\ga_N\le\gb_N$ for $N\ge N_0$ then $\go((\ga_N)_{N\ge 1})
\le \go((\gb_N)_{N\ge 1})$.}
\end{equation}
The previously mentioned facts imply furthermore
\begin{equation}\label{eq:20081114-2}
\liminf_{N\to\infty}\ga_N\le \go((\ga_N)_{N\ge 1})\le \limsup_{N\to\infty}\ga_N.
\end{equation}
Now let $T_1,T_2\in\cL^{(1,\infty)}$ be non--negative operators and
put
\begin{equation}\begin{split}
\ga_N&:=\frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T_1),\quad
\gb_N:=\frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T_2),\\
\gamma_N&:=\frac{1}{\log (N+1)}\sum_{j=1}^{N}\mu_j(T_1+T_2).
\end{split}
\end{equation}
Using the min-max principle one shows the inequalities
\begin{equation}\label{eq:maxmin-inequalities}
\sum_{j=1}^N \mu_j(T_1+T_2)\le \sum_{j=1}^N \bigl(\mu_j(T_1)+\mu_j(T_2)\bigr)\le
\sum_{j=1}^{2N} \mu_j(T_1+T_2),
\end{equation}
cf. \textsc{Hersch} \cite{Her:CVS, Her:IVP}, thus
\begin{align}
\gamma_N&\le \ga_N+\gb_N,\label{eq:20090511-1}\\
\ga_N+\gb_N &\le \frac{\log (2N+1)}{\log (N+1)}
\gamma_{2N}.\label{eq:20081118-7}
\end{align}
\eqref{eq:20090511-1} gives $\go((\gamma_N)_{N\ge 1})\le
\go((\ga_N)_{N\ge 1})+\go((\gb_N)_{N\ge 1})$.
The proof of the converse inequality makes essential use
of the crucial assumption \eqref{eq:20081114-3}. Together with
\eqref{eq:20081118-7} and \eqref{eq:20090511-2} we find
\begin{equation}
\begin{split}
\go((\ga_N)_{N\ge 1})+\go((\gb_N)_{N\ge 1}) & \le
\go(\gamma_2,\gamma_4,\gamma_6,\dots)\\
&=\go(\gamma_2,\gamma_2,\gamma_4,\gamma_4,\dots),
\end{split}
\end{equation}
so, in view of Proposition \plref{p:Dixmier-Connes} (2), it only remains to remark
that \[\lim\limits_{N\to\infty} (\gamma_{2N}-\gamma_{2N-1})=0.\]
Thus $\Tr_\go$ is additive on the cone of positive operators. Since $\Tr_\go(T)$
depends only on the spectrum, it is certainly invariant under conjugation
by unitary operators. Now it is easy to see that $\Tr_\go$ extends by linearity
to a trace on $\cL^{(1,\infty)}(\cH)$. The other properties follow easily.
\end{proof}
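The inequalities \eqref{eq:maxmin-inequalities}, which carry the whole additivity argument, can be tested numerically for random positive matrices (a Python sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def eigs_desc(T):
    # Eigenvalues of the positive matrix T in decreasing order.
    return np.sort(np.linalg.eigvalsh(T))[::-1]

def rand_psd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T

n, N = 12, 5
T1, T2 = rand_psd(n), rand_psd(n)
m1, m2, m12 = eigs_desc(T1), eigs_desc(T2), eigs_desc(T1 + T2)

lhs = m12[:N].sum()                 # sum_{j<=N} mu_j(T1+T2)
mid = (m1[:N] + m2[:N]).sum()       # sum_{j<=N} mu_j(T1)+mu_j(T2)
rhs = m12[:2 * N].sum()             # sum_{j<=2N} mu_j(T1+T2)
assert lhs <= mid + 1e-9 and mid <= rhs + 1e-9
```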
\section{Pseudodifferential operators with parameter}
\label{s:POP}
\subsection{From differential operators to pseudodifferential operators}
Historically, pseudodifferential operators were invented to understand differential
operators. Suppose given a differential operator
\begin{equation}\label{eq:2-1}
P=\sum_{|\ga|\le d} p_\ga(x)\; i^{-|\ga|} \frac{\pl^\ga}{\pl x^\ga}
\end{equation}
in an open set $U\subset\R^n$. Representing a function $u\in\cinfz{U}$
in terms of its Fourier transform
\begin{equation}
u(x)=\int_{\R^n} e^{i \scalar{x}{\xi}} \hat u(\xi)\dbar\xi,\quad \dbar\xi=(2\pi)^{-n} d\xi,
\label{eq:Fourier2-2}
\end{equation}
where $\hat u(\xi)=\int_{\R^n} e^{-i \scalar{x}{\xi}} u(x) dx$,
we find
\begin{equation}\label{eq:2.3}
\begin{split}
Pu(x)&= \int_{\R^n} e^{i \scalar{x}{\xi}}p(x,\xi) \hat u(\xi)\dbar\xi \\
&=\int_{\R^n}\Bigl(\int_U e^{i\scalar{x-y}{\xi}} p(x,\xi) u(y) dy \Bigr)\dbar\xi\\
&=:\bigl(\Op(p) u\bigr)(x).
\end{split}
\end{equation}
Here
\begin{equation}\label{eq:2.4}
p(x,\xi)= \sum_{|\ga|\le d}p_\ga(x)\xi^\ga
\end{equation}
denotes the \emph{complete symbol} of $P$. The right hand side of \eqref{eq:2.3} shows
that $P$ is a pseudodifferential operator with complete symbol function
$p(x,\xi)$.
Note that $p(x,\xi)$ is a polynomial in $\xi$. One now considers
pseudodifferential operators with more general symbol functions such that
inverses of differential operators are included in the calculus. For example,
a first approximation to the resolvent $(P-\gl^d)\ii$ is given
by $\Op((p(\cdot,\cdot)-\gl^d)\ii)$. For constant coefficient differential
operators this is indeed the exact resolvent.
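This can be made concrete in a small numerical sketch (Python; the discrete Fourier transform on a periodic grid plays the role of \eqref{eq:Fourier2-2}, and the power $\gl^d$ is abbreviated to a single spectral parameter $\gl\notin[0,\infty)$): for the constant coefficient symbol $p(\xi)=\xi^2$ the operator $\Op((p-\gl)\ii)$ inverts $P-\gl$ exactly.

```python
import numpy as np

# P = -d^2/dx^2 on the periodic grid [0, 2*pi): complete symbol p(xi) = xi^2.
n = 64
x = 2 * np.pi * np.arange(n) / n
xi = np.fft.fftfreq(n, d=1.0 / n)       # integer Fourier frequencies

def op(symbol, u):
    # Op(a) u = inverse FFT of (a * FFT(u)) for an x-independent symbol a.
    return np.fft.ifft(symbol * np.fft.fft(u))

lam = -1.0 + 0.5j                       # spectral parameter off [0, infinity)
p = xi ** 2
u = np.exp(np.sin(x))                   # a smooth periodic test function

v = op(p - lam, u)                      # (P - lam) u
w = op(1.0 / (p - lam), v)              # Op((p - lam)^{-1}) applied to v
assert np.allclose(w, u)                # the resolvent is exact here
```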
Let us now describe the most commonly used symbol spaces. In view of
the resolvent example above we are going to consider symbols with
an auxiliary parameter.
\pagebreak[3]
\subsection{Basic calculus with parameter}
We first recall the notion of conic manifolds and conic sets
from \textsc{Duistermaat} \cite[Sec. 2]{Dui:FIO}.
A conic manifold is a smooth principal fiber bundle $\Gamma \rightarrow B$
with structure group $\R_+^*:=(0,\infty)$. It is always trivializable.
A subset $\Gamma \subset \R^N\setminus \{ 0 \} $
which is a conic manifold by the natural $\R_+^*$-action on $\R^N\setminus
\{0\}$ is called a conic set.
The base manifold of a conic set $\Gamma \subset \R^N\setminus\{0\}$ is
diffeomorphic to $S \Gamma := \Gamma \cap S^{N -1}$. By a cone
$\Gamma \subset \R^N$ we will always mean a conic set or the
closure of a conic set in $\R^N$ such that $\Gamma$ has nonempty interior.
Thus $\R^N$ and $\R^N\setminus\{0\}$ are cones, but only the latter is a conic set.
$\{0\}$ is a zero--dimensional cone.
\subsubsection{Symbols}
Let $U\subset \R^n$ be an open subset and $\Gamma\subset \R^N$ a cone. A typical
example we have in mind is $\Gamma=\R^n\times\Lambda$, where $\Lambda\subset\C$
is an open cone.
We denote by $\sym^m(U;\Gamma)$, $m\in \R$, the space of symbols
of H\"ormander type $(1,0)$ (\textsc{H\"ormander} \cite{Hor:FIOI},
\textsc{Grigis--Sj{\o}strand} \cite{GriSjo:MAD}).
More precisely, $\sym^m(U;\Gamma)$ consists of those
$a\in \CC^\infty(U\times \Gamma)$ such that for multi--indices
$\alpha\in \Z_+^n,\gamma\in \Z_+^N$ and compact subsets $K\subset U, L\subset\Gamma$
we have an estimate
\begin{equation}\label{eq:3.1}
\bigl|\partial_x^\alpha\partial_\xi^\gamma a(x,\xi)\bigr|
\le C_{\alpha,\gamma,K,L} (1+|\xi|)^{m-|\gamma|}, \quad x\in K, \xi\in L^c.
\end{equation}
Here $L^c=\bigsetdef{t\xi}{\xi\in L, t\ge 1}$.
The best constants in \eqref{eq:3.1} provide a set of
semi-norms which endow
$\sym^\infty (U;\Gamma):=\bigcup_{m\in\R}\sym^m(U;\Gamma)$ with the structure of a
Fr{\'e}chet algebra.
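For instance, $a(\xi)=(1+\xi^2)^{1/2}$ (here $n=1$, $\Gamma=\{0\}$) is a symbol of order $m=1$; the estimates \eqref{eq:3.1} for the first few $\xi$-derivatives, computed analytically, can be checked on a large grid (a Python sketch with the non-optimal constant $C=2$):

```python
import numpy as np

# a(xi) = (1 + xi^2)^{1/2} is a symbol of order m = 1: every xi-derivative
# gains one power of decay in (1 + |xi|).
xi = np.linspace(-1e4, 1e4, 200001)
m, C = 1.0, 2.0

derivs = [
    np.sqrt(1 + xi ** 2),            # a
    xi / np.sqrt(1 + xi ** 2),       # a'
    (1 + xi ** 2) ** (-1.5),         # a''
]
for k, d in enumerate(derivs):
    assert np.all(np.abs(d) <= C * (1 + np.abs(xi)) ** (m - k))
```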
We mention the following variants of the space $\sym^\bullet$:
\subsubsection{Classical symbols $\CS^m(U;\Gamma)$}
A symbol $a\in\sym^m(U;\Gamma)$ is called \emph{classical} if there are
$a_{m-j}\in \cinf{U\times\Gamma}$ with
\begin{equation}\label{eq:classical}
a_{m-j}(x,r\xi)=r^{m-j} a_{m-j}(x,\xi),\quad r\ge 1, |\xi|\ge 1,
\end{equation}
such that for $N\in\Z_+$
\begin{equation}\label{eq:classical-a}
a-\sum_{j=0}^{N-1} a_{m-j}\in\sym^{m-N}(U;\Gamma).
\end{equation}
The latter property is usually abbreviated $a\sim\sum\limits_{j=0}^\infty a_{m-j}$.
Many authors require the functions in \eqref{eq:classical} to be
homogeneous everywhere on $\Gamma\setminus\{0\}$. Note however
that if $\Gamma=\R^p$ and $f:\Gamma\to\C$ is a function which is homogeneous
of degree $\ga$ then $f$ cannot be smooth at $0$ unless $\ga\in\Z_+$. So
such a function is not a symbol in the strict sense. We prefer the
functions in the expansion \eqref{eq:classical-a} to be smooth everywhere
and homogeneous only for $r\ge 1$ and $|\xi|\ge 1$.
The space of classical symbols of order $m$ is denoted by $\CS^m(U;\Gamma)$.
In view of the asymptotic expansion \eqref{eq:classical-a} we have
$\CS^{m'}(U;\Gamma)\subset \CS^m(U;\Gamma)$ only if $m-m'\in\Z_+$ is a non--negative
integer.
\subsubsection{$\log$--polyhomogeneous symbols $\CS^{m,k}(U;\Gamma)$}
$a\in \sym^m(U;\Gamma)$ is called
\emph{$\log$--polyhomogeneous} (cf.~\textsc{Lesch} \cite{Les:NRP}) of order
$(m,k)$ if it has an
asymptotic expansion in $\sym^\infty (U;\Gamma)$ of the form
\begin{equation}\label{ML-G2.2}
a\sim\sum\limits_{j=0}^\infty a_{m-j} \quad
\text{ with } a_{m-j}=\sum_{l=0}^{k} b_{m-j,l},
\end{equation}
where $a_{m-j}\in \CC^\infty(U\times \Gamma)$ and
$b_{m-j,l}(x,\xi)=\tilde b_{m-j,l}(x,\xi/|\xi|)|\xi|^{m-j}\log^l|\xi|$ for
$|\xi|\ge 1$.
By $\CS^{m,k}(U;\Gamma)$ we denote
the space of $\log$--polyhomogeneous symbols of order $(m,k)$.
Classical symbols are those of $\log$ degree $0$, i.e.
$\CS^m(U;\Gamma)=\CS^{m,0}(U;\Gamma)$.
\subsubsection{Symbols which are holomorphic in the parameter}
If $\Gamma=\R^n\times\Lambda$, where $\Lambda\subset\C$ is
a cone one may additionally require symbols to
be holomorphic in the $\Lambda$ variable. This aspect is
important if one deals with the resolvent of an elliptic differential
operator since the latter depends analytically on the resolvent parameter.
This class of symbols is not emphasized in this paper.
\subsubsection{Pseudodifferential operators with parameter}
Fix $a\in\sym^m(U;\R^n\times\Gamma)$ (respectively~$\in \CS^m(U;\R^n\times\Gamma)$).
For each fixed
$\mu_0\in\Gamma$ we have $a(\cdot, \cdot, \mu_0) \in \sym^m (U; \R^n)$
(respectively~$\in\CS^m(U;\R^n))$ and hence
we obtain a family of pseudodifferential operators
parametrized over $\Gamma$ by putting
\begin{equation}\label{eq:psido}
\begin{split}
\big[ \Op( &a(\mu_0) ) \, u \big] \, (x):= \big[ A(\mu_0) \, u \big] (x)\\
&:= \int_{\R^n} \, e^{i \langle x,\xi \rangle} \,
a(x,\xi,\mu_0) \, \hat{u} (\xi ) \, \dbar \xi \\
&= \int_{\R^n}\int_U \, e^{i \langle x-y,\xi \rangle} \,
a(x,\xi,\mu_0) \, u(y) dy \dbar \xi .
\end{split}
\end{equation}
Note that the Schwartz kernel $K_{A(\mu_0)}$ of $A(\mu_0)=\Op(a(\mu_0))$
is given by
\begin{equation}\label{eq:Schwartz-kernel}
K_{A(\mu_0)}(x,y,\mu_0)=\int_{\R^n}\, e^{i \scalar{x-y}{\xi} }\,
a(x,\xi,\mu_0) \, \dbar \xi .
\end{equation}
In general the integral is to be understood as an oscillatory integral,
for which we refer the reader to \cite{Shu:POST}, \cite{GriSjo:MAD}.
The integral exists in the usual sense if $m+n<0$.
The extension to manifolds and vector bundles is now straightforward,
although historically it took quite a while until the theory
of singular integral operators had evolved into a theory
of pseudodifferential operators on vector bundles over smooth
manifolds (\textsc{Calder{\'o}n-Zygmund} \cite{CalZyg:SIO}, \textsc{Seeley}
\cite{See:SIC,See:IDO}, \textsc{Kohn-Nirenberg} \cite{KohNir:APD}).
For a smooth manifold $M$ and a vector bundle $E$ over $M$ we define the
space $\CL^m (M,E; \Gamma)$ of classical parameter dependent pseudodifferential
operators between sections of $E$ in the usual way by patching together local
data:
\begin{dfn}\label{d:pseudo-param}
Let $E$ be a complex vector bundle of finite fiber dimension $N$ over a smooth closed
manifold $M$ and let $\Gamma\subset\R^p$ be a cone.
A {\em classical pseudo\-differential
operator of order $m$ with parameter} $\mu\in \Gamma$ is a family
of operators
$B(\mu):\Gamma^\infty(M;E)\longrightarrow \Gamma^\infty(M;E),\, \mu\in\Gamma$,
such that locally $B(\mu)$ is given by
\[
\big[B(\mu)\, u\big](x)=(2\pi)^{-n}\int_{\R^n}\int_Ue^{i\scalar{x-y}{\xi}} b(x,\xi,\mu)u(y)dyd\xi
\]
with $b$ an $N\times N$ matrix of functions belonging to
$\CS^m(U;\R^n\times \Gamma)$.
$\CL^{m,k}(M,E;\Gamma)$ is defined similarly, although we will discuss $\CL^{m,k}$
only in the non--parametric case. Of course, operators may act between
sections of different vector bundles $E,F$. In that case we write $\CL^{m,k}(M,E,F;\Gamma)$.
\end{dfn}
\begin{remark}\label{rem:20081120}
1. In case $\Gamma=\{0\}$ we obtain the usual (classical)
pseudodifferential operators of order $m$ on $M$.
Here we write $\CL^m(M,E)$ instead of $\CL^m(M,E;\{0\})$
respectively $\CL^m(M,E,F)$ instead of $\CL^m(M,E,F;\{0\})$.
2.
Parameter dependent pseudodifferential operators play a crucial
role, e.g., in the construction of the resolvent expansion
of an elliptic operator (\textsc{Gilkey} \cite{Gil:ITH}).
A {\em pseudodifferential operator with parameter} is more than just a map from
$\Gamma$ to the space of pseudodifferential operators, cf. Corollary
\ref{c:elliptic-regularity} and Remark
\ref{rem:elliptic-regularity}.
To illustrate this let us consider a single elliptic operator
$A\in\CL^m(U)$. For simplicity let the symbol $a(x,\xi)$ of $A$
be positive definite. Then we can consider the ``parametric
symbol''
$b(x,\xi,\gl)=a(x,\xi)-\gl^m$ for $\gl\in \Lambda:=\C\setminus \R_+$.
However, in general $b$ lies in $\CS^m(U;\R^n\times\Lambda)$ only if $A$
is a differential operator. The reason is that $b$ will satisfy
the estimates \eqref{eq:3.1} only if $a(x,\xi)$
is polynomial in $\xi$, because then $\partial_\xi^\gb a(x,\xi)=0$
if $|\gb|>m$. If $a(x,\xi)$ is not polynomial in $\xi$, however,
\eqref{eq:3.1} will in general not hold if $|\gb|>m$.
This problem led \textsc{Grubb} and \textsc{Seeley} \cite{GruSee:WPP}
to invent their calculus of \emph{weakly parametric} pseudodifferential
operators. $b(x,\xi,\gl)=a(x,\xi)-\gl^m$ is weakly parametric
for any elliptic $A$ with positive definite leading symbol
(or more generally if $A$ satisfies Agmon's angle condition).
The class of weakly parametric operators is beyond the scope
of this survey, however.
3. The definition
of the parameter dependent calculus is not uniform in the literature.
It will be crucial in the sequel that differentiating by the parameter
reduces the order of the operator. This is the convention, e.g.
of \textsc{Gilkey} \cite{Gil:ITH} but differs from the one in
\textsc{Shubin} \cite{Shu:POST}.
In \textsc{Lesch--Pflaum} \cite[Sec.~3]{LesPfl:TAP}
it is shown that parameter dependent pseudodifferential operators can
be viewed as translation invariant pseudodifferential
operators on $U\times \Gamma$ and therefore our convention
of the parameter dependent calculus contains \textsc{Melrose}'s
suspended algebra from \cite{Mel:EIF}.
\end{remark}
\begin{prop}$\CL^{\bullet,\bullet}(M,E;\Gamma)$ is a bi--filtered algebra,
that is,
\[A B\in\CL^{m+m',k+k'}(M,E;\Gamma)\]
for $A\in\CL^{m,k}(M,E;\Gamma)$ and $B\in\CL^{m',k'}(M,E;\Gamma)$.
\end{prop}
The following result about the $L^2$--continuity of a
parameter dependent pseudodifferential operator is crucial.
We denote by $L^2_s(M,E)$ the Hilbert space of sections of $E$
of Sobolev class $s$.
\begin{theorem}\label{t:l2continuity}
Let $A\in\CL^m(M,E;\Gamma)$. Then for fixed $\mu\in\Gamma$ the
operator $A(\mu)$ extends by continuity to a bounded linear
operator $L^2_s(M,E)\longrightarrow L^2_{s-m}(M,E)$, $s\in\R$.
Furthermore, for $m\le 0$ one has the following uniform estimate
in $\mu$: for $0\le\vartheta\le 1, \mu_0\in\Gamma$,
there is a constant $C(s,\vartheta,\mu_0)$ such that
\[
\|A(\mu)\|_{s,s+\vartheta|m|}\le C(s,\vartheta,\mu_0)
(1+|\mu|)^{-(1-\vartheta)|m|},\quad |\mu|\ge |\mu_0|,\; \mu\in\Gamma.
\]
Here $\|A(\mu)\|_{s,s+\vartheta|m|}$ denotes the norm of the operator
$A(\mu)$ as a map from the Sobolev space $L^2_s(M,E)$ into
$L^2_{s+\vartheta |m|}(M,E)$.
\end{theorem}
If $\Gamma=\R^p$ then we can omit the $\mu_0$ in the formulation of the Theorem
(i.e. $\mu_0=0$). For a proof of Theorem \plref{t:l2continuity} see e.g.
\cite[Theorem 9.3]{Shu:POST}.
\subsubsection{The parametric leading symbol}
The leading symbol of a classical pseudodifferential operator $A$ of order $m$
with parameter is now defined as follows: if $A$ has complete symbol $a(x,\xi,\mu)$
with expansion $a\sim\sum\limits_{j=0}^\infty a_{m-j}$ then
\begin{equation}\label{20081116-1}
\begin{split}
\sigma_A^m(x,\xi,\mu)&:=\lim_{r\to\infty} r^{-m}a(x,r\xi,r\mu)\\
&= (|\xi|^2+|\mu|^2)^{m/2}
a_m(x,\frac{(\xi,\mu)}{\sqrt{|\xi|^2+|\mu|^2}}).
\end{split}
\end{equation}
$\sigma_A^m$ has an invariant meaning as a smooth function on
\[T^*M\times\gG\,\setminus\, \bigsetdef{(x,0,0)}{x\in M}\]
which is homogeneous in the following sense:
\[
\gs^m_A(x,r\xi,r\mu)=r^m\gs^m_A(x,\xi,\mu) \text{ for } (\xi,\mu)\ne (0,0),\, r>0.
\]
This symbol is determined by its restriction to the sphere in
\[
S(T^*M\times \Gamma)=\bigsetdef{(\xi,\mu)\in T^*M\times \Gamma}{ |\xi|^2+|\mu|^2=1}
\]
and there is an exact sequence
\begin{equation}
0\longrightarrow \CL^{m-1}(M;\Gamma)\hookrightarrow \CL^m(M;\Gamma)\xrightarrow{\sigma}
C^\infty(S(T^*M\times\Gamma))\longrightarrow 0;
\end{equation}
the vector bundle $E$ being omitted from the notation just to save horizontal space.
\begin{example}\label{ex:20081120} Let us look at an example to illustrate the difference between
the parametric leading symbol and the leading symbol for a single pseudodifferential
operator. Let
\begin{equation}
a(x,\xi)=\sum_{|\ga|\le m} a_\ga(x) \xi^\ga
\end{equation}
be the complete symbol of an elliptic \emph{differential} operator. Then
(cf. Remark \ref{rem:20081120} 2.)
\begin{equation}
b(x,\xi,\gl)= a(x,\xi) -\gl^m
\end{equation}
is a symbol of a parameter dependent (pseudo)differential operator $B(\gl)$
with parameter $\gl$ in a suitable cone $\Lambda\subset\C$.
The parameter dependent leading symbol of $B$ is $\sigma_B^m(x,\xi,\gl)=a_m(x,\xi)-\gl^m$
while for fixed $\gl$ the leading symbol of the single operator $B(\gl)$ is
$\sigma_{B(\gl)}^m(x,\xi)=a_m(x,\xi)=\sigma_{B}^m(x,\xi,\gl=0)$.
\end{example}
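The limit formula \eqref{20081116-1} makes this difference visible numerically: rescaling $\xi$ and $\gl$ jointly produces the parametric leading symbol, rescaling $\xi$ alone the leading symbol of the frozen operator. A Python sketch with the scalar model symbol $b(\xi,\gl)=\xi^2+\xi-\gl^2$ (order $m=2$; the term $\xi$ is a lower order perturbation):

```python
# Model symbol of order m = 2 with a lower order term xi.
def b(xi, lam):
    return xi ** 2 + xi - lam ** 2

m, xi0, lam0, r = 2, 1.3, 0.7, 1e6

# Parametric leading symbol: rescale xi and lam jointly.
sigma_param = b(r * xi0, r * lam0) / r ** m
assert abs(sigma_param - (xi0 ** 2 - lam0 ** 2)) < 1e-4

# Leading symbol of the single operator B(lam0): rescale xi only.
sigma_frozen = b(r * xi0, lam0) / r ** m
assert abs(sigma_frozen - xi0 ** 2) < 1e-4
```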
In fact we have in general:
\begin{lemma} Let $A\in\CL^m(M,E;\Gamma)$ with parameter dependent leading symbol
$\sigma_A^m(x,\xi,\mu)$. For fixed $\mu_0\in\Gamma$ the operator $A(\mu_0)\in\CL^m(M,E)$
has leading symbol $\sigma_{A(\mu_0)}^m(x,\xi)=\sigma_A^m(x,\xi,0)$.
\end{lemma}
\begin{proof} It suffices to prove this locally in a chart $U$ for a scalar
operator $A$. Since the leading
symbols are homogeneous it suffices to consider $\xi$ with $|\xi|=1$.
So suppose
that $A$ has complete symbol $a(x,\xi,\mu)$ in $U$.
Write $a(x,\xi,\mu)=a_m(x,\xi,\mu)+\tilde a(x,\xi,\mu)$
with $\tilde a\in\CS^{m-1}(U;\R^n\times\Gamma)$ and $a_m(x,r\xi,r\mu)=r^m a_m(x,\xi,\mu)$
for $r\ge 1,|\xi|^2+|\mu|^2\ge 1$.
Then for fixed $\mu_0\in\Gamma$ we have
$\tilde a(\cdot,\cdot,\mu_0)\in\CS^{m-1}(U;\R^n)$ and hence
$\lim\limits_{r\to\infty} r^{-m}\tilde a(x,r\xi,\mu_0)=0$.
Consequently
\begin{equation}\begin{split}
\sigma_{A(\mu_0)}^m(x,\xi)&=\lim_{r\to\infty} r^{-m} a_m(x,r\xi,\mu_0)\\
&=\lim_{r\to\infty} a_m(x,\xi,\mu_0/r)=a_m(x,\xi,0).\qedhere
\end{split}
\end{equation}
\end{proof}
\subsubsection{Parameter dependent ellipticity} This is now defined as the invertibility
of the parametric leading symbol.
The basic example of a pseudodifferential operator with parameter is the resolvent of an
elliptic differential operator (cf. Remark \ref{rem:20081120} and
Example \ref{ex:20081120}). The following two results can also be found
in \cite[Section II.9]{Shu:POST}.
\begin{theorem}\label{t:elliptic-regularity}
Let $M$ be a closed manifold and $E,F$ complex vector bundles over $M$.
Let $A\in\CL^m(M,E,F;\Gamma)$ be \emph{elliptic}. Then there exists a
$B\in\CL^{-m}(M,F,E;\Gamma)$ such that $AB-I\in\CL^{-\infty}(M,F;\Gamma)$,
$BA-I\in \CL^{-\infty}(M,E;\Gamma)$.
\end{theorem}
Note that in view of Theorem \plref{t:l2continuity}
this implies the estimates
\begin{equation}\label{eq:smoothing-estimate}
\|B(\mu)A(\mu)-I\|_{s,t}+\|A(\mu)B(\mu)-I\|_{s,t}\le C(s,t,N) (1+|\mu|)^{-N}
\end{equation}
for all $s,t\in\R, N>0$.
This result has an important implication:
\begin{cor}\label{c:elliptic-regularity}
Under the assumptions of Theorem \plref{t:elliptic-regularity}, for each
$s\in\R$ there is a $\mu_0\in\Gamma$ such that for $|\mu|\ge |\mu_0|$
the operator
\[
A(\mu):L^2_s(M,E)\longrightarrow L^2_{s-m}(M,F)
\]
is invertible.
\end{cor}
\begin{proof}In view of \eqref{eq:smoothing-estimate}
there is a $\mu_0=\mu_0(s)$ such that
\[\|(BA-I)(\mu)\|_s<1 \text{ and } \|(AB-I)(\mu)\|_{s-m}<1,\]
for $|\mu|\ge |\mu_0|$ and hence $AB:L^2_s\longrightarrow L^2_s$ and
$BA:L^2_{s-m}\longrightarrow L^2_{s-m}$
are invertible.
\end{proof}
\begin{remark}\label{rem:elliptic-regularity}
This result causes an interesting constraint on those pseudodifferential
operators which may appear as special values of an elliptic parametric family. Namely,
if $A\in\CL^m(M,E,F;\Gamma)$ is parametric elliptic then for each $\mu$ the operator
$A(\mu)\in\CL^m(M,E,F)$ is elliptic. Furthermore, by the previous Corollary and the
stability of the Fredholm index we have $\ind A(\mu)=0$ for all $\mu$.
\end{remark}
\section{Extending the Hilbert space trace to pseudodifferential
operators}
\label{s:EHS}
We pause the discussion of pseudodifferential operators and
look at the Hilbert space trace $\Tr$ on pseudodifferential operators.
\subsection{$\Tr$ on operators of order $<-\dim M$}\label{ss:coordinate-invariance}
Consider the local situation, i.e. a compactly
supported operator $A=\Op(a)\in\CL^{m,k}(U,E)$ in a local chart.
If $m<-\dim M$ then $A$ is trace class and the trace
is given by integrating the kernel of $A$ over the diagonal:
\begin{equation}\label{eq:20081121-1}
\begin{split}
\Tr(A)&=\int_U \tr_{E_x}\bigl(k_A(x,x)\bigr) dx\\
&=\int_U \int_{\R^n} \tr_{E_x} \bigl(a(x,\xi)\bigr)\dbar\xi dx,
\end{split}
\end{equation}
where we have used \eqref{eq:Schwartz-kernel}.
The right hand side is indeed coordinate invariant.
To explain this consider
a coordinate transformation $\kappa:U\to V$. Denote
variables in $U$ by $x,y$ and variables in $V$ by $\tilde x,\tilde y$.
It is not so easy to write down the symbol of $\kappa_*A$.
However, an amplitude function (these
are ``symbols'' which depend on $x$ and $y$, otherwise the
basic formula \eqref{eq:psido} still holds)
for $\kappa_* A$ is given by
\begin{equation}\label{eq:symbol-coordinate-change}
(\tilde x,\tilde y,\xi)\mapsto a(\kappa^{-1}\tilde x,
\phi(\tilde x,\tilde y)^{-1}\xi) \frac{|\det D\kappa^{-1}(\tilde x,\tilde
y)|}{|\det \phi(\tilde x,\tilde y)|},
\end{equation}
cf. \cite[Sec. 4.1, 4.2]{Shu:POST}, where $\phi(\tilde x,\tilde y)$ is
smooth with $\phi(\tilde x,\tilde x)=D\kappa^{-1}(\tilde x)^t$.
Comparing the trace densities in the two coordinate systems requires
a \emph{linear} coordinate change in the $\xi$--variable.
Indeed,
\begin{equation}\label{eq:tr-coordinate-invariance}
\begin{split}
\Tr(\kappa_*A)&=\int_V
\int_{\R^n}\tr_{E_{\tilde x}}\bigl( a(\kappa^{-1}\tilde
x,\phi(\tilde x,\tilde x)^{-1}\xi)\bigr)\dbar\xi d\tilde x\\
&=\int_V \int_{\R^n} \tr_{E_{\tilde x}}\bigl( a(\kappa^{-1}\tilde
x,\xi)\bigr)\dbar\xi\,|\det D\kappa^{-1}(\tilde x)|d\tilde x,\\
&=\int_U \int_{\R^n} \tr_{E_{x}}\bigl( a(x,\xi)\bigr)\dbar\xi\,dx=\Tr(A).
\end{split}
\end{equation}
Therefore, the trace of a pseudodifferential operator $A\in\CL^{m,k}(M,E)$
of order $m<-\dim M=:-n$ on the closed manifold $M$ may be calculated from the complete symbol of
$A$ in coordinates as follows. Choose a finite open cover by coordinate
neighborhoods $U_j, j=1,\ldots, r,$ and a subordinated partition of unity
$\varphi_j, j=1,\ldots,r$. Furthermore, let $\psi_j\in\cinfz{U_j}$ with
$\psi_j\varphi_j=\varphi_j$. Denoting by $a_j(x,\xi)$ the complete symbol
in the coordinate system on $U_j$ we obtain
\begin{equation}\label{eq:20090514}
\Tr(A)=\sum_{j=1}^r \Tr(\varphi_j A\psi_j)=\sum_{j=1}^r
\int_{U_j}\int_{\R^n} \varphi_j(x) \tr_{E_x}\bigl(a_j(x,\xi)\bigr)\dbar\xi\,dx.
\end{equation}
\begin{sloppy}
A priori the previous argument is valid only for operators of order $m<-n$.
However, the symbol function $a_j(x,\xi)$ is rather well--behaved in
$\xi$. If for a class of pseudodifferential operators
we can regularize $\int_{\R^n} a_j(x,\xi) \dbar\xi $ in such a way
that the change of variables
\eqref{eq:tr-coordinate-invariance} works then indeed \eqref{eq:20090514}
extends the trace to this class of operators. Such a regularization is provided by:
\end{sloppy}
\subsection{The Hadamard partie finie regularized integral}\label{ss:partie-finie}
The problem of regularizing divergent integrals is in fact quite old.
The method we are going to present here goes back to \textsc{Hadamard} who used
his method to regularize integrals which arose when solving the wave equation
\cite{Had:PCE}.
Consider a function $f\in \CS^{m,k}(\R^p)$, e.g. $a(x,\cdot)$ above for fixed
$x$. Then $f$ has an asymptotic expansion
\begin{equation}\label{eq:20081120-6}
f(x)\sim_{|x|\to\infty}
\sum_{j=0}^\infty \sum_{l=0}^{k} f_{jl}(x/|x|)|x|^{m-j}\log^l|x|.
\end{equation}
Integrating over balls of radius $R$ gives the asymptotic expansion
\begin{equation}\label{eq:20081120-7}
\int_{|x|\le R} f(x) dx \sim_{R\to\infty}
\sum_{j=0}^\infty \sum_{l=0}^{k+1} \tilde f_{jl} R^{m+p-j} \log^l R.
\end{equation}
The \emph{regularized integral}
$\displaystyle \regint_{\R^p} f(x) dx$ is, by definition, the
constant term in this asymptotic expansion. Some authors call
the regularized integral \emph{partie finie integral} or
\emph{cut--off integral}.
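As a concrete illustration (our example, checked with \textsc{SymPy}; the
function and all names below are our choices): for
$f(x)=x^2/(1+x^2)\in\CS^{0,0}(\R)$ one has
$\int_{-R}^{R}f(x)\,dx=2R-2\arctan R\sim 2R-\pi+O(R^{-1})$ as $R\to\infty$,
so the regularized integral equals the constant term $-\pi$:

```python
import sympy as sp

x, R = sp.symbols('x R', positive=True)

# f lies in CS^{0,0}(R): f(x) -> 1 as |x| -> infinity.
f = x**2 / (1 + x**2)

# Integral over the ball |x| <= R, i.e. the interval [-R, R].
I = sp.integrate(f, (x, -R, R))      # 2*R - 2*atan(R)

# Expansion as R -> oo: 2*R - pi + O(1/R).  The partie finie
# integral is the constant term, obtained by discarding 2*R.
reg = sp.limit(I - 2*R, R, sp.oo)

print(reg)                           # -pi
```

Note that the ordinary improper integral diverges; the partie finie
prescription simply discards the divergent term $2R$.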
It has a couple of peculiar
properties, cf.~\cite{Mel:EIF}, which were further investigated in
\cite[Sec.~5]{Les:NRP} and \cite{LesPfl:TAP}.
The most notable features are a modified change of
variables rule for linear coordinate changes
and, as a consequence, the fact that Stokes' theorem does
not hold in general:
\begin{prop}\textup{\cite[Prop.~5.2]{Les:NRP}}\label{p:change-variables}
Let $A\in {\rm GL}(p,\R)$ be a regular matrix. Further\-more, let
$f \in\CS^{m,k}(\R^p)$ with expansion \eqref{eq:20081120-6}.
Then we have the change of variables formula
\begin{multline}
\regint_{\R^p} f(A\xi) d\xi\\=
|\det A|^{-1}\left( \regint_{\R^p} f(\xi)d\xi+
\sum_{l=0}^{k}\frac{(-1)^{l+1}}{l+1}\int_{S^{p-1}} f_{-p,l}(\xi)
\log^{l+1} |A^{-1}\xi| d\xi\right).
\end{multline}
\end{prop}
\commentary{In other words, $\reginttext$
is not a closed functional on $\gO^*({\rm PS}(\R^p))$.
More precisely, we extend $\reginttext$ to $\gO^* ({\rm PS}^*(\R^p))$
by putting
\begin{equation}
\regint:\go\mapsto\casetwo{0}{\go\in \gO^k,k<p,}{
\regint_{\R^p}f(\xi)d\xi}{\go=f(\xi) d\xi_1\wedge\ldots\wedge d\xi_p.}
\end{equation}
In this way we obtain a graded trace on the complex
$(\gO^* ({\rm PS}^*(\R^p)),d)$. This would be a cycle in the sense of
{\rm Connes} \cite[Sec. III.1.$\alpha$]{Con:NG} if $\reginttext$
were closed.
The next lemma shows that $d\reginttext$, which
is defined by $(d\reginttext)\go:=\reginttext d\go$,
is nontrivial. However, it is local in the sense that it depends
only on the $\log$--polyhomogeneous expansion of $\go$.}
The following proposition, which substantiates the mentioned fact that
Stokes' Theorem does not hold for $\reginttext$, was stated as a Lemma in \cite{LesPfl:TAP}.
A couple of years later
it was rediscovered by \textsc{Manchon}, \textsc{Maeda}, and
\textsc{Paycha} \cite{Manetal:SFC}, \cite{Pay:NRC}.
\begin{prop}\textup{\cite[Lemma 5.5]{LesPfl:TAP}} \label{S2-4.4}
Let $f\in \CS^{m,k}(\R^p)$
with asymptotic expansion \eqref{eq:20081120-6}.
Then
\[\regint_{\R^p} \frac{\pl f}{\pl \xi_j} d\xi=
\int_{S^{p-1}} f_{1-p,k}(\xi) \xi_j d{\rm vol}_S(\xi).\]
\end{prop}
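A minimal illustration of Proposition \plref{S2-4.4} (our example, $p=1$,
$k=0$): take $f=\arctan\in\CS^{0,0}(\R)$. Its expansion at infinity is
$f(\xi)\sim\frac{\pi}{2}\operatorname{sgn}(\xi)-\xi^{-1}+\ldots$, so
$f_{0,0}(\pm 1)=\pm\frac{\pi}{2}$, and indeed
\[
\regint_{\R} f'(\xi)\,d\xi=\int_{\R}\frac{d\xi}{1+\xi^2}=\pi
=f_{0,0}(1)\cdot 1+f_{0,0}(-1)\cdot(-1)
=\int_{S^{0}} f_{0,0}(\xi)\,\xi_1\,d{\rm vol}_S(\xi),
\]
whereas Stokes' theorem would predict the value $0$.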
We will come back to this below when we discuss the residue trace.
\subsection{The Kontsevich--Vishik canonical trace}
Using the Hadamard partie finie integral we can now follow the
scheme outlined in Subsection \ref{ss:coordinate-invariance}.
Let $A\in\CL^{a,k}(M,E)$ be a $\log$--polyhomogeneous pseudodifferential
operator on a closed manifold $M$. If $a\not\in\Z$ we put, using
the notation of \eqref{eq:20090514} and
\eqref{eq:tr-coordinate-invariance},
\begin{equation}
\TR(A):=\sum_{j=1}^r \int_{U_j}\regint_{\R^n}\varphi_j(x) \tr_{E_x}\bigl(
a_j(x,\xi)\bigr)\dbar\xi\,dx.
\end{equation}
By Proposition \plref{p:change-variables} one shows exactly as in
\eqref{eq:tr-coordinate-invariance} that $\TR(A)$ is well--defined.
In fact we have (essentially) proved the following:
\begin{theorem}[\textsc{Kontsevich--Vishik} {\cite{KonVis:GDE},
\cite{KonVis:DEP}},\newline \textsc{Lesch} {\cite[Sec. 5]{Les:NRP}}]\label{t:kont-vish}
There is a linear functional $\TR$ on
$$\bigcup_{a\in \C\setminus\{-n,-n+1,-n+2,\ldots\},k\ge 0} \CL^{a,k}(M,E)$$
such that
\begin{enumerate}\renewcommand{\labelenumi}{{\rm (\roman{enumi})}}
\item In a local chart $\TR$ is given by \eqref{eq:20081121-1}, with
$\int_{\R^n}$ to be replaced by the cut--off integral $\regint_{\R^n}$.
\item $\TR\restriction\CL^{a,k}(M,E)=\Tr\restriction\CL^{a,k}(M,E)$ if $a<-\dim M$.
\item $\TR([A,B])=0$ if $A\in\CL^{a,k}(M,E), B\in\CL^{b,l}(M,E)$, $a+b\not\in\Z$.
\end{enumerate}
\end{theorem}
We mention a stunning application of this result
\cite[Cor. 4.1]{KonVis:GDE}. Let $G$ be a domain in the complex
plane and let $A(z),B(z)$ be holomorphic families of operators
in $\CL^{\bullet,k}(M,E)$ with $\ord A(z)=\ord B(z)=z$. We do not
formalize the notion of a holomorphic family here. What we have
in mind are e.g. families of complex powers $A(z)=A^z$. Assume
that $G$ contains points $z$ with $\Re z<-\dim M$.
Then $\TR(A(z))$ is the analytic continuation of
$\Tr(A(\cdot))\restriction G\cap \bigsetdef{z\in\C}{\Re z<-\dim M}$;
a similar statement
holds for $B(z)$.
If for a point $z_0\in G\setminus \{-n,-n+1,\dots\}$
we have $A(z_0)=B(z_0)$ we can conclude
that the value of the analytic continuation of
$\Tr(A(\cdot))\restriction G\cap \bigsetdef{z\in\C}{\Re z<-\dim M}$
to $z_0$ coincides with the value of the corresponding
analytic continuation of $\Tr(B(\cdot))\restriction G\cap
\bigsetdef{z\in\C}{\Re z<-\dim M}$.
Namely, we obviously have $\TR(A(z_0))=\TR(B(z_0))$.
The author does not know of a direct proof of this fact.
Proposition \plref{p:change-variables} shows that if $A$ is of integral
order additional terms show up when making the linear change of coordinates
\eqref{eq:tr-coordinate-invariance}, indicating that $\TR$ cannot
be extended to a trace on the algebra of pseudodifferential operators.
The following no go result shows that the order constraints in Theorem
\plref{t:kont-vish} are indeed sharp:
\begin{prop}\label{p:no-trace} There is no trace $\tau$ on the algebra
$\CL^0(M)$ of classical pseudodifferential operators of order $0$
such that $\tau(A)=\Tr(A)$ if $A\in\CL^{-\infty}(M)$.
\end{prop}
\begin{proof} We reproduce here the very easy proof: from Index Theory
we use the fact that on $M$ there exists an elliptic system
$T\in \CL^0(M,\C^r)$ of non--vanishing Fredholm index; in general
we cannot find a scalar elliptic operator with non--trivial
index. Let $S\in\CL^0(M,\C^r)$ be a pseudodifferential parametrix (cf.
Theorem \ref{t:elliptic-regularity}) such
that $I-ST, I-TS\in \CL^{-\infty}(M,\C^r)$. $\tau$ and
$\Tr$ extend to traces on
$\CL^0(M,\C^r)=\CL^0(M)\otimes \operatorname{M}(r,\C)$
via $\tau(A\otimes X)=\tau(A)\Tr(X)$ for $A\in\CL^a(M)$, $X\in\operatorname{M}(r,\C)$,
where $\Tr(X)$ denotes the usual trace on matrices.
Since smoothing operators are of trace class one has
\begin{equation}\label{eq:index-trace-formula}
\ind T =\Tr(I-ST)-\Tr(I-TS)
\end{equation}
and we arrive at the contradiction
\begin{equation}
\begin{split}
0&\not=\ind T=\Tr(I-ST)-\Tr(I-TS)\\&=\tau(I-ST)-\tau(I-TS)=\tau([T,S])=0.\qedhere
\label{eq:trace-contradiction}
\end{split}
\end{equation}
\end{proof}
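The role of infinite dimensions here can be illustrated numerically (our toy
example, using \textsc{NumPy}): for truncated shift matrices the two defect
traces in \eqref{eq:index-trace-formula} necessarily agree, reflecting the
fact that in finite dimensions every trace kills commutators and hence every
operator has index $0$:

```python
import numpy as np

N = 6
# Truncated unilateral shift: on l^2 the adjoint of the shift has
# Fredholm index +1, but any finite truncation has index 0.
Sh = np.eye(N, k=-1)          # e_k -> e_{k+1}, truncated to C^N
T, S = Sh.T, Sh               # operator and candidate "parametrix"

defect = np.trace(np.eye(N) - S @ T) - np.trace(np.eye(N) - T @ S)

# Tr(I-ST) - Tr(I-TS) = Tr(TS) - Tr(ST) = Tr([T,S]) = 0:
assert abs(defect) < 1e-12
```

It is precisely this vanishing that produces the contradiction
\eqref{eq:trace-contradiction} once a genuinely nonzero index is available.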
\section{Pseudodifferential operators with parameter: Asymptotic expansions}
\label{s:POPAE}
We take up Section \plref{s:POP} and continue the discussion of
pseudodifferential operators with parameter.
\subsection{The Resolvent Expansion}
The following result is the main technical result needed for the residue
trace. It goes back to \textsc{Minakshisundaram} and \textsc{Pleijel}
\cite{MinPle:SPE} who follow carefully \textsc{Hadamard}'s method
of the construction of a fundamental solution for the wave equation
\cite{Had:PCE}. It is at the heart of the Local Index Theorem and
therefore has received much attention.
In the form stated below it is essentially due to \textsc{Seeley} \cite{See:CPE},
see also \cite{GruSee:WPP}.
The (straightforward) generalization to $\log$--polyhomogeneous
symbols was done by the author \cite{Les:NRP}. The published version of the
latter contains some annoying typos; the arXiv version is correct.
\begin{theorem}\label{t:parametric-expansion}
\textup{1.} Let $U\subset\R^n$ be open, $\Gamma\subset\R^p$ a cone, and let
$a\in\CS^{m,k}(U;\Gamma),$ $m+n<0$,
$A=\Op(a)$. Let $k_A(x,x;\mu):=\int_{\R^n}a(x,\xi,\mu)\dbar\xi$ be
the Schwartz kernel (cf. Eq. \eqref{eq:Schwartz-kernel})
of $A$ restricted to the diagonal. Then
$k_A\in\CS^{m+n,k}(U;\Gamma)$. In particular there is an asymptotic expansion
\begin{equation}\label{eq20081120-1}
k_A(x,x;\mu)\sim_{|\mu|\to\infty}\sum_{j=0}^\infty\sum_{l=0}^k e_{m-j,l}(x,\mu/|\mu|) |\mu|^{m+n-j}\log^l|\mu|.
\end{equation}
\textup{2.} Let $M$ be a compact manifold,
$\dim M=:n$, and $A \in \CL^{m,k} (M,E;\Gamma)$.
If $m + n< 0$ then $A(\mu)$ is trace class for all $\mu \in \Gamma$ and
$ \Tr \, A (\cdot) \in \CS^{m + n,k} (\Gamma)$.
In particular,
\[\Tr \, A(\mu)\sim_{|\mu|\to\infty}
\sum\limits_{j=0}^\infty\sum\limits_{l=0}^k e_{m-j,l}(\mu/|\mu|) |\mu|^{m+n-j}\log^l|\mu|.
\]
\textup{3.} Let $P\in\CL^m(M,E)$ be an elliptic classical pseudodifferential operator
and assume for simplicity that with respect to some Riemannian structure on $M$ and
some Hermitian structure on $E$
the operator $P$ is self--adjoint and non--negative. Furthermore, let $B\in\CL^{b,k}(M,E)$
be a pseudodifferential operator. Let $\Lambda=\bigsetdef{\gl\in\C}{|\arg\gl|\ge \eps}$
be a sector in $\C\setminus\R_+$. Then for $N>(b+n)/m, n:=\dim M,$ the
operator $B(P-\gl)^ {-N}$ is of trace class and there is an asymptotic expansion
\begin{equation}\label{eq20081120-3}\begin{split}
\Tr (B(P-\lambda)^{-N})\sim_{\lambda\to \infty}&\sum_{j=0}^\infty\sum_{l=0}^{k+1} c_{jl}
\lambda^{\frac{n+b-j}{m}-N}\log^l \gl+\\
&+ \sum_{j=0}^\infty d_j\; \lambda^{-j-N}
\end{split},\quad \gl\in\Lambda.
\end{equation}
Furthermore, $c_{j,k+1}=0$ if $(j-b-n)/m\not\in\Z_+$.
\end{theorem}
\begin{proof}
We present a proof of 1. and 2. and sketch the proof of 3. in a special case.
Since $a \in \CS^{m,k} (U;\Gamma)$ we have Eq. \eqref{ML-G2.2}.
Thus we write
\begin{equation}
a = \sum_{j=0}^{N} \, a_{m-j} \, + R_N,
\end{equation}
with $R_N \in \sym^{m-N} (U;\Gamma)$.
In fact, $R_N\in \sym^{m-N-1+\eps}(U;\Gamma)$ for every $\eps>0$,
but we don't need this below.
Now pick $L \subset \Gamma, K\subset U,$ compact and a multi--index $\alpha$. Then
for $x\in K$ the kernel $k_{A,N}$ of $R_N$ satisfies
\begin{equation}\label{eq20081120-2}
\begin{split}
\Bigl| \partial^{\alpha}_{\mu} &k_{A,N}(x,x;\mu) \Bigr|\\
& = \Bigl| \int_{\R^n} \partial_{\mu}^{\alpha} R_N(x , \xi , \mu )
\dbar \xi \Bigr| \\
& \leq C_{\alpha, K , L} \int_{\R^n} ( 1 + ( |\xi|^2
+|\mu|^2)^{1/2} )^{m -|\alpha| - N} \, \dbar\xi \\
& \leq C_{\alpha,K,L} (1 + |\mu|)^{m+n - |\alpha| - N }.
\end{split}
\end{equation}
Now consider one of the summands of \eqref{ML-G2.2}. We write
it in the form
\begin{equation}
b_{m-j,l}(x,\xi,\mu)=\tilde b_{m-j,l}(x,\xi,\mu) \log^l(|\xi|^2+|\mu|^2),
\end{equation}
with
\begin{equation}
\tilde b_{m-j,l}(x,r\xi,r\mu)=r^{m-j}\tilde b_{m-j,l}(x,\xi,\mu),
\quad \text{ for }
r\ge 1, |\xi|^2+|\mu|^2\ge 1.
\end{equation}
Then the contribution $k_{m-j,l}$
of $b_{m-j,l}$ to the kernel of $A$ satisfies
\begin{equation}
\begin{split}
k_{m-j,l}&(x,x;r \mu)\\
& = \int_{\R^n}
\tilde b_{m-j,l}(x, \xi, r\mu ) \,\log^l(|\xi|^2+r^2|\mu|^2)\, \dbar \xi\\
& = r^{m-j} \, \int_{\R^n} \tilde b_{m-j,l}(x , r^{-1} \xi ,\mu ) \bigl(\log r^2+
\log(|r^{-1}\xi|^2+|\mu|^2)\bigr)^l\,\dbar \xi \\
& = r^{m+n-j} \int_{\R^n} \tilde b_{m-j,l}(x,\xi,\mu) \bigl(\log r^2+
\log(|\xi|^2+|\mu|^2)\bigr)^l\,\dbar \xi,
\end{split}
\end{equation}
proving the expansion \eqref{eq20081120-1}.
2. follows simply by integrating \eqref{eq20081120-1}. In view of
\eqref{eq20081120-2} the expansion \eqref{eq20081120-1} is uniform
on compact subsets of $U$ and hence may be integrated over compact
subsets. Covering the compact manifold $M$ by finitely many charts
then gives the claim.
3. We cannot give a full proof of 3. here; but we at least want to explain
where the additional $\log$ terms in \eqref{eq20081120-3} come from.
Note that even if $B\in\CL^b(M,E)$ is classical there are $\log$ terms
in \eqref{eq20081120-3}. In general the highest $\log$ power occurring
on the rhs of \eqref{eq20081120-3} is one higher than the $\log$ degree
of $B$.
For simplicity let us assume that $P$ is a differential operator. This
ensures that $(P-\gl^m)^{-N}$ (note the $\gl^m$ instead of $\gl$) is in the parametric calculus
(cf. Remarks \plref{rem:20081120} 2., \plref{ex:20081120}).
We first describe the local expansion of the symbol of $B(P-\gl^m)^{-N}$.
To obtain the claim as stated one then has to replace $\gl^m$ by $\gl$
and integrate over $M$:
choose a chart and denote the complete symbol of $B$ by $b(x,\xi)$
and the complete parametric symbol of $(P-\gl^m)^{-N}$ by
$q(x,\xi,\gl)$. Then the symbol of the product is given by
\begin{equation}\label{eq:product-symbol}
(b*q)(x,\xi,\gl)\sim\sum_{\ga\in\Z_+^n} \frac{i^{-|\ga|}}{\ga!}
\bigl(\partial_\xi^\ga b(x,\xi)\bigr)\bigl(\partial_x^\ga
q(x,\xi,\gl)\bigr).
\end{equation}
Expanding the rhs into its homogeneous components gives
\begin{equation}\label{eq:product-symbol-a}
\begin{split}
(b&*q)(x,\xi,\gl)\\
&\sim\sum_{j=0}^\infty\sum_{|\ga|+l+l'=j} \frac{i^{-|\ga|}}{\ga!}
\underbrace{\underbrace{\bigl(\partial_\xi^\ga b_{b-l}(x,\xi)\bigr)}_{(b-l-|\ga|)-\text{(log)homogeneous}}
\underbrace{\bigl(\partial_x^\ga
q_{-mN-l'}(x,\xi,\gl)\bigr)}_{(-mN-l')-\text{homogeneous}}
}_{(b-mN-j)-\text{(log)homogeneous}}.
\end{split}
\end{equation}
The contribution to the Schwartz kernel of $B(P-\gl^m)^{-N}$ of a summand
is given by
\begin{equation}\label{eq20081120-4}
\frac{i^{-|\ga|}}{\ga!} \int_{\R^n}
\bigl(\partial_\xi^\ga b_{b-l}(x,\xi)\bigr) \bigl(\partial_x^\ga
q_{-mN-l'}(x,\xi,\gl)\bigr)\, \dbar\xi.
\end{equation}
We will see that the asymptotic expansion of each of these integrals
a priori contributes to the term $\gl^{-N}$ in the expansion \eqref{eq20081120-3}. So
additional considerations, which we will not present here, are
necessary to show that by expanding the individual integrals
\eqref{eq20081120-4} one indeed obtains the asymptotic expansion
\eqref{eq20081120-3}.
The asymptotic expansion of \eqref{eq20081120-4} will be singled out
as Lemma \plref{l:expansion-lemma} below. The proof of it will in
particular explain why the highest possible $\log$-power in
\eqref{eq20081120-3} is one higher than the $\log$-degree of $B$
\end{proof}
The following expansion Lemma is maybe of interest in its own right.
Its proof will explain the occurrence of higher $\log$ powers in the resolvent respectively
heat expansions. The homogeneous version of the Lemma can again be found
in \cite{GruSee:WPP}. We generalize it here slightly to the
$\log$--polyhomogeneous setting (cf. \cite{Les:NRP}).
\begin{lemma}\label{l:expansion-lemma}
Let $B\in\cinf{\R^n}, Q\in\cinf{\R^n\times [1,\infty)}$
and assume that $B,Q$ have the following properties
\begin{equation}
\begin{split}
B(\xi)&= \tilde B(\xi/|\xi|) |\xi|^b \log^k|\xi|,\quad |\xi|\ge 1,\\
Q(r\xi,r\gl)&=r^q Q(\xi,\gl),\quad r\ge 1, \gl\ge 1, \\
|Q(\xi,1)| &\le C (|\xi|+1)^{-q},
\end{split}
\end{equation}
where $b,q\in\R$ and $b+q+n<0$.
Then the following asymptotic expansion holds:
\begin{equation}\label{eq:expansion-lemma}
\begin{split}
F(\gl)&= \int_{\R^n} B(\xi)Q(\xi,\gl) d\xi\\
&\sim_{\gl\to\infty} \sum_{j=0}^{k+1} c_j \gl^{q+b+n}\log^j\gl +\sum_{j=0}^\infty d_j \gl^{q-j}.
\end{split}
\end{equation}
$c_{k+1}=0$ if $b$ is not an integer $\le -n$.
The coefficients $c_j,d_j$ will be explained in the proof.
\end{lemma}
\begin{proof}
The integral on the lhs of \eqref{eq:expansion-lemma} exists since
$b+q+n<0$.
We split the domain of integration into the three regions:\\
$1\le \gl\le |\xi|, |\xi|\le 1,$ and $1\le |\xi|\le \gl$.
\paragraph*{$ 1\le\gl\le |\xi|$:} Here we are in the domain of homogeneity
and a change of variables yields
\begin{equation}
\begin{split}
&\int_{\gl\le |\xi|} B(\xi)Q(\xi,\gl) d\xi\\
&= \gl^{q} \int_{\gl\le|\xi|}
\tilde B(\xi/|\xi|) |\xi|^b \bigl(\log^k|\xi|\bigr) Q(\xi/\gl,1) d\xi\\
&=\gl^{q+b+n} \int_{1\le |\xi|}
\tilde B(\xi/|\xi|) |\xi|^b \bigl(\log\gl+\log |\xi|\bigr)^kQ(\xi,1) d\xi,\\
&=\sum_{j=0}^k \ga_j \gl^{q+b+n}\log^j\gl,
\end{split}
\end{equation}
giving a contribution to the coefficient $c_j$ for $0\le j\le k$.
\paragraph*{$ |\xi|\le 1$:} For the remaining two cases we employ
the Taylor expansion of the smooth function $\eta\mapsto Q(\eta,1)$
about $\eta=0$:
\begin{equation}\label{eq:Taylor-q}
Q(\eta,1)= \sum_{j=0}^N Q_j(\eta) +R_N(\eta),
\end{equation}
where $Q_j(\eta)\in\C[\eta_1,\ldots,\eta_n]$ are homogeneous polynomials of degree
$j$ and $R_N$ is a smooth function satisfying $R_N(\eta)=O(|\eta|^{N+1}),\;\eta\to 0$.
Respectively, for $\xi\in\R^n, \gl\ge 1$,
\begin{equation}\label{eq:Taylor-ql}
Q(\xi,\gl)=Q(\xi/\gl,1)\;\gl^q =\sum_{j=0}^N Q_j(\xi)\;\gl^{q-j} +
R_N(\xi/\gl)\;\gl^q.
\end{equation}
Plugging \eqref{eq:Taylor-ql} into the integral for $|\xi|\le 1$ we find
\begin{equation}
\begin{split}
\int_{|\xi|\le 1} &B(\xi) Q(\xi,\gl)d\xi=\\
&=\sum_{j=0}^N \int_{|\xi|\le 1} B(\xi)Q_j(\xi) d\xi\; \gl^{q-j} +O(\gl^{q-N-1}),
\quad \gl\to\infty,
\end{split}
\end{equation}
giving a contribution to the coefficient $d_j$.
\paragraph*{$1\le |\xi|\le \gl$:} We again use the Taylor expansion
\eqref{eq:Taylor-ql} with $N$ large enough such that $b+N+1>-n$
to ensure $\int_{|\xi|\le 1} |\xi|^b \log^j|\xi| \;|R_N(\xi)|d\xi<\infty$ for all $j$.
Let $B^h(\xi):=\tilde B(\xi/|\xi|) |\xi|^b\log^k|\xi|$ be the homogeneous extension
of $B(\xi)$ to all $\xi\not=0$.
Then
\begin{equation}
\int_{|\xi|\le 1} \bigl(|B(\xi)|+|B^h(\xi)|\bigr) \gl^q |R_N(\xi/\gl)|d\xi
=O(\gl^{q-N-1}),\quad \gl\to\infty,
\end{equation}
and thus
\begin{equation}
\begin{split}
&\int_{1\le |\xi|\le \gl} B(\xi) \gl^q R_N(\xi/\gl) d\xi\\
&= \int_{0\le |\xi|\le \gl} B^h(\xi) \gl^q R_N(\xi/\gl)d\xi+O(\gl^{q-N-1})\\
&= \int_{|\xi|\le 1} \tilde B(\xi/|\xi|) |\xi|^b \bigl(\log\gl+\log|\xi|\bigr)^k R_N(\xi) d\xi\; \gl^{q+b+n}+\\
&\quad +O(\gl^{q-N-1}),
\quad \gl\to\infty.
\end{split}
\end{equation}
So the contribution of the ``remainder'' $R_N$ to the expansion is not
small, rather it contributes to the coefficient $c_j$ of the
$\gl^{q+b+n}\log^j\gl$ term for $0\le j\le k$. Note that so far we have not obtained
any contribution to the coefficient $c_{k+1}$.
Such a contribution will show up only now when
we finally deal with the summands in the Taylor expansion.
Using polar coordinates we find
\begin{equation}\label{eq:expansion-lemma-proof}
\begin{split}
&\int_{1\le |\xi|\le \gl} B(\xi) Q_j(\xi)d\xi\; \gl^{q-j}\\
&= \gl^{q-j}\int_1^\gl \int_{S^{n-1}} \tilde B(\go) r^b \bigl(\log^k r\bigr)
Q_j(r\go) r^{n-1}d\vol_{S^{n-1}}(\go) dr \\
&= C_j \gl^{q-j} \int_1^\gl r^{b+n-1+j}\log^k r dr\\
&=C_j \gl^{q-j} \begin{cases}
\sum\limits_{\sigma=0}^k \ga'_\sigma\gl^{b+n+j} \log^\sigma\gl+\beta_j ,& b+n+j\not=0,\\[1em]
\frac{1}{k+1}\log^{k+1}\gl, & b+n+j=0.
\end{cases}
\end{split}
\end{equation}
As a side remark note the explicit formula
\begin{multline}\label{eq:log-int-explicit}
\int_1^\gl r^\ga \log^k r dr\\
= \begin{cases}
\sum\limits_{j=0}^k \frac{(-1)^j k!}{(k-j)!(\ga+1)^{j+1}} \gl^{\ga+1}\log^{k-j}\gl+\frac{(-1)^{k+1}k!}{(\ga+1)^{k+1}},& \ga\not=-1,\\
\frac{1}{k+1}\log^{k+1}\gl, & \ga=-1.
\end{cases}
\end{multline}
The constant term in \eqref{eq:log-int-explicit} respectively $\beta_j$ on the rhs of
\eqref{eq:expansion-lemma-proof} was omitted in \cite[Eq. 3.16]{Les:NRP}.
Fortunately the error was inconsequential for the formulation of the expansion result
because $\beta_j$ is just another contribution to the coefficient $d_j$.
\end{proof}
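The explicit formula \eqref{eq:log-int-explicit} is easily verified
symbolically; a quick \textsc{SymPy} check for the sample values $\ga=2$,
$k=2$ (our choice):

```python
import sympy as sp

r, lam = sp.symbols('r lam', positive=True)
a, k = 2, 2   # sample values; the formula holds for any a != -1, k >= 0

direct = sp.integrate(r**a * sp.log(r)**k, (r, 1, lam))

formula = sum(sp.Integer(-1)**j * sp.factorial(k)
              / (sp.factorial(k - j) * (a + 1)**(j + 1))
              * lam**(a + 1) * sp.log(lam)**(k - j)
              for j in range(k + 1)) \
          + sp.Integer(-1)**(k + 1) * sp.factorial(k) / (a + 1)**(k + 1)

assert sp.simplify(direct - formula) == 0

# The exceptional case a = -1 produces the extra log power:
exceptional = sp.integrate(sp.log(r)**k / r, (r, 1, lam))
assert sp.simplify(exceptional - sp.log(lam)**(k + 1) / (k + 1)) == 0
```

The second assertion exhibits the case $\ga=-1$ in which the $\log$ degree
jumps by one, the source of the $c_{k+1}$ coefficient in
\eqref{eq:expansion-lemma}.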
\subsection{Resolvent expansion vs. heat expansion}
\label{ss:resolvent-expansion}
\setlength{\unitlength}{1.0cm}
\begin{figure}
\begin{picture}(5.0,5.0)
\put(2.5,0){\vector(0,1){5.0}}
\put(0,2.5){\vector(1,0){5.0}}
\put(2.5,2.5){\linethickness{1mm}\line(1,0){2.5}}
\put(2.5,3.5){\vector(1,1){1.5}}
\put(2.5,1.5){\line(1,-1){1.5}}
\qbezier(2.5,1.5) (1.5,2.5) (2.5,3.5)
\end{picture}
\caption{
\label{figureone} Contour of integration for calculating $Be^{-tP}$ from
the resolvent.}
\end{figure}
From the resolvent expansion one can easily derive the heat expansion
and the meromorphic continuation of the $\zeta$--function. In fact
under a mild additional assumption the resolvent expansion can be
derived from the heat expansion or from the meromorphic continuation of
the $\zeta$--function (cf. e.g. \textsc{Lesch} \cite[Theorem 5.1.4 and 5.1.5]{Les:OFT},
\textsc{Br\"uning--Lesch} \cite[Lemma 2.1 and 2.2]{BruLes:EIC}).
Let $B,P$ be as above. Next let $\gamma$ be a contour in the complex plane
as sketched in Figure \ref{figureone}. Then $B e^{-tP}$ has the following contour integral
representation:
\begin{equation}\label{eq:heat-contour-rep}
\begin{split}
B e^{-tP}&= \frac{-1}{2\pi i} \int_\gamma e^{-t\gl} B(P-\gl)\ii d\gl\\
&= -t^{-N+1} \frac{(N-1)!}{2\pi i}\int_\gamma e^{-t\gl} B(P-\gl)^{-N}d\gl.
\end{split}
\end{equation}
Taking the trace on both sides and plugging in the asymptotic expansion
of $\Tr(B(P-\gl)^{-N})$ one easily finds
\begin{equation}\label{eq:log-heat-expansion}
\Tr(B e^{-tP})\sim_{t\to 0+}\sum_{j=0}^\infty\sum_{l=0}^{k+1} a_{jl}(B,P) t^{\frac{j-b-n}{m}}\log^l t
+\sum_{j= 0}^\infty \tilde d_j(B,P)\; t^j.
\end{equation}
$a_{j,k+1}=0$ if $(j-b-n)/m\not\in\Z_+$.
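At least formally, the mechanism behind \eqref{eq:log-heat-expansion} is
Hankel's loop integral for the reciprocal Gamma function (we only sketch this
and suppress the orientation and branch--cut bookkeeping):
\[
\frac{1}{\Gamma(z)}=\frac{1}{2\pi i}\int_{H} e^{u}\,u^{-z}\,du,
\]
where $H$ encircles the negative real axis. After the substitution
$u=-t\gl$, a term $\gl^{\frac{n+b-j}{m}-N}$ of \eqref{eq20081120-3}
contributes a multiple of
$t^{N-1-\frac{n+b-j}{m}}/\Gamma\bigl(N-\frac{n+b-j}{m}\bigr)$ to the contour
integral \eqref{eq:heat-contour-rep}; together with the prefactor of order
$t^{-N+1}$ this gives $t^{\frac{j-b-n}{m}}$, up to constants. The $\log$
powers arise by differentiating in the exponent:
$\partial_z^l\,\gl^{-z}=(-\log\gl)^l\,\gl^{-z}$, while
$\partial_z^l\bigl(t^{z-1}/\Gamma(z)\bigr)$ is a linear combination of
$t^{z-1}\log^{l'}t$, $0\le l'\le l$.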
\subsection{Heat expansion vs. $\zeta$--function}
\label{ss:heat-zeta}
Finally we briefly explain how the meromorphic continuation of the
$\zeta$--function can be obtained from the heat expansion. As before
let $B\in\CL^{b,k}(M,E)$ and let $P\in\CL^m(M,E)$ be an elliptic
operator which is self--adjoint with respect to some
Riemannian structure on $M$ and some Hermitian structure on $E$.
Furthermore, assume that $P\ge 0$ is non--negative.
Let $\Pi_{\ker P}$ be the orthogonal projection onto $\ker P$
and put for $\Re s>0$
\begin{equation}\label{eq:20090511-4}
P^{-s}:= \bigl(I-\Pi_{\ker P}\bigr)\bigl(P+\Pi_{\ker P}\bigr)^{-s}.
\end{equation}
I.e. $P^{-s}\restriction \ker P=0$ and for $\xi\in\im P$
we let $P^{-s}\xi$ be the unique $\eta\in \ker P^\perp$
with $P^s\eta=\xi$.
The $\zeta$--function of $(B,P)$ is defined
(up to a $\Gamma$--factor) as the \emph{Mellin transform} of
the heat trace $\Tr(B (I-\Pi_{\ker P})e^{-tP})$:
\begin{equation}
\begin{split}
\zeta(B,P;s)&=\Tr\bigl(B P^{-s}\bigr)\\
&= \frac{1}{\Gamma(s)}\int_0^\infty t^{s-1} \Tr\bigl(B
(I-\Pi_{\ker P})e^{-tP}\bigr) dt,\quad \Re s\gg 0.
\end{split}
\label{eq:zeta-function}
\end{equation}
$\Tr\bigl(B(I-\Pi_{\ker P}) e^{-tP}\bigr)$ decays exponentially as
$t\to\infty$.
The meromorphic continuation is thus obtained by plugging the short time
asymptotic expansion \eqref{eq:log-heat-expansion} into the
rhs of \eqref{eq:zeta-function} (cf. e.g. \cite[Sec. II.1]{Les:OFT}):
\begin{equation}
\begin{split}
\Gamma(s)\zeta(B,P;s)&=\int_0^1 t^{s-1} \Tr(B e^{-tP}) dt \\
&\qquad-\frac 1s\Tr\bigl(B\Pi_{\ker P}\bigr)+\text{ Entire function}(s),\\
&\sim\sum_{j=0}^\infty \sum_{l=0}^{k+1}
\frac{a_{jl}'(B,P)}{(s-\frac{n+b-j}{m})^{l+1}}+\sum_{j=0}^\infty
\frac{\tilde d_j'(B,P)}{s+j},
\end{split}
\label{eq:zeta-pole-structure}
\end{equation}
where the formal sum on the right is meant to display the principal
parts of the Laurent series at the poles of $\Gamma(s)\zeta(B,P;s)$.
The $\Gamma$--function has simple poles in $\Z_-=\{0,-1,-2,\dots\}$, hence
the $\tilde d_j'$ do not contribute to the poles of $\zeta(B,P;s)$. The $a_{jl}'$ depend linearly on the $a_{jl}$ and consequently
$a_{j,k+1}'=0$ if $(n+b-j)/m$ is \emph{not} a pole of the
$\Gamma$--function. Let us summarize:
\begin{theorem}\label{t:zeta-meromorphic} Let $M$ be a compact closed
manifold of dimension $n$.
Let $B\in\CL^{b,k}(M,E)$ and let $P\in\CL^m(M,E)$ be an elliptic
operator which is self--adjoint with respect to some
Riemannian structure on $M$ and some Hermitian structure on $E$.
Then the $\zeta$--function $\zeta(B,P;s)$ is meromorphic for
$s\in\C$ with poles of order at most $k+1$ in $(n+b-j)/m$.
\end{theorem}
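The pole order in Theorem \plref{t:zeta-meromorphic} can be traced back to the
elementary Mellin integral (valid for $\Re(s+a)>0$ and $l\in\Z_+$; substitute
$t=e^{-u}$):
\[
\int_0^1 t^{s+a-1}\log^l t\;dt=\frac{(-1)^l\,l!}{(s+a)^{l+1}},
\]
whose right hand side continues meromorphically with a single pole of order
$l+1$ at $s=-a$. Inserting the expansion \eqref{eq:log-heat-expansion} with
$a=\frac{j-b-n}{m}$, $l\le k+1$, into \eqref{eq:zeta-function} produces
exactly the principal parts displayed in \eqref{eq:zeta-pole-structure}.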
\section{Regularized traces}
\label{s:RT}
\subsection{The Residue Trace (Noncommutative Residue)}
We have seen in Proposition \plref{p:no-trace} that the Hilbert
space trace $\Tr$ cannot be extended to all classical pseudodifferential
operators.
However,
in his seminal papers \cite{Wod:LIS}, \cite{Wod:NRF}
\textsc{M. Wodzicki} was able to show
that, up to a constant,
the algebra $\CL^\bullet(M)$ has a unique trace which he called the
noncommutative residue; we prefer to call it residue trace.
The residue trace was independently
discovered by \textsc{V. Guillemin} \cite{Gui:NPW} as a byproduct
of his axiomatic approach to the Weyl asymptotics.
In \cite{Les:NRP} the author generalized the residue trace to
the algebra $\CL^{\bullet,\bullet}(M,E)$. Strictly speaking
there is no residue trace on the full algebra $\CL^{\bullet,\bullet}(M,E)$.
Rather one has to restrict to operators with a given bound on
the $\log$ degree.
In detail: let $A\in\CL^{a,k}(M,E)$ and let $P\in\CL^m(M,E)$ be elliptic, non--negative
and invertible, cf. Subsection \ref{ss:heat-zeta}. Put
\begin{equation}\label{eq:def-NCR}
\begin{split}
\Res_k(&A,P)\\
&:= m^{k+1} \Res_{k+1} \Tr(AP^{-s})|_{s=0}\\
&= m^{k+1}(-1)^{k+1} (k+1)!\times \text{ coefficient of }\;
\log^{k+1} t\text{ in the }\\
&\quad \text{asymptotic expansion of}\;\Tr(Ae^{-tP})\text{ as } t\to 0.\\
\end{split}
\end{equation}
In \cite{Les:NRP} it was assumed in addition that the leading
symbol of $P$ is scalar. This assumption allows one to use Duhamel's
principle and to systematically
exploit the fact that the order of a commutator $[A,P]$ is at most
$\ord A+\ord P-1$. Using the resolvent approach
it was shown in \textsc{Grubb} \cite{Gru:RAT} that for defining $\Res_k$
and to derive its properties one does not need to assume that
$P$ has scalar leading symbol.
The main properties of $\Res_k$ can now be summarized as follows:
\begin{theorem}[Wodzicki--Guillemin; $\log$--polyhomogeneous case \cite{Les:NRP}]
\indent\par Let $A\in\CL^{a,k}(M,E)$ and let $P\in\CL^m(M,E)$
be elliptic, non--negative and invertible.
\textup{1.} $\Res_k(A,P)=:\Res_k(A)$ is independent of $P$, i.e.
\[\Res_k:\CL^{\bullet,k}(M,E)\longrightarrow \C\] is a
linear functional.
\textup{2.} If $A\in\CL^{a,k}(M,E), B\in\CL^{b,l}(M,E)$
then $\Res_k([A,B])=0$. In particular, $\Res:=\Res_0$
is a trace on $\CL^\bullet(M,E)$.
\textup{3.} For $A\in\CL^{a,k}(M,E)$ the $k$-th residue
$\Res_k(A)$ vanishes if \[a\not\in -\dim M+\Z_+.\]
\textup{4.} In a local chart one puts
\begin{equation}
\go_k(A)(x)
=\frac{(k+1)!}{(2\pi)^n} \Big(\int_{|\xi|=1} \tr_{E_x}(a_{-n,k}(x,\xi)) |d\xi|
\Big) |dx|.
\label{eq:residue-density}
\end{equation}
Then $\go_k(A)\in\Gamma^\infty(M,|\Omega|)$ is a density (in particular
independent of the choice of coordinates), which depends
functorially on $A$. Moreover
\begin{equation}
\Res_k(A)=\int_M \go_k(A).
\end{equation}
\textup{5.} If $M$ is connected and $n=\dim M>1$ then
$\Res_k$ induces an isomorphism
$\CL^{a,k}(M)/[\CL^{a,k}(M),\CL^{1,0}(M)]\longrightarrow\C$.
In particular, $\Res$ is up to scalar multiples the only
trace on $\CL^\bullet(M)$.
\end{theorem}
\commentary{\begin{remark}\TODO{no so clear}
5. is usually stated only for scalar operators. However, the extension
to operators in a vector bundle is straightforward.
\end{remark}
}
\begin{example}\label{ex:20090525}
1. Let $A$ be a classical pseudodifferential operator of order $-n=-\dim M$
which is assumed to be elliptic, non--negative and invertible. To calculate
the residue trace of $A$ we may use $P:=A\ii$. Thus
\begin{equation}\label{eq:ML20090525-1}
\Res(A)=n \Res \Tr(A^{1+s})|_{s=0}=n \Res \zeta(A\ii;s)|_{s=1}>0,
\end{equation}
where $\zeta(A\ii;s)=\zeta(I,A\ii;s)$ is the $\zeta$--function of
the elliptic operator $A\ii$. The positivity follows from
Eq. \eqref{eq:residue-density}.
2. Let $\Delta$ be the Laplacian on a closed
Riemannian manifold $(M,g)$. Then the heat expansion
\eqref{eq:log-heat-expansion} (with $B=I$ and $P=\Delta$) simplifies:
since $\Delta$ is a differential operator there are no $\log$ terms
and by a parity argument every other heat coefficient vanishes
\cite{Gil:ITH}. Thus we have an asymptotic expansion
\begin{equation}\label{eq:heat-expansion-delta}
\Tr(e^{-t\Delta})\sim_{t\to 0} \sum_{j=0}^\infty a_j(\Delta)
t^{(j-n)/2},\quad a_{2j+1}(\Delta)=0.
\end{equation}
The $a_j(\Delta)$ are enumerated such that \eqref{eq:heat-expansion-delta}
is consistent with \eqref{eq:log-heat-expansion}.
The first few $a_j(\Delta)$ have been calculated,
although the computational complexity increases drastically with $j$
(cf. e.g. \cite{Gil:ITH}). One has
\begin{equation}\label{eq:20081118-3}
\begin{split}
a_0(\Delta)&=c_n\vol(M)\\
a_2(\Delta)&=c_n' \int_M \operatorname{scal}(M,g)d\vol.
\end{split}
\end{equation}
The latter is known as the \emph{Einstein-Hilbert action} in the physics
literature. Therefore the following relation between the heat coefficients
(and in particular the EH action) and the residue trace has received
some attention from the physics community, e.g. \textsc{Kalau--Walze} \cite{KalWal:GNC},
\textsc{Kastler} \cite{Kas:DOG}.
We find for real $\ga$
\begin{align}
\Res(\Delta^\ga)&= 2 \lim_{s\to 0} s \Tr(\Delta^{\ga-s})\nonumber\\
&= 2 \lim_{s\to 0} s\zeta(I,\Delta;s-\ga)\nonumber\\
&= 2\lim_{s\to 0} \frac{s}{\Gamma(s-\ga)}\int_0^1
t^{s-\ga-1}\bigl(\Tr(e^{-t\Delta})-\dim\ker\Delta\bigr)dt\label{eq:20081118-1}\\
&=2\sum_{j=0}^\infty \lim_{s\to 0}
\frac{a_j(\Delta)s}{\Gamma(s-\ga)(s-\ga+\frac{j-n}{2})}\label{eq:20081118-2}\\
&= \begin{cases} \frac{2 a_j(\Delta)}{\Gamma(\frac{n-j}{2})},&
\ga=\frac{j-n}{2}<0,\\
0,&\text{otherwise.}
\end{cases}\label{eq:20081118-4}
\end{align}
Here we have used that the $\zeta$--function of $\Delta$ has only
simple poles (cf. Theorem \ref{t:zeta-meromorphic}). Furthermore,
in \eqref{eq:20081118-1} we use that due to
the exponential decay of $(\Tr(e^{-t\Delta})-\dim\ker\Delta)$ the function
$s\mapsto \int_1^\infty t^{s-\ga-1}(\Tr(e^{-t\Delta})-\dim\ker\Delta)dt$ is entire and
hence does not contribute to the residue at $s=0$. Note also
that the sum in \eqref{eq:20081118-2} is finite.
In view of \eqref{eq:20081118-3} we have the following special cases of
\eqref{eq:20081118-4}:
\begin{align}
\Res(\Delta^{-n/2})&=\frac{2 a_0(\Delta)}{\Gamma(\frac n2)}=c_n\vol(M)\label{eq:20081118-5},\\
\Res(\Delta^{1-n/2})&=c_n' \operatorname{EH}(M,g),\label{eq:20081118-6}
\end{align}
where $\operatorname{EH}$ denotes the above mentioned Einstein-Hilbert
action. It is formula \eqref{eq:20081118-6} which caused physicists to become
enthusiastic about this business. Needless to say, the calculation we
present here goes through for any Dirac Laplacian. One only has to replace
the scalar curvature in \eqref{eq:20081118-3} by the second local heat
coefficient, which can be calculated for any Dirac Laplacian.
We wanted to show that the relation
between the heat asymptotic and the poles of the $\zeta$--function,
which is an easy consequence of the Mellin transform, leads to a
straightforward proof of \eqref{eq:20081118-6}. There also exist
``hard'' proofs of this fact which check that the \emph{local}
Einstein-Hilbert action coincides with the residue density of the operator
$\Delta^{1-n/2}$ \cite{KalWal:GNC},\cite{Kas:DOG}.
\end{example}
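For orientation we add a simple illustration of our own, not taken from the cited sources: on the flat torus $T^n=\R^n/\Z^n$ the heat trace is
\begin{equation}
\Tr(e^{-t\Delta})=\frac{\vol(T^n)}{(4\pi t)^{n/2}}+O(t^\infty),\quad t\to 0+,
\end{equation}
so $a_0(\Delta)=(4\pi)^{-n/2}\vol(T^n)$ (whence $c_n=(4\pi)^{-n/2}$ in \eqref{eq:20081118-3}), and all other heat coefficients vanish; in particular $a_2(\Delta)=0$, consistent with $\operatorname{scal}=0$. By \eqref{eq:20081118-4} the only power of $\Delta$ with nonvanishing residue trace is $\Delta^{-n/2}$, with
\begin{equation}
\Res(\Delta^{-n/2})=\frac{2\vol(T^n)}{(4\pi)^{n/2}\,\Gamma(\frac n2)}.
\end{equation}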
\subsection{Connes' Trace Theorem}
The famous trace Theorem of Connes gives a relation between the Dixmier
trace and the Wodzicki--Guille\-min residue trace for pseudodifferential
operators of order minus $\dim M$. It was extended by \textsc{Carey} et al.
\cite{Caretal:SFD}, \cite{Caretal:DTA} to the von Neumann algebra setting.
\begin{theorem}[Connes' Trace Theorem {\cite{Con:AFN}}]\label{t:connes-trace}
Let $M$ be a closed manifold of dimension $n$ and let $E$ be a smooth vector bundle
over $M$. Furthermore let $P\in\CL^{-n}(M,E)$ be a pseudodifferential operator
of order $-n$. Then $P\in\cL^{(1,\infty)}(L^2(M,E))$ and for any $\go$ satisfying
the assumptions of the previous Proposition one has
\begin{equation}
\Tr_\go(P)=\frac 1n \Res P.
\end{equation}
\end{theorem}
We give a sketch of the proof of Connes' Theorem using a Tauberian argument. This was mentioned
without proof in \cite[Prop. 4.2.$\beta$.4]{Con:NG} and has been elaborated in various ways
by many authors. The argument we present here is an adaptation of an argument in
\cite{Caretal:SFD} to the type I case.
Let us mention the following simple version of Ikehara's Tauberian Theorem:
\begin{theorem}[{\cite[Sec. II.14]{Shu:POST}}] \label{t:Ikehara} Let $F:[1,\infty)\to \R$ be an increasing function such that
\begin{enumerate}
\item[\textup{(1)}] $\zeta_F(s)=\int_1^\infty \gl^{-s} dF(\gl)$ is analytic for $\Re s>1$,
\item[\textup{(2)}] $\lim\limits_{s\to 1+} (s-1)\zeta_F(s)=L$.
\end{enumerate}
Then
\begin{equation}
\lim\limits_{\gl\to\infty} \frac{F(\gl)}{\gl}=L.
\label{eq:20081114-4}
\end{equation}
\end{theorem}
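Before proceeding let us check the Theorem against a trivial example (ours): take $F(\gl)=\lfloor\gl\rfloor$, the number of integers in $[1,\gl]$. Then
\begin{equation}
\zeta_F(s)=\int_1^\infty \gl^{-s}dF(\gl)=\sum_{k\ge 2}k^{-s}=\zeta(s)-1,
\end{equation}
where $\zeta$ is the Riemann $\zeta$--function. Hence $\zeta_F$ is analytic for $\Re s>1$ with $\lim\limits_{s\to 1+}(s-1)\zeta_F(s)=1$, and the Theorem predicts $\lim\limits_{\gl\to\infty}F(\gl)/\gl=1$, which of course holds trivially.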
\begin{cor}\label{t:20081114-11}
Let $F:[1,\infty)\to \R$ be an increasing function such that
$\int_1^\infty e^{-t\gl} dF(\gl)=\frac{L}{t}+O(t^{\eps-1}), t\to 0+,$
for some $\eps>0$. Then Ikehara's Theorem applies to $F$ and \eqref{eq:20081114-4}
holds.
\end{cor}
\begin{proof} The $\zeta$--function of $F$ satisfies
\begin{equation}
\begin{split}
\zeta_F(s)&= \int_1^\infty \gl^{-s}dF(\gl)\\
&= \int_1^\infty \frac{1}{\Gamma(s)} \int_0^\infty t^{s-1}e^{-t\gl}dt\; dF(\gl)\\
&= \int_0^1 \frac{t^{s-1}}{\Gamma(s)} \int_1^\infty e^{-t\gl}dF(\gl)\; dt+\text{ holomorphic near } s=1\\
&\sim \frac{1}{\Gamma(s)}\frac{L}{s-1}\text{ near } s=1.\qedhere
\end{split}
\label{eq:20081114-5}
\end{equation}
\end{proof}
\begin{proof}[Proof of Connes' Trace Theorem]
Each $P\in\CL^{-n}(M,E)$ is a linear combination of at most $4$ non--negative operators:
to see this we first write $P=\frac 12(P+P^*)+\frac{1}{2i}(P-P^*)$ as a linear
combination of two self--adjoint operators. So consider a self--adjoint $P=P^*$. We
choose an elliptic operator $Q\in \CL^{-n}(M,E)$ with $Q>0$ and positive definite leading symbol.
Since we are on a compact manifold it then follows that $c\cdot Q-P\ge 0$ for $c$ large enough.
Hence $P=c\cdot Q-(c\cdot Q-P)$ is the desired decomposition of $P$ as a difference of non--negative
operators.
So it suffices to prove the claim for a non--negative operator $P$. Then $P+\eps Q$ is
elliptic and invertible for each $\eps>0$. By an approximation argument we are ultimately
left with the problem of proving the claim for an
\emph{elliptic} positive operator $P\in\CL^{-n}(M,E)$.
Let $\mu_1\ge \mu_2\ge \mu_3\ge \dots>0$ be the eigenvalues of $P$ counted with multiplicity.
We consider the counting function
\begin{equation}
F(\gl)=\#\bigsetdef{j\in\N}{\mu_j^{-1}\le\gl}.
\label{eq:20081114-6}
\end{equation}
The associated $\zeta$--function
\begin{equation}
\zeta_F(s)=\int_1^\infty \gl^{-s}dF(\gl)=\Tr(P^{s})-\sum_{\mu_j>1}\mu_j^{s}
\label{eq:20081114-7}
\end{equation}
is, up to the entire function $\sum\limits_{\mu_j>1}\mu_j^{s}$, the $\zeta$--function of the elliptic operator $P^{-1}$. Thus by
Theorem \ref{t:zeta-meromorphic} the function
$\zeta_F$ is holomorphic for $\Re s>1$ and it has a meromorphic extension to the complex plane,
and $1$ is a simple pole with
\begin{equation}
\lim_{s\to 1} (s-1)\zeta_F(s)=\frac 1n \Res(P)\not=0,
\label{eq:20081114-8}
\end{equation}
cf. Example \plref{ex:20090525} 1.
Thus Ikehara's Theorem \plref{t:Ikehara} applies to $F$ and hence
\begin{equation}\label{eq:20090511-5}
\lim_{\gl\to\infty} \frac{F(\gl)}{\gl}=\frac 1n \Res(P).
\end{equation}
\emph{Claim:}
\begin{equation}\label{eq:claim}
\lim\limits_{j\to\infty} j \mu_j=\frac 1n \Res(P)=:L.
\end{equation}
To see this let $\eps>0$ be given. Then there exists a $\gl_0$ such
that for $\gl\ge\gl_0$
\begin{equation}
1-\eps \le \frac{F(\gl)}{\gl L} \le 1+\eps.
\end{equation}
Thus
\begin{equation}
\exists_{\gl_0}\forall_{\gl\ge\gl_0} \quad (1-\eps) \gl L\le
\#\bigsetdef{j\in\N}{\mu_j^{-1}\le\gl}\le (1+\eps)\gl L.
\end{equation}
Hence for $j\ge (1+\eps)\gl L$ we have $\mu_j^{-1}\ge\gl$
and for $j\le (1-\eps)\gl L$ we have $\mu_j^{-1}\le \gl$.
For a given fixed $j_0$ large enough we therefore infer
\begin{equation}\label{eq:20090512-6}
(1-\eps)L\le j \mu_j\le (1+\eps) L,\quad j\ge j_0,
\end{equation}
proving the Claim.
Now consider
\begin{equation}
\begin{split}
\gb(u)=\int_1^{e^u} \gl^{-1}dF(\gl) =\sum_{\mu_j\ge e^{-u}} \mu_j.
\end{split}
\label{eq:20081114-9}
\end{equation}
We check that Ikehara's Tauberian Theorem applies to $\gb$:
\begin{equation}
\begin{split}
\int_1^\infty &e^{-s\gl}d\gb(\gl)=\int_1^\infty e^{-(s+1)\gl}dF(e^\gl)\\
&=\int_e^\infty x^{-s-1}dF(x)=\zeta_F(1+s)\\
&=\frac{\Res(P)}{n s} +O(1),\quad s\to 0.
\end{split}
\label{eq:20081114-10}
\end{equation}
Thus Corollary \plref{t:20081114-11} implies
\begin{equation}
\frac 1u \sum_{\mu_j\ge e^{-u}} \mu_j=\frac{\gb(u)}{u}\xrightarrow{u\to\infty} \frac 1n \Res(P).
\label{eq:20081114-12}
\end{equation}
To infer Connes' Trace Theorem from \eqref{eq:20081114-12} we choose $j_0$
such that \eqref{eq:20090512-6} holds for $\eps=1/2$ and $j\ge j_0$.
Then put for $N$ large enough $u_N:=\log\frac{N}{(1-\eps)L}$. Hence
we have $\mu_j\ge \mu_N\ge e^{-u_N}$ for $1\le j\le N$ and thus
\begin{equation}
\begin{split}
\frac{1}{\log(N+1)}\sum_{j=1}^N\mu_j &\le
\frac{1}{\log(N+1)}\sum_{\mu_j\ge \exp(-u_N)} \mu_j\\
&=\frac{u_N}{\log(N+1)} \frac{1}{u_N} \sum_{\mu_j\ge \exp(-u_N)} \mu_j\\
&\longrightarrow L,\quad\text{for }N\to \infty,
\end{split}
\end{equation}
by \eqref{eq:20081114-12} and since $u_N/\log(N+1)\to 1$.
This proves
\begin{equation}
\limsup_{N\to\infty} \frac{1}{\log(N+1)}\sum_{j=1}^N \mu_j\le L=\frac
1n\Res(P).
\end{equation}
Arguing with $u_N=\log\frac{N}{(1+\eps)L}$ instead of $u_N=\log\frac{N}{(1-\eps)L}$
one shows
\begin{equation}
\liminf_{N\to\infty} \frac{1}{\log(N+1)}\sum_{j=1}^N \mu_j\ge L=\frac
1n\Res(P),
\end{equation}
and Connes' Trace Theorem is proved.
\end{proof}
The attentive reader might have noticed that we did not use the full
strength of the Claim \eqref{eq:claim}. We only used that there
exist positive constants $c_1,c_2$ such that
$c_1\le j\mu_j\le c_2$ for $j\ge j_0$.
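As an illustration of the Theorem we record a standard consequence: for the Laplacian $\Delta$ on a closed Riemannian manifold $(M^n,g)$ the operator $(I+\Delta)^{-n/2}$ lies in $\CL^{-n}(M)$ with homogeneous symbol component of degree $-n$ equal to $|\xi|_g^{-n}$. Evaluating the residue density \eqref{eq:residue-density} therefore gives
\begin{equation}
\Tr_\go\bigl((I+\Delta)^{-n/2}\bigr)=\frac 1n \Res\bigl((I+\Delta)^{-n/2}\bigr)
=\frac{\vol(S^{n-1})}{n\,(2\pi)^n}\,\vol(M,g),
\end{equation}
independently of $\go$. Thus the Dixmier trace of $(I+\Delta)^{-n/2}$ recovers the Riemannian volume; this observation is the starting point of the integration theory in noncommutative geometry.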
\subsection{Parametric case: The symbol valued trace}
In contrast to Proposition \plref{p:no-trace} the situation is entirely
different for the algebra of parametric pseudodifferential
operators.
Fix a compact smooth manifold $M$ without boundary of dimension $n$.
Denote the coordinates in $\R^p$ by $\mu_1,\ldots,\mu_p$ and
let $\polyn$ be the algebra of polynomials
in $\mu_1,\ldots,\mu_p$. By a slight abuse of notation we denote
by $\mu_j$ also the operator of multiplication by the
$j$-th coordinate function. Then we have maps
\begin{equation}\label{eq:20081120-5}
\begin{split}
&\partial_j:\CL^m(M,E;\R^p)\rightarrow \CL^{m-1}(M,E;\R^p),\\
&\mu_j:\CL^m(M,E;\R^p)\rightarrow \CL^{m+1}(M,E;\R^p).
\end{split}
\end{equation}
Also $\partial_j$ and $\mu_j$ act naturally on the parametric
symbols over the one--point space
$\CS^{\bullet,\bullet} (\R^p):=\CS^{\bullet,\bullet}(\{\textup{pt}\};\R^p)$
and on polynomials $\polyn$. Thus they act on the quotient
$\CS^{\bullet,\bullet} (\R^p)/\polyn$.
After these preparations we can summarize one of the main results
of \cite{LesPfl:TAP}.
Let $E$ be a smooth vector bundle on $M$ and consider $A\in\CL^m(M,E;\R^p)$
with $m+n< 0$. Then for $\mu\in\R^p$
the operator $A(\mu)$ is trace class; hence we may define the function
$\TR(A):\mu\mapsto \Tr(A(\mu))$. The map $\TR$ is obviously tracial,
i.e.~$\TR(AB)=\TR(BA)$, and commutes with $\partial_j$ and $\mu_j$.
In fact, the following theorem holds.
\begin{theorem}\textup{\cite[Theorems 2.2, 4.6 and Lemma 5.1]{LesPfl:TAP}}
\label{t:Lesch-Pflaum}
There is a unique linear extension
\[\TR:\CL^\bullet (M,E;\R^p)\rightarrow \CS^{\bullet,\bullet}(\R^p)/\polyn\]
of $\TR$ to operators of all orders such that
\begin{enumerate}
\item \label{thm21.1}$\TR(AB)=\TR(BA)$, i.e. $\TR$ is tracial.
\item \label{thm21.2}$\TR(\partial_j A)=\partial_j \TR(A)$ for $j=1,\dots,p$.
\end{enumerate}
This unique extension $\TR$ satisfies furthermore:
\begin{enumerate}
\setcounter{enumi}{2}
\item \label{thm21.3}$\TR(\mu_j A)=\mu_j\TR(A)$ for $j=1,\dots,p$.
\item \label{thm21.4}$\TR(\CL^m(M,E;\R^p))\subset \CS^{m+p,1}(\R^p)/\polyn$.
\end{enumerate}
\end{theorem}
This Theorem is an example where functions with $\log$--poly\-homo\-ge\-neous
expansions occur naturally. Note that although an operator
$A\in\CL^m(M,E;\R^p)$ has a homogeneous symbol expansion without
$\log$ terms the trace function $\TR(A)$ is $\log$--polyhomogeneous.
\begin{proof}[Sketch of Proof]
\commentary{We briefly present two arguments which help explain why this
theorem is true.
\subsubsection*{1.~Taylor expansion}
Let $A \in \CL^m (M,E;\R^p)$ be given. Since differentiation by the parameter
reduces the order of the operator, the Taylor expansion around $0$
yields for $\mu\in\R^p$ (cf.~\cite[Prop.~4.9]{LesPfl:TAP})
\begin{equation} \label{LesPfl:G1-3.7}
A(\mu) - \sum_{|\alpha| \leq N-1} \,
\frac{(\partial_\mu^\alpha A)(0)}{\alpha !} \,
\mu^\alpha \in \CL^{m-N} (M,E).
\end{equation}
Hence, if $N$ is so large that $m-N+n < 0$, then the difference
\eqref{LesPfl:G1-3.7} is trace class and we put
\begin{equation}
\label{LesPfl:G1-3.8}\begin{split}
\TR(A)(\mu)
:= \tr &\Big( A(\mu) - \sum_{|\alpha| \leq N-1}
\frac{(\partial_\mu^\alpha A) (0)}{\alpha !} \, \mu^\alpha
\Big) \; \mod {\polyn}.
\end{split}
\end{equation}
Since we mod out by polynomials, the result is in fact independent
of $N$. This defines $\TR$ for operators of all orders and the properties
(1)--(3) are straightforward to verify.
However, \eqref{LesPfl:G1-3.7} does not give any asymptotic
information and hence does not justify the fact that $\TR$ takes
values in $\PS$.
}
The main observation for the proof is that differentiating by the
parameter \eqref{eq:20081120-5}
lowers the degree and hence differentiating often enough we obtain
a parametric family of trace class operators:
Given $A\in \CL^m (M,E;\R^p)$, then
$\partial^\alpha A\in \CL^{m-|\alpha|}(M,E,\R^p)$ is of
trace class if $m-|\alpha|+ \dim M <0$. Now integrate the function
$\TR(\partial^\alpha A)(\mu)$
back. Since we mod out by polynomials this procedure is independent
of $\alpha$ and the choice of anti--derivatives. This integration procedure
also explains the possible occurrence of $\log$ terms in the
asymptotic expansion and hence why $\TR$ ultimately takes values in
$\CS^{\bullet,\bullet}(\R^p)$.
For details, see \cite[Sec.~4]{LesPfl:TAP}.
\end{proof}
$\TR$ is not a trace in the usual sense since it maps into
a quotient space of the space of parametric symbols over a point.
However, composing any linear functional on $\CS^{\bullet,\bullet}(\R^p)/\polyn$
with $\TR$ yields a trace on $\CL^\bullet (M,E;\R^p)$.
A very natural choice for such a trace is the Hadamard partie finie
integral $\reginttext$ introduced in Subsection \plref{ss:partie-finie}.
Let us first note that for a polynomial $P(\mu)\in\C[\mu_1,\dots,\mu_p]$
of degree $r$ the function
\begin{equation}
\int_{|\mu|\le R} P(\mu)d\mu=\sum_{j=p}^{p+r} a_j R^j
\end{equation}
is a polynomial of degree $p+r$ without constant term. In particular
\begin{equation}
\regint_{\R^p} P(\mu)d\mu=0
\end{equation}
and hence $\regint_{\R^p}$ induces a linear functional on
the quotient space $\CS^{\bullet,\bullet}(\R^p)/\polyn$.
Thus putting for $A\in\CL^\bullet(M,E;\R^p)$
\begin{equation}\label{eq:parametric-trace}
\overline{\TR}(A):=\regint_{\R^p} \TR(A)(\mu)d\mu
\end{equation}
we obtain a trace $\overline{\TR}$ on $\CL^\bullet(M,E;\R^p)$
which extends the natural trace on operators of order
$<-\dim M -p$
\begin{equation}\label{eq:def-fTR}
\bigl(\int\Tr\bigr)(A):=\int_{\R^p}\Tr(A(\mu))d\mu.
\end{equation}
However, since $\reginttext$ is not closed
on $\CS^{\bullet,\bullet}(\R^p)$ (Prop. \plref{S2-4.4}),
$\overline{\TR}$ is not closed on $\CL^\bullet(M,E;\R^p)$.
Therefore we obtain derived traces
\begin{equation}
\partial_j\overline{\TR}(A):=\widetilde\TR_j(A)
:=\regint_{\R^p}\TR(\partial_j A)(\mu)d\mu.
\end{equation}
The relation between $\fTR$ and $\widetilde{\TR}_j$ can be explained
more elegantly in terms of differential forms on $\R^p$ with coefficients
in $\CL^\infty (M,E;\R^p)$ (see \textsc{Lesch, Moscovici} and
\textsc{Pflaum} \cite{LesMosPfl:RPC}). Let
$\Lambda^\bullet:= \Lambda^\bullet (\R^p)^*=\C[d\mu_1,\ldots,d\mu_p]$
be the exterior algebra of the vector space $(\R^p)^*$ and put
\begin{equation}\label{ML-G2.8}
\Omega_p:=\CL^\infty (M,E;\R^p)\otimes \Lambda^\bullet .
\end{equation}
Then, $\Omega_p$ consists of pseudodifferential operator-valued
differential forms, the coefficients of $d\mu_I$ being
elements of $\CL^\infty (M,E;\R^p)$.
For a $p$-form $A(\mu)d\mu_1\wedge\ldots\wedge d\mu_p$
we define the \emph{regularized trace} by
\begin{equation}\label{ML-G2.9}
\fTR(A(\mu)d\mu_1\wedge\ldots\wedge d\mu_p)
:= \regint_{\R^p} \TR(A)(\mu)d\mu_1\wedge\ldots\wedge d\mu_p.
\end{equation}
On forms of degree less than $p$ the regularized trace
is defined to be $0$. $\fTR$ is a \emph{graded trace} on the
differential algebra $(\Omega_p,\, d)$. In general, $\fTR$ is
not closed. However, its boundary,
$$
\lTR:= d\fTR := \fTR\circ d \, ,
$$
called the \emph{formal trace},
is a closed graded trace of degree $p-1$. It is
shown in \cite[Prop.~5.8]{LesPfl:TAP}, \cite[Prop.~6]{Mel:EIF}
that $\lTR$ is \emph{symbolic}, i.e.~it descends to a well-defined
closed graded trace of degree $p-1$ on
\begin{equation}\label{ML-G2.10}
\partial \Omega_p:= \CL^\infty (M,E;\R^p)/\CL^{-\infty}(M,E;\R^p)
\otimes\Lambda^\bullet.
\end{equation}
The properties of the formal trace $\lTR$ resemble those of the
residue trace.
Denoting by $r$ the quotient map $\Omega_p\to\partial \Omega_p$
we see that Stokes' formula with `boundary'
\begin{equation}\label{ML-G2.11}
\fTR(d\omega)=\lTR(r\omega)
\end{equation}
now holds by construction for any $\omega\in\Omega_p$.
Finally we mention an interesting linear form on
$\CS^{\bullet,\bullet}(\R^p)/\polyn$
in the spirit of the residue trace. Let
\begin{equation}
\Omega^r\CS^{\bullet,\bullet}(\R^p)=\CS^{\bullet,\bullet}(\R^p)\otimes\Lambda^\bullet
\end{equation}
be the $r$--forms on $\R^p$ with coefficients in
$\CS^{\bullet,\bullet}(\R^p)$. We extend the notion of homogeneous
functions to differential forms in the obvious way.
If $\go=f d\mu_{i_1}\wedge\dots\wedge d\mu_{i_r}$ is a form of degree $r$
and $f\in \CS^{a,k}(\R^p)$ then we define the \emph{total degree} of
$\go$ to be $r+a$. The exterior derivative preserves the total degree
and each $\go\in\Omega^\bullet\CS^{\bullet,\bullet}(\R^p)$
of total degree $a$
has an asymptotic expansion
\begin{equation}
\go\sim \sum_{j=0}^\infty \go_{a-j}
\end{equation}
where $\go_{a-j}$ are forms of total degree $a-j$ which are $\log$--polyhomogeneous
in the sense of \eqref{ML-G2.2}, see \eqref{eq:classical}.
More concretely, if $f\in\CS^{a,k}(\R^p)$ then for
$\go=f\,d\mu_1\wedge\dots\wedge d\mu_r$ we have
\begin{equation}
\go_{a+r-j}=f_{a-j}.
\end{equation}
Accordingly we define $\go_{a+r-j,l}:=f_{a-j,l}$.
Finally let $X=\sum_{j=1}^p \mu_j\frac{\pl}{\pl \mu_j}$ be the Liouville
vector field on $\R^p$.
After these preparations we put for $\go=fd\mu_1\wedge\dots\wedge d\mu_p\in
\Omega^p\CS^{\bullet,\bullet}(\R^p)$
\begin{equation}
\res(\go):=\frac{1}{(2\pi)^p}\int_{S^{p-1}} i_X(\go_0)=
\frac{1}{(2\pi)^p}\int_{S^{p-1}} f_{-p,0} d\vol_S.
\end{equation}
On forms of degree $<p$ we put $\res(\go)=0$.
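To make the definition concrete, consider the case $p=1$ (this normalization check is ours). Then $S^0=\{\pm 1\}$ carries the counting measure, and for $\go=f\,d\mu$ with $f\in\CS^{a,k}(\R)$ one finds
\begin{equation}
\res(\go)=\frac{1}{2\pi}\bigl(f_{-1,0}(1)+f_{-1,0}(-1)\bigr),
\end{equation}
i.e.~only the $\log$--free component of homogeneity degree $-1$ of the coefficient contributes, in complete analogy with the residue trace.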
\begin{prop}\label{p:Stokes-property}
If $f\in\C[\mu_1,\dots,\mu_p]$ is a polynomial then
\[\res(fd\mu_1\wedge\dots\wedge d\mu_p)=0.\]
If $\go\in\Omega^\bullet\CS^{a,0}(\R^p)$ then $\res(d\go)=0$.
\end{prop}
\noindent The second statement is due to \textsc{Manchon, Maeda} and \textsc{Paycha}
\cite{Manetal:SFC}.
\begin{proof}
For $f\in\C[\mu_1,\ldots,\mu_p]$ the component of
homogeneity degree $0$ of
$fd\mu_1\wedge\dots\wedge d\mu_p$ is obviously $0$.
Using Cartan's identity we have
\begin{equation}
\begin{split}
\res(d\go)&=\int_{S^{p-1}} i_X(d \go_0)=\int_{S^{p-1}} (i_Xd+d i_X)(\go_0)\\
&=\int_{S^{p-1}} \mathcal{L}_X \go_0 =0,
\end{split}
\end{equation}
since the Lie derivative of a form of homogeneity degree $0$
with respect to the Liouville vector field $X$ is $0$.
\end{proof}
Composing the $\res$ functional with $\TR$ we obtain another trace
on the algebra $\CL^\bullet(M,E;\R^p)$ which, despite
the previous Proposition, is not closed. The point
here is that the range of $\TR$ is not contained
in $\CS^\bullet(\R^p)$ but rather in $\CS^{\bullet,1}(\R^p)$.
The significance of this functional and its relation to the noncommutative
residue is still to be clarified.
\newcommand{\rrr}{\!\!\upharpoonright\!}
\section{Differential forms whose coefficients are symbol functions}
\label{s:DFC}
Proposition \plref{p:Stokes-property} says
that the $\res$ functional on $\Omega^\bullet\CS^{\bullet}(\R^n)$
descends to a linear functional on the $n$--th de Rham cohomology
of differential forms with coefficients in $\CS^{\bullet}(\R^n)$.
In \textsc{Paycha} \cite{Pay:NRC} it is shown that the space
of linear functionals on $\CS^{\bullet}(\R^n)$ having the Stokes property
is one--dimensional. From this statement in fact the uniqueness
of the residue trace can be derived. Translated into our terminology this
means that the dual of the $n$--th de Rham cohomology group of $\R^n$ with
coefficients in $\CS^{\bullet}(\R^n)$ is spanned by $\res$.
In particular the $n$-th de Rham cohomology group of $\R^n$
with coefficients in $\CS^{\bullet}(\R^n)$
is one--dimensional. In \cite{Pay:NRC} it is shown furthermore
that the uniqueness statement for linear functionals having the
Stokes property is basically equivalent to the uniqueness
statement for the residue trace.
We take up this theme and study in a rather general setting
the de Rham cohomology of differential forms whose coefficients
are symbol functions. The
results announced here are inspired by
\cite{Pay:NRC} but are more general.
We pursue here an axiomatic approach. Details will appear elsewhere.
\subsection{Differential forms with prescribed asymptotics}
\begin{dfn}\label{def-1.1} Let $\cA\subset C^\infty{[0,\infty)}$ be a
Fr{\'e}chet space with the following properties.
\begin{enumerate}
\item $\cinfz{[0,\infty)}\subset \cA\subset \cinf{[0,\infty)}$ are continuous embeddings. $\cinf{[0,\infty)}$ carries
the usual Fr{\'e}chet\ topology of uniform convergence of all derivatives on compact sets and $\cinfz{[0,\infty)}$
has the standard LF-space topology as inductive limit of the Fr{\'e}chet\ spaces $\bigsetdef{f\in\cinf{[0,\infty)}}{
\supp f\subset [0,N]}$, $N\in\N$.
We set $\cA_0:=\bigsetdef{f\in\cA}{\supp f\subset (0,\infty)}.$
\item The derivative $\pl:=\frac{d}{dx}$ maps $\cA$ into $\cA$.
\item There is a non--trivial linear functional $\reginttext:\cA\to \C$ with
the following properties:
\begin{enumerate}
\item The restriction of $\reginttext$ to $\cinfz{[0,\infty)}$ is a multiple of the integral
$\int_0^\infty$. That is, there is a $\gl\in\C$ such that for $f\in\cinfz{[0,\infty)}$
we have $\reginttext f=\gl\int_0^\infty f(x) dx$.
\item $\reginttext$ is \emph{closed} on $\cA_0$. That is, for $f\in\cA_0$ we have $\reginttext f'=0$.
\item If $f\in \cA_0$ and $\reginttext f=0$ then the function $F:=\int_0^\bullet f$ belongs to $\cA_0$.
\end{enumerate}
\end{enumerate}
\end{dfn}
\begin{remark}
1. It follows from (1) that
if $\chi\in\cinf{[0,\infty)}$ with $\chi(x)=1, x\ge x_0$ and $f\in\cA$ then
$\chi f\in \cA$ because $(1-\chi)f\in\cinfz{[0,\infty)}\subset\cA$.
2. Since $\cA$ is Fr{\'e}chet, it follows from (1) and (2) and the Closed Graph Theorem
that $\frac{d}{dx}:\cA\to\cA$ is continuous.
3. If $\gl$ in (3a) is nonzero we can renormalize $\reginttext$ such that $\gl=1$. Thus
we are left with two major cases: $\gl=1$ and $\gl=0$. In the first case $\reginttext$
is a regularization of the ordinary integral while in the second case $\reginttext$
is an analogue of the residue trace. This will be explained below in the examples.
\end{remark}
\begin{example}
1. The Schwartz space $\cS(\R)$, $\reginttext=\int$.
2. Let $\CS^a([0,\infty))$, $a\in\R$, be the classical symbols of order $a$.
This space carries a natural Fr{\'e}chet\ topology.
If $a\not\in\{-1,0,1,\dots\}$ then let $\reginttext$ be the regularized
integral in the partie finie sense described in Subsection \plref{ss:partie-finie}.
This integral is continuous with respect to the Fr{\'e}chet\ topology
on $\CS^a([0,\infty)).$
If $a\in\{-1,0,1,\dots\}$ then let $\reginttext$ be the
residue integral (cf. \eqref{eq:residue-density}),
i.e. if
\begin{equation}
f(x)\sim_{x\to \infty}\sum_{j=0}^\infty f_{a-j} x^{a-j}
\end{equation}
then
\begin{equation}
\regint f := f_{-1}.
\end{equation}
One can vary this example. With some care one can also deal with
$\log$--polyhomogeneous symbols. Moreover, there are classes of symbols
of integral order where the regularized integral has the Stokes property
\cite{Pay:NRC}. These ``odd class symbols'' also fit into the present
framework.
\end{example}
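To see the type II case at work consider, as an illustration of ours, $\cA=\CS^{-1}([0,\infty))$ and $f(x)=\chi(x)(1+x)\ii$, where $\chi\in\cinf{[0,\infty)}$ vanishes near $0$ and equals $1$ for large $x$. From
\begin{equation}
(1+x)\ii\sim_{x\to\infty}\sum_{j=0}^\infty (-1)^j x^{-1-j}
\end{equation}
we read off $f_{-1}=1$ and hence $\regint f=1$, while $\reginttext$ vanishes on $\cinfz{[0,\infty)}$; thus $\gl=0$ in Def. \plref{def-1.1} (3a) and the residue integral is indeed of type II.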
From now on $\cA$ will always denote a Fr{\'e}chet\ space as in
Def. \plref{def-1.1}.
Starting from $\cA$ we can construct associated spaces of functions
on $\R^n$ respectively on cones over a manifold.
Let $M$ be an oriented compact manifold. By $\cA_0([0,\infty)\times M)$
we denote the space of functions $f\in\cinf{[0,\infty)\times M}$
such that
\begin{itemize}
\item There is an $\eps>0$ such that $f(r,p)=0$ for $r<\eps, p\in M$.
\item For fixed $p\in M$ we have $f(\cdot,p)\in\cA$.
\end{itemize}
Note that for $f\in\cA_0([0,\infty)\times M)$ the map
$M\to \cA, p\mapsto f(\cdot,p)$ is smooth. This follows
from the Closed Graph Theorem.
\details{Sketch: Since $\cA\hookrightarrow \cinf{[0,\infty)}$
is continuously embedded and $M\ni p\mapsto f(\cdot,p)\in\cinf{[0,\infty)}$
is smooth the claim follows.}
As a consequence we have a continuous integration along the fiber
\begin{equation}\label{eq:int-along-fiber}
\regint_{([0,\infty)\times M)/M}:\cA_0([0,\infty)\times M)\longrightarrow
\cinf{M}, \quad f\mapsto \regint f(\cdot,p).
\end{equation}
We put
\begin{equation}
\cA_0(\R^n)=\bigsetdef{\pi^*f}{f\in \cA_0([0,\infty)\times S^{n-1})},
\end{equation}
where $\pi:\R^n\setminus\{0\}\longrightarrow [0,\infty)\times S^{n-1}, x\mapsto (\|x\|, x/\|x\|)$
is the polar coordinate diffeomorphism.
Furthermore we put $\cA(\R^n):= \cinfz{\R^n}+\cA_0(\R^n)$. $\cA_0(\R^n)$ carries a natural LF-topology
while $\cA(\R^n)$ carries a natural Fr{\'e}chet\ topology.
\begin{remark} Composing the integral \eqref{eq:int-along-fiber} with an
integral over $M$ yields a natural integral on $\cA_0([0,\infty)\times
M)$. In the case of $M=S^{n-1}$ and the standard integral on $S^{n-1}$
this integral even extends to an integral
on $\cA(\R^n)$ which has the Stokes property.
If $\cA=\CS^a([0,\infty))$ the so constructed integral on $\cA(\R^n)$
is the Hadamard regularized
integral if $a\not\in\{-1,0,1,\dots\}$ and the residue integral
if $a\in\{-1,0,1,\dots\}$.
Thus our approach allows us to discuss these two, a priori rather different,
regularized integrals within one common framework.
\end{remark}
Finally we denote by $\Omega^k\cA_0([0,\infty)\times M)$ the space of differential forms
whose coefficients are locally in $\cA_0([0,\infty)\times U)$ for any chart $U\subset M$.
A more global description in terms of projective tensor products is also possible:
\begin{equation}
\cA_0([0,\infty)\times M)= \cA_0\otimes_\pi \cinf{M},
\end{equation}
respectively
\begin{equation}
\Omega^\bullet \cA_0([0,\infty)\times M)= (\cA_0\oplus \cA_0 dr)\otimes_\pi \Omega^\bullet(M).
\end{equation}
By Def. \plref{def-1.1}, (2) the exterior derivative maps $\Omega^k\cA_{(0)}(X)$ to $\Omega^{k+1} \cA_{(0)}(X)$
for $X=[0,\infty)\times M$, respectively $X=\R^n$. The corresponding cohomology groups are denoted by
$H^k \Omega^\bullet\cA_{(0)}(X)$. Our goal is to calculate these cohomology groups.
\begin{dfn} We call $\cA$ of \emph{type I} if $\gl$ in Def.
\plref{def-1.1} (3a) is $1$ and of \emph{type II} if $\gl$ is $0$.
\end{dfn}
\begin{lemma}\label{l:20081121-1} $\cA$ is of type II if and only if the constant function $1$
is in $\cA$. Moreover we have for $k=0,1$
\begin{equation}
H^k\cA([0,\infty))\simeq
\begin{cases}0, & \text{if } \cA \text{ is of type I,}\\
\C, & \text{if } \cA \text{ is of type II.}
\end{cases}
\end{equation}
$H^k\cA([0,\infty))$ (obviously) vanishes for $k\ge 2$.
Furthermore $\reginttext$ induces an isomorphism
$H^1\cA_0([0,\infty))\simeq \C$.
\end{lemma}
\subsection{Integration along the fiber and statement of the main result}
\label{ss:int-fiber}
\subsubsection{Integration along the fiber}\label{sec-2.1}
The integration \eqref{eq:int-along-fiber} extends to an integration along the
fiber of differential forms as follows (cf. \cite{BotTu:DFA}):
A $k$--form $\go\in\Omega^k\cA_0([0,\infty)\times M)$ is, locally on $M$, a sum
of differential forms of the form
\begin{equation}\label{eq:special-forms}
\go= f_1(r,p)\pi^*\eta_1+f_2(r,p) \pi^*\eta_2\wedge dr
\end{equation}
with $f_j\in\cA_0([0,\infty)\times M), \eta_1\in \Omega^k(M), \eta_2\in\Omega^{k-1}(M)$.
For such forms we put
\begin{equation}
\pi_*\go:= \Bigl( \regint_{([0,\infty)\times M)/M} f_2\Bigr)\eta_2.
\end{equation}
\begin{lemma} $\pi_*$ extends to a well--defined homomorphism
\[\Omega^k \cA_0([0,\infty)\times M)\longrightarrow
\Omega^{k-1}(M).\]
Furthermore, $\pi_*$ commutes with exterior differentiation, i.e.
\[d_M\circ \pi_* = \pi_*\circ d_{\R_+\times M}.\]
\end{lemma}
For the proof of this Lemma the closedness of $\reginttext$ is crucial.
\subsubsection{Statement of the main result}\label{sec-2.2}
We are now able to state our main result:
\begin{theorem} \label{thm:1}
\textup{Type I:} If $\cA$ is of type I then the natural
inclusion $\Omega_c^\bullet(\R^n)\hookrightarrow \Omega^\bullet\cA(\R^n)$
of compactly supported forms induces an isomorphism in cohomology.
\textup{Type II:} If $\cA$ is of type II then
\begin{equation}
H^k\cA(\R^n)\simeq\begin{cases} \C,& k=0,1,n,\\
0,&\text{otherwise.}
\end{cases}
\end{equation}
In both cases $\reginttext$ induces an isomorphism
$H^n\cA(\R^n)\longrightarrow\C$.
\end{theorem}
\begin{remark}
1. The groups $H^k\cA(\R^n)$ can be described more explicitly.
Namely, the natural inclusion $\Omega^\bullet\cA_0(\R^n)\hookrightarrow
\Omega^\bullet\cA(\R^n)$ induces isomorphisms
\[H^k\cA_0(\R^n)\longrightarrow H^k\cA(\R^n)\]
for $k\ge 1$. Furthermore, integration along the fiber
induces isomorphisms
\[\pi_*: H^k\cA_0(\R^n)\longrightarrow H^{k-1}(S^{n-1}), \quad \text{for }
k\ge 1.\]
Thus there is a natural extension of integration along the fiber
to closed forms $\pi_*:\Omega_{\textup{cl}}^k\cA(\R^n)\to\Omega^{k-1}(S^{n-1})$.
The isomorphisms
$H^k\cA_0(\R^n)\longrightarrow \C,\quad k=1,n$
are given by integration along the fiber.
2. This Theorem generalizes the results of \cite[Sec. 1]{Pay:NRC}
on the characterization of the residue integral and the regularized
integral in terms of the Stokes property.
3. The proof of the Theorem is based on the Thom isomorphism below.
\end{remark}
\subsubsection{The Thom isomorphism}
We consider again a Fr{\'e}chet space $\cA$ as in Def. \plref{def-1.1}. Having established
integration along the fiber, the Thom isomorphism is proved along the lines of the classical
case of smooth compactly supported forms. The result is as follows:
\begin{theorem}\label{thm:Thom-isom} Let $\cA$ be a Fr{\'e}chet\ algebra as in Def. \plref{def-1.1}. Let $M$ be a compact
oriented manifold of dimension $n$. Furthermore let
\[\pi_*:\Omega^k\cA_0([0,\infty)\times M)\longrightarrow
\Omega^{k-1}(M)\]
be integration along the fiber as defined in Section \plref{sec-2.1}.
Then $\pi_*$ induces an isomorphism
\begin{equation}\label{eq:Thom-isom}
H^k\cA_0([0,\infty)\times M)\longrightarrow H^{k-1}_{\textup{dR}}(M)
\end{equation}
for all $k\ge 0$ (meaning that $H^0\cA_0([0,\infty)\times M)\simeq \{0\}$).
\end{theorem}
\details{
\begin{proof}
As in the proof of the Thom isomorphism for compactly supported smooth forms we construct a right inverse
of $\pi_*$ and an appropriate homotopy operator. \TODO{refer to Bott/Tu}
In detail: by Def. \plref{def-1.1} (3) there is a $\phi\in\cA_0$ with $\regint \phi=1$.
We then put
\begin{equation}
\begin{split}
s_*:\Omega^{k-1}(M)&\longrightarrow \Omega^k\cA_0([0,\infty)\times M)\\
\eta&\mapsto \phi(r) \pi^*\eta\wedge dr.
\end{split}
\end{equation}
Obviously, $s_*$ commutes with $d$ and
\begin{equation}
\pi_*\circ s_*= \id.
\end{equation}
Next we define a homotopy operator
\begin{equation}\label{eq:homotopy-operator}
K:\Omega^k\cA_0([0,\infty)\times M)\longrightarrow \Omega^{k-1}\cA_0([0,\infty)\times M)
\end{equation}
as follows. For a $k$--form $\go$ of the form \eqref{eq:special-forms}
we put
\begin{equation}\label{eq:homotopy-operator-a}
K\go:= (-1)^{k-1}\int_0^r\Bigl(f_2(s,p) -\bigl(\regint_0^\infty f_2(\cdot,p)\bigr)\phi(s)\Bigr)ds\; \pi^*\eta_2.
\end{equation}
Note that by construction
$\reginttext_0^\infty\Bigl(f_2(s,p) -\bigl(\regint_0^\infty f_2(\cdot,p)\bigr)\phi(s)\Bigr)ds=0$ and
thus
$r\mapsto \int_0^r\Bigl(f_2(s,p) -\bigl(\regint_0^\infty f_2(\cdot,p)\bigr)\phi(s)\Bigr)ds$
is in $\cA_0$ by Def. \plref{def-1.1} (4).
Extending $K$ by linearity indeed gives a well-defined homomorphism as claimed in Eq. \eqref{eq:homotopy-operator}.
Exploiting the fact that $\reginttext$ is closed, a straightforward calculation yields
\begin{equation}
dK+Kd=\id-s_*\pi_*.
\end{equation}
Thus $K$ is a homotopy operator showing that at the level of cohomology $s_*$ is
the inverse of $\pi_*$. The Theorem is proved.
\end{proof}
}
\section{Introduction}
Statistical inference in general state space hidden Markov models involves computation
of the posterior distribution of a set
$\chunk{X}{t}{t'} \eqdef [X_t, \dots, X_{t'}]$ of hidden state variables conditionally on a record $\chunk{Y}{0}{T}$ of observations,
which we denote as $\post{t:t'}{\chunk{Y}{0}{T}}$.
Of particular interest is the so called \emph{joint smoothing distribution} (\jsd) $\post{0:T}{\chunk{Y}{0}{T}}$.
Any marginal or fixed-interval smoothing distribution can be obtained from the \jsd by marginalization.
The \jsd can be expressed in closed-form only in very specific cases, principally, when the state space model is linear and Gaussian or when the state space of the hidden Markov chain is a finite set. In the vast majority of cases, nonlinearity or non-Gaussianity render analytic solutions intractable.
This limitation has led to increased interest in computational strategies handling more general state and measurement equations. Among these, \emph{sequential {M}onte {C}arlo} (SMC) methods play a central role. SMC methods---in which the \emph{sequential importance sampling} and \emph{sampling importance resampling} methods proposed by \citet{HandschinM:1969} and \citet{Rubin:1987}, respectively, are combined---refer to a class of algorithms approximating a sequence of probability distributions, defined on a sequence of probability spaces. This is done by recursively updating a set of random \emph{particles} with associated nonnegative importance weights. The SMC methodology has emerged as a key tool for approximating \jsd flows in general state space models; see \citet{delmoral:2004,DelMoralD:2009,DoucetJ:2011} for general introductions as well as applications and theoretical results for SMC methods.
However, a well known problem with SMC methods is that the particle approximation of any marginal smoothing distribution
$\post{t:t}{\chunk{Y}{0}{T}}$ becomes inaccurate for $t \ll T$. The reason is that the particle trajectories degenerate gradually as the interacting particle
system evolves \citep{GodsillDW:2004,FearnheadWT:2010}.
To address this problem, several methods have been proposed; see \citet{LindstenS:2013} and the references therein.
Among these methods, the recently introduced particle Markov chain Monte Carlo (PMCMC) framework,
proposed in the seminal paper by \citet{AndrieuDH:2010}, plays a prominent role.
PMCMC samplers make use of \smc (or variants thereof) to construct efficient, high-dimensional \mcmc kernels which are reversible \wrt\ the \jsd.
These methods can then be used as components of more general sampling schemes relying on Markov kernels,
for instance enabling joint state and parameter inference in general state space models.
We will not discuss such composite sampling schemes in this paper, but instead focus on one of the PMCMC kernels that can be used to simulate from the \jsd.
Coupling SMC and MCMC is very useful since the distribution of the state sequence given the stream of observations
is generally both high-dimensional and strongly dependent, rendering the design of alternative \mcmc procedures,
such as single-state
Gibbs samplers and Metropolis-Hastings samplers, problematic.
PMCMC has already found many applications in areas such as hydrology \citep{VrugtBDS:2013},
finance \citep{PittSGK:2012}, systems biology \citep{GolightlyW:2011}, and epidemiology \citep{RasmussenRK:2011}, to mention a few.
Several methodological developments of the framework have also been made; see \eg\ \citet{WhiteleyAD:2010,LindstenJS:2012,ChopinS:2013,PittSGK:2012}.
PMCMC algorithms can, broadly speaking, be grouped into two classes of methods: those based on particle independent Metropolis-Hastings (PIMH) kernels and those based on particle Gibbs (\pg) kernels. The two classes of kernels are motivated in different ways and they have quite different
properties. The former class, PIMH, exploits the fact that the \smc method defines an unbiased
estimator of the likelihood, which is used in place of the intractable likelihood in the MH acceptance probability.
This method can thus be viewed as a special case of the pseudo-marginal method introduced by \citet{beaumont:2003,AndrieuR:2009} and later analyzed by
\citet{AndrieuV:2012,lee:latuszynski:2012}.
The latter class, \pg, on the other hand relies on
conditioning the underlying \smc sampler on a reference trajectory to enforce the correct limiting distribution of the kernel; see \autoref{sec:pg}.
This algorithm can be interpreted as a Gibbs sampler for an extended model where the random variables generated by the \smc sampler are treated as auxiliary variables.
One of the main practical issues with PMCMC algorithms is the choice of the number, $N$, of particles.
Using fewer particles will result in faster computations at each iteration, but can at the same time
result in slower mixing of the resulting Markov kernel.
For a fixed computational budget, there is a trade-off between
taking the number of particles $N$ large to get a faster mixing kernel, and to run many iterations of the MCMC sampler.
\citet{AndrieuR:2009,AndrieuV:2012,lee:latuszynski:2012} investigate
the rate of convergence of the pseudo-marginal method and characterize the approximation of the marginal algorithm by the pseudo-marginal algorithm
in terms of the variability of their respective ergodic averages.
\citet{DoucetPK:2012} and \citet{PittSGK:2012} conclude, using partially heuristic arguments,
that it is close to optimal to let $N$ scale at least linearly with $T$.
The theoretical properties of the \pg kernel, however, are not as well understood.
\citet{AndrieuDH:2010} establish under weak conditions that the \pg kernel is $\phi$-irreducible and aperiodic for any $N\geq 2$
(see \citet{MeynT:2009} for definitions).
However, this does not provide a control for the rate of convergence of the iterates of the \pg kernel to stationarity.
In this work, we establish that the \pg kernel is, under mild assumptions, uniformly ergodic. This interesting property has already been established in an earlier
work by \citet{ChopinS:2013}, but we give here a more straightforward proof under weaker conditions, which in addition
provides an explicit lower bound for the convergence rate.
During the preparation of this manuscript, a preprint was made available by \citet{AndrieuLV:2013}, who, independently, have found similar results as presented
here. Indeed, they establish basically the same lower bound on the minorizing constant for the \pg kernel
(which they refer to as the iterated conditional SMC kernel), though using a different proof technique based on
a ``doubly conditional'' SMC algorithm.
There are, however, several differences between these two contributions. We focus in particular on analyzing
the minorizing constant under mixing conditions for the state space model
which hold very generally, even if the state space is not compact (see \autoref{sec:moment-assumption}).
We then study how the number of particles $N$ should be increased
with the number of observations $T$. We show that under weak assumptions,
it suffices to increase the number of particles $N$ as $T^\delta$ where $\delta\geq 1$ can
be determined explicitly.
This is in contrast with \citet{AndrieuLV:2013} who, effectively, assume a compact state space; see \autoref{rem:comparison-alv} and \autoref{sec:strong-mixing}.
On the other hand, \citet{AndrieuLV:2013} study necessary (\ie, not only sufficient) conditions for uniform ergodicity
and translate the convergence results for the \pg kernel to a composite \mcmc scheme for simulating
both states and parameters of a state space model.
Given these differences, we believe that the two contributions complement each other.
This paper is organized as follows: In \autoref{sec:notation-prob-form} we introduce our notation,
and in \autoref{sec:pg} we review the \pg sampler and
formally define the \pg Markov kernel. In \autoref{sec:main-result} we state the main results,
starting with a minorization condition for the \pg kernel followed by mixing conditions that allow for
time uniform control of the convergence rate.
In \autoref{sec:examples} we study, in detail, two commonly used
state space models (with non-compact state spaces) to illustrate how the conditions of our results can be verified
in practice.
The proofs of the main theorems are postponed to Sections~\ref{sec:proof}~and~\ref{sec:proof:prop:N-depend-on-T}.
\section{Notations and problem statement}\label{sec:notation-prob-form}
Let $(\Xset,\Xsigma)$ and $(\Yset,\Ysigma)$ be two measurable spaces
and let $\setP$ be the set of all probability measures on $(\Xset,\Xsigma)$.
Let $M$ be a kernel on $(\Xset,\Xsigma)$ and $G$ a kernel on $(\Xset,\Ysigma)$. Assume that for all $x\in \Xset$, $G(x,\cdot)$ is dominated by some common nonnegative measure $\kappa$ on $(\Yset,\Ysigma)$ and denote by $g(x,\cdot)$ its Radon-Nikodym derivative, \ie, for all $(x,y) \in \Xset \times \Yset$,
$$
g(x,y)=\frac{\rmd G(x,\cdot)}{\rmd \kappa(\cdot)}(y)\eqsp.
$$
Let $\{(X_t,Y_t)\,, t \in \nset\}$ be a hidden Markov chain associated to the pair $(M,G)$. That is, $\{(X_t,Y_t)\,, t \in \nset\}$ is a Markov chain with transition kernel defined by: for all $(x,y) \in \Xset \times \Yset$ and all $C \in \Xsigma\otimes \Ysigma$,
$$
((x,y), C) \mapsto \iint_{C} M(x,\rmd x') G(x',\rmd y')\eqsp.
$$
The sequence $\sequence{X}[t][\nset]$ is usually not observed and inference should be carried out on the basis of the observations $\sequence{Y}[t][\nset]$ only.
With $\Xinitv \in \setP$ being the initial distribution of the hidden state process, for all $t\geq 0$, denote by
$$
\chunk{y}{0}{t} \mapsto \dens[\Xinitv]{\chunk{y}{0}{t}}\eqdef \int \Xinitv(\rmd x_0) g(x_0,y_0) \prod_{s=1}^t M(x_{s-1},\rmd x_s) g(x_s,y_s) \eqsp,
$$
the density of the observations $\chunk{Y}{0}{t}$ with respect to $\kappa^{\otimes (t+1)}$.
In what follows, we set, by abuse of notation, for all $x \in\Xset$,
\begin{equation}
\label{eq:def:p-delta}
\dens[x]{\chunk{y}{0}{t}}=\dens[\delta_x]{\chunk{y}{0}{t}} \eqsp,
\end{equation}
where $\delta_x$ is the Dirac measure at $x$.
For all $y \in \Yset$, define the (unnormalized) kernel $\Kunf{y}$ on $(\Xset,\Xsigma)$ by
\begin{equation}
\label{eq:def-Kun}
\Kun{y}{x}{A}= \int M(x,\rmd x') g(x',y) \1_A(x') \eqsp,
\end{equation}
and for all $s \leq t$ and all $\chunk{y}{s}{t} \in\Yset^{t-s+1}$, define the kernel $\Kunf{\chunk{y}{s}{t}}$ on $(\Xset,\Xsigma)$ by
\begin{equation}
\label{eq:def-chunk-Kun}
\Kun{\chunk{y}{s}{t}}{x}{A}= \Kunf{y_s} \Kunf{y_{s+1}} \dots \Kun{y_t}{x}{A} \eqsp.
\end{equation}
In what follows, we set by convention $\Kun{\chunk{y}{s}{t}}{x}{A}= 1$ for all $s>t$.
With these notations, $\dens[\Xinitv]{\chunk{y}{0}{t}} = \Xinitv \Kunf{\chunk{y}{0}{t}}\bigone$ where $\bigone$ is the constant function, $\bigone(x) = 1$ for all $x \in \Xset$.
For all $\Xinitv\in\setP$ and for all $0\leq s\leq t$, denote
\begin{align*}
& \cdens[\Xinitv]{\chunk{y}{s}{t}}{\chunk{y}{0}{s-1}} \eqdef \begin{cases}
\dens[\Xinitv]{\chunk{y}{0}{t}}/ \dens[\Xinitv]{\chunk{y}{0}{s-1}}\,, &\mbox{if } \dens[\Xinitv]{\chunk{y}{0}{s-1}} \neq 0\eqsp,\\
0\,, &\mbox{otherwise}
\end{cases}
\end{align*}
with the convention $\cdens[\Xinitv]{\chunk{y}{0}{t}}{\chunk{y}{0}{-1}}=\dens[\Xinitv]{\chunk{y}{0}{t}}$.
A quantity of central interest is the \jsd, given by
\begin{align}
\label{eq:jsd-def}
\post{\Xinitv,0:t}{\chunk{y}{0}{t}}(D) \eqdef \frac{1}{\dens[\Xinitv]{\chunk{y}{0}{t}}} \int \Xinitv(\rmd x_0) g(x_0,y_0) \prod_{s=1}^t M(x_{s-1},\rmd x_s) g(x_s,y_s) \1_D(x_{0:t}) \eqsp,
\end{align}
for all $D\in\Xsigma^{\otimes (t+1)}$.
With $T$ being some final time point, the \pg sampler (reviewed in the subsequent section) defines a Markov kernel which is reversible \wrt\ $\post{\Xinitv,0:T}{\chunk{y}{0}{T}}$. Samples drawn from the \pg kernel can thus be used to draw inference
about the states (and/or parameters) of the state space model.
\section{The particle Gibbs sampler}\label{sec:pg}
Consider first an \smc sampler targeting the sequence of \jsd{s} defined in~\eqref{eq:jsd-def}.
The \smc sampler approximates $\post{\Xinitv,0:t}{\chunk{Y}{0}{t}}$ by a collection of weighted samples
$\{(\epart{0:t}{i}, \ewght{t}{i})\}_{i=1}^N$, in the sense that
\begin{align*}
\post[hat]{\Xinitv,0:t}{\chunk{Y}{0}{t}}(h) \eqdef \sum_{i=1}^N \frac{ \ewght{t}{i} }{ \sum_{\ell=1}^N \ewght{t}{\ell} } h(\epart{0:t}{i})
\end{align*}
is an estimator of $\post{\Xinitv,0:t}{\chunk{Y}{0}{t}}(h)$ for a measurable function $h: \Xset^{t+1}\to \rset$. These
weighted samples can be generated in several different ways, see \eg\ \citet{DoucetGA:2000,delmoral:2004,DoucetJ:2011,CappeMR:2005}
and the references therein. Here we review a basic method, though it should be noted that the \pg sampler
can be generalized to more advanced procedures, see \citet{AndrieuDH:2010,ChopinS:2013}.
Initially, $\post{\Xinitv,0:0}{Y_0}$ is approximated by importance sampling. That is, we simulate independently $\{\epart{0}{i}\}_{i=1}^N$
from a proposal distribution: $\epart{0}{i} \sim \Xinitis{Y_0}{\cdot}$. The samples, commonly referred to as \emph{particles}, are then
assigned importance weights,
\begin{align}
\label{eq:particle-weight-0}
\ewght{0}{i}= \ewghtfuncfInit{Y_0}(\epart{0}{i}) \eqsp,
\end{align}
where $\ewghtfuncfInit{Y_0}(x)=g(x,Y_0) \frac{ \rmd \Xinitv }{\rmd \Xinitisf{Y_0}} (x)$, provided that $\Xinitisf{Y_0}$ is such that $\Xinitv \ll \Xinitisf{Y_0}$.
We proceed inductively.
Denote by $\mcff{t}{N}$ the filtration generated by the particles and weights up to the current time instant $t$:
\begin{equation}
\label{eq:particle-filtration}
\mcff{t}{N} \eqdef \sigma\left( \{(\epart{0:s}{i}, \ewght{s}{i})\}_{i=1}^N, 0 \leq s \leq t \right).
\end{equation}
Assume that we have at hand a weighted sample $\{(\epart{0:t-1}{i}, \ewght{t-1}{i})\}_{i=1}^N$
approximating the \jsd $\post{\Xinitv,0:t-1}{\chunk{Y}{0}{t-1}}$ at time $t-1$. This weighted sample is then propagated sequentially \emph{forward in time}. This is done by sampling,
conditionally independently given the particle history $\mcff{t-1}{N}$, for each particle $i \in \{1,\ldots, N\}$ an \emph{ancestor index} $A_t^i$ with probability
\begin{equation}
\label{eq:ancestor-t}
\CPP{A_t^i = j}{\mcff{t-1}{N}} = \frac{ \ewght{t-1}{j} }{ \sum_{\ell=1}^N \ewght{t-1}{\ell}} \eqsp, \quad j \in \{1,\dots,N\} \eqsp,
\end{equation}
and then by sampling a new particle position from the proposal kernel $\Kisf{Y_t}$:
\begin{align}
\label{eq:particle-t}
\epart{t}{i} \sim \Kis{Y_t}{\epart{t-1}{A_t^i}}{ \cdot }\eqsp.
\end{align}
The particle trajectories (\ie, the ancestral paths of the particles $\epart{t}{i}$, $i \in \{1,\dots,N\}$)
are constructed sequentially by associating the current particle $\epart{t}{i}$ with the particle trajectory of its ancestor:
\begin{equation}
\label{eq:ancestral-path}
\epart{0:t}{i} \eqdef ( \epart{0:t-1}{A_t^i}, \epart{t}{i} )\eqsp.
\end{equation}
Finally, similarly to \eqref{eq:particle-weight-0}
the particles are assigned importance weights given by
\begin{align}
\label{eq:particle-weight-t}
\ewght{t}{i}= \ewghtfunc{Y_t}{\epart{t-1}{A_t^i}}{\epart{t}{i}}
\eqdef \frac{\rmd \Kun{Y_t}{\epart{t-1}{A_t^i}}{\cdot}}{\rmd \Kis{Y_t}{\epart{t-1}{A_t^i}}{\cdot}} (\epart{t}{i})\eqsp,
\end{align}
where $\Kunf{y}$ is defined in \eqref{eq:def-Kun} and, as before, it is assumed that
$ \Kun{y}{x}{\cdot} \ll \Kis{y}{x}{\cdot}$. This results in a weighted particle system
$\{(\epart{0:t}{i}, \ewght{t}{i})\}_{i=1}^N$ targeting $\post{\Xinitv,0:t}{\chunk{Y}{0}{t}}$, completing the induction. Two classical choices for the proposal kernel $\Kisf{y}$ are:
\begin{equation} \label{eq:bootstrap-fully-adapted}
\Kis{y}{x}{\rmd x'}=
\begin{cases}
M(x,\rmd x') & \mbox{bootstrap filter,}\\
\frac{M(x,\rmd x') g(x',y)}{\int M(x,\rmd x') g(x',y)} & \mbox{fully-adapted filter.}
\end{cases}
\end{equation}
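To fix ideas, the recursion \eqref{eq:ancestor-t}--\eqref{eq:particle-weight-t}, run with the bootstrap proposal and multinomial resampling, can be sketched as follows. The AR(1)-plus-noise model and all identifiers below are hypothetical placeholders for illustration only; they are not part of the algorithm specification above.

```python
import numpy as np

def bootstrap_pf(y, m_step, g_dens, x0_sampler, n_particles, rng):
    """Bootstrap particle filter with multinomial resampling at every step."""
    x = x0_sampler(n_particles, rng)                  # draw from the initial proposal
    w = g_dens(x, y[0])                               # initial importance weights
    paths = x[None, :].copy()                         # ancestral paths, shape (t+1, N)
    for t in range(1, len(y)):
        # resample: ancestor indices drawn with probability proportional to weights
        a = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = m_step(x[a], rng)                         # propagate through the kernel M
        w = g_dens(x, y[t])                           # bootstrap weights: g(x_t, y_t)
        paths = np.vstack([paths[:, a], x[None, :]])  # extend the ancestral paths
    return paths, w

# Hypothetical AR(1)-plus-noise model, for illustration only.
rng = np.random.default_rng(0)
phi, sig_x, sig_y = 0.9, 1.0, 0.5
m_step = lambda x, rng: phi * x + sig_x * rng.standard_normal(x.shape)
g_dens = lambda x, y: np.exp(-0.5 * ((y - x) / sig_y) ** 2) / (sig_y * np.sqrt(2 * np.pi))
x0_sampler = lambda n, rng: rng.standard_normal(n)

y = rng.standard_normal(25)                           # stand-in observation record
paths, w = bootstrap_pf(y, m_step, g_dens, x0_sampler, 200, rng)
```

Since the weights only enter in self-normalized form, $g$ may be evaluated up to a multiplicative constant.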
Assume now that $T$ is some final time point and that we are interested in simulating
from the \jsd $\post{\Xinitv,0:T}{\chunk{Y}{0}{T}}$ using an \mcmc procedure.
For that purpose, it is required to define a Harris positive recurrent Markov kernel on the path space $(\Xset^{T+1},\Xsigma^{\otimes (T+1)})$ having the \jsd $\post{\Xinitv,0:T}{\chunk{Y}{0}{T}}$ as its unique invariant distribution.
The \pg sampler accomplishes this by making
use of \smc. From an algorithmic point of view, the difference between \pg and a standard \smc sampler is that in the former,
one particle trajectory, denoted as $x_{0:T}^\prime = \prange{x_0^\prime}{x_T^\prime} \in \Xset^{T+1}$, is specified \emph{a priori}.
This trajectory is used as a reference for the \pg sampler, as discussed below.
The reference trajectory is taken into account by simulating only $N-1$ particles in the usual way.
The $N$th particle is then set deterministically according to the reference. At the initialization, we thus simulate
independently $\{\epart{0}{i}\}_{i=1}^{N-1}$ with $\epart{0}{i} \sim \Xinitis{Y_0}{\cdot}$ and set $\epart{0}{N} = x_0^\prime$.
We then compute importance weights for all particles, $i = \range{1}{N}$, according to~\eqref{eq:particle-weight-0}.
Analogously, at any consecutive time point $t$, we sample the first $N-1$ particles
$\{ (A_t^i, \epart{t}{i}) \}_{i=1}^{N-1}$ conditionally independently given $\mcff{t-1}{N}$ according to \eqref{eq:ancestor-t}--\eqref{eq:particle-t}.
Note that these particles will depend on the reference trajectory through the resampling step \eqref{eq:ancestor-t}.
The $N$th particle and its ancestor index are then set deterministically: $\epart{t}{N} = x_t^\prime$ and $A_t^N = N$.
Finally, importance weights are then computed for all the particles according to \eqref{eq:particle-weight-t}.
Note that, by construction, the $N$th particle trajectory will coincide with the reference trajectory for all $t$, $\epart{0:t}{N} = x_{0:t}^\prime$.
After a complete pass of the above procedure, a trajectory $\epart{0:T}{\star}$ is sampled from among the
particle trajectories at time $T$ (see \eqref{eq:ancestral-path}), with probability proportional to the importance
weight $\ewght{T}{i}$, $i \in \{1,\dots, N\}$, \ie\,
\begin{equation}
\label{eq:probability-selection}
\CPP{\epart{0:T}{\star} = \epart{0:T}{i}}{ \mcff{T}{N} } = \frac{\ewght{T}{i}}{\sum_{\ell=1}^N \ewght{T}{\ell}} \eqsp, \quad i \in \{1,\dots,N\} \eqsp.
\end{equation}
This procedure thus associates each trajectory $x_{0:T}^\prime \in \Xset^{T+1}$ to a probability distribution on $(\Xset^{T+1}, \Xsigma^{\otimes(T+1)})$,
defining a Markov kernel on $(\Xset^{T+1}, \Xsigma^{\otimes(T+1)})$. More specifically, this kernel is given by
\begin{align}
\label{eq:pg-kernel}
\kernelPG(x_{0:T}^\prime, D) \eqdef \PE{ \frac{\sum_{i=1}^N \ewght{T}{i} \1_D(\epart{0:T}{i})}{\sum_{i=1}^N \ewght{T}{i}}},
\end{align}
for $(x_{0:T}^\prime, D) \in \Xset^{T+1} \times \Xsigma^{\otimes(T+1)}$,
where $\PE{}$ refers to expectation \wrt\ the random variables generated by the \pg algorithm.
We refer to $\kernelPG$ as the \pg kernel.
As shown by \citet{AndrieuDH:2010}, the conditioning on a reference trajectory implies that the \pg kernel leaves the \jsd invariant:
\begin{align*}
\post{\Xinitv,0:T}{\chunk{Y}{0}{T}}(D) &= \int \kernelPG(x_{0:T}^\prime, D) \post{\Xinitv,0:T}{\chunk{Y}{0}{T}}(\rmd x_{0:T}^\prime)\,, & \forall D &\in \Xsigma^{\otimes(T+1)}.
\end{align*}
Quite remarkably, this invariance property holds for any $N\geq 1$.
Empirically, it has been found that the mixing of the \pg kernel can be improved significantly by
updating the ancestor indices $A_t^N$ for $t \in \{1,\dots,T\}$, either as part of the forward recursion \citep{LindstenJS:2012}
or in a separate backward recursion \citep{WhiteleyAD:2010}. We shall not specifically analyze these modified \pg algorithms
in this work, although our uniform ergodicity result applies straightforwardly to these algorithms as well.
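As a complement to the verbal description, one sweep of the basic \pg kernel \eqref{eq:pg-kernel}, without the ancestor-index updates just mentioned, might be sketched as follows; the scalar model and all identifiers are hypothetical stand-ins.

```python
import numpy as np

def pg_sweep(y, x_ref, m_step, g_dens, x0_sampler, n_particles, rng):
    """One particle Gibbs sweep: conditional SMC with the reference trajectory
    pinned as the N-th particle, then selection of a new trajectory with
    probability proportional to the final importance weights."""
    n_obs, n = len(y), n_particles
    x = np.empty((n_obs, n))
    anc = np.zeros((n_obs, n), dtype=int)
    x[0, :n - 1] = x0_sampler(n - 1, rng)             # N-1 particles from the proposal
    x[0, n - 1] = x_ref[0]                            # N-th particle set to the reference
    w = g_dens(x[0], y[0])
    for t in range(1, n_obs):
        a = rng.choice(n, size=n - 1, p=w / w.sum())  # ancestors of the free particles
        x[t, :n - 1] = m_step(x[t - 1, a], rng)
        x[t, n - 1] = x_ref[t]                        # reference particle and its
        anc[t] = np.append(a, n - 1)                  # ancestor are set deterministically
        w = g_dens(x[t], y[t])
    k = rng.choice(n, p=w / w.sum())                  # select a final-time particle ...
    out = np.empty(n_obs)
    for t in range(n_obs - 1, -1, -1):                # ... and trace its ancestral path
        out[t] = x[t, k]
        k = anc[t, k]
    return out

# Hypothetical scalar model, reference trajectory and data, for illustration only.
rng = np.random.default_rng(1)
m_step = lambda x, rng: 0.9 * x + rng.standard_normal(x.shape)
g_dens = lambda x, y: np.exp(-0.5 * (y - x) ** 2)     # unnormalized likelihood suffices
x0_sampler = lambda n, rng: rng.standard_normal(n)
y = rng.standard_normal(15)
x_ref = np.zeros(15)                                  # initial reference trajectory
for _ in range(5):                                    # a few Gibbs iterations
    x_ref = pg_sweep(y, x_ref, m_step, g_dens, x0_sampler, 50, rng)
```

Iterating the sweep as above produces a Markov chain of trajectories whose invariant distribution is, by the invariance property recalled earlier, the \jsd.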
\section{Main result}
\label{sec:main-result}
In this section we state the main results. First, in \autoref{sec:minorization}, we give a minorization condition for the \pg kernel.
Following this we discuss how to increase the number of particles $N=N_T$ as a function of the number of observations $T$ in order to
obtain a non-degenerate lower bound. We consider first
a strong mixing condition and then a much weaker moment assumption in \autoref{sec:strong-mixing}~and~\autoref{sec:moment-assumption}, respectively.
\subsection{Minorization condition}\label{sec:minorization}
Define the sequence of nonnegative random variables $\{B_{t,T}\}_{t=0}^T$ by
\begin{equation}
\label{eq:def-b-T}
B_{t,T} = \sup_{0\leq \ell \leq T-t}
\frac{\supnorm{\ewghtfuncf{Y_t}} \supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell}} \bigone}}{\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}}
\end{equation}
where, by convention, $\supnorm{\Kunf{\chunk{Y}{t+1}{t}} \bigone} = 1$.
\begin{theorem}
\label{theo:doeblin-condition-PG}
For all $x_{0:T}^\prime \in \Xset^{T+1}$ and $D \in \Xsigma^{\otimes (T+1)}$,
\begin{equation}
\label{eq:doeblin-condition-PG}
\kernelPG(x_{0:T}^\prime, D) \geq \mineps{T}{N} \eqsp \post{\Xinitv,0:T}{\chunk{Y}{0}{T}}(D) \eqsp,
\end{equation}
where
\begin{equation}
\label{eq:def-epsilon}
\mineps{T}{N} = \prod_{t=0}^T \frac{N-1}{2 B_{t,T}+N-2} \eqsp.
\end{equation}
\end{theorem}
\begin{proof}
The proof is postponed to \autoref{sec:proof}.
However, to provide some intuition for the result,
the main ideas of the proof are outlined below.
Using the representation of the \pg kernel from \eqref{eq:pg-kernel} we can write
\begin{align*}
\kernelPG(x_{0:T}^\prime, D) \geq (N-1) \PE{\frac{ \ewght{T}{1} \1_D(\epart{0:T}{1})}{\sum_{i=1}^N \ewght{T}{i}}}
\geq (N-1) \PE{\CPE{ \frac{ \ewght{T}{1} \1_D(\epart{0:T}{1})}{2\supnorm{\ewghtfuncf{Y_T}} + \sum_{i=2}^{N-1} \ewght{T}{i}} }{ \mcff{T-1}{N} }}
\eqsp,
\end{align*}
where, for the first inequality, we have simply discarded the $N$th term (corresponding to the reference particle) and used the fact that the $N-1$ weighted particles
$\{(\epart{0:T}{i}, \ewght{T}{i})\}_{i=1}^{N-1}$ are equally distributed. For the second inequality, we bound the first and the last term of the
sum in the denominator by $\supnorm{\ewghtfuncf{Y_T}}$. This has the effect that the random variables entering the numerator and the denominator of the expression
are conditionally independent given $\mcff{T-1}{N}$. By convexity of $x \mapsto 1/x$ and using Jensen's inequality we therefore obtain the bound
\begin{align*}
\kernelPG(x_{0:T}^\prime, D) \geq (N-1) \PE{\frac{ \CPE{ \ewght{T}{1} \1_D(\epart{0:T}{1}) }{ \mcff{T-1}{N}} }
{ 2\supnorm{\ewghtfuncf{Y_T}} + (N-2) \CPE{ \ewght{T}{2} }{\mcff{T-1}{N}} }}\eqsp.
\end{align*}
The inner conditional expectations can be computed explicitly. Principally, the result follows by repeating this procedure for time $T-1$, then for $T-2$, etc.
\end{proof}
\begin{corollary}\label{cor:finite-T}
Assume that $g(x,y)>0$ for all $(x,y) \in \Xset \times \Yset$ and $\supnorm{\ewghtfuncf{y}}<\infty$ for all $y \in \Yset$. Then,
for fixed $T$,
$$
\mineps{T}{N} \geq 1 + \frac{1}{N-1} \sum_{t=0}^{T} (1-2 B_{t,T}) + O_{\PP}(N^{-2}) \eqsp,
$$
and $\lim_{N\to \infty} \mineps{T}{N} = 1 $.
\end{corollary}
\begin{proof}
From the definition \eqref{eq:def-epsilon} we have
\begin{align*}
\mineps{T}{N} =
\exp\left\{ - \sum_{t=0}^T \log\left( 1 + \frac{2 B_{t,T}-1}{N-1} \right) \right\}
\geq \exp\left\{ \frac{1}{N-1} \sum_{t=0}^T \left( 1-2 B_{t,T} \right) \right\}\eqsp.
\end{align*}
For a fixed $T$,
we thus obtain the result
provided that $B_{t,T}<\infty$ for all $t \in \{0,\ldots,T\}$.
However, the positivity of $g$ implies that $\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}>0$ for all $\ell \geq 0$, and since
$\supnorm{\ewghtfuncf{y}}<\infty$ for all $y \in \Yset$, it can be easily checked that
$$
\supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell}} \bigone} \leq \prod_{s=t+1}^{t+\ell} \supnorm{\ewghtfuncf{Y_s}}<\infty\eqsp,
$$
which immediately implies that $B_{t,T}<\infty$ for all $t \in \{0,\ldots,T\}$.
\end{proof}
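For fixed $T$, the convergence $\mineps{T}{N} \to 1$ and the first-order expansion in \autoref{cor:finite-T} can be illustrated numerically from \eqref{eq:def-epsilon}; the values assigned below to $B_{t,T}$ are arbitrary.

```python
import numpy as np

def eps_TN(B, N):
    """eps_{T,N} = prod_t (N-1)/(2 B_t + N - 2), cf. the definition of eps_{T,N}."""
    B = np.asarray(B, dtype=float)
    return np.prod((N - 1.0) / (2.0 * B + N - 2.0))

B = np.array([1.5, 3.0, 2.2, 5.0])        # hypothetical values of B_{t,T} for T = 3
for N in (10, 100, 10**4):
    first_order = 1.0 + (1.0 - 2.0 * B).sum() / (N - 1.0)
    print(N, eps_TN(B, N), first_order)   # eps_TN tends to 1, matching the expansion
```

As $N$ grows with $T$ fixed, the product approaches $1$ at rate $N^{-1}$, in agreement with the expansion in the corollary.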
\begin{remark}\label{rem:comparison-alv}
The minorization condition of \autoref{theo:doeblin-condition-PG} is similar to Proposition~6 by \citet{AndrieuLV:2013}.
However, they express the minorizing constant in terms of the expectation of a likelihood estimator \wrt\ the law of
a ``doubly conditional SMC'' algorithm. They do not pursue an analysis of the effect on the minorization condition
by the forgetting of the initial condition of the state space model. To obtain an explicit rate of convergence they
assume, in our notation, that the triangular array of random variables $\{ B_{t,T} \}_{0\leq t \leq T}$ is uniformly
bounded for $T \geq 0$. This is the case, basically, only when the model satisfies strong mixing conditions, as we discuss in
the subsequent section. Indeed, \citet[Proposition~14~and~Lemma~17]{AndrieuLV:2013} is the same as our \autoref{prop:strong-mixing-2}.
\end{remark}
\subsection{Strong mixing condition}\label{sec:strong-mixing}
We first assume a strong mixing condition for the kernel $M$:
\begin{hyp}{S}
\item \label{assum:strong-mixing} There exist positive constants $(\sigma_-,\sigma_+)$, a nonnegative measure $\gamma$ and an integer $m \in \nset$ such that for all $x \in \Xset$,
$$
\sigma_- \gamma(\rmd x') \leq M^m(x,\rmd x') \leq \sigma_+ \gamma(\rmd x')\eqsp.
$$
\end{hyp}
This condition has been introduced by \citet{DelMoralG:1999} to establish the uniform-in-time convergence of the particle filter. This condition, which is stronger than the Doeblin condition, typically requires that the state space is compact. It is overly restrictive, but it is often used in the analysis of state space models because it implies a form of uniform forgetting of the initial condition of the filter, which is key to obtaining long-term stability of the particle filter.
\begin{proposition}
\label{prop:strong-mixing}
Assume that \ref{assum:strong-mixing} holds with $m=1$ and that the proposal kernel is fully-adapted as defined in \eqref{eq:bootstrap-fully-adapted}. Then, taking $N_T \sim \lambda T$ for some $\lambda>0$, we have
$$
\liminf_{T \to \infty} \mineps{T}{N_T} \geq \exp\left(\frac{1-2 (\sigma_+/\sigma_-)^2}{\lambda}\right)>0\,, \quad \as
$$
\end{proposition}
\begin{proof}
First, note that for all $\ell \geq 1$,
\begin{equation}
\label{eq:majo:unif:mix}
\supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell}} \bigone} \leq \sigma_+ \int \gamma(\rmd x_{t+1}) g(x_{t+1},Y_{t+1}) \Kunf{\chunk{Y}{t+2}{t+\ell}} \bigone (x_{t+1})
\end{equation}
and
\begin{align}
\label{eq:mino:unif:mix}
\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}\geq \sigma_-^2 \int \gamma(\rmd x_{t})g(x_{t},Y_{t})
\int \gamma(\rmd x_{t+1}) g(x_{t+1},Y_{t+1}) \Kunf{\chunk{Y}{t+2}{t+\ell}} \bigone (x_{t+1})\eqsp.
\end{align}
Now, in the fully-adapted case, we have:
$$
\Kis{y}{x}{\rmd x'}=\frac{M(x,\rmd x')g(x',y)}{\int M(x,\rmd u)g(u,y)}\eqsp,
$$
so that by the definition of $\ewghtfuncf{y}$,
$$
\supnorm{\ewghtfuncf{Y_t}}=\sup_{x \in \Xset} \left|\int M(x,\rmd x_t)g(x_t,Y_t)\right| \leq \sigma_+ \int \gamma(\rmd x_t) g(x_t,Y_t)\eqsp.
$$
Combining this bound with \eqref{eq:majo:unif:mix} and \eqref{eq:mino:unif:mix} yields:
$$
B_{t,T} = \sup_{0\leq \ell \leq T-t}
\frac{\supnorm{\ewghtfuncf{Y_t}} \supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell}} \bigone}}{\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}} \leq \left(\frac{\sigma_+}{\sigma_-}\right)^2 \eqsp.
$$
By the definition \eqref{eq:def-epsilon}, we then obtain:
$$
\mineps{T}{N} = \prod_{t=0}^T \frac{N-1}{2 B_{t,T}+N-2} \geq \left( \frac{N-1}{N-2+2 (\sigma_+/\sigma_-)^2} \right)^{T+1} \eqsp.
$$
Finally, letting $N_T \sim \lambda T$, we obtain
$$
\liminf_{T \to \infty} \mineps{T}{N_T} \geq \exp\left(\frac{1-2 (\sigma_+/\sigma_-)^2}{\lambda}\right)>0\,, \quad \as
$$
\end{proof}
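The scaling prescribed by \autoref{prop:strong-mixing} is easy to probe numerically through the explicit bound derived in the proof; the values chosen below for $(\sigma_+/\sigma_-)^2$ and $\lambda$ are arbitrary.

```python
import numpy as np

def eps_bound(T, N, c):
    """Lower bound ((N-1)/(N-2+2c))^(T+1) from the proof, valid when B_{t,T} <= c."""
    return ((N - 1.0) / (N - 2.0 + 2.0 * c)) ** (T + 1)

c, lam = 4.0, 10.0                        # c = (sigma_+/sigma_-)^2, hypothetical values
limit = np.exp((1.0 - 2.0 * c) / lam)     # claimed liminf: exp((1 - 2c)/lambda)
for T in (10**2, 10**4, 10**6):
    print(T, eps_bound(T, int(lam * T), c), limit)
```

With $N_T = \lambda T$, the bound stabilizes near the stated limit as $T$ grows, instead of collapsing to zero.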
It is worthwhile to stress that \autoref{prop:strong-mixing} holds whatever the distribution of the observation process $\sequence{Y}[t][\nset]$ may be. This is a consequence of the strong mixing condition \ref{assum:strong-mixing}, which provides a simple result, but at the expense of an assumption that is rarely met in practice. If, instead of the fully-adapted case, we consider the bootstrap filter (see \eqref{eq:bootstrap-fully-adapted}), we may also obtain a uniform-in-time bound. However, this requires an even stronger assumption: the existence of both a lower and an upper bound for the observation likelihood.
\begin{hyp}{S}
\item \label{assum:strong-mixing-likelihood} There exists a positive constant $\delta$, such that for all $y \in \Yset$,
$$
1 \leq \frac{\sup_{x \in \Xset} g(x,y)}{\inf_{x \in \Xset} g(x,y)} \leq \delta \eqsp.
$$
\end{hyp}
\begin{proposition}
\label{prop:strong-mixing-2}
Assume that \ref{assum:strong-mixing}-\ref{assum:strong-mixing-likelihood} hold and that the bootstrap proposal is used: $\Kis{y}{x}{\cdot}= M(x,\cdot)$.
Then, taking $N_T \sim \lambda T$ for some $\lambda>0$, we have
$$
\liminf_{T \to \infty} \mineps{T}{N_T} \geq \exp\left(\frac{1-2 \delta^m \sigma_+/\sigma_- }{\lambda}\right)>0\,, \quad \as \eqsp,
$$
where $m$ is defined in \ref{assum:strong-mixing}.
\end{proposition}
\begin{proof}
For the bootstrap filter, $\ewghtfunc{y}{x}{x'}= g(x',y)$. Therefore, $\supnorm{\ewghtfuncf{y}}= \sup_{x \in \Xset} g(x,y)$. On the other hand, for $\ell \geq m$,
\begin{align}
\label{eq:majo:unif:mix-1}
\supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell}} \bigone} \leq \sigma_+ \left( \prod_{s=t+1}^{t+m-1} \sup_{x \in \Xset} g(x,Y_s) \right)
\int \gamma(\rmd x_{t+m}) g(x_{t+m},Y_{t+m}) \Kunf{\chunk{Y}{t+m+1}{t+\ell}} \bigone(x_{t+m}) \eqsp,
\end{align}
and
\begin{align}
\label{eq:mino:unif:mix-1}
\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}\geq \sigma_- \left( \prod_{s=t}^{t+m-1} \inf_{x \in \Xset} g(x,Y_s) \right)
\int \gamma(\rmd x_{t+m}) g(x_{t+m},Y_{t+m}) \Kunf{\chunk{Y}{t+m+1}{t+\ell}} \bigone(x_{t+m}) \eqsp.
\end{align}
Combining \eqref{eq:majo:unif:mix-1} and \eqref{eq:mino:unif:mix-1} yields
\begin{equation}
B_{t,T} \leq \delta^m \frac{\sigma_+}{\sigma_-} \eqsp.
\end{equation}
The result follows as in the proof of \autoref{prop:strong-mixing}.
\end{proof}
\subsection{Moment assumption}\label{sec:moment-assumption}
Under the strong mixing condition \ref{assum:strong-mixing} and the even more restrictive \ref{assum:strong-mixing-likelihood},
we obtained non-degenerate uniform convergence bounds when the number of trajectories $N_T$ depends linearly on the number of observations $T$.
However, these conditions are very restrictive and hardly ever satisfied when the state space is non-compact.
We now turn to the analysis of the minorization condition under a much weaker moment assumption.
However, when the strong mixing assumption is relaxed, we are no longer able to obtain bounds that hold uniformly \wrt\ the observation sequence.
Instead, we will take a probabilistic approach. In \autoref{prop:N-depend-on-T} below, we
show that the minorizing constant will be bounded away from zero, in probability,
provided that $N_T$ is a power of $T$.
Moreover, the result presented in this section is not restricted to the fully-adapted or the bootstrap \pg kernel and may be obtained for virtually any proposal kernel.
This result holds \wrt\ the law of the observation process $\sequence{Y}[t][\nset]$. It is therefore of interest
to carry out the analysis for a parametric family of state space models $\{ (M^\param, G^\param), \param \in \Param\}$,
where $\Param$ is a compact subset of a Euclidean space.
Informally, this allows us to analyse the ergodicity of the \pg kernel, even when the algorithm is executed using a misspecified model.
We consider a sequence of parameters $\sequence{\param}[T][\nset]$ that become increasingly close to some ``true'' parameter $\param_\star$
(in a sense that will be made precise in \autoref{prop:N-depend-on-T} below), converging at a rate $1/\sqrt{T}$.
The rationale for this assumption is that we are considering the large $T$ regime and we can therefore expect $\param_T$ to be close to $\tparam$. We discuss this further in \autoref{rem:kullback} below.
Note that, for a fixed observation sequence $\chunk{Y}{0}{T}$ (with finite $T$) we can instead appeal to \autoref{cor:finite-T}.
\begin{hyp}{A}
\item \label{assum:stationarity} For all $\param \in \Param$, the kernel $M^\param$ has a unique stationary distribution denoted as $\pi^\param$.
\end{hyp}
In what follows, for $\param\in\Param$
we let $\PE[\param]{}[\mu]$ and $\PP^\param_\mu$ refer to the expectation and probability, respectively, induced on $((\Xset\times\Yset)^{\nset},(\Xsigma\otimes\Ysigma)^{\otimes \nset})$ by a
Markov chain $\{(X_t, Y_t), t\in\nset\}$ evolving according to the state space model $(M^\param, G^\param)$ starting with $X_0\sim \mu$.
For simplicity, we write $\PEstat[\param]{}=\PE[\param]{}[\pi^\param]$ and $\PPstat^\param=\PP^\param_{\pi^\param}$.
For $1 \leq s \leq t$, we write
$$ \dens[\mu,s]{\chunk{y}{s}{t}}[\param] = \int \dens[\mu]{\chunk{y}{0}{t}}[\param]\kappa^{\otimes s}(\rmd y_{0:s-1}) $$
for the marginal probability density function of $Y_{s:t}$ \wrt\ $\kappa^{\otimes(t-s+1)}$.
Define for all $(t,\ell) \in \nset \times \nset^\star$,
\begin{align}
\label{eq:def-bar-b-t-ell}
&\Bbar[t]{t+\ell}{\mu}{\param} \eqdef \frac{\supnorm{\ewghtfuncf[\param]{Y_t}} \supnorm{\Kunf[\param]{\chunk{Y}{t+1}{t+\ell}} \bigone}}{\dens[\mu,t]{\chunk{Y}{t}{t+\ell}}[\param]}\eqsp, \\
\label{eq:def-bar-c-t-ell}
&\Cbar[t]{t+\ell}{\mu}{\param} \eqdef
\frac{\supnorm{\ewghtfuncf[\param]{Y_t}} \int \lambda(\rmd x_{t+1}) g^\param(x_{t+1},Y_{t+1}) \Kunf[\param]{\chunk{Y}{t+2}{t+\ell}} \bigone(x_{t+1})}{\dens[\mu,t]{\chunk{Y}{t}{t+\ell}}[\param]} \eqsp,
\end{align}
with, by convention,
\begin{align*}
\Bbar{t}{\mu}{\param} &= \supnorm{\ewghtfuncf[\param]{Y_t}}/ \dens[\mu,t]{Y_t}[\param] \eqsp, &
\Cbar[t]{t+1}{\mu}{\param} &= \supnorm{\ewghtfuncf[\param]{Y_t}} \int \lambda(\rmd x_{t+1}) g^\param(x_{t+1},Y_{t+1}) / \dens[\mu,t]{\chunk{Y}{t}{t+1}}[\param] \eqsp .
\end{align*}
\begin{hyp}{A}
\item \label{assum:m-upper-bound} There exists a constant $\sigma_+ \in \rset^+$ and a nonnegative measure $\lambda$ such that for all $\param \in\Param$ and $(x,A)\in \Xset \times \Xsigma$,
$$
M^\param(x,A) \leq \sigma_+ \lambda(A)\eqsp.
$$
\end{hyp}
Denote by $m^\param(x,\cdot)$ the Radon-Nikodym derivative
\begin{equation}
\label{eq:definition-density}
m^\param(x,x')=\frac{\rmd M^\param(x,\cdot)}{\rmd \lambda(\cdot)}(x')\eqsp.
\end{equation}
Under \ref{assum:m-upper-bound}, the stationary distribution $\pi^\param$ is absolutely continuous \wrt\ $\lambda$.
Furthermore, for notational simplicity
it is assumed that
the initial distribution $\mu$ is absolutely continuous \wrt\ $\lambda$.
By abuse of notation, we write $\pi^\param$ and $\mu$ also for the corresponding density functions.
\begin{hyp}{A}
\item \label{assum:m-g-positive} For all $\param \in\Param$ and $(x,x',y) \in \Xset^2 \times \Yset$, $m^\param(x,x')>0$ and $ g^\param(x,y)>0$.
\end{hyp}
\begin{hyp}{A}
\item \label{assum:b-moment-bound}
There exist constants $(\ell_\star,\alpha) \in \nset^\star \times (0,1)$ such that,
\begin{align}
& \sup_{t \in \nset}\sup_{\param \in \Param} \PE[\param]{( \Bbar[t]{t+\ell}{\mu}{\param} )^\alpha}[\mu] <\infty\,, \quad \mbox{for all } \ell \in \{0,\ldots, \ell_\star-1\} \eqsp, \label{eq:bound-moment-B}\\
& \sup_{t \in \nset} \sup_{\param \in \Param} \PE[\param]{( \Cbar[t]{t+\ell_\star}{\mu}{\param} )^\alpha}[\mu] <\infty\eqsp. \label{eq:bound-moment-tilde-B}
\end{align}
\end{hyp}
\begin{theorem} \label{prop:N-depend-on-T}
Assume that \ref{assum:stationarity}, \ref{assum:m-upper-bound}, \ref{assum:m-g-positive}, and \ref{assum:b-moment-bound} hold.
Let $\tparam \in \Param$ and
let $\sequence{\param}[T][\nset]$ be a sequence of parameters such that
\begin{equation}
\label{eq:nuit-de-singapour}
\limsup_{T \to \infty} T
\PEstat[\tparam]{\ln\left(\frac{ m^{\tparam}(X_0,X_1)g^{\tparam}(X_1,Y_1)}
{ m^{\param_T}(X_0,X_1)g^{\param_T}(X_1,Y_1)}\right)}
< \infty
\eqsp.
\end{equation}
Furthermore, assume that
\begin{equation}
\label{eq:nuit-de-singapour-2}
\PEstat[\tparam]{\ln\left(\frac{ \pi^{\tparam}(X_0)}{ \mu(X_0)}\right)} < \infty
\eqsp,
\end{equation}
where $\mu$ is the initial distribution used in the \pg algorithm.
Then, for all $0 < \gamma<\alpha$ (where $\alpha$ is defined in \ref{assum:b-moment-bound})
and for all sequences of integers $\{N_T\}_{T \geq 1}$ such that $N_T \sim T^{1/\gamma}$,
the sequence $\{\mineps{T}{N_T}^{-1}(\param_T)\}_{T \geq 1}$, defined in \eqref{eq:def-epsilon}, is $\PPstat^\tparam$-tight (bounded in probability).
\end{theorem}
\begin{proof}
The proof is postponed to \autoref{sec:proof:prop:N-depend-on-T}.
\end{proof}
\begin{remark} \label{rem:kullback}
For any $\param \in \Param$,
\begin{align*}
\operatorname{D}(\tparam || \param) \eqdef \PEstat[\tparam]{\ln\left(\frac{ m^{\tparam}(X_0,X_1)g^{\tparam}(X_1,Y_1)}
{ m^{\param}(X_0,X_1)g^{\param}(X_1,Y_1)}\right)} \eqsp,
\end{align*}
is the expectation under the stationary distribution $\tpi$ of the Kullback-Leibler divergence between
the conditional distribution of $\cdens{X_1,Y_1}{X_0}[\tparam]$ and $\cdens{X_1,Y_1}{X_0}[\param]$. Hence,
$\operatorname{D}(\tparam || \param) \geq 0$ for all $\param \in \Param$ and $\operatorname{D}(\tparam || \tparam)=0$.
Assuming that $\tparam$ belongs to the interior of $\Param$ and that the function $\param \mapsto \operatorname{D}(\tparam||\param)$ is twice differentiable at $\tparam$, a Taylor expansion at $\tparam$ yields
\begin{align*}
\operatorname{D}(\tparam||\param)= \frac{1}{2} (\tparam - \param)^\top H^\tparam (\tparam - \param) + o( \Vert \tparam - \param \Vert^2) \eqsp,
\end{align*}
where $H^\tparam$ is the Hessian of $\param \mapsto \operatorname{D}(\tparam|| \param)$ evaluated at $\tparam$.
Consequently, for regular statistical models, \eqref{eq:nuit-de-singapour} holds provided that $\param_T$ converges to $\tparam$ at a rate $1/\sqrt{T}$,
\ie,
$$
\param_T = \tparam + \varrho_T/ \sqrt{T}\eqsp,
$$
where the sequence $\sequence{\varrho}[T][\nset]$ is bounded: $\sup_{T \geq 0} \| \varrho_T \| < \infty$.
\end{remark}
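To make the rate condition \eqref{eq:nuit-de-singapour} concrete, the following small numerical sketch (illustrative only, not part of the formal development) evaluates $T \operatorname{D}(\tparam || \param_T)$ for i.i.d.\ Gaussian observations with a mean shift, a deliberately degenerate special case chosen because the Kullback--Leibler divergence is available in closed form; the constants are arbitrary.

```python
import math

# Toy illustration of the remark: for i.i.d. N(theta, 1) data (a degenerate
# model chosen only because the KL divergence is explicit),
# D(theta_star || theta) = (theta_star - theta)^2 / 2, so with
# theta_T = theta_star + rho / sqrt(T) the product T * D stays bounded.
def kl_gaussian_mean_shift(theta_star, theta):
    return 0.5 * (theta_star - theta) ** 2

theta_star, rho = 1.0, 2.0
values = [T * kl_gaussian_mean_shift(theta_star, theta_star + rho / math.sqrt(T))
          for T in (10, 100, 1000, 10000)]
# every entry equals rho^2 / 2, so the limsup in the rate condition is finite
```

In this toy case the product is constant in $T$, which is exactly the boundedness required by \eqref{eq:nuit-de-singapour}.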
\begin{remark}\label{rem:deterministic}
It should be noted that our results do not explicitly cover the case when the sequence $\sequence{\varrho}[T][\nset]$ is stochastic.
Still, we believe that our results hint at the possibility of obtaining a non-degenerate lower bound on the
minorizing constant also in the stochastic case, given that $\sequence{\varrho}[T][\nset]$ is tight, under conditions that are much weaker than the previously considered
strong mixing assumption.
\end{remark}
\begin{remark}
It is interesting to note that we do not require the initial distribution $\mu$ to be equal to $\tpi$,
but only that the Kullback-Leibler divergence \eqref{eq:nuit-de-singapour-2} is bounded.
Hence, we may use a quite arbitrary initial distribution and still obtain a sequence of inverse minorization constants that is tight \wrt\ $\PPstat^\tparam$.
\end{remark}
A straightforward generalization of the above result is to let the initial distribution belong to a parametric family of distributions,
$\{\mu^\param : \param\in\Param\}$. The condition \eqref{eq:nuit-de-singapour-2} should then be replaced by
\begin{align*}
\limsup_{T\to\infty} \PEstat[\tparam]{\ln\left(\frac{ \pi^{\tparam}(X_0)}{ \mu^{\param_T}(X_0)}\right)}
< \infty
\eqsp.
\end{align*}
Allowing for the initial distribution to depend on $\param$ can be useful in some cases. For instance, if the stationary distribution $\pi^\param$ is known,
it may serve as a natural choice for the initial distribution used in the algorithm.
\section{Examples}\label{sec:examples}
In this section we consider two examples to illustrate how the assumptions of \autoref{prop:N-depend-on-T}
can be verified in practice.
We preface the examples with a technical lemma, which will be very useful for checking the assumptions.
\begin{lemma}
\label{lem:holder-inequality}
Let $(\Zset,\Zsigma)$ be a measurable space and $\xi$ be a measure on $(\Zset,\Zsigma)$.
Let $\alpha\in\ooint{0,1}$ and let $\varphi,\psi$ and $q$ be nonnegative measurable functions, such that
\begin{align}
\label{eq:conditions-holder-1}
&\int \psi(z) \varphi(z) \xi(\rmd z) < \infty \eqsp,\\
\label{eq:conditions-holder-2}
&\int \varphi^{-\frac{\alpha}{1-\alpha}}(z) q(z) \xi(\rmd z) < \infty \eqsp.
\end{align}
Then,
\begin{equation}
\label{eq:hoder-inequality-conclusion}
\int \psi^\alpha(z) q^{1-\alpha}(z) \xi (\rmd z) < \infty \eqsp.
\end{equation}
\end{lemma}
\begin{proof}
The result follows from H{\"o}lder's inequality:
\begin{align*}
\int \psi^\alpha(z) q^{1-\alpha}(z) \xi(\rmd z)
&= \int \left[ \psi(z) \varphi(z) \right]^\alpha \, \left[\varphi^{-\frac{\alpha}{1-\alpha}}(z) q(z)\right]^{1-\alpha} \xi(\rmd z) \\
&\leq \left( \int \psi(z) \varphi(z) \xi(\rmd z) \right)^{\alpha} \left( \int \varphi^{-\frac{\alpha}{1-\alpha}}(z) q(z)
\xi(\rmd z) \right)^{1-\alpha} \eqsp.
\end{align*}
\end{proof}
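The inequality can also be checked numerically. The sketch below (illustrative only) takes $\xi$ to be the counting measure on a finite grid, with the arbitrary choices $\psi(z)=1/z$, $q(z)=z^2\rme^{-z}$, $\varphi(z)=z$ and $\alpha=0.3$, and verifies the H{\"o}lder bound from the proof directly.

```python
import math

# Check Holder's inequality from the proof on a finite measure space:
# xi = counting measure on a grid; psi, q, phi are arbitrary nonnegative choices.
alpha = 0.3
zs = [0.5 + 0.1 * k for k in range(50)]
psi = [1.0 / z for z in zs]
q = [z ** 2 * math.exp(-z) for z in zs]
phi = [z for z in zs]

lhs = sum(p ** alpha * qq ** (1 - alpha) for p, qq in zip(psi, q))
a = sum(p * f for p, f in zip(psi, phi))                            # int psi * phi
b = sum(f ** (-alpha / (1 - alpha)) * qq for f, qq in zip(phi, q))  # int phi^{-a/(1-a)} q
rhs = a ** alpha * b ** (1 - alpha)
# lhs <= rhs, matching the displayed Holder bound
```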
\subsection{A nonlinear model with additive measurement noise}\label{sec:examples:addtive-noise}
\newcommand\ANparam{\xi}
\newcommand\ANParam{\Xi}
We consider first a class of nonlinear state space models where the latent process is observed in additive noise,
\begin{align}
\label{eq:state-equation-additive-noise}
X_{t+1}&= h^\ANparam(X_t) + \sigma_W W_{t+1} \eqsp, \\
\label{eq:measuremant-equation-additive-noise}
Y_t &= \phi X_t + \sigma_U U_t \eqsp,
\end{align}
where $\sequence{W}[t][\nset]$ and $\sequence{U}[t][\nset]$ are two independent sequences of \iid\ standard Gaussian random variables and
$\{ h^\ANparam, \ANparam \in \ANParam \}$ is a parametric family of measurable real-valued functions, where $\ANParam$ is a compact subset of a Euclidean space. We denote by $\param = (\ANparam,\phi,\sigma_U,\sigma_W)$ the parameters of the model. It is assumed that $\param \in \Param$, where
$\Param$ is a compact subset of $\ANParam \times \ooint{0,\infty}^3$. We assume that for all $\ANparam \in \ANParam$, $x \mapsto h^\ANparam(x)$ is continuous and $\sup_{\ANparam \in \ANParam} \limsup_{x \to \infty} |h^\ANparam(x)|/|x| < 1$. For any $\delta > 0$, we set $V_\delta(x) = \rme^{\delta |x|}$. It is easily seen that there exist constants $\lambda_\delta \in \ooint{0,1}$ and $b_\delta < \infty$ such that
\begin{equation}
\label{eq:drift-condition-additive-noise}
\sup_{\param \in \Param} \PE[\param]{ V_\delta( X_1) }[x] \leq \lambda_\delta V_\delta(x) + b_\delta \eqsp.
\end{equation}
The Markov chain is strong Feller and Harris recurrent, all compact sets are small, and the chain admits a unique invariant distribution. Therefore, \ref{assum:stationarity} and \ref{assum:m-upper-bound} are satisfied.
Since both the transition density and the observation density are Gaussian, \ref{assum:m-g-positive} is also readily satisfied.
We will thus focus on verifying the moment assumption \ref{assum:b-moment-bound}.
First, note that
\begin{equation}
\label{eq:moment-control-additive-noise}
\sup_{t \in \nset} \sup_{\param \in \Param} \PE[\param]{V_\delta(X_t)}[x]
\leq \lambda_\delta^t V_\delta(x) + b_\delta(1+\lambda_\delta+\cdots+\lambda_\delta^{t-1})
\leq V_\delta(x) + b_\delta / (1-\lambda_\delta) \eqsp.
\end{equation}
We assume that the initial distribution $\mu$ is such that $\mu(V_\delta) < \infty$. Therefore,
\begin{equation}
\label{eq:moment-control-additive-noise-1}
\sup_{t \in \nset} \sup_{\param \in \Param} \PE[\param]{V_\delta(X_t)}[\mu] < \infty \eqsp.
\end{equation}
Interestingly, for the model \eqref{eq:state-equation-additive-noise}--\eqref{eq:measuremant-equation-additive-noise} it is possible
to use the fully adapted proposal kernel \citep{DoucetGA:2000} as defined in \eqref{eq:bootstrap-fully-adapted}, for which
\begin{align}
\nonumber
\ewghtfunc[\param]{y}{x}{x'}&= \int m^\param(x, x^{\prime\prime}) g^\param(x^{\prime\prime},y)\rmd x^{\prime\prime} \\
\label{eq:additive-noise:definition-weightfunc}
&= \frac{1}{\sqrt{2\pi (\phi^2\sigma_W^2 + \sigma_U^2) }}\exp\left( -\frac{1}{2(\phi^2\sigma_W^2 + \sigma_U^2)}\left(y-\phi h^\ANparam(x) \right)^2 \right) \eqsp,
\end{align}
for all $(x,x') \in \rset \times \rset$, $y \in \rset$, and $\param \in \Param$.
It can be seen that, for any $\param\in\Param$ and any $y \in \rset$,
\[
\int_{-\infty}^{\infty} g^\param(x,y) \rmd x = \frac{1}{\phi} \eqsp, \quad \text{and} \quad
\supnorm{\ewghtfuncf[\param]{y}} \leq \frac{1}{\sqrt{2\pi (\phi^2\sigma_W^2 + \sigma_U^2) }} \eqsp,
\]
which implies the existence of constants $D_1$ and $D_2$ such that
\begin{equation}
\label{eq:borne-util-additive-noise}
\sup_{\param \in \Param} \int_{-\infty}^{\infty} g^\param(x,y) \rmd x \leq D_1\eqsp, \quad \text{and} \quad
\sup_{\param \in \Param} \supnorm{\ewghtfuncf[\param]{y}} \leq D_2 \eqsp.
\end{equation}
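As a numerical sanity check of these bounds (illustrative only; the instance $h^\ANparam(x)=0.5\tanh(x)$, $\phi=0.8$, $\sigma_W=\sigma_U=1$ and $y=1.3$ are arbitrary choices), one can verify $\int g^\param(x,y)\,\rmd x = 1/\phi$ and the supremum bound on the fully adapted weight by simple quadrature.

```python
import math

# Quadrature check of the additive-noise bounds for an arbitrary instance:
# h(x) = 0.5*tanh(x), phi = 0.8, sigma_W = sigma_U = 1, y = 1.3.
phi_, sw, su = 0.8, 1.0, 1.0
h = lambda x: 0.5 * math.tanh(x)

def g(x, y):
    # observation density: N(y; phi*x, su^2) viewed as a function of x
    return math.exp(-(y - phi_ * x) ** 2 / (2 * su ** 2)) / math.sqrt(2 * math.pi * su ** 2)

def omega_bar(y, x):
    # fully adapted weight: N(y; phi*h(x), phi^2*sw^2 + su^2)
    v = phi_ ** 2 * sw ** 2 + su ** 2
    return math.exp(-(y - phi_ * h(x)) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

y, dx = 1.3, 0.001
grid = [-30 + k * dx for k in range(60000)]
integral_g = sum(g(x, y) for x in grid) * dx          # should be close to 1/phi
sup_omega = max(omega_bar(y, x) for x in grid)        # should not exceed the bound
bound = 1.0 / math.sqrt(2 * math.pi * (phi_ ** 2 * sw ** 2 + su ** 2))
```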
Analogous bounds hold also if we would instead consider the bootstrap proposal (see \eqref{eq:bootstrap-fully-adapted}).
To verify \ref{assum:b-moment-bound} we let $\ell_\star=1$ and show that,
\begin{equation}
\label{eq:additive-noise:moment:A4}
\sup_{t \in \nset} \sup_{\param\in\Param} \PE[\param]{ ( \Bbar{t}{\mu}{\param})^\alpha}[\mu]<\infty\, , \quad
\sup_{t \in \nset} \sup_{\param\in\Param} \PE[\param]{ (\Cbar[t]{t+1}{\mu}{\param})^\alpha}[\mu] <\infty\eqsp,
\end{equation}
for some (and actually any) $\alpha \in \coint{0,1}$.
Consider first
\begin{equation}
\label{eq:additive-noise:borne-B00alpha}
\PE[\param]{ (\Bbar{t}{\mu}{\param} )^\alpha}[\mu] = \int \supnorm[\alpha]{\ewghtfuncf[\param]{y_t}} \{\dens[\mu,t]{y_t}[\param]\}^{1-\alpha} \rmd y_t
\leq D_2^\alpha \int \{\dens[\mu,t]{y_t}[\param]\}^{1-\alpha} \rmd y_t \eqsp,
\end{equation}
where the inequality follows from \eqref{eq:borne-util-additive-noise}.
We apply \autoref{lem:holder-inequality} to establish a bound for the right-hand side of \eqref{eq:additive-noise:borne-B00alpha}.
Let $\psi(y) = 1$ and $\varphi(y) = 1/(1\vee |y|^2)$.
With these definitions the first condition in \eqref{eq:conditions-holder-1} is satisfied. To check \eqref{eq:conditions-holder-2},
note that
\begin{equation*}
\varphi^{-\frac{\alpha}{1-\alpha}}(y) = (1 \vee |y|^2)^{\frac{\alpha}{1-\alpha}} \leq 1+|y|^{2 \alpha/(1-\alpha)} \eqsp.
\end{equation*}
The integral in \eqref{eq:conditions-holder-2} may be expressed as
\begin{equation}
\label{eq:additive-noise:what-we-need}
\int \varphi^{-\frac{\alpha}{1-\alpha}}(y_t) \dens[\mu,t]{y_t}[\param] \rmd y_t = \PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(Y_t)}[\mu]=
\PE[\param]{\PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(Y_0)}[X_t]}[\mu] \eqsp.
\end{equation}
Since $Y_0= \phi X_0 + \sigma_U U_0$, we get that for any $x \in \Xset$,
$\PE[\param]{ \varphi^{-\frac{\alpha}{1-\alpha}}(Y_0) }[x] \leq 1 + \PE{ |\phi x + U|^{2\alpha/(1-\alpha)}}$,
where $U$ is standard normal. This implies that there exists a constant $D_3$ such that, for all $x \in \Xset$ and all $\param\in\Param$,
\begin{equation}
\label{eq:gainesville}
\PE[\param]{ \varphi^{-\frac{\alpha}{1-\alpha}}(Y_0) }[x]
\leq D_3 (1+ |x|^{2\alpha/(1-\alpha)}) \eqsp.
\end{equation}
Plugging this into \eqref{eq:additive-noise:what-we-need} and using \eqref{eq:moment-control-additive-noise-1} verifies the second condition \eqref{eq:conditions-holder-2}.
\autoref{lem:holder-inequality} can thus be used to conclude that
$ \PE[\param]{ (\Bbar{t}{\mu}{\param})^\alpha}[\mu] < \infty$ for all $\alpha \in \ooint{0,1}$.
Since this holds for any $t \in \nset$ and $\param\in\Param$, we obtain the first part of \eqref{eq:additive-noise:moment:A4}.
Next, we consider
\begin{align}
\nonumber
\PE[\param]{(\Cbar[t]{t+1}{\mu}{\param})^\alpha}[\mu]
&= \PE[\param]{\frac{\supnorm[\alpha]{\ewghtfuncf[\param]{Y_t}} \left( \int g^\param(x_{t+1},Y_{t+1}) \rmd x_{t+1} \right)^\alpha}{ \{\dens[\mu,t]{\chunk{Y}{t}{t+1}}[\param] \}^\alpha}}[\mu]\\
\nonumber
&\leq D_1^\alpha D_2^\alpha \iint \{ \dens[\mu,t]{\chunk{y}{t}{t+1}}[\param] \}^{1-\alpha} \rmd \chunk{y}{t}{t+1} \eqsp.
\label{eq:additive-noise:borne-B01alpha}
\end{align}
We will again make use of \autoref{lem:holder-inequality} to bound this quantity.
Proceeding analogously to above, we let $\psi(y_0,y_1) = 1$ and
\begin{equation}
\varphi(y_0,y_1) = \frac{1}{(y_0^2 \vee 1) \, (y_1^2 \vee 1)} \eqsp,
\label{eq:additive-noise:definition-phi-1}
\end{equation}
for which \eqref{eq:conditions-holder-1} is satisfied.
To check \eqref{eq:conditions-holder-2}, we use the conditional independence of the observations given the states and \eqref{eq:additive-noise:what-we-need} to get, for any $\param \in \Param$,
\begin{multline}
\nonumber
\iint \varphi^{-\frac{\alpha}{1-\alpha}}(\chunk{y}{t}{t+1}) \dens[\mu,t]{\chunk{y}{t}{t+1}}[\param] \rmd \chunk{y}{t}{t+1}
= \PE[\param]{\CPEdoup[\mu]{\param}{\varphi^{-\frac{\alpha}{1-\alpha}}(\chunk{Y}{t}{t+1})}{ X_{t:t+1} }}[\mu] \\
\leq \PE[\param]{\PE[\param]{1 + |Y_0|^{\frac{2\alpha}{1-\alpha}}}[X_t] \PE[\param]{1 + |Y_0|^{\frac{2\alpha}{1-\alpha}}}[X_{t+1}] }[\mu] \leq D_3^2 \PE[\param]{(1+ |X_t|^{\frac{2\alpha}{1-\alpha}}) (1+ |X_{t+1}|^{\frac{2\alpha}{1-\alpha}})}[\mu]
\eqsp.
\label{eq:additive-noise:interediary-result-1}
\end{multline}
From the Cauchy-Schwarz inequality we get, by using \eqref{eq:moment-control-additive-noise-1},
\begin{equation*}
\sup_{t \in \nset}
\sup_{\param \in \Param} \PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(\chunk{Y}{t}{t+1})}[\mu]
\leq D_3^2 \sup_{t \in \nset} \sup_{\param \in \Param} \PE[\param]{(1+|X_t|^{\frac{2\alpha}{1-\alpha}})^2}[\mu] < \infty \eqsp.
\end{equation*}
This shows that \eqref{eq:conditions-holder-2} is satisfied for any $\param\in\Param$ and any $t\in\nset$ which, by \autoref{lem:holder-inequality}
implies $\sup_{t\in\nset}\sup_{\param\in\Param}\PE[\param]{(\Cbar[t]{t+1}{\mu}{\param})^\alpha} < \infty$ for all $\alpha \in \coint{0,1}$, verifying
\ref{assum:b-moment-bound}.
Provided that $\param_T$ converges to $\tparam$ at a rate $1/\sqrt{T}$ (see \autoref{rem:kullback}),
we may therefore apply \autoref{prop:N-depend-on-T} which shows that for any $\gamma \in \ooint{0,1}$, $\{\mineps{T}{N_T}^{-1}(\param_T) \}_{T \geq 1}$ is tight with $N_T \sim T^{1/\gamma}$.
\subsection{A stochastic volatility model}
The canonical model in stochastic volatility for discrete-time data has been introduced by
\cite{taylor:1982} and worked out since then by many authors; see \cite{hull:white:1987} and \cite{jacquier:polson:rossi:1994} for early references and \cite{shephard:andersen:2009} for an up-to-date survey. In this model, the hidden
volatility process, $\sequence{X}[t][\nset]$,
follows a first order autoregression,
\begin{align}
\label{eq:stochasticvolatilitycanonical-hidden}
X_{t+1} & = \phi X_t + \sigma W_{t+1} \,, \\
\label{eq:stochasticvolatilitycanonical-observation}
Y_t & = \beta \exp(X_t/2) U_t \,,
\end{align}
where $\sequence{W}[t][\nset]$ and
$\sequence{U}[t][\nset]$ are white Gaussian noise with mean zero and unit variance. The error processes
$\sequence{W}[t][\nset]$ and $\sequence{U}[t][\nset]$
are assumed to be mutually independent. We denote by $\param= (\phi,\sigma,\beta) \in \Param$ the parameters of the model, where $\Param$ is
a compact subset of $\ooint{-1,1} \times \ooint{0,\infty}^2$.
For $\delta > 0$, set $V_\delta(x)= \rme^{\delta |x|}$ and let
$\mu$ be an arbitrary distribution on $(\rset, \borel(\rset))$ for which $\mu(V_\delta) < \infty$.
For this model the transition kernel and the likelihood of the observation are given by
\begin{align}
\label{eq:likelihood-SV}
&m^\param(x,x')= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( - \frac{1}{2 \sigma^2} (x'- \phi x)^2 \right) \qquad\text{and} \\
&g^\param(x,y)= \frac{1}{\sqrt{2 \pi \beta^2}} \rme^{-(x/2 + (y^{2}/2\beta^2) \rme^{-x})} \eqsp,
\end{align}
respectively. For any $\param \in \Param$, the autoregressive process $\sequence{X}[t][\nset]$ has a unique stationary distribution $\pi^\param$, which is Gaussian, with mean 0 and variance $\sigma^2/(1-\phi^2)$. Hence, \ref{assum:stationarity} is satisfied.
We consider the bootstrap proposal kernel as defined in \eqref{eq:bootstrap-fully-adapted}, in which case
\begin{equation}
\label{eq:definition-weightfunc-SV}
\ewghtfunc[\param]{y}{x}{x'}= g^\param(x',y) \eqsp, \quad \text{for all $(x,x') \in \rset \times \rset$ and $y \in \rset$} \eqsp.
\end{equation}
Note that, $\supnorm{\ewghtfuncf[\param]{y}}= \supnorm{g^\param(\cdot,y)}$.
Assumptions \ref{assum:m-upper-bound} and \ref{assum:m-g-positive} are readily satisfied.
We finally check \ref{assum:b-moment-bound}. It is easily shown that, for all $\param \in \Param$,
\begin{align}
\label{eq:borne-util-SV-1}
&\int_{-\infty}^{\infty} g^\param(x,y) \rmd x = \frac{D_1}{|y|} \eqsp, && D_1 \eqdef \frac{1}{\sqrt{2 \pi}} \int_0^\infty \frac{\rme^{-u/2}}{\sqrt{u}} \rmd u \eqsp, \\
\label{eq:borne-util-SV-2}
&\sup_{x \in \rset} g^\param(x,y) = \frac{D_2}{|y|} \eqsp, && D_2 \eqdef \frac{1}{\sqrt{2 \pi \rme}} \eqsp.
\end{align}
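Both identities can be verified numerically; the sketch below (illustrative only, with the arbitrary choices $\beta=0.7$ and $y=1.5$) evaluates $\int g^\param(x,y)\,\rmd x$ and $\sup_x g^\param(x,y)$ by quadrature and compares them with $D_1/|y|$ and $D_2/|y|$. Note that the Gamma-type integral defining $D_1$ evaluates to $1$, since $\int_0^\infty u^{-1/2}\rme^{-u/2}\,\rmd u = \sqrt{2\pi}$.

```python
import math

# Quadrature check of the two stochastic volatility identities,
# for arbitrary beta and y.
beta, y = 0.7, 1.5

def g(x):
    # SV observation density as a function of the log-volatility x
    return math.exp(-x / 2 - (y ** 2 / (2 * beta ** 2)) * math.exp(-x)) / math.sqrt(2 * math.pi * beta ** 2)

dx = 0.001
xs = [-20 + k * dx for k in range(50000)]   # grid covering the effective support
integral_g = sum(g(x) for x in xs) * dx     # should approximate D1 / |y|
sup_g = max(g(x) for x in xs)               # should approximate D2 / |y|

D1 = 1.0                                    # (2*pi)^{-1/2} * sqrt(2*pi) = 1
D2 = 1.0 / math.sqrt(2 * math.pi * math.e)
```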
Similarly to \autoref{sec:examples:addtive-noise} we will check \ref{assum:b-moment-bound} with $\ell_\star=1$, \ie, we show that
\begin{equation}
\label{eq:moment:A4}
\sup_{t \in \nset} \, \sup_{\param\in\Param} \PE[\param]{ ( \Bbar{t}{\mu}{\param} )^\alpha}[\mu]<\infty\, , \quad
\sup_{t \in \nset} \, \sup_{\param\in\Param} \PE[\param]{ (\Cbar[t]{t+1}{\mu}{\param})^\alpha}[\mu]<\infty\eqsp,
\end{equation}
for any $\alpha \in \ooint{0,1}$. Note that we cannot expect \eqref{eq:moment:A4} to hold with $\alpha=1$ since,
\begin{align*}
\PE[\param]{\Bbar{t}{\mu}{\param} }[\mu]=\PE[\param]{\frac{\supnorm{\ewghtfuncf[\param]{Y_t}} }{\dens[\mu,t]{Y_t}[\param]}}[\mu]
=\int \sup_{x \in \rset} g^\param(x,y) \rmd y=D_2 \int \frac{1}{|y|} \rmd y =\infty\eqsp.
\end{align*}
We now turn to the proof of \eqref{eq:moment:A4}.
Note first that
\[
\limsup_{|x| \to \infty} \sup_{\param \in \Param} \frac{\PE[\param]{V_\delta(X_1)}[x]}{V_\delta(x)} =0 \eqsp,
\]
and for any $M < \infty$,
\(
\sup_{|x| \leq M} \sup_{\param \in \Param} \PE[\param]{V_\delta(X_1)}[x] < \infty .
\)
Therefore, there exist constants $\lambda_\delta \in \ooint{0,1}$ and $b_\delta < \infty$ such that, for all $x \in \Xset$,
\begin{equation}
\label{eq:drift-stochastic-volatility}
\sup_{\param \in \Param} \PE[\param]{V_\delta(X_1)}[x] \leq \lambda_\delta V_\delta(x) + b_\delta \eqsp.
\end{equation}
Analogously to \eqref{eq:moment-control-additive-noise}, this implies that, for all $\delta>0$,
\begin{equation}
\label{eq:nuit-de-sumatra}
\sup_{t \in \nset} \sup_{\param \in \Param} \PE[\param]{V_\delta(X_t)}[\mu] \leq \mu(V_\delta) + b_\delta / (1-\lambda_\delta) < \infty \eqsp.
\end{equation}
Using \eqref{eq:definition-weightfunc-SV} and \eqref{eq:borne-util-SV-2},
we get
\begin{equation}
\label{eq:borne-B00alpha}
\PE[\param]{ (\Bbar{t}{\mu}{\param} )^\alpha}[\mu] = \int \supnorm[\alpha]{\ewghtfuncf[\param]{y_t}} \{\dens[\mu,t]{y_t}[\param] \}^{1-\alpha} \rmd y_t
\leq D_2^\alpha \int |y_t|^{-\alpha} \{\dens[\mu,t]{y_t}[\param] \}^{1-\alpha} \rmd y_t \eqsp.
\end{equation}
We apply \autoref{lem:holder-inequality} to establish a bound for \eqref{eq:borne-B00alpha}. Consider the functions $\varphi$ and $\psi$ given by
\begin{align}
\label{eq:definition-psi}
&\psi(y) = 1/ |y| \eqsp, \\
\label{eq:definition-phi}
&\varphi(y)= \frac{|y|^\gamma}{|y|^2 \vee 1} \eqsp, \quad \text{with} \quad \frac{\gamma \alpha}{1 -\alpha} < 1 \eqsp, \quad 0 < \gamma < 1 \eqsp.
\end{align}
With these definitions, we get
\begin{equation*}
\int \varphi(y) \psi(y) \rmd y = \int \frac{1}{|y|} \frac{|y|^\gamma}{|y|^2 \vee 1} \rmd y < \infty \eqsp,
\end{equation*}
showing that the first condition in \eqref{eq:conditions-holder-1} is satisfied. We now check \eqref{eq:conditions-holder-2}:
\begin{equation}
\label{eq:what-we-need}
\int \varphi^{-\frac{\alpha}{1-\alpha}}(y_t) \dens[\mu,t]{y_t}[\param] \rmd y_t
= \PE[\param]{\PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(Y_0)}[X_t]}[\mu]\eqsp.
\end{equation}
Since $Y_0 = \beta\exp(X_0/2)U_0$ it follows that
$\PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(Y_0)}[x]= \PE{\varphi^{-\frac{\alpha}{1-\alpha}}(\beta \rme^{x/2} U)}$ where
$U$ is standard normal. We have
\begin{align*}
&\PE{\left( \frac{\beta^2 \rme^{x} U^2 \vee 1}{\beta^\gamma \rme^{\gamma x/2} |U|^\gamma} \right)^{\frac{\alpha}{1-\alpha}}} \\
&\qquad\leq \PE{(\beta \rme^{x/2} |U|)^{-\frac{\gamma \alpha}{1-\alpha}} \1_{\{\beta |U| \rme^{x/2} \leq 1\}}} +
\PE{(\beta^2 \rme^{x} U^2)^{\frac{\alpha}{1-\alpha}} \1_{\{\beta |U| \rme^{x/2} > 1\}}} \\
&\qquad\leq (\beta \rme^{\frac{x}{2}})^{-\frac{\gamma \alpha}{1-\alpha}} \PE{|U|^{-\frac{\gamma \alpha}{1-\alpha}}} +
(\beta^2 \rme^{x})^\frac{\alpha}{1-\alpha} \PE{|U|^{\frac{2 \alpha}{1-\alpha}}} \eqsp.
\end{align*}
Since $\gamma \alpha/(1-\alpha) < 1$ it holds that $\PE{|U|^{-\frac{\gamma \alpha}{1-\alpha}}} < \infty$ and, additionally,
$\PE{|U|^{\frac{2 \alpha}{1-\alpha}}} < \infty$. Therefore, there exist constants $D_3 < \infty$ and $\delta > 0$ such that,
for all $x \in \rset$ and $\param \in \Param$,
\begin{equation}
\label{eq:laborne}
\PE[\param]{\varphi^{-\frac{\alpha}{1-\alpha}}(Y_0)}[x] \leq D_3 \rme^{\delta |x|} = D_3 V_\delta(x) \eqsp.
\end{equation}
Using \eqref{eq:what-we-need}, \eqref{eq:laborne} and \eqref{eq:nuit-de-sumatra} verifies the second condition in \eqref{eq:conditions-holder-2}.
\autoref{lem:holder-inequality} can thus be used to conclude that
$ \PE[\param]{ (\Bbar{t}{\mu}{\param})^\alpha}[\mu] < \infty$ for all $\alpha \in \ooint{0,1}$.
Since this holds for any $t \in\nset$ and $\param\in\Param$, we establish the first part of \eqref{eq:moment:A4}.
We will now check that, for all $\alpha \in \ooint{0,1}$, $\PE[\param]{(\Cbar[t]{t+1}{\mu}{\param})^\alpha}[\mu] < \infty$.
Using \eqref{eq:borne-util-SV-1} and \eqref{eq:borne-util-SV-2}, we get
\begin{align}
\nonumber
\PE[\param]{(\Cbar[t]{t+1}{\mu}{\param})^\alpha}[\mu]
&= \PE[\param]{\frac{\supnorm[\alpha]{\ewghtfuncf[\param]{Y_t}} \left( \int g^\param(x_{t+1},Y_{t+1}) \rmd x_{t+1} \right)^\alpha}{ (\dens[\mu,t]{\chunk{Y}{t}{t+1}}[\param] )^\alpha}}\\
\nonumber
&= \iint \supnorm[\alpha]{\ewghtfuncf[\param]{y_t}} \left( \int g^\param(x_{t+1},y_{t+1}) \rmd x_{t+1} \right)^{\alpha} ( \dens[\mu,t]{\chunk{y}{t}{t+1}}[\param] )^{1-\alpha} \rmd \chunk{y}{t}{t+1} \eqsp, \\
&\leq D_1^\alpha D_2^\alpha \iint |y_t y_{t+1}|^{-\alpha} ( \dens[\mu,t]{\chunk{y}{t}{t+1}}[\param] )^{1-\alpha} \rmd \chunk{y}{t}{t+1} \eqsp.
\label{eq:borne-B01alpha}
\end{align}
We use again \autoref{lem:holder-inequality} with
\begin{equation}
\label{eq:definition-psi-1}
\psi(y_0,y_1) = |y_0|^{-1} |y_1|^{-1}
\end{equation}
and
\begin{equation}
\varphi(y_0,y_1) = \frac{|y_0|^\gamma |y_1|^\gamma}{(y_0^2 \vee 1) \, (y_1^2 \vee 1)} \eqsp,
\label{eq:definition-phi-1}
\end{equation}
with $\gamma \alpha / (1-\alpha) < 1$ and $\gamma \in \ooint{0,1}$. Note first that
\begin{equation}
\label{eq:ile-de-java}
\iint \psi(y_0,y_1) \varphi(y_0,y_1)\rmd \chunk{y}{0}{1}
= \iint \left\{ (|y_0| |y_1|)^{1-\gamma} (y_0^2 \vee 1) (y_1^2 \vee 1) \right\}^{-1} \rmd \chunk{y}{0}{1} < \infty \eqsp.
\end{equation}
Hence, \eqref{eq:conditions-holder-1} is satisfied. We finally check \eqref{eq:conditions-holder-2}.
Using the conditional independence of the observations given the states and
\eqref{eq:laborne},
\begin{multline*}
\iint \varphi^{-\frac{\alpha}{1-\alpha}}(\chunk{y}{t}{t+1}) \dens[\mu,t]{\chunk{y}{t}{t+1}}[\param] \rmd \chunk{y}{t}{t+1}
= \PE[\param]{\CPEdoup[\mu]{\param}{\varphi^{-\frac{\alpha}{1-\alpha}}(\chunk{Y}{t}{t+1})}{ X_{t:t+1} }}[\mu] \\
= \PE[\param]{
\PE[\param]{\left( \frac{\beta^2 \rme^{X_0} U^2 \vee 1}{\beta^\gamma \rme^{\gamma X_0/2} |U|^\gamma} \right)^{\frac{\alpha}{1-\alpha}}}[X_t]
\PE[\param]{\left( \frac{\beta^2 \rme^{X_0} U^2 \vee 1}{\beta^\gamma \rme^{\gamma X_0/2} |U|^\gamma} \right)^{\frac{\alpha}{1-\alpha}}}[X_{t+1}]
}[\mu] \leq D_3^2 \PE[\param]{\rme^{\delta |X_t|} \rme^{\delta |X_{t+1}|}}[\mu] \eqsp.
\end{multline*}
Using \eqref{eq:drift-stochastic-volatility}, we get, from the Cauchy-Schwarz inequality,
\begin{align*}
\PE[\param]{\rme^{\delta |X_t|} \rme^{\delta |X_{t+1}|}}[\mu] \leq \left(\PE[\param]{\rme^{2 \delta |X_t|}}[\mu]\PE[\param]{\rme^{2\delta |X_{t+1}|}}[\mu] \right)^{1/2} \eqsp.
\end{align*}
Applying \eqref{eq:nuit-de-sumatra} with $\delta$ replaced by $2\delta$ yields \eqref{eq:conditions-holder-2}.
Using \autoref{lem:holder-inequality} thus establishes \eqref{eq:moment:A4} and thereby, \ref{assum:b-moment-bound} holds.
Provided that $\param_T$ converges to $\tparam$ at a rate $1/\sqrt{T}$ (see \autoref{rem:kullback}),
we may therefore apply \autoref{prop:N-depend-on-T} which shows that for any $\gamma \in \ooint{0,1}$,
$\{\mineps{T}{N_T}^{-1}(\param_T) \}_{T \geq 1}$ is tight with $N_T \sim T^{1/\gamma}$.
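To connect the analysis with a concrete implementation, the following sketch (illustrative only) runs a plain bootstrap particle filter on data simulated from the stochastic volatility model, showing the mutation, weighting and resampling steps; the conditioning on a retained reference trajectory that distinguishes the \pg sampler is omitted for brevity, and the parameter values are arbitrary.

```python
import math
import random

# Bootstrap particle filter for the stochastic volatility model (a sketch of
# the sampler underlying the analysis; the reference-trajectory mechanics of
# particle Gibbs are omitted).
random.seed(1)
phi, sigma, beta = 0.9, 0.3, 0.7        # arbitrary parameter values
T, N = 50, 200

def g(x, y):
    # observation likelihood g(x, y) of the model
    return math.exp(-x / 2 - (y ** 2 / (2 * beta ** 2)) * math.exp(-x)) / math.sqrt(2 * math.pi * beta ** 2)

# simulate observations y_0, ..., y_{T-1} from the model, stationary start
x = random.gauss(0.0, sigma / math.sqrt(1 - phi ** 2))
ys = []
for _ in range(T):
    ys.append(beta * math.exp(x / 2) * random.gauss(0.0, 1.0))
    x = phi * x + sigma * random.gauss(0.0, 1.0)

# filter: weight by g(., y_t), resample, mutate through M(x, .)
parts = [random.gauss(0.0, sigma / math.sqrt(1 - phi ** 2)) for _ in range(N)]
loglik = 0.0
for y in ys:
    w = [g(p, y) for p in parts]
    loglik += math.log(sum(w) / N)
    ancestors = random.choices(parts, weights=w, k=N)   # multinomial resampling
    parts = [phi * a + sigma * random.gauss(0.0, 1.0) for a in ancestors]
```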
\section{Proof of \autoref{theo:doeblin-condition-PG}}
\label{sec:proof}
We will now turn to the proof of the minorization condition in \autoref{theo:doeblin-condition-PG}.
As in the statement of the theorem, we will not explicitly indicate any possible dependence
on unknown model parameters in the notation in this section.
This is done for notational convenience and is without loss of generality.
Throughout this section, $\PP{}$ and $\PE{}$ refer to probability and expectation, respectively,
\wrt\ the random variables generated by the \pg algorithm.
The proof is inductive and follows from a series of lemmas.
\begin{lemma}\label{lem:jensens}
Let $X\geq0$ and $Y>0$ be independent random variables. Then,
\begin{equation*}
\PE{\frac{X}{Y}}\geq \frac{ \PE{X}}{ \PE{Y}} \eqsp.
\end{equation*}
\end{lemma}
\begin{proof}
Since $f(y) = 1/y$ is convex for $y > 0$, the result follows by independence and Jensen's inequality.
\end{proof}
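The lemma can be illustrated exactly on discrete distributions; in the sketch below (an arbitrary toy example) the expectations are computed in rational arithmetic, and independence gives $\PE{X/Y}=\PE{X}\,\PE{1/Y} \geq \PE{X}/\PE{Y}$.

```python
from fractions import Fraction as F
from itertools import product

# Exact check of the lemma for independent discrete X >= 0 and Y > 0
# (arbitrary toy laws; Fractions avoid any floating-point slack).
law_X = {F(0): F(1, 4), F(2): F(1, 2), F(5): F(1, 4)}
law_Y = {F(1): F(1, 3), F(2): F(1, 3), F(4): F(1, 3)}

E_ratio = sum(px * py * x / y
              for (x, px), (y, py) in product(law_X.items(), law_Y.items()))
E_X = sum(x * p for x, p in law_X.items())
E_Y = sum(y * p for y, p in law_Y.items())
E_invY = sum(p / y for y, p in law_Y.items())
# independence: E[X/Y] = E[X] * E[1/Y]; Jensen: E[1/Y] >= 1/E[Y]
```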
\begin{lemma}
\label{lem:singapore}
Let $f$ and $h$ be nonnegative measurable functions. For $t \in \{0,\dots,T-1\}$, we have
\begin{multline}
\label{eq:bound-fond-1}
\CPE{\frac{\sum_{i=1}^N \ewght{t+1}{i} f(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h(\epart{t+1}{i})}}{\mcff{t}{N}} \\
\geq
\frac{\sum_{i=1}^N \ewght{t}{i} \int \Kun{Y_{t+1}}{\epart{t}{i}}{\rmd x_{t+1}} f(\epart{0:t}{i},x_{t+1})}
{\sum_{i=1}^N \ewght{t}{i} \left[ \frac{N-2}{N-1} \Kunf{Y_{t+1}} h(\epart{t}{i}) + \frac{2}{N-1} \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h(x') \right]} \eqsp,
\end{multline}
and
\begin{equation}
\label{eq:bound-fond-2}
\PE{\frac{\sum_{i=1}^N \ewght{0}{i} f(\epart{0}{i})}{\sum_{i=1}^N \ewght{0}{i} h(\epart{0}{i})}}
\geq \frac{(N-1)\Xinit{g(\cdot,Y_0) f(\cdot)}}{(N-2) \Xinit{g(\cdot,Y_0) h(\cdot)} +2 \sup_x [\ewghtfuncz{Y_0}{x} h(x)]} \eqsp.
\end{equation}
\end{lemma}
\begin{proof}
Using that
$$
\ewght{t+1}{1} h(\epart{t+1}{1})+\ewght{t+1}{N} h(\epart{t+1}{N}) \leq 2 \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h(x') \eqsp,
$$
and that the weighted particles $\{( \epart{t+1}{i}, \ewght{t+1}{i}) \}_{i=1}^{N-1}$ are conditionally \iid\ \wrt\ $\mcff{t}{N}$, we get
\begin{align}
\nonumber
&\CPE{\frac{\sum_{i=1}^N \ewght{t+1}{i} f(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h(\epart{t+1}{i})}}{\mcff{t}{N}}
\geq \CPE{\frac{\sum_{i=1}^{N-1} \ewght{t+1}{i} f(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h(\epart{t+1}{i})}}{\mcff{t}{N}} \\ \nonumber
&\qquad\geq (N-1) \CPE{\frac{\ewght{t+1}{1} f(\epart{0:t+1}{1})}{\sum_{i=2}^{N-1} \ewght{t+1}{i} h(\epart{t+1}{i}) + 2 \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h(x')}}{\mcff{t}{N}} \\
\label{eq:lower-bound}
&\qquad\geq (N-1) \frac{\CPE{\ewght{t+1}{1} f(\epart{0:t+1}{1})}{\mcff{t}{N}}}
{\CPE{\sum_{i=2}^{N-1} \ewght{t+1}{i} h(\epart{t+1}{i})}{\mcff{t}{N}}+ 2 \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h(x')} \eqsp,
\end{align}
where the last inequality follows from \autoref{lem:jensens}. Consider first the numerator in the \rhs\ of \eqref{eq:lower-bound}. We have
\begin{align}
\nonumber
\CPE{\ewght{t+1}{1} f(\epart{0:t+1}{1})}{\mcff{t}{N}}
&= \frac{1}{ \sum_{l=1}^N \ewght{t}{l} } \sum_{j=1}^N \ewght{t}{j} \int \Kis{Y_{t+1}}{\epart{t}{j}}{\rmd x_{t+1}} \ewghtfunc{Y_{t+1}}{\epart{t}{j}}{x_{t+1}} f(\epart{0:t}{j},x_{t+1}) \\ \label{eq:condexp}
&= \frac{1}{ \sum_{l=1}^N \ewght{t}{l} } \sum_{j=1}^N \ewght{t}{j} \int \Kun{Y_{t+1}}{\epart{t}{j}}{\rmd x_{t+1}} f(\epart{0:t}{j},x_{t+1}) \eqsp.
\end{align}
We now consider the denominator in the \rhs\ of \eqref{eq:lower-bound}:
\begin{align*}
\CPE{\sum_{i=2}^{N-1} \ewght{t+1}{i} h(\epart{t+1}{i})}{\mcff{t}{N}} &= (N-2) \CPE{\ewght{t+1}{1} h(\epart{t+1}{1})}{\mcff{t}{N}} \\
&= (N-2) \frac{1}{ \sum_{l=1}^N \ewght{t}{l} } \sum_{j=1}^N \ewght{t}{j} \Kunf{Y_{t+1}} h(\epart{t}{j}) \eqsp,
\end{align*}
where the last identity follows from \eqref{eq:condexp} with $f(\chunk{x}{0}{t+1})= h(x_{t+1})$. The proof of \eqref{eq:bound-fond-1} follows.
Consider now \eqref{eq:bound-fond-2}. Since the particles $\{\epart{0}{i}\}_{i=1}^{N-1}$ are \iid, we obtain, using again \autoref{lem:jensens},
\begin{align*}
\PE{\frac{\sum_{i=1}^N \ewght{0}{i} f(\epart{0}{i})}{\sum_{i=1}^N \ewght{0}{i} h(\epart{0}{i})}}
&\geq (N-1) \PE{\frac{\ewght{0}{1} f(\epart{0}{1})}{\sum_{i=2}^{N-1} \ewght{0}{i} h(\epart{0}{i}) + 2 \sup_x \ewghtfuncz{Y_0}{x} h(x)}}\\
&\geq \frac{(N-1) \PE{\ewght{0}{1} f(\epart{0}{1})}}{\PE{\sum_{i=2}^{N-1} \ewght{0}{i} h(\epart{0}{i})} + 2 \sup_x \ewghtfuncz{Y_0}{x} h(x)} \eqsp.
\end{align*}
The numerator is given by
\[
\PE{\ewght{0}{1} f(\epart{0}{1})}= \int \Xinitis{Y_0}{\rmd x_0} \ewghtfuncz{Y_0}{x_0} f(x_0) = \int \Xinit{\rmd x_0} g(x_0,Y_0) f(x_0) \eqsp.
\]
Similarly, we get
\begin{align*}
\PE{\sum_{i=2}^{N-1} \ewght{0}{i} h(\epart{0}{i})} = (N-2) \PE{\ewght{0}{1} h(\epart{0}{1})}
=(N-2) \int \Xinit{\rmd x_0} g(x_0,Y_0) h(x_0) \eqsp.
\end{align*}
The proof of \eqref{eq:bound-fond-2} follows.
\end{proof}
Define a sequence of nonnegative scalars $\{\beta_t\}_{t=0}^T$ by the backward recursion: $\beta_T=\supnorm{\ewghtfuncf{Y_T}}$, and for $t = T-1,T-2,\dots,0$,
\begin{multline}
\label{eq:def-M-t}
\beta_t = \supnorm{\ewghtfuncf{Y_t}} \left\{ \frac{2}{N-1} \sum_{\ell=1}^{T-t} \left( \frac{N-2}{N-1}\right)^{\ell-1} \beta_{t+\ell} \, \supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell-1}}\bigone} \right. \\
\left. + \left( \frac{N-2}{N-1} \right)^{T-t} \supnorm{\Kunf{\chunk{Y}{t+1}{T}} \bigone} \right\} \eqsp.
\end{multline}
Given $\{\beta_t\}_{t=0}^T$, define the functions $\{ h_t \}_{t=0}^T$, $h_t: \Xset \to \rset_+$, by the backward recursion: $h_T = \bigone$ and, for all $t= T-1, T-2,\dots,0$,
\begin{equation}
\label{eq:recursive-def-h-t}
h_t: x \mapsto h_t(x) = \frac{N-2}{N-1} \; \Kunf{Y_{t+1}} h_{t+1}(x) + \frac{2}{N-1} \beta_{t+1} \eqsp.
\end{equation}
Solving the backward recursion \eqref{eq:recursive-def-h-t} yields
\begin{multline}
\label{eq:def-h-t}
h_t(x) = \frac{2}{N-1} \sum_{\ell=1}^{T-t} \left( \frac{N-2}{N-1} \right)^{\ell-1} \beta_{t+\ell} \Kunf{\chunk{Y}{t+1}{t+\ell-1}} \bigone(x)
+
\left( \frac{N-2}{N-1} \right)^{T-t} \Kunf{\chunk{Y}{t+1}{T}}\bigone(x) \eqsp.
\end{multline}
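That \eqref{eq:def-h-t} indeed solves the recursion \eqref{eq:recursive-def-h-t} can be checked numerically in a toy scalar setting in which applying the kernel $\Kunf{Y_{t+1}}$ reduces to multiplication by a constant $k_{t+1}$. The sketch below (illustrative only; $T$, $N$, $\beta_t$ and $k_t$ are arbitrary test values) verifies the identity in exact arithmetic:

```python
from fractions import Fraction
from math import prod

# Toy scalar model: applying the kernel to h amounts to multiplying by
# a constant k[t+1], so both definitions can be evaluated exactly.
T, N = 5, 7
beta = [Fraction(i + 1, 2) for i in range(T + 1)]   # beta_0 .. beta_T
k = [Fraction(2, i + 3) for i in range(T + 1)]      # k_1 .. k_T used below
r = Fraction(N - 2, N - 1)

# Backward recursion (scalar analogue of the recursive definition of h_t).
h = [None] * (T + 1)
h[T] = Fraction(1)
for t in range(T - 1, -1, -1):
    h[t] = r * k[t + 1] * h[t + 1] + Fraction(2, N - 1) * beta[t + 1]

# Closed-form solution (scalar analogue of the solved recursion);
# prod over an empty slice is 1, matching the empty kernel product.
def h_closed(t):
    s = sum(r ** (l - 1) * beta[t + l] * prod(k[t + 1:t + l])
            for l in range(1, T - t + 1))
    return Fraction(2, N - 1) * s + r ** (T - t) * prod(k[t + 1:T + 1])

assert all(h[t] == h_closed(t) for t in range(T + 1))
```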
For $D \in \Xsigma^{\otimes (T+1)}$, set $f_T(\chunk{x}{0}{T})= \1_D(\chunk{x}{0}{T})$ and, for $t = T-1, T-2, \dots, 0$,
\begin{equation}
\label{eq:recursive-def-f-t}
f_t(\chunk{x}{0}{t})= \int \Kun{Y_{t+1}}{x_t}{\rmd x_{t+1}} f_{t+1}(\chunk{x}{0}{t+1}) \eqsp,
\end{equation}
or equivalently,
\begin{equation}
\label{eq:def-f-t}
f_t(\chunk{x}{0}{t})= \int \prod_{\ell=1}^{T-t} \Kun{Y_{t+\ell}}{x_{t+\ell-1}}{\rmd x_{t+\ell}} \1_D(\chunk{x}{0}{T}) \eqsp.
\end{equation}
\begin{lemma}
For any $D \in \Xsigma^{\otimes (T+1)}$,
\begin{equation*}
\PE{\frac{\sum_{i=1}^N \ewght{T}{i} \1_D(\epart{0:T}{i})}{\sum_{i=1}^N \ewght{T}{i}}} \geq \frac{(N-1)\dens[\Xinitv]{\chunk{Y}{0}{T}}}{(N-2)\Xinit{g(\cdot,Y_0) h_0(\cdot)}+2\beta_0} \post{\Xinitv,0:T}{\chunk{Y}{0}{T}}(D) \eqsp.
\end{equation*}
\end{lemma}
\begin{proof}
Note first that, by construction,
\begin{equation}
\label{eq:kuala-lumpur}
\PE{\frac{\sum_{i=1}^N \ewght{T}{i} \1_D(\epart{0:T}{i})}{\sum_{i=1}^N \ewght{T}{i}}} =
\PE{\frac{\sum_{i=1}^N \ewght{T}{i} f_T(\epart{0:T}{i})}{\sum_{i=1}^N \ewght{T}{i} h_T(\epart{T}{i})}}
\eqsp.
\end{equation}
We now show that, by backward induction, for all $t \in \{0,\dots,T-1\}$,
\begin{equation}
\label{eq:malaisie}
\PE{\frac{\sum_{i=1}^N \ewght{t+1}{i} f_{t+1}(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h_{t+1}(\epart{t+1}{i})}}
\geq
\PE{\frac{\sum_{i=1}^N \ewght{t}{i} f_t(\epart{0:t}{i})}{\sum_{i=1}^N \ewght{t}{i} h_t(\epart{t}{i})}} \eqsp.
\end{equation}
To obtain \eqref{eq:malaisie}, note first that the tower property of the conditional expectation, \autoref{lem:singapore}, and
\eqref{eq:recursive-def-f-t} imply
\begin{align*}
&\PE{\frac{\sum_{i=1}^N \ewght{t+1}{i} f_{t+1}(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h_{t+1}(\epart{t+1}{i})}}
= \PE{\CPE{\frac{\sum_{i=1}^N \ewght{t+1}{i} f_{t+1}(\epart{0:t+1}{i})}{\sum_{i=1}^N \ewght{t+1}{i} h_{t+1}(\epart{t+1}{i})}}{\mcff{t}{N}}}\\
&\qquad\geq \PE{\frac{\sum_{i=1}^N \ewght{t}{i} f_t(\epart{0:t}{i})}{\sum_{i=1}^N \ewght{t}{i} \left[ \frac{N-2}{N-1} \Kunf{Y_{t+1}} h_{t+1}(\epart{t}{i}) + \frac{2}{N-1} \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h_{t+1}(x') \right]}} \eqsp.
\end{align*}
By the triangle inequality, it follows directly from \eqref{eq:def-M-t} and \eqref{eq:def-h-t} that
\begin{align}
\label{eq:bound-beta0}
&\sup_x \ewghtfuncz{Y_0}{x} h_0(x) \leq \beta_0 \eqsp, \\
\label{eq:bound-betat}
&\sup_{x,x'} \ewghtfunc{Y_{t+1}}{x}{x'} h_{t+1}(x') \leq \beta_{t+1} \eqsp, \quad t \in \{0, \dots, T-1 \} \eqsp.
\end{align}
Combining the inequality \eqref{eq:bound-betat} with the definition of $h_t$ in \eqref{eq:recursive-def-h-t} yields
\begin{equation*}
\sum_{i=1}^N \ewght{t}{i} \left[ \frac{N-2}{N-1} \Kunf{Y_{t+1}} h_{t+1}(\epart{t}{i}) + \frac{2}{N-1} \sup_{(x,x')} \ewghtfunc{Y_{t+1}}{x}{x'} h_{t+1}(x') \right]
\leq \sum_{i=1}^N \ewght{t}{i} h_t(\epart{t}{i}) \eqsp,
\end{equation*}
showing \eqref{eq:malaisie}. Combining \eqref{eq:malaisie} with \eqref{eq:kuala-lumpur} and using \autoref{lem:singapore}-\eqref{eq:bound-fond-2}
establishes that
\begin{align*}
\PE{\frac{\sum_{i=1}^N \ewght{T}{i} \1_D(\epart{0:T}{i})}{\sum_{i=1}^N \ewght{T}{i}}}
\geq \PE{\frac{\sum_{i=1}^N \ewght{0}{i} f_0(\epart{0}{i})}{\sum_{i=1}^N \ewght{0}{i} h_0(\epart{0}{i})}}
\geq \frac{(N-1)\Xinit{g(\cdot,Y_0) f_0(\cdot)}}{(N-2)\Xinit{g(\cdot,Y_0) h_0(\cdot)}+2 \beta_0} \eqsp,
\end{align*}
where the last inequality stems from \eqref{eq:bound-beta0}. The proof is completed by noting that
\[
\Xinit{g(\cdot,Y_0) f_0(\cdot)} = \post{\Xinitv,0:T}{\chunk{Y}{0}{T}}(D) \dens[\Xinitv]{\chunk{Y}{0}{T}} \eqsp. \]
\end{proof}
Finally, to prove \autoref{theo:doeblin-condition-PG} it remains to show the following.
\begin{lemma}
\label{lem:bound-denominator}
With $B_{t,T}$ defined as in \eqref{eq:def-b-T}, it holds that
\begin{equation}
\label{eq:bound-denominator}
(N-2)\Xinit{g(\cdot,Y_0) h_0(\cdot)}+2\beta_0
\leq (N-1) \dens[\Xinitv]{\chunk{Y}{0}{T}} \left[\prod_{t=0}^T \frac{2 B_{t,T}+N-2}{N-1} \right]\eqsp.
\end{equation}
\end{lemma}
\begin{proof}
Define for $t \in \{0,\dots,T\}$,
\begin{equation}
\label{eq:def-Q-t}
\alpha_t= \frac{\beta_t}{\cdens[\Xinitv]{\chunk{Y}{t}{T}}{\chunk{Y}{0}{t-1}}} \eqsp,
\end{equation}
with the convention $\cdens[\Xinitv]{\chunk{Y}{0}{T}}{\chunk{Y}{0}{ -1 }}=\dens[\Xinitv]{\chunk{Y}{0}{T}}$. In particular,
$\alpha_0= \beta_0/\dens[\Xinitv]{\chunk{Y}{0}{T}}$.
Eq.~\eqref{eq:def-M-t} implies
\begin{multline*}
\alpha_t = \supnorm{\ewghtfuncf{Y_t}}
\left\{ \frac{2}{N-1} \sum_{\ell=1}^{T-t} \left( \frac{N-2}{N-1} \right)^{\ell-1} \alpha_{t+\ell} \left[ \frac{\supnorm{\Kunf{\chunk{Y}{t+1}{t+\ell-1}}\bigone} \cdens[\Xinitv]{\chunk{Y}{t+\ell}{T}}{\chunk{Y}{0}{t+\ell-1}}}{\cdens[\Xinitv]{\chunk{Y}{t}{T}}{\chunk{Y}{0}{t-1}}} \right] \right. \\
\left. + \left( \frac{N-2}{N-1} \right)^{T-t} \frac{\supnorm{\Kunf{\chunk{Y}{t+1}{T}}\bigone}}{\cdens[\Xinitv]{\chunk{Y}{t}{T}}{\chunk{Y}{0}{t-1}}} \right\} \eqsp.
\end{multline*}
The identity
\[
\frac{\cdens[\Xinitv]{\chunk{Y}{t+\ell}{T}}{\chunk{Y}{0}{t+\ell-1}}}{\cdens[\Xinitv]{\chunk{Y}{t}{T}}{\chunk{Y}{0}{t-1}}}= \frac{1}{\cdens[\Xinitv]{\chunk{Y}{t}{t+\ell-1}}{\chunk{Y}{0}{t-1}}}
\]
and the definition in \eqref{eq:def-b-T} imply that
\begin{equation}
\label{eq:ineq-alpha}
\alpha_t \leq B_{t,T} \left\{ \frac{2}{N-1} \sum_{\ell=1}^{T-t} \left( \frac{N-2}{N-1} \right)^{\ell-1} \alpha_{t+\ell} + \left( \frac{N-2}{N-1} \right)^{T-t} \right\} \eqsp.
\end{equation}
By a backward induction, define the sequence $\{\tilde{\alpha}_t \}_{t=0}^T$ as follows: set $\tilde{\alpha}_T = B_{T,T}$ and
\begin{equation}
\label{eq:def-tilde-alpha}
\tilde{\alpha}_t= B_{t,T} \left[ \frac{2}{N-1} \sum_{\ell=1}^{T-t} \left( \frac{N-2}{N-1} \right)^{\ell-1} \tilde{\alpha}_{t+\ell} + \left( \frac{N-2}{N-1} \right)^{T-t} \right] \eqsp.
\end{equation}
Since by construction, $\alpha_T \leq B_{T,T} = \tilde{\alpha}_T$, an elementary backward recursion using \eqref{eq:ineq-alpha} shows that,
\begin{equation}
\label{eq:tilde-alpha-bound-alpha}
\text{for all $t\in \{0,\dots,T\}$, $\alpha_t \leq \tilde{\alpha}_t$} \eqsp.
\end{equation}
Moreover,
\begin{align*}
\tilde{\alpha}_{t-1} &= B_{t-1,T} \left[ \frac{2}{N-1} \sum_{\ell=1}^{T-t+1} \left( \frac{N-2}{N-1} \right)^{\ell-1} \tilde{\alpha}_{t+\ell-1} + \left( \frac{N-2}{N-1} \right)^{T-t+1} \right]\\
&= B_{t-1,T} \left[ \frac{2}{N-1} \tilde{\alpha}_t + \frac{2}{N-1} \sum_{k=1}^{T-t} \left( \frac{N-2}{N-1} \right)^{k} \tilde{\alpha}_{t+k} + \left( \frac{N-2}{N-1} \right)^{T-t+1} \right] \\
& = \frac{2 B_{t-1,T}}{N-1} \tilde{\alpha}_t + B_{t-1,T} \frac{N-2}{N-1} \frac{\tilde{\alpha}_t}{B_{t,T}} \\
& = \frac{B_{t-1,T}}{B_{t,T}} \left[ \frac{2 B_{t,T}}{N-1} + \frac{N-2}{N-1} \right] \tilde{\alpha}_t \eqsp.
\end{align*}
Therefore
\begin{equation}
\label{eq:tilde-alpha-value}
\tilde{\alpha}_0 = B_{0,T} \prod_{t=1}^{T} \frac{2 B_{t,T} + N -2}{N-1} \eqsp.
\end{equation}
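The collapse of the backward recursion \eqref{eq:def-tilde-alpha} into the product \eqref{eq:tilde-alpha-value} can also be verified numerically. The sketch below (illustrative only; $T$, $N$ and the values standing in for $B_{t,T}$ are arbitrary) checks the identity in exact arithmetic:

```python
from fractions import Fraction

# Numerical check: the backward recursion defining tilde{alpha}_t
# collapses to the stated product formula for tilde{alpha}_0.
T, N = 6, 5
B = [Fraction(i + 2, 3) for i in range(T + 1)]  # stands in for B_{t,T}
r = Fraction(N - 2, N - 1)

alpha = [None] * (T + 1)
alpha[T] = B[T]
for t in range(T - 1, -1, -1):
    s = sum(r ** (l - 1) * alpha[t + l] for l in range(1, T - t + 1))
    alpha[t] = B[t] * (Fraction(2, N - 1) * s + r ** (T - t))

# Product formula: note (2 B_t + N - 2)/(N - 1) = 2 B_t/(N - 1) + r.
product = B[0]
for t in range(1, T + 1):
    product *= Fraction(2, 1) * B[t] / (N - 1) + r

assert alpha[0] == product
```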
Now, since
\begin{multline*}
h_0(x) = \frac{2}{N-1} \sum_{s=1}^{T} \left( \frac{N-2}{N-1} \right)^{s-1} \beta_{s} \Kunf{\chunk{Y}{1}{s-1}} \bigone(x) \\
+
\left( \frac{N-2}{N-1} \right)^{T} \Kunf{\chunk{Y}{1}{T}}\bigone(x) \eqsp,
\end{multline*}
we have
\begin{equation*}
\Xinit{g(\cdot,Y_0) h_0(\cdot)}= \frac{2}{N-1} \sum_{s=1}^{T} \left( \frac{N-2}{N-1} \right)^{s-1} \beta_{s} \dens[\Xinitv]{\chunk{Y}{0}{s-1}}
+ \left( \frac{N-2}{N-1} \right)^{T} \dens[\Xinitv]{\chunk{Y}{0}{T}} \eqsp.
\end{equation*}
Plugging \eqref{eq:def-Q-t} into this equation and using that
$$
\dens[\Xinitv]{\chunk{Y}{0}{s-1}}
=\frac{ \dens[\Xinitv]{\chunk{Y}{0}{T}} }{ \cdens[\Xinitv]{\chunk{Y}{s}{T}}{\chunk{Y}{0}{s-1}} }
$$
yield
\begin{align*}
\Xinit{g(\cdot,Y_0) h_0(\cdot)}= \frac{2}{N-1} \sum_{s=1}^{T} \left( \frac{N-2}{N-1} \right)^{s-1} \alpha_{s}
\dens[\Xinitv]{\chunk{Y}{0}{T}}
+
\left( \frac{N-2}{N-1} \right)^{T} \dens[\Xinitv]{\chunk{Y}{0}{T}} \eqsp.
\end{align*}
Finally, using \eqref{eq:tilde-alpha-bound-alpha} and then \eqref{eq:def-tilde-alpha},
\begin{align*}
&(N-2)\Xinit{g(\cdot,Y_0) h_0(\cdot)}+2 \beta_0\\
&\quad \leq\dens[\Xinitv]{\chunk{Y}{0}{T}} \left\{ (N-2)\left[\frac{2}{N-1} \sum_{s=1}^{T} \left( \frac{N-2}{N-1} \right)^{s-1} \alpha_{s}+\left(\frac{N-2}{N-1} \right)^{T}\right] +2 \alpha_0\right\}\\
&\quad \leq \dens[\Xinitv]{\chunk{Y}{0}{T}} \left\{ (N-2)\left[\frac{2}{N-1} \sum_{s=1}^{T} \left( \frac{N-2}{N-1} \right)^{s-1} \tilde{\alpha}_{s}+\left(\frac{N-2}{N-1} \right)^{T}\right] +2 \tilde{\alpha}_0\right\}\\
&\quad \leq \dens[\Xinitv]{\chunk{Y}{0}{T}} \left((N-2) \frac{\tilde{\alpha}_0}{B_{0,T}}+2 \tilde{\alpha}_0 \right)\\
&\quad =\dens[\Xinitv]{\chunk{Y}{0}{T}} (N-2+2 B_{0,T})\prod_{t=1}^{T} \frac{2 B_{t,T} + N -2}{N-1} \eqsp,
\end{align*}
where the last equality follows from \eqref{eq:tilde-alpha-value}. The proof follows.
\end{proof}
\section{Proof of \autoref{prop:N-depend-on-T}}
\label{sec:proof:prop:N-depend-on-T}
Define
\begin{align}
\label{eq:def-hat-B-nu-0}
&\Bhat[t]{t+\ell}{\mu}{\param}\eqdef
\frac{\supnorm{\ewghtfuncf[\param]{Y_t}} \supnorm{\Kunf[\param]{\chunk{Y}{t+1}{t+\ell}} \bigone}}{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]}
\eqsp, \\
&\Chat[t]{t+\ell}{\mu}{\param} \eqdef \frac{\supnorm{\ewghtfuncf[\param]{Y_t}} \int \lambda(\rmd x_{t+1}) g^\param(x_{t+1},Y_{t+1}) \Kunf[\param]{\chunk{Y}{t+2}{t+\ell}} \bigone(x_{t+1})}{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]}
\eqsp. \label{eq:def-check-b-t-ell}
\end{align}
Note that
\begin{align}
\Bhat[t]{t+\ell}{\mu}{\param} &= \Bbar[t]{t+\ell}{\mu}{\param} \frac{ \dens[\mu,t]{\chunk{Y}{t}{t+\ell}}[\param] }{ \cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param] }
&&\text{and} &
\Chat[t]{t+\ell}{\mu}{\param} &= \Cbar[t]{t+\ell}{\mu}{\param} \frac{ \dens[\mu,t]{\chunk{Y}{t}{t+\ell}}[\param] }{ \cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param] }
\eqsp,
\label{eq:rel-bar-B-hat-B}
\end{align}
where $\Bbar[t]{t+\ell}{\mu}{\param}$ and $\Cbar[t]{t+\ell}{\mu}{\param}$ are defined in \eqref{eq:def-bar-b-t-ell} and
\eqref{eq:def-bar-c-t-ell}, respectively.
\begin{lemma} \label{lem:tilde-B-martingale}
For all $\param \in \Param$, the sequence $\{\Chat[t]{t+\ell}{\mu}{\param}\}_{\ell \geq 0}$ defined in \eqref{eq:def-check-b-t-ell} is a $(\PP^\param_\mu,\{\mcf_{t+\ell}\}_{\ell \geq 0})$-martingale, where $\mcf_{t}=\sigma(\chunk{Y}{0}{t})$.
\end{lemma}
\begin{proof}
For all $\ell \geq 0$,
\begin{multline*}
\CPEdoup[\mu]{\param}{\Chat[t]{t+\ell+1}{\mu}{\param}}{\mcf_{t+\ell}}
= \supnorm{\ewghtfuncf[\param]{Y_t}} \int
\Bigg\{ \cdens[\mu]{y_{t+\ell+1}}{\chunk{Y}{0}{t+\ell}}[\param] \\
\times \frac{ \int \lambda(\rmd x_{t+1}) g^\param(x_{t+1},Y_{t+1}) \Kun[\param]{\chunk{Y}{t+2}{t+\ell}}{x_{t+1}}{\rmd x_{t+\ell}} \Kunf[\param]{y_{t+\ell+1}}\bigone(x_{t+\ell})}{\cdens[\mu]{\chunk{Y}{t}{t+\ell},y_{t+\ell+1}}{\chunk{Y}{0}{t-1}}[\param]}
\kappa(\rmd y_{t+\ell+1}) \Bigg\}
\eqsp.
\end{multline*}
Combining this identity with
$$
\cdens[\mu]{\chunk{Y}{t}{t+\ell},y_{t+\ell+1}}{\chunk{Y}{0}{t-1}}[\param]=\cdens[\mu]{y_{t+\ell+1}}{\chunk{Y}{0}{t+\ell}}[\param]
\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param] \eqsp,
$$
and $\int \Kunf[\param]{y_{t+\ell+1}}\bigone(x_{t+\ell}) \kappa(\rmd y_{t+\ell+1})=M^\param(x_{t+\ell},\Xset)=1$, we obtain
\begin{multline*}
\CPEdoup[\mu]{\param}{\Chat[t]{t+\ell+1}{\mu}{\param}}{\mcf_{t+\ell}} \\
= \frac{\supnorm{\ewghtfuncf[\param]{Y_t}} \int \lambda(\rmd x_{t+1}) g^\param(x_{t+1},Y_{t+1}) \Kunf[\param]{\chunk{Y}{t+2}{t+\ell}}\bigone(x_{t+1}) }{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]}=\Chat[t]{t+\ell}{\mu}{\param}\eqsp,
\end{multline*}
which completes the proof.
\end{proof}
\begin{lemma} \label{lem:bound:NTU}
For all $0\leq \gamma<1$ and all $\ell \in \nset$,
\begin{align}
&\PE[\param]{(\Bhat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]\leq \PE[\param]{(\Bbar[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]\eqsp, \label{eq:ntu}\\
&\PE[\param]{(\Chat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]\leq \PE[\param]{(\Cbar[t]{t+\ell}{\mu}{\param})^\gamma }[\mu]\eqsp.\label{eq:nus}
\end{align}
\end{lemma}
\begin{proof}
Using \eqref{eq:rel-bar-B-hat-B}, the proofs of \eqref{eq:ntu} and \eqref{eq:nus} follow from the inequality:
\begin{equation} \label{eq:ineg-b}
\PE[\param]{\frac{\psi(\chunk{Y}{t}{t+\ell})}{\left\{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^\gamma}}[\mu]
\leq
\PE[\param]{\frac{\psi(\chunk{Y}{t}{t+\ell})}{\left\{\dens[\mu,t]{\chunk{Y}{t}{t+\ell}}[\param]\right\}^\gamma}}[\mu]\eqsp,
\end{equation}
which holds for any nonnegative measurable function $\psi: \Yset^{\ell+1} \to \rset^+$. We now show \eqref{eq:ineg-b}. Note first that, by applying the tower property of the conditional expectation and then the Tonelli-Fubini theorem, we get
\begin{align}
\nonumber
\PE[\param]{\frac{\psi(\chunk{Y}{t}{t+\ell})}{\left\{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^\gamma}}[\mu]
&= \PE[\param]{\CPEdoup[\mu]{\param}{ \frac{\psi(\chunk{Y}{t}{t+\ell})}{\left\{\cdens[\mu]{\chunk{Y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^\gamma} }{\chunk{Y}{0}{t-1}} }[\mu] \\
\nonumber
&= \PE[\param]{\int \psi(\chunk{y}{t}{t+\ell}) \left\{\cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^{1-\gamma}
\kappa^{\otimes(\ell+1)}(\rmd y_{t:t+\ell})}[\mu] \\
\label{eq:borne-B00alpha-1}
&= \int \psi(\chunk{y}{t}{t+\ell}) \PE[\param]{\left\{\cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^{1-\gamma}}[\mu]
\kappa^{\otimes(\ell+1)}(\rmd y_{t:t+\ell}) \eqsp.
\end{align}
By Jensen's inequality, $\PE[\param]{\left\{\cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]\right\}^{1-\gamma}}[\mu] \leq
\left\{ \PE[\param]{\cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]}[\mu] \right\}^{1-\gamma}$. On the other hand,
\begin{align}
\PE[\param]{\cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{Y}{0}{t-1}}[\param]}[\mu]
= \int \cdens[\mu]{\chunk{y}{t}{t+\ell}}{\chunk{y}{0}{t-1}}[\param] \dens[\mu]{\chunk{y}{0}{t-1}}[\param] \kappa^{\otimes t}(\rmd y_{0:t-1})
= \dens[\mu,t]{\chunk{y}{t}{t+\ell}}[\param]
\eqsp.
\end{align}
The proof of \eqref{eq:ineg-b} follows by combining the above relations.
\end{proof}
\begin{lemma} \label{lem:tilde-B-moment-alpha}
Assume \ref{assum:m-upper-bound} and \ref{assum:b-moment-bound}. Then, for all $0\leq \gamma<\alpha$,
$$
\sup_{t \geq 0} \sup_{\param \in \Param}\PE[\param]{\left(\sup_{\ell \geq 0} \Bhat[t]{t+\ell}{\mu}{\param}\right)^\gamma}[\mu]<\infty\eqsp,
$$
where $\alpha$ is defined in \ref{assum:b-moment-bound}.
\end{lemma}
\begin{proof}
Under \ref{assum:m-upper-bound}, the definitions of $\Bhat[t]{t+\ell}{\mu}{\param}$
and $\Chat[t]{t+\ell}{\mu}{\param}$ imply
$$
\sup_{\ell \geq 0} \Bhat[t]{t+\ell}{\mu}{\param} \leq \sum_{\ell=0}^{\ell_\star-1}
\Bhat[t]{t+\ell}{\mu}{\param}
+ \sup_{\ell \geq \ell_\star} \Bhat[t]{t+\ell}{\mu}{\param} \leq
\sum_{\ell=0}^{\ell_\star-1}\Bhat[t]{t+\ell}{\mu}{\param}
+ \sigma_+ \sup_{\ell \geq \ell_\star} \Chat[t]{t+\ell}{\mu}{\param}\eqsp,
$$
where $\sigma_+$ and $\ell_\star$ are defined in \ref{assum:m-upper-bound} and \ref{assum:b-moment-bound}, respectively. Then, by subadditivity of $u \mapsto u^\gamma$,
$$
\PE[\param]{\left(\sup_{\ell \geq 0} \Bhat[t]{t+\ell}{\mu}{\param} \right)^\gamma}[\mu] \leq \sum_{\ell=0}^{\ell_\star-1}\PE[\param]{(\Bhat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]
+ (\sigma_+)^\gamma \PE[\param]{\sup_{\ell \geq \ell_\star} (\Chat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]\eqsp.
$$
Applying \autoref{lem:bound:NTU} and \eqref{eq:bound-moment-B}, it is thus sufficient to
bound $$\PE[\param]{ \sup_{\ell \geq \ell_\star} (\Chat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu] \eqsp.$$
Since by \autoref{lem:tilde-B-martingale}, $\{\Chat[t]{t+\ell}{\mu}{\param}\}_{\ell \geq 0}$ is a
$\{\mcf_{t+\ell}\}_{\ell \geq 0}$-martingale and $\alpha \in (0,1)$, we have that $\{(\Chat[t]{t+\ell}{\mu}{\param})^\alpha\}_{\ell \geq 0}$ is a nonnegative $\{\mcf_{t+\ell}\}_{\ell \geq 0}$-supermartingale. The Doob maximal inequality then applies: for all $a>0$,
$$
a \CPPdoup[\mu]{\param}{\sup_{\ell \geq \ell_\star} (\Chat[t]{t+\ell}{\mu}{\param})^\alpha \geq a}{\mcf_{t+\ell_\star-1}}
\leq \CPEdoup[\mu]{\param}{(\Chat[t]{t+\ell_\star}{\mu}{\param})^\alpha}{\mcf_{t+\ell_\star-1}} \eqsp.
$$
Take now the expectation on both sides of the previous inequality and set $\delta=a^{\gamma/\alpha}$. We obtain
\begin{align*}
\PPdoup[\mu]{\param}{\sup_{\ell \geq \ell_\star} (\Chat[t]{t+\ell}{\mu}{\param})^\gamma \geq \delta}
\leq \delta^{-\alpha/\gamma}
\PE[\param]{(\Chat[t]{t+\ell_\star}{\mu}{\param})^\alpha}[\mu] \eqsp.
\end{align*}
Combining this with the inequality $\PE{U} \leq 1+ \int_{1}^\infty \PP\left[ U>\delta \right]\,\rmd \delta$, which holds for any nonnegative random variable $U$, we obtain under \ref{assum:b-moment-bound}
\begin{align*}
\PE[\param]{\sup_{\ell \geq \ell_\star} (\Chat[t]{t+\ell}{\mu}{\param})^\gamma}[\mu]
&\leq 1 + \left(\int_1^\infty \delta^{-\alpha/\gamma} \rmd \delta \right)
\PE[\param]{(\Chat[t]{t+\ell_\star}{\mu}{\param})^\alpha }[\mu] \\
&=1+\frac{\gamma}{\alpha-\gamma}\PE[\param]{(\Chat[t]{t+\ell_\star}{\mu}{\param})^\alpha}[\mu] \eqsp.
\end{align*}
The proof follows by applying again \autoref{lem:bound:NTU} under \ref{assum:b-moment-bound}.
\end{proof}
\begin{proof}[Proof of \autoref{prop:N-depend-on-T}]
For simplicity we will use in this proof the notations
$\densstat{\chunk{Y}{0}{t}}[\param] \eqdef \dens[\pi^\param]{\chunk{Y}{0}{t}}[\param]$ and
$\cdensstat{\chunk{Y}{t}{s}}{\chunk{Y}{0}{t-1}}[\param]= \cdens[\pi^\param]{\chunk{Y}{t}{s}}{\chunk{Y}{0}{t-1}}[\param]$.
First note that
\begin{align*}
\PPstat^\tparam \left\{\frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}> \rho\right\}&=
\PPstat^\tparam \left\{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}+\frac{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}{\densstat{\chunk{Y}{0}{T}}[\tparam]}-1
> \ln \rho+\frac{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}{\densstat{\chunk{Y}{0}{T}}[\tparam]}-1\right\}\\
&\leq \PPstat^\tparam \left\{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}+\frac{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}{\densstat{\chunk{Y}{0}{T}}[\tparam]}-1
> \ln \rho-1\right\} \eqsp.
\end{align*}
Now, since for all $u > 0$, $\ln(u)+u^{-1}-1\geq 0$, we obtain for all $\rho>\rme= \exp(1)$:
\begin{align*}
\PPstat^\tparam \left\{\frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]} > \rho\right\}&\leq \frac{1}{\ln \rho-1}
\PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}+\frac{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}{\densstat{\chunk{Y}{0}{T}}[\tparam]}-1} \\
&= \frac{1}{\ln \rho-1}
\PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}} \eqsp.
\end{align*}
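The elementary bound $\ln u + u^{-1} - 1 \geq 0$ invoked above can be seen directly: $\varphi(u)=\ln u + 1/u - 1$ has derivative $(u-1)/u^2$, so $u=1$ is the global minimum with $\varphi(1)=0$. A quick numerical spot check (illustrative only):

```python
import math

# phi(u) = ln(u) + 1/u - 1 is nonnegative on (0, infinity),
# with equality exactly at u = 1.
def phi(u):
    return math.log(u) + 1.0 / u - 1.0

grid = [10.0 ** e for e in range(-6, 7)] + [0.5, 0.9, 1.0, 1.1, 2.0]
assert all(phi(u) >= -1e-12 for u in grid)
assert abs(phi(1.0)) < 1e-15
```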
This implies that for all $M>0$ and all $\rho>\rme$,
\begin{align}
\PPstat^\tparam(\mineps{T}{N_T}^{-1}(\param_T)>M) &\leq \PE[\param_T]{\frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}
\1_{\left\{\frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}\leq \rho \right\}}
\1_{\left\{\mineps{T}{N_T}^{-1}(\param_T)>M \right\}}}[\mu]+\PPstat^\tparam \left\{\frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}> \rho\right\} \nonumber\\
&\leq \rho \PPdoup[\mu]{\param_T}{\mineps{T}{N_T}^{-1}(\param_T)>M}+ \frac{1}{\ln \rho-1} \PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}} \nonumber\\
&\leq \rho \PPdoup[\mu]{\param_T}{\mineps{T}{N_T}^{-1}(\param_T)>M}+ \frac{1}{\ln \rho-1}\left( \sup_{T \geq 0}\PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}} \right) \eqsp. \label{eq:one}
\end{align}
We first consider the last term of the \rhs. Note that, by the tower property,
\[
\PEstat[\tparam]{\ln \frac{\cdensstat{\chunk{X}{0}{T}}{\chunk{Y}{0}{T}}[\tparam]}{\cdens[\mu]{\chunk{X}{0}{T}}{\chunk{Y}{0}{T}}[\param_T]}} =\PEstat[\tparam]{
\idotsint
\left(\ln \frac{\cdensstat{\chunk{x}{0}{T}}{\chunk{Y}{0}{T}}[\tparam]}{\cdens[\mu]{\chunk{x}{0}{T}}{\chunk{Y}{0}{T}}[\param_T]}\right)
\cdensstat{\chunk{x}{0}{T}}{\chunk{Y}{0}{T}}[\tparam]
\prod_{i=0}^T \lambda(\rmd x_i)
} \geq 0
\]
because this quantity is the expectation under the stationary distribution of a Kullback-Leibler divergence.
This implies that
\begin{equation*}
\PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}} \leq \PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}}+\PEstat[\tparam]{\ln \frac{\cdensstat{\chunk{X}{0}{T}}{\chunk{Y}{0}{T}}[\tparam]}{\cdens[\mu]{\chunk{X}{0}{T}}{\chunk{Y}{0}{T}}[\param_T]}}= \PEstat[\tparam]{\ln \frac{\densstat{\chunk{X}{0}{T},\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{X}{0}{T},\chunk{Y}{0}{T}}[\param_T]}}.
\end{equation*}
On the other hand, using
\begin{multline*}
\PEstat[\tparam]{\ln \frac{\densstat{\chunk{X}{0}{T},\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{X}{0}{T},\chunk{Y}{0}{T}}[\param_T]}} \\=
\PEstat[\tparam]{\ln \left(\frac{ \pi^{\tparam}(X_0) g^{\tparam}(X_0,Y_0) }{\mu(X_0) g^{\param_T}(X_0,Y_0)} \right)}
+T \PEstat[\tparam]{\ln\left(\frac{m^{\tparam}(X_0,X_1)g^{\tparam}(X_1,Y_1) }
{ m^{\param_T}(X_0,X_1)g^{\param_T}(X_1,Y_1)}\right)}\eqsp,
\end{multline*}
we obtain, under \eqref{eq:nuit-de-singapour} and \eqref{eq:nuit-de-singapour-2}, that
\begin{equation}
\label{eq:bound-sup-T-exp}
\sup_{T \geq 0}\PEstat[\tparam]{\ln \frac{\densstat{\chunk{Y}{0}{T}}[\tparam]}{\dens[\mu]{\chunk{Y}{0}{T}}[\param_T]}}<\infty\eqsp.
\end{equation}
Assume first that
\begin{equation}\label{eq:two}
\limsup_{M \to \infty} \sup_{T \geq 0}\PPdoup[\mu]{\param_T}{\mineps{T}{N_T}^{-1}(\param_T)>M}=0\eqsp.
\end{equation}
The proof of the tightness of $\{\mineps{T}{N_T}^{-1}(\param_T)\}_{T \geq0}$ then follows by plugging \eqref{eq:bound-sup-T-exp} into \eqref{eq:one} and by noting that \eqref{eq:one} holds for all $\rho>\rme$, combined with \eqref{eq:two}.
To complete the proof, it thus remains to show \eqref{eq:two}. Rewriting the definition \eqref{eq:def-epsilon}, we obtain
\begin{equation*}
\mineps{T}{N_T}^{-1}(\param_T) \leq
\prod_{t=0}^T \frac{2 B_t^{\param_T}+N_T-2}{N_T-1} = \exp\left\{\sum_{t=0}^T \ln\left(\frac{2 B_t^{\param_T}+N_T-2}{N_T-1}\right)\right\}
\leq \exp\left\{\sum_{t=0}^T \frac{2 B_t^{\param_T}-1}{N_T-1}\right\} \eqsp,
\end{equation*}
where $B_t^\param\eqdef \sup_{\ell \geq 0} \Bhat[t]{t+\ell}{\mu}{\param}$. This implies that for $M>1$,
$$
\PPdoup[\mu]{\param_T}{\mineps{T}{N_T}^{-1}(\param_T)>M} \leq \PPdoup[\mu]{\param_T}{\sum_{t=0}^T \frac{2 B_t^{\param_T}-1}{N_T-1}>\ln M}
\leq \frac{1}{(\ln M)^\gamma} \PE[\param_T]{\left(\sum_{t=0}^T \frac{2 B_t^{\param_T}-1}{N_T-1}\right)^\gamma}[\mu] \eqsp.
$$
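The step from the product to the exponential bound above uses $\ln\{(2B+N-2)/(N-1)\} = \ln\{1+(2B-1)/(N-1)\} \leq (2B-1)/(N-1)$, an instance of $\ln(1+x)\leq x$. A quick spot check (illustrative only, with arbitrary test values):

```python
import math

# ln((2B + N - 2)/(N - 1)) <= (2B - 1)/(N - 1), i.e. ln(1 + x) <= x
# with x = (2B - 1)/(N - 1) > -1.
N = 10
for B in [0.0, 0.25, 0.5, 1.0, 3.0, 50.0]:
    lhs = math.log((2 * B + N - 2) / (N - 1))
    rhs = (2 * B - 1) / (N - 1)
    assert lhs <= rhs + 1e-12
```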
The proof of \eqref{eq:two} follows by noting that $N_T\sim T^{1/\gamma}$ and by using
$$
\PE[\param_T]{\left(\frac{\sum_{t=0}^T (2B_t^{\param_T}-1)}{T^{1/\gamma}} \right)^\gamma}[\mu]
\leq \PE[\param_T]{\frac{\sum_{t=0}^T (2B_t^{\param_T})^\gamma}{T}}[\mu] \leq 2^\gamma \sup_{t \geq 0} \sup_{\param \in \Param}\PE[\param]{(B_{t}^\param)^\gamma}[\mu]<\infty \eqsp,
$$
where the last inequality follows from \autoref{lem:tilde-B-moment-alpha}.
\end{proof}
\section*{Author address}
\noindent
{Fredrik Lindsten \\
Division of Automatic Control \\
Link\"oping University \\
SE--581 83 Link\"oping, Sweden\\
E-mail: \texttt{[email protected]}\par}
\bibliographystyle{Chicago}
\section{Introduction}
In recent years, atomistic machine learning models have become increasingly popular as a way to perform fast predictions of molecular and material properties with the accuracy of first-principles quantum mechanical calculations~\cite{behler2017}, but at a much reduced cost.
The success of these methods has gone hand-in-hand with the progress in constructing representations for molecular and materials configurations that are flexible enough to be transferred across a wide spectrum of different atomic arrangements, while satisfying, at the same time, stringent symmetry constraints~\cite{behler2007,bartok2013,Shapeev2015,Glielmo2017,grisafi2018}.
At the core of the vast majority of transferable machine-learning models for physical properties lies the local nature of the underlying atomistic representation.
This is usually constructed by considering the set of atomic coordinates that are included within spherical environments of a given radial cutoff around any arbitrary atomic center~\cite{bartok2017,Chmiela2017,Zhang2018}.
The prediction of a given physical property is therefore formally decomposed into a sum of atom-centered contributions that effectively incorporate information associated with many-body structural correlations between atoms in each local environment. This locality assumption is very convenient, as it keeps at bay the dimensionality of the regression problem that is modelled by ML, and is physically justified by the nearsightedness principle of electronic matter~\cite{Prodan2005}.
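The additive atom-centred construction can be sketched in a few lines of code (a schematic illustration only; the local model eps, the cutoff value, and all names are placeholders standing in for a trained regressor, not an implementation from the literature):

```python
import math

# Schematic additive atom-centred model: the total property is a sum of
# per-atom contributions depending only on atoms within a cutoff r_cut.
def neighbours(positions, i, r_cut):
    xi = positions[i]
    return [tuple(x - c for x, c in zip(p, xi))
            for j, p in enumerate(positions) if j != i
            and math.dist(p, xi) < r_cut]

def total_property(positions, eps, r_cut=5.0):
    return sum(eps(neighbours(positions, i, r_cut))
               for i in range(len(positions)))

# Stand-in for a learned local model: a smooth pairwise function of the
# displacements inside the environment.
def eps(env):
    return sum(math.exp(-math.hypot(*d)) for d in env)

atoms = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (9.0, 0.0, 0.0)]
print(total_property(atoms, eps))
```

Note that the third atom lies outside every cutoff sphere, so it contributes nothing here; this is precisely the truncation of far-field effects discussed next.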
The major drawback is that it neglects long-range physical effects. Long-range electrostatic interactions, for example, are known to play a fundamental role in the description of ionic systems~\cite{kjellander2018}, macroscopically polarized interfaces~\cite{guo2018}, electrode surfaces~\cite{jorn2013} and nano-science in general~\cite{french2010}.
In all these cases, the pathologically slow decay $\sim 1/r$ of the Coulomb interaction makes it virtually impossible to reach convergence while using a local machine-learning scheme, which is reflected in an effective limit to the accuracy that can be reached by these models.
The problem of incorporating long-range effects in electronic energy predictions is usually tackled by explicitly separating the local many-body contribution to the total energy from a classical electrostatic term approximated via pairwise Coulomb interactions.
This can be done either by direct subtraction of the Ewald-like electrostatic energy of the system~\cite{bartok2010,Deng2019}, or by machine learning, in turn, the partial charges and the atomic multipoles that determine the long-range electrostatics~\cite{Artrith2011,Bereau2015,Bereau2017,Bleiziffer2018,Nebgen2018,Yao2018}.
Other more sophisticated models, specifically designed for ionic systems, rely on a charge equilibration scheme~\cite{Ghasemi2015,Faraji2017}. %
Beyond electrostatic energies, the breakdown of a local machine learning model is particularly pronounced when dealing with intrinsically non-local quantities like the dielectric response of a condensed-phase medium~\cite{grisafi2018}. This non-locality has to do both with the effect of the far-field electrostatics~\cite{bottcher1978}, and with the topological quantum nature of the macroscopic polarization of an infinitely extended material~\cite{resta1994,Resta2010}.
In this case, the problem can possibly be bypassed by adopting specific physical prescriptions. Examples of this can be found in Ref.~\cite{grisafi2018}, where the dielectric tensor $\boldsymbol{\varepsilon}_\infty$ of liquid water is learned indirectly by building a model for an effective molecular polarizability that is mapped to $\boldsymbol{\varepsilon}_\infty$ through the Clausius-Mossotti relationship~\cite{bottcher1978}.
In the context of reproducing the autocorrelation function of the macroscopic polarization of liquid water, another strategy has recently been adopted, where the selected learning targets are the positions of the Wannier centers that are used to recast the electron density of the system into a set of point-charges~\cite{Zhang2019arxiv}.
By and large, the learning models previously described tackle the problem of including long-range phenomena by making use of an \textit{ad hoc} definition of the electrostatic energy, or dielectric response, in terms of local atomic quantities.
Although successful, this kind of approach has the downside of being very system dependent and, as such, hardly transferable across systems of a different nature, e.g., those involving charge transfer, or charge polarizability in (near)-metallic systems~\cite{Wilkins2019}.
Capturing long-range effects without any prior assumption on the nature of the learning target is a difficult task to accomplish with the methods currently available. Most of the approaches that have explicitly attempted to do so, such as Coulomb kernels~\cite{Rupp2012}, many-body tensor representations~\cite{Huo2017arxiv}, or multi-scale invariants~\cite{Hirn2017}, are built upon a global representation of the system rather than on an additive atom-centred model.
Here we propose a simple, yet elegant, solution to this problem, where the non-local character of the target property is incorporated in a symmetry-equivariant fashion into an atom-centered representation. In doing so, we construct a formalism that ensures that the resulting features exhibit the correct asymptotic dependence on the distribution of atoms in the far-field.
This representation can be incorporated straightforwardly into conventional, additive machine learning models.
While the idea is very general, we present as an example a model that has an asymptotic behavior consistent with electrostatic interactions. We show that it can be used successfully to build a local machine learning model that accurately reproduces Coulomb interactions between point particles, the binding curves of charged organic fragments, and the electronic dielectric response of bulk water.
\section{Long-distance equivariant representation}
Let us start from the same formal definition of a ML representation of a structure $\mathcal{A}$ that was introduced in Ref.~\cite{willatt2019}, written in the position basis as a decorated atom density
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{A}} = \sum_i g(\mathbf{r} -\mathbf{r}_i) \ket{\alpha_i},
\label{eq:rA-ket}
\end{equation}
where the index $i$ runs over all the atoms in the structure, $g$ is a Gaussian (or another localized function) peaked at each atom's position $\mathbf{r}_i$, and $\ket{\alpha_i}$ is an abstract vector that encodes the chemical nature of the atom.
We now introduce an atom-density potential representation
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{V}^p} = \sum_i \ket{\alpha_i} \int \textrm{d}\br'\, \frac{g(\mathbf{r}' -\mathbf{r}_i)}{\left|\mathbf{r}'-\mathbf{r}\right|^p}. \label{eq:rVp-ket}
\end{equation}
The rationale for performing this transformation (that can be seen as the action of a linear integral operator on $\ket{\mathcal{A}}$) is that, whereas $\bra{\mathbf{r}}\ket{\mathcal{A}}$ contains information only about the atoms in the vicinity of $\mathbf{r}$, $\bra{\mathbf{r}}\ket{\mathcal{V}^p}$ contains information about the position of \emph{all} atoms in the structure, with a dependence on the position of the $i$-th atom that decays asymptotically as $\left|\mathbf{r}-\mathbf{r}_i\right|^{-p}$\footnote{Evaluation of the integral for $p>1$ requires some form of regularization or short-distance cutoff to remove the singularity for $\mathbf{r}\rightarrow\mathbf{r}_i$.}.
The physical significance of $\ket{\mathcal{V}^p}$ is obvious, if one considers typical forms of the interactions between atoms and molecules.
For instance, if we had a single species and interpreted \eqref{eq:rA-ket} as a charge density, $\bra{\mathbf{r}}\ket{\mathcal{V}^{1}}$ would correspond to the electrostatic potential generated by such charge density. Analogously, the $p=6$ case would provide the formally correct asymptotic limit of the energy per particle associated with dispersion interactions~\cite{Dreizler2012}, which has inspired previous representations of local environments such as aSLATM~\cite{Huang2019arxiv}.
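To make the $p=1$ case concrete: with a normalized Gaussian density of width $\sigma$, each term of $\bra{\mathbf{r}}\ket{\mathcal{V}^{1}}$ has the closed form $\operatorname{erf}\!\left(d/\sqrt{2}\sigma\right)/d$, with $d=|\mathbf{r}-\mathbf{r}_i|$, which tends to the point-charge $1/d$ in the far field. A minimal numerical sketch (function and variable names are ours; unit-amplitude Gaussians are assumed):

```python
import math

def v1(r, centers, sigma=1.0):
    """p=1 atom-density potential of normalized Gaussians of width sigma.

    Each Gaussian centred at r_i contributes erf(d / (sqrt(2)*sigma)) / d,
    with d = |r - r_i|, which decays as 1/d far from the atom.
    """
    total = 0.0
    for c in centers:
        d = math.dist(r, c)
        total += math.erf(d / (math.sqrt(2.0) * sigma)) / d
    return total

centers = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
far = (100.0, 0.0, 0.0)
print(v1(far, centers))   # ~ 1/100 + 1/96: the point-charge limit
```

Far from all atoms the erf factors saturate to one, so the sketch reproduces the $1/d$ asymptotics discussed in the text.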
Proceeding as in Ref.~\citenum{willatt2019}, we can symmetrize the representation over the continuous translation group, taking a tensor product with the density representation to preserve structural information. One obtains the symmetrized ket
\begin{equation}
\ibraket{\mathbf{r}}{\mathcal{A}\mathcal{V}^p}{\hat{t}} =
\int \dd \hat{t} \bra{\boldsymbol{0}}\hat{t}\ket{\mathcal{A}}\bra{\mathbf{r}}\hat{t}\ket{\mathcal{V}^p} =
\sum_j \ket{\alpha_j}\bra{\mathbf{r}}\ket{\mathcal{V}^p_j},
\label{eq:rAVp-ket}
\end{equation}
where we introduced the shorthand notation (see the SI for a full derivation)
\begin{equation}
\bra{\mathbf{r}}\ket{\mathcal{V}^p_j} =
\sum_{i} \ket{\alpha_i} \int \textrm{d}\br' \frac{(g\star g)(\mathbf{r}' -(\mathbf{r}_i-\mathbf{r}_j))}{\left|\mathbf{r}'-\mathbf{r}\right|^p}\label{eq:rVj-ket}.
\end{equation}
Modulo the re-definition of the atom density function as the auto-correlation of $g$, $\bra{\mathbf{r}}\ket{\mathcal{V}^p_j}$ is just the atom-density potential~\eqref{eq:rVp-ket} computed using $\mathbf{r}_j$ as the origin of the reference frame.
Symmetrization over the translation group leads naturally to a structural representation that amounts to a sum over atom-centred descriptors -- foreshadowing an additive property model built on such feature vector. Particularly for low values of the potential exponent $p$, however, the integral in Eq.~\eqref{eq:rVj-ket} introduces a substantially non-local behavior. The value of $\bra{\mathbf{r}}\ket{\mathcal{V}^p_j}$ in the vicinity of the central atom $j$ can in principle depend on the position of atoms that are very far from it, \emph{even if one introduces a cutoff function that restricts the range of $\bra{\mathbf{r}}\ket{\mathcal{V}^p_j}$ around the central atom, and hence its complexity}.
One can then symmetrize further over the rotation group and over inversion symmetry. We will refer from now on to the resulting class of atomistic representations that capture long-range interactions based on the local value of an atom-density potential as the \textit{long-distance equivariant}~(LODE) framework. In the following we will focus on the case of $p=1$, that corresponds to electrostatic interactions.
It is instructive to first consider the case of the first order spherical invariant, and to take the limit in which the atom density is represented by Dirac-$\delta$ distributions. It is easy to see that in this limit
\begin{equation}
\bra{\alpha\mathbf{r}}\ket{\mathcal{V}_j^1} =
\sum_{i\in\alpha} \frac{1}{\left|\mathbf{r} - \mathbf{r}_{ij}\right|}\label{eq:rV1-delta},
\end{equation}
where $\mathbf{r}_{ij}=\mathbf{r}_i-\mathbf{r}_j$.
Integrating over the SO(3) group yields the first invariant
\begin{equation}
\bra{\alpha r}\ket{{\mathcal{V}_j^1}^{(1)} } =
\int \dd \hat{R} \bra{\alpha r\hat{\mathbf{r}}}\hat{R}\ket{\mathcal{V}^1_j}=
\sum_{i\in\alpha} \min\left[\frac{1}{r},\frac{1}{r_{ij}}\right]\label{eq:rV1-delta-1},
\end{equation}
which simply sums up $1/r_{ij}$ terms for all atoms \emph{outside} the region over which the LODE representation is computed.
Ignoring the contribution from the atoms within the cutoff, which can be better characterized by other atomic structure representations, a linear model built on these features is equivalent to a fixed point-charge electrostatic model.
In other words, in this limit the radial dependence of the regression weights is integrated out, and the weights associated with each pair of central atom type $\alpha'$ and neighbor type $\alpha$ corresponds to the product of the atomic charges $q_{\alpha'}$ and $q_{\alpha}$.
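The $\min\left[1/r,1/r_{ij}\right]$ form of Eq.~\eqref{eq:rV1-delta-1} is the shell theorem in disguise: averaging $1/\left|\mathbf{r}-\mathbf{r}_{ij}\right|$ over all orientations of $\mathbf{r}_{ij}$ gives the potential of a uniform spherical shell, $1/\max(r,r_{ij})$. This can be checked by brute force (a Monte Carlo sketch; names and the sample size are our choices):

```python
import math
import random

random.seed(0)

def so3_average(r, r_ij, n=100_000):
    """Monte Carlo average of 1/|r e_z - R(r_ij e_z)| over uniform
    rotations R, i.e. over random directions of the neighbour vector."""
    acc = 0.0
    for _ in range(n):
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        s = r_ij / math.sqrt(x * x + y * y + z * z)
        acc += 1.0 / math.dist((0.0, 0.0, r), (x * s, y * s, z * s))
    return acc / n

# shell theorem: the average equals min(1/r, 1/r_ij)
print(so3_average(2.0, 5.0))   # ~ 1/5, neighbour outside the evaluation radius
print(so3_average(7.0, 5.0))   # ~ 1/7, neighbour inside the evaluation radius
```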
While this construction is very revealing, it is clear that its descriptive power is limited. Non-linear kernel models can provide a more flexible functional form, but higher-order invariants provide a systematic way of incorporating more information on structural features.
As in the SOAP framework for the atom density~\cite{bartok2013,de+16pccp,willatt2019}, the most convenient way to compute such invariants involves writing the scalar field associated with the species $\alpha$ on a basis of radial functions $R_n(r)$ and spherical harmonics~$Y^l_m(\hat{\mathbf{r}})$,
\begin{equation}
\bra{\alpha n l m}\ket{\mathcal{V}^p_j} = \int \dd \mathbf{r}\, R_n(r) Y^{l}_{m}(\hat{\mathbf{r}})^\star \bra{\alpha \mathbf{r}}\ket{\mathcal{V}^p_j}
\label{eq:anlm-ket}
\end{equation}
and then computing the appropriate spherically-covariant combinations. For example, for rotationally invariant representations of order $\nu=2$ (the form that is equivalent to the SOAP power spectrum and that we will use in applications)
\begin{equation}
\bra{\alpha n \alpha' n' l}\ket{{\mathcal{V}^p_j}^{(2)}} = \sum_{|m|\le l} \frac{\bra{\alpha n l m}\ket{\mathcal{V}^p_j}^\star\bra{\alpha' n' l m}\ket{\mathcal{V}^p_j}}{\sqrt{2l+1}}
\label{eq:nnm-ket}.
\end{equation}
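The rotational invariance of this contraction is easy to verify numerically for rotations about the $z$ axis, under which each coefficient only acquires a phase $e^{-\mathrm{i}m\phi}$ that cancels in the sum over $m$ (a sketch with random coefficients; the array shapes are our choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def power_spectrum(c):
    """nu=2 invariant p_{n n'}^l = sum_m c*_{nlm} c_{n'lm} / sqrt(2l+1).
    c has shape (n_max, 2l+1): radial index times m = -l..l."""
    return c.conj() @ c.T / np.sqrt(c.shape[1])

l = 2
c = rng.normal(size=(3, 2 * l + 1)) + 1j * rng.normal(size=(3, 2 * l + 1))
phase = np.exp(-1j * 0.7 * np.arange(-l, l + 1))  # rotation about z by 0.7 rad
print(np.allclose(power_spectrum(c), power_spectrum(c * phase)))  # True
```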
The extension to higher orders in spatial correlations $\nu>2$ and/or to rotationally covariant representations of a given spherical-tensor order $\lambda>0$ is straightforward based on the analogous density-based counterparts~\cite{grisafi2018,willatt2019,grisafi2019-arxiv}.
Note that it is also possible to compute representations that combine different values of $p$, and even $p=0$, corresponding to the atom-density field. A systematic investigation of the various combinations, and of their physical meaning, is left for future work.
\subsection{Efficient evaluation of the LODE representation}
As discussed in the SI, for molecules and clusters the expansion~\eqref{eq:anlm-ket} can be computed conveniently in real space, by numerical integration on appropriate atom-centred grids. For a bulk system, described by a periodically-repeated supercell, the long-range nature of the integral kernel that appears in~\eqref{eq:rVj-ket} would make computing the expansion prohibitive.
This is exactly the same problem one faces when evaluating electrostatic interactions in the condensed phase, and fortunately it has long been solved, e.g., with the many techniques based on the use of a plane-waves auxiliary basis~\cite{Ewald1921, Essmann1995}.
Consider the plane-wave definition $\bra{\mathbf{r}}\ket{\mathbf{k}} = e^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}$, with $\left\{\mathbf{k}\right\}$ representing a set of wave-vectors that are compatible with the simulation box. The fact that we start from a smooth Gaussian atom density means that in practice one needs only a manageable number of plane waves. In particular, the width $\sigma$ of the Gaussian density determines the minimum wavelength that should be included in the plane-wave expansion, so that $\mathbf{k}$-vectors only need to be generated within a sphere of radius $k_\text{max}$ of the order of $2\pi/\sigma$. In order to evaluate the local potential projections,
it is then enough to include the identity resolution $\sum_{\mathbf{k}}\ket{\mathbf{k}}\bra{\mathbf{k}}$ within the braket of Eq.~\eqref{eq:anlm-ket}, i.e.
\begin{equation}\label{eq:k-resolution}
\bra{\alpha n l m}\ket{\mathcal{V}^p_j}
= \sum_{\mathbf{k}}\bra{n l m}\ket{\mathbf{k}}\bra{\alpha \mathbf{k}}\ket{\mathcal{V}^p_j} .
\end{equation}
As detailed in the SI, $\bra{n l m}\ket{\mathbf{k}}$ corresponds to the expansion in plane waves of the basis of the local environment representation, and can be computed analytically once and for all if the radial functions are taken to be Gaussian type orbitals~\cite{Cahill2013}. Conversely, $\bra{\alpha \mathbf{k}}\ket{\mathcal{V}^p_j}$ represents the Fourier components of the potential generated by the Gaussian density of element $\alpha$ for the entire system, and can be readily computed analytically~\cite{Allen1989}.
As a result, the geometrically local nature of the representation of Eq.~\eqref{eq:k-resolution} is formally factorized from its system-dependent global character. It has not escaped our attention that Eq.~\eqref{eq:k-resolution} could also be used to compute efficiently the coefficients of the density expansion that enter, for instance, the SOAP framework.
In the context of electrostatic interactions, one should note that although the fictitious charge density distribution of Eq.~\eqref{eq:rA-ket} does not satisfy charge neutrality, one can avoid a divergence of the potential by ignoring the $\mathbf{k}$=$\boldsymbol{0}$ component from the sum of Eq.~\eqref{eq:k-resolution}.
Similarly, divergences in the potential for $p>1$ can be eliminated by appropriately regularizing the $1/r^p$ divergence in reciprocal space.
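In practice, the reciprocal-space part amounts to generating the $\mathbf{k}$-grid compatible with the cell up to $k_\text{max}\sim 2\pi/\sigma$ and accumulating $\sum_i e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}_i}\,e^{-\sigma^2 k^2/2}$, dropping the $\mathbf{k}=\boldsymbol{0}$ term as discussed above. A sketch for an orthorhombic cell (names and the safety factor are our choices; a production code would also exploit the $\mathbf{k}\rightarrow-\mathbf{k}$ symmetry):

```python
import numpy as np

def density_fourier(positions, cell, sigma=1.0, safety=1.5):
    """Fourier components of the Gaussian atom density, truncated at
    k_max = safety * 2*pi/sigma; the k=0 term is dropped, as in the text."""
    kmax = safety * 2 * np.pi / sigma
    nmax = np.ceil(kmax * cell / (2 * np.pi)).astype(int)
    axes = [2 * np.pi * np.arange(-n, n + 1) / L for n, L in zip(nmax, cell)]
    kv = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
    k2 = np.einsum('ij,ij->i', kv, kv)
    keep = (k2 > 0) & (k2 <= kmax**2)       # sphere of radius k_max, no k=0
    kv, k2 = kv[keep], k2[keep]
    # structure factor times Gaussian form factor
    rho_k = np.exp(-1j * kv @ positions.T).sum(axis=1) * np.exp(-0.5 * sigma**2 * k2)
    return kv, rho_k

# single atom at the origin: all components are real and positive
kv, rho_k = density_fourier(np.zeros((1, 3)), cell=np.array([10.0, 10.0, 10.0]))
print(np.allclose(rho_k.imag, 0.0), bool((rho_k.real > 0).all()))
```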
\section{Results}
We now proceed to test the performance of the LODE representation in the context of predicting scalar electrostatic properties.
In all cases we use Gaussian process regression with simple polynomial kernels, to emphasize the role of the features -- as opposed to the regression scheme -- in the performance of the model. Details of the parameters used in each example are reported in the SI.
We use the SOAP framework as the baseline for comparison, which is appropriate given the close relation between the two approaches, and the excellent performance demonstrated by SOAP-based models.
It is important however to stress that \emph{any} local model with a finite cutoff will exhibit behavior similar to what we observe with SOAP.
We also benchmark the combination of SOAP and LODE, that incorporates the advantages of both short-range and long-range models, realizing a kind of range-separated machine learning framework.
\begin{figure}[tbp]
\centering
\includegraphics[width=9cm]{fig1.pdf}
\caption{Learning curves for the electrostatic energy of an idealized random gas of point charges. The model is trained on 1500 randomly selected configurations and tested on other 500 independent configurations. (\textit{black full and dashed lines}) Local ML (SOAP) results at environment cutoffs of 3, 6 and 9~{\AA}. (\textit{red lines}) LODE($\nu=1$) results at an environment cutoff of 2~{\AA} and Gaussian smearing of 0.5 and 1.0~{\AA}, and LODE($\nu=2$) results with a cutoff of 3~{\AA}.\label{fig:random_nacl}}
\end{figure}
\subsection{A gas of point charges}
We begin by considering a toy system made of randomly distributed point-charges in a cubic box that is infinitely repeated in the three dimensions using periodic boundary conditions. The number of positive charges is equal to the number of negative charges, so that the system is overall neutral.
To limit the amplitude of energy fluctuations, we discard configurations in which two charges are closer together than~2.5~{\AA}.
Following these prescriptions, we generate a total of 2000 configurations, each of which contains 64 atoms in cubic boxes spanning a broad range of densities, with side lengths between 12 and 20~{\AA}.
For each of these configurations, we compute the electrostatic energy using the Ewald method, as implemented in LAMMPS~\cite{Plimpton1995}. Fig.~\ref{fig:random_nacl} compares the learning performance obtained using a local SOAP representation with different cutoffs, to the one obtained by direct application of the LODE representation. In both cases, a Gaussian width of $\sigma$=1.0~{\AA} has been used to construct the density distribution of Eq.~\eqref{eq:rA-ket}.
The figure clearly demonstrates the inefficiency of a local model when attempting to learn a property that is dominated by long-range effects. Given that the training set contains few configurations with atoms closer than 3~{\AA}, the model with $r_\text{cut}$=3~{\AA} is almost completely ineffective. Even increasing the cutoff up to 9~{\AA}, a SOAP model barely reaches an accuracy of about 20\% RMSE when using the maximum number of training structures.
A linear model built using the LODE($\nu=1$) representation, on the other hand, yields an error below 1\%{} using only a handful of training points.
As discussed above, this model represents exactly Coulomb interactions between fixed point charges, and the only reason the error does not converge to zero is that we use a Gaussian smearing in the definition of LODE, rather than $\delta$ distributions. This is apparent in the dramatic reduction of the error when halving the value of $\sigma$.
A LODE($\nu=2$) model, although initially less effective, possesses sufficient descriptive power to reach, and then overcome, the accuracy of the linear $\nu=1$, $\sigma=1$~{\AA} model.
This simple example highlights how difficult it is to incorporate long-range physics with a conventional local structure representation, and demonstrates that the LODE features can, on their own, be used as a very efficient description to predict the electrostatic energy of a system of fixed point charges.
\begin{figure*}[bhtb]
\centering
\includegraphics[width=0.9\linewidth]{fig2.pdf}
\caption{Comparison of reference and predicted binding curves of six molecular dimers. (\textit{black dots}) DFT reference calculations, (\textit{red lines}) local SOAP predictions, (\textit{green lines}) combined SOAP and LODE(1) predictions, (\textit{blue lines}) combined SOAP and LODE(2) predictions. Full lines and shaded background represent the range of distances that is comparable to the geometries included in the training set. Dashed lines refer to predictions carried out in an extrapolative (long-range) regime. \label{fig:binding_curves} }
\end{figure*}
\subsection{Binding curves of charged dimers}
We now consider a more realistic scenario, namely the problem of predicting the binding curves of a dataset of organic molecular dimers that carry an electric charge. We extract 661 different dimers containing H, C, N and O atoms from the BioFragment Database (BFDb)~\cite{Burns2017}, where at least one of the two monomers in each dimer configuration has a net charge.
This choice ensures that we focus the exercise on a problem for which permanent electrostatic interactions play a prominent role. In contrast to the point-charge toy system, however, one cannot expect that a fixed point-charge model would suffice to predict the binding curves.
The dataset contains a multitude of chemical moieties, including neutral polar fragments and highly polarizable groups, and provides a realistic assessment of how well a LODE model can perform in practice.
For each of the 661 dimers, we consider 13 configurations where the separation between the two monomers, defined as the distance between their geometric centers, spans an interval from a minimum of $\sim$3~{\AA} to a maximum of $\sim$8~{\AA}.
For each of these configurations, unrelaxed binding curves are computed at the DFT/B3LYP level using the FHI-aims quantum-chemistry package~\cite{Blum2009}.
The training dataset is defined by considering the binding curves of the first 600 dimers out of the total of 661, while predictions are tested on the remaining 61. We also include the isolated monomers in the training set, so that the ML model has knowledge of the dissociation limit, and compute a few additional reference energies at larger separations, which are however not used for training.
SOAP and LODE representations are defined within spherical environments of $r_\text{cut}=3.0$~{\AA}, while the Gaussian width of the density field is chosen to be $\sigma$=0.3 and 1.0~{\AA} respectively.
Before carrying out the learning exercise, the reference DFT energies are baselined with respect to the monomer energies, so that the model only has to reproduce the interaction energies between the two fragments.
Upon this baselining, we find that optimal SOAP performances correspond to a RMSE $\sim$20\%, whereas a suitable combination between SOAP and LODE($\nu=2$) allows us to bring the error down to $\sim$4\%.
This substantial improvement can be justified by the large discrepancy between the SOAP and SOAP+LODE accuracy in representing the interaction between the monomers at intermediate and large distances.
To clarify the issue further, we plot in Fig.~\ref{fig:binding_curves} the predicted binding curves of 6 test dimers, against the reference DFT calculations.
We observe that a SOAP-based local description is overall able to capture the short-range interactions with good accuracy. However, it becomes less and less effective as the distance between the monomers increases, to the point of being completely blind to changes in interatomic distances once the environment cutoff is exceeded. Note that the performance of the local model at small separations is degraded substantially by the inclusion of fully dissociated dimers in the training set, because the representation cannot distinguish these configurations from those barely beyond the cutoff distance, which correspond to a non-zero value of the binding curve.
The SOAP+LODE multiscale description, in contrast, can recognize the changes in separation between the monomers, leading to a smooth asymptotic behavior of the predicted binding curve.
Although a linear model incorporating LODE($\nu=1$) allows us to halve the error made by SOAP down to $\sim$10\%, it is not sufficiently expressive to achieve predictive accuracy - particularly for binding curves that involve neutral monomers that do not have a $1/r$ asymptotic behavior.
This limitation can be addressed using a non-linear kernel based on SOAP+LODE($\nu=2$). The resulting model is able to accurately predict the binding curves in the entire domain of distances, demonstrating its transferability across a vast spectrum of different chemical species and intermolecular configurations.
This is particularly remarkable, as the SOAP+LODE($\nu=2$) model not only accurately predicts systems that are dominated by monopole electrostatics (Fig.~\ref{fig:binding_curves}a,b,c,e), but also systems in which only one of the molecules is charged, so that the interactions involve polarization as well as charge-dipole electrostatics (Fig.~\ref{fig:binding_curves}d,f).
It should be noted, however, that the current scheme cannot transparently describe the physics of polarization or charge transfer. While the use of a composite SOAP+LODE kernel can describe how the environment of an atom affects its response to an external field, there is no explicit provision to represent how the field generated by far-away atoms depends on their neighboring structure.
\subsection{Dielectric response of liquid water}
As a final example, we revisit the problem of constructing a model of the infinite-frequency dielectric response tensor $\boldsymbol{\varepsilon}_\infty$ of liquid water. Details about the dataset generation and the computation of the dielectric tensors are reported in Ref.~\cite{grisafi2018}.
In that work, we argued that a local model was inefficient in learning the dielectric response because of its collective nature, and showed that using the Clausius-Mossotti relationship to map $\boldsymbol{\varepsilon}_\infty$ to more local quantities greatly improved the model.
Here, LODE learning performances are only tested for the isotropic component of the tensor $\varepsilon_0=\text{Tr}[\boldsymbol{\varepsilon}_\infty]$, which was shown to be most sensitive to the collective nature of the physics of dielectrics.
Similarly to the case of the BFDb, we use a non-linear kernel that combines a SOAP representation computed using an optimal Gaussian width of $\sigma$=0.3~{\AA}, and LODE($\nu=2$) features constructed starting from a Gaussian density of $\sigma$=1.0~{\AA}.
Figure~\ref{fig:eps0_water} reports results obtained when learning on 800 randomly selected structures and predicting on other 200 independent configurations.
\begin{figure}[tb]
\centering
\includegraphics[width=9cm]{fig3.pdf}
\caption{Learning curves for the isotropic component of the dielectric response tensor $\boldsymbol{\varepsilon}_\infty$ of liquid water. The model is trained on up to 800 randomly selected configurations and tested on other 200 independent configurations. (\textit{black full and dashed lines}) SOAP results with $r_\text{cut}$= 3, 4 and 6~{\AA}. (\textit{red line}) LODE results with $r_\text{cut}$=3~{\AA}. (\textit{blue line}) combined results of SOAP and LODE, both using $r_\text{cut}$=3~{\AA}.\label{fig:eps0_water}}
\end{figure}
Similarly to what has been observed in the previous example, LODE performs much better than SOAP when relying upon a local description of $r_\text{cut}$=3~{\AA}. In this case, however, we observe a substantial improvement of the performance of SOAP when increasing the size of the local environments, eventually overcoming the LODE accuracy with a radial cutoff of $r_\text{cut}$=6~{\AA}.
This might be a consequence of a less pronounced contribution of long-range tails, or - likely - of the fact that a cutoff of 6~{\AA} encompasses the entirety of the supercell, and therefore effectively provides a complete description of the input space of this specific dataset.
Optimal ML predictions can be obtained when combining the fine-grained local description of SOAP at $r_\text{cut}$=3~{\AA} with the coarse-grained and non-local description of LODE at the same cutoff.
This behavior highlights the multiscale character of $\varepsilon_0$, meaning that both the local many-body information and the long-range electrostatic effects need to be considered to obtain accurate predictions.
It is also important to stress that a combination of SOAP and LODE is not only beneficial in terms of learning performance, but can also reduce the computational effort in evaluating the feature vector -- much like efficient methods for evaluating empirical potentials often treat separately short-range and long-range interactions. %
\section*{Conclusions}
Machine-learning of atomic-scale properties that are dominated by short-range interactions has reached a stage of maturity, with a substantial consensus about the ingredients of a successful model.
The most commonly used frameworks incorporate symmetries and physical principles into the representation of atomic configurations, and achieve transferability by building additive property models.
Furthermore, there is a growing understanding of the deep connections that exist between many of these methods, which is reflected in the fact that in most applications they reach similar levels of accuracy.
In this paper we show how to extend these schemes in a way that makes it possible to incorporate long-range physics, without sacrificing the transferability of additive property models and the general applicability of rather abstract measures of atomic structure correlations.
The crux lies in the definition of an atom-density potential that folds global information on the structure and composition of a system into a local representation, that (1) has a physically-motivated asymptotic behavior with inter-atomic separation and (2) can be efficiently computed in a symmetry-consistent fashion using similar ideas as those that underlie the SOAP framework and related approaches.
We apply this long-distance equivariant (LODE) representation focusing on the version that is based on a Coulomb-like atom-density potential. We demonstrate that, alone or in combination with SOAP, it outperforms local machine-learning methods in capturing long-range physics, for tasks that involve learning the electrostatic energy of a point-charge model, the binding curve of dimers of electrically charged organic fragments, and the dielectric constant of bulk water.
These examples are little more than an assay that proves that this scheme can incorporate efficiently long-range information in atomistic machine learning.
More work is needed to draw a systematic, formal connection between a ML model built on LODE features and long-range interatomic potentials, much like the connection that has been shown between linear models built on density-based features and short-range many-body potentials~\cite{Glielmo2018,willatt2019,Drautz2019}. Several questions remain open: whether choosing other exponents in $\bra{\mathbf{r}}\ket{\mathcal{V}^p}$ can improve models of dispersion and of long-range effects that do not imply a characteristic asymptotic behavior; whether equivariant local features can be obtained by combining the expansion of the density and that of $\bra{\mathbf{r}}\ket{\mathcal{V}^p}$; whether the combination of SOAP and LODE can be used to improve the accuracy and the computational efficiency of existing ML forcefields; and whether the physics of polarizable atoms can be incorporated into the LODE framework.
Future investigation will address these and many other questions, and unearth the full potential of this physics-inspired approach to atomistic machine learning.
\section*{Acknowledgments}
The Authors would like to thank Clemence Corminboeuf and G\'abor Cs\'anyi for insightful comments on an early version of the manuscript.
M.C. and A.G. were supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 677013-HBMAP), and by the NCCR MARVEL, funded by the Swiss National Science Foundation. A.G. acknowledges funding by the MPG-EPFL Center for Molecular Nanoscience and Technology.
We thank CSCS for providing CPU time under project id s843.
In \cite{HuangSo}, Huang and So presented a solution for any quadratic equation $z^2+\mu z+\nu=0$ over Hamilton's quaternion algebra $\mathbb{H}$.
In \cite{Abrate}, Abrate generalized that result for any quaternion algebra over any field of characteristic not $2$.
Being able to solve a quaternion quadratic equation has proved useful, for example in computing the left eigenvalues of a $2 \times 2$ quaternion matrix (see \cite{Wood}).
In this paper we shall present a solution for such equations over a quaternion division algebra over a field of characteristic $2$.
Let $F$ be a field of characteristic 2.
A quaternion algebra $Q$ over $F$ is a four dimensional algebra $F+F x+F y+F x y$ where $x$ and $y$ satisfy the relations
$$x^2+x=\alpha, \qquad y^2=\beta, \qquad x y+y x=y$$
for some $\alpha \in F$ and $\beta \in F^\times$.
Every central simple algebra over $F$ of dimension $4$ (or equivalently of degree $2$) is a quaternion algebra \cite[Chapter 8, Section 11]{Scharlau}.
The quaternion algebra is equipped with a canonical involution $\sigma$ defined by
$$\sigma(a+b x+c y+d x y)=a+b+b x+c y+d x y$$
for any $a,b,c,d \in F$.
For any element $q \in Q$, $\sigma(q)$ is called its ``conjugate".
The norm and trace of $q$ are defined to be $\norm(q)=q \sigma(q)$ and $\tr(q)=q+\sigma(q)$, both of which lie in $F$.
These definitions coincide with the general definitions of the reduced norm and trace in central simple algebras.
For any $q \in Q$, $q^2+\tr(q) q+\norm(q)=0$, which means that $\tr(q)=0$ if and only if $q^2 \in F$.
The space $F+F y+F x y$ consists of all the elements of trace zero.
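The identity $q^2+\tr(q) q+\norm(q)=0$, together with the fact that $q\sigma(q)$ lies in $F$, can be verified symbolically. A sketch in Python with sympy (the multiplication table below is derived from the relations $x^2=x+\alpha$, $y^2=\beta$, $y x=x y+y$; characteristic $2$ is enforced by expanding modulo $2$):

```python
import sympy as sp

a, b, c, d, alpha, beta = sp.symbols('a b c d alpha beta')
red = lambda e: sp.expand(e, modulus=2)   # work in characteristic 2

def mul(p, q):
    """Product in the basis (1, x, y, xy), using the structure constants
    implied by x^2 = x + alpha, y^2 = beta, yx = xy + y (mod 2)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return tuple(red(e) for e in (
        p0*q0 + alpha*p1*q1 + beta*p2*q2 + beta*p2*q3 + alpha*beta*p3*q3,
        p0*q1 + p1*q0 + p1*q1 + beta*(p2*q3 + p3*q2),
        p0*q2 + p2*q0 + p2*q1 + alpha*(p1*q3 + p3*q1),
        p0*q3 + p3*q0 + p1*q2 + p2*q1 + p1*q3,
    ))

q    = (a, b, c, d)
conj = (a + b, b, c, d)               # the canonical involution sigma
nrm  = mul(q, conj)                   # q*sigma(q): x, y, xy parts vanish
assert nrm[1] == 0 and nrm[2] == 0 and nrm[3] == 0
tr    = b                             # q + sigma(q) = b in characteristic 2
sq    = mul(q, q)
check = tuple(red(sq[i] + tr * q[i] + (nrm[0] if i == 0 else 0))
              for i in range(4))
assert check == (0, 0, 0, 0)          # q^2 + tr(q) q + norm(q) = 0
print("identity verified")
```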
It is known that $Q$ is either a division algebra or the matrix algebra $M_2(F)$.
From now on we shall assume that $Q$ is a division algebra.
In particular, this means that $F$ must be an infinite field, following \cite{Maclagan-Wedderburn}.
From \cite{Herstein} it is known that the quadratic equation $z^2+\mu z+\nu=0$ has either infinitely many roots or up to two roots.
The elements $z \in Q \setminus F$ which satisfy $z^2+z \in F$ are called ``Artin-Schreier", and the elements $z \in Q \setminus F$ which satisfy $z^2 \in F$ are called ``square-central".
In particular, in the description of the quaternion algebra above, $x$ is Artin-Schreier and $y$ is square-central.
It is pointed out in \cite[Chapter 8, Section 11]{Scharlau} that for any Artin-Schreier $x' \in Q$ there exists a square-central element $y' \in Q$ satisfying $x' y'+y' x'=y'$, and then $x'$ and $y'$ can replace $x$ and $y$ in the description of the quaternion algebra above. The canonical involution, norm and trace are independent of the choice of generators and therefore remain the same.
Given this Artin-Schreier element $x'$, $Q=V_0+V_1$ where $V_0=F+F x'$ and $V_1=F y'+F x' y'$. Every element in $V_0$ commutes with $x'$, and every element $t \in V_1$ satisfies $t x'+x' t=t$.
\begin{prop}
For any square-central element $y' \in Q$ there exists an Artin-Schreier element $x' \in Q$ satisfying $x' y'+y' x'=y'$.
\end{prop}
\begin{proof}
By a straightforward computation, for any $z \in Q$, $z y'+y' z$ commutes with $y'$.
If $z y'+y' z=0$ for every $z \in Q$ then $y'$ is central in $Q$, which means that $y' \in F$, contradicting the assumption that $y'$ is square-central.
Therefore we can choose some $z$ for which $w=z y'+y' z \neq 0$.
Since $Q$ is a division algebra, $w$ is invertible.
Set $x'=y' w^{-1} z$. Then, since $w^{-1}$ commutes with $y'$, $$x' y'+y' x'=y' w^{-1} z y'+y' w^{-1} y' z=y' w^{-1}(z y'+y' z)=y' w^{-1} w=y'.$$
By a straightforward computation, $x'^2+x'$ commutes with $y'$.
Since $Q$ is a division algebra of degree $2$ \cite{Albert}, every proper subalgebra of $Q$ is commutative. The subalgebra generated by $x'$ and $y'$ is not commutative, because $x' y'+y' x'=y' \neq 0$, and is therefore the whole algebra. Since $x'^2+x'$ commutes with $x'$ and $y'$, it is central, and hence $x'^2+x' \in F$. Finally, $x' \not\in F$, for otherwise $x' y'+y' x'=2 x' y'=0$, and so $x'$ is Artin-Schreier.
\end{proof}
\section{Quaternion Quadratic Equations}
Let $F$ be a field of characteristic 2 and $Q$ be a quaternion division algebra over $F$.
Let $z^2+\mu z+\nu=0$ be the equation under discussion.
The coefficients $\mu$ and $\nu$ are arbitrary elements in $Q$.
As a set, $Q$ can be written as the disjoint union of $\set{0}$, $F^\times$, the set of square-central elements, and all the other elements.
We shall denote the latter by $Q'$.
Therefore, $\mu$ belongs to exactly one of these subsets.
If $\mu \in F^\times$ then by dividing the equation by $\mu^2$ we get $(\mu^{-1} z)^2+(\mu^{-1} z)+\mu^{-2} \nu=0$, which means that in this situation it suffices to be able to solve the case of $\mu=1$.
If $\mu \in Q'$ then it satisfies $\mu^2+\eta \mu \in F$ for some $\eta \in F^\times$.
Consequently $(\eta^{-1} \mu)^2+(\eta^{-1} \mu) \in F$, which means that $\eta^{-1} \mu$ is Artin-Schreier and the equation can be expressed as $(\eta^{-1} z)^2+(\eta^{-1} \mu) (\eta^{-1} z)+\eta^{-2} \nu=0$, and so in this situation it suffices to be able to solve the case of an Artin-Schreier $\mu$.
In conclusion, it suffices to be able to solve the following cases:
\begin{enumerate}
\item $\mu$ is Artin-Schreier.
\item $\mu$ is square-central.
\item $\mu=1$.
\item $\mu=0$.
\end{enumerate}
\section{$\mu$ is Artin-Schreier}
Assume that the equation is $z^2+\mu z+\nu=0$ for some $\nu \in Q$ and some Artin-Schreier element $\mu$ satisfying $\mu^2+\mu=\alpha$.
The element $\nu$ splits as $\nu_0+\nu_1$ where $\nu_0 \mu=\mu \nu_0$ and $\nu_1 \mu+\mu \nu_1=\nu_1$.
Furthermore, $\nu_0=\nu_{0,0}+\nu_{0,1} \mu$ where $\nu_{0,0},\nu_{0,1} \in F$.
\begin{thm}
If $\nu_1 \neq 0$ then all the elements $z \in Q$ satisfying $z^2+\mu z+\nu=0$ belong to the set
\begin{eqnarray*}
R & = & \{ b^2+b+\nu_{0,1}+b \mu+(b+\mu)^{-1} \nu_1 : b \in F,\\
& & (b^2+b+\nu_{0,1})^2+(b^2+b) \alpha+(b^2+b+\alpha)^{-1} \nu_1^2+\nu_{0,0}=0 \}.
\end{eqnarray*}
Otherwise, all the roots belong to the subfield $F[\mu]$, and therefore we can simply solve the equation as a quadratic equation over this field.
\end{thm}
\begin{proof}
Since $\mu$ is Artin-Schreier, $z=z_0+z_1$ where $z_0 \mu=\mu z_0$ and $z_1 \mu+\mu z_1=z_1$.
Furthermore, the expression $I=z^2+\mu z+\nu$ splits into two parts $I_0+I_1$ such that $I_0 \mu=\mu I_0$ and $I_1 \mu+\mu I_1=I_1$.
The equation then splits in the following way:
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+\mu z_0+\nu_0=0\\
I_1 & = & z_0 z_1+z_1 z_0+\mu z_1+\nu_1=0
\end{eqnarray*}
Now, $z_0=a+b \mu$ for some $a,b \in F$. Consequently $z_0 z_1+z_1 z_0=b z_1$. Moreover, $z_0^2=a^2+b^2 \mu+b^2 \alpha$ and $\mu z_0=a \mu+b \mu+b \alpha$.
Similarly $\nu_0=\nu_{0,0}+\nu_{0,1} \mu$ for some $\nu_{0,0},\nu_{0,1} \in F$.
The second equation therefore becomes
$b z_1+\mu z_1+\nu_1=0$, which means that $(b+\mu) z_1=\nu_1$.
Now, $((b+\mu) z_1)^2=(b+\mu) z_1 (b+\mu) z_1=(b+\mu) (b+\mu+1) z_1^2=(b^2+b+\alpha) z_1^2$, while on the other hand $((b+\mu) z_1)^2=\nu_1^2$.
If $\nu_1 \neq 0$ then $z_1 \neq 0$ and $b^2+b+\alpha \neq 0$, and we obtain $z_1^2=(b^2+b+\alpha)^{-1} \nu_1^2$.
If $\nu_1=0$ then either $z_1 = 0$ or $b^2+b+\alpha=0$. Since $Q$ is a division algebra, there is no $b \in F$ for which $b^2+b+\alpha=0$, and therefore $z_1=0$, and all the elements $z \in Q$ satisfying $z^2+\mu z+\nu=0$ belong to $F[\mu]$, which means that the equation can be simply solved in the subfield $F[\mu]$.
Assume $\nu_1 \neq 0$.
The first part splits again $I_0=I_{0,0}+I_{0,1}$ where $I_{0,0} \in F$ and $I_{0,1} \in F \mu$.
It splits in the following way:
\begin{eqnarray*}
I_{0,0} & = & a^2+b^2 \alpha+(b^2+b+\alpha)^{-1} \nu_1^2+b \alpha+\nu_{0,0}=0\\
I_{0,1} & = & b^2 \mu+(a+b) \mu+\nu_{0,1} \mu=0
\end{eqnarray*}
From $I_{0,1}$ we obtain $a=b^2+b+\nu_{0,1}$.
By substituting that in $I_{0,0}$ we obtain $(b^2+b+\nu_{0,1})^2+(b^2+b) \alpha+(b^2+b+\alpha)^{-1} \nu_1^2+\nu_{0,0}=0$.
\end{proof}
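The splitting of the equation into the pair $I_0,I_1$ and the identity $((b+\mu) z_1)^2=(b^2+b+\alpha) z_1^2$ used in the proof can be verified numerically. The Python sketch below is an illustration, not part of the paper: it realizes $Q$ over $\mathbb{F}_2[t]$ with the arbitrary parameters $\alpha=t$, $\beta=t+1$, takes $\mu=x$, and checks the decomposition for sample values of $a$, $b$, $z_1$ and $\nu$; the identities are generic, so the particular choices do not matter.

```python
# Illustration: verify the splitting used in the proof above, with Q realized
# over F_2[t] (polynomials as ints: bit i = coefficient of t^i, addition = XOR).
def fmul(u, v):
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        v >>= 1
    return r

ALPHA, BETA = 0b10, 0b11  # arbitrary parameters alpha = t, beta = t + 1

# Basis products e_i * e_j in the basis (1, x, y, xy); x^2 = x + ALPHA,
# y^2 = BETA, y x = x y + y.
T = [
    [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)],
    [(0, 1, 0, 0), (ALPHA, 1, 0, 0), (0, 0, 0, 1), (0, 0, ALPHA, 1)],
    [(0, 0, 1, 0), (0, 0, 1, 1), (BETA, 0, 0, 0), (BETA, BETA, 0, 0)],
    [(0, 0, 0, 1), (0, 0, ALPHA, 0), (0, BETA, 0, 0), (fmul(ALPHA, BETA), 0, 0, 0)],
]

def qadd(p, q):
    return tuple(u ^ v for u, v in zip(p, q))

def qmul(p, q):
    r = [0, 0, 0, 0]
    for i in range(4):
        for j in range(4):
            c = fmul(p[i], q[j])
            for k in range(4):
                r[k] ^= fmul(c, T[i][j][k])
    return tuple(r)

def smul(s, q):  # scalar multiplication
    return tuple(fmul(s, c) for c in q)

MU = (0, 1, 0, 0)               # mu = x is Artin-Schreier: mu^2 + mu = ALPHA
a, b = 0b101, 0b11              # sample z_0 = a + b mu
z0, z1 = (a, b, 0, 0), (0, 0, 0b110, 0b1011)
z = qadd(z0, z1)
nu = (0b1, 0b10, 0b100, 0b111)  # nu_0 = (nu[0], nu[1]), nu_1 = (nu[2], nu[3])

I = qadd(qadd(qmul(z, z), qmul(MU, z)), nu)  # I = z^2 + mu z + nu
I0 = qadd(qadd(qadd(qmul(z0, z0), qmul(z1, z1)), qmul(MU, z0)), (nu[0], nu[1], 0, 0))
I1 = qadd(qadd(qadd(qmul(z0, z1), qmul(z1, z0)), qmul(MU, z1)), (0, 0, nu[2], nu[3]))
```

Here `I0` lands in $V_0$, `I1` in $V_1$, their sum is $I$, and the squaring identity for $(b+\mu) z_1$ holds exactly as in the proof.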
As a result we have the following algorithm for calculating the roots of the equation $z^2+\mu z+\nu=0$ where $\mu$ is Artin-Schreier and $\nu \not \in F[\mu]$:
\begin{algo}
\begin{enumerate}
\item Calculate all the elements $t \in F$ satisfying $(t+\nu_{0,1})^2+t \alpha+(t+\alpha)^{-1} \nu_1^2+\nu_{0,0}=0$.
\item For each such $t$, find all the elements $b \in F$ satisfying $b^2+b=t$.
\item For each such $b$ (there should be up to $6$ of those in total), substitute the element $b^2+b+\nu_{0,1}+b \mu+(b+\mu)^{-1} \nu_1$ in the original equation to check whether it is really a root. The set of roots consists of all the elements that passed the substitution test.
\end{enumerate}
\end{algo}
\section{$\mu$ is square-central}
Assume that $\mu$ is square-central. By the proposition above, there exists some Artin-Schreier $\theta$ satisfying $\theta \mu+\mu \theta=\mu$.
In particular, $\mu^2=\beta$ and $\theta^2+\theta=\alpha$ for some $\alpha,\beta \in F$.
The element $\nu$ splits into $\nu_0+\nu_1$ where $\nu_0 \theta=\theta \nu_0$ and $\nu_1 \theta+\theta \nu_1=\nu_1$.
Furthermore, $\nu_0=\nu_{0,0}+\nu_{0,1} \theta$ and $\nu_1=\nu_{1,0} \mu+\nu_{1,1} \mu \theta$.
\begin{thm}
All the elements $z \in Q$ satisfying $z^2+\mu z+\nu=0$ belong to
\begin{eqnarray*}
R & = & \{b c+\nu_{1,0}+b \theta+c \mu+\beta^{-1} (b^2+\nu_{0,1}) \mu \theta : b,c \in F,\\
& & \beta^{-1} (b^2+\nu_{0,1}) b+b+\nu_{1,1}=0,\\
& & (b c+\nu_{1,0})^2+b^2 \alpha+\beta (c^2+c d+\alpha d^2)+c \beta+\nu_{0,0}=0\},
\end{eqnarray*}
where $d=\beta^{-1} (b^2+\nu_{0,1})$.
\end{thm}
\begin{proof}
The element $z$ splits into $z_0+z_1$ where $z_0 \theta=\theta z_0$ and $z_1 \theta+\theta z_1=z_1$.
The expression $I=z^2+\mu z+\nu$ splits into $I_0+I_1$ where $I_0 \theta=\theta I_0$ and $I_1 \theta+\theta I_1=I_1$.
The equation correspondingly splits into two:
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+\mu z_1+\nu_0=0\\
I_1 & = & z_0 z_1+z_1 z_0+\mu z_0+\nu_1=0
\end{eqnarray*}
Now, $z_0=a+b \theta$ and $z_1=c \mu+d \mu \theta$ for some $a,b,c,d \in F$.
$z_0 z_1+z_1 z_0=b z_1=b c \mu+b d \mu \theta$.
$\mu z_0=a \mu+b \mu \theta$, $\mu z_1=c \beta+d \beta \theta$.
$z_0^2=a^2+b^2 \theta+b^2 \alpha$, $z_1^2=\beta (c^2+c d+\alpha d^2)$.
Each of the parts $I_0$ and $I_1$ splits again into two parts $I_0=I_{0,0}+I_{0,1} \theta$ and $I_1=I_{1,0} \mu+I_{1,1} \mu \theta$ where $I_{0,0},I_{0,1},I_{1,0},I_{1,1} \in F$.
Consequently we have the following system of four equations:
\begin{eqnarray*}
I_{0,0} & = & a^2+b^2 \alpha+\beta (c^2+c d+\alpha d^2)+c \beta+\nu_{0,0}=0\\
I_{0,1} & = & b^2+d \beta+\nu_{0,1}=0\\
I_{1,0} & = & b c+a+\nu_{1,0}=0\\
I_{1,1} & = & b d+b+\nu_{1,1}=0
\end{eqnarray*}
From $I_{0,1}$ we obtain $d=\beta^{-1} (b^2+\nu_{0,1})$.
Substituting that in $I_{1,1}$, we obtain $\beta^{-1} (b^2+\nu_{0,1}) b+b+\nu_{1,1}=0$.
From $I_{1,0}$ we obtain $a=b c+\nu_{1,0}$, and by substituting that in equation $I_{0,0}$ we obtain $(b c+\nu_{1,0})^2+b^2 \alpha+\beta (c^2+c d+\alpha d^2)+c \beta+\nu_{0,0}=0$.
\end{proof}
As a result we have the following algorithm for calculating the roots of the equation $z^2+\mu z+\nu=0$ where $\mu$ is square-central:
\begin{algo}
\begin{enumerate}
\item Calculate all the elements $b \in F$ satisfying $\beta^{-1} (b^2+\nu_{0,1}) b+b+\nu_{1,1}=0$.
\item For each such $b$, find all the elements $c \in F$ satisfying $(b c+\nu_{1,0})^2+b^2 \alpha+\beta (c^2+c d+\alpha d^2)+c \beta+\nu_{0,0}=0$, where $d=\beta^{-1} (b^2+\nu_{0,1})$.
\item For each such pair $b$ and $c$ (there should be up to $6$ of those in total), substitute the element $b c+\nu_{1,0}+b \theta+c \mu+\beta^{-1} (b^2+\nu_{0,1}) \mu \theta$ in the original equation to check whether it is really a root. The set of roots consists of all the elements that passed the substitution test.
\end{enumerate}
\end{algo}
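The four scalar equations $I_{0,0},I_{0,1},I_{1,0},I_{1,1}$ can likewise be verified numerically. The Python sketch below is an illustration, not part of the paper: it realizes $Q$ over $\mathbb{F}_2[t]$ with the arbitrary parameters $\alpha=t$, $\beta=t+1$, takes $\theta=x$ and $\mu=y$, and checks that $z^2+\mu z+\nu$ decomposes into exactly these four components for sample coefficients.

```python
# Illustration: verify the four-equation system above, with Q realized over
# F_2[t] (polynomials as ints: bit i = coefficient of t^i, addition = XOR).
def fmul(u, v):
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        v >>= 1
    return r

ALPHA, BETA = 0b10, 0b11  # arbitrary parameters alpha = t, beta = t + 1

# Basis products e_i * e_j in the basis (1, x, y, xy); x^2 = x + ALPHA,
# y^2 = BETA, y x = x y + y.
T = [
    [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)],
    [(0, 1, 0, 0), (ALPHA, 1, 0, 0), (0, 0, 0, 1), (0, 0, ALPHA, 1)],
    [(0, 0, 1, 0), (0, 0, 1, 1), (BETA, 0, 0, 0), (BETA, BETA, 0, 0)],
    [(0, 0, 0, 1), (0, 0, ALPHA, 0), (0, BETA, 0, 0), (fmul(ALPHA, BETA), 0, 0, 0)],
]

def qadd(p, q):
    return tuple(u ^ v for u, v in zip(p, q))

def qmul(p, q):
    r = [0, 0, 0, 0]
    for i in range(4):
        for j in range(4):
            c = fmul(p[i], q[j])
            for k in range(4):
                r[k] ^= fmul(c, T[i][j][k])
    return tuple(r)

def smul(s, q):
    return tuple(fmul(s, c) for c in q)

THETA, MU = (0, 1, 0, 0), (0, 0, 1, 0)  # theta = x Artin-Schreier, mu = y square-central
MT = qmul(MU, THETA)                    # mu theta = y x = x y + y

def embed(p0, p1, p2, p3):              # p0 + p1 theta + p2 mu + p3 mu theta
    r = (p0, 0, 0, 0)
    for s, e in ((p1, THETA), (p2, MU), (p3, MT)):
        r = qadd(r, smul(s, e))
    return r

a, b, c, d = 0b1, 0b10, 0b11, 0b101     # sample z = a + b theta + c mu + d mu theta
n00, n01, n10, n11 = 0b111, 0b100, 0b10, 0b1
z, nu = embed(a, b, c, d), embed(n00, n01, n10, n11)
I = qadd(qadd(qmul(z, z), qmul(MU, z)), nu)  # I = z^2 + mu z + nu

I00 = (fmul(a, a) ^ fmul(fmul(b, b), ALPHA)
       ^ fmul(BETA, fmul(c, c) ^ fmul(c, d) ^ fmul(ALPHA, fmul(d, d)))
       ^ fmul(c, BETA) ^ n00)
I01 = fmul(b, b) ^ fmul(d, BETA) ^ n01
I10 = fmul(b, c) ^ a ^ n10
I11 = fmul(b, d) ^ b ^ n11
```

Reassembling $I_{0,0}+I_{0,1}\theta+I_{1,0}\mu+I_{1,1}\mu\theta$ recovers $I$ exactly, confirming the system.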
\section{$\mu=1$}
Every element $\nu \in Q$ is of one of the following forms: central, square-central, or $n x$ for some Artin-Schreier element $x$ and $n \in F^\times$.
\begin{thm}
\begin{enumerate}
\item If $\nu \in F$ then all the elements $z \in Q$ satisfying $z^2+z+\nu=0$ belong to $F$.
\item If $\nu$ is square-central then the elements $z \in Q$ satisfying $z^2+z+\nu=0$ are all the elements of the form $a+\nu$ where $a$ satisfies $a^2+a+\nu^2=0$.
\item If $\nu=n x$ for some Artin-Schreier element $x$ and $n \in F^\times$ then all the elements $z \in Q$ satisfying $z^2+z+\nu=0$ belong to $F[\nu]$.
\end{enumerate}
\end{thm}
\begin{proof}
\textbf{Case 1}
Assume $\nu \in F$. Fix some Artin-Schreier element $x$.
Every element $z \in Q$ decomposes as $z_0+z_1$ where $z_0 x=x z_0$ and $z_1 x+x z_1=z_1$.
The expression $I=z^2+z+\nu$ decomposes similarly into $I_0+I_1$, and we obtain the following system of equations:
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+z_0+\nu=0\\
I_1 & = & z_0 z_1+z_1 z_0+z_1=0
\end{eqnarray*}
The element $z_0$ is equal to $a+b x$ for some $a,b \in F$.
Substituting that in $I_1$ leaves $(b+1) z_1=0$.
If $z_1 \neq 0$ then $b=1$. Then $I_0=a^2+\alpha+1+z_1^2+a+x+\nu=0$.
However, $I_0=I_{0,0}+I_{0,1} x$ where $I_{0,0},I_{0,1} \in F$. In particular, $I_{0,1}=1$, which means that $1=0$ and that creates a contradiction.
Therefore $z$ commutes with $x$, which means that $z \in F[x]$. However, this is true for every Artin-Schreier element $x$, and therefore $z \in F$.
\bigskip
\textbf{Case 2}
Assume $\nu$ is square-central. Then for some fixed Artin-Schreier element $x$ satisfying $x^2+x=\alpha$ we have $x \nu+\nu x=\nu$.
Every element $z \in Q$ decomposes as $z_0+z_1$ where $z_0 x=x z_0$ and $z_1 x+x z_1=z_1$.
The expression $I=z^2+z+\nu$ decomposes similarly into $I_0+I_1$, and we obtain the following system of equations:
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+z_0=0\\
I_1 & = & z_0 z_1+z_1 z_0+z_1+\nu=0
\end{eqnarray*}
The element $z_0$ equals $a+b x$ for some $a,b \in F$, so $I_1=(b+1) z_1+\nu=0$, which means $\nu=(b+1) z_1$.
Since $\nu \neq 0$, $b+1$ is invertible, and $z_1=(b+1)^{-1} \nu$.
$I_0=a^2+b^2 \alpha+b^2 x+(b+1)^{-2} \nu^2+a+b x=0$.
$I_0$ splits into $I_{0,0}+I_{0,1} x$, as follows
\begin{eqnarray*}
I_{0,0} & = & a^2+b^2 \alpha+(b+1)^{-2} \nu^2+a=0\\
I_{0,1} & = & b^2+b=0
\end{eqnarray*}
From $I_{0,1}$ we obtain either $b=0$ or $b=1$.
The second option is not possible, because $b+1$ is invertible.
Consequently $b=0$.
Then $I_{0,0}=a^2+a+\nu^2=0$.
In conclusion, the roots are the elements of the form $a+\nu$ where $a$ satisfies $a^2+a+\nu^2=0$; indeed, $(a+\nu)^2+(a+\nu)+\nu=a^2+\nu^2+a+\nu+\nu=a^2+a+\nu^2$.
\bigskip
\textbf{Case 3}
Assume $\nu=n x$ for some Artin-Schreier $x$ and some $n \in F^\times$.
Every element $z \in Q$ decomposes as $z_0+z_1$ where $z_0 x=x z_0$ and $z_1 x+x z_1=z_1$.
The expression $I=z^2+z+\nu$ decomposes similarly into $I_0+I_1$, and we obtain the following system of equations:
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+z_0+\nu=0\\
I_1 & = & z_0 z_1+z_1 z_0+z_1=0
\end{eqnarray*}
The element $z_0$ decomposes as $a+b x$ for some $a,b \in F$, and so from $I_1$ we obtain $(b+1) z_1=0$, which means that $z_1=0$ or $b=1$.
Assume $z_1 \neq 0$, so that $b=1$.
Consequently $z_0=a+x$ and $z_0^2=a^2+x+\alpha$.
Now, $I_0=a^2+x+\alpha+z_1^2+a+x+\nu=a^2+\alpha+z_1^2+a+\nu=0$.
However, $I_0=I_{0,0}+I_{0,1} x$ where $I_{0,0},I_{0,1} \in F$, and since $z_1^2 \in F$ and $\nu=n x$, $I_{0,1}=n=0$, which creates a contradiction, because $n \in F^\times$.
Consequently $z_1=0$, and it is enough to solve the equation over the field $F[\nu]$.
\end{proof}
\section{$\mu=0$}
If $\nu \in F$ then the equation has a root only if $\nu=(a^2 \alpha+a b+b^2) \beta+c^2$ for some $a,b,c \in F$, where $Q=F[x,y : x^2+x=\alpha, y^2=\beta, x y+y x=y]$, because these are the squares of all the square-central and central elements of $Q$.
If there are such $a,b,c$ then $z=a x y+b y+c$ is a root; if $z \not\in F$, all its conjugates are roots as well, and there are infinitely many of them.
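The square computed here can be confirmed directly. The Python sketch below is an illustration, not part of the paper: it realizes $Q$ over $\mathbb{F}_2[t]$ with the arbitrary parameters $\alpha=t$, $\beta=t+1$ and verifies that $(a x y+b y+c)^2=(a^2 \alpha+a b+b^2) \beta+c^2$ for a range of sample $a,b,c$.

```python
# Illustration: verify the square of z = a xy + b y + c, with Q realized over
# F_2[t] (polynomials as ints: bit i = coefficient of t^i, addition = XOR).
def fmul(u, v):
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        v >>= 1
    return r

ALPHA, BETA = 0b10, 0b11  # arbitrary parameters alpha = t, beta = t + 1

# Basis products e_i * e_j in the basis (1, x, y, xy); x^2 = x + ALPHA,
# y^2 = BETA, y x = x y + y.
T = [
    [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)],
    [(0, 1, 0, 0), (ALPHA, 1, 0, 0), (0, 0, 0, 1), (0, 0, ALPHA, 1)],
    [(0, 0, 1, 0), (0, 0, 1, 1), (BETA, 0, 0, 0), (BETA, BETA, 0, 0)],
    [(0, 0, 0, 1), (0, 0, ALPHA, 0), (0, BETA, 0, 0), (fmul(ALPHA, BETA), 0, 0, 0)],
]

def qmul(p, q):
    r = [0, 0, 0, 0]
    for i in range(4):
        for j in range(4):
            c = fmul(p[i], q[j])
            for k in range(4):
                r[k] ^= fmul(c, T[i][j][k])
    return tuple(r)

def square_value(a, b, c):
    # expected central value (a^2 alpha + a b + b^2) beta + c^2
    return fmul(BETA, fmul(fmul(a, a), ALPHA) ^ fmul(a, b) ^ fmul(b, b)) ^ fmul(c, c)

for a in (0, 1, 0b10, 0b101):
    for b in (0, 1, 0b11):
        for c in (0, 1, 0b10):
            z = (c, 0, b, a)  # coordinates of a xy + b y + c in (1, x, y, xy)
            assert qmul(z, z) == (square_value(a, b, c), 0, 0, 0)
```

Every sample square is central and matches the displayed formula.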
\begin{thm}
\begin{enumerate}
\item If $\nu$ is a square-central element then all the elements $z \in Q$ satisfying $z^2+\nu=0$ belong to $F[x]$, where $x$ is an Artin-Schreier element satisfying $x \nu+\nu x=\nu$.
\item If $\nu=n x$ for some Artin-Schreier element $x$ and $n \in F^\times$ then all the elements $z \in Q$ satisfying $z^2+\nu=0$ belong to $F[\nu]$.
\end{enumerate}
\end{thm}
\begin{proof}
\textbf{Case 1}
Assume $\nu$ is square-central.
Then $x \nu+\nu x=\nu$ for some Artin-Schreier $x$.
As before, $z=z_0+z_1$ with $z_0=a+b x$ for some $a,b \in F$, and we obtain the system
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2=0\\
I_1 & = & z_0 z_1+z_1 z_0+\nu=0
\end{eqnarray*}
From $I_1$ we obtain $b z_1=\nu$. Assume $z_1 \neq 0$. Since $z_1^2 \in F$, the equation $I_0=z_0^2+z_1^2=0$ forces $z_0^2=a^2+b^2 \alpha+b^2 x$ to lie in $F$, hence $b=0$; but then $b z_1=0 \neq \nu$, which creates a contradiction.
Consequently there is no root with $z_1 \neq 0$, which means that all the roots can be obtained by solving the equation over the field $F[x]$.
\bigskip
\textbf{Case 2}
Assume $\nu=n x$ for some Artin-Schreier element $x$ and some $n \in F^\times$.
As before, $z=z_0+z_1$ with $z_0=a+b x$, and we obtain the system
\begin{eqnarray*}
I_0 & = & z_0^2+z_1^2+\nu=0\\
I_1 & = & z_0 z_1+z_1 z_0=0
\end{eqnarray*}
$I_1=b z_1=0$, which means that either $b=0$ or $z_1=0$.
If $z_1 \neq 0$ then $b=0$ and so $I_0=a^2+z_1^2+\nu=0$.
However, $I_0=I_{0,0}+I_{0,1} x$ with $z_1^2 \in F$, so $I_{0,1}=n=0$, which creates a contradiction, because $n \in F^\times$.
Again, this equation can be solved simply over the field $F[x]=F[\nu]$.
\end{proof}
\section*{Acknowledgements}
I owe thanks to Jean-Pierre Tignol and Uzi Vishne for their help and support.
\section*{Bibliography}
\bibliographystyle{amsalpha}
\par
In this paper we consider the following hydrodynamic system modeling the flow of liquid crystal materials
in three dimensions (see \cite{D,E,L-L,L-L-W})
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial u}{\partial t}+u\cdot\nabla u-\nu\triangle u+\nabla P=-\lambda\nabla\cdot(\nabla d\odot\nabla d),
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\frac{\partial d}{\partial t}+u\cdot\nabla d=\gamma(\triangle d+\mid\nabla d\mid^{2}d), \ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\nabla\cdot u=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ \mathbb{R}^{+}\times \Omega, \\
\end{array}\right.
\end{equation} with initial-boundary conditions:
\begin{align}
(u(0,x),d(0,x))&=(u_0(x),d_0(x))\ \ \ x\in\Omega,\\
(u(t,x),d(t,x))&=(0,d_0(x))\ \ (t,x)\in\mathbb{R}^{+}\times\partial\Omega.
\end{align}
Suppose that $\Omega\subseteq \mathbb{R}^3$ is a bounded smooth domain, $u(t,x):\mathbb{R}^{+}\times\Omega\rightarrow \mathbb{R}^3$ stands for the velocity field of the flow, $d(t,x):\mathbb{R}^{+}\times\Omega\rightarrow S^{2}$, the unit sphere in $\mathbb{R}^3$, is a unit-vector field that represents the macroscopic molecular orientation of the liquid crystal material and $P(t,x): \mathbb{R}^{+}\times \Omega\rightarrow \mathbb{R}$ is the pressure function.
The positive constants $\nu,\lambda,\gamma$ stand for the viscosity, the competition between kinetic energy and potential energy, and the microscopic elastic relaxation time for the molecular orientation field, respectively. $\nabla d\odot\nabla d$ denotes the $3\times 3$ matrix whose $(i,j)$ entry is given by $\nabla_i d\cdot\nabla_j d$ for $1\leq i,j\leq 3$. It is easy to see that $\nabla d\odot\nabla d=(\nabla d)^T\nabla d$, where $(\nabla d)^T$ denotes the transpose of the matrix $\nabla d$.
System (1.1) is a simplified version of the Ericksen-Leslie model, which reduces to the Oseen-Frank model in the static case, for the hydrodynamics of nematic liquid crystals developed during the period from 1958 to 1968 \cite{D,E,L}. Since the general Ericksen-Leslie system is very complicated, we only study a simplified model of the Ericksen-Leslie system which can be derived without destroying the basic structure. It is a macroscopic continuum description of the time evolution of the materials under the influence of both the flow field $u(t,x)$ and the macroscopic description of the microscopic orientation configurations $d(t,x)$ of rod-like liquid crystals. The system (1.1)-(1.3) couples the Navier-Stokes equations with the harmonic map flow.
In a series of papers, Lin \cite{Lin} and Lin-Liu \cite{L-L,L-L1} initiated the mathematical analysis of the system (1.1)-(1.3). Since the Ericksen-Leslie system (1.1)-(1.3) with $\mid d\mid=1$ is complicated, Lin and Liu \cite{L-L,L-L1} proposed to consider an approximation model of the Ericksen-Leslie system by the Ginzburg-Landau functional. More precisely, they replaced the Dirichlet functional
\begin{align*}
\frac{1}{2}\int_{\Omega}\mid\nabla d\mid^2dx
\end{align*} for $d:\Omega\rightarrow S^{n-1}$ by the Ginzburg-Landau functionals
\begin{align*}
\int_{\Omega}(\frac{1}{2}\mid\nabla d\mid^2+\frac{(1-\mid d\mid^2)^2}{4\epsilon})dx
\end{align*} for $d:\Omega\rightarrow \mathbb{R}^n (\epsilon>0)$. In \cite{L-L}, Lin and Liu proved the global existence of solutions in dimensions two or three. In \cite{L-L1}, Lin and Liu proved partial regularity of weak solutions in dimension three. Furthermore, Lin and Liu in \cite{L-L2} proved existence of solutions for the general Ericksen-Leslie system and also analyzed the limits of weak solutions as $\epsilon\rightarrow 0$. In \cite{H-W}, Hu and Wang give the existence of global strong solution and prove that all the weak solutions constructed in \cite{L-L} must be equal to the unique strong solution.
Recently, Lin, Lin and Wang \cite{L-L-W} studied the system (1.1)-(1.3) in two dimensions. They established the global existence and partial regularity of the global weak solution and performed the blow-up analysis at each singular time. Hong \cite{H} proved the global existence of the system (1.1)-(1.3) in two dimensions independently.
The aim of this paper is to establish the short-time existence of solutions for general initial-boundary data and the global existence for small initial-boundary
data for the system (1.1)-(1.3).\\
\noindent\textbf{Notations}\textit{\ In this paper, we denote
$W^{m,q}(\Omega)$ the set of functions in $L^q(\Omega)$ whose
derivatives up to order $m$ belong to $L^q(\Omega)$. For $T>0$
and a function space $X$, we denote by $L^p(0,T;X)$ the set of
Bochner measurable $X$-valued time dependent functions $f$ such
that $t\rightarrow \parallel f\parallel_{X}$ belongs to $L^p(0,T)$.
The space $V^{1,0}_2(Q_T)$ consists of the functions $u$ such that
\begin{align*}
u\in C([0,T],L^2(\Omega))\cap W^{1,0}_2(Q_T),\\
\sup_{[0,T]}\parallel u(t,\cdot)\parallel_{L^2(\Omega)}+\parallel \nabla u\parallel_{L^2(Q_T)}< \infty,
\end{align*} where $Q_T=(0,T)\times \Omega$.
The space $D_{A^q}^{1-\frac{1}{p},p}$ represents a fractional domain of the Stokes operator in $L^q$ (see Sect.2.3 in \cite{Dan}). Roughly, the vector-fields of $D_{A^q}^{1-\frac{1}{p},p}$ are vectors which have $2-\frac{2}{p}$ derivatives in $L^q(\Omega)$, are divergence-free, and vanish on $\partial \Omega$. $B_{q,p}^{s}(\Omega)$ represents the Besov space \cite{B-L}, which can be regarded as an interpolation space between $L^q(\Omega)$ and $W^{s+\epsilon,q}(\Omega)$. From Proposition 2.5 in \cite{Dan}, we can get
\begin{align}
D_{A^q}^{1-\frac{1}{p},p}\hookrightarrow B_{q,p}^{2(1-\frac{1}{p})}(\Omega)\cap L^q(\Omega).
\end{align}
Since the space variables are in $\Omega$, if there is no ambiguity, we write $L^q(\Omega)$, $W^{m,q}(\Omega)$, $B_{q,p}^{s}(\Omega)$ as $L^q$, $W^{m,q}$, $B_{q,p}^{s}$
respectively.
}
\\
\noindent\textbf{Definition 1.1.}\textit {\ For $T>0$ and $1<p,q<\infty$,
we denote by $E_{T}^{p,q}$ the set of triplets (u, d, P) such that\\
$$u\in C(0,T; D_{A^{q}}^{1-\frac{1}{p},p})\cap L^p(0,T;W^{2,q}(\Omega)\cap W^{1,q}_0(\Omega)),$$ $$\partial_t u\in L^p(0,T;L^q(\Omega)), \nabla\cdot u=0,$$\\
$d\in C(0,T;B_{q,p}^{2(1-\frac{1}{p})})\cap L^p(0,T;W^{2,q}(\Omega))$, $\partial_t d\in L^p(0,T;L^q(\Omega))$,\\
$P\in L^p(0,T;W^{1,q}(\Omega))$, $\int_{\Omega}P dx=0$.\\
The corresponding norm is denoted by $\parallel \cdot\parallel_{E_{T}^{p,q}}$.
\begin{align*}
\parallel (u,d,P)\parallel_{E_{T}^{p,q}}=&\sup_{[0,T]}\parallel u \parallel_{D_{A^q}^{1-\frac{1}{p},p}}+\sup_{[0,T]}\parallel d \parallel_{B_{q,p}^{2(1-\frac{1}{p})}}+\parallel u \parallel_{L^p(0,T;W^{2,q})}\\&+\parallel \partial_t u\parallel_{L^p(0,T;L^q)}+\parallel d \parallel_{L^p(0,T;W^{2,q})}+\parallel \partial_t d \parallel_{L^p(0,T;L^q)}.\notag
\end{align*}
}\\
\par
Our main results can be stated as follows.
\\
\noindent\textbf{Theorem 1.1.}\textit{\ Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^3$ and $(1-\frac{2}{p})\cdot q>3$. If $u_0\in D_{A^q}^{1-\frac{1}{p},p}$ and $d_0\in B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega),$ then
\\
\\
(1) There exists a $T_0>0,$ such that the system (1.1) with the initial-boundary condition (1.2)-(1.3) has a unique local strong solution $(u,d,P)\in E_{T_0}^{p,q}$ in $(0,T_0)\times \Omega$. Moreover the solution continuously depends on initial data.
\\
\\
(2) For any given unit vector $e\in S^2$, there exists a $\delta>0$ such that, if the initial data satisfies $d_0|_{\partial \Omega}=e$ and
\begin{align*}
\parallel u_0 \parallel_{D_{A^q}^{1-\frac{1}{p},p}}+\parallel d_0-e \parallel_{B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)}\leq \delta,
\end{align*} then the system (1.1)-(1.3) has a unique global strong solution $(u,d,P)\in E_{T}^{p,q}$ in $(0,T)\times\Omega$ for all $T>0.$
}\\
We obtain our results in the spirit of \cite{Dan,L-L-W}. The main difficulty is the low integrability of the nonlinear term $\nabla\cdot(\nabla d\odot\nabla d)$. To overcome this problem, we impose a condition on the initial data which allows us to estimate $\parallel \nabla d\parallel_{L^\infty(0,T;L^\infty)}$.
The paper is organized as follows. In Section 2, we give some useful lemmas. In Section 3, we prove the local well-posedness. In Section 4, we prove the global existence.
\noindent\textbf{Remark 1.1}\textit{\ In this paper we only prove the results in three dimensions, but we point out that our method can deal with the system (1.1)-(1.3) in higher dimensions.}
\section{Preliminaries }
\par
In this section, we give some useful lemmas which will be used in the sequel.
\noindent\textbf{Lemma 2.1} \cite{A}\textit{\ Given $1<p,q<\infty,$ $u_0\in B_{q,p}^{2(1-\frac{1}{p})}$
and $f\in L^p(0,T;L^q)$}. Then the Cauchy problem
\begin{align}
\frac{\partial u}{\partial t}-\triangle u=f, \ \ u|_{t=0}=u_0,
\end{align}
has a unique solution $u$ satisfying
\begin{align}
&\parallel u\parallel_{W^{1,p}(0,T; L^q)}+\parallel u\parallel_{L^p(0,T;W^{2,q})}\\\leq &C_1(\parallel f\parallel_{L^p(0,T;L^q)}+\parallel u_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}),\notag
\end{align} where $C_1$ is independent of $u_0,$ $f,$ and $T$. Moreover, there exists a positive constant $C_2$ independent of $f$ and $T$ such that
\begin{align}
\sup_{t\in(0,T)}\parallel u\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}\leq C_2(\parallel f\parallel_{L^p(0,T;L^q)}+\parallel u_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}).
\end{align}
\noindent\textbf{Lemma 2.2} \cite{Dan}\textit{\ Let $\Omega$ be a $C^{2+\epsilon}$ bounded domain in $\mathbb{R}^N$ and $1<q,p<\infty$. Assume that $u_0\in D_{A^q}^{1-\frac{1}{p},p}$ and $f\in L^p(\mathbb{R}^{+},L^q).$ Then the system
\\\begin{equation}
\left\{\begin{array}{ll} \frac{\partial u}{\partial t}-\triangle u+\nabla P=f,\ \ \int_{\Omega}P dx=0,&\\
\nabla\cdot u=0,\ \ u|_{\partial\Omega}=0 ,\\
u|_{t=0} = u_0,
\end{array}\right.
\end{equation}
has a unique solution $u,P$ satisfying the following inequality for all $T\geq 0$:
\begin{align}
&\parallel u(T)\parallel_{D_{A^q}^{1-\frac{1}{p},p}}+(\int_0^T\parallel (\nabla P,\nabla^2 u,\partial_t u)\parallel_{L^q}^p dt)^{\frac{1}{p}}\\\leq &C_3(\parallel u_0\parallel_{D_{A^q}^{1-\frac{1}{p},p}}+(\int_0^T\parallel f(t)\parallel_{L^q}^p)^{\frac{1}{p}}),\notag
\end{align}
with $C_3=C(q,p,N,\Omega)$.}
\\ Using Lemma 2.1, we can prove there is a similar conclusion for initial-boundary problem.
\\
\noindent\textbf{Theorem 2.1}\textit{\ Let $\Omega$ be a bounded smooth domain in $\mathbb{R}^3$ and $(1-\frac{2}{p})\cdot q>3$. If $u_0\in B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)$ and $f\in L^p(0,T;L^q)$, then the initial-boundary problem
\begin{align}
\frac{\partial u}{\partial t}-\triangle u=f,\ \ \ u|_{\partial_{p} Q_T}=u_0
\end{align} has a unique solution $u$ satisfying
\begin{align}
&\parallel u\parallel_{L^\infty(0,T;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel u\parallel _{L^p(0,T;W^{2,q})}+\parallel \partial_t u\parallel_{L^p(0,T;L^q)}\\\leq & C(\parallel f\parallel_{L^p(0,T;L^q)}+\parallel u_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}+T^{\frac{1}{p}}\parallel u_0\parallel_{C^{2,\alpha}(\partial \Omega)}),\notag
\end{align} where $\partial_ {p}{Q_T}=(0,T)\times\partial \Omega\cup\{0\}\times \Omega$.
}
\begin{proof}
Since $u_0\in C^{2,\alpha}(\partial \Omega)$, by the standard elliptic theory we get that there exists a unique solution $u_1\in C^{2,\alpha}(\bar\Omega)$ satisfying
\begin{align}
\triangle u_1=0,\ \ \ u_1|_{\partial \Omega}=u_0.
\end{align} It is clear that $u_0-u_1\in {B_{q,p}^{2(1-\frac{1}{p})}}$ and $(u_0-u_1)|_{\partial \Omega}=0$. Using Lemma 2.1, the equation
\begin{align*}
\frac{\partial u}{\partial t}-\triangle u=f, \ \ u|_{t=0}=u_0-u_1,
\end{align*} has a solution $u_2\in B_{q,p}^{2(1-\frac{1}{p})}$ and $u_2(t,x)|_{\partial \Omega}=0$.
A direct computation shows that $u_1+u_2$ is a solution of the initial-boundary problem (2.6). Using Lemma 2.1 and Schauder's estimate, we deduce (2.7).
\end{proof}
\noindent\textbf{Theorem 2.2}\textit{\ Let $(1-\frac{2}{p})\cdot
q>3$ and $u\in L^\infty(0,T;D_{A^q}^{1-\frac{1}{p},p})$. If $d\in
L^p(0,T;W^{2,q})\cap W^{1,p}(0,T;L^q)\cap
L^\infty(0,T;B_{q,p}^{2(1-\frac{1}{p})})$ is a solution of the
following nonlinear parabolic problem
\begin{align}
\frac{\partial d}{\partial t}-\triangle d-\mid\nabla d\mid^2d+u\cdot\nabla d&=0,\ \ in \ \ (0,T)\times \Omega,\\
d|_{\partial_{p} Q_T}&=d_0,
\end{align} where $d_0: \Omega\rightarrow S^2$. Then, $\mid d\mid=1$ in $[0,T)\times \Omega$.
}
\begin{proof}
Multiplying (2.9) by $d$, we get
\begin{align}
\frac{\partial (\mid d\mid^2-1)}{\partial t}-\triangle(\mid d\mid^2-1)-2\mid \nabla d\mid^2(\mid d\mid^2-1)+u\cdot\nabla(\mid d\mid^2-1)=0.
\end{align} By the assumption $(1-\frac{2}{p})\cdot q>3$ and (1.4), we have
\begin{align}
\parallel \nabla d\parallel_{L^\infty(0,T;L^\infty)} \leq\parallel \nabla d\parallel_{L^\infty(0,T;B_{q,p}^{1-\frac{2}{p}})}\leq\parallel d\parallel_{L^\infty(0,T;B_{q,p}^{2(1-\frac{1}{p})})},\\
\parallel u\parallel_{L^\infty(0,T;L^\infty)}\leq \parallel u\parallel_{L^\infty(0,T; B_{q,p}^{2(1-\frac{1}{p})})}\leq\parallel u\parallel_{L^\infty(0,T;D_{A^q}^{1-\frac{1}{p},p})}.
\end{align} Noticing that $(1-\frac{2}{p})\cdot q>3$, we get
\begin{align*}
\parallel \nabla (\mid d\mid^2-1)\parallel_{L^2(Q_T)}\leq &C \parallel d\parallel_{L^\infty(Q_T)}\parallel\nabla d\parallel_{L^2(Q_T)}\\\leq &C\parallel d\parallel_{L^\infty(0,T;B_{q,p}^{2(1-\frac{1}{p})})}\parallel d\parallel_{L^p(0,T;W^{2,q})}.\notag
\end{align*} Since $\partial_t d\in L^p(0,T;L^q)$ and $d\in L^\infty(0,T;L^\infty)$, we can get that $$(\mid d\mid^2-1)\in C([0,T],L^2).$$
This yields $(\mid d\mid^2-1)\in V^{1,0}_2(Q_T)$.
Thus, we have that the function $\mid d\mid^2-1$ satisfies the following equation
\begin{align}
\frac{\partial f}{\partial t}-\triangle f-2\mid \nabla d\mid^2f+u\cdot\nabla f &=0,\\
f|_{\partial_{p} Q_T}&=0,
\end{align} in $V_2^{1,0}(Q_T)$. The uniqueness implies $\mid d\mid^2-1=0$ in $Q_T$. This proves the theorem.
\end{proof}
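For clarity, the computation behind (2.11) can be expanded as follows; it is a remark added here, using only the standard identity $2 d\cdot\triangle d=\triangle\mid d\mid^{2}-2\mid\nabla d\mid^{2}$ (note that the transport term enters with a plus sign):

```latex
\begin{align*}
0&=2d\cdot\big(\partial_t d-\triangle d-\mid\nabla d\mid^{2}d+u\cdot\nabla d\big)\\
&=\partial_t\mid d\mid^{2}-\triangle\mid d\mid^{2}+2\mid\nabla d\mid^{2}-2\mid\nabla d\mid^{2}\mid d\mid^{2}+u\cdot\nabla\mid d\mid^{2}\\
&=\frac{\partial(\mid d\mid^{2}-1)}{\partial t}-\triangle(\mid d\mid^{2}-1)-2\mid\nabla d\mid^{2}(\mid d\mid^{2}-1)+u\cdot\nabla(\mid d\mid^{2}-1).
\end{align*}
```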
\section{Local well-posedness}
\par
In this section, we prove the local well-posedness of the system (1.1) with the initial boundary value (1.2)-(1.3).
Noticing that $\nabla P=\nabla (P-\int_{\Omega}Pdx)$, we can assume that
\begin{align*}
\int_{\Omega} Pdx=0.
\end{align*}
Since the exact values of $\nu,\lambda,\gamma$ do not play a role, we henceforth assume$$\nu=\lambda=\gamma=1.$$ Thus, we can rewrite the system (1.1) as
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial u}{\partial t}+u\cdot\nabla u-\triangle u+\nabla P=-\nabla\cdot(\nabla d\odot\nabla d),
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\frac{\partial d}{\partial t}+u\cdot\nabla d=(\triangle d+\mid\nabla d\mid^{2}d), \ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\nabla\cdot u=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ \mathbb{R}^{+}\times \Omega, \\
\int_{\Omega} Pdx=0,
\end{array}\right.
\end{equation} with initial-boundary conditions
\begin{align}
\left\{\begin{array}{ll}
(u(0,x),d(0,x))=(u_0(x),d_0(x))\ \ x\in\Omega,\\
(u(t,x),d(t,x))=(0,d_0(x)),\ \ \ \ \ (t,x)\in\mathbb{R}^{+}\times\partial\Omega.
\end{array}\right.
\end{align}
Now, we prove the local existence.
Firstly, we linearize the system (3.1)-(3.2) and construct approximate solutions. Set $(u^0(t,x),d^0(t,x))=(u_0,d_0)$. Then, for $n\geq 1$, given $(u^{n-1},d^{n-1},P^{n-1})$, we define $(u^n,d^n,P^n)$ as the solution of\\
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial u^n}{\partial t}-\triangle u^n+\nabla P^n=-u^{n-1}\cdot\nabla u^{n-1}-\nabla\cdot(\nabla d^{n-1}\odot\nabla d^{n-1}),
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\frac{\partial d^n}{\partial t}-\triangle d^n=-u^{n-1}\cdot\nabla d^{n-1}+\mid\nabla d^{n-1}\mid^{2}d^{n-1}, \ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\nabla\cdot u^n=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ \mathbb{R}^{+}\times \Omega, \\
\int_{\Omega} P^ndx=0,
\end{array}\right.
\end{equation} with initial-boundary data
\begin{align}
\left\{\begin{array}{ll}
(u^n(0,x),d^n(0,x))=(u_0(x),d_0(x))\ \ x\in\Omega,\\
(u^n(t,x),d^n(t,x))=(0,d_0(x))\ \ \ \ \ (t,x)\in\mathbb{R}^{+}\times\partial\Omega.
\end{array}\right.
\end{align}
From Theorem 2.1 and Lemma 2.2, we can obtain that the sequence $\{(u^n,d^n,P^n)\}_{n\in \mathbb{N}}$
belongs to $E_{T}^{p,q}$ for any $T>0$. Moreover, we have the following estimates relating $(u^{n+1},d^{n+1},P^{n+1})$ to $(u^{n},d^{n},P^{n})$.
\begin{align}
&\parallel u^{n+1}\parallel_{L^\infty(0,T;D_{A^q}^{1-\frac{1}{p},p})}+(\int_{0}^{T}\parallel (\nabla P^{n+1},\nabla^2 u^{n+1},\partial_t u^{n+1})\parallel^p_{L^q}dt)^{\frac{1}{p}}\\\leq &C(\parallel u_0\parallel_{D_{A^q}^{1-\frac{1}{p},p}}+(\int_{0}^{T}\parallel u^n\cdot\nabla u^n+\nabla\cdot(\nabla d^n\odot\nabla d^n)\parallel^p_{L^q}dt)^{\frac{1}{p}}),\notag
\end{align}
\begin{align}
&\parallel d^{n+1}\parallel_{L^\infty(0,T;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel d^{n+1}\parallel_{L^p(0,T;W^{2,q})}+\parallel \partial_t d^{n+1}\parallel_{L^{p}(0,T;L^q)}\\\leq &C(\parallel d_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}+(\int_{0}^{T}\parallel -u^n\cdot\nabla d^n+\mid\nabla d^n\mid^2d^n\parallel^p_{L^q}dt)^{\frac{1}{p}}+T^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}).\notag
\end{align} Note that $u^{n+1}|_{\partial \Omega}=0$. Then the divergence theorem implies
\begin{align*}
\int_\Omega \nabla u^{n+1}dx=0.
\end{align*} Thus, using Poincar\'{e}'s inequality, we get
\begin{align*}
\parallel u^{n+1}\parallel_{W^{2,q}}\leq C \parallel \nabla^2 u^{n+1}\parallel_{L^q}.
\end{align*} Therefore (3.5) can be written as
\begin{align}
&\parallel u^{n+1}\parallel_{L^\infty(0,T;D_{A^q}^{1-\frac{1}{p},p})}+(\int_{0}^{T}\parallel (\nabla P^{n+1},u^{n+1},\nabla^2 u^{n+1},\partial_t u^{n+1})\parallel^p_{L^q}dt)^{\frac{1}{p}}\\\leq &C(\parallel u_0\parallel_{D_{A^q}^{1-\frac{1}{p},p}}+(\int_{0}^{T}\parallel u^n\cdot\nabla u^n+\nabla\cdot(\nabla d^n\odot\nabla d^n)\parallel^p_{L^q}dt)^{\frac{1}{p}}).\notag
\end{align}
Secondly, we present a uniform estimate for the sequence $\{(u^n,d^n,P^n)\}_{n\in \mathbb{N}}$.
Define
\begin{align}
F_n(t)=&\parallel u^n\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel u^n\parallel_{L^p(0,t;W^{2,q})}\\&+\parallel \partial_t u^n\parallel_{L^p(0,t;L^q)}+\parallel \nabla P^n\parallel_{L^p(0,t;L^q)}\notag,\\
E_n(t)=&\parallel d^n\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel d^n\parallel_{L^p(0,t;W^{2,q})}+\parallel \partial_t d^n\parallel_{L^p(0,t;L^q)},\\
H_n(t)=&F_n(t)+E_n(t),\\
F_0=&\parallel u_0\parallel_{D_{A^q}^{(1-\frac{1}{p}),p}},\ \ E_0=\parallel d_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}},\ \ H_0=F_0+E_0.
\end{align}
\noindent\textbf{Lemma 3.1}\textit{\ Let $(1-\frac{2}{p})\cdot q>3$ and $u_0(x)\in D_{A^q}^{1-\frac{1}{p},p},$ $d_0(x)\in B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)$. Then,
there exists a positive $T_0$ such that the sequence $\{(u^n,d^n,P^n)\}_{n\in\mathbb{N}}$ is uniformly bounded in $E_{T_0}^{p,q}$.
}
\begin{proof}
Using the fact
\begin{align}
\nabla \cdot(\nabla d\odot\nabla d)=&\nabla(\frac{\mid\nabla d\mid^2}{2})+\triangle d\cdot\nabla d\\=&\nabla^2d\cdot\nabla d+\triangle d\cdot \nabla d\notag
\end{align} and (2.12)-(2.13), we have that
\begin{align}
F_{n+1}(t)\leq &C(F_0+\parallel u^n\cdot\nabla u^n\parallel_{L^p(0,t;L^q)}\\&+\parallel\nabla\mid\nabla d^n\mid^2\parallel_{L^p(0,t;L^q)}+\parallel \triangle d^n\cdot\nabla d^n\parallel_{L^p(0,t;L^q)})\notag\\\leq &C(F_0+t^{\frac{1}{p}}\parallel u^n\parallel_{L^\infty(0,t;L^\infty)}\parallel \nabla u^n\parallel_{L^\infty(0,t;L^\infty)}\notag\\&+\parallel \nabla d^n\parallel_{L^\infty(0,t;L^\infty)}\parallel \triangle d^n\parallel_{L^{p}(0,t;L^q)})\notag\\\leq &C(F_0+t^{\frac{1}{p}}\parallel u^n\parallel^2_{L^\infty(0,t; D_{A^q}^{1-\frac{1}{p},p})}\notag\\&+ \parallel d^n\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel d^n\parallel_{L^p(0,t;W^{2,q})})\notag\\\leq &C(F_0+t^{\frac{1}{p}}F^2_n(t)+E^2_n(t)),\notag
\end{align} due to (3.7). Similarly, we obtain from (3.6) that
\begin{align}
E_{n+1}(t)\leq& C(E_0+\parallel u^n\cdot \nabla d^n\parallel_{L^p(0,t;L^q)}+\parallel \mid\nabla d^n\mid^2 d^n\parallel_{L^p(0,t;L^q)}+t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})\\\leq &C(E_0+t^{\frac{1}{p}}\parallel u^n\parallel_{L^\infty(0,t;L^\infty)} \parallel \nabla d^n\parallel_{L^\infty(0,t;L^\infty)}\notag\\&+t^{\frac{1}{p}}\parallel \nabla d^n\parallel^2_{L^\infty(0,t;L^\infty)}\parallel d^n\parallel_{L^\infty(0,t;L^\infty)}+t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})\notag\\\leq &C(E_0+t^\frac{1}{p}F_n(t)E_n(t)+t^{\frac{1}{p}}E^3_n(t)+t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}).\notag
\end{align}
Combining (3.13) with (3.14) yields that
\begin{align}
H_{n+1}(t)\leq C\{H_0+t^\frac{1}{p}[F^2_n(t)+F_n(t)E_n(t)+E^3_n(t)+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}]+E^2_n(t)\}.
\end{align} Plugging (3.14) into (3.15), we get that
\begin{align}
H_{n+1}(t)\leq &C\{H_0+t^\frac{1}{p}(F^2_n(t)+F_n(t)E_n(t)+E^3_n(t)+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})\\
+&C^2(E_0+t^\frac{1}{p}F_{n-1}(t)E_{n-1}(t)+t^{\frac{1}{p}}E^3_{n-1}(t)+t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})^2\}.\notag
\end{align} By a direct computation, we obtain
\begin{align}
H_{n+1}(t)\leq C&\{H_0+C^2H^2_0+(C^2+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})t^\frac{2}{p}(H^6_{n-1}+H^5_{n-1}+H^4_{n-1})\\
+&t^\frac{1}{p}(H^3_n+H^2_n)+(2C^2H_0+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})t^\frac{1}{p}(H^3_{n-1}+H^2_{n-1})\notag\\
+&t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}\},\notag
\end{align} where the constant $C$ only depends on $\Omega$. Since $H_n(t)\rightarrow H_0$ as $t\rightarrow 0$, we can assume that there exists a $T>0$ such that
$H_n(t)\leq 2CK H_0$ and $H_{n-1}(t)\leq 2CK H_0$ on $[0,T]$ for some fixed $n$, where $K=1+C^2H_0$. Then,
\begin{align}
H_{n+1}(t)\leq &C\{(H_0+C^2H^2_0)+C^3t^\frac{2}{p}(2^6K^6H^6_0+2^5K^5H^5_0+2^4K^4H^4_0)\\
&+t^\frac{1}{p}(C+2C^2H_0+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})(2^3K^3H^3_0+2^2K^2H^2_0)\notag\\&+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}t^{\frac{2}{p}}(2^6K^6H^6_0+2^5K^5H^5_0+2^4K^4H^4_0)\},\notag\\=&CKH_0+C\{C^3t^\frac{2}{p}(2^6K^6H^6_0+2^5K^5H^5_0+2^4K^4H^4_0)\notag\\
&+t^\frac{1}{p}(C+2C^2H_0+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})(2^3K^3H^3_0+2^2K^2H^2_0)+t^\frac{1}{p}\notag\\&+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}t^{\frac{2}{p}}(2^6K^6H^6_0+2^5K^5H^5_0+2^4K^4H^4_0)\}\notag\\&+t^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}\notag
\end{align} If we choose $T_0\leq T$ such that
\begin{align}
&(C^3+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})T_0^\frac{2}{p}(2^6K^6H^5_0+2^5K^5H^4_0+2^4K^4H^3_0)
\\&+T_0^\frac{1}{p}(C+2C^2H_0+\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)})(2^3K^3H^2_0+2^2K^2H_0)\notag\\\leq &K-1,\notag
\end{align} and
\begin{align*}
T_0^{\frac{1}{p}}\parallel d_0\parallel_{C^{2,\alpha}(\partial \Omega)}\leq CH_0,
\end{align*}then
\begin{align}
H_{n+1}(T_0)\leq 2CK H_0.
\end{align} Since $H_{n+1}(t)$ is increasing, we get
\begin{align}
H_{n+1}(t)\leq 2CK H_0, \ \ \ on\ \ \ [0,T_0].
\end{align}
Arguing by induction, we deduce that $\{(u^n,d^n,P^n)\}_{n\in \mathbb{N}}$ is uniformly bounded in $E^{p,q}_{T_0}$.
\end{proof}
Next, we establish the convergence of the approximate solution sequence $\{(u^n,d^n,P^n)\}_{n\in \mathbb{N}}$.
\noindent\textbf{Lemma 3.2}\textit{\ Let $(1-\frac{2}{p})\cdot q>3,$ $u_0\in D_{A_q}^{1-\frac{1}{p},p}$ and $d_0\in B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega),$ and let $\{(u^n,d^n,P^n)\}$ be the solution sequence constructed in Lemma 3.1. Then there
exists a positive constant $T_1\leq T_0$ such that $\{(u^n,d^n,P^n)\}_{n\in \mathbb{N}}$ converges in $E_{T_1}^{p,q}$.
}
\begin{proof}
Let
\begin{align}
D(u^n)\doteq u^{n+1}-u^n;\ \ D(d^n)\doteq d^{n+1}-d^n;\ \ D(P^n)\doteq P^{n+1}-P^n.
\end{align}
Define
\begin{align}
DF_n(t)=&\parallel D(u^n)\parallel_{L^\infty(0,t;D_{A^q}^{1-\frac{1}{p},p})}+\parallel D(u^n)\parallel_{L^p(0,t;W^{2,q})}\\&+\parallel\partial_t D(u^n)\parallel_{L^p(0,t;L^q)}+\parallel \nabla D(P^n)\parallel_{L^p(0,t;L^q)},\notag\\DE_n(t)=&\parallel D(d^n)\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel D(d^n)\parallel_{L^p(0,t;W^{2,q})}\\&+\parallel \partial_t D(d^n)\parallel_{L^p(0,t;L^q)},\notag\\
DH_n(t)=&DF_n(t)+DE_n(t).
\end{align} A direct computation shows that the triplet $(D(u^n),D(d^n),D(P^n))$ satisfies
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial D(u^n)}{\partial t}-\triangle D(u^n)+\nabla D(P^n)\\=-u^n\cdot\nabla u^n+u^{n-1}\cdot \nabla u^{n-1}-\nabla(\frac{\mid \nabla d^n\mid^2}{2})\\+\nabla(\frac{\mid \nabla d^{n-1}\mid^2}{2})
-\triangle d^n\cdot\nabla d^n+\triangle d^{n-1}\cdot \nabla d^{n-1},
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\\
\frac{\partial D(d^n)}{\partial t}-\triangle D(d^n)\\=-u^n\cdot \nabla d^n+u^{n-1}\cdot\nabla d^{n-1}\\
+\mid\nabla d^n\mid^2d^n-\mid\nabla d^{n-1}\mid^2d^{n-1}, \ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\\
\nabla\cdot D(u^n)=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ \mathbb{R}^{+}\times \Omega, \\
\\
\int_{\Omega} D(P^n)dx=0,
\end{array}\right.
\end{equation} with initial-boundary value
\begin{align}
D(u^n)|_{t=0}=D(u^n)|_{\partial \Omega}=0, \\
D(d^n)|_{t=0}=D(d^n)|_{\partial \Omega}=0.
\end{align} Using (2.12)-(2.13), we get
\begin{align}
&\parallel-u^n\cdot\nabla u^n+u^{n-1}\cdot\nabla u^{n-1}\parallel_{L^p(0,t;L^q)}\\\leq &\parallel u^n\cdot \nabla(D(u^{n-1}))\parallel_{L^p(0,t;L^q)}+\parallel \nabla u^{n-1}\cdot D(u^{n-1})\parallel_{L^p(0,t;L^q)}\notag
\\\leq &t^\frac{1}{p}\parallel u^n\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\parallel D(u^{n-1})\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\notag\\&+t^\frac{1}{p}\parallel u^{n-1}\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\parallel D(u^{n-1})\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})},\notag\\
&\parallel\triangle d^n\cdot\nabla d^n-\triangle d^{n-1}\cdot\nabla d^{n-1}\parallel_{L^p(0,t;L^q)}\\\leq
&\parallel d^n\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel D(d^{n-1})\parallel_{L^p(0,t;W^{2,q})}\notag\\&+\parallel d^{n-1}\parallel_{L^p(0,t;W^{2,q})}\parallel D(d^{n-1})\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})},\notag\\
&\parallel \nabla^2 d^n\cdot \nabla d^n-\nabla^2 d^{n-1}\cdot \nabla d^{n-1}\parallel_{L^p(0,t;L^q)}\\\leq
&\parallel d^n\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel D(d^{n-1})\parallel_{L^p(0,t;W^{2,q})}\notag\\&+\parallel d^{n-1}\parallel_{L^p(0,t;W^{2,q})}\parallel D(d^{n-1})\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})},\notag\\
&\parallel u^n\cdot \nabla d^n-u^{n-1}\cdot \nabla d^{n-1}\parallel_{L^p(0,t;L^q)}\\\leq &t^\frac{1}{p}\parallel u^n\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\parallel D(d^{n-1})\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\notag\\&+t^\frac{1}{p}\parallel d^{n-1}\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel D(u^{n-1})\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})},\notag\\
&\parallel \mid\nabla d^n\mid^2d^n-\mid\nabla d^{n-1}\mid^2d^{n-1}\parallel_{L^p(0,t;L^q)}\\\leq&2t^\frac{1}{p}\parallel d^n\parallel^2_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel D(d^{n-1})\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\notag\\&+t^\frac{1}{p}\parallel d^{n-1}\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel D(d^{n-1})\parallel_{L^\infty(0,t;B_{q,p}^{B_{q,p}^{2(1-\frac{1}{p})}})}.\notag
\end{align} Using (3.21) and (3.29)-(3.33), we obtain that
\begin{align}
&DE_n(t)\leq Ct^\frac{1}{p}(DE_{n-1}(t)+DF_{n-1}(t)),\\
&DF_n(t)\leq C(t^\frac{1}{p}DF_{n-1}(t)+DE_{n-1}(t)).
\end{align} Thus, we get
\begin{align}
DH_{n+1}(t)\leq 4 C^2(t^\frac{2}{p}+t^\frac{1}{p})DH_{n-1}(t),
\end{align} where the constant $C$ only depends on $u_0,d_0$ and $\Omega$.
Choose $T_1\leq T_0$ such that $4C^2(T_1^\frac{2}{p}+T_1^\frac{1}{p})\leq \frac{1}{2}.$ Then
\begin{align}
DH_{n+1}(t)\leq\frac{1}{2} DH_{n-1}(t), \ \ \ on \ \ \ [0,T_1].
\end{align} It is clear that $\{(u^n,d^n,P^n)\}_{n\in\mathbb{N}}$ converges in $E^{p,q}_{T_1}.$
\end{proof}
Let $(u,d,P)$ be the limit of $\{(u^n,d^n,P^n)\}$. We now prove that $(u,d,P)$ is a solution to the system
(3.1)-(3.2). It suffices to show that the terms in the system (3.3) converge to the corresponding terms in $L^p(0,T_1;L^q)$. We only prove the convergence of $\nabla\cdot(\nabla d^n\odot\nabla d^n)$; the others can be proved similarly.
\begin{align}
&\parallel \nabla\cdot(\nabla d^n\odot\nabla d^n)-\nabla\cdot(\nabla d\odot\nabla d)\parallel_{L^p(0,T_1;L^q)}\\\leq &\parallel \nabla \frac{\mid \nabla d^n\mid^2}{2}-\nabla \frac{\mid \nabla d\mid^2}{2}\parallel_{L^p(0,T_1;L^q)}\notag\\&+
\parallel \triangle d^n\cdot \nabla d^n-\triangle d\cdot \nabla d\parallel_{L^p(0,T_1,L^q)}\notag\\\leq &2(\parallel d^n-d\parallel_{L^p(0,T_1;W^{2,q})}\parallel d^n\parallel_{L^\infty(0,T_1;B_{q,p}^{2(1-\frac{1}{p})})}\notag\\&+\parallel d\parallel_{L^p(0,T_1;W^{2,q})}\parallel d^n-d\parallel_{L^\infty(0,T_1;B_{q,p}^{2(1-\frac{1}{p})})}).\notag
\end{align} Using the convergence of $(u^n,d^n)$ and (3.21) we obtain
\begin{align}
&\parallel \nabla\cdot(\nabla d^n\odot\nabla d^n)-\nabla\cdot(\nabla d\odot\nabla d)\parallel_{L^p(0,T_1;L^q)}\rightarrow 0,\\
&\parallel u^n\cdot\nabla u^n-u\cdot\nabla u\parallel_{L^p(0,T_1;L^q)}\rightarrow 0,\\
&\parallel u^n\cdot\nabla d^n-u\cdot\nabla d\parallel_{L^p(0,T_1;L^q)}\rightarrow 0,\\
&\parallel \mid\nabla d^n \mid^2d^n-\mid\nabla d \mid^2d\parallel_{L^p(0,T_1;L^q)}\rightarrow 0.
\end{align} It is easy to see that $\nabla\cdot u=0$ and $\int_\Omega Pdx=0$. As a direct consequence of Theorem 2.2, we get $\mid d\mid=1$. Thus, we have proved that the function triplet $(u,d,P)$ is a solution to the system (3.1)-(3.2).
Now, we prove the uniqueness. Let $(u^1,d^1,P^1)$ and $(u^2,d^2,P^2)$ be two solutions of the system (3.1) with the initial-boundary conditions (3.2). Denote
\begin{align*}
\delta u=u^1-u^2,\ \ \delta d=d^1-d^2,\ \ \delta P=P^1-P^2.
\end{align*} Then, the triplet $(\delta u,\delta d,\delta P)$ satisfies the following system
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial \delta u}{\partial t}-\triangle \delta u+\nabla \delta P\\=-u^2\cdot\nabla u^2+u^1\cdot \nabla u^1-\nabla(\frac{\mid \nabla d^2\mid^2}{2})\\+\nabla(\frac{\mid \nabla d^1\mid^2}{2})
-\triangle d^2\cdot\nabla d^2+\triangle d^1\cdot \nabla d^1,
\ \ &in\ \ (0,T_1)\times \Omega,\\
\\
\frac{\partial \delta d}{\partial t}-\triangle \delta d\\=-u^2\cdot \nabla d^2+u^1\cdot\nabla d^1\\
+\mid\nabla d^2\mid^2d^2-\mid \nabla d^1\mid^2d^1, \ \ &in\ \ (0,T_1)\times \Omega,\\
\\
\nabla\cdot \delta u=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ (0,T_1)\times \Omega, \\
\\
\int_{\Omega} \delta Pdx=0
\end{array}\right.
\end{equation} with initial-boundary value
\begin{align}
\delta u|_{t=0}=\delta u|_{\partial \Omega}=0, \\
\delta d|_{t=0}=\delta d|_{\partial \Omega}=0.
\end{align} Let
\begin{align}
G(t)=&\parallel \delta u\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}+\parallel \delta u\parallel_{L^p(0,t;W^{2,q})}+\parallel \partial_t \delta u\parallel_{L^p(0,t;L^q)}\\&+\parallel \delta d\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel \delta d\parallel_{L^p(0,t;W^{2,q})}+\parallel\partial_t\delta d\parallel_{L^p(0,t;L^q)}\notag\\&+\parallel \nabla \delta P\parallel_{L^p(0,t;L^q)}.\notag
\end{align} Repeating the argument from (3.22)-(3.37), we get
\begin{align}
G(t)\leq \frac{1}{2}G(t) \ \ \ on\ \ \ [0,T_1].
\end{align} Hence, $G(t)=0$ for $t\in[0,T_1]$, which implies the uniqueness on the interval $[0,T_1]$.
To complete this section, we show the continuous dependence. Noticing the proof of Lemma 3.1 and Lemma 3.2, more precisely, from (3.19) and (3.36), we can deduce that for any given initial-boundary data, there exists a $T_1>0$ only depending on the initial-boundary data such that $H(t)\leq2C(1+C^2H_0) H_0$ on $[0,T_1]$. Let
\begin{align} &(u_0,d_0)\in D_{A^q}^{1-\frac{1}{p},p}\times (B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)),\\ &(u^n_0,d^n_0)\rightarrow (u_0,d_0)\ \ in\ \ D_{A^q}^{1-\frac{1}{p},p}\times (B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)).\end{align} Assume that $(u^n,d^n)$ is the corresponding solution with the initial-boundary condition $(u^n_0,d^n_0)$ and $(u,d)$ is the solution with initial-boundary data $(u_0,d_0)$. Define
\begin{align}
\delta_n u=u-u^n,\ \ \delta_n d=d-d^n,\ \ \delta_n P=P-P^n.
\end{align} Then, the triplet $(\delta_nu,\delta_nd,\delta_n P)$ satisfies the following system
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial \delta_n u}{\partial t}-\triangle \delta_n u+\nabla \delta_n P\\=-u\cdot\nabla u+u^n\cdot \nabla u^n-\nabla(\frac{\mid \nabla d\mid^2}{2})\\+\nabla(\frac{\mid \nabla d^n\mid^2}{2})
-\triangle d\cdot\nabla d+\triangle d^n\cdot \nabla d^n,
\ \ &in\ \ (0,T_1)\times \Omega,\\
\\
\frac{\partial \delta_n d}{\partial t}-\triangle \delta_n d\\=-u\cdot \nabla d+u^n\cdot\nabla d^n\\
+\mid\nabla d\mid^2d-\mid \nabla d^n\mid^2d^n, \ \ &in\ \ (0,T_1)\times \Omega,\\
\\
\nabla\cdot \delta_n u=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ (0,T_1)\times \Omega, \\
\\
\int_{\Omega} \delta_n Pdx=0
\end{array}\right.
\end{equation} with initial-boundary conditions
\begin{align}
\delta_nu|_{\partial_p Q_{T_1}}=u_0-u^n_0,\\
\delta_nd|_{\partial_p Q_{T_1}}=d_0-d^n_0.
\end{align} Let
\begin{align}
G_n(t)=&\parallel \delta_n u\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}+\parallel \delta_n u\parallel_{L^p(0,t;W^{2,q})}+\parallel \partial_t \delta_n u\parallel_{L^p(0,t;L^q)}\\&+\parallel \delta_n d\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel \delta_n d\parallel_{L^p(0,t;W^{2,q})}+\parallel\partial_t\delta_n d\parallel_{L^p(0,t;L^q)}\notag\\&+\parallel \nabla \delta_n P\parallel_{L^p(0,t;L^q)}.\notag\\
G_n(0)=&\parallel u_0-u^n_0\parallel_{D_{A^q}^{1-\frac{1}{p},p}}+\parallel d_0-d^n_0\parallel_{B_{q,p}^{2(1-\frac{1}{p})}\cap C^{2,\alpha}(\partial \Omega)}.
\end{align} Using Lemma 2.2 and Theorem 2.1, and repeating the proof from (3.29) to (3.36), we have that
\begin{align}
G_n(t)\leq CG_n(0)+\frac{1}{2}G_n(t),\ \ on \ \ [0,T_1],
\end{align} where the constant $C$ only depends on $u_0$, $d_0$ and $T_1$. This implies $G_n(t)\leq 2C G_n(0)$.
Thus, we get the continuous dependence of the initial-boundary data.
Now, we complete the proof of well-posedness.
\section{Global existence }
\par
In this section, our aim is to extend the local solution established in Section 3 to a global solution in the case of small initial-boundary data.
Fix a unit vector $e\in S^2$ such that $d_0|_{\partial \Omega}=e$, and let $(u,d)$ be a solution of the system
(3.1)-(3.2). A direct computation shows that $(u, d-e)$ satisfies
\begin{equation}
\left\{\begin{array}{ll}
\frac{\partial u}{\partial t}+u\cdot\nabla u-\triangle u+\nabla P=-\nabla\cdot(\nabla d\odot\nabla d),
\ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\frac{\partial d}{\partial t}+u\cdot\nabla d=(\triangle d+\mid\nabla d\mid^{2}d+e\mid\nabla d\mid^{2}), \ \ &in\ \ \mathbb{R}^{+}\times \Omega,\\
\nabla\cdot u=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &in\ \ \mathbb{R}^{+}\times \Omega, \\
\int_{\Omega} Pdx=0,
\end{array}\right.
\end{equation} with initial-boundary conditions
\begin{align}
\left\{\begin{array}{ll}
(u(0,x),d(0,x))=(u_0(x),d_0(x)-e)\ \ x\in\Omega,\\
(u(t,x),d(t,x))=(0,0),\ \ \ \ \ (t,x)\in\mathbb{R}^{+}\times\partial\Omega.
\end{array}\right.
\end{align}
Suppose, for contradiction, that the maximal time of existence $T^*$ for $(u,d-e,P)$ is finite. Define
\begin{align}
H(t)=&\parallel u\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}+\parallel u\parallel_{L^p(0,t;W^{2,q})}+\parallel \partial_t u\parallel_{L^p(0,t;L^q)}\\&+\parallel d-e\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}+\parallel \partial_t (d-e)\parallel_{L^p(0,t;L^q)}+\parallel P\parallel_{L^p(0,t;W^{1,q})}\notag\\&+\parallel d-e\parallel_{L^p(0,t;W^{2,q})},\notag\\
H_0=&\parallel u_0\parallel_{D_{A_q}^{1-\frac{1}{p},p}}+\parallel d_0-e\parallel_{B_{q,p}^{2(1-\frac{1}{p})}}.
\end{align} Using the system (3.1), Theorem 2.1 and Lemma 2.2, we get
\begin{align}
H(t)\leq &C(H_0+\parallel u\cdot\nabla u\parallel_{L^p(0,t;L^q)}+\parallel \nabla\cdot (\nabla (d-e)\odot\nabla (d-e))\parallel_{L^p(0,t;L^q)}\\&+\parallel u\cdot\nabla (d-e)\parallel_{L^p(0,t;L^q)}+\parallel \mid\nabla (d-e)\mid^2(d-e)\parallel_{L^p(0,t;L^q)}+\parallel\mid\nabla (d-e)\mid^2e\parallel_{L^p(0,t;L^q)}).\notag
\end{align} Noticing that $\mid d\mid=\mid e\mid=1$ and (2.12)-(2.13), we have that
\begin{align}
H(t)\leq &C(H_0+\parallel u\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\parallel u\parallel_{L^p(0,t;W^{2,q})}\\&+\parallel d-e\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel d-e\parallel_{L^p(0,t;W^{2,q})}\notag\\&+\parallel u\parallel_{L^\infty(0,t;D_{A_q}^{1-\frac{1}{p},p})}\parallel d-e\parallel_{L^p(0,t;W^{2,q})}\notag\\&+\parallel d-e\parallel_{L^\infty(0,t;B_{q,p}^{2(1-\frac{1}{p})})}\parallel d-e\parallel_{L^p(0,t;W^{2,q})}).\notag
\end{align} From the definition of $H(t)$ and (4.4), we have
\begin{align}
H(t)\leq C(H_0+4H^2(t)), \ \ on \ \ [0,T^*).
\end{align} If we take $H_0\leq\frac{1}{32C^2}$, then the continuity and monotonicity of $H(t)$ yields that there exists a positive
$T$ such that
\begin{align}
H(t)\leq 2CH_0\leq \frac{1}{16C}, \ \ on \ \ [0,T].
\end{align} On the other hand, (4.5) yields that
\begin{align}
H(t)\leq \frac{1-(1-16C^2H_0)^{\frac{1}{2}}}{8C}\ \ or\ \ H(t)\geq \frac{1+(1-16C^2H_0)^{\frac{1}{2}}}{8C}\ \ on\ \ [0,T^*).
\end{align}
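The alternative (4.7) can be seen by treating (4.5) as a quadratic inequality in $H(t)$ (a short verification):

```latex
\begin{align*}
H(t)\le C\bigl(H_0+4H^2(t)\bigr)
\;\Longleftrightarrow\;
4C\,H^2(t)-H(t)+C H_0\ge 0 .
\end{align*}
```

The roots of the quadratic $4Cx^2-x+CH_0$ are $x_{\pm}=\frac{1\pm\sqrt{1-16C^2H_0}}{8C}$, which are real under the smallness assumption $H_0\le\frac{1}{32C^2}$; hence $H(t)$ must lie below $x_-$ or above $x_+$, which is exactly (4.7).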
Since $H(t)\leq \frac{1}{16C}$ on $[0,T]$, the continuity of $H(t)$ implies that
\begin{align}
H(t)\leq \frac{1-(1-16C^2H_0)^{\frac{1}{2}}}{8C}.
\end{align} This yields that
\begin{align}
\parallel(u,d-e,P)\parallel_{E^{p,q}_{T^*}}\leq\frac{1}{8C}.
\end{align} This contradicts the maximality of $T^*$, so the solution exists globally in time. This implies that $(u,d,P)$ is a global solution of the system (3.1)-(3.2).
The proof of Theorem 1.1 is complete.
\bigskip
\noindent\textbf{Acknowledgments} This work was partially
supported by NNSFC (No. 10971235), RFDP (No. 200805580014),
NCET-08-0579 and the key project of Sun Yat-sen University.
\section{Introduction}
Entity linking (EL), the task of detecting mentions of entities in context and disambiguating them to a reference knowledge base (KB), is an essential task for text understanding. Existing works on EL mostly assume that the reference KB is complete, and therefore all mentions can be linked. In practice this is hardly ever the case, as knowledge bases are incomplete and because novel concepts arise constantly. For example, English Wikipedia, which is often used as the reference KB for large scale linking, is growing by more than 17k entities every month.\footnote{\url{https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia} (09.05.2022)}
We created the EDIN benchmark and pipeline\footnote{Model and benchmark are currently not publicly available.} where \emph{unknown entities}, that is entities with no available canonical names, descriptions and labeled mentions, have to be integrated into an existing EL model in an end-to-end fashion.
The benchmark is based on Wikipedia and a subset of news pages of the common-crawl dataset OSCAR \cite{AbadjiOrtizSuarezRomaryetal.2021}, split into two parts, one preceding time $t_1$ and one between $t_1$ and $t_2$. With current approaches, an EL system created at $t_{1}$ is unable to successfully link unknown entities that were added to Wikipedia between $t_{1}$ and a subsequent time $t_{2}$, as it lacks the ability to represent them as part of the entity index. This sets this task apart from zero-shot entity linking \cite{logeswaran-etal-2019-zero}. In the zero-shot (zs) setting, a textual description of the zs entities is assumed available at the time of training. This allows the creation of entity embeddings for them, their insertion in a dense entity index, and their use during training as negative examples. In contrast, no such textual description is initially available for unknown entities.
To adapt the model trained at $t_{1}$, the model can only make use of an \emph{adaptation dataset} and unsupervised techniques. There are two parts to this task: i) Discovery: The entity linking system needs to detect mentions of unknown entities part of the adaptation dataset and classify them as unknown and ii) Indexing: co-referring mentions of unknown entities need to be mapped to a single embedding compatible with the entity index. The EDIN-pipeline developed for this task is the first to adapt a dense-retrieval based EL system created at $t_{1}$ such that it incorporates unknown entities in an end-to-end fashion.
We show that distinguishing known from unknown entities, arguably a key feature of an intelligent system, poses a major challenge to dense-retrieval based EL systems. To successfully do so, a model has to strike the right balance between relying on mention vs. context. On one hand, the model needs to distinguish unknown entities carrying the same name as known entities and co-refer different mentions of the same unknown entities. In both cases, context reliance is necessary. E.g., the model needs to distinguish the 2019 novel `Underland' from multiple other novels and fictional references carrying the same name. On the other hand, the model needs to distinguish unknown entities with new names but semantic similarity. Here, mention reliance is important. E.g., the model needs to distinguish BioNTech from other biotechnology companies with similar names and contexts.
Another challenge in discovery is class imbalance between known and unknown entities. With Wikipedia-size KBs, there tend to be many fewer mentions of unknown entities than of known entities and an adaptation dataset of finite size limits recall. In this work, we optimize for recall on the cost of precision.
On the side of indexing, inserting unknown entities into a space of known entities poses problems of interference with known entities in their close proximity. Again, consider the BioNTech example from above. Semantically, we want to place BioNTech in proximity of other biotech companies but in a way that dense-retrieval linking can still differentiate between them.
We find that re-training the model after indexing, and therefore giving it the chance to learn from hard negatives in the style of zs entity linking, is crucial to overcome this challenge.
We experiment with different indexing methods. In particular, we contrast single mention-level indexing \citep{fitzgerald-etal-2021-moleman} with indexing clusters of mentions. We find that unifying the information of multiple mentions into a single embedding is beneficial.
By introducing a clear-cut temporal segmentation this benchmark targets unknown entities which are truly novel/unseen to all parts of an EL system, specifically including the pre-trained language model (PLM). Therefore, the EL system cannot rely on implicit knowledge captured by the PLM. This is, to the best of our knowledge, a setting that has not been explored before in the context of dense-retrieval based EL.
Temporal segmentation also lets us study the effect of entity encoder and PLM degradation. We note that EL precision drops for known entities in novel contexts which points to a large problem of PLM staleness also discussed by \cite{TACL3539, NEURIPS2021_f5bf0ba0}. Novel entities appearing in the context around known entities also complicate discovery, making it likely that the known entities will be mistakenly classified as unknown.
We summarize our contributions as follows:
i) We created the EDIN-benchmark, a large scale end-to-end entity linking benchmark dataset where unknown entities need to be discovered and integrated into an existing entity index. ii) We contrast this task with zero-shot entity linking, and provide insight on the additional challenges it poses. iii) We propose the EDIN-pipeline in the form of an extension of existing dense-retrieval architectures. iv) We experiment with different indexing methods, specifically indexing single mentions vs. clusters of mentions.
\section{Task definition}
We formally define end-to-end EL as follows: Given a paragraph $p$ and a set of known entities
$E_{K} = \{e_{i}\}$ from Wikipedia, each with canonical name, the title, $t(e_{i})$
and textual description $d(e_{i})$, our goal is to output a list of tuples, $(e, [i, j])$, where $e \in E_{K}$ is the entity corresponding to the mention $m_{i,j}$ spanning from the $i^{th}$ to $j^{th}$ token in $p$. We call a system that solves this task based on $d(e_{i})$ a \textit{Description-based} entity linking system L.
For EDIN-benchmark, after training a model $L_{t1}$ at time step $t_{1}$, a set of unknown entities $E_{U} = \{e_{i}\}$ with $ E_{U} \bigcap E_{K} = \emptyset$ and no available canonical name, description and labeled mentions is introduced between $t_{1}$ and $t_{2}$, with $t_{2}>t_{1}$. The task is to adapt $L_{t1}$ such that it can successfully link mentions of the union of $E_{U} \bigcup E_{K}$.
We use three dataset splits: the training set $D_{train}$ to train $L_{t1}$, the adaptation dataset $D_{adapt}$ used to adapt $L_{t1}$ and the test set $D_{test}$ to evaluate. Both $D_{adapt}$ and $D_{test}$ include mentions between $t_{1}$ and $t_{2}$ but are disjoint, i.e., $D_{adapt} \bigcap D_{test} = \emptyset$.
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{figures/adapt.pdf}
\caption{\textbf{EDIN-pipeline:} In the adaptation phase, detected mentions in $D_{adapt}$ are mapped into a joint dense space with $E_{K}$ representations. A clustering algorithm groups mentions and entities based on kNN-similarity. Clusters of mentions without entity encoding are classified as $E_{U}$. To integrate these into the index of $E_{K}$, mentions in single sentence contexts are concatenated and mapped to a single embedding using the entity encoder. After adaptation, the updated entity index is used for standard EL.}
\label{fig:method}
\end{figure*}
\section{Model}
Our EDIN-pipeline is built on top of \cite{bela}, an end-to-end extension of the dense-retrieval based model BLINK \cite{wu2019zero}. It is composed of a Mention Detection (MD), Entity Disambiguation (ED) and Rejection (R) head. MD detects entity mention spans $[i, j]$ in context relying on BERT \cite{devlin-etal-2019-bert}. ED links these mentions to $e \in E_{K}$. It relies on a bi-encoder architecture running a k-nearest-neighbor (kNN) search between \textit{mention encoding} and candidate \textit{entity encodings} (the entity index). Mention encodings are pooled from BERT-encoded paragraph tokens $p_{1..n}$:
\begin{align*}
\textbf{m}_{i,j} = FFL(BERT([CLS] p_{1} \ldots p_{n} [SEP])_{i...j})
\end{align*}
Entities are represented using BLINK's \textit{frozen} entity encoder:
\begin{align*}
\textbf{e} = BERT_{[CLS]}([CLS] t(e) [SEP] d(e) [SEP])
\end{align*}
Mention-entity candidates are passed to R that controls precision-recall trade-off by thresholding a learned candidate score.
More information about the architecture and training is given in Appendix \ref{app:model}.
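As a rough illustration (not the system's actual implementation), the disambiguation step reduces to maximum-inner-product search between a mention encoding and the entity index; the toy vectors, entity ids and the `link` helper below are invented for the example:

```python
# Sketch of bi-encoder kNN linking: mentions and entities live in the
# same dense space, and a mention is linked to the entity whose
# embedding has the highest inner product with the mention encoding.

def dot(u, v):
    """Inner product between two equally sized vectors."""
    return sum(a * b for a, b in zip(u, v))

def link(mention_vec, entity_index):
    """Return (entity_id, score) of the nearest entity by inner product.

    entity_index: dict mapping entity id -> embedding vector.
    """
    best_id, best_score = None, float("-inf")
    for eid, evec in entity_index.items():
        score = dot(mention_vec, evec)
        if score > best_score:
            best_id, best_score = eid, score
    return best_id, best_score

# Toy example with 2-d embeddings.
index = {"Q1": [1.0, 0.0], "Q2": [0.0, 1.0]}
eid, score = link([0.9, 0.1], index)  # nearest entity is "Q1"
```

In practice the search runs over millions of entity embeddings, so an approximate nearest-neighbor index replaces this exhaustive loop; the Rejection head then thresholds the returned score.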
\section{Unknown Entity Discovery and Indexing}
For $E_{U}$ without descriptions, canonical names and labeled mentions, $L_{t1}$ is unable to successfully link mentions of the union of entities $E_{U} \bigcup E_{K}$.
To this end, we introduce an end-to-end pipeline to encode $E_{U}$ into $L_{t1}$'s entity index. The pipeline is depicted in Figure \ref{fig:method}. This pipeline is fully unsupervised and only relies on the adaptation dataset $D_{adapt}$. It follows a two-step process: i) Discovery: The entity linking system needs to detect mentions of unknown entities and classify them as unknown and ii) Indexing: co-referring mentions of unknown entities need to be mapped to a single embedding compatible with the entity index.
\subsection{Unknown Entity Discovery}
\label{disc}
First, $L_{t1}$ detects and encodes mentions part of $D_{adapt}$. The MD head is trained to detect mentions leveraging the context around them, and can therefore detect mentions of both $E_{K}$ and $E_{U}$. Encoded mentions $ \textbf{M} =\{ \textbf{m}_{1}, . . . , \textbf{m}_{|M|}\} $ are then input to a clustering algorithm that partitions $M$ into disjoint clusters $ C =\{ c_{1}, . . . , c_{|C|}\} $. We use \citet{logan-iv-etal-2021-benchmarking}'s greedy NN clustering algorithm where $\textbf{m}_{i}$ is clustered with all $\textbf{m}_{j} \in \textbf{M}$ such that $sim(\textbf{m}_{i}, \textbf{m}_{j}) > \delta$.
Next, entity encodings of $e \in E_{K}$ are assigned to these clusters if $sim(\textbf{e}_{i}, \textbf{m}_{j}) > \tau$ and $\textbf{e}_{i}$ is the nearest entity of any $m_{i} \in c_{i}$. As we find this approach to result in low precision, we add another condition, namely that $sim(\textbf{e}_{i}, \textbf{m}_{j}) > \tau$ must hold for at least 70\% of all $\textbf{m}_{j} \in c_{i}$. $\delta, \tau$ are tuned on $D_{adapt}$-dev. For more details see Appendix C. Following \citet{agarwal2021entity}, all clusters not containing any entity representation are deemed to refer to entities in $E_{U}$. We refer to this subset of automatically identified unknown entities as $E'_{U}$.
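The discovery step above can be sketched as follows; the similarity function, the thresholds and the 70\% quorum rule are placeholders standing in for the tuned $\delta$, $\tau$ and the inner-product similarity of the actual system:

```python
# Hedged sketch of discovery: greedy NN clustering of mention encodings
# (threshold delta), then attaching a known entity to a cluster only if
# it is tau-similar to at least 70% of the cluster's mentions; clusters
# with no attached entity are classified as unknown.

def sim(u, v):
    """Placeholder similarity: inner product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def greedy_nn_cluster(mentions, delta):
    """Greedily group mentions: join a cluster if similar to any member."""
    clusters = []
    for m in mentions:
        for c in clusters:
            if any(sim(m, other) > delta for other in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    return clusters

def split_unknown(clusters, entities, tau, quorum=0.7):
    """Return clusters with no entity tau-similar to >= quorum of mentions."""
    unknown = []
    for c in clusters:
        attached = any(
            sum(sim(e, m) > tau for m in c) >= quorum * len(c)
            for e in entities
        )
        if not attached:
            unknown.append(c)
    return unknown

mentions = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
clusters = greedy_nn_cluster(mentions, delta=0.5)  # two clusters
unknown = split_unknown(clusters, entities=[[1.0, 0.0]], tau=0.5)
```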
\subsection{Unknown Entity Indexing}
Next, clusters identified as $E'_{U}$ are integrated into the EL index of $L_{t1}$. We explore two different methods of entity indexing:
\begin{itemize}
\item \textit{Cluster-based} indexing: We concatenate all mentions part of the same cluster, each with the sentence they occur in, and use the entity encoder to map to a single entity representation. We pool over all $m_{i} \in c_{i}$ and select the most occurring mention as $t(e)$.
\item \textit{Mention-based} indexing: Mentions in single sentence contexts are indexed individually using the entity encoder.
Individual mentions are used as $t(e)$.
\end{itemize}
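The two variants can be illustrated schematically as follows; representing a mention as a (surface form, sentence) pair is a simplification of the actual entity-encoder input construction:

```python
# Sketch of the two indexing variants. Each produces (title, text)
# pairs that would be fed to the entity encoder.
from collections import Counter

def cluster_based_index(cluster):
    """One entry per cluster: concatenated contexts, majority surface form."""
    title = Counter(sf for sf, _ in cluster).most_common(1)[0][0]
    text = " ".join(sent for _, sent in cluster)
    return [(title, text)]

def mention_based_index(cluster):
    """One entry per mention, each in its own sentence context."""
    return [(sf, sent) for sf, sent in cluster]

cluster = [
    ("BioNTech", "BioNTech developed a vaccine."),
    ("BioNTech SE", "Shares of BioNTech SE rose."),
    ("BioNTech", "BioNTech is based in Mainz."),
]
entries = cluster_based_index(cluster)  # single entry titled "BioNTech"
```

Cluster-based indexing thus pools evidence from several contexts into one embedding, while mention-based indexing inserts one embedding per mention.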
\section{Evaluation}
For \emph{Discovery}, we report precision and recall of $E_{U}$ classification and clustering metrics.
To compute standard EL metrics, e.g., precision and recall, canonical names of indexed clusters need to be consistent with the set of test labels. Our method of assigning canonical names to clusters based on mentions is not. To resolve this mismatch we pool on mention labels instead of the mentions itself.
Unsupervised clustering of mentions in $D_{adapt}$ naturally comes with error: i) clusters can be incomplete, e.g., mentions of a single entity can be split into multiple clusters, which can lead to indexing the same entity multiple times, and ii) clusters can be impure, e.g., mentions of different entities end up in the same cluster, which leads to conflation of multiple entities into one representation.
In our evaluation, we make use of the gold labels for computing standard EL metrics by associating possibly more than one cluster with each unknown test entity, and considering a prediction correct if a test mention is linked to any of the clusters associated with the correct entity. Since gold labels are not available in a practical setting, however, EL metrics could fail to capture shortcomings in establishing co-references between mentions.
Therefore, we report clustering metrics alongside EL metrics. We follow \citet{agarwal2021entity} and report normalized mutual information (NMI).
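The evaluation rule described above, counting a prediction as correct if the test mention is linked to any cluster associated with the gold entity, can be sketched as follows; all data structures are illustrative assumptions.

```python
# Sketch of the relaxed EL metric: a prediction counts as correct if
# the mention is linked to *any* of the clusters associated (via gold
# labels) with its entity.  Data structures are illustrative.

def el_recall_at_1(predictions, gold, entity_to_clusters):
    """predictions: mention_id -> predicted cluster id
    gold: mention_id -> gold entity id
    entity_to_clusters: entity id -> set of associated cluster ids."""
    correct = sum(1 for m, c in predictions.items()
                  if c in entity_to_clusters.get(gold[m], set()))
    return correct / len(gold) if gold else 0.0
```

With incomplete clusters, an entity may map to several clusters, so this rule avoids penalizing the linker for the clustering split while still requiring the right entity.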
\section{Datasets}
\label{sec:Datasets}
\begin{table}[]
\centering
\footnotesize{
\begin{tabular}{lcc}
\toprule
& \bf Wikipedia & \bf OSCAR \\
\midrule
Train & 100k (908k) & 100k (1.7M) \\
Adapt & 17k (183k) & 17k (360k) \\
Dev Train & 8k (78k) & 8k (142k) \\
Dev Adapt & - & 9k (183k)\\
Test & 198k (1.8M)& 569k (11M) \\
\bottomrule
\end{tabular}
\caption{\textbf{Dataset Statistics: } Number of samples (number of mentions) for training, adaptation and testing.}
\label{tab:dataset_stats}}
\end{table}
\begin{table}[t]
\small
\centering
\begin{tabular}{lrrrr}
\toprule
\textbf{Bin} & Support & $E_{K}$ R@1 & Support & $E_{U}$ R@1 \\
\midrule
{[}0) & 68,241 & 21.1 & 7,095 & 17.5 \\
{[}1) & 59,227 & 29.1 & 3,923 & 25.9 \\
{[}1, 10) & 313,232 & 45.6 & 9,939 & 40.7 \\
{[}10, 100) & 901,857 & 65.7 & 7,765 & 57.3 \\
{[}100, 1k) & 2,860,880 & 76.9 & 7,399 & 64.4\\
{[}1k, +) & 5,981,028 & 84.4 & 6,717 & 86.7 \\
\bottomrule
\end{tabular}
\caption{\textbf{Frequency effects: } End-to-end EL performance of the upper baseline model $L_{t2}$ per frequency bin.}
\label{tab:freq_bins_mentions}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/data_splits.png}
\caption{\textbf{Dataset splits: } A schema illustrating the composition of $D_{t1}$ and $D_{t2}$. Note that, contrary to what this plot suggests, the number of samples per data split is equal for $D_{t1}$ and $D_{t2}$.}
\label{fig:datasplit}
\end{figure}
To construct the reference entity index, we download Wikipedia dumps from $t_1$ and $t_2$ and extract entity titles and descriptions. Setting $t_1$ to September 2019 (the date when BLINK was trained), the reference KB consists of 5.9M entities; setting $t_2$ to March 2022 introduces an additional set of 0.7M entities.
Wikipedia and Oscar datasets are constructed as follows.
\\
\noindent
\textbf{Wikipedia:} Since usually only the first mention of an entity inside a Wikipedia article is hyperlinked, we annotate a subset of Wikipedia. We use a version of L that was trained at $t_2$ on a labelled non-public dataset. While noisy, these predictions are significantly better than what our best discovery and indexing methods can achieve, so we adopt them as pseudo-labels for the purpose of comparing approaches. As discovery and indexing methods improve, manual labelling of the evaluation data will afford more accurate measures. Wikipedia provides time stamps, which enables us to separate the two time splits.\\
\\
\noindent
\textbf{OSCAR news:} This dataset is based on the common-crawl dataset OSCAR \cite{AbadjiOrtizSuarezRomaryetal.2021}. We select a subset of English language news pages which we label automatically as described above. The dataset consists of 797k samples, which we split based on their publication date.\\
\\
For both types of datasets, we created two time splits: $D_{t1}$, containing samples preceding $t_1$, which is used to train model $L_{t1}$, and $D_{t2}$, with samples preceding $t_2$, which is used to train an upper-bound model $L_{t2}$. To adapt $L_{t1}$, we hold out a subset of data from between $t_1$ and $t_2$ to construct $D_{adapt}$ ($D_{adapt} \cap D_{t2} = \emptyset$). Remaining samples are randomly split into train, dev, and test. Figure \ref{fig:datasplit} illustrates the different data splits. Overall dataset statistics are listed in Table \ref{tab:dataset_stats}.
To construct $D_{adapt}$, we follow \citet{agarwal2021entity}, and set a ratio of mentions of type $E_{U}$ to $E_{K}$ of 0.1. \footnote{Naturally this ratio would lie at 0.02. We made this artificial adjustment to reduce the strong class imbalance and obtain more interpretable and statistically stable results. Such adjustment could be lifted once considerably more precise unknown entity discovery components become available.
}
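The split construction can be sketched as follows; field names, the adaptation fraction, and the date handling are illustrative assumptions, not the exact procedure.

```python
from datetime import date
import random

# Illustrative split of timestamped samples into D_t1, D_t2 and
# D_adapt: D_t1 precedes t1, D_adapt is held out from [t1, t2) and is
# kept disjoint from D_t2.  Fraction and seed are assumptions.

def time_split(samples, t1, t2, adapt_frac=0.5, seed=0):
    """samples: list of (timestamp, payload) pairs."""
    d1 = [s for s in samples if s[0] < t1]
    between = [s for s in samples if t1 <= s[0] < t2]
    rng = random.Random(seed)
    rng.shuffle(between)
    k = int(adapt_frac * len(between))
    d_adapt = between[:k]
    d2 = d1 + between[k:]      # D_adapt and D_t2 stay disjoint
    return d1, d2, d_adapt
```

Samples dated after $t_2$ are excluded from every split, mirroring the clear-cut temporal segmentation of the benchmark.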
As $D_{t2}$-test covers both known and unknown entities, we use this dataset for EDIN-pipeline evaluation. In OSCAR $D_{t2}$-test, the average number of mentions per $E_{U}$ is 5.6, ten times lower than for $E_{K}$. COVID-19 is the most frequent unknown entity, with 12k mentions; 638k of the $E_{U}$ are not mentioned at all, and only 733 have a count larger than 10. The distribution of mention counts of $E_{U}$ is more skewed than that of $E_{K}$ (skewness 529 vs. 106).
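For reference, the skewness quoted above is the standard Fisher-Pearson moment coefficient of the per-entity mention-count distribution; a stdlib-only sketch with toy counts (the function, not the numbers, is what matters here):

```python
# Fisher-Pearson moment coefficient of skewness, g1 = m3 / m2^(3/2),
# computed over per-entity mention counts.

def skewness(counts):
    n = len(counts)
    mean = sum(counts) / n
    m2 = sum((x - mean) ** 2 for x in counts) / n
    m3 = sum((x - mean) ** 3 for x in counts) / n
    return m3 / m2 ** 1.5
```

A heavy right tail (one very frequent entity such as COVID-19 among many unmentioned ones) yields a large positive skew, which is why the $E_{U}$ distribution scores so much higher than $E_{K}$.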
\section{Results and Discussion}
In the following sections, we report results for OSCAR data. Results on Wikipedia data are consistently lower than OSCAR results and shown in Appendix F. Our main findings are summarized in Table \ref{tab:e2e} (left) where we report end-to-end performance on OSCAR $D_{t2}$-test. As mentions of type $E_{U}$ are significantly less frequent than mentions of type $E_{K}$, we report results on these two types separately.
We first discuss upper ($L_{t2}$, a model trained on $D_{t2}$) and lower ($L_{t1}$, a model trained on $D_{t1}$) performance bounds. Next, we follow our two-step pipeline in adapting $L_{t1}$ such that it can successfully link mentions of $E_{U}$. Recall that this pipeline involves two steps: discovery and indexing. Running $L_{t1}$ on $D_{adapt}$, we first discover mentions of $E_{U}$ by clustering. These clusters of mentions of $E_{U}$ are then indexed. We first present results on these components separately and then assemble the full end-to-end pipeline.
\begin{table*}[!htbp]
\centering
\begin{tabular}{lccc|ccc||ccc}
\toprule
&\multicolumn{3}{c}{\bf Known Entities} & \multicolumn{3}{c}{\bf Unknown Entities} & \multicolumn{3}{c}{\bf Unknown Entities filtered} \\
\toprule
\bf Model & \bf R@1 & \bf P@1 & \bf NMI & \bf R@1 & \bf P@1 & \bf NMI & \bf R@1 & \bf P@1 & \bf NMI \\
\midrule[0.01pt]
$L_{t1} $ & 80.1 & 82.0 & 93.5 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
$L_{t2}$ & 78.7 & 79.7 & 93.1 & 49.2 & 31.8 & 93.8 & 63.1 & 26.0 & 93.4\\
\midrule
$L_{t1}$-Descp & 80.2 & 82.6 & 93.5 & 46.5 & 32.4 & 93.8 &58.3 & 26.2 & 90.5 \\
\midrule[0.01pt]
$L_{t1}$-Mention-Gold & 80.6 & 81.5 & 93.3 & 24.0 & 46.6 & 87.0 &40.7 & 46.6 & 87.0 \\
$L_{t1}$-Mention & 80.3 & 81.9 & 93.4 & 20.5 & 43.7 & 87.6 & 34.5 & 43.5 & 88.7 \\
\midrule[0.01pt]
$L_{t1}$-Cluster-Gold & 80.3 & 82.0 & 94.2 & 30.5 & 51.8 & 85.9 & 51.8 & 51.8 & 85.9 \\
EDIN ($L_{t1}$-Cluster) & 80.3 & 81.9 & 93.4 & 20.8 & 43.1 & 85.9 & 35.4 & 43.1 & 85.3 \\
\bottomrule
\end{tabular}
\caption{\textbf{EL performance} on OSCAR $D_{t2}$-test for unknown entities $E_{U}$ and known entities $E_{K}$. \textbf{Left} shows end-to-end performance and \textbf{Right} shows filtered performance, where mentions of $E_{U}$ which are not part of the oracle Cluster-based entity index are dropped from the test set. \textbf{Upper/Lower limit:} $L_{t1}$ uses Description-based entity representations, was trained at $t_1$ and constitutes the lower performance bound; it lacks representations of $E_{U}$. $L_{t2}$ uses Description-based entity representations, was trained at $t_2$ and constitutes the upper performance bound; $E_{U}$ are part of its entity index and labeled mentions of $E_{U}$ were part of its training. \textbf{Adaptation:} For $L_{t1}$-Descp, Description-based entity representations are added to $L_{t1}$'s entity index. For $L_{t1}$-Mention, Mention-based representations of i) oracle $E_{U}$ and ii) discovered $E'_{U}$ part of $D_{adapt}$ are added to $L_{t1}$'s entity index. For $L_{t1}$-Cluster, Cluster-based representations of i) oracle $E_{U}$ and ii) discovered $E'_{U}$ part of $D_{adapt}$ are added to $L_{t1}$'s entity index.}
\label{tab:e2e}
\end{table*}
\subsection{Lower and upper bound}
Our starting point, and an obvious lower performance bound, is given by model $L_{t1}$.
This model lacks representations of $E_{U}$s and its training data does not contain any corresponding mentions. Therefore, performance on the subset of $E_{U}$s is 0 for all metrics.
For an upper performance bound we take model $L_{t2}$. The entities in $E_{U}$ were introduced to Wikipedia past $t_{1}$ but before $t_{2}$, meaning that to $L_{t2}$ these entities are actually \textit{known}: labeled mentions of $E_{U}$ are part of the training data and entity representations are part of the index.
$L_{t2}$ reaches similar performance as $L_{t1}$ for $E_{K}$. We suspect performance differences can be attributed to the difference in training data.
Performance of $L_{t2}$ on mentions of $E_{U}$ is lower than on mentions of $E_{K}$. The performance discrepancy between $E_{U}$ and $E_{K}$ is largely due to frequency differences, see Table \ref{tab:freq_bins_mentions}, where we report results per frequency band. We suspect that the remaining difference can be attributed to the degradation of the pre-trained LM and the entity encoder. Note that while labelled mentions of $E_{U}$ were seen during the training phase of L, BLINK's entity encoder was not retrained. To investigate this hypothesis further, we test $L_{t1}$ on mentions of $E_{K}$ that meet two conditions: i) the time stamps of these samples are posterior to $t_1$ and ii) two or more mentions of $E_{U}$ occur in their context. Thus, we target mentions of $E_{K}$ in novel contexts to which neither BLINK nor the PLM have been exposed. We find that recall drops only slightly, from 80.1 to 79.9, but precision drops from 82.0 to 75.9. This result points to the conclusion that $E_{U}$ are also a source of noise when trying to link mentions of $E_{K}$.
\subsection{Discovery}
\label{disc}
The first condition for effective discovery is the ability to reliably detect mentions of both known and unknown entities. Recall of $L_{t1}$ on $D_{adapt}$ for the mention detection task is 90\% for $E_{K}$ and 86\% for $E_{U}$. As expected, recall of mentions of $E_{K}$ is higher, as no mentions of $E_{U}$ were seen during training. As a reference, running $L_{t2}$ on $D_{adapt}$, we find that for both $E_{K}$ and $E_{U}$ 91\% of mentions are recalled. Note again that, for $L_{t2}$, $E_{U}$ are \textit{known}. This indicates that mention detection is not affected by frequency differences and PLM degradation.
Next follows the clustering step, which separates mentions of $E_{U}$ from those of $E_{K}$. We report a clustering quality of 93.1\% NMI.
When assigning entities to clusters as described in Section \ref{disc}, we find that our first attempt results in 35\% recall and 5\% precision on $D_{adapt}$-dev. This setting is close in spirit to \citet{agarwal2021entity} where a single entity-mention link is decisive for discovery.
A qualitative error analysis reveals that low recall is rooted in the problem that mention embeddings of $E_{U}$ (e.g. BioNTech) can have high similarity with entity representations of $E_{K}$ (e.g. of other biotechnology companies). If no mentions of these $E_{K}$ are part of $D_{adapt}$, we wrongly assign the high similarity entity to the cluster of mentions of $E_{U}$. We suspect that this problem is particularly pronounced in our setting as EDIN-benchmark is a large scale EL benchmark (up to 6 times more entities in the reference KB and up to 36 times more mentions in clustering set compared to \citet{agarwal2021entity}) with many long-tail entities (\citet{agarwal2021entity} entities were randomly selected).
We find that low precision is rooted in the issue that $E_{K}$ are indexed as false positives when occurring in novel contexts, e.g., ``blood tests'' or ``vaccine'' in the context of COVID form their own distinct clusters.
With the modified cluster-entity assignment method described in Section \ref{disc}, discovery still results in low but higher precision (8\%) and high recall (80\%) of $E_{U}$: 1,196 of the 1,368 $E_{U}$ in $D_{adapt}$ with more than three mentions are indexed.
\subsection{Indexing}
After discovery, we need to integrate clusters of mentions of $E_{U}$ into the existing entity index. In the following, we i) quantify the impact of re-training the model after indexing, ii) compare Description-based with Cluster-based indexing and iii) compare Cluster-based with Mention-based indexing.
\subsubsection{Effect of re-training}
In the zero-shot EL problem, all entities are part of the index at training time. In the setting of EDIN-benchmark, discovery and indexing of $E_{U}$ happen after training. We run the following experiments to study the effect of this difference:
\begin{itemize}
\item \textbf{Descp-post-train:} Description-based entity representations are added to the index \emph{after} training L on $t_1$.
\item \textbf{Descp-pre-train:} Description-based entity representations are added to the index \emph{before} training L on $t_1$.
\end{itemize}
Recall and precision of $E_{U}$ in the setting where entity representations are added before training are 47\% and 32\%, respectively, see Table \ref{tab:descp}.
Recall and precision in the setting where entity representations are added post training are 26 and 17 percentage points lower, respectively.
We note that entities in $E_{U}$ can potentially be placed in close proximity to $E_{K}$ in embedding space. When these entity encodings were present during training, they can be picked up as hard negatives and the mention encoder can learn to circumvent them. This hypothesis is supported by experiments showing that the mean similarity between mentions and correct known entity embeddings increases significantly when the mention encoder is re-trained after adding the new entities. For details see Appendix D.
The take-away for the EDIN-pipeline is that, after adding new entity representations to the index, another round of training is needed to adapt the mention encoder to the updated index. We adopt this approach for the following experiments. Besides adapting the mention encoder, retraining BLINK could have a similar effect: in that case, learning from hard negatives can affect the spacing of entity encodings. As re-training BLINK is expensive, we did not explore this option in this work.
\begin{table}[!htbp]
\centering
\resizebox{0.5\textwidth}{!}{\begin{tabular}{lccc|ccccc}
\toprule
& \multicolumn{3}{c}{\bf Unknown Entities} & \multicolumn{3}{c}{\bf Known Entities}\\
\toprule
& \bf R@1 & \bf P@1 & \bf NMI & \bf R@1 & \bf P@1 & \bf NMI \\
\midrule[0.01pt]
Not re-trained & 20.6 & 15.5 & 95.2 & 80.1 & 82.3 & 93.5 \\
Re-trained & 46.5 & 32.4 & 93.8 & 80.2 & 82.6 & 93.5 \\
\bottomrule
\end{tabular}}
\caption{\textbf{Effect of re-training:} End-to-end EL performance on OSCAR $D_{t2}$-test when adding Description-based representation of unknown entities $E_{U}$ to the entity index before and after training of $L_{t1}$.}
\label{tab:descp}
\end{table}
\subsubsection{Description vs. Cluster-based indexing}
We compare Description-based with Cluster-based indexing. To isolate discovery from indexing performance, we first evaluate using oracle clusters, replacing the discovery method run on $D_{adapt}$ with an oracle in which mentions of $E_{U}$ are discovered with perfect precision and recall and perfectly clustered.
In this setting, Cluster-based indexing results in 19\% higher precision but 16\% lower recall compared to Description-based indexing, see Table \ref{tab:e2e} (left), $L_{t1}$-Cluster-Gold. The drop in recall for Cluster-based indexing can partially be attributed to the fact that a subset of the $E_{U}$ in $D_{t2}$-test are not mentioned in $D_{adapt}$, so no representation can be built for them. When filtering mentions of $E_{U}$ not part of $D_{adapt}$ out of the test set, the gap in recall decreases to 7\%, see Table \ref{tab:e2e} (right).
The take-away is that Cluster-based indexing relying on concatenated mentions in context instead of manually crafted descriptions has high precision potential but recall is challenging as a randomly selected adaptation set is unlikely to cover long-tail entities.
\subsubsection{Mention vs. Cluster-based indexing}
In this section, we compare Mention-based and Cluster-based indexing. Again, to isolate discovery and indexing, we first report results using oracle clusters of $E_{U}$. When indexing the mentions of these clusters individually, recall drops by 6 percentage points and precision by 5, see Table \ref{tab:e2e} (left), $L_{t1}$-Mention-Gold. When reducing the test set to mentions of entities that were actually discoverable, the difference in recall becomes even more pronounced: 40.7\% R@1 for Mention-based vs. 51.8\% for Cluster-based indexing, see Table \ref{tab:e2e} (right).
Interestingly, this means that the ability to attend over multiple mentions in context and unify their information into a single embedding leads to superior representations. Note that here the entity encoder was neither trained to deal with the style of individual mentions in context nor with clusters of mentions in context. For future work, it would be interesting to see if Cluster-based indexing can be generally beneficial to EL, outside of the context of EDIN-pipeline. This would require training the entity encoder to specifically adapt to the new input style.
\subsection{End-to-end model}
The prior section presented results for the two components individually. Now, we assemble the full end-to-end pipeline, replacing the oracle clusters of $E_{U}$ by discovered clusters $E_{U}'$. Discovery errors that affect indexing are: i) clusters of mentions of $E_{K}$ falsely classified as unknown and clusters of mentions of $E_{U}$ falsely classified as known, and ii) incomplete and impure clusters. We find that the performance of Mention-based and Cluster-based indexing in terms of R@1 and P@1 converges and is significantly lower than that of their oracle counterparts. Mention-based indexing has better clustering performance.
When reducing the test set to mentions of entities that were actually discoverable, thus part of $D_{adapt}$, Cluster-based indexing is 1 percentage point better in terms of recall and 0.4 points worse in precision, Table \ref{tab:e2e} (right). When reducing the test set further to mentions of entities that were in fact discovered, recall of Cluster-based indexing is, at 58.4\%, better than that of Mention-based indexing (55.5\%).
Besides end-to-end performance, we also report entity disambiguation performance with oracle mention detection in Table \ref{tab:disambiguation} in Appendix E. Here, we find that Mention-based indexing performs worse than Cluster-based indexing across all metrics: a 1-point difference in recall, a 3-point difference in precision, and a 0.7-point difference in NMI.
Overall, these results show that EDIN-benchmark is challenging. In the end-to-end setting, errors easily propagate. Most notably, we see this manifest when i) comparing Table \ref{tab:disambiguation} and \ref{tab:e2e} (right) results where the recall problem of $E_{U}$ becomes apparent and ii) comparing performance of oracle and automatic clusters where precision drops by 10\% points. Overall, we find that Cluster-based indexing, with the advantage of attending to and unifying the information of multiple mentions, performs better than Mention-based indexing. We call this version the EDIN-pipeline.
Besides yielding an index that scales in memory with the number of entities rather than the number of mentions -- a significant advantage when the number of entities is already large and in view of a streaming extension -- it generates fixed-size entity embeddings as a by-product that can have applications of their own and can be used to enhance PLMs (e.g., \citet{peters2019knowledge}).
In future work, we want to explore a setting where $E_{U}$ are discovered in a streaming fashion, thus scaling up $D_{adapt}$ and dropping the crafted 90\%-10\% ratio of $E_{K}$ vs. $E_{U}$. This would pose challenges in terms of scale and precision in discovery. In terms of precision, a human-in-the-loop approach, as proposed by \citet{Hoffart2016TheKA} in the context of keeping KBs fresh, might be needed to introduce a component of supervision into our end-to-end pipeline.
\section{Related work}
Entity linking is an extensively studied task. Prior to the introduction of PLMs, EL systems used frequency and typing information, alias tables, TF-IDF-based methods
and neural networks to model context, mention
and entity \cite{cucerzan-2007-large, bunescu-pasca-2006-using,Milne2008LearningTL, he-etal-2013-learning, Sun2015ModelingMC, lazic-etal-2015-plato, Raiman2018DeepTypeME, kolitsas-etal-2018-end, gupta-etal-2017-entity, ganea-hofmann-2017-deep, DBLP:journals/corr/abs-1811-10547, DBLP:journals/corr/abs-1909-05780}.
\citet{gillick-etal-2019-learning} present a PLM-based dual encoder architecture that encodes mentions and entities in
the same dense vector space and performs EL via kNN search. \citet{logeswaran-etal-2019-zero} proposed the zero-shot EL task and show that domain adaptive training can address the domain shift problem. Subsequently, \citet{Wu2020ScalableZE} showed that pre-trained zero-shot architectures are both highly accurate and computationally efficient at scale. None of these works tackle the problem of unknown entities.
Recently, \citet{fitzgerald-etal-2021-moleman} model EL entirely as mappings between mentions, where inference involves a NN search against all known mentions of all entities in the training set. In this setting mentions need to be labeled. They do not explore their approach in the setting of unknown entities.
Prior to dense retrieval-based EL, unknown entity discovery work includes the following.
\citet{ratinov-etal-2011-local} train a classifier to determine whether the top-ranked EL candidate is unknown, relying on local context, global Wikipedia coherence, and additional linker features; mentions of known entities that are incorrectly linked are among the training examples for unknown entity discovery, which decreases quality. \citet{nakashole-etal-2013-fine} introduce a model for unknown entity discovery and typing leveraging incompatibilities and correlations among entity
types; this method ignores context, so mentions not registered in the reference KB are regarded as unknown. \citet{10.1145/2566486.2568003, Wu2016ExploringMF} study a variety of features for unknown entity discovery: \citet{10.1145/2566486.2568003} use perturbation-based confidence measures and key-phrase representations, and \citet{Wu2016ExploringMF} explore different feature spaces, e.g., topical and search-engine features; these features are not readily available, and incorporating them into PLM-based approaches is not straightforward and outside the scope of this work. \citet{DBLP:conf/tac/JiNHF15, derczynski-etal-2017-results} introduce shared tasks for discovery; these tasks do not target end-to-end dense retrieval-based EL and are outdated, e.g., they treat iPhone as an unknown entity. \citet{DBLP:conf/ijcai/Akasaki0T19} introduce a time-sensitive method of discovering emerging entities; this method relies on training data in the form of labeled emerging contexts, collected by searching only exact mentions of unknown entities and therefore neglecting the issues of ambiguity and mention variety.
None of these works consider unknown entities in an end-to-end setting including mention detection, unknown entity discovery and indexing. Also, we cannot use their datasets to evaluate as these entities were part of training the PLM.
Closely related to EL is the task of cross document entity co-reference (CDC), where no reference KB is present \cite{10.3115/980845.980859, gooi-allan-2004-cross, singh-etal-2011-large,dutta-weikum-2015-cross, barhom-etal-2019-revisiting, cattan-etal-2021-realistic, caciularu-etal-2021-cdlm-cross, cattan2021scico}. Most recently, \citet{logan-iv-etal-2021-benchmarking}
benchmark methods for streaming CDC, where mentions are disambiguated in a scalable manner via incremental clustering. Our work can be seen as bridging between the worlds of CDC and EL: we harvest CDC to discover and cluster unknown entities, but then integrate them into a curated list of entities. \citet{dutta-weikum-2015-c3el}
also combine clustering-based CDC decisions and linking, but as their work uses sparse bag-of-words representations, it is not well suited to the embedding-based representations used in this work.
Most recently, \citet{angell-etal-2021-clustering} introduce a new EL method using document-level supervised graph-based clustering. \citet{agarwal2021entity} extend this work to cross-document EL and entity discovery. In this work, we adopt a more standard bi-encoder architecture (i.e. BLINK), with better EL scalability potential (memory linear in the number of entities and not in the number of mentions) and an existing end-to-end extension. We use a modified version of their discovery method.
Besides dense-retrieval based EL, \citet{decao2020autoregressive} proposed generative EL. We plan to study the problem of integrating unknown entities into generative models in future work.
\section{Conclusion}
This work introduced the EDIN benchmark and pipeline. The EDIN-benchmark is a large-scale, end-to-end entity linking benchmark with a clear-cut temporal segmentation for \emph{Unknown Entity Discovery and Indexing}. The EDIN-pipeline detects and clusters mentions of unknown entities in context; these clusters of unknown mentions are then collapsed into single embeddings and integrated into the entity index of the original entity linking system.
\bibliographystyle{acl_natbib}
\section{HYDRODYNAMIC MODELS}
\label{ingredients}
Hydrodynamics is one of the main tools for studying the
collective flow in high-energy nuclear collisions. Here, we
shall examine some of the main ingredients of such
a description and see how a more realistic treatment of
these elements may affect some of the observable quantities.
The main components of any hydrodynamic model are the initial
conditions, the equations of motion, equations of state and
some decoupling prescription. We shall discuss how these
elements are chosen in our studies.
\bigskip
\noindent {\bf Initial Conditions}:
In the usual hydrodynamic approach, one assumes some highly
symmetric and smooth initial conditions (IC). However, since
our systems are small, large event-by-event fluctuations are
expected in real collisions, so this effect should be taken
into account. We introduce such IC fluctuations by using an
event simulator. As an example, we show here the energy density for central Au+Au collisions at 130A GeV,\hfilneg\
\vspace*{.1cm}
\begin{figure}[h]
\includegraphics[angle=270,width=7.cm]{condin3.eps}
\caption{The initial energy density at $\eta=0$ is plotted in
units of GeV/fm$^3$. One random event is shown vs. average
over 30 random events ($\simeq$ smooth initial
conditions in the usual hydro approach).}
\label{fig:IC}
\vspace*{-.5cm}
\end{figure}
\noindent given by NeXuS\footnote{Many other simulators, based
on microscopic models, {\it e.g.} HIJING \cite{hijing},
VNI \cite{vni}, URASiMA \cite{urasima}, $\cdots$, show such
event-by-event fluctuations.} \cite{nexus}. Some consequences of such fluctuations have been
discussed elsewhere\cite{fluctuations,hbt-prl,review}. We
shall discuss some others in Sec.\ref{results}.
\smallskip
\noindent {\bf Equations of Motion}:
In hydrodynamics, the flow is governed by the continuity
equations expressing the conservation of energy-momentum,
baryon-number and other conserved charges. Here, for simplicity, we shall consider only the energy-momentum and
the baryon number. Since our systems have no symmetry as discussed above, we developed a special
numerical code called SPheRIO ({\bf S}moothed {\bf P}article
{\bf h}ydrodynamic {\bf e}volution of {\bf R}elativistic heavy
{\bf IO}n collisions) \cite{spherio}, based on the so called
Smoothed-Particle Hydrodynamics (SPH) algorithm \cite{sph}.
The main characteristic of SPH is the parametrization of the
flow in terms of discrete Lagrangian coordinates attached to
small volumes (called ``particles'') with some conserved
quantities.
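The SPH ansatz can be illustrated with a stdlib-only 1-D toy: a conserved density (e.g. entropy) is carried by discrete "particles" with fixed charges $\nu_i$ and smoothed with a normalized kernel, so the total charge is conserved by construction. The Gaussian kernel and all names are illustrative assumptions; the actual SPheRIO code is relativistic and 3+1-dimensional.

```python
import math

# Minimal 1-D illustration of the SPH representation: a conserved
# density is a kernel-smoothed sum over discrete Lagrangian "particles"
# carrying fixed charges nu_i.

def W(x, h):
    """Normalized Gaussian smoothing kernel of width h (integrates to 1)."""
    return math.exp(-(x / h) ** 2) / (h * math.sqrt(math.pi))

def sph_density(x, particles, h):
    """Kernel-smoothed density; particles is a list of (x_i, nu_i)."""
    return sum(nu * W(x - xi, h) for xi, nu in particles)
```

Because the kernel is normalized, integrating the smoothed density over space recovers the sum of the particle charges exactly, which is the sense in which the parametrization carries the conserved quantities.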
\smallskip
\noindent {\bf Equations of State}:
In high-energy collisions, one often uses equations of state
(EoS) with a first-order phase transition, connecting a
high-temperature QGP phase with a low-temperature hadron phase.
A detailed account of such EoS may be found, for instance, in
\cite{review}. We shall denote them 1OPT EoS. However, lattice
QCD showed that the transition line has a critical end point
and for small net baryon surplus the transition is of crossover
type~\cite{LQCD}.
The following parametrization may reproduce this behavior,
in practice:
\vspace*{-.2cm}
\begin{eqnarray}
P &=& \lambda P_H+(1-\lambda)P_Q
+2\delta/{\sqrt{(P_Q-P_H)^2+4\delta}}\ ,\\
s &=& \lambda s_H+(1-\lambda) s_Q\,, \\
\epsilon &=& \lambda\epsilon_H+(1-\lambda)\epsilon_Q
-{2\,\left[1+(\mu/\mu_c)^2\right]\,\delta}/
{\sqrt{(P_Q-P_H)^2+4\delta}}\ ,
\end{eqnarray}
\vspace*{-1.cm}
\begin{figure}[!h]
\hspace{4.cm}
\includegraphics[height=8.2cm,width=17.5cm]{EoS1.eps}
\caption{A comparison of $\varepsilon(T)$, $s(T)$ and $P(T)$
as given by our parametrization with a critical point (solid
lines) and those with a first-order phase transition (dashed
lines).}
\label{fig:EoS1}
\end{figure}
\noindent where
$\lambda\equiv[1-(P_Q-P_H)/\sqrt{(P_Q-P_H)^2+4\delta}\ ]/2$
and suffixes $Q$ and $H$ denote those quantities given by
the MIT bag model and the hadronic resonance gas, respectively,
and $\delta\equiv\delta(\mu_b)=\delta_0\exp[-(\mu_b/\mu_c)^2]$,
with $\mu_c=$const.
As is seen, when $\delta(\mu_b)\not=0$, the transition from
hadron phase to QGP is smooth. We could choose $\delta(\mu_b)$
so to make it exactly 0 when $\mu_b>\mu_c\,$, to guarantee the
first-order phase transition there; in practice, however, our choice above proved to be sufficient. We shall
denote the EoS given above, with $\delta_0\not=0$, CP~EoS.
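As a sanity check, the following sketch transcribes the $P$ and $s$ parametrization above directly; $P_H, s_H$ ($P_Q, s_Q$) stand for the hadron-resonance-gas (MIT-bag) inputs and are passed in as plain numbers, with hypothetical parameter values. In the limit $\delta_0 \to 0$ the formula recovers the Gibbs construction $P = \max(P_H, P_Q)$ of the 1OPT EoS.

```python
import math

# Direct transcription of the crossover parametrization:
#   lambda = [1 - (P_Q - P_H)/sqrt((P_Q - P_H)^2 + 4 delta)]/2
#   P = lambda P_H + (1 - lambda) P_Q + 2 delta / sqrt(...)
#   s = lambda s_H + (1 - lambda) s_Q
# with delta(mu_b) = delta0 * exp[-(mu_b/mu_c)^2].  delta0, mu_c are
# hypothetical parameter values.

def cp_eos(PH, sH, PQ, sQ, mu_b, delta0=0.1, mu_c=0.4):
    delta = delta0 * math.exp(-(mu_b / mu_c) ** 2)
    root = math.sqrt((PQ - PH) ** 2 + 4 * delta)
    lam = 0.5 * (1 - (PQ - PH) / root)
    P = lam * PH + (1 - lam) * PQ + 2 * delta / root
    s = lam * sH + (1 - lam) * sQ
    return P, s
```

For $\delta_0 \to 0$, $\lambda$ collapses to a step function in $P_Q - P_H$, so the phase with the larger pressure is selected, while a finite $\delta_0$ interpolates smoothly between the two phases.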
Let us compare, in Figure~\ref{fig:EoS1}, $\varepsilon(T)$, $s(T)$ and $P(T)$, given by the two
sets of EoS. One can see that the crossover
behavior is correctly reproduced by our parametrization for
CP EoS, while finite jumps in $\varepsilon$ and $s$ are
exhibited by 1OPT EoS, at the transition temperature. It is
also seen, as mentioned above, that at $\mu_b\sim0.4\,$GeV
the two EoS are indistinguishable. Now, since in a
real collision what is directly given is the energy distribution at a certain initial time (besides $n_b$, $s$,
{\it etc.}), whereas $T$ is defined with the use of the former,
we plotted some quantities as function of $\varepsilon$ in
Figure \ref{fig:EoS2}. One immediately sees there some
remarkable
differences between the two sets of EoS: naturally $P$ is not
constant for CP EoS in the crossover region; moreover, $s$ is
larger. We will see in Sec.\ref{results} that these features
affect the observables in non-negligible way.
\bigskip
\noindent {\bf Decoupling Prescription}:
Usually, one assumes decoupling on a sharply defined hypersurface. We call this {\it Sudden Freeze Out} (FO).
However, since our systems are small, particles may escape
from a layer with thickness comparable with the systems' sizes.
We proposed an alternative description called
{\it Continuous Emission} (CE)~\cite{ce} which, as compared to FO, we believe to be closer to what happens in the actual collisions.
In CE, particles escape from any space-time point $x^\mu$,
according to a momentum-dependent\hfilneg\
\begin{figure}[!t]
\hspace{4.cm}
\includegraphics*[height=7.cm,width=17.5cm]{EoS2.eps}
\caption{Plots of $s/\varepsilon$ and $P$ as function of
$\varepsilon$ for the two EoS shown in Figure~\ref{fig:EoS1}.}
\vspace*{-1.6cm}
\label{fig:EoS2}
\end{figure}
\noindent escaping probability
${\cal P}(x,k)
=\exp\left[-\int_\tau^\infty \rho(x^\prime)\,\sigma v\; \mathrm{d}\tau^\prime\right].$
To implement CE in SPheRIO code, we had to approximate it to make the computation practicable. We
took ${\cal P}$ on the average, {\it i.e.},
\begin{equation}
{\cal P}(x,k)\rightarrow\langle{\cal P}(x,k)\rangle
\equiv {\cal P}(x)
=\exp\left(-\kappa\ s^2/|\mathrm{d}s/\mathrm{d}\tau|\right).
\label{eq:prob}
\end{equation}
The last equality has been obtained by making a linear
approximation of the density
$\rho(x^\prime)=\alpha\, s(x^\prime)$ and
$\kappa = 0.5\,\alpha\,\langle\sigma v\rangle$ is
estimated to be 0.3$\,$, corresponding to
$\langle\sigma v\rangle\approx$ 2~fm$^2$. It will be shown in
Sec. \ref{results} that CE gives important changes in some
observables.
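As an illustrative sketch (not part of the original computation), the averaged escape probability above can be evaluated for a hypothetical power-law dilution of the entropy density, $s(\tau) = s_0(\tau_0/\tau)^n$, mimicking three-dimensional expansion; the values of $s_0$, $\tau_0$ and $n$ below are arbitrary assumptions, while $\kappa=0.3$ is the value quoted in the text.

```python
import math

# Illustrative sketch: evaluate the averaged escape probability
# P(x) = exp(-kappa * s^2 / |ds/dtau|) for a hypothetical power-law
# dilution s(tau) = s0*(tau0/tau)**n of the entropy density.
kappa = 0.3                    # ~ 0.5*alpha*<sigma v> in fm^2, value quoted in the text
s0, tau0, n = 10.0, 1.0, 3.0   # hypothetical initial entropy density, time, exponent

def escape_probability(tau):
    s = s0 * (tau0 / tau)**n   # entropy density at proper time tau
    ds_dtau = -n * s / tau     # its proper-time derivative for the power law
    return math.exp(-kappa * s**2 / abs(ds_dtau))

# For this scaling the exponent reduces to -kappa*s*tau/n, which decreases
# as the matter dilutes, so the escape probability approaches 1 at late times:
for tau in (1.0, 3.0, 10.0):
    print(tau, escape_probability(tau))
```

For exact one-dimensional Bjorken scaling ($n=1$, $s\tau=$ const) the exponent would stay constant; the growth of ${\cal P}$ toward 1 here reflects the faster dilution assumed for $n>1$.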
\section{RESULTS}
\label{results}
Let us now show results of computation of some observables, as
described above, for Au+Au at 200A GeV. We start
computing $\eta$ and $p_T$
distributions for charged particles, to fix the parameters.
Then, $v_2$ and HBT radii are computed free of parameters.
\begin{figure}[!t]
\vspace*{-1.4cm}
\includegraphics*[scale=0.22]{eta1.eps}
\includegraphics*[scale=0.22]{pT1.eps}
\vspace*{-2.cm}
\caption{$\eta$ and $p_T$ distributions for the most central
Au+Au at 200A GeV. Results of CP EoS and 1OPT EoS are
compared. The data are from PHOBOS Collab.\cite{phobos1}.}
\label{fig:eta}
\vspace*{-1.5cm}
\end{figure}
\begin{figure}[!b]
\includegraphics*[scale=0.3]{v2.eps}
\vspace*{-.5cm}
\vspace*{-1.2cm}
\includegraphics*[scale=0.29]{dNdv2.eps}
\vspace*{-.5cm}
\caption{Left: $\eta$ distribution of $v_2$ for charged
particles in the centrality $(15-25)$\% Au+Au at 200A GeV,
computed with fluctuating IC. The vertical bars indicate
dispersions. The data are from PHOBOS Collab.\cite{phobos3}.
Right: $v_2$ distribution in the interval $0.48<\eta<0.95\,$,
corresponding to CP EoS and CE.}
\label{fig:v2}
\vspace*{-.5cm}
\end{figure}
\medskip
\noindent{\bf Pseudo-rapidity distribution}:
Figure \ref{fig:EoS2} shows that the inclusion of a critical
end point increases the entropy per energy. This means that, given the same total energy, CP EoS produces larger
multiplicity, which is clearly shown in the left panel of
Figure \ref{fig:eta}, especially in the mid-rapidity region.
We also mention that, once the equations of state are chosen, fluctuating IC produce a smaller multiplicity than
smooth averaged IC, for the same decoupling prescription~\cite{review}.
\medskip
\noindent{\bf Transverse-Momentum Distribution}:
As discussed in Sec. \ref{ingredients}, since the pressure does
not remain constant in the crossover region, we expect that the
transverse acceleration is larger for CP EoS, as compared with
1OPT EoS case. Indeed, the right panel of Figure
\ref{fig:eta} does show that $p_T$ distribution is flatter for
CP EoS, but the difference is small.
The freezeout temperature suggested by $\eta$ and $p_T$
distributions turned out to be $T_f\simeq135-140\,$MeV.
\medskip
\noindent{\bf Elliptic-Flow Parameter $v_2$}: We show, in
Figure \ref{fig:v2}, results for the $\eta$
distribution of $v_2$ for Au+Au collisions at 200A GeV. As
seen, CP EoS gives larger $v_2\,$, as a consequence of the larger
acceleration in this case, as discussed in
Sec.~\ref{ingredients}.
Notice that CE makes the curves narrower, as a consequence of
earlier emission of particles, so with smaller acceleration,
at large-$\vert\eta\vert$ regions. Due to the IC fluctuations,
the resulting fluctuations of $v_2$ are large, as seen in
Figure \ref{fig:v2}. It would be nice to measure such a
$v_2$ distribution, which would discriminate among several
microscopic models for the initial stage of nuclear
collisions.
\medskip
\begin{figure}[!t]
\includegraphics*[scale=0.28]{RL.eps}
\vspace*{-.5cm}
\caption{$k_T$ dependence of HBT radius $R_L$ for $\pi$
in the most central Au+Au at 200A GeV, computed with
fluctuating IC. The data are from PHENIX Collab.\cite{phenix}.}
\label{fig:RL}
\end{figure}
\begin{figure}[!b]
\includegraphics*[scale=0.28]{Rs.eps}
\vspace*{-.5cm}
\vspace*{.cm}
\includegraphics*[scale=0.28]{Ro.eps}
\vspace*{-1.cm}
\caption{$k_T$ dependence of HBT radii $R_s$ and $R_o$ for
pions in the most central Au+Au at 200A GeV, computed with
event-by-event fluctuating IC. The data are from PHENIX
Collab.\cite{phenix}.}
\label{fig:RoRs}
\end{figure}
\noindent {\bf HBT Radii}:
Here, we show our results for the HBT radii, in Gaussian
approximation as often used, for the most central Au+Au
collisions at 200A GeV. As seen in Figures \ref{fig:RL} and
\ref{fig:RoRs}, the differences between CP EoS results and
those for 1OPT EoS are small.
For $R_s\,$, and especially for $R_o\,$, one sees that CP EoS
combined with continuous emission gives steeper $k_T$
dependence, closer to the data.
However, there is still a numerical discrepancy in this case.
\section{CONCLUSIONS AND OUTLOOKS}
\label{conclusions}
In this work, we introduced a parametrization of lattice-QCD
EoS, with a first-order phase transition at large $\mu_b$ and
a crossover behavior at smaller $\mu_b\,$.
By solving the hydrodynamic equations, we studied
the effects of such EoS and the continuous emission.
Some conclusions are:
{\it i}) The multiplicity increases for these EoS in the
mid-rapidity;
{\it ii}) The $p_T$ distribution becomes flatter, although the
difference is small;
{\it iii}) $v_2\,$ increases; CE makes
the $\eta$ distribution narrower;
{\it iv}) The HBT radii become slightly closer to the data.
In our calculations, the effect of the continuous emission on
the interacting component has not been taken into account. A more realistic treatment of this effect would probably make $R_o$
smaller, since the duration of particle emission becomes
shorter in this case. Another improvement concerns
the approximation we used for ${\cal P}(x,k)$.
\begin{theacknowledgments}
We acknowledge financial support by FAPESP (04/10619-9,
04/15560-2, 04/13309-0), CAPES/PROBRAL, CNPq, FAPERJ and PRONEX.
\end{theacknowledgments}
\section{Introduction}
Since the solitary surface wave was discovered by John Scott Russell \cite{Russell1844} in 1834, many models of solitary waves in shallow water have been developed, such as the Boussinesq equation \cite{Boussinesq1872}, the Korteweg \& de Vries (KdV) equation \cite{KdV}, the Benjamin-Bona-Mahony (BBM) equation \cite{Benjamin1972}, the Camassa-Holm (CH) equation \cite{Camassa1993PRL}, and so on. Unlike other models, the celebrated CH equation
\begin{equation}
u_t + 2 \kappa u_x - u_{xxt} + 3 u u_x = 2 u_x u_{xx} + u u_{xxx} \label{geq:CH}
\end{equation}
possesses both global solutions in time and wave-breaking solutions. Here $u$ denotes the wave elevation, $x,t$ are the spatial and temporal variables, subscripts denote differentiation, and $0\leq \kappa \leq 1/2$ is a constant related to the critical shallow water wave speed. In particular, when $\kappa = 0$, the CH equation (\ref{geq:CH}) has
a peaked solitary wave
$ u(x,t) = c \; \exp(-|x - c\; t|)$,
where $c$ denotes the phase speed. It should be emphasized that, at the wave crest $x = c \; t$, the above closed-form solution has no continuous derivative with respect to $x$; thus it does not satisfy Eq. (\ref{geq:CH}) (when $\kappa = 0$) at the crest $x = c \; t$. So, it is not a {\em strong} solution. However, Constantin and Molinet \cite{Constantin2000-B} pointed out that $ u(x,t) = c \; \exp(-|x - c\; t|)$ can be understood as a weak solution of the CH equation (\ref{geq:CH}) when $\kappa=0$.
Note that such kind of discontinuity (or singularity) exists widely in natural phenomena, such as dam breaking \cite{Zoppou2000AMM} in hydrodynamics, shock waves in aerodynamics, black holes described by general relativity, and so on. In the context of water waves, Stokes \cite{Stokes1894} found that the limiting gravity wave has a sharp corner at the crest. In the case of dam breaking, sharp corners of the wave elevation exist at the initial time $t=0$, and this kind of discontinuity of the derivative of the wave elevation does not disappear when the viscosity is neglected, as shown by Wu and Cheung \cite{Wu2008IJNMF}. In fact, problems related to such kind of discontinuity belong to the so-called Riemann problem \cite{Wu2008IJNMF, Bernetti2008JCP, Rosatti2010JCP}, which is a classic field of fluid mechanics.
However, such peaked solitary waves have {\em never} been found for other mainstream models of shallow water waves. Solitary wave solutions in shallow water with a fixed speed $c$ and permanent form were found by Korteweg \& de Vries \cite{KdV} using the KdV equation
\begin{equation}
\zeta_t + \zeta_{xxx} + 6 \zeta \; \zeta_x = 0, \label{geq:KdV:original}
\end{equation}
subject to the boundary conditions
\begin{equation}
\zeta\to 0, \zeta_x\to 0, \zeta_{xx}\to 0, \mbox{as $|x|\to +\infty$}, \label{bc:infinity:original}
\end{equation}
where $\zeta$ denotes the wave elevation. The KdV equation (\ref{geq:KdV:original}) admits solitary waves but no breaking ones. The traditional solitary wave of the KdV equation (\ref{geq:KdV:original}) reads
\begin{equation}
\zeta(x,t) = \frac{c}{2} \; \mbox{sech}^2 \left[\frac{\sqrt{c}}{2} (x-c \; t - x_0) \right],\label{zeta:traditional}
\end{equation}
where sech denotes the hyperbolic secant function and $x_0$ may be any constant. This solitary wave has a smooth crest with always positive elevation $\zeta(x,t)\geq 0$; besides, its phase speed $c$ depends upon the wave height, i.e. $c = 2\zeta_{max}$, so that higher solitary waves propagate faster. To the best of the author's knowledge, no peaked solitary waves of the KdV equation (\ref{geq:KdV:original}) have been reported.
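As a quick numerical sanity check (an illustration added here; the speed $c=2$ and $x_0=0$ are arbitrary sample choices), one may verify by finite differences that the traditional solitary wave above annihilates the KdV operator at smooth points:

```python
import math

# Finite-difference sanity check (illustrative): the traditional solitary
# wave zeta = (c/2) sech^2( sqrt(c)/2 (x - c t) ) should annihilate the
# KdV operator  zeta_t + zeta_xxx + 6 zeta zeta_x  at every smooth point.
c = 2.0

def zeta(x, t):
    return (c / 2) / math.cosh(math.sqrt(c) / 2 * (x - c * t))**2

def kdv_residual(x, t, h=1e-3):
    zt = (zeta(x, t + h) - zeta(x, t - h)) / (2 * h)
    zx = (zeta(x + h, t) - zeta(x - h, t)) / (2 * h)
    # centered 5-point stencil for the third derivative
    zxxx = (zeta(x + 2*h, t) - 2*zeta(x + h, t)
            + 2*zeta(x - h, t) - zeta(x - 2*h, t)) / (2 * h**3)
    return zt + zxxx + 6 * zeta(x, t) * zx

print(max(abs(kdv_residual(x, 0.3)) for x in (-2.0, -0.5, 0.7, 1.5)))
```

The printed maximum residual is at the level of the finite-difference truncation error, far below the $O(1)$ size of the individual terms.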
The KdV equation (\ref{geq:KdV:original}) is an approximation of the fully nonlinear wave equations that admit the limiting gravity wave with a peaked crest, as pointed out by Stokes \cite{Stokes1894}. Besides, as mentioned before, this kind of discontinuity of wave elevation widely exists in hydrodynamic problems such as dam break \cite{Zoppou2000AMM}. In addition, recent investigations \cite{Constantin2000, Dullin2001} reveal the close relationships between the CH equation (\ref{geq:CH}) and the KdV equation (\ref{geq:KdV:original}). Therefore, one has many reasons to assume that the progressive solitary waves of the KdV equation (\ref{geq:KdV:original}) might admit peaked solitary waves, too.
\section{Peaked solitary waves of KdV equation}%
Write $\xi = x - c \; t - x_0$ and $\eta(\xi)=\zeta(x,t)$. The original KdV equation (\ref{geq:KdV:original}) becomes
\begin{equation}
-c \eta' + \eta'''+6 \eta \eta' =0, \label{geq:KdV}
\end{equation}
subject to the boundary condition
\begin{equation}
\eta \to0, \;\; \eta'\to 0, \;\; \eta''\to 0, \mbox{as $|\xi|\to +\infty$}, \label{bc:infinity}
\end{equation}
where the prime denotes the differentiation with respect to $\xi$. Besides, there exists the symmetry condition
\begin{equation}
\eta(-\xi) = \eta(\xi), \hspace{1.0cm} \xi \in(-\infty,+\infty) \label{bc:symmetry}
\end{equation}
so that we need only consider the solution in the interval $\xi \geq 0$.
Integrating (\ref{geq:KdV}) with (\ref{bc:infinity}) gives
\begin{equation}
-c \eta + \eta'' + 3 \eta^2 = 0, \hspace{1.0cm} \xi \geq 0 . \label{geq:KdV:1}
\end{equation}
Multiplying it by $2 \eta'$ and then integrating it with (\ref{bc:infinity}), we have
\begin{equation}
\eta'^2 = \eta^2 (c-2\eta), \hspace{1.0cm} \xi \geq 0, \label{geq:KdV:2}
\end{equation}
which has real solutions only when
\begin{equation}
\eta \leq \frac{c}{2}. \label{eta:limit}
\end{equation}
Under the restriction (\ref{eta:limit}), we have from (\ref{geq:KdV:2}) that
\begin{equation}
\eta' = \pm \eta \; \sqrt{c-2\eta}, \label{geq:KdV:3}
\end{equation}
say,
\begin{equation}
\frac{d \eta}{\eta \; \sqrt{c-2\eta}} = \pm d \xi, \;\; \;\; \xi \geq 0. \label{geq:KdV:4}
\end{equation}
Let us first consider the solitary waves with $\eta\geq 0$. Integrating (\ref{geq:KdV:4}) in case of $0\leq \eta \leq c/2$, we have
\begin{equation}
\mbox{tanh}^{-1}\left[ \sqrt{1-\frac{2 \eta}{c}} \right] = \pm \frac{\sqrt{c}}{2} \xi + \alpha,
\end{equation}
where $\alpha$ is a constant. Since $0 \leq 2\eta/c \leq 1$, the left-hand side of the above equation is non-negative so that it holds for
all $\xi\geq 0$ if and only if
\[ \mbox{tanh}^{-1}\left[ \sqrt{1-\frac{2 \eta}{c}} \right] = \frac{\sqrt{c}}{2} \xi + \alpha, \;\;\;\xi\geq 0, \alpha \geq 0, \]
which gives
\begin{equation}
\eta(\xi) = \frac{c}{2} \; \mbox{sech}^2\left[ \frac{\sqrt{c}}{2} \xi + \alpha \right], \;\;\; \xi\geq 0, \alpha\geq 0.
\end{equation}
Using the symmetry condition (\ref{bc:symmetry}), it holds
\begin{equation}
\eta(\xi) = \frac{c}{2} \; \mbox{sech}^2\left[ \frac{\sqrt{c}}{2} |\xi| + \alpha \right], \;\;\; -\infty<\xi<+\infty, \alpha\geq 0
\end{equation}
in the {\em whole} interval. Thus, we have the solitary waves of the first kind
\begin{equation}
\zeta(x,t) = \frac{c}{2}\mbox{sech}^2\left[ \frac{\sqrt{c}}{2} |x-c \;t -x_0| + \alpha \right], \label{zeta:new:positive}
\end{equation}
where $\alpha \geq 0$, with the wave height
\begin{equation}
\zeta_{max} = \left( \frac{c}{2} \right) \mbox{sech}^2(\alpha), \;\;\; \alpha\geq 0.
\end{equation}
Note that $\alpha\geq 0$ is a constant parameter. When $\alpha=0$, it is exactly the same as the traditional solitary waves of the KdV equation with a smooth crest.
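The derivation above can be spot-checked numerically (an illustration; the values of $c$, $\alpha$ and the sample points $\xi$ are arbitrary): away from the crest $\xi=0$, the shifted profile satisfies the first integral $\eta'^2=\eta^2(c-2\eta)$ for any $\alpha\geq 0$.

```python
import math

# Illustrative numerical check: for any alpha >= 0 the profile
# eta(xi) = (c/2) sech^2( sqrt(c)/2 |xi| + alpha ) satisfies the first
# integral  eta'^2 = eta^2 (c - 2 eta)  away from the crest xi = 0.
c = 2.0

def eta(xi, alpha):
    return (c / 2) / math.cosh(math.sqrt(c) / 2 * abs(xi) + alpha)**2

def first_integral_gap(xi, alpha, h=1e-6):
    d = (eta(xi + h, alpha) - eta(xi - h, alpha)) / (2 * h)   # eta'
    e = eta(xi, alpha)
    return d * d - e * e * (c - 2 * e)

worst = max(abs(first_integral_gap(xi, a))
            for a in (0.0, 0.5, 1.5)
            for xi in (-3.0, -1.0, 0.4, 2.0))
print(worst)
```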
In case of $\eta < 0$, it holds $1-2 \eta/c > 1$ so that
\[ \mbox{tanh}^{-1}\left[ \sqrt{1-\frac{2 \eta}{c}} \right] \]
becomes a complex number which has no physical meaning. This is the reason why the solitary solutions of the KdV equation in the case of $\eta<0$ were traditionally neglected. So, we must be very careful in this case.
In case of $\eta\leq 0$, write
\[ \sqrt{c - 2\eta} = \sqrt{c} \sqrt{1-\frac{2\eta}{c}} = \sqrt{c} \; z, \]
where $z\geq 1$ and
\[ z^2 = 1-\frac{2 \eta}{c} \geq 1. \]
Then, we have
\begin{equation}
\eta = \frac{c}{2}(1-z^2), \;\; d \eta = - c \; z \; d z.
\end{equation}
Since $z \geq 1$, it holds
\begin{eqnarray}
&&\frac{d\eta}{\eta\sqrt{c-2\eta}} = \frac{1}{\sqrt{c}}\left( \frac{d z}{z-1}-\frac{d z}{z+1}\right)=d\left[\frac{1}{\sqrt{c}} \ln\left(\frac{z-1}{z+1}\right) \right].
\end{eqnarray}
Substituting it into (\ref{geq:KdV:4}) and integrating, we have
\begin{equation}
\ln\left(\frac{z-1}{z+1}\right) = \pm \sqrt{c} \; \xi - 2\beta, \hspace{1.0cm} \xi \geq 0, \label{geq:KdV:5}
\end{equation}
where $\beta$ is a constant. Since $z\geq 1$, it holds
\[ 0 \leq \frac{z-1}{z+1} < 1, \]
which gives
\[ \ln\left(\frac{z-1}{z+1}\right) < 0. \]
Thus, (\ref{geq:KdV:5}) holds for all $\xi\geq 0$ if and only if
\begin{equation}
\ln\left(\frac{z-1}{z+1}\right) = -(\sqrt{c} \; \xi +2 \beta), \;\; \xi\geq 0, \;\; \beta>0,
\end{equation}
which gives
\[ z = \mbox{coth}\left[ \frac{\sqrt{c}}{2} \; \xi +\beta \right], \;\; \xi\geq 0, \;\; \beta>0. \]
So, we have
\[
\eta(\xi) = \frac{c}{2}(1-z^2)=-\left( \frac{c}{2} \right) \mbox{csch}^2\left[ \frac{\sqrt{c}}{2} \; \xi +\beta \right],
\;\; \xi\geq 0, \;\; \beta>0, \]
where csch denotes a hyperbolic cosecant function, and $\beta>0$ is a constant.
Using the symmetry condition (\ref{bc:symmetry}), we have the solitary wave
\[
\eta(\xi) = \frac{c}{2}(1-z^2)=-\left( \frac{c}{2} \right) \mbox{csch}^2\left[ \frac{\sqrt{c}}{2} \; |\xi| +\beta \right],
\;\; -\infty<\xi<+\infty, \;\; \beta>0, \]
which is valid in the {\em whole} interval.
Let $d \leq \zeta(x,t) \leq 0$ denote the restriction on $\zeta(x,t)$, where $d<0$ is a given lower bound. Then, it holds
\[ \eta(0) = -\left( \frac{c}{2} \right) \mbox{csch}^2 (\beta) \geq d, \]
which gives
\[ \beta \geq \mbox{csch}^{-1}\sqrt{-\frac{2 d}{c}}. \]
Thus, we have the solitary waves of the second kind:
\begin{equation}
\zeta(x,t) = -\frac{c}{2} \mbox{csch}^2\left[ \frac{\sqrt{c}}{2}|x - c\;t - x_0| +\beta \right], \label{zeta:new:negative}
\end{equation}
where
\[ \beta \geq \mbox{csch}^{-1}\sqrt{-\frac{2 d}{c}}, \]
with the restriction $d \leq \zeta(x,t) \leq 0$.
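A similar spot check (again with arbitrary sample values of $c$, $\beta$ and $\xi$) confirms that the second-kind profile satisfies the same first integral away from the crest, which is the identity $\coth^2 u = 1+\mathrm{csch}^2 u$ in disguise:

```python
import math

# Illustrative check for the solitary waves of the second kind:
# eta(xi) = -(c/2) csch^2( sqrt(c)/2 |xi| + beta ), beta > 0, also
# satisfies  eta'^2 = eta^2 (c - 2 eta)  away from the crest xi = 0.
c = 2.0

def eta(xi, beta):
    return -(c / 2) / math.sinh(math.sqrt(c) / 2 * abs(xi) + beta)**2

def first_integral_gap(xi, beta, h=1e-6):
    d = (eta(xi + h, beta) - eta(xi - h, beta)) / (2 * h)   # eta'
    e = eta(xi, beta)
    return d * d - e * e * (c - 2 * e)

worst = max(abs(first_integral_gap(xi, b))
            for b in (0.5, 1.0, 2.0)
            for xi in (-2.0, -0.7, 0.7, 2.0))
print(worst)
```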
The closed-form solutions (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}) of the solitary waves of the first and second kind exactly satisfy the KdV equation (\ref{geq:KdV:original}) in the whole domain $-\infty < x<+\infty$ and $t\geq 0$, {\em except at} $x = c \; t +x_0$ (corresponding to the wave crest). This is rather similar to the closed-form solution $u(x,t) = c \exp(-|x - c \; t|)$ of the CH equation (\ref{geq:CH}) when $\kappa=0$. Thus, the closed-form solutions (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}) should be understood as weak solutions. Besides, the corresponding Rankine-Hugoniot jump condition \cite{Rankine1870} must be satisfied, so as to ensure that (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}) have physical meaning.
To derive the corresponding Rankine-Hugoniot jump condition \cite{Rankine1870} of the KdV equation (\ref{geq:KdV:original}), we rewrite it in the conservation form
\begin{equation}
\zeta_t + [f(\zeta)]_x = 0, \label{geq:jump}
\end{equation}
where $f(\zeta) = \zeta_{xx} + 3\zeta^2$. Then, $\zeta$ is a weak solution of (\ref{geq:jump}), if
\begin{equation}
\int_{0}^{+\infty}\int_{-\infty}^{+\infty} \left[ \zeta \varphi_t + f(\zeta) \varphi_x \right] dx dt + \int_{-\infty}^{+\infty} \zeta(x,0) \varphi(x,0) dx = 0 \label{def:weak}
\end{equation}
for all smooth functions $\varphi$ with compact support. Besides, there exists such a theorem that, if $\zeta$ is a weak solution of (\ref{geq:jump}) such that $\zeta$ is discontinuous across the curve
$x = \sigma(t)$ but $\zeta$ is smooth on either side of $x = \sigma(t)$, then $\zeta$ must satisfy the condition
\begin{equation}
\sigma'(t) = \frac{f(\zeta^-) - f(\zeta^+)}{\zeta^- - \zeta^+}, \label{jump:general}
\end{equation}
across the curve of discontinuity, where $\zeta^-(x, t)$ is the limit of $\zeta$ approaching $(x,t)$ from the
left and $\zeta^+(x,t)$ is the limit of $\zeta$ approaching $(x, t)$ from the right.
Then, due to the symmetry of the progressive peaked solitary waves about the crest $x=c \; t + x_0$, we have
\begin{eqnarray}
c &=& \frac{f(\zeta^-) - f(\zeta^+)}{\zeta^- - \zeta^+} = \frac{(\zeta^-)_{xx} -(\zeta^+)_{xx}+ 3(\zeta^-)^2-3(\zeta^+)^2}{\zeta^- - \zeta^+} \nonumber\\
&=& \frac{(\zeta^-)_{xx} -(\zeta^+)_{xx}}{\zeta^- - \zeta^+} + 3\left( \zeta^- + \zeta^+\right) \nonumber\\
&=& \frac{(\zeta^-)_{xxx} -(\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x} + 3\left( \zeta^- + \zeta^+\right),
\end{eqnarray}
where the last equality follows from l'H\^{o}pital's rule, since both the numerator and the denominator vanish as $x$ tends to the crest. This provides us the so-called Rankine-Hugoniot jump condition of the KdV equation:
\begin{equation}
c = \frac{(\zeta^-)_{xxx} -(\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x} + 3\left( \zeta^- + \zeta^+\right), \hspace{1.0cm} \mbox{as $x \to c \;t + x_0$.}
\label{jump:criterion}
\end{equation}
For the closed-form solution (\ref{zeta:new:positive}) at the crest, we have
\begin{eqnarray}
\zeta^- &=&\zeta^+ = \frac{c}{2} \mbox{sech}^2(\alpha),\\
(\zeta^-)_x & = & \frac{c^{3/2}}{2}\mbox{sech}^2(\alpha) \mbox{tanh}(\alpha), \;\; (\zeta^+)_x = -(\zeta^-)_x,\\
(\zeta^-)_{xxx} & = & \frac{c^{5/2}}{4} \left[ \mbox{cosh}(2\alpha) -5\right] \mbox{sech}^4(\alpha) \mbox{tanh}(\alpha), \;\;
(\zeta^+)_{xxx} = -(\zeta^-)_{xxx}.
\end{eqnarray}
Substituting them into (\ref{jump:criterion}), we have
\begin{equation}
\frac{c}{2} \left[ \mbox{sech}^2(\alpha) +\mbox{sech}^2(\alpha) \mbox{cosh}(2\alpha) -2 \right] = 0,
\end{equation}
which is an identity not only for $\alpha=0$ (corresponding to the traditional smooth solitary wave) but also for {\em arbitrary} $\alpha > 0$ (corresponding to the peaked solitary waves)!
Similarly, for the closed-form solution (\ref{zeta:new:negative}) at the crest, we have
\begin{eqnarray}
\zeta^- &=&\zeta^+ = -\frac{c}{2} \mbox{csch}^2(\beta),\\
(\zeta^-)_x & = & -\frac{c^{3/2}}{2}\mbox{csch}^2(\beta) \mbox{coth}(\beta), \;\; (\zeta^+)_x = -(\zeta^-)_x,\\
(\zeta^-)_{xxx} & = & -\frac{c^{5/2}}{4} \left[ \mbox{cosh}(2\beta) +5\right] \mbox{csch}^4(\beta) \mbox{coth}(\beta), \;\;
(\zeta^+)_{xxx} = -(\zeta^-)_{xxx}.
\end{eqnarray}
Substituting them into (\ref{jump:criterion}), we have
\begin{equation}
\frac{c}{2} \left[ \mbox{csch}^2(\beta) \; \mbox{cosh}(2\beta)-\mbox{csch}^2(\beta) - 2 \right] = 0,
\end{equation}
which is an identity for {\em arbitrary} constant $\beta > 0$.
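Both identities above can be confirmed numerically (an illustrative check; the sample values of $\alpha$ and $\beta$ are arbitrary), since $\mathrm{sech}^2\alpha\,(1+\cosh 2\alpha)=2\,\mathrm{sech}^2\alpha\cosh^2\alpha=2$ and $\mathrm{csch}^2\beta\,(\cosh 2\beta-1)=2\,\mathrm{csch}^2\beta\sinh^2\beta=2$:

```python
import math

# Numerical confirmation (illustrative) that the two jump-condition
# relations above are identities in the free parameter:
#   sech^2(a) + sech^2(a) cosh(2a) - 2 = 0   for any a >= 0,
#   csch^2(b) cosh(2b) - csch^2(b) - 2 = 0   for any b > 0.
def first_kind(a):
    s2 = 1.0 / math.cosh(a)**2
    return s2 + s2 * math.cosh(2 * a) - 2.0

def second_kind(b):
    c2 = 1.0 / math.sinh(b)**2
    return c2 * math.cosh(2 * b) - c2 - 2.0

print(max(abs(first_kind(a)) for a in (0.0, 0.4, 1.3, 2.8)),
      max(abs(second_kind(b)) for b in (0.3, 1.0, 2.2)))
```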
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{NewKdV1.eps}
\caption{ The solitary waves $\zeta(x,t)$ of the first kind of the KdV equation (\ref{geq:KdV:original}) with the same phase speed $c = 2$. Red line: $\alpha=0$; Green line: $\alpha=1/2$; Blue line: $\alpha =1$; Black line: $\alpha = 3/2$. }
\label{figure:KdV-1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{NewKdV2.eps}
\caption{ The solitary waves of the second kind of the KdV equation (\ref{geq:KdV:original}) with the same phase speed $c = 2$. Red line: $\beta=1$; Green line: $\beta=5/4$; Blue line: $\beta =3/2$; Black line: $\beta=7/4$. }
\label{figure:KdV-2}
\end{figure}
Therefore, both (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}) satisfy the corresponding Rankine-Hugoniot jump condition and should be understood as weak solutions of the KdV equation (\ref{geq:KdV:original}). It should be emphasized that the traditional solitary wave (\ref{zeta:traditional}) with a smooth crest is only a special case of the solitary waves (\ref{zeta:new:positive}) of the first kind when $\alpha=0$. Note that the solitary waves (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}) of the first and second kind of the KdV equation (\ref{geq:KdV:original}) have a peakon and an anti-peakon, respectively. Since $\alpha\geq 0$ and $\beta >0$ are arbitrary constants, the phase speed of these peaked solitary waves has nothing to do with the wave amplitude, as shown for example in Figs.~\ref{figure:KdV-1} and \ref{figure:KdV-2}. This is quite different from the traditional smooth solitary wave. To the best of the author's knowledge, such kind of peaked solitary waves with a peakon or an anti-peakon have never been reported for the KdV equation. All of this reveals the novelty of the peaked solitary waves (\ref{zeta:new:positive}) and (\ref{zeta:new:negative}).
\section{Peaked solitary waves of modified KdV equation}%
Similarly, the modified KdV equation \cite{mKdV, Zabusky1967}
\begin{equation}
\zeta_t + \zeta_{xxx} \pm 6 \zeta^2 \zeta_x =0 \label{geq:mKdV}
\end{equation}
has the two kinds of solitary waves
\begin{equation}
\zeta(x,t) = \pm \frac{2 c }{e^{\sqrt{c} |x- c\; t-x_0|+\alpha}\pm c \; e^{-\sqrt{c} |x- c\; t-x_0|-\alpha} }, \label{u:mKdV}
\end{equation}
where $c$ is the phase speed and $\alpha\geq 0$ is a constant.
Note that (\ref{jump:general}) holds for (\ref{geq:jump}) in general. Now, we have $f(\zeta) = \zeta_{xx} \pm 2 \zeta^3$, and the corresponding Rankine-Hugoniot jump condition reads
\begin{eqnarray}
c &=& \frac{f(\zeta^-)-f(\zeta^+)}{\zeta^- - \zeta^+} \nonumber\\
&=&\frac{(\zeta^-)_{xx} -(\zeta^+)_{xx} \pm 2\left[ (\zeta^-)^3 - (\zeta^+)^3\right] }{\zeta^- - \zeta^+} \nonumber\\
&=& \frac{(\zeta^-)_{xx} -(\zeta^+)_{xx}}{\zeta^- - \zeta^+} \pm 2\left[ (\zeta^-)^2 +\zeta^-\zeta^+ + (\zeta^+)^2\right] \nonumber\\
&=& \frac{(\zeta^-)_{xxx} -(\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x} \pm 2\left[ (\zeta^-)^2 +\zeta^-\zeta^+ + (\zeta^+)^2\right],
\end{eqnarray}
which provides us the Rankine-Hugoniot jump condition of the modified KdV equation
\begin{equation}
c = \frac{(\zeta^-)_{xxx} -(\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x} \pm 2\left[ (\zeta^-)^2 +\zeta^-\zeta^+ + (\zeta^+)^2\right], \hspace{1.0cm} \mbox{as $x \to c \; t + x_0$}. \label{jump:mKdV}
\end{equation}
Using the closed-form solution (\ref{u:mKdV}), it is found that the above condition is satisfied for {\em arbitrary} constant $\alpha\geq 0$. Therefore, the closed-form solution (\ref{u:mKdV}) exactly satisfies the corresponding Rankine-Hugoniot jump condition of the modified KdV equation (\ref{geq:mKdV}) and should be understood as a weak solution.
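As an illustrative finite-difference check of the "+" branch (with arbitrary sample values $c=2$, $\alpha=0.5$), the profile above satisfies the focusing modified KdV equation away from the crest:

```python
import math

# Finite-difference check (illustrative) of the "+" branch:
# zeta = 2c / ( e^{sqrt(c)|xi|+a} + c e^{-sqrt(c)|xi|-a} ), xi = x - c t,
# should satisfy  zeta_t + zeta_xxx + 6 zeta^2 zeta_x = 0  away from the crest.
c, a = 2.0, 0.5

def zeta(x, t):
    u = math.sqrt(c) * abs(x - c * t) + a
    return 2 * c / (math.exp(u) + c * math.exp(-u))

def mkdv_residual(x, t, h=1e-3):
    zt = (zeta(x, t + h) - zeta(x, t - h)) / (2 * h)
    zx = (zeta(x + h, t) - zeta(x - h, t)) / (2 * h)
    zxxx = (zeta(x + 2*h, t) - 2*zeta(x + h, t)
            + 2*zeta(x - h, t) - zeta(x - 2*h, t)) / (2 * h**3)
    return zt + zxxx + 6 * zeta(x, t)**2 * zx

# The crest sits at x = c t = 0.4 for t = 0.2; sample well away from it.
print(max(abs(mkdv_residual(x, 0.2)) for x in (-1.0, 1.2, 2.0)))
```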
\section{Peaked solitary waves of the BBM equation}%
Similarly, the BBM equation \cite{Benjamin1972}
\begin{equation}
\zeta_t + \zeta_x + \zeta \; \zeta_x - \zeta_{xxt} = 0 \label{geq:BBM}
\end{equation}
has the solitary waves of the first kind
\begin{equation}
\zeta(x,t) = 3(c-1) \mbox{sech}^2\left[\frac{\sqrt{1-c^{-1}}}{2}\left|\xi\right|+\alpha \right], \alpha\geq 0, \label{BBM:1st}
\end{equation}
and the solitary waves of the second kind
\begin{equation}
\zeta(x,t) = -3(c-1) \mbox{csch}^2\left[\frac{\sqrt{1-c^{-1}}}{2}\left|\xi\right|+\beta \right], \beta > 0, \label{BBM:2nd}
\end{equation}
where $\xi = x - c \; t - x_0$, $c$ is the phase speed, and $\alpha\geq 0$, $\beta>0$ are constants.
Now, we have $f(\zeta) = \zeta + \zeta^2/2 -\zeta_{xt}$, and the corresponding Rankine-Hugoniot jump condition reads
\begin{eqnarray}
c &=& \frac{f(\zeta^-)-f(\zeta^+)}{\zeta^- - \zeta^+} \nonumber\\
&=&\frac{(\zeta^-)-(\zeta^+) + \left[(\zeta^-)^2-(\zeta^+)^2\right]/2- \left[ (\zeta^-)_{xt} -(\zeta^+)_{xt}\right] }{\zeta^- - \zeta^+} \nonumber\\
&=& 1 + \frac{1}{2}\left(\zeta^- + \zeta^+\right)-\frac{(\zeta^-)_{xxt} -(\zeta^+)_{xxt}}{(\zeta^-)_x - (\zeta^+)_x},
\end{eqnarray}
which provides us the Rankine-Hugoniot jump condition of the BBM equation
\begin{equation}
c = 1 + \frac{1}{2}\left(\zeta^-+\zeta^+\right)-\frac{(\zeta^-)_{xxt} -(\zeta^+)_{xxt}}{(\zeta^-)_x - (\zeta^+)_x}, \;\;\; \mbox{as $x \to c\; t + x_0$}.\label{jump:BBM}
\end{equation}
For the solitary waves (\ref{BBM:1st}) of the first kind at the crest, we have
\begin{eqnarray}
\zeta^- &=& 3(c-1) \mbox{sech}^2(\alpha), \\
(\zeta^-)_x &=& 3 c (1-c^{-1})^{3/2} \mbox{sech}^2(\alpha)\; \mbox{tanh}(\alpha),\;\; \\
(\zeta^-)_{xxt} &=& -\frac{3}{4} c^2 (1-c^{-1})^{5/2} \mbox{sech}^5(\alpha) \left[ \mbox{sinh}(3\alpha) -11 \mbox{sinh}(\alpha)\right],
\end{eqnarray}
and
\[ \zeta^+ = \zeta^-, \;\; (\zeta^+)_x =-(\zeta^-)_x, \;\; (\zeta^+)_{xxt} =-(\zeta^-)_{xxt}. \]
Substituting all of these into (\ref{jump:BBM}), it is found that (\ref{BBM:1st}) satisfies the Rankine-Hugoniot jump condition (\ref{jump:BBM}) for {\em arbitrary} constant $\alpha\geq 0$. Similarly, (\ref{BBM:2nd}) satisfies the Rankine-Hugoniot jump condition (\ref{jump:BBM}) for {\em arbitrary} constant $\beta>0$. Therefore, both (\ref{BBM:1st}) and (\ref{BBM:2nd}) should be understood as weak solutions of the BBM equation (\ref{geq:BBM}).
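The first-kind BBM solution can likewise be spot-checked by finite differences (an illustration with arbitrary sample values $c=2$, $\alpha=0.8$); the mixed derivative $\zeta_{xxt}$ is approximated by a centered stencil:

```python
import math

# Finite-difference check (illustrative) that the first-kind BBM profile
# zeta = 3(c-1) sech^2( sqrt(1-1/c)/2 |xi| + a ), xi = x - c t, satisfies
# zeta_t + zeta_x + zeta zeta_x - zeta_xxt = 0 away from the crest.
c, a = 2.0, 0.8
k = math.sqrt(1 - 1 / c) / 2

def zeta(x, t):
    return 3 * (c - 1) / math.cosh(k * abs(x - c * t) + a)**2

def bbm_residual(x, t, h=1e-3):
    zt = (zeta(x, t + h) - zeta(x, t - h)) / (2 * h)
    zx = (zeta(x + h, t) - zeta(x - h, t)) / (2 * h)
    # mixed derivative zeta_xxt from a centered 2x3-point stencil
    zxxt = (zeta(x + h, t + h) - 2 * zeta(x, t + h) + zeta(x - h, t + h)
            - zeta(x + h, t - h) + 2 * zeta(x, t - h) - zeta(x - h, t - h)) / (2 * h**3)
    return zt + zx + zeta(x, t) * zx - zxxt

# The crest sits at x = c t = 1.0 for t = 0.5; sample away from it.
print(max(abs(bbm_residual(x, 0.5)) for x in (-1.5, 0.2, 2.2)))
```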
\section{Peaked solitary waves of Boussinesq equation}%
In addition, the Boussinesq equation \cite{Boussinesq1872}
\begin{equation}
\frac{\partial^2 \zeta}{\partial t^2} - g h
\frac{\partial^2 \zeta}{\partial x^2} -g h
\frac{\partial^2 }{\partial x^2}\left(\frac{3\zeta^2}{2h} + \frac{h^2}{3}\frac{\partial^2 \zeta}{\partial x^2}
\right) = 0 \label{geq:Boussinesq}
\end{equation}
has the solitary waves of the first kind
\begin{equation}
\zeta = h\left(\frac{c^2}{g h}-1 \right)\mbox{sech}^2\left[\frac{\sqrt{3}}{2 h}\sqrt{\frac{c^2}{g h}-1} \; \left|\xi\right| +\alpha \right], \label{Boussinesq:1st}
\end{equation}
and the solitary waves of the second kind
\begin{equation}
\zeta = -h\left(\frac{c^2}{g h}-1 \right)\mbox{csch}^2\left[\frac{\sqrt{3}}{2 h}\sqrt{\frac{c^2}{g h}-1} \; \left|\xi\right| +\beta \right], \label{Boussinesq:2nd}
\end{equation}
where $\xi = x-c \; t - x_0$, $h$ denotes the water depth, $g$ the acceleration of gravity, $c$ the phase speed of the wave, and $\alpha\geq 0$, $\beta>0$ are constants.
For a progressive solitary wave, we have
\[ \frac{\partial }{\partial x} = -\frac{1}{c} \frac{\partial }{\partial t}. \]
Then, (\ref{geq:Boussinesq}) can be rewritten in the form
\begin{equation}
\frac{\partial^2 \zeta}{\partial t^2} +\frac{g h}{c}
\frac{\partial^2 \zeta}{\partial x \partial t} +\frac{g h}{c}
\frac{\partial^2 }{\partial x \partial t}\left(\frac{3\zeta^2}{2h} + \frac{h^2}{3}\frac{\partial^2 \zeta}{\partial x^2}
\right) = 0. \label{geq:Boussinesq:2}
\end{equation}
Integrating it with respect to $t$ and using the boundary condition $\zeta(\pm \infty)=0$, we have
\begin{equation}
\zeta_t + \frac{g h}{c} \left[\zeta + \frac{3\zeta^2}{2h} + \frac{h^2}{3} \zeta_{xx} \right]_x = 0.
\end{equation}
Thus, we have here
\[ f(\zeta) = \frac{g h}{c} \left(\zeta + \frac{3\zeta^2}{2h} + \frac{h^2}{3} \zeta_{xx} \right). \]
According to (\ref{jump:general}), we have
\begin{eqnarray}
c &=& \frac{f(\zeta^-) - f(\zeta^+)}{\zeta^- - \zeta^+} \nonumber\\
&=& \left( \frac{g h}{c} \right)\frac{(\zeta^- - \zeta^+) + \frac{3}{2h}\left[(\zeta^-)^2 - (\zeta^+)^2\right] +\frac{h^2}{3}\left[ (\zeta^-)_{xx} - (\zeta^+)_{xx}\right] }{\zeta^- - \zeta^+} \nonumber\\
&=& \left( \frac{g h}{c} \right) \left[ 1 + \frac{3}{2h} \left( \zeta^- + \zeta^+\right) +\left(\frac{h^2}{3}\right)
\frac{ (\zeta^-)_{xx} - (\zeta^+)_{xx}}{\zeta^- - \zeta^+}
\right] \nonumber\\
&=& \left( \frac{g h}{c} \right) \left[ 1 + \frac{3}{2h} \left( \zeta^- + \zeta^+\right) +\left(\frac{h^2}{3}\right)
\frac{ (\zeta^-)_{xxx} - (\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x}
\right],
\end{eqnarray}
which provides us the Rankine-Hugoniot jump condition of the Boussinesq equation
\begin{equation}
\frac{c^2}{g h} = 1 + \frac{3}{2h} \left( \zeta^- + \zeta^+\right) +\left(\frac{h^2}{3}\right)
\frac{ (\zeta^-)_{xxx} - (\zeta^+)_{xxx}}{(\zeta^-)_x - (\zeta^+)_x}, \hspace{1.0cm} \mbox{as $x\to c\; t + x_0 $}.\label{jump:Boussinesq}
\end{equation}
For the peaked solitary wave (\ref{Boussinesq:1st}), we have at the crest that
\begin{eqnarray}
\zeta^- &=& h \left( \frac{c^2}{g h}-1\right)\mbox{sech}^2 (\alpha),\\
(\zeta^-)_x &=& \sqrt{3} \left( \frac{c^2}{g h}-1\right)^{3/2} \mbox{sech}^2(\alpha) \; \mbox{tanh}(\alpha), \\
(\zeta^-)_{xxx} &=& \frac{3\sqrt{3}}{4 h^2} \left( \frac{c^2}{g h}-1\right)^{5/2} \mbox{sech}^2(\alpha)
\left[\mbox{sinh}(3\alpha) -11\mbox{sinh}(\alpha) \right],
\end{eqnarray}
and
\[ (\zeta^+)_{x} =-(\zeta^-)_{x},\;\; (\zeta^+)_{xxx} =-(\zeta^-)_{xxx}. \]
Substituting all of these into (\ref{jump:Boussinesq}), we have
\begin{equation}
\frac{1}{4}\left( \frac{c^2}{g h}-1\right) \left[ \mbox{sech}^2(\alpha)+\mbox{sech}^2(\alpha) \; \mbox{csch}(\alpha) \; \mbox{sinh}(3\alpha)-4 \right] = 0,
\end{equation}
which is an identity for {\em arbitrary} constant $\alpha \geq 0$. Similarly, it is found that (\ref{Boussinesq:2nd}) satisfies the Rankine-Hugoniot jump condition (\ref{jump:Boussinesq}) for {\em arbitrary} constant $\beta>0$. Thus, the peaked solitary waves (\ref{Boussinesq:1st}) and (\ref{Boussinesq:2nd}) should be understood as weak solutions of the Boussinesq equation (\ref{geq:Boussinesq}).
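This identity can be confirmed numerically (an illustrative check with arbitrary sample values of $\alpha$; $\alpha=0$ is omitted only because $\mathrm{csch}\,\alpha$ diverges there, although the limit also vanishes), using $\sinh 3\alpha = \sinh\alpha\,(3 + 4\sinh^2\alpha)$:

```python
import math

# Numerical spot check (illustrative) of the identity above, obtained from
# the Rankine-Hugoniot condition of the Boussinesq equation: for alpha > 0,
#   sech^2(a) + sech^2(a) csch(a) sinh(3a) - 4 = 0,
# since sinh(3a)/sinh(a) = 3 + 4 sinh^2(a) and 1 + sinh^2(a) = cosh^2(a).
def boussinesq_identity(a):
    sech2 = 1.0 / math.cosh(a)**2
    return sech2 + sech2 * math.sinh(3 * a) / math.sinh(a) - 4.0

print(max(abs(boussinesq_identity(a)) for a in (0.2, 0.7, 1.5, 3.0)))
```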
\section{Conclusion and discussion}
In this article, we give, for the first time, closed-form solutions of the peaked solitary waves of the KdV equation \cite{KdV}, the modified KdV equation \cite{mKdV, Zabusky1967}, the BBM equation \cite{Benjamin1972}, and the Boussinesq equation \cite{Boussinesq1872}, respectively. All of them exactly satisfy the corresponding Rankine-Hugoniot jump condition for {\em arbitrary} constant $\alpha\geq 0$ or $\beta >0$. Note that the first derivative $\zeta_x$ of the elevation has a jump at the crest. Obviously, for all peaked solitary waves found in this article, $\zeta_x >0$ on the left of the crest but $\zeta_x<0$ on the right of the crest, i.e. $\zeta_x^- > \zeta_x^+$. So, these peaked solutions also satisfy the so-called entropy condition. Thus, they can be understood as weak solutions.
Note that the traditional smooth solitary waves are just special cases of the peaked solitary waves when $\alpha=0$, so these weak solutions are more general. Besides, unlike the smooth solitary waves, whose phase speed strongly depends upon the wave amplitude, the phase speed of the peaked solitary waves (when $\alpha>0$ or $\beta>0$) has {\em nothing} to do with the wave amplitude, as shown for example in Figs.~\ref{figure:KdV-1} and \ref{figure:KdV-2}. This is quite different from the traditional solitary waves. In addition, solitary waves with negative elevation, such as those shown in Fig.~\ref{figure:KdV-2}, have never been reported for these mainstream models of shallow water waves. All of this shows the novelty of these peaked solitary waves. If these peaked waves as weak solutions indeed exist and have physical meaning, then nearly all mainstream models for shallow water waves, including the KdV equation \cite{KdV}, the modified KdV equation \cite{mKdV, Zabusky1967}, the BBM equation \cite{Benjamin1972}, the Boussinesq equation \cite{Boussinesq1872} and the CH equation \cite{Camassa1993PRL}, admit peaked solitary waves, no matter whether or not they are integrable and admit breaking-wave solutions. Thus, peaked solitary waves might be a common property of shallow water wave models. As shown recently by Liao \cite{Liao-arXiv-PeakedWave}, even the exact nonlinear water wave equations admit peaked solitary waves: this might reveal the origin of the peaked solitary waves in shallow water predicted by these mainstream models.
Certainly, further investigations of these peaked solitary waves are needed, especially their stability, the interactions between multiple peaked solitary waves, and so on. Note that these peaked solitary waves have a discontinuous 1st derivative at the crest, so that their higher derivatives there tend to infinity and perturbation theory does not apply. Thus, strictly speaking, the validity of the KdV equation \cite{KdV}, the modified KdV equation \cite{mKdV, Zabusky1967}, the BBM equation \cite{Benjamin1972} and the Boussinesq equation \cite{Boussinesq1872} should be verified carefully in the sense of the weak solution (\ref{def:weak}).
Note that the peaked solitary waves have never been observed experimentally or in practice. Obviously, it is more difficult to create and maintain such peaked solitary waves than the traditional smooth ones. So, it is a challenging task to observe these peaked solitary waves in experiments.
It should be emphasized that discontinuity and/or singularity exist widely in natural phenomena, such as dam breaks in hydrodynamics, shock waves in aerodynamics, black holes described by general relativity, and so on. Indeed, discontinuity and/or singularity are rather difficult to handle by traditional methods. But they can greatly enrich and deepen our understanding of the real world, and therefore should not be evaded lightly.
\section*{Acknowledgement}
Thanks to Prof. Y.G. Wang (Dept. of Mathematics, SHJT) for introducing me to the concept of the Rankine-Hugoniot jump condition of weak solutions. Thanks to Prof. C.C. Mei (MIT) for mentioning the validity of the perturbation method for the mainstream models of shallow water waves in the case of a peaked crest. This work is partly supported by the State Key Lab of Ocean Engineering (Approval No. GKZD010056-6) and the National Natural Science Foundation of China.
\bibliographystyle{unsrt}
\section{Introduction}
We consider the one dimensional equation
\begin{equation}
X_{t}=x+\int_{0}^{t}c(u,a,X_{u-})dN(u,a)+\int_{0}^{t}g(u,X_{u})du \label{i1}
\end{equation}
where $N$ is a Poisson point measure of intensity measure $\mu $ on some
abstract measurable space $E.$ We assume that $c$ and $g$ are infinitely
differentiable with respect to $t$ and $x$, have bounded derivatives of any
order and have linear growth with respect to $x.$ Moreover we assume that
the derivatives of $c$ are bounded by a function $\overline{c}$ such that $
\int_E \overline{c}(a)d\mu(a) <\infty .$ Under these hypotheses the equation has a
unique solution and the stochastic integral with respect to the Poisson point measure is a Stieltjes
integral.
Our aim is to give sufficient conditions in order to prove that the law of $X_{t}$ is
absolutely continuous with respect to the Lebesgue measure and has a smooth
density. If $E=\mathbb{R}^{m}$ and if the measure $\mu$ admits a smooth density $h$ then
one may develop a Malliavin calculus based on the amplitudes of the jumps in
order to solve this problem. This was first done in \cite{Bi} and then in
\cite{BGJ}. But if $\mu $ is a singular measure this approach fails and one has
to use the noise given by the jump times of the Poisson point measure in
order to develop a differential calculus analogous to the Malliavin calculus. This is a much more
delicate problem and several approaches have been proposed. A first step is
to prove that the law of $X_{t}$ is absolutely continuous with respect to
the Lebesgue measure, without taking care of the regularity. A first result
in this sense was obtained by Carlen and Pardoux in \cite{CP} and was followed
by a lot of other papers (see \cite{D}, \cite{ET}, \cite{K1}, \cite{NS}).\ The second step is to
obtain the regularity of the density. Recently two results of this type have
been obtained by Ishikawa and Kunita in \cite{IK} and by Kulik in \cite{K2}. In both
cases one deals with an equation of the form
\begin{equation}
dX_{t}=g(t,X_{t})dt+f(t,X_{t-})dU_{t} \label{i2}
\end{equation}
where $U$ is a L\'{e}vy process. The above equation is multi-dimensional
(let us mention that the method presented in our paper may be used in the
multi-dimensional case as well, but then some technical problems related to
the control of the Malliavin covariance matrix have to be solved - and for
simplicity we preferred to leave out this kind of difficulties in this
paper). Ishikawa and Kunita in \cite{IK} used the finite difference approach
given by J.\ Picard in \cite{P} in order to obtain sufficient conditions for the
regularity of the density of the solution of an equation of type (\ref{i1})
(in a somewhat more particular form, close to linear equations). The result
in that paper produces a large class of examples in which we get a smooth
density even for an intensity measure which is singular with respect to the
Lebesgue measure. The second approach is due to Kulik \cite{K2}. He developed a
Malliavin type calculus based on perturbations of the time structure in
order to give sufficient conditions for the smoothness of the density. In
his paper the coefficient $f$ is equal to one so the non degeneracy comes
from the drift term $g$ only. As before, he obtains the regularity of the
density even if the intensity measure $\mu $ is singular. He also proves
that under some appropriate conditions, the density is not smooth for a
small $t$ so that one has to wait before the regularization effect of the
noise produces a regular density.
The result proved in our paper is the following. We consider the function
\begin{equation*}
\alpha (t,a,x)=g(x)-g(x+c(t,a,x))+(g\partial _{x}c+\partial _{t}c)(t,a,x).
\end{equation*}
Except the regularity and boundedness conditions on $g$ and $c$ we consider
the following non degeneracy assumption. There exists a measurable function $
\underline{\alpha }$ such that $\left\vert \alpha (t,a,x)\right\vert \geq
\underline{\alpha }(a)>0$ for every $(t,a,x)\in \mathbb{R}_{+}\times E\times \mathbb{R}.$ We
assume that there exists a sequence of subsets $E_{n}\uparrow E$ such that $
\mu (E_{n})<\infty $ and
\begin{equation*}
\underline{\lim }_{n\rightarrow \infty }\frac{1}{\mu (E_{n})}\ln
(\int_{E_{n}}\frac{1}{\underline{\alpha }(a)}d\mu (a))=\theta <\infty .
\end{equation*}
If $\theta =0$ then, for every $t>0,$ the law of $X_{t}$ has a $C^{\infty }$
density with respect to the Lebesgue measure. Suppose now that $\theta >0$
and let $q\in \mathbb{N}.$ Then, for $t>16\theta (q+2)(q+1)^{2}$ the law of $X_{t}$
has a density of class $\mathcal{C}^{q}.$ Notice that for small $t$ we are not able to
prove that a density exists and we have to wait for a sufficiently large $t$
in order to obtain a regularization effect.
In the paper of Kulik \cite{K2} one takes $c(t,a,x)=a$ so $\alpha
(t,a,x)=g(x)-g(x+c(t,a,x)).$ Then the non degeneracy condition concerns just
the drift coefficient $g.$ And in the paper of Ishikawa and Kunita the basic
example (which corresponds to the geometric L\'{e}vy process) is $
c(t,a,x)=xa(e^{a}-1)$ and $g$ constant. So $\alpha (t,a,x)=a(e^{a}-1)\sim
a^{2}$ as $a\rightarrow 0.$ The drift coefficient does not contribute to the
non degeneracy condition (which is analogous to the uniform ellipticity
condition).
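For concreteness, the computation behind the geometric L\'evy example can be spelled out (a direct check from the definition of $\alpha$; here we write $g\equiv g_{0}$ for the constant drift). With $c(t,a,x)=xa(e^{a}-1)$,
\begin{eqnarray*}
\alpha (t,a,x) &=&g_{0}-g_{0}+g_{0}\,\partial _{x}\left( xa(e^{a}-1)\right)
+\partial _{t}\left( xa(e^{a}-1)\right)  \\
&=&g_{0}\,a(e^{a}-1)\sim g_{0}\,a^{2}\quad \mbox{as }a\rightarrow 0,
\end{eqnarray*}
so the non degeneracy indeed comes from the jump coefficient alone, up to the constant factor $g_{0}$.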
The paper is organized as follows. In Section 2 we give an integration by
parts formula of Malliavin type. This is analogous to the integration by
parts formulas given in \cite {BC} and \cite{BBM}. But there are two specific points:
first of all the integration by parts formula take into account the border
terms (in the above mentioned papers the border terms cancel because one
makes use of some weights which are null on the border; but in the paper of
Kulik \cite{K2} such border terms appear as well). The second point is that we use
here a "one shot" integration by parts formula: in the classical gaussian Malliavin
calculus one employs all the noise which is
available - so one derives an infinite dimensional differential calculus
based on "all the increments" of the Brownian motion. The analogous approach
in the case of Poisson point measures is to use all the noise which comes
from the random structure (jumps). And this is the point of view of almost
all the papers on this topic. But in our paper we use just "one jump time"
which is chosen in a clever way (according to the non degeneracy
condition). In Section 3 we apply the general integration by parts formula
to stochastic equations with jumps. The basic noise is given by the jump
times.
\section{Integration by parts formula}
\subsection{Notations-derivative operators}
The abstract framework is quite similar to the one developed in \cite{BC} but we introduce here some modifications in order
to take into account the border terms appearing in the integration by parts formula. We consider a sequence of random variables $(V_{i})_{i\in \mathbb{N}^*}$ on a probability space $(\Omega ,\mathcal{F},P)$,
a sub $\sigma$-algebra $\mathcal{G}\subseteq \mathcal{F}$ and a random variable $J$, $\mathcal{G}$ measurable, with values in $\mathbb{N}$.
Our aim is to establish a differential calculus based on the variables $(V_i)$, conditionally on $\mathcal{G}$.
In order to derive an integration by parts formula, we need some assumptions on the random variables $(V_i)$.
The main hypothesis is that conditionally on $\mathcal{G},$ the law of $V_i$ admits a locally smooth density with respect to the Lebesgue measure.
\textbf{H0.} i) Conditionally on $\mathcal{G}$, the random variables $(V_i)_{1 \leq i \leq J}$ are independent and for each $i \in \{1, \ldots, J\}$ the law of $V_i$
is absolutely continuous with respect to the Lebesgue measure. We note $p_i$ the conditional density.
\hspace{1cm} ii) For all $i \in \{1, \ldots, J\}$, there exist some $\mathcal{G}$ measurable random variables $a_i$ and $b_i$ such that $-\infty<a_i<b_i<+ \infty$, $(a_i,b_i)\subset \{p_i>0 \}$. We also assume that $p_{i}$ admits a continuous bounded derivative on $(a_i,b_i)$ and that $\ln p_i$ is bounded on $(a_i, b_i)$.
We define now the class of functions on which this differential calculus
will apply. We consider in this paper functions $f :\Omega \times \mathbb{R}^{\mathbb{N}^*}\rightarrow \mathbb{R}$ which can be written as
\begin{equation}
f(\omega ,v)=\sum_{m=1}^{\infty }f^{m}(\omega
,v_{1},...,v_{m})1_{\{J(\omega )=m\}} \label{defS}
\end{equation}
where $f^{m}:\Omega \times \mathbb{R}^{m}\rightarrow \mathbb{R}$ are $\mathcal{G\times B}
(\mathbb{R}^{m})\mathcal{-}$measurable functions.
In the following, we fix $L \in \mathbb{N}$ and we will perform integration by parts $L$ times. But we will use another set of variables for
each integration by parts. So for $1 \leq l \leq L$, we fix a set of indices $I_l \subset \{1, \ldots, J\}$ such that if $l \neq l'$, $I_l \cap I_{l'}=\emptyset$. In order to do $l$ integration by parts, we will use successively the variables $V_i, i \in I_l$ then the variables $V_i, i \in I_{l-1}$ and end with $V_i, i \in I_1$. Moreover, given $l$ we fix a partition $(\Lambda_{l,i})_{i \in I_l}$ of $\Omega$ such that the sets $\Lambda_{l,i} \in \mathcal{G}, i \in I_l$. If $\omega \in \Lambda_{l,i}$, we will use only the variable $V_i$ in our integration by parts.
With these notations, we define our basic spaces. We consider in this paper random variables $F=f(\omega, V)$ where $V=(V_i)_i$ and $f$ is given by (\ref{defS}). To simplify the notation we write $F=f^J(\omega, V_1, \ldots ,V_J)$ so that conditionally on $\mathcal{G}$ we have $J=m$
and $F=f^m(\omega,V_1, \ldots, V_m)$.
We denote by $\mathcal{S}^0$ the space of random variables $F=f^J(\omega, V_1, \ldots, V_J)$ where $f^J$ is a continuous function on
$O_J=\prod_{i=1}^J (a_i,b_i)$ such that there exists a $\mathcal{G}$ measurable random variable $C$ satisfying
\begin{equation}
\sup_{v \in O_J} \vert f^J( \omega, v) \vert \leq C(\omega) < + \infty \quad \mbox{a.e.} \label{hypB}
\end{equation}
We also assume that $f^J$ has left limits (respectively right limits) in $a_{i}$
(respectively in $b_{i}$). Let us be more precise.
With the notations $
V_{(i)}=(V_{1},...,V_{i-1},V_{i+1},...,V_{J})$ and $(V_{(i)}, v_i)=(V_1, \ldots, V_{i-1}, v_i, V_{i+1}, \ldots, V_J)$ for $v_i \in (a_i, b_i)$ our assumption
is that the following limits exist and are finite:
\begin{equation}
\lim_{\varepsilon \rightarrow 0}f^{J}(\omega ,V_{(i)},a_i + \varepsilon):=F(a_{i}^+),\quad
\lim_{\varepsilon \rightarrow 0}f^{J}(\omega ,V_{(i)},b_i - \varepsilon):=F(b_{i}^-). \label{lim}
\end{equation}
Now for $k \geq 1$,
$\mathcal{S}^k(I_l)$ denotes the space of random variables $F=f^J(\omega, V_1, \ldots, V_J) \in \mathcal{S}^0$, such that $f^J$ admits partial derivatives up to order $k$ with respect to the variables $v_i, i \in I_l$ and these partial derivatives belong to $\mathcal{S}^0$.
We are now able to define
our differential operators.
$\square $ \textbf{The derivative operators.} We define
$D_l:\mathcal{S}^1(I_l)\rightarrow
\mathcal{S}^0(I_l)$ by
\begin{equation*}
D_l F:= 1_{O_J}(V)\sum_{i \in I_l} 1_{\Lambda_{l,i}}(\omega) \partial _{v_i}f(\omega ,V),
\end{equation*}
where $O_J= \prod_{i=1}^J (a_i,b_i)$.
$\square $ \textbf{The divergence operators.} We set
\begin{equation}
p_{(l)}=\sum_{i \in I_l} 1_{\Lambda_{l,i}}p_i , \label{pl}
\end{equation}
and we define
$\delta_l:\mathcal{S}^1(I_l)\rightarrow
\mathcal{S}^0(I_l)$
by
\begin{eqnarray*}
\delta_l(F) = D_lF+F D_l \ln p_{(l)} = 1_{O_J}(V)\sum_{i \in I_l} 1_{\Lambda_{l, i}}(\partial _{v_i}F+F \partial _{v_i} \ln p_i)
\end{eqnarray*}
We can easily see that if $F,U \in \mathcal{S}^1(I_l)$ we have
\begin{equation}
\delta_l(FU)=F\delta_l(U)+UD_lF . \label{1.4}
\end{equation}
$\square $ \textbf{The border terms.} Let $U \in \mathcal{S}^0(I_l)$. We define (using the notation (\ref{lim}))
\begin{eqnarray*}
[U]_l&=&\sum_{i \in I_l}1_{\Lambda_{l,i}} 1_{O_{J,i}}(V_{(i)})((U p_i)(b_i^-) -( U p_i)(a_i^+))
\end{eqnarray*}
with $O_{J,i}=\prod_{1 \leq j \leq J, j \neq i} (a_j,b_j)$.
\subsection{Duality and basic integration by parts formula}
In our framework the duality between $\delta_l $ and $D_l$ is given by the
following proposition. In the sequel, we denote by
$E_{\mathcal{G}}$ the conditional expectation with respect to the sigma-algebra $\mathcal{G}$.
\begin{prop}\label{duality}
Assuming H0 then $\forall F, U\in
\mathcal{S}^1(I_l)$ we have
\begin{equation}
E_{\mathcal{G}}(UD_l F)=-E_{\mathcal{G}
}(F\delta_l (U)) + E_{\mathcal{G}} [FU]_l. \label{1.2}
\end{equation}
\end{prop}
For simplicity, we assume in this proposition that the random variables $F$ and $U$ take values in $\mathbb{R}$, but such a result can
easily be extended to $\mathbb{R}^d$ value random variables.
\begin{pf}
We have $E_{\mathcal{G}}(U D_lF)= \sum_{i \in I_l} 1_{\Lambda_{l,i}}E_{\mathcal{G}} 1_{O_J}(V)(\partial _{v_i}f^J(\omega ,V) u^J(\omega,V))$.
From H0 we obtain
\begin{equation*}
E_{\mathcal{G}} 1_{O_J}(V)( \partial _{v_i}f^J(\omega ,V) u^J(\omega,V))=E_{\mathcal{G}} 1_{O_{J,i}}(V_{(i)}) \int_{a_i}^{b_i} \partial
_{v_i}(f ^J) u^J p_{i}(v_i)dv_{i} .
\end{equation*}
By using the classical integration by parts formula, we have
$$
\int_{a_i}^{b_i} \partial
_{v_i}(f^J) u^J p_{i}(v_i)dv_i=[f^J u^J p_i]_{a_i}^{b_i}-\int_{a_i}^{b_i}f^J\partial_{v_i}(u^J p_i)dv_i.
$$
Observing that $\partial_{v_i}(u^J p_i)=( \partial_{v_i}(u^J ) +u^J \partial_{v_i}( \ln p_i ))p_i$,
we have
$$
E_{\mathcal{G}}(1_{O_J} (V)\partial _{v_i}f^J u^J)=E_{\mathcal{G}}1_{O_{J,i}}(V_{(i)})[f^J u^J p_i]_{a_i}^{b_i}-E_{\mathcal{G}} 1_{O_J}(V)F (\partial_{v_i}(U) +U \partial_{v_i}( \ln p_i ))
$$
and the proposition is proved.
\end{pf}
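In one dimension, with $J=1$, $I_l=\{1\}$ and $\Lambda_{l,1}=\Omega$, formula (\ref{1.2}) is just the classical integration by parts on $(a_1,b_1)$ weighted by the density $p_1$, and it can be checked numerically. The sketch below is our illustration only; the choices $p(v)=2v$ on $(0,1)$, $f(v)=\sin v$ and $u(v)=v^2$ are arbitrary:

```python
import numpy as np

def trap(y, x):
    # basic trapezoidal rule (avoids version-specific numpy helpers)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# One-variable instance of E[U DF] = -E[F delta(U)] + E[F U]_border,
# with V of density p(v) = 2v on (a, b) = (0, 1), so (ln p)'(v) = 1/v.
a, b = 0.0, 1.0
v = np.linspace(a, b, 200001)[1:]        # drop v = 0, where (ln p)' blows up

p = 2.0 * v                              # density of V
F, dF = np.sin(v), np.cos(v)             # F = f(V), DF = f'(V)
U, dU = v**2, 2.0 * v                    # U = u(V), DU = u'(V)

lhs = trap(U * dF * p, v)                # E[U DF]
delta_U = dU + U * (1.0 / v)             # delta(U) = DU + U (ln p)'
border = np.sin(b) * b**2 * (2.0 * b)    # (F U p)(b^-); here (F U p)(a^+) = 0
rhs = -trap(F * delta_U * p, v) + border

print(lhs, rhs)                          # the two sides agree up to quadrature error
```

The border term does not vanish here precisely because $p$ does not vanish at $b=1$, which is the point of keeping it in (\ref{1.2}).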
We can now state a first integration by parts formula.
\begin{prop} \label{IPP}
Let H0 hold true and let $F \in
\mathcal{S}^2(I_l)$, $G\in \mathcal{S}^1(I_l)$ and $\Phi :\mathbb{R}\rightarrow \mathbb{R}$ be a bounded function with
bounded derivative. We assume that $F=f^J(\omega,V)$ satisfies the condition
\begin{equation}
\min_{ i \in I_l} \inf_{v \in O_J} \vert \partial_{v_i} f^J(\omega,v) \vert \geq \gamma(\omega) , \label{NDl}
\end{equation}
where $\gamma$ is $\mathcal{G}$ measurable and we define on $\{ \gamma>0 \}$
$$
(D_lF)^{-1}=1_{O_J}(V) \sum_{i \in I_l }1_{\Lambda_{l,i}} \frac{1}{\partial_{v_i} f(\omega,V)},
$$
then
\begin{equation}
1_{\{ \gamma>0 \}} E_{\mathcal{G}}(\Phi^{(1)} (F)G)=-1_{\{ \gamma>0 \}}E_{\mathcal{G}}\left(\Phi
(F)H_l(F,G)\right) + 1_{\{ \gamma>0 \}}E_{\mathcal{G}}[\Phi(F)G (D_l F)^{-1}]_l \label{IPP1}
\end{equation}
with
\begin{eqnarray}
H_l(F,G) &=&\delta_l (G (D_lF)^{-1})
=G \delta_l((D_lF)^{-1})+D_lG (D_lF)^{-1}. \label{weight}
\end{eqnarray}
\end{prop}
\begin{pf} We observe that
$$
D_l\Phi (F)=1_{O_J}(V)\sum_{i \in I_l}1_{\Lambda_{l,i}} \partial_{v_i} \Phi(F)=1_{O_J}(V) \Phi^{(1)} (F)\sum_{i \in I_l}1_{\Lambda_{l,i}} \partial_{v_i} F,
$$
so that
\begin{eqnarray*}
D_l\Phi (F).D_lF&=&\Phi^{(1)}(F) (D_l F)^2,
\end{eqnarray*}
and then $1_{\{ \gamma>0 \}}\Phi^{(1)} (F) = 1_{\{ \gamma>0 \}} D_l \Phi(F). (D_l F)^{-1}$. Now since $F \in \mathcal{S}^2(I_l)$, we deduce that $(D_l F)^{-1} \in \mathcal{S}^1(I_l)$ on $\{ \gamma>0 \}$ and applying Proposition \ref{duality} with $U=G (D_l F)^{-1}$ we obtain the result.
\end{pf}
\subsection{Iterations of the integration by parts formula}
We will iterate the integration by parts formula given in Proposition \ref{IPP}. We recall that if we iterate $l$ times the integration by parts formula, we will integrate by parts successively with respect to the variables $(V_i)_{i \in I_k}$ for $1 \leq k \leq l$. In order to give some estimates of the weights appearing in these formulas
we introduce the following norm on $\mathcal{S}^l(\cup_{k=1}^l I_k)$, for $1 \leq l \leq L$.
\begin{equation}
\vert F \vert_{l}= \vert F \vert_{\infty} + \sum_{k=1}^l \sum_{1 \leq l_1<\ldots<l_k \leq l} \vert D_{l_1} \ldots D_{l_k} F \vert_{\infty}, \label{norml}
\end{equation}
where $\vert . \vert_{\infty}$ is defined on $\mathcal{S}^0$ by
$$
\vert F \vert_{\infty} = \sup_{v \in O_J} \vert f^J(\omega,v) \vert.
$$
For $l=0$, we set $\vert F \vert_0=\vert F \vert_{\infty} $.
We remark that we have for $1 \leq l_1<\ldots<l_k \leq l$
$$
\vert D_{l_1} \ldots D_{l_k} F \vert_{\infty} = \sum_{i_1 \in I_{l_1}, \ldots, i_k \in I_{l_k} } ( \prod_{j=1}^k1_{\Lambda_{l_j,i_j}}) \vert \partial_{v_{i_1}} \ldots \partial_{v_{i_k}} F \vert_{\infty},
$$
and since, for each $l$, the family $(\Lambda_{l,i})_{i \in I_l}$ is a partition of $\Omega$, for fixed $\omega$ the preceding sum has only one nonzero term.
This family of norms satisfies for $F \in \mathcal{S}^{l+1}(\cup_{k=1}^{l+1} I_k)$ :
\begin{equation}
\vert F \vert_{l+1}=\vert D_{l+1} F \vert_l +\vert F \vert_l \quad \mbox{ so} \quad \vert D_{l+1} F \vert_l \leq \vert F \vert_{l+1}. \label{P1N}
\end{equation}
Moreover it is easy to check that if $F, G \in \mathcal{S}^l(\cup_{k=1}^l I_k)$
\begin{equation}
\vert FG\vert_l \leq C_l \vert F \vert_l \vert G \vert_l, \label{prod}
\end{equation}
where $C_l$ is a constant depending on $l$. Finally for any function $\phi \in \mathcal{C}^l(\mathbb{R}, \mathbb{R})$ we have
\begin{equation}
\vert \phi(F) \vert_l \leq C_l \sum_{k=0}^l \vert \phi^{(k)}(F) \vert_{\infty} \vert F \vert_l^k \leq C_l \max_{0 \leq k \leq l}
\vert \phi^{(k)}(F) \vert_{\infty} (1+ \vert F \vert_l^l). \label{CR}
\end{equation}
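For instance, for $l=1$ the bound (\ref{CR}) follows directly from the chain rule $D_{1}\phi (F)=\phi ^{(1)}(F)D_{1}F$ together with $\vert D_{1}F\vert _{\infty }\leq \vert F\vert _{1}$:
\begin{equation*}
\vert \phi (F)\vert _{1}=\vert \phi (F)\vert _{\infty }+\vert \phi
^{(1)}(F)D_{1}F\vert _{\infty }\leq \vert \phi (F)\vert _{\infty }+\vert
\phi ^{(1)}(F)\vert _{\infty }\vert F\vert _{1},
\end{equation*}
and the general case is obtained by iterating this argument.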
With these notations we can iterate the integration by parts formula.
\begin{theo} \label{IPPE}
Let H0 hold true and let $\Phi: \mathbb{R} \mapsto \mathbb{R}$ be a bounded function with bounded derivatives up to order $L$. Let $F =f^J(\omega, V) \in \mathcal{S}^1(\cup_{l=1}^L I_l)$ be such that
\begin{equation}
\inf_{i \in \{1, \ldots, J \}} \inf_{v \in O_J} \vert \partial_{v_i} f^J(\omega,v) \vert \geq \gamma( \omega), \quad \gamma \in [0,1] \quad \mathcal{G}\ \mbox{measurable} \label{ND}
\end{equation}
then we have for $ l \in \{1, \ldots, L\}$, $G \in \mathcal{S}^l(\cup_{k=1}^l I_k)$ and $F \in \mathcal{S}^{l+1}(\cup_{k=1}^l I_k)$
\begin{equation}
1_{\{ \gamma>0 \}}\vert E_{\mathcal{G}} \Phi^{(l)}(F)G \vert \leq C_l \vert \vert \Phi \vert \vert_{\infty}1_{\{ \gamma>0 \}}E_{\mathcal{G}} \left( \vert G \vert_l(1+ \vert p \vert_0 )^l
\Pi_l(F) \right) \label{IPPl}
\end{equation}
where $C_l$ is a constant depending on $l$, $\vert \vert \Phi \vert \vert_{\infty}= \sup_x \vert \Phi(x) \vert$, $\vert p\vert_0=\max_{l=1, \ldots,L} \vert p_{(l)} \vert_{\infty}$ and $\Pi_l(F)$ is defined on $\{ \gamma>0 \}$ by
\begin{equation}
\Pi_l(F)=\prod_{k=1}^l
(1+ \vert( D_k F)^{-1} \vert_{k-1})(1+ \vert \delta_k((D_kF)^{-1} )\vert_{k-1}). \label{produit}
\end{equation}
Moreover we have the bound
\begin{equation}
\Pi_l(F) \leq C_l \frac{(1+ \vert \ln p \vert_1)^l}{\gamma^{l(l+2)} }\prod_{k=1}^l(1 + \vert F \vert_{k}^{k-1} + \vert D_k F \vert_{k}^{k-1})^2, \label{BPil}
\end{equation}
where $\vert \ln p\vert_1=\max_{i=1, \ldots,J} \vert (\ln p_i)' \vert_{\infty}$.
\end{theo}
\begin{pf}
We proceed by induction.
For $l=1$, we have from Proposition \ref{IPP} since $G \in \mathcal{S}^1( I_1)$ and $F \in \mathcal{S}^{2}( I_1)$
$$
1_{\{ \gamma>0 \}}E_{\mathcal{G}}(\Phi^{(1)} (F)G)=-1_{\{ \gamma>0 \}}E_{\mathcal{G}}\left(\Phi
(F)H_1(F,G)\right) + 1_{\{ \gamma>0 \}} E_{\mathcal{G}}[\Phi(F)G (D_1 F)^{-1}]_1 .
$$
We have on $\{ \gamma>0 \}$
$$
\begin{array}{lll}
\vert H_1(F,G) \vert & \leq & \vert G \vert \vert \delta_1( (D_1F)^{-1}) \vert + \vert D_1 G \vert \vert (D_1F)^{-1} \vert, \\
& \leq & ( \vert G \vert_{\infty} +\vert D_1 G \vert _{\infty})(1+ \vert (D_1 F)^{-1} \vert_{\infty})(1+ \vert \delta_1((D_1F)^{-1} )\vert_{\infty}) , \\
& = & \vert G \vert_1(1+ \vert (D_1 F)^{-1} \vert_{0})(1+ \vert \delta_1((D_1F)^{-1} )\vert_{0}).
\end{array}
$$
Turning to the border term $[\Phi(F)G (D_1 F)^{-1}]_1$, we check that
$$
\begin{array}{lll}
\vert [\Phi(F)G (D_1 F)^{-1}]_1 \vert & \leq & 2 \vert \vert \Phi \vert \vert_{\infty} \vert G \vert_{\infty} \sum_{i \in I_1} 1_{ \Lambda_{1,i}} \vert \frac{1}{ \partial_{v_i} F} \vert_{\infty} \sum_{i \in I_1} 1_{ \Lambda_{1,i}} \vert p_i \vert_{\infty}, \\
& \leq & 2 \vert \vert \Phi \vert \vert_{\infty} \vert G \vert_{0} \vert (D_1F)^{-1} \vert_0 \vert p \vert_0.
\end{array}
$$
This proves the result for $l=1$.
Now assume that Theorem \ref{IPPE} is true for $l \geq 1$ and let us prove it for $l+1$. By assumption we have
$G \in \mathcal{S}^{l+1}(\cup_{k=1}^{l +1}I_k) \subset \mathcal{S}^{1}(I_{l+1})$ and
$F \in \mathcal{S}^{l+2}(\cup_{k=1}^{l+1} I_k) \subset \mathcal{S}^{2}(I_{l+1})$.
Consequently we can apply Proposition \ref{IPP} on $I_{l+1}$. This gives
\begin{equation}
1_{\{ \gamma>0 \}}E_{\mathcal{G}}(\Phi^{(l+1)} (F)G)=-1_{\{ \gamma>0 \}}E_{\mathcal{G}}\left(\Phi^{(l)}
(F)H_{l+1}(F,G)\right) + 1_{\{ \gamma>0 \}}E_{\mathcal{G}}[\Phi^{(l)}(F)G (D_{l+1} F)^{-1}]_{l+1}, \label{recl}
\end{equation}
with
$$
H_{l+1}(F,G)= G \delta_{l+1}((D_{l+1}F)^{-1})+D_{l+1}G (D_{l+1}F)^{-1},
$$
$$
[\Phi^{(l)}(F)G (D_{l+1} F)^{-1}]_{l+1} = \sum_{i \in I_{l+1}}1_{\Lambda_{l+1,i}} 1_{O_{J,i}}(V_{(i)})\left(( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} p_i)(b_i^-) -(\Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} p_i)(a_i^+)\right) .
$$
We easily see that $H_{l+1}(F,G) \in \mathcal{S}^l(\cup_{k=1}^l I_k)$ and so using the induction hypothesis we obtain
$$
1_{\{ \gamma>0 \}}\vert E_{\mathcal{G}}\Phi^{(l)}
(F)H_{l+1}(F,G) \vert \leq C_l \vert \vert \Phi \vert \vert_{\infty}1_{\{ \gamma>0 \}}E_{\mathcal{G}} \vert H_{l+1}(F,G) \vert_l(1 + \vert p \vert_0)^l \Pi_l(F),
$$
and we just have to bound $\vert H_{l+1}(F,G) \vert_l $ on $\{ \gamma>0 \}$. But using successively (\ref{prod}) and (\ref{P1N})
$$
\begin{array}{lll}
\vert H_{l+1}(F,G) \vert_l & \leq & C_l ( \vert G \vert_l \vert \delta_{l+1}((D_{l+1}F)^{-1}) \vert_l + \vert D_{l+1}G \vert _l \vert (D_{l+1}F)^{-1}) \vert_l, \\
& \leq & C_l \vert G \vert_{l+1}(1+ \vert (D_{l+1}F)^{-1}) \vert_l) (1+\vert \delta_{l+1}((D_{l+1}F)^{-1}) \vert_l ).
\end{array}
$$
This finally gives
\begin{equation}
\vert E_{\mathcal{G}}\Phi^{(l)}
(F)H_{l+1}(F,G) \vert \leq C_l \vert \vert \Phi \vert \vert_{\infty}E_{\mathcal{G}} \vert G\vert_{l +1} (1+ \vert p \vert_0)^l\Pi_{l+1}(F). \label{Hl}
\end{equation}
So we just have to prove a similar inequality for $ E_{\mathcal{G}}[\Phi^{(l)}(F)G (D_{l+1} F)^{-1}]_{l+1}$. This reduces to consider
\begin{equation}
E_{\mathcal{G}}\sum_{i \in I_{l+1}}1_{\Lambda_{l+1,i}} 1_{O_{J,i}}(V_{(i)})( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} p_i)(b_i^-)
= \sum_{i \in I_{l+1}}1_{\Lambda_{l+1,i}} p_i(b_i^-) E_{\mathcal{G}} 1_{O_{J,i}}(V_{(i)})( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} )(b_i^-) \label{reduc}
\end{equation}
since the other term can be treated similarly. Consequently we just have to bound
$$
\vert E_{\mathcal{G}} 1_{O_{J,i}}(V_{(i)})( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} )(b_i^-) \vert.
$$
Since all variables satisfy (\ref{hypB}), we obtain from the dominated convergence theorem, using the notation (\ref{lim}),
$$
E_{\mathcal{G}}1_{O_{J,i}}(V_{(i)})( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} )(b_i^-) = \lim_{\varepsilon \rightarrow 0 } E_{\mathcal{G}}1_{O_{J,i}}(V_{(i)})\Phi^{(l)}(f^J(V_{(i)},b_i- \varepsilon))(g^J \frac{1}{\partial_{v_i} f^J} )(V_{(i)},b_i-\varepsilon).
$$
To shorten the notation we write simply $F(b_i - \varepsilon)=f^J(V_{(i)},b_i- \varepsilon)$.
Now one can prove that if $U \in \mathcal{S}^{l'}(\cup_{k=1}^{l+1}I_k)$ for $1 \leq l' \leq l$ then $\forall i \in I_{l+1}$, $U(b_i- \varepsilon) \in
\mathcal{S}^{l'}(\cup_{k=1}^{l}I_k)$ and $\vert U( b_i- \varepsilon)\vert_{l'} \leq \vert U \vert_{l'}$.
We deduce then that $\forall i \in I_{l+1}$
$F(b_i- \varepsilon) \in \mathcal{S}^{l+1}(\cup_{k=1}^l I_k)$ and that $(G \frac{1}{\partial_{v_i} F} )(b_i- \varepsilon) \in \mathcal{S}^{l}(\cup_{k=1}^l I_k)$ and from induction hypothesis
$$
\begin{array}{ll}
\vert E_{\mathcal{G}} \Phi^{(l)}(F(b_i- \varepsilon))1_{O_{J,i}}(V_{(i)})(G \frac{1}{\partial_{v_i} F} )(b_i-\varepsilon)\vert &\leq C_l \vert \vert \Phi \vert \vert_{\infty} E_{\mathcal{G}}
\vert G (b_i- \varepsilon) \vert_l \vert \frac{1}{\partial_{v_i} F(b_i- \varepsilon)} \vert_l (1 + \vert p \vert_0)^l \Pi_l(F(b_i- \varepsilon)), \\
& \leq C_l \vert \vert \Phi \vert \vert_{\infty}
E_{\mathcal{G}} \vert G \vert_l \vert \frac{1}{\partial_{v_i} F} \vert_l (1 + \vert p \vert_0)^l \Pi_l(F).
\end{array}
$$
Putting this in (\ref{reduc}) we obtain
\begin{eqnarray}
\vert E_{\mathcal{G}}\sum_{i \in I_{l+1}}1_{\Lambda_{l+1,i}} 1_{O_{J,i}}( \Phi^{(l)}(F)G \frac{1}{\partial_{v_i} F} p_i)(b_i^-) \vert &\leq& C_l
\vert \vert \Phi \vert \vert_{\infty}
E_{\mathcal{G}}\vert G\vert_{l } (1+ \vert p \vert_0)^l\Pi_{l} (F)\sum_{i \in I_{l+1}}1_{\Lambda_{l+1,i}} \vert p_i \vert_{\infty}\vert \frac{1}{\partial_{v_i} F}\vert_l , \nonumber \\
& \leq &C_l
\vert \vert \Phi \vert \vert_{\infty}
E_{\mathcal{G}}\vert G\vert_{l } (1+ \vert p \vert_0)^{l+1}\Pi_{l}(F) \vert (D_{l+1}F)^{-1} \vert_l. \label{bord}
\end{eqnarray}
Finally plugging (\ref{Hl}) and (\ref{bord}) in (\ref{recl})
$$
\begin{array}{lll}
\vert E_{\mathcal{G}}(\Phi^{(l+1)} (F)G) \vert & \leq & C_l
\vert \vert \Phi \vert \vert_{\infty} \left( E_{\mathcal{G}} \vert G\vert_{l +1} (1+ \vert p \vert_0)^l\Pi_{l+1}(F)
+ E_{\mathcal{G}}\vert G\vert_{l } (1+ \vert p \vert_0)^{l+1}\Pi_{l}(F) \vert (D_{l+1}F)^{-1} \vert_l \right), \\
& \leq & C_l
\vert \vert \Phi \vert \vert_{\infty} E_{\mathcal{G}} \vert G \vert_{l+1}(1+ \vert p \vert_0 )^{l+1}
\Pi_{l+1}(F),
\end{array}
$$
and inequality (\ref{IPPl}) is proved for $l+1$. This achieves the first part of the proof of Theorem \ref{IPPE}.
It remains to prove (\ref{BPil}). We assume that $\omega \in \{ \gamma>0 \}$.
Let $1 \leq k \leq l$.
We first notice that combining (\ref{P1N}) and (\ref{prod}) we obtain
\begin{eqnarray*}
\left\vert \delta _{k}(F)\right\vert _{k-1} &\leq &\left\vert F\right\vert
_{k}(1+\left\vert D_{k}\ln p_{(k)}\right\vert _{\infty}) ,
\end{eqnarray*}
since $p_{(k)}$ only depends on the variables $V_i, i \in I_{k}$.
So we deduce the bound
\begin{eqnarray}
\left\vert \delta _{k}((D_k F)^{-1})\right\vert _{k-1} &\leq &\left\vert (D_k F)^{-1}\right\vert
_{k}(1+\left\vert \ln p\right\vert _{1}) \label{deltan}.
\end{eqnarray}
Now we have
\begin{equation*}
\vert (D_k F)^{-1} \vert_{k-1}=\sum_{i \in I_{k}} 1_{\Lambda_{k,i} }\vert \frac{1}{\partial_{v_i} F} \vert_{k-1}.
\end{equation*}
From (\ref{CR}) with $\phi(x)=1/x$
$$
\vert \frac{1}{\partial_{v_i} F} \vert_{k-1}\leq C_k \frac{(1 + \vert F \vert_k^{k-1})}{\gamma^{k}},
$$
and consequently
\begin{equation}
\vert (D_k F)^{-1} \vert_{k-1}\leq C_k \frac{(1 + \vert F \vert_k^{k-1})}{\gamma^{k}}. \label{DkF}
\end{equation}
Moreover we have using successively (\ref{P1N}) and (\ref{DkF})
$$
\begin{array}{lll}
\vert (D_k F)^{-1} \vert_k & = & \vert (D_k F)^{-1} \vert_{k-1} +\vert D_k (D_k F)^{-1} \vert_{k-1} , \\
& \leq & C_k \left( \frac{(1 + \vert F \vert_k^{k-1})}{\gamma^{k}} + \frac{(1 + \vert D_k F \vert_k^{k-1})}{\gamma^{k+1}} \right), \\
& \leq & C_k \frac{(1 +\vert F \vert_k^{k-1} +\vert D_k F \vert_k^{k-1})}{\gamma^{k+1}}.
\end{array}
$$
Putting this in (\ref{deltan})
\begin{equation}
\left\vert \delta _{k}((D_k F)^{-1})\right\vert _{k-1} \leq C_k \frac{(1 +\vert F \vert_k^{k-1} +\vert D_k F \vert_k^{k-1})}{\gamma^{k+1}}(1+\left\vert \ln p\right\vert _{1}) . \label{DDkF}
\end{equation}
Finally from (\ref{DkF}) and (\ref{DDkF}), we deduce
$$
\Pi_l(F) \leq C_l \frac{(1+\left\vert \ln p\right\vert _{1})^l }{\gamma^{l(l+2)} } \prod_{k=1}^l (1 +\vert F \vert_k^{k-1} +\vert D_k F \vert_k^{k-1})^2,
$$
and Theorem \ref{IPPE} is proved.
\end{pf}
\section{Stochastic equations with jumps}
\subsection{Notations and hypotheses}
We consider a Poisson point process $p$ with measurable state space $(E , \mathcal{B}(E))$. We refer to Ikeda and Watanabe \cite{IK}
for the notation. We denote by $N$ the counting measure associated to $p$ so $N_t(A):= N((0,t) \times A)=\# \{s<t; p_s \in A\}$.
The intensity measure is $dt \times d\mu(a)$ where $\mu$ is a sigma-finite measure on $(E , \mathcal{B}(E))$ and we fix a
non-decreasing sequence
$(E_n)$ of subsets of $E$ such that $E=\cup_n E_n$, $\mu(E_n) < \infty$ and $\mu(E_{n+1}) \leq \mu(E_n) + K$ for all $n$ and for a constant $K>0$.
We consider the one dimensional stochastic equation
\begin{equation}
X_t=x + \int_0^t \int_E c(s,a, X_{s^-})dN(s,a) + \int_0^tg(s, X_s) ds. \label{eq}
\end{equation}
Our aim is to give sufficient conditions on the coefficients $c$ and $g$ in order to prove that the law of $X_t$ is absolutely continuous
with respect to the Lebesgue measure and has a smooth density. We make the following assumptions on the coefficients $c$ and $g$.
{\bf H1.} We assume that the functions $c$ and $g$ are infinitely differentiable with respect to the variables $(t,x)$ and that there exist
a bounded function $\overline{c}$ and a constant $\overline{g}$, such that
$$
\forall (t,a,x) \quad \vert c(t,a,x) \vert \leq \overline{c}(a)(1+ \vert x \vert), \quad \sup_{l+l' \geq 1}\vert \partial^{l'}_t \partial^l_x c(t,a,x)\vert \leq \overline{c}(a);
$$
$$
\forall (t,x)\quad \vert g(t,x) \vert \leq \overline{g}(1+ \vert x \vert), \quad \sup_{ l+l' \geq 1}\vert \partial_t^{l'}\partial^l_x g(t,x)\vert \leq \overline{g};
$$
We assume moreover that $\int_E \overline{c}(a) d \mu(a) < \infty$.
Under H1, equation (\ref{eq}) admits a unique solution.
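To fix ideas, when $\mu$ is finite the solution of (\ref{eq}) is driven by finitely many jumps on $[0,T]$ and can be simulated exactly between them. The sketch below is illustrative only and not taken from the paper: we assume $E=(0,1]$, jump rate $\lambda$ with uniform marks, $c(s,a,x)=a$ and $g(s,x)=-x$, so that between jumps $X$ follows the explicit flow $X_t=X_s e^{-(t-s)}$:

```python
import math
import random

def simulate(x0, T, lam, rng):
    """One path of dX = -X dt + a dN on [0, T]: jumps arrive with rate lam,
    marks a are uniform on (0, 1); between jumps X(t) = X(s) e^{-(t - s)}."""
    t, x = 0.0, x0
    while True:
        dt = rng.expovariate(lam) if lam > 0.0 else float("inf")
        if t + dt >= T:
            return x * math.exp(-(T - t))   # flow to the horizon, no more jumps
        t += dt
        x = x * math.exp(-dt)               # deterministic flow up to the jump time
        x += rng.uniform(0.0, 1.0)          # jump: X += c(t, a, X-) = a

rng = random.Random(0)
print(simulate(2.0, 1.0, 0.0, rng))         # no jumps: exactly 2 * exp(-1)
print(simulate(0.0, 5.0, 3.0, rng))         # a path with jumps
```

With $\lambda=0$ the scheme is exact, which gives a cheap sanity check of the inter-jump flow.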
{ \bf H2.} We assume that there exists a measurable function $\hat{c}: E \mapsto \mathbb{R}_+$ such that $\int_E \hat{c}(a) d \mu(a) < \infty$ and
$$
\forall (t,a,x)\quad \vert \partial_x c(t,a,x)(1+\partial_x c(t,a,x))^{-1} \vert \leq \hat{c}(a).
$$
To simplify the notation we take $\hat{c}=\overline{c}$.
Under H2, the tangent flow associated to (\ref{eq}) is invertible. Finally, we give a non-degeneracy condition which will imply (\ref{ND}). We denote by $\alpha$ the function
\begin{equation}
\alpha(t,a,x)= g(t,x)-g(t,x+c(t,a,x))+(g \partial_xc+ \partial_t c)(t,a,x). \label{alpha}
\end{equation}
{\bf H3.} We assume that there exists a measurable function $\underline{\alpha}: E \mapsto \mathbb{R}_+$ such that
$$
\forall (t,a,x) \quad \vert \alpha(t,a,x) \vert \geq \underline{\alpha}(a)>0,
$$
$$
\forall n \int_{E_n} \frac{1}{ \underline{\alpha}(a) }d \mu(a) < \infty \quad \mbox{ and} \quad\liminf_n \frac{1}{\mu(E_n)} \ln \left( \int_{E_n} \frac{1}{ \underline{\alpha}(a) }d \mu(a)\right) =\theta < \infty.
$$
We give in the following some examples where $E=(0,1]$ and $\underline{\alpha}(a)=a$.
\subsection{Main results and examples}
Following the methodology introduced in Bally and Cl\'ement \cite{BC}, our aim is to bound the Fourier transform of $X_t$, $\hat{p}_{X_t}( \xi )$, in terms of $1/ \vert \xi \vert$, recalling that if $\int_{\mathbb{R} }\vert \xi \vert^q \vert \hat{p}_{X_t}( \xi) \vert d \xi < \infty$, for $q>0$, then
the law of $X_t$ is absolutely continuous and its density is $\mathcal{C}^{[q]}$. This is done in the next proposition. The proof of this proposition relies on an approximation of $X_t$ which will be given in the next section.
\begin{prop} \label{fourier}
Assuming H1, H2 and H3 we have for all $n,L \in \mathbb{N}^*$
$$
\vert \hat{p}_{X_t}( \xi) \vert \leq C_{t,L} \left( e^{-\mu(E_n)t/(2L)}+ \frac{1}{\vert \xi\vert^L} A_{n,L} \right),
$$
with $A_{n,L}=\mu(E_n)^L (\int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu(a))^{L(L+2)}$.
\end{prop}
From this proposition, we deduce our main result.
\begin{theo} \label{density}
We assume that H1, H2 and H3 hold. Let $q \in \mathbb{N}$, then for $t> 16 \theta (q+2)(q+1)^2$, the law of $X_t$ is absolutely continuous with respect to the Lebesgue measure and its density is of class $\mathcal{C}^q$. In particular if $\theta=0$, the law of $X_t$ is absolutely continuous with respect to the Lebesgue measure and its density is of class $\mathcal{C}^{\infty}$ for every $t>0$.
\end{theo}
\begin{pf} From Proposition \ref{fourier}, we have
$$
\vert \hat{p}_{X_t}( \xi) \vert \leq C_{t,L} \left( e^{-\mu(E_n)t/2L}+ \frac{1}{\vert \xi\vert^L} A_{n,L} \right).
$$
Now $\forall k,k_0>0$, if $t/2L >k \theta$, we deduce from H3 that for $n \geq n_L $
$$
t/2L >\frac{k}{\mu(E_n)} \ln (\int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu(a)) +\frac{k\ln \mu(E_n)}{k_0 \mu(E_n)}
$$
since the second term on the right hand side tends to zero. This implies
$$
e^{\mu(E_n) t/2L} >(\int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu(a))^k \mu(E_n)^{k/k_0}.
$$
Choosing $k=L(L+2)$ and $k/k_0=L$, we obtain that for $n \geq n_L$ and $t/2L >L(L+2) \theta$
$$
e^{\mu(E_n) t/2L}>A_{n,L},
$$
and then
$$
\begin{array}{lll}
\vert \hat{p}_{X_t}( \xi) \vert & \leq & C_{t,L} \left( e^{-\mu(E_n)t/2L}+\frac{1}{\vert \xi\vert^L} e^{\mu(E_n)t/2L} \right), \\
& \leq & C_{t,L}(\frac{1}{B_n(t)}+\frac{B_n(t)}{\vert \xi \vert^L}),
\end{array}
$$
with $B_n(t)= e^{\mu(E_n)t/2L}$. Now recalling that $\mu(E_n)<\mu(E_{n+1} )\leq K+ \mu(E_n)$, we have $B_n(t)<B_{n+1}(t) \leq K_t B_n(t)$. Moreover since $B_n(t)$ goes to infinity with $n$ we have
$$
1_{\{ \vert \xi \vert^{L/2} \geq B_{n_L}(t) \}}= \sum_{n \geq n_L} 1_{ \{ B_n(t) \leq \vert \xi \vert^{L/2} < B_{n+1}(t)\}}.
$$
But if $B_n(t) \leq \vert \xi \vert^{L/2}< B_{n+1}(t)$, $\vert \hat{p}_{X_t}( \xi) \vert \leq C_{t,L}/ \vert \xi \vert^{L/2}$ and so
$$
\begin{array}{lll}
\int \vert \xi \vert^q \vert \hat{p}_{X_t}( \xi) \vert d \xi & = & \int_{\vert \xi \vert^{L/2}<B_{n_L(t)}}\vert \xi \vert^q \vert \hat{p}_{X_t}( \xi)\vert d \xi
+ \int_{\vert \xi \vert^{L/2}\geq B_{n_L}(t)}\vert \xi \vert^q \vert \hat{p}_{X_t}( \xi) \vert d \xi, \\
& \leq &C_{t,L, n_L} + \int_{\vert \xi \vert^{L/2}\geq B_{n_L}(t)}\vert \xi \vert^{q-L/2}d \xi.
\end{array}
$$
For $q \in \mathbb{N}$, choosing $L$ such that $L/2-q>1$, we obtain $\int \vert \xi \vert^q \vert \hat{p}_{X_t}( \xi) \vert d \xi < \infty$ for $t/2L >L(L+2) \theta$ and consequently the law of $X_t$ admits a density $\mathcal{C}^q$ for $t>2L^2(L+2) \theta$ and $L>2(q+1)$, that is $t>16\theta (q+1)^2(q+2)$ and Theorem \ref{density} is proved.
\end{pf}
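The arithmetic behind the final threshold can be checked mechanically: plugging the limiting choice $L=2(q+1)$ into the condition $t>2L^2(L+2)\theta$ from the proof yields exactly the threshold $t>16\theta(q+1)^2(q+2)$ stated in Theorem \ref{density}. A small script (an illustration only, with function names of our choosing):

```python
# Verify the identity 2*L^2*(L+2) = 16*(q+1)^2*(q+2) when L = 2*(q+1),
# which converts the proof's condition t > 2*L^2*(L+2)*theta into the
# stated threshold t > 16*theta*(q+1)^2*(q+2).
def threshold_from_proof(L):
    return 2 * L**2 * (L + 2)

def threshold_stated(q):
    return 16 * (q + 1)**2 * (q + 2)

for q in range(50):
    assert threshold_from_proof(2 * (q + 1)) == threshold_stated(q)
```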
We end this section with two examples.
{\bf Example 1.} We take $E=(0,1]$, $\mu_{\lambda}=\sum_{k \geq 1} \frac{1}{k^{\lambda}} \delta_{1/k}$ with $0<\lambda<1$ and
$E_n=[1/n,1]$. We have $\cup_n E_n=E$, $\mu_{\lambda}(E_n)= \sum_{k=1}^n \frac{1}{k^{\lambda}}$ and $\mu_{\lambda}(E_{n+1}) \leq \mu_{\lambda}(E_n) +1$. We consider the process $(X_t)$ solution of (\ref{eq}) with $c(t,a,x)=a$ and $g(t,x)=g(x)$, assuming that the derivatives of $g$ are bounded
and that $\vert g'(x) \vert \geq \underline{g}>0$. We have $\int_E a d \mu_{\lambda}(a)= \sum_{k \geq 1} \frac{1}{k^{\lambda+1}} < \infty$ so H1 and H2 hold.
Moreover $\alpha(t,a,x)=g(x)-g(x+a)$ so $\underline{\alpha}(a)=\underline{g} a$. Now $\int_{E_n} \frac{1}{a} d \mu_{\lambda}(a) =\sum_{k=1}^n k^{1- \lambda}$, which is equivalent, as $n$ goes to infinity, to $n^{2- \lambda}/(2- \lambda)$. Now we have
$$
\frac{1}{ \mu_{\lambda}(E_n)} \ln \left( \int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu_{\lambda}(a) \right)= \frac{ \ln( \underline{g} \sum_{k=1}^n k^{1- \lambda})}{ \sum_{k=1}^n \frac{1}{k^{\lambda}}} \thicksim_{ n \rightarrow \infty} C \frac{ \ln(n^{2- \lambda})}{n^{1- \lambda}} \rightarrow 0,
$$
and then H3 is satisfied with $\theta=0$. We conclude from Theorem \ref{density} that $\forall t>0$, $X_t$ admits a density $\mathcal{C}^{\infty}$.
In the case $\lambda=1$, we have $\mu_1(E_n)=\sum_{k=1}^n \frac{1}{k} \thicksim_{n \rightarrow \infty} \ln n$ then
$$
\frac{1}{ \mu_{1}(E_n)} \ln \left( \int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu_{1}(a) \right)= \frac{ \ln( \underline{g} \sum_{k=1}^n 1)}{ \sum_{k=1}^n \frac{1}{k}} \thicksim_{ n \rightarrow \infty} 1,
$$
and this gives H3 with $\theta=1$. So the density of $X_t$ is regular as soon as $t$ is large enough. In fact it is proved in Kulik \cite{K2} that
under some appropriate conditions the density of $X_t$ is not continuous for small $t$.
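The limits computed in Example 1 are easy to observe numerically. The sketch below (illustrative only, with $\underline{g}=1$ and function names of our choosing) evaluates the quantity appearing in H3 along the sets $E_n$; it tends to $0$ for $0<\lambda<1$ and to $1$ for $\lambda=1$:

```python
import math

def h3_ratio(lam, n):
    """(1/mu_lambda(E_n)) * ln( int_{E_n} (1/alpha(a)) d mu_lambda(a) )
    for mu_lambda = sum_k k^(-lam) delta_{1/k}, E_n = [1/n, 1] and
    underline{alpha}(a) = a, so the integral equals sum_{k<=n} k^(1-lam)."""
    mu_En = sum(k ** (-lam) for k in range(1, n + 1))
    integral = sum(k ** (1.0 - lam) for k in range(1, n + 1))
    return math.log(integral) / mu_En
```

For instance `h3_ratio(0.5, 100_000)` is already below $0.03$, while `h3_ratio(1.0, 100_000)` is close to $0.95$, in line with $\theta=0$ and $\theta=1$ respectively.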
{\bf Example 2.} We take the intensity measure $\mu_{\lambda}$ as in the previous example and we consider the process $(X_t)$
solution of (\ref{eq}) with $g=1$ and $c(t,a,x)=ax$. This gives $\overline{c}(a)=a$ and $\underline{\alpha}(a)=a$. So the conclusions are similar to example 1 in both cases $0< \lambda<1$ and $\lambda=1$. But in this example we can compare our result to the one given by
Ichikawa and Kunita \cite{IK}. They assume the condition
$$
\liminf_{u \rightarrow 0} \frac{1}{u^h} \int_{\vert a \vert \leq u} a^2 d \mu(a) >0, \quad (\star)
$$
for some $h \in (0,2)$. Here we have
$$
\int_{\vert a \vert \leq u} a^2 d \mu(a) =\sum_{k \geq 1/u} \frac{1}{k^{2 + \lambda}}\thicksim_{u \rightarrow 0} \frac{u^{1+ \lambda}}{1 + \lambda}.
$$
So if $0<\lambda<1$, $(\star)$ holds and their results apply. In the case $\lambda=1$, $(\star)$ fails and they do not conclude. However
in our approach we conclude that the density of $X_t$ is $\mathcal{C}^q$ for $t>16(q+2)(q+1)^2$.
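The equivalence used above to check $(\star)$ can also be verified numerically; the rough sketch below (the truncation point is ours) compares the tail sum with its predicted asymptotics:

```python
import math

def tail_second_moment(lam, u, kmax=1_000_000):
    """int_{|a|<=u} a^2 d mu_lambda(a) = sum_{k >= 1/u} k^(-(2+lam)),
    truncated at kmax (the remainder is negligible for the values below)."""
    k0 = math.ceil(1.0 / u)
    return sum(k ** (-(2.0 + lam)) for k in range(k0, kmax + 1))

def predicted(lam, u):
    """The claimed equivalent u^(1+lam)/(1+lam) as u -> 0."""
    return u ** (1.0 + lam) / (1.0 + lam)
```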
The next section is devoted to the proof of Proposition \ref{fourier}.
\subsection{Approximation of $X_t$ and integration by parts formula}
In order to bound the Fourier transform of the process $X_t$ solution of (\ref{eq}), we will apply the differential calculus developed in section 2. The first step consists in an approximation of $X_t$ by a random variable $X_t^N$ which can be viewed as an element of our
basic space $\mathcal{S}^0$. We assume that the process $(X_t^N)$ is solution of the discrete version of equation (\ref{eq})
\begin{equation}
X_t^N=x + \int_0^t \int_{E_N} c(s,a, X^N_{s^-})dN(s,a) + \int_0^tg(s, X^N_s) ds. \label{eqdis}
\end{equation}
Since $\mu(E_N) < \infty$, the number of jumps of the process $X^N$ on the interval $(0,t)$ is finite and consequently we may consider
the random variable $X_t^N$ as a function of these jump times and apply the methodology proposed in section 2. We denote by $(J_t^N)$ the Poisson process defined by $J_t^N=N((0,t), E_N)=\# \{s<t; p_s \in E_N \}$ and we note $(T_k^N)_{k \geq 1}$ its jump times. We also introduce the notation $\Delta_k^N=p_{T_k^N}$. With these notations, the process solution of (\ref{eqdis}) can be written
\begin{equation}
X_t^N=x+\sum_{k=1}^{J_t^N} c(T_k^N, \Delta_k^N, X^N_{T_k^N-}) + \int_0^t g(s, X_s^N) ds . \label{eqdisbis}
\end{equation}
We will not work with all the variables $(T_k^N)_k$ but only with the jump times $(T_k^n)$ of the
Poisson process $J_t^n$, where $n<N$. In the following we will keep $n$ fixed and let $N$ go to infinity. We denote by $(T_k^{N,n})_k$ the jump times of the Poisson process $J_t^{N,n}=N((0,t), E_N\backslash E_n)$ and set $\Delta_k^{N,n}=p_{T_k^{N,n}}$.
Now we fix $L \in \mathbb{N}^*$, the number of integrations by parts, and we set $t_l=t l/L$, $0 \leq l \leq L$. Assuming that
$J_{t_l}^n-J_{t_{l-1}}^n=m_l$ for $1 \leq l \leq L$, we denote by $(T_{l,i}^n)_{1 \leq i \leq m_l}$ the jump times of $J_t^n$ belonging to
the time interval $(t_{l-1}, t_l)$. In the following we assume that $m_l \geq 1$, $\forall l$. For $i=0$ we set $T_{l,0}^n=t_{l-1}$ and for $i=m_l+1$, $T_{l,m_l+1}^n=t_{l}$. With these definitions we choose our basic variables $(V_i, i \in I_l)$ as
\begin{equation}
(V_i, i \in I_l)=(T_{l,2i+1}^n, 0 \leq i \leq [(m_l-1)/2]). \label{Vi}
\end{equation}
The $\sigma$-algebra which contains the noise which is not involved in our differential calculus is
\begin{equation}
\mathcal{G}=\sigma\{ (J_{t_l}^n)_{1 \leq l \leq L}; (T_{l,2i}^n)_{1 \leq2 i \leq m_l, 1 \leq l \leq L}; (T_k^{N,n})_k ; (\Delta_k^N)_k \}. \label{G}
\end{equation}
Using some well known results on Poisson processes, we easily see that conditionally on $\mathcal{G}$ the variables $(V_i)$ are independent and for $i \in I_l$ the law of $V_i$ conditionally on $\mathcal{G}$ is uniform on $(T_{l,2i}^n, T_{l,2i+2}^n)$ and we have
\begin{equation}
p_i(v)=\frac{1}{T_{l,2i+2}^n-T_{l,2i}^n} 1_{(T_{l,2i}^n, T_{l,2i+2}^n)}(v), \quad i \in I_l. \label{pi}
\end{equation}
Consequently taking $a_i=T_{l,2i}^n$ and $b_i=T_{l,2i+2}^n$ we check that hypothesis H0 holds. It remains to define the localizing sets
$(\Lambda_{l,i})_{i \in I_l}$.
We denote
\[
h_{l}^{n}=\frac{t_{l}-t_{l-1}}{2m_{l}}=\frac{t}{2Lm_{l}}
\]
and $n_{l}=[(m_{l}-1)/2].$ We will work on the $\mathcal{G}$ measurable set
\begin{equation}
\Lambda_{l}^n=\cup_{i=0}^{n_l} \{
T_{l,2i+2}^{n}-T_{l,2i}^{n}\geq h_{l}^{n}\}, \label{Ln}
\end{equation}
and we consider the following partition of this set:
\begin{eqnarray*}
\Lambda _{l,0} &=&\{T_{l,2}^{n}-T_{l,0}^{n}\geq h_{l}^{n}\}, \\
\Lambda _{l,i} &=&\cap
_{k=1}^{i}\{T_{l,2k}^{n}-T_{l,2k-2}^{n}<h_{l}^{n}\}\cap
\{T_{l,2i+2}^{n}-T_{l,2i}^{n}\geq h_{l}^{n}\},\quad i=1,...,n_{l}.
\end{eqnarray*}
After $L-l$ iterations of the integration by parts we will work with the variables $
V_{i},i\in I_{l}$ so the corresponding derivative is
\[
D_{l}F=\sum_{i\in I_{l}}1_{\Lambda _{l,i}}\partial _{V_{i}}F=\sum_{i\in
I_{l}}1_{\Lambda _{l,i}}\partial _{T_{l,2i+1}^{n}}F.
\]
If we are on $\Lambda _{l}^n$ then we have at least one $i$ such that $
t_{l-1}\leq T_{l,2i}^{n}<T_{l,2i+1}^n<T_{l,2i+2}^{n}\leq t_{l}$ and $
T_{l,2i+2}^{n}-T_{l,2i}^{n}\geq h_{l}^{n}.$ Notice that in this case $
1_{\Lambda _{l,i}}\left\vert p_{i}\right\vert _{\infty }\leq (h_{l}^{n})^{-1}
$ and roughly speaking this means that the variable $V_{i}=T_{l,2i+1}^{n}$
gives a sufficiently large quantity of noise. Moreover, in order to perform $
L$ integrations by parts we will work on
\begin{equation}
\Gamma _{L}^{n}=\cap _{l=1}^{L}\Lambda _{l}^n \label{GnL}
\end{equation}
and we will leave out the complement of $\Gamma_{L}^{n}$. The following
lemma says that on the set $\Gamma _{L}^{n}$ we have enough noise
and that the complement of this set may be ignored.
\begin{lem} \label{techni}
Using the notation given in Theorem \ref{IPPE} one has
i) $ \left\vert p\right\vert _{0} :=\max_{1\leq l\leq L}\sum_{i\in
I_{l}}1_{\Lambda _{l,i}}\left\vert p_{i}\right\vert _{\infty }\leq \frac{2L}{
t} J_t^n$,
ii) $ P((\Gamma_{L}^{n})^{c}) \leq L\exp (-\mu (E_{n})t/2L)$.
\end{lem}
\begin{pf}
As mentioned before $1_{\Lambda _{l,i}}\left\vert
p_{i}\right\vert _{\infty }\leq (h_{l}^{n})^{-1}=2Lm_{l}/t\leq \frac{2L}{t
}J_{t}^{n}$ and so we have i). In order to prove ii) we have to
estimate $P((\Lambda_{l}^n)^{c})$ for $1 \leq l\leq L.$ We denote $s_{l}=\frac{1}{2}
(t_{l}+t_{l-1})$ and we will prove that $\{J_{t_{l}}^{n}-J_{s_{l}}^{n}\geq
1\}\subset \Lambda _{l}^n.$ Suppose first that $
m_{l}=J_{t_{l}}^{n}-J_{t_{l-1}}^{n}$ is even. Then $2n_{l}+2=m_{l}.$ If $
T_{l,2i+2}^{n}-T_{l,2i}^{n}< h_{l}^{n}$ for every $i=0,...,n_{l}$ then
\[
T_{l,m_{l}}^n-t_{l-1}=\sum_{i=0}^{n_{l}}(T_{l,2i+2}^{n}-T_{l,2i}^{n})\leq
(n_{l}+1)\times \frac{t}{2Lm_{l}}\leq \frac{t}{4L}\leq s_{l}-t_{l-1}
\]
so there are no jumps in $(s_{l},t_{l}).$ Suppose now that $m_{l}$ is odd
so $2n_{l}+2=m_{l}+1$ and $T_{l,2n_{l}+2}^{n}=t_{l}.$ If we have $T_{l,2i+2}^{n}-T_{l,2i}^{n}< h_{l}^{n}$ for every $
i=0,...,n_{l},$ then we deduce
\[
\sum_{i=0}^{n_{l}}(T_{l,2i+2}^{n}-T_{l,2i}^{n})< (n_{l}+1)\times \frac{t}{
2Lm_{l}}< \frac{m_l+1}{m_l} \frac{t}{4L}\leq \frac{t}{2L},
\]
and there are no jumps in $(s_{l},t_{l}).$
So we have proved that $\{J_{t_{l}}^{n}-J_{s_{l}}^{n}\geq 1\}\subset \Lambda
_{l}^n$ and since $P(J_{t_{l}}^{n}-J_{s_{l}}^{n}=0)=\exp (-\mu (E_{n})t/2L)$
the inequality $ii)$ follows.
\end{pf}
Now we will apply Theorem \ref{IPPE}, with $F^N=X_t^N$, $G=1$ and $\Phi_{\xi}(x)=e^{i \xi x}$. So we have to check that
$F^N \in \mathcal{S}^{L+1}( \cup_{l=1}^L I_l)$ and that condition (\ref{ND}) holds. Moreover we have to bound $\vert F^N \vert_l^{l-1}$
and $\vert D_l F^N \vert_l^{l-1}$, for $1 \leq l \leq L$. This requires some preliminary lemmas.
\begin{lem} \label{derdet}
Let $v=(v_i)_{i \geq 0}$ be a nonnegative nondecreasing sequence with $v_0=0$ and let $(a_i)_{i \geq 1}$ be a sequence in $E$. We define
$J_t(v)$ by $J_t(v)=i$ if $v_i \leq t <v_{i+1}$ and we consider the process solution of
\begin{equation}
X_t=x+ \sum_{k=1}^{J_t} c(v_k, a_k, X_{v_k-})+ \int_0^t g(s,X_s) ds \label{deteq}.
\end{equation}
We assume that H1 holds. Then $X_t$ is differentiable with respect to $v_i$ and, if we set $U_i(t)= \partial_{v_i} X_t$ and
$W_i(t)=\partial^2_{v_i} X_t$, the processes $(U_i(t))_{ t \geq v_i}$ and $(W_i(t))_{t \geq v_i}$ solve respectively
\begin{equation}
U_i(t)= \alpha(v_i, a_i, X_{v_i-}) + \sum_{k=i+1}^{J_t} \partial_x c(v_k, a_k, X_{v_k-})U_i(v_k-) + \int_{v_i}^t \partial_x g(s, X_s) U_i(s) ds
\label{DFN},
\end{equation}
\begin{equation}
W_i(t)= \beta_i(t) + \sum_{k=i+1}^{J_t} \partial_x c(v_k, a_k, X_{v_k-})W_i(v_k-) + \int_{v_i}^t \partial_x g(s, X_s) W_i(s) ds
\label{D2FN},
\end{equation}
with
$$
\begin{array}{cll}
\alpha(t,a,x) & = & g(t,x)-g(t,x+c(t,a,x))+ g(t,x) \partial_x c(t,a,x) + \partial_t c(t,a,x) , \\
\beta_i(t) & = & \partial_t \alpha(v_i,a_i ,X_{v_i-})+\partial_x \alpha(v_i,a_i ,X_{v_i-}) g(v_i, X_{v_i-})-\partial_x g(v_i, X_{v_i}) U_i(v_i) \\
& &+ \sum_{k=i+1}^{J_t}\partial_x^2 c(v_k, a_k, X_{v_k-})(U_i(v_k-) )^2 +\int_{v_i}^t \partial_x^2 g(s, X_s) (U_i(s))^2 ds.
\end{array}
$$
\end{lem}
\begin{pf}
If $s<v_i$, we have $\partial_{v_i} X_s=0$. Now we have
$$
X_{v_i-}=x+\sum_{k=1}^{i-1} c(v_k, a_k ,X_{v_k-})+\int_0^{v_i }g(s,X_s) ds,
$$
and consequently
$$
\partial_{v_i}X_{v_i-}=g(v_i, X_{v_i-}).
$$
For $t>v_i$, we observe that
$$
X_t=X_{v_i-}+ \sum_{k=i}^{J_t} c(v_k, a_k ,X_{v_k-})+ \int_{v_i}^t g(s, X_s) ds,
$$
this gives
$$
\begin{array}{lll}
\partial_{v_i} X_t& = & g(v_i, X_{v_i-})+g(v_i, X_{v_i-}) \partial_x c(v_i, a_i,X_{v_i-}) + \partial_t c(v_i, a_i,X_{v_i-})-g(v_i, X_{v_i}) \\
& & + \sum_{k=i+1}^{J_t} \partial_x c(v_k, a_k, X_{v_k-})\partial_{v_i} X_{v_k-} + \int_{v_i}^t \partial_x g(s, X_s) \partial_{v_i} X_s ds.
\end{array}
$$
Remarking that $X_{v_i}=X_{v_i-}+c(v_i, a_i,X_{v_i-})$, we obtain (\ref{DFN}). The proof of (\ref{D2FN}) is similar and we omit it.
\end{pf}
We give next a bound for $X_t$ and its derivatives with respect to the variables $(v_i)$.
\begin{lem} \label{Bdet}
Let $(X_t)$ be the process solution of (\ref{deteq}). We assume that H1 holds and we set
$$
n_t(\overline{c})=\sum_{k=1}^{J_t} \overline{c}(a_k).
$$
Then we have:
$$
\sup_{s \leq t} \vert X_s\vert \leq C_t(1+n_t(\overline{c}))e^{n_t(\overline{c})}.
$$
Moreover
$\forall l \geq 1$, there exist some constants $C_{t,l}$ and $C_l$ such that
$\forall (v_{k_i})_{i=1, \ldots, l}$ with $t>v_{k_l}$, we have
$$
\sup_{v_{k_l} \leq s \leq t} \vert \partial_{v_{k_1}} \ldots \partial_{v_{k_{l-1}}} U_{k_l}(s) \vert
+\sup_{v_{k_l} \leq s \leq t} \vert \partial_{v_{k_1}} \ldots \partial_{v_{k_{l-1}}} W_{k_l}(s) \vert \leq C_{t,l}(1+n_t(\overline{c}))^{C_l}e^{C_l n_t(\overline{c})}.
$$
\end{lem}
We observe that the previous bound does not depend on the variables $(v_i)$.
\begin{pf}
We just give a sketch of the proof.
We first remark that the process $(e_t)$ solution of
$$
e_t=1+\sum_{k=1}^{J_t} \overline{c}(a_k) e_{v_k-} + \overline{g}\int_0^t e_s ds,
$$
is given by $e_t=\prod_{k=1}^{J_t} (1+\overline{c}(a_k)) e^{\overline{g}t}$.
Now from H1, we deduce for $s \leq t$
$$
\begin{array}{lll}
\vert X_s \vert & \leq & \vert x \vert + \sum_{k=1}^{J_s} \overline{c}(a_k)(1 + \vert X_{v_k-} \vert ) + \int_0^s \overline{g} (1 + \vert X_u \vert )du, \\
& \leq & \vert x \vert + \sum_{k=1}^{J_t} \overline{c}(a_k) +\overline{g} t + \sum_{k=1}^{J_s} \overline{c}(a_k) \vert X_{v_k-} \vert + \int_0^s \overline{g} \vert X_u \vert du, \\
& \leq & ( \vert x \vert + \sum_{k=1}^{J_t} \overline{c}(a_k) +\overline{g} t ) e_s
\end{array}
$$
where the last inequality follows from Gronwall's lemma.
Then using the previous remark
\begin{equation}
\sup_{s \leq t} \vert X_s\vert \leq C_t (1+ n_t(\overline{c}))\prod_{k=1}^{J_t} (1+ \overline{c}(a_k)) \leq C_t (1+ n_t(\overline{c})) e^{n_t(\overline{c})}. \label{BX}
\end{equation}
We check easily that $\vert \alpha(t,a,x) \vert \leq C(1+ \vert x \vert ) \overline{c}(a)$, and we get successively from (\ref{DFN}) and (\ref{BX})
$$
\sup_{v_{k_l} \leq s \leq t} \vert U_{k_l}(s) \vert \leq C_t(1+ \vert X_{v_{k_l}-} \vert ) \overline{c}(a_{k_l})(1+ n_t(\overline{c})) e^{n_t(\overline{c})} \leq C_t (1+ n_t(\overline{c}))^2 e^{2 n_t(\overline{c})}.
$$
Putting this in (\ref{D2FN}), we obtain a similar bound for $\sup_{v_{k_l} \leq s \leq t} \vert W_{k_l}(s) \vert$ and we end the proof
of Lemma \ref{Bdet} by induction since we can derive equations for the higher order derivatives of $U_{k_l}(s)$ and $W_{k_l}(s)$
analogous to (\ref{D2FN}).
\end{pf}
We come back to the process $(X_t^N)$ solution of (\ref{eqdis}). We recall that $F^N=X_t^N$ and we will check that $F^N$ satisfies
the hypotheses of Theorem \ref{IPPE}.
\begin{lem} \label{verhyp}
i) We assume that H1 holds. Then $\forall l \geq 1$, $\exists C_{t,l}, C_l$ independent of $N$ such that
$$
\vert F^N \vert_l + \vert D_l F^N \vert_l \leq C_{t,l} \left( (1+N_t(\overline{c}))e^{N_t(\overline{c}) }\right)^{C_l},
$$
with $N_t(\overline{c})= \int_0^t \int_E \overline{c}(a) dN(s,a) $.
ii) Moreover if we assume in addition that H2 and H3 hold and that $m_l=J_{t_l}^n-J_{t_{l-1}}^n \geq 1$, $\forall l \in \{1, \ldots, L\}$ then we have $\forall 1 \leq l \leq L$, $\forall i \in I_l$
$$
\vert \partial_{V_i} F^N \vert \geq \left(e^{2 N_t(\overline{c})} N_t (1_{E_n} 1/\underline{\alpha} ) \right)^{-1} :=\gamma_n
$$
and (\ref{ND}) holds.
\end{lem}
We remark that on the non degeneracy set $\Gamma_L^n$ given by (\ref{GnL}) we have at least one jump on $(t_{l-1}, t_l)$, that is $m_l \geq 1$, $\forall l \in \{1, \ldots, L\}$. Moreover we have $\Gamma_L^n \subset \{\gamma_n>0 \}$.
\begin{pf}
The proof of i) is a straightforward consequence of Lemma \ref{Bdet}, replacing $n_t(\overline{c})$ by $\sum_{p=1}^{J_t^N} \overline{c}(\Delta_p^N)$ and observing that
$$
\sum_{p=1}^{J_t^N} \overline{c}(\Delta_p^N)= \int_0^t \int_{E_N} \overline{c}(a) dN(s,a) \leq \int_0^t \int_E \overline{c}(a) dN(s,a)=N_t(\overline{c}).
$$
Turning to ii) we have from Lemma \ref{derdet}
$$
\partial_{T_k^N} X_t^N=\alpha(T_k^N, \Delta_k^N, X^N_{T_k^N-}) + \sum_{p=k+1}^{J_t^N} \partial_x c(T_p^N, \Delta_p^N, X^N_{T_p^N-}) \partial_{T_k^N} X_{T_p^N-}^N+ \int_{T_k^N}^t \partial_x g(s, X^N_s) \partial_{T_k^N} X_s ds.
$$
Assuming H2, we define $(Y_t^N)_t$ and $(Z_t^N)_t$ as the solutions of the equations
$$
\begin{array}{lll}
Y_t^N & = & 1 + \sum_{p=1}^{J_t^N} \partial_x c(T_p^N, \Delta_p^N, X^N_{T_p^N-}) Y^N_{T_p^N-} + \int_0^t \partial_x g(s, X_s^N) Y_s^N ds,
\\
Z_t^N & = & 1 - \sum_{p=1}^{J_t^N} \frac{\partial_x c(T_p^N, \Delta_p^N, X^N_{T_p^N-})}{1+\partial_x c(T_p^N, \Delta_p^N, X^N_{T_p^N-})} Z^N_{T_p^N-} -\int_0^t \partial_x g(s, X_s^N) Z_s^N ds.
\end{array}
$$
We have $Y_t^N \times Z_t^N=1$, $\forall t \geq 0$ and
$$
\vert Y_t^N \vert \leq e^{t \overline{g}} e^{N_t(1_{E_N} \overline{c})} \leq e^{N_t( \overline{c})} ,\quad \vert Z_t^N \vert = \vert \frac{1}{ Y_t^N} \vert \leq e^{N_t( \overline{c})}.
$$
Now one can easily check that
$$
\partial_{T_k^N} X_t^N=\alpha(T_k^N, \Delta_k^N, X^N_{T_k^N-}) Y_t^N Z^N_{T_k^N},
$$
and using H3 and the preceding bound, we obtain
$$
\vert \partial_{T_k^N} X_t^N\vert \geq e^{-2N_t( \overline{c})} \underline{\alpha}(\Delta_k^N).
$$
Recalling that we do not consider the derivatives with respect to all the variables $(T_k^N)$ but only with respect to $(V_i)=(T^n_{l,2i+1})_{l,i}$
with $n<N$ fixed, we have $\forall 1 \leq l \leq L$ and $ \forall i \in I_l$
$$
\vert \partial_{V_i} X_t^N\vert \geq e^{-2N_t( \overline{c})} \left(\sum_{p=1}^{J_t^n} \frac{1}{\underline{\alpha}(\Delta_p^n)}\right)^{-1}=
\left(e^{2 N_t(\overline{c})} N_t (1_{E_n} 1/\underline{\alpha} ) \right)^{-1},
$$
and Lemma \ref{verhyp} is proved.
\end{pf}
With this lemma we are at last able to prove Proposition \ref{fourier}.
\noindent
{\bf Proof of Proposition \ref{fourier}: }
From Theorem \ref{IPPE}, since $\Gamma_L^n \subset \{ \gamma_n >0 \}$, we have
$$
1_{\Gamma_L^n} \vert E_{\mathcal{G} }\Phi^{(L)}(F^N) \vert \leq C_L \vert \vert \Phi \vert \vert_{\infty} 1_{\Gamma_L^n} E_{\mathcal{G}} (1+\vert p_0 \vert)^L \Pi_L(F^N).
$$
Now from Lemma \ref{techni} i) we have
$$
\vert p_0 \vert \leq 2 L J_t^n/t
$$
and moreover we can check that $\vert \ln p \vert_1=0$. So we deduce from Lemma \ref{verhyp}
$$
\Pi_L(F^N) \leq \frac{C_{t,L}}{\gamma_n^{L(L+2)} }\left( (1+N_t(\overline{c}))e^{N_t(\overline{c}) }\right)^{C_L} \leq
C_{t,L}N_t (1_{E_n} 1/\underline{\alpha} )^{L(L+2)} \left( (1+N_t(\overline{c}))e^{N_t(\overline{c}) }\right)^{C_L} .
$$
This finally gives
\begin{equation}
\vert E1_{\Gamma_L^n} \Phi^{(L)}(F^N) \vert \leq \vert \vert \Phi \vert \vert_{\infty} C_{t,L} E\left((J_t^N)^L N_t (1_{E_n} 1/\underline{\alpha} )^{L(L+2)} \left( (1+N_t(\overline{c}))e^{N_t(\overline{c}) }\right)^{C_L} \right). \label{Bint}
\end{equation}
Now we know from a classical computation (see for example \cite{BC}) that the Laplace transform of $N_t(f)$
satisfies
\begin{equation}
E e^{-s N_t(f)}=e^{-t \alpha_{f}(s)}, \quad \alpha_{f}(s)= \int_E (1-e^{-s f(a)}) d \mu(a). \label{laplace}
\end{equation}
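Formula (\ref{laplace}) is also easy to test by simulation. The sketch below is purely illustrative: the choices $E=(0,1]$, $\mu$ equal to the Lebesgue measure and $f(a)=a$ are ours, so that $N_t(f)$ is a compound Poisson sum of uniform marks.

```python
import math
import random

def laplace_mc(s, t, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E exp(-s*N_t(f)) for a Poisson random measure
    with intensity ds x da on (0,t) x (0,1] and f(a)=a: the number of jumps
    is Poisson(t) and the marks are Uniform(0,1)."""
    rng = random.Random(seed)
    exp_mt = math.exp(-t)
    acc = 0.0
    for _ in range(n_samples):
        # Knuth's method to sample a Poisson(t) jump count
        k, p = 0, rng.random()
        while p > exp_mt:
            k += 1
            p *= rng.random()
        nf = sum(rng.random() for _ in range(k))  # N_t(f) = sum of the marks
        acc += math.exp(-s * nf)
    return acc / n_samples

def laplace_exact(s, t):
    """exp(-t*alpha_f(s)) with alpha_f(s) = int_0^1 (1 - exp(-s*a)) da."""
    alpha = 1.0 - (1.0 - math.exp(-s)) / s
    return math.exp(-t * alpha)
```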
From H1, we have $\int_E \overline{c}(a) d \mu(a)< \infty$, so we deduce using (\ref{laplace}) with $f= \overline{c}$ that, $\forall q>0$
$$
E \left( (1+N_t(\overline{c}))e^{N_t(\overline{c}) }\right)^q \leq C_{t,q} < \infty.
$$
Since $J_t^n$ is a Poisson process with intensity $t \mu(E_n)$, we have $\forall q>0$
$$
E (J_t^n)^q \leq C_{t,q} \mu(E_n)^q.
$$
Finally, using once again (\ref{laplace}) with $f=1_{E_n} 1/\underline{\alpha}$ we see easily that $\forall q>0$
$$
EN_t (1_{E_n} 1/\underline{\alpha} )^q \leq C_{t,q} \left( \int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu(a) \right)^q.
$$
Turning back to (\ref{Bint}) and combining the Cauchy-Schwarz inequality with the previous bounds, we deduce
\begin{equation}
\vert E1_{\Gamma_L^n} \Phi^{(L)}(F^N) \vert \leq \vert \vert \Phi \vert \vert_{\infty} C_{t,L} \mu(E_n)^L \left( \int_{E_n} \frac{1}{\underline{\alpha}(a)} d \mu(a)\right)^{L(L+2)} = \vert \vert \Phi \vert \vert_{\infty}C_{t,L} A_{n,L}. \label{Bfin}
\end{equation}
We are now ready to give a bound for $\hat{p}_{X_t^N}(\xi)$.
We have $\hat{p}_{X_t^N}(\xi)= E \Phi_{\xi}(F^N)$, with $\Phi_{\xi}(x)=e^{i \xi x}$. Since $\Phi^{(L)}_{\xi}(x)=(i \xi)^L \Phi_{\xi}(x)$,
we can write $\vert \hat{p}_{X_t^N}(\xi)\vert=\vert E\Phi^{(L)}_{\xi}(F^N) \vert / \vert \xi \vert^L$ and consequently we deduce from (\ref{Bfin})
$$
\vert \hat{p}_{X_t^N}(\xi)\vert \leq P((\Gamma_L^n)^c) +C_{t,L} A_{n,L}/ \vert \xi \vert^L.
$$
But from Lemma \ref{techni} ii)
we have
$$
P((\Gamma_L^n)^c) \leq L e^{-\mu(E_n) t/(2L)}
$$
and finally
$$
\vert \hat{p}_{X_t^N}(\xi) \vert \leq C_{L,t} \left( e^{-\mu(E_n) t/(2L)} + A_{n,L}/ \vert \xi \vert^L \right).
$$
We complete the proof of Proposition \ref{fourier} by letting $N$ go to infinity while keeping $n$ fixed.
{\hfill\mbox{$\diamond$}\medskip}
\section{Introduction}
\label{sec 1} \setcounter{equation}{0}
The well-known Carleman's inequality asserts that for convergent infinite series $\sum a_n$ with non-negative terms, one has
\begin{equation*}
\sum^\infty_{n=1}(\prod^n_{k=1}a_k)^{\frac 1{n}}
\leq e\sum^\infty_{n=1}a_n,
\end{equation*}
with the constant $e$ best possible.
There is a rich literature on many different proofs of Carleman's inequality as well as its generalizations and extensions. We shall refer the readers to the survey articles \cite{P&S} and \cite{D&M} as well as the references therein for
an account of Carleman's inequality.
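Carleman's inequality itself is easy to probe numerically. The sketch below (illustrative only; the function name is ours) computes the ratio of the two sides through logarithms, so that the partial products do not underflow:

```python
import math

def carleman_ratio(a):
    """(sum_n (a_1*...*a_n)^(1/n)) / (sum_n a_n) for a finite positive
    sequence a; geometric means are computed via running log-sums."""
    log_sum, lhs = 0.0, 0.0
    for n, x in enumerate(a, start=1):
        log_sum += math.log(x)
        lhs += math.exp(log_sum / n)
    return lhs / sum(a)
```

For any positive sequence the ratio stays below $e$; it gets closer to $e$ for slowly decaying sequences such as $a_n=1/n$ than for $a_n=1/n^2$.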
From now on we will assume $a_n \geq 0$ for $n \geq 1$ and any
infinite sum converges. Our goal in this paper is to study the following weighted Carleman's inequality:
\begin{equation}
\label{1}
\sum^\infty_{n=1}G_n
\leq C\sum^\infty_{n=1}a_n,
\end{equation}
where
\begin{equation}
\label{2}
G_n=\prod^n_{k=1}a^{\lambda_k/\Lambda_n}_k, \hspace{0.1in} \Lambda_n=\sum^n_{k=1}\lambda_k, ~~\lambda_k \geq 0, ~~\lambda_1>0.
\end{equation}
The task here is to determine the best constant $C$ so that inequality \eqref{1} holds for any non-negative sequence $\{a_n \}^{\infty}_{n=1}$.
One approach to our problem here is to deduce inequality \eqref{1} via $l^{p}$ operator norm of the corresponding weighted mean matrix. We recall here that a matrix $A=(a_{j,k})$ is said to be a weighted mean matrix if its entries satisfy:
\begin{equation}
\label{3}
a_{j,k}=\lambda_k/\Lambda_j, 1 \leq k \leq j; \hspace{0.1in} a_{j,k}=0, k>j,
\end{equation}
where the notations are as in \eqref{2}.
For $p>1$, let $l^p$ be the Banach space of all complex sequences ${\bf b}=(b_n)_{n \geq 1}$ with norm
\begin{equation*}
||{\bf b}||: =(\sum_{n=1}^{\infty}|b_n|^p)^{1/p} < \infty.
\end{equation*}
The $l^{p}$ operator norm $||A||_{p,p}$ of $A$ for $A$ as defined in \eqref{3} is then defined as the $p$-th root of the smallest value of the
constant $U$ so that the following inequality holds for any ${\bf b} \in l^p$:
\begin{equation}
\label{4}
\sum^{\infty}_{n=1}\big{|}\sum^{\infty}_{k=1}\lambda_kb_k/\Lambda_n
\big{|}^p \leq U \sum^{\infty}_{n=1}|b_n|^p.
\end{equation}
In an unpublished dissertation \cite{Car}, Cartlidge studied
weighted mean matrices as operators on $l^p$ and obtained the
following result (see also \cite[p. 416, Theorem C]{B1}).
\begin{theorem}
\label{thm02}
Let $1<p<\infty$ be fixed. Let $A=(a_{j,k})$ be a weighted mean matrix given by \eqref{3}. If
\begin{equation}
\label{022}
L=\sup_n(\frac {\Lambda_{n+1}}{\lambda_{n+1}}-\frac
{\Lambda_n}{\lambda_n}) < p,
\end{equation}
then
$||A||_{p,p} \leq p/(p-L)$.
\end{theorem}
The above theorem implies that one can take $U=(p/(p-L))^p$ in inequality \eqref{4} for any weighted mean matrix $A$ satisfying \eqref{022}. We note here by a change of variables $b_k \rightarrow a^{1/p}_k$ in \eqref{4} and on letting $p \rightarrow +\infty$, one obtains inequality \eqref{1} with $C=e^{L}$ as long as \eqref{022} is satisfied with $p$ replaced by $+\infty$ there.
In this note, we will study inequality \eqref{1} via Carleman's original approach and we shall prove in the next section the following:
\begin{theorem}
\label{thm1}
Suppose that
\begin{equation}
\label{5}
M=\sup_n\frac
{\Lambda_n}{\lambda_n}\log \Big(\frac {\Lambda_{n+1}/\lambda_{n+1}}{\Lambda_n/\lambda_n} \Big ) < +\infty,
\end{equation}
then inequality \eqref{1} holds with $C=e^M$.
\end{theorem}
We point out here that the result of Theorem \ref{thm1} is better than what one can deduce from Cartlidge's result as discussed above. This can be seen by noting that \eqref{5} is equivalent to
\begin{equation*}
\frac {\Lambda_{n+1}\lambda_{n}}{\Lambda_{n}\lambda_{n+1}} \leq e^{\lambda_nM/\Lambda_n},
\end{equation*}
for any integer $n \geq 1$. Suppose now \eqref{022} is satisfied, then the case $n=1$ of \eqref{022} implies $L>0$ and it is easy to check that
\begin{equation*}
\frac {\Lambda_{n+1}\lambda_{n}}{\Lambda_{n}\lambda_{n+1}} = 1+\frac
{\lambda_n}{\Lambda_n}(\frac {\Lambda_{n+1}}{\lambda_{n+1}}-\frac
{\Lambda_n}{\lambda_n}) \leq 1+\frac
{\lambda_n}{\Lambda_n}L \leq e^{\lambda_nL/\Lambda_n},
\end{equation*}
from which we deduce that $M \leq L$.
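For the weights $\lambda_k=k^{\alpha}$ considered below, the inequality $M \leq L$ can be observed numerically. In the sketch (illustrative only; truncating the suprema at a finite $n$ is our simplification, justified by the fast stabilization of both sequences):

```python
import math

def M_and_L(alpha, nmax=2000):
    """Approximate M of (5) and L of (022) for lambda_k = k**alpha by
    maximizing over 1 <= n <= nmax."""
    lam = [k ** alpha for k in range(1, nmax + 2)]
    Lam = []
    s = 0.0
    for x in lam:
        s += x
        Lam.append(s)
    # f[i] = Lambda_{i+1} / lambda_{i+1} in the paper's (1-based) indexing
    f = [Lam[i] / lam[i] for i in range(nmax + 1)]
    M = max(f[i] * math.log(f[i + 1] / f[i]) for i in range(nmax))
    L = max(f[i + 1] - f[i] for i in range(nmax))
    return M, L
```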
Bennett \cite[p. 829]{Be1} conjectured that inequality \eqref{1} holds for $\lambda_k=k^{\alpha}$ for $\alpha > -1$ with $C=e^{1/(\alpha+1)}$. As the cases $-1 < \alpha \leq 0$ or $\alpha \geq 1$ follow directly from Cartlidge's result above (Theorem \ref{thm02}), the only case left unknown is when $0< \alpha <1$. As an application of Theorem \ref{thm1}, we shall prove Bennett's conjecture in Section \ref{sec 3}.
\section{Proof of Theorem \ref{thm1}}
\label{sec 2} \setcounter{equation}{0}
It suffices to establish our assertion with the infinite summation in \eqref{1} replaced by any finite summation, say from $1$ to $N \geq 1$ here. We now follow Carleman's approach by determining the maximum value $\mu_N$ of $\sum^N_{n=1}G_n$ subject to the constraint $\sum^N_{n=1}a_n=1$ using Lagrange multipliers. It is easy to see that we may assume $a_n > 0$ for all $1 \leq n \leq N$ when the maximum is reached. We now define
\begin{equation*}
F({\bf a}; \mu)=\sum^N_{n=1}G_n-\mu (\sum^N_{n=1}a_n-1),
\end{equation*}
where ${\bf a}=(a_n)_{1 \leq n \leq N}$. By the Lagrange method, we have to solve $\nabla F=0$, or the following system of equations:
\begin{equation}
\label{2.1}
\mu a_k=\sum^N_{n=k}\frac {\lambda_kG_n}{\Lambda_n}, \hspace{0.1in} 1 \leq k \leq N; \hspace{0.1in} \sum^N_{n=1}a_n=1.
\end{equation}
We note that on summing over $1 \leq k \leq N$ of the first $N$ equations above, we get
\begin{equation*}
\sum^N_{n=1}G_n=\mu.
\end{equation*}
Hence we have $\mu=\mu_N$ in this case which allows us to recast the equations \eqref{2.1} as:
\begin{equation*}
\mu_N \frac {a_k}{\lambda_k}=\sum^N_{n=k}\frac {G_n}{\Lambda_n}, \hspace{0.1in} 1 \leq k \leq N; \hspace{0.1in} \sum^N_{n=1}a_n=1.
\end{equation*}
On subtracting consecutive equations, we can rewrite the above system of equations as:
\begin{equation*}
\mu_N (\frac {a_k}{\lambda_k}-\frac {a_{k+1}}{\lambda_{k+1}})=\frac {G_k}{\Lambda_k}, \hspace{0.1in} 1 \leq k \leq N-1; \hspace{0.1in} \mu_N \frac {a_N}{\lambda_N}=\frac {G_N}{\Lambda_N}; \hspace{0.1in} \sum^N_{n=1}a_n=1.
\end{equation*}
Now we define for $1 \leq k \leq N-1$,
\begin{equation*}
\omega_k = \frac {\Lambda_k}{\lambda_k}-\frac {\Lambda_k a_{k+1}}{\lambda_{k+1}a_k},
\end{equation*}
so that we can further rewrite our system of equations as:
\begin{equation*}
\mu_N a_k \omega_k=G_k, \hspace{0.1in} 1 \leq k \leq N-1; \hspace{0.1in} \mu_N \frac {a_N}{\lambda_N}=\frac {G_N}{\Lambda_N}; \hspace{0.1in} \sum^N_{n=1}a_n=1.
\end{equation*}
It is easy to check that for $1 \leq k \leq N-2$,
\begin{equation*}
\omega^{\Lambda_{k+1}}_{k+1}=\frac 1{ \mu^{\lambda_{k+1}}_N}\Big (\frac {\omega_{k}}{\frac {\lambda_{k+1}}{\Lambda_k}(\Lambda_k/\lambda_k-\omega_{k})} \Big )^{\Lambda_{k}}.
\end{equation*}
We now define a sequence of real functions $\Omega_k(\mu)$ inductively by setting $\Omega_1(\mu)=1/\mu$ and
\begin{equation}
\label{2.2}
\Omega^{\Lambda_{k+1}}_{k+1}(\mu)=\frac 1{ \mu^{\lambda_{k+1}}}\Big (\frac {\Omega_{k}}{\frac {\lambda_{k+1}}{\Lambda_k}(\Lambda_k/\lambda_k-\Omega_{k})} \Big )^{\Lambda_{k}}.
\end{equation}
We note that $\Omega_k(\mu_N)= \omega_k$ for $1 \leq k \leq N-1$ and
\begin{eqnarray*}
\Omega^{\Lambda_{N}}_N(\mu_N) &=& \frac 1{ \mu^{\lambda_{N}}_N}\Big (\frac {\omega_{N-1}}{\frac {\lambda_{N}}{\Lambda_{N-1}}(\Lambda_{N-1}/\lambda_{N-1}-\omega_{N-1})} \Big )^{\Lambda_{N-1}}=\frac 1{ \mu^{\lambda_{N}}_N}\Big (\frac {\omega_{N-1}a_{N-1}}{a_{N}} \Big )^{\Lambda_{N-1}} \\
&=& \frac 1{ \mu^{\lambda_{N}}_N}\Big (\frac {G_{N-1}}{\mu_Na_{N}} \Big )^{\Lambda_{N-1}}=\Big (\frac {G_{N}}{\mu_Na_{N}} \Big )^{\Lambda_{N}}=\Big (\frac {\Lambda_{N}}{\lambda_{N}} \Big )^{\Lambda_{N}}.
\end{eqnarray*}
We now show by induction that if $\mu > e^M$, then for any $k \geq 1$,
\begin{equation}
\label{2.3}
\Omega_k(\mu) < \frac {\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}}.
\end{equation}
As we have seen above, $\Omega_N(\mu_N)=\Lambda_{N}/\lambda_{N}$, while \eqref{2.3} with $k=N$ would give $\Omega_N(\mu_N)<\Lambda_N/\lambda_N$ whenever $\mu_N > e^M$; this forces $\mu_N \leq e^M$, and hence the assertion of Theorem \ref{thm1} follows.
Now, to establish \eqref{2.3}, we first note that the case $k=1$ follows directly from our assumption \eqref{5} on considering the case $n=1$ there. Suppose now that \eqref{2.3} holds for some $k \geq 1$; then by relation \eqref{2.2} we have
\begin{eqnarray*}
\Omega^{\Lambda_{k+1}}_{k+1}(\mu) &=& \frac 1{ \mu^{\lambda_{k+1}}}\Big (\frac {\Omega_{k}}{\frac {\lambda_{k+1}}{\Lambda_k}(\Lambda_k/\lambda_k-\Omega_{k})} \Big )^{\Lambda_{k}} \\
&<& \frac 1{ \mu^{\lambda_{k+1}}}\Big (\frac {\frac {\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}}}{\frac {\lambda_{k+1}}{\Lambda_k}(\Lambda_k/\lambda_k-\frac {\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}})} \Big )^{\Lambda_{k}}=\frac 1{ \mu^{\lambda_{k+1}}}.
\end{eqnarray*}
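The last equality holds because, on substituting $\Omega_k=\frac{\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}}$, the denominator simplifies via $\Lambda_{k+1}-\lambda_{k+1}=\Lambda_k$:

```latex
\begin{equation*}
\frac {\lambda_{k+1}}{\Lambda_k}\Big(\frac{\Lambda_k}{\lambda_k}-\frac {\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}}\Big)
=\frac{\lambda_{k+1}}{\lambda_k}\Big(1-\frac{\lambda_{k+1}}{\Lambda_{k+1}}\Big)
=\frac{\lambda_{k+1}}{\lambda_k}\cdot\frac{\Lambda_k}{\Lambda_{k+1}}
=\frac {\Lambda_k/\lambda_k}{\Lambda_{k+1}/\lambda_{k+1}},
\end{equation*}
```

so the fraction inside the large parentheses equals $1$.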
This implies that
\begin{equation*}
\Omega_{k+1}(\mu) < \frac 1{ \mu^{\lambda_{k+1}/\Lambda_{k+1}}}< \frac {\Lambda_{k+1}/\lambda_{k+1}}{\Lambda_{k+2}/\lambda_{k+2}}.
\end{equation*}
The last inequality follows from the case $n=k+1$ of our assumption \eqref{5}, and this completes the proof.
\section{An Application of Theorem \ref{thm1}}
\label{sec 3} \setcounter{equation}{0}
Our goal in this section is to establish the following:
\begin{theorem}
\label{thm2}
Inequality \eqref{1} holds for $\lambda_k=k^{\alpha}$ for $0 < \alpha <1 $ with $C=1/(\alpha+1)$.
\end{theorem}
We need a lemma first:
\begin{lemma}\cite[Lemma 1, 2, p.18]{L&S}
\label{lem0}
For an integer $n \geq 1$ and $0 \leq r \leq 1$,
\begin{equation*}
\frac {1}{r+1}n(n+1)^r \leq \sum^n_{i=1}i^r \leq \frac {r}{r+1}\frac {n^r(n+1)^r}{(n+1)^r-n^r}.
\end{equation*}
\end{lemma}
Now we return to the proof of Theorem \ref{thm2}. It suffices to check that condition \eqref{5} is satisfied with $M=1/(\alpha+1)$ there. Explicitly, we need to show that for any integer $n \geq 1$,
\begin{equation}
\label{3.1}
\frac {\sum^n_{k=1}k^{\alpha}}{n^{\alpha}}\log \Big ( \Big ( 1+ \frac {(n+1)^{\alpha}}{\sum^n_{k=1}k^{\alpha}} \Big )\Big ( \frac {n^{\alpha}}{(n+1)^{\alpha}} \Big )\Big ) \leq \frac 1{\alpha+1}.
\end{equation}
Now we apply Lemma \ref{lem0} to obtain:
\begin{equation*}
1+ \frac {(n+1)^{\alpha}}{\sum^n_{k=1}k^{\alpha}} \leq 1+ \frac {\alpha+1}{n}.
\end{equation*}
We use this together with the upper bound in Lemma \ref{lem0} to see that inequality \eqref{3.1} is a consequence of the following inequality:
\begin{equation}
\label{3.2}
\alpha \Big ( \log ( 1+ \frac {\alpha+1}{n} ) - \log (1+1/n)^{\alpha} \Big ) \leq 1-\frac 1{(1+1/n)^{\alpha}}.
\end{equation}
We now define
\begin{equation*}
f(x)=1-(1+x)^{-\alpha}-\alpha \Big ( \log ( 1+ (\alpha+1)x ) -\alpha \log (1+x) \Big ).
\end{equation*}
Note that inequality \eqref{3.2} is equivalent to $f(1/n) \geq 0$. Hence it suffices to show that $f(x) \geq 0$ for $0 < x \leq 1$.
Calculation shows that
\begin{equation*}
f'(x)=\frac {\alpha g(x)}{(1+x)^{1+\alpha}\big(1+(1+\alpha)x \big )},
\end{equation*}
where
\begin{equation*}
g(x)=1+ (\alpha+1)x-\big(\alpha+(1-\alpha^2)x \big )(1+x)^{\alpha}.
\end{equation*}
Note that when $0<\alpha <1$,
\begin{equation*}
(1+x)^{\alpha} \leq 1+\alpha x.
\end{equation*}
It follows that
\begin{eqnarray*}
g(x) &\geq & 1+ (\alpha+1)x-\big(\alpha+(1-\alpha^2)x \big )(1+\alpha x) \\
&=& 1-\alpha+\alpha x -\alpha (1-\alpha^2)x^2 =: h(x).
\end{eqnarray*}
It is easy to see that $h(x)$ is concave for $0 \leq x \leq 1$, with $h(0)=1-\alpha >0$ and $h(1)=1-\alpha (1-\alpha^2) >0$. It follows that $h(x) >0$ for $0<x<1$, so that $g(x) >0$ and hence $f'(x) >0$ for $0<x<1$. Since $f(0)=0$, this implies $f(x) \geq 0$ for $0<x \leq 1$, which completes the proof of Theorem \ref{thm2}.
\section{Introduction}
The study of deep inelastic electron - proton ($ep$) scattering has significantly improved our understanding of the proton structure in the high energy (small - $x$) regime (For a recent review see, e.g. Ref. \cite{rmp}).
In the future the partonic structure of other hadrons will be
investigated \cite{pionkaon}. The pion structure has been discussed by several authors \cite{Holtmann,Kopeliovich:1996iw,Przybycien:1996z,Nikolaev:1997cn,holt,Kopeliovich:2012fd,McKenney:2015xis} and has recently become a hot topic due to the prospect of measuring the pion structure function $F_2^{\pi}(x,Q^2)$ in future electron - hadron colliders at BNL and CERN \cite{eic,lhec}. The basic idea is that the pion structure can be probed in electron - proton collisions through the Sullivan process \cite{Sullivan:1971kd}, where the electron scatters off the meson cloud of the proton target. The associated events can be separated by tagging a forward neutron in the final state, which carries a large fraction of the proton energy.
Theoretically, leading neutron production is usually described by assuming that the splitting $p \rightarrow \pi^+ n$ and the photon -- pion
interaction can be factorized, as represented in Fig.
\ref{Fig:diagram} (a), where $f_{\pi/p}$ represents the pion flux. Assuming the validity of the factorization hypothesis and the universality of the fragmentation process, which allows us to constrain $f_{\pi/p}$ using the data on leading neutron production in $pp$ collisions, we can obtain $\sigma^{\gamma^* \pi}$ and, consequently, determine the $x$ and $Q^2$ dependencies of the pion structure function. However, the validity of this procedure is limited by absorptive effects, denoted by $S^2_{eik}$ in Fig. \ref{Fig:diagram}, which are associated with soft rescatterings between the produced and spectator particles. The studies performed in Refs. \cite{pirner,kop,Khoze:2017bgh} indicated that these effects strongly affect leading neutron production in $pp$ collisions. In contrast, the absorptive corrections are predicted to be smaller in $ep$ collisions, and their effects become weaker at larger photon virtualities \cite{Nikolaev:1997cn,pirner,Kaidalov:2006cw,Khoze:2006hw,Kopeliovich:2012fd,levin}. Although the treatment of the absorptive corrections has advanced in recent years, they remain one of the main uncertainties in the study of leading neutron production in $pp$ collisions at RHIC and the LHC and in $ep$ collisions at the EIC and LHeC.
In Refs. \cite{nos1,nos2} we proposed a model to treat leading neutron production in $ep$ processes based on the color dipole formalism \cite{nik}. In this model, the virtual photon - pion cross section can be factorized in terms of the photon wave function (which describes the photon splitting into a $q\bar{q}$ pair) and the dipole - pion cross section $\sigma_{d\pi}$, as represented in Fig. \ref{Fig:diagram} (b). As shown in Refs. \cite{nos1,nos2}, the HERA data are quite well described by this approach under the assumption that the absorptive corrections can be factorized and represented by a multiplicative constant factor, denoted by ${{K}}$ in Ref. \cite{nos1}. Although this assumption is successful in the limited kinematical range probed by HERA, and is reasonable for obtaining a first approximation of the cross sections for the EIC and LHeC, it is fundamental to improve the description of $S^2_{eik}$ in order to derive more realistic predictions. Our goal in this paper is to revisit and update the approach proposed in Ref. \cite{pirner} for the absorptive effects. This approach allows us to estimate these effects in terms of the color dipole formalism, i.e. using the same ingredients as the model proposed in \cite{nos1,nos2}. As a consequence, we will be able to derive parameter free predictions for the cross sections, which can be directly compared with the HERA data. Moreover, we will estimate the strength of the absorptive effects for different photon virtualities and center - of - mass energies and present predictions for leading neutron production in future colliders.
\begin{figure}[t]
\begin{tabular}{ccc}
\includegraphics[width=.45\linewidth]{abso_1a.eps}& \,\,\,\,\,\,&
\includegraphics[width=.45\linewidth]{abso_1b.eps} \\
(a) & \,\,\,\,\,\, & (b)
\end{tabular}
\caption{ (a) Leading neutron $n$ production in $e p \rightarrow e n X $
interactions at high energies. (b) Description of the process in the color
dipole model.}
\label{Fig:diagram}
\end{figure}
Initially, let us discuss the approach proposed in Ref. \cite{nos1} to treat the leading neutron production in $ep$ collisions, disregarding the absorptive effects. At
high center - of - mass energies, this process can be seen
as a set of three factorizable subprocesses [See Fig. \ref{Fig:diagram} (b)]: i) the photon emitted by the electron fluctuates into a
quark-antiquark pair (the color dipole), ii) the color dipole interacts with the pion and iii) the leading neutron is formed. In the color dipole formalism, the differential cross section reads:
\begin{eqnarray}
\frac{d^2 \sigma(W,Q^2,x_L,t)}{d x_L d t} & = & f_{\pi/p} (x_L,t) \,
\sigma_{\gamma^* \pi}(\hat{W}^2,Q^2) \nonumber \\
& = & f_{\pi/p} (x_L,t) \times \int _0 ^1 dz \int d^2 \rr \sum_{L,T} \left|\Psi_{T,L} (z, \rr, Q^2)\right|^2 \sigma_{d\pi}({x}_{\pi}, \rr)
\label{crossgen}
\end{eqnarray}
where $Q^2$ is the virtuality of the exchanged photon,
$x_L$ is the proton momentum fraction carried by the
neutron and $t$ is the square of the four-momentum of the exchanged pion. Moreover,
$\hat{W}$ is the center-of-mass energy of the
virtual photon-pion system, which can be written as $\hat{W}^2 = (1-x_L) \, W^2$, where $W$ is the center-of-mass energy of the
virtual photon-proton system. In terms of the measured quantities $x_L$ and
transverse momentum $p_T$ of the neutron, the pion
virtuality is:
\beq
t \simeq-\frac{p_T^2}{x_L}-\frac{(1-x_L)(m_n^2-m_p^2 x_L)}{x_L} \,\,.
\label{virtuality}
\eeq
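As a purely illustrative numerical check of Eq. \eqref{virtuality} (not part of the original analysis), the sketch below evaluates the pion virtuality for arbitrary sample kinematics; the proton and neutron masses are the standard PDG values, which are not quoted in the text:

```python
# Illustrative evaluation of Eq. (virtuality); masses in GeV are the
# standard PDG values (an assumption, since the text does not quote them).
m_p, m_n = 0.938272, 0.939565

def pion_virtuality(x_L, p_T):
    """t (GeV^2) for a leading neutron with longitudinal momentum
    fraction x_L and transverse momentum p_T (GeV)."""
    return -p_T**2 / x_L - (1 - x_L) * (m_n**2 - m_p**2 * x_L) / x_L

# sample point: x_L = 0.7, p_T = 0.2 GeV
print(round(pion_virtuality(0.7, 0.2), 4))  # -0.1714
```

Note that $|t|$ stays nonzero even at $p_T=0$, reflecting the minimal virtuality of the exchanged pion.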
In Eq. (\ref{crossgen}), the virtual photon - pion cross section was expressed in terms of the transverse and longitudinal photon wave functions $\Psi_i$, which
describe the photon splitting into a $q\bar{q}$ pair of size $r \equiv |\rr|$, and the dipole-pion cross section $\sigma_{d\pi}$, which is determined by the QCD dynamics at high energies \cite{hdqcd}.
The variable $z$ represents the longitudinal photon momentum fraction carried
by the quark, the variable $\rr$ defines the relative transverse separation of the pair (dipole) and the scaling variable $x_{\pi}$ is defined by $x_{\pi} = x / (1-x_L)$, where $x$ is the Bjorken variable.
The flux factor $f_{\pi/p}$ gives the probability of the
splitting of a proton into a pion-neutron system and can be expressed as follows (See e.g. Ref. \cite{pirner})
\beq
f_{\pi/p}(x_L,t) = \frac{2}{3}\pi\sum_{\lambda\lambda'}
|\phi_{n\pi}^{\lambda\lambda'}(x_L,{\bf p}_T)|^2
\eeq
where $\phi_{n\pi}^{\lambda\lambda'}(x_L,{\bf p}_T)$ is the probability
amplitude to find, inside a proton with spin up, a neutron with longitudinal
momentum fraction $x_L$, transverse momentum ${\bf p}_T$ and helicity
$\lambda$ and a pion, with longitudinal momentum fraction $1-x_L$,
transverse momentum $-{\bf p}_T$ and helicity $\lambda'$. In the light-cone approach, the amplitudes $\phi_{n\pi}$ of a proton with spin $+1/2$, read:
\beqa
\label{phi}
\phi_{n\pi}^{1/2,0}(x_L,{\bf p}_T) & = &
\frac{\sqrt{3}g_0}{4\pi\sqrt{\pi}}\frac{1}{\sqrt{x_L^2(1-x_L)}}
\frac{m_n(x_L-1)}{M_{n\pi}^2-m_n^2}\nonumber\\
\phi_{n\pi}^{-1/2,0}(x_L,{\bf p}_T) & = &
\frac{\sqrt{3}g_0}{4\pi\sqrt{\pi}}\frac{1}{\sqrt{x_L^2(1-x_L)}}
\frac{|{\bf p}_T|e^{-i\varphi}}{M_{n\pi}^2-m_n^2}\,,
\eeqa
where $M_{n\pi}^2$ is the invariant mass of the pion-neutron system,
given by
\[
M_{n\pi}^2 = \frac{m_n^2+p_T^2}{x_L} + \frac{m_\pi^2+p_T^2}{1-x_L}\,,
\]
with $m_n$ and $m_\pi$ being the neutron and the pion masses, $g_0$ is the bare
pion-nucleon coupling constant and $\varphi$ is the azimuthal angle
in the transverse plane.
Because of the extended nature of the hadrons involved, the interaction
amplitudes in the above equations have to be
modified by including a phenomenological $\pi NN$ form factor, $G(x_L,p_T)$. It is
important to stress here that while the vertex is
derived from an effective meson-nucleon Lagrangian, the
form factor is introduced ad hoc. In our analysis we will choose the
covariant form factor, corrected by the Regge factor, given by
\beq
\label{covff}
G(x_L,p_T) = {\rm exp}[R_{c}^2(t-m_\pi^2)] \, (1-x_L)^{-t}
\eeq
where $R_c^2 = 0.3$ GeV$^2$ was constrained using the HERA data (For details see Ref. \cite{nos1}).
The amplitude $\phi_{n\pi}^{\lambda\lambda'}(x_L,{\bf p}_T)$ changes to
$\phi_{n\pi}^{\lambda\lambda'}(x_L,{\bf p}_T) \, G(x_L,p_T)$ and then the
pion flux becomes:
\beq
\label{flux}
f_{\pi/p}(x_L,t) = \frac{2}{3}\pi\sum_{\lambda\lambda'}
|\phi_{n\pi}^{\lambda\lambda'}(x_L,{\bf p}_T)|^2|G(x_L,p_T)|^2\,,
\eeq
where $2/3$ is the isospin factor and the azimuthal angle in the transverse plane
has been integrated out.
In order to include the absorptive effects in our predictions for the leading neutron spectrum $d\sigma/dx_L$, we will follow the approach proposed
in Ref. \cite{pirner}, where these effects were estimated using the high - energy Glauber approximation \cite{glauber} to treat the multiple scatterings between the dipole and the pion -- neutron system. As demonstrated in Ref. \cite{pirner}, such an approach can be easily implemented in the impact parameter space, implying that the spectrum can be expressed as follows
\beqa
\label{desdzgamma}
\frac{d\sigma(W,Q^2,x_L)}{dx_L} & = &
\int\! d^2{\rb}_{rel} \, \rho_{n\pi}(x_L,{\rb}_{rel})\,
\int\! dz \, d^2{\rr} \,\sum_{L,T} \left|\Psi_{T,L} (z, \rr, Q^2)\right|^2 \sigma_{d\pi}({x}_{\pi}, \rr) \,
S_{eik}^2(\rr,\rb_{rel}) \,\,\,,
\eeqa
where $ \rho_{n\pi}(x_L,{\rb}_{rel})$ is the probability density of
finding a neutron and a pion with momenta $x_L$ and $1-x_L$, respectively, and with a
relative transverse separation $\rb_{rel}$, which is given by
\beq
\rho_{n\pi}(x_L,{\rb}_{rel}) = \sum_i|\psi^i_{n\pi}(x_L,{\rb}_{rel})|^2\,.
\label{rho}
\eeq
with
\beq
\psi^i_{n\pi}(x_L,{\rb}_{rel}) = \frac{1}{2\pi}\int\!d^2{\bf p}_T \,
e^{i{\rb}_{rel}\cdot{\bf p}_T} \, \phi^i_{n\pi}(x_L,{\bf p}_T) \,,
\eeq
and $\phi^i_{n\pi}$ = $\sqrt{2/3} \, \phi^{\lambda\lambda'}_{n\pi} \, G(x_L,p_T)$. Moreover, the survival factor $S_{eik}^2$ associated to the absorptive effects is expressed in terms of the dipole -- neutron ($\sigma_{dn}$) cross sections as follows
\begin{eqnarray}
S_{eik}^2(\rr,\rb_{rel}) = \Big\{1-\Lambda_{\rm eff}^2\frac{\sigma_{dn}(x_n,{\rr})}{2\pi}\,
{\rm exp}\Big[-\frac{\Lambda_{\rm eff}^2{\rb}_{rel}^2}{2}\Big]\Big\}\,,
\label{Eq:esse2}
\end{eqnarray}
where $x_n = x/x_L$ and $\Lambda^2_{\rm eff}$ is an effective parameter that was found to be equal to $0.1$ GeV$^2$ in Ref. \cite{pirner}. In our analysis, we will assume that $\sigma_{dn}$ is equal to the dipole - proton cross section, $\sigma_{dp}$, constrained by the HERA data. Finally, in order to estimate the spectrum, we must
specify the dipole - pion cross section, which is dependent on the description of the QCD dynamics at small - $x$. As in Ref. \cite{nos1}, we will assume that this quantity can be related to the dipole - proton cross section using the additive quark model. Moreover, $\sigma_{dp}$ will be described by the Color Glass Condensate (CGC) formalism, as given in the phenomenological model proposed in Ref. \cite{iim}. As a consequence, we will have that:
\begin{eqnarray}
\sigma_{d\pi} (x,\rr) = \frac{2}{3} \cdot \sigma_{dp} ({x}, \rr) = \frac{2}{3} \cdot 2 \pi R_p^2 \times \left\{ \begin{array}{ll}
{\mathcal N}_0\, \left(\frac{r\, Q_s}{2}\right)^{2\left(\gamma_s +
\frac{\ln (2/r\, Q_s)}{\kappa \,\lambda \, Y}\right)}\,, & \mbox{for $r
Q_s({x}) \le 2$}\,,\\
1-\text{e}^{-a\,\ln^2\,(b\,r\, Q_s)}\,, & \mbox{for $r Q_s({x}) > 2$}\,,
\end{array} \right.
\label{Eq:sigdp}
\end{eqnarray}
where $Y = \ln (1/{x})$, and $a$ and $b$ are determined by continuity conditions at $r Q_s({x})=2$. The parameters
$\gamma_s= 0.7376$, $\kappa= 9.9$, ${\mathcal N}_0=0.7$ and $R_p = 3.344$ GeV$^{-1}$ have been adjusted using the HERA data in Ref. \cite{soyez}, with
the saturation scale $Q_s$ being given by:
\beq
Q^2_s ({x}) = Q^2_0 \left( \frac{x_0}{{x}}\right)^{\lambda}
\label{qsat}
\eeq
with $x_0=1.632\times 10^{-5}$, $\lambda=0.2197$, $Q_0^2 = 1.0$ GeV$^2$.
The first line of Eq. (\ref{Eq:sigdp}) describes the linear regime whereas
the second one includes saturation effects.
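The continuity conditions fixing $a$ and $b$ can be made explicit: matching the value and the first derivative of the two lines of Eq. \eqref{Eq:sigdp} at $r Q_s = 2$ yields closed-form expressions, which one can verify by direct computation. The sketch below (illustrative only) checks the matching numerically; the parameter \texttt{expo\_corr} stands for the $x$-dependent combination dividing $\ln(2/r Q_s)$ in the exponent of the first line, on which the matching does not depend:

```python
import math

# fitted parameters quoted in the text
N0, gamma_s = 0.7, 0.7376

# Solving the two matching equations (equal value and equal slope at
# r*Qs = 2) for the second line of Eq. (sigdp) gives:
a = -N0**2 * gamma_s**2 / ((1 - N0)**2 * math.log(1 - N0))
b = 0.5 * (1 - N0) ** (-(1 - N0) / (N0 * gamma_s))

def N_bfkl(t, expo_corr=2.0):
    # first line of Eq. (sigdp), t = r*Qs <= 2; expo_corr is the
    # x-dependent combination dividing ln(2/(r*Qs)) (illustrative value)
    return N0 * (t / 2) ** (2 * (gamma_s + math.log(2 / t) / expo_corr))

def N_sat(t):
    # second line of Eq. (sigdp), t = r*Qs > 2
    return 1 - math.exp(-a * math.log(b * t) ** 2)

print(round(N_bfkl(2.0), 6), round(N_sat(2.0), 6))  # 0.7 0.7

# the slopes also agree at the junction, independently of expo_corr
eps = 1e-6
slope_left = (N_bfkl(2.0) - N_bfkl(2.0 - eps)) / eps
slope_right = (N_sat(2.0 + eps) - N_sat(2.0)) / eps
```

Both slopes equal $\mathcal{N}_0\gamma_s$ at the junction, so the parametrization is $C^1$ there.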
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[width=.40\linewidth]{abso_2a.eps}& \,\,\, &
\includegraphics[width=.40\linewidth]{abso_2b.eps} \\
(a) & \,\,\, & (b)
\end{tabular}
\caption{(a) Comparison of the CDM prediction with the H1 data \cite{hera1}.
(b) Predictions for the spectra considering different center - of - mass
energies and $Q^2 = 5$ GeV$^2$.}
\label{Fig:comp}
\end{figure}
With the ingredients introduced above, we are ready to obtain parameter
free predictions that can be compared with the HERA data. We can also derive
predictions which can be tested in future $ep$ colliders.
In Fig. \ref{Fig:comp} (a) the CDM prediction for the kinematical range probed
by HERA is presented. As can be seen, the H1 data \cite{hera1} are quite well
described in the region $x_L \gtrsim 0.5$. As shown in previous studies
\cite{kop,Kaidalov:2006cw}, for smaller values of $x_L$, additional contributions
are expected to play a significant role. We can estimate the leading neutron
spectrum for a kinematical range beyond that probed by HERA. We are particularly
interested in smaller values of the photon virtuality, where we expect a larger
contribution of
saturation effects, and for the center - of - mass energies that will be
reached at the EIC and LHeC. The results are presented in Fig. \ref{Fig:comp} (b).
From the figure we see that the predictions are not strongly dependent on $W$.
This is expected from the results presented in Ref. \cite{nos1}, where we have
demonstrated that saturation leads to Feynman scaling, i.e. the energy
independence of the $x_L$ spectra. Such scaling is expected to be exact when
the saturation scale becomes larger than the photon virtuality, which is
satisfied for small values of $Q^2$ ($\lesssim 2$ GeV$^2$). However, as shown
e.g. in Ref. \cite{iim}, the presence of the saturation effects also modifies
the behavior of the cross sections in a larger $Q^2$ range, implying the result
observed in Fig. \ref{Fig:comp} (b). In contrast, the DGLAP evolution leads to
stronger violation of Feynman scaling, as shown in Ref. \cite{nos1}. In
a future experimental analysis of the leading neutron spectrum it will be very
interesting to test this prediction of the Color Dipole Model.
As discussed above, in order to measure the $\gamma \pi$ cross section and extract
the pion structure function, it is crucial to have control of the absorptive effects
in the kinematical range probed by the collider. In particular, we should know the
dependence of these effects on $Q^2$, $W$ and $x_L$. We can estimate the impact of
the absorptive effects through the calculation of the ratio between the cross
sections with and without absorption, where the latter is estimated assuming
$S^2_{eik} = 1$. Our predictions for this ratio, denoted $K_{abs}$ hereafter,
are presented in Fig. \ref{Fig:kfac}. Our results show that the impact
increases for smaller values of $Q^2$ and larger energies $W$. For $Q^2 = 50$
GeV$^2$, we see that $K_{abs} \approx 0.9$ for $x_L \gtrsim 0.5$, with the
predictions being similar for the three values of $W$. This weak absorption is
expected in the Color Dipole Model, since at large values of $Q^2$ the main
contribution to the cross section comes from dipoles with a small pair
separation. In this regime, known as color transparency, the impact of the
rescatterings is small, which implies that the absorptive effects become
negligible. Another important aspect is that for large photon virtualities, the
main effect of absorption is to suppress the cross section by a constant factor.
Similar results were derived in Ref. \cite{pirner}. On the other hand, for
photoproduction ($Q^2 = 0$), we observe strong absorptive effects, which
reduce
the cross sections by a factor $\approx 0.4$ for $x_L = 0.5$. This result is
also expected, since for small $Q^2$ the cross section is dominated by large
dipoles and, consequently, the contribution of the rescatterings cannot be
disregarded. For larger values of $x_L$, absorptive effects cannot be
modelled by a constant factor. Our conclusions agree with those derived in
Ref. \cite{Kaidalov:2006cw} using Regge theory. Finally, our results
indicate that the contribution of the absorptive effects is not strongly
energy dependent.
This result suggests that the main conclusion of Ref. \cite{nos1}, that the
spectra will satisfy Feynman scaling, is still valid when the
absorptive effects are estimated using a more realistic model,
as already observed in Fig. \ref{Fig:comp} (b).
\begin{figure}[t]
\begin{tabular}{ccc}
\includegraphics[scale=0.22]{abso_3a.eps} &
{\includegraphics[scale=.22]{abso_3b.eps}} &
\includegraphics[scale=0.22]{abso_3c.eps} \\
(a) & (b) & (c)
\end{tabular}
\caption{Dependence of the absorptive effects on $x_L$ in leading neutron
production in $ep$ collisions for different values of the photon virtuality
and (a) $W = 60$ GeV, (b) $W = 100$ GeV and (c) $W = 1000$ GeV.}
\label{Fig:kfac}
\end{figure}
As a summary, in this paper we have updated the treatment of the absorptive
effects and incorporated them in the model proposed in our previous studies
\cite{nos1,nos2,nos3,nos4}, which is based on the color dipole formalism.
Using the approach proposed in Ref. \cite{pirner}, we have been able to
derive parameter free predictions for the leading neutron spectra. We
demonstrated that our model describes the HERA data in the region where
the pion exchange is expected to dominate. Moreover, we have presented
predictions for the kinematical ranges that will be probed by the future
EIC and LHeC. Our results indicate that the leading neutron spectra are not
strongly energy dependent at small photon virtualities. As shown in
Ref. \cite{nos1}, this almost energy independence (Feynman scaling) is a
consequence of saturation effects, which are expected to become significant
at small - $Q^2$ and large energies. We have estimated the impact of the
absorptive effects, demonstrating that they increase at smaller photon
virtualities and depend on the longitudinal momentum fraction $x_L$.
Our results show that modelling these effects by a constant factor is a
good approximation only for large $Q^2$. Our main conclusion is that a
realistic measurement of the $\gamma \pi$ cross section in future colliders
and the extraction of the pion structure function must take into account the
important contribution of the absorptive effects. Future experimental data on
leading neutron production in $ep$ collisions at the EIC
will be crucial to test the main assumptions of our model, as well as
to improve our understanding of this important observable.
\begin{acknowledgments}
This work was partially financed by the Brazilian funding
agencies CNPq, FAPESP, FAPERGS and INCT-FNA (process number
464898/2014-5).
\end{acknowledgments}
\hspace{1.0cm}
\section{Introduction and results}
The aim of this note is to establish new boundary Harnack inequalities for nonlocal elliptic operators in non-divergence form in general open sets.
To our knowledge, the first boundary Harnack principle for nonlocal elliptic operators was established by Bogdan \cite{Bogdan1}, who proved it for the fractional Laplacian in Lipschitz domains.
Later, his result was extended to arbitrary open sets by Song and Wu in \cite{SW}; see also Bogdan-Kulczycki-Kwasnicki \cite{Bogdan2}.
More recently, Bogdan-Kumagai-Kwasnicki \cite{Bogdan3} established the boundary Harnack principle in general open sets for a wide class of Markov processes with jumps.
In particular, their results apply to all linear operators of the form
\begin{equation}\label{operator-L-linear}
Lu(x)=\int_{\mathbb{R}^n}\left(\frac{u(x+y)+u(x-y)}{2}-u(x)\right)K(y)\,dy,
\end{equation}
with kernels $K(y)=K(-y)$ satisfying
\begin{equation}\label{ellipt-const-linear}
\qquad \qquad \qquad 0<\frac{\lambda}{|y|^{n+2s}}\leq K(y)\leq \frac{\Lambda}{|y|^{n+2s}},\qquad y\in \mathbb{R}^n;
\end{equation}
see \cite[Example 5.6]{Bogdan3}.
Here, we consider \emph{non-divergence} form operators
\begin{equation}\label{operator-L}
Lu(x)=\int_{\mathbb{R}^n}\left(\frac{u(x+y)+u(x-y)}{2}-u(x)\right)K(x,y)\,dy,
\end{equation}
with kernels $K(x,y)=K(x,-y)$ satisfying
\begin{equation}\label{ellipt-const}
\qquad \qquad \qquad 0<\frac{\lambda}{|y|^{n+2s}}\leq K(x,y)\leq \frac{\Lambda}{|y|^{n+2s}},\qquad x,y\in \mathbb{R}^n.
\end{equation}
No regularity in $x$ is assumed.
These are the nonlocal analogues of second order uniformly elliptic operators $L=\sum_{i,j}a_{ij}(x)\partial_{ij}$ with bounded measurable coefficients; see \cite{BL,S,CS}.
To our knowledge, our results are the first ones that establish boundary Harnack inequalities for such class of nonlocal operators in non-divergence form.
Quite recently, we established in \cite{RS-C1} a boundary Harnack estimate for operators of the form \eqref{operator-L}-\eqref{ellipt-const} under the important extra assumption that $K(x,y)$ is \emph{homogeneous} in $y$.
The results of \cite{RS-C1} are for $C^1$ domains, and all the proofs are by blow-up and perturbative arguments.
The techniques of the present paper are of very different nature, and completely independent from those in \cite{RS-C1}.
Our first result establishes the boundary Harnack principle in general open sets~$\Omega$, and reads as follows.
\begin{thm}\label{thm-bdryH}
Let $s\in(0,1)$, and $L$ be any operator of the form \eqref{operator-L}-\eqref{ellipt-const}.
Let $\Omega\subset\mathbb{R}^n$ be any open set, with $0\in\partial\Omega$, and $u_1, u_2\in C(B_1)$ be two viscosity solutions of
\begin{equation}\label{pb0}
\left\{ \begin{array}{rcll}
L u_1=Lu_2 &=&0&\textrm{in }B_1\cap \Omega \\
u_1=u_2&=&0&\textrm{in }B_1\setminus\Omega,
\end{array}\right.\end{equation}
satisfying $u_i\geq0$ in $\mathbb{R}^n$ and
\[\int_{\mathbb{R}^n}\frac{u_i(x)}{1+|x|^{n+2s}}\,dx=1.\]
Then,
\[ C^{-1}u_2\leq u_1\leq C\,u_2\qquad\textrm{in}\ B_{1/2}.\]
The constant $C$ depends only on $n$, $s$, $\Omega$, and ellipticity constants.
\end{thm}
Here, the equation $Lu=0$ should be understood in the viscosity sense as $M^+u\geq0\geq M^-u$, where
\[M^+ u=M^+_{\mathcal L_0}u=\sup_{L\in\mathcal L_0} Lu,\qquad
M^- u=M^-_{\mathcal L_0}u=\inf_{L\in\mathcal L_0} Lu,\]
and $\mathcal L_0$ is the class of operators of the form \eqref{operator-L-linear}-\eqref{ellipt-const-linear}; see \cite{CS} for more details.
The fact that both $u_1$ and $u_2$ solve the \emph{same} equation $Lu_1=Lu_2=0$ can be stated as $M^+(au_1+bu_2)\geq0$ for all $a,b\in\mathbb{R}$.
Notice that taking $a=\pm1$ and $b=0$, or $a=0$ and $b=\pm1$, we get that $M^+u_i\geq0\geq M^-u_i$.
We will in fact prove a more general version of Theorem \ref{thm-bdryH}, in which we allow a right hand side in the equation, $Lu_1=f_1$ and $Lu_2=f_2$ in $\Omega\cap B_1$, with $\|f_i\|_{L^\infty}\leq \delta$, and $\delta>0$ small enough.
In terms of the extremal operators $M^+$ and $M^-$, it reads as follows.
\begin{thm}\label{thm-main}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any open set.
Assume that there is $x_0\in B_{1/2}$ and $\varrho>0$ such that $B_{2\varrho}(x_0)\subset \Omega\cap B_{1/2}$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$, and ellipticity constants, such that the following statement holds.
Let $u_1,u_2\in C(B_1)$ be viscosity solutions of
\begin{equation}\label{pb}
\left\{ \begin{array}{rcll}
M^+(au_1+bu_2) &\geq&-\delta(|a|+|b|)&\textrm{in }B_1\cap \Omega\\
u_1=u_2&=&0&\textrm{in }B_1\setminus\Omega
\end{array}\right.\end{equation}
for all $a,b\in\mathbb{R}$, and such that
\begin{equation}\label{u-is-nonneg}
u_i\geq0\quad\mbox{in}\quad \mathbb{R}^n, \qquad \int_{\mathbb{R}^n}\frac{u_i(x)}{1+|x|^{n+2s}}\,dx=1.
\end{equation}
Then,
\[ C^{-1}u_2\leq u_1\leq C\,u_2\qquad\textrm{in}\ B_{1/2}.\]
The constant $C$ depends only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{thm}
One of the advantages of Theorem \ref{thm-main} is that it allows us to establish the following result.
\begin{thm}\label{thm-Lip}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
Then, there is $\delta>0$, depending only on $n$, $s$, $\Omega$, and ellipticity constants, such that the following statement holds.
Let $u_1,u_2\in C(B_1)$ be viscosity solutions of \eqref{pb} satisfying \eqref{u-is-nonneg}.
Then, there is $\alpha\in(0,1)$ such that
\[ \left\|\frac{u_1}{u_2}\right\|_{C^{0,\alpha}(\overline\Omega\cap B_{1/2})}\leq C.\]
The constants $\alpha$ and $C$ depend only on $n$, $s$, $\Omega$, and ellipticity constants.
\end{thm}
The proof of Theorems \ref{thm-bdryH} and \ref{thm-main} that we present here is quite short and simple, and to our knowledge is new even for the fractional Laplacian $(-\Delta)^s$.
Such a proof uses very strongly the nonlocal character of the operator (as it must! Recall that the boundary Harnack principle is in general false for second order (local) operators in H\"older domains \cite{BHP3}).
Then, we prove Theorem \ref{thm-Lip} by iterating appropriately Theorem \ref{thm-main}.
The paper is organized as follows.
In Section \ref{sec2} we give some preliminaries.
In Section \ref{sec3} we establish Theorems \ref{thm-main} and \ref{thm-bdryH}.
In Section \ref{sec4} we prove Theorem \ref{thm-Lip}.
Finally, in Section \ref{sec5} we extend those results to non-symmetric operators and to operators with drift.
\section{Preliminaries}
\label{sec2}
In this section we recall some results that will be used in our proofs.
An important ingredient to prove our boundary Harnack inequality is the interior Harnack inequality for nonlocal equations in non-divergence form, which states that if $u$ solves
\[M^+u\geq -C_0\qquad \textrm{and}\qquad M^-u\leq C_0\qquad \textrm{in}\quad B_1,\]
and $u\geq0$ in $\mathbb{R}^n$, then
\[\sup_{B_{1/2}}u\leq C\left(\inf_{B_{1/2}}u+C_0\right);\]
see \cite{CS} and also \cite{BL}.
In our proof, in fact, we will need the following two results, which imply the Harnack inequality.
The first one is a half Harnack inequality for subsolutions.
\begin{thm}[\cite{CS3}]\label{half-Harnack-sub}
Assume that $u\in C(B_1)$ satisfies
\[M^+u\geq -C_0\quad \textrm{in}\ B_1\]
in the viscosity sense.
Then,
\[\sup_{B_{1/2}}u\leq C\left(\int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+2s}}\,dx+C_0\right).\]
The constant $C$ depends only on $n$, $s$, and ellipticity constants.
\end{thm}
The second one is the other half Harnack inequality, for supersolutions.
\begin{thm} \label{half-Harnack-sup}
Assume that $u\in C(B_1)$ satisfies
\[M^-u\leq C_0\quad \textrm{in}\ B_1\]
in the viscosity sense.
Assume in addition that $u\geq0$ in $\mathbb{R}^n$.
Then,
\[\int_{\mathbb{R}^n}\frac{u(x)}{1+|x|^{n+2s}}\,dx\leq C\left(\inf_{B_{1/2}}u+C_0\right).\]
The constant $C$ depends only on $n$, $s$, and ellipticity constants.
\end{thm}
When $s\geq\frac12$, the result can be found in \cite[Corollary 6.2]{CD}, where it is proved in the more general setting of parabolic and nonsymmetric operators with drift.
For completeness, we give a short proof of Theorem \ref{half-Harnack-sup} here.
\begin{proof}[Proof of Theorem \ref{half-Harnack-sup}]
Let $b\in C^\infty_c(B_{3/4})$ be such that $0\leq b\leq 1$ and $b\equiv1$ in $B_{1/2}$.
Let $t>0$ be the maximum value for which $u\geq tb$.
Notice that $t\leq \inf_{B_{1/2}}u$.
Since $u$ and $b$ are continuous in $B_1$, there is $x_0\in B_{3/4}$ such that $u(x_0)=tb(x_0)$.
Now, on the one hand, we have
\[M^-(u-tb)(x_0)\leq M^-u(x_0)-tM^-b\leq C_0+Ct.\]
On the other hand, since $u-tb\geq0$ in $\mathbb{R}^n$ and $(u-tb)(x_0)=0$ then
\[M^-(u-tb)(x_0)=\lambda\int_{\mathbb{R}^n}\frac{u(z)-tb(z)}{|x_0-z|^{n+2s}}dz\geq c\int_{\mathbb{R}^n}\frac{u(z)}{1+|z|^{n+2s}}dz-Ct.\]
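Two points are implicit in this display; we sketch the justification. Since all increments of $u-tb$ at its global minimum $x_0$ are nonnegative, the infimum defining $M^-$ is attained by the smallest admissible kernel $\lambda|y|^{-n-2s}$, giving the first equality. For the inequality, the two terms are estimated separately:

```latex
\begin{equation*}
\lambda\int_{\mathbb{R}^n}\frac{u(z)}{|x_0-z|^{n+2s}}\,dz \;\geq\; c\int_{\mathbb{R}^n}\frac{u(z)}{1+|z|^{n+2s}}\,dz,
\qquad
\lambda\left|\int_{\mathbb{R}^n}\frac{\delta b(x_0,y)}{|y|^{n+2s}}\,dy\right| \leq C,
\end{equation*}
```

where $\delta b(x_0,y)=\frac12\big(b(x_0+y)+b(x_0-y)\big)-b(x_0)$: the first bound uses $|x_0-z|^{n+2s}\leq C(1+|z|^{n+2s})$ for $x_0\in B_{3/4}$, and the second uses that $b\in C^\infty_c(B_{3/4})$ is smooth, so that the $b$-part of the integral, understood in this second-difference sense, contributes at most $Ct$.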
Combining the previous inequalities, we get
\[ \inf_{B_{1/2}} u\geq t\geq -c_1C_0+c_2\int_{\mathbb{R}^n}\frac{u(z)}{1+|z|^{n+2s}}dz,\]
and the result follows.
\end{proof}
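Let us record here, for later use, the standard inequalities for the extremal operators that were just used and will be used repeatedly below: for any functions $u$, $v$ one has
\[M^-(u+v)\geq M^-u+M^-v,\qquad M^+(u+v)\leq M^+u+M^+v,\]
and hence, writing $u=(u-tb)+tb$ with $t>0$,
\[M^-(u-tb)\leq M^-u-tM^-b.\]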
\section{Proof of Theorem \ref{thm-main}}
\label{sec3}
Theorem \ref{thm-bdryH} is a particular case of Theorem \ref{thm-main}.
We give below the proof of Theorem \ref{thm-main}.
Before that, we need a Lemma.
\begin{lem}\label{lem-use}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any open set.
Assume that there is $x_0\in B_{1/2}$ and $\varrho>0$ such that $B_{2\varrho}(x_0)\subset \Omega\cap B_{1/2}$.
Denote $D=B_\varrho(x_0)$.
Let $u\in C(B_1)$ be a viscosity solution of
\[\left\{ \begin{array}{rcll}
M^+u\geq-C_0\qquad\textrm{and}\qquad M^-u &\leq&C_0&\textrm{in }B_1\cap \Omega\\
u&=&0&\textrm{in }B_1\setminus\Omega
\end{array}\right.\]
Assume in addition that $u\geq0$ in $\mathbb{R}^n$.
Then,
\[\sup_{B_{3/4}}u\leq C\left(\inf_{D}u+C_0\right),\]
with $C$ depending only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{lem}
\begin{proof}
Since $u\geq0$ in $B_1$ and $M^+u\geq-C_0$ in $B_1\cap \{u>0\}$, then $M^+u\geq-C_0$ in all of $B_1$.
Thus, by Theorem \ref{half-Harnack-sub} we have
\[\sup_{B_{3/4}}u\leq C\left(\int_{\mathbb{R}^n}\frac{u(x)}{1+|x|^{n+2s}}\,dx+C_0\right).\]
(Notice that Theorem \ref{half-Harnack-sub} gives the bound in $B_{1/2}$, but by a standard covering argument we get the same in $B_{3/4}$.)
Now, using Theorem~\ref{half-Harnack-sup} in the ball $B_{2\varrho}(x_0)$, we find
\[\int_{\mathbb{R}^n}\frac{u(x)}{1+|x|^{n+2s}}\,dx\leq C\left(\inf_D u+C_0\right),\]
where $D=B_\varrho(x_0)$.
Combining the previous estimates, the Lemma follows.
\end{proof}
We next give the:
\begin{proof}[Proof of Theorem \ref{thm-main}]
First, as in Lemma \ref{lem-use}, by \eqref{u-is-nonneg} we have
\begin{equation}\label{ineq0}
u_i\leq C\quad \textrm{in}\ B_{3/4}
\end{equation}
and
\begin{equation}\label{ineq2}
u_i\geq c>0\quad\textrm{in}\ B_\varrho(x_0),
\end{equation}
provided that $\delta>0$ is small enough.
Notice that $c$ depends on $n$, $s$, ellipticity constants, and $\varrho$, but not on $\Omega$.
Let now $b\in C^\infty_c(B_{1/2})$ be such that $0\leq b\leq 1$ and $b\equiv1$ in $B_{1/4}$, and let $\eta\in C^\infty_c(B_\varrho(x_0))$ be such that $0\leq\eta\leq 1$ in $B_\varrho(x_0)$ and $\eta=1$ in $B_{\varrho/2}(x_0)$.
Let
\[w:=u_1\chi_{B_{3/4}}+C_1(b-1)+C_2\eta.\]
Then, thanks to \eqref{ineq0}, if $C_1$ is chosen large enough we will have
\[w\leq 0\quad\textrm{in}\ \mathbb{R}^n\setminus B_{1/2}.\]
Moreover, taking now $C_2$ large enough,
\[\begin{split}
M^+w& \geq M^+u_1+M^-(-u_1\chi_{\mathbb{R}^n\setminus B_{3/4}})+C_1M^-b+C_2M^-\eta \\
&\geq -\delta-C-CC_1+cC_2 \geq 1\qquad\qquad\qquad
\textrm{in}\quad \Omega \cap B_{1/2}\setminus B_\varrho(x_0).\end{split}\]
Here we used that $M^+u_1\geq-\delta$ in $\Omega\cap B_1$, that $M^-(-u_1\chi_{\mathbb{R}^n\setminus B_{3/4}})\geq -C\int_{\mathbb{R}^n}u_1(x)/(1+|x|^{n+2s})\,dx\geq -C$ in $B_{1/2}$, that $M^-b\geq -C$, and that $M^-\eta\geq c>0$ in $B_1\setminus B_\varrho(x_0)$.
Analogously, for any $C_3\leq \delta^{-1}$ we get that
\[M^+(w-C_3u_2)\geq 1-C_3\delta\geq 0\quad \textrm{in}\ \Omega\cap B_{1/2}\setminus B_\varrho(x_0).\]
Finally, since $w\leq C$ in $B_\varrho(x_0)$ and $u_2\geq c>0$ in $B_\varrho(x_0)$, we clearly have
\[w\leq C_3u_2\quad\textrm{in}\ B_\varrho(x_0)\]
for some big constant $C_3$.
Taking $\delta$ small enough so that $\delta^{-1}\geq C_3$, by the comparison principle we find $w\leq C_3u_2$ in all of $\mathbb{R}^n$.
In particular, since $w\equiv u_1$ in $B_{1/4}\setminus B_\varrho(x_0)$, this yields
\[u_1\leq C_3u_2\quad \textrm{in}\ B_{1/4}\setminus B_\varrho(x_0).\]
Since $u_1$ and $u_2$ are comparable in $B_\varrho(x_0)$, we deduce
\[u_1\leq C u_2\quad \textrm{in}\ B_{1/4},\]
maybe with a bigger constant $C$.
Finally, a standard covering argument yields the same result in $B_{1/2}$, and thus the theorem is proved.
\end{proof}
\section{Proof of Theorem \ref{thm-Lip}}
\label{sec4}
We prove here Theorem \ref{thm-Lip}.
Throughout this section, $\Omega$ will be a Lipschitz domain with $0\in\partial\Omega$.
In particular, there is $\varrho>0$ such that for every $r\in(0,1)$ there is $x_r\in B_{r/2}$ for which
\begin{equation}\label{P}
B_{2\varrho r}(x_r)\subset \Omega\cap B_{r/2}.
\end{equation}
Throughout this section, we denote $D_r=B_{\varrho r}(x_r)$.
We will divide the proof of Theorem \ref{thm-Lip} in several steps.
First, we have the following boundary Harnack type estimate, which is an immediate consequence of Theorem \ref{thm-main}.
\begin{lem}\label{lem1}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any open set.
Assume that there is $x_0\in B_{1/2}$ and $\varrho>0$ such that $B_{2\varrho}(x_0)\subset \Omega\cap B_{1/2}$.
Denote $D=B_\varrho(x_0)$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$, and ellipticity constants, such that the following statement holds.
Let $u_1$ and $u_2$ be two functions satisfying, for all $a,b\in\mathbb{R}$,
\begin{equation}\label{pb3}
\left\{ \begin{array}{rcll}
M^+(au_1+bu_2) &\geq&-|a|C_0-|b|\delta&\textrm{in }B_1\cap \Omega\\
u_1=u_2&=&0&\textrm{in }B_1\setminus\Omega,
\end{array}\right.\end{equation}
with $u_1,u_2\geq0$ in $\mathbb{R}^n$ and $\inf_D u_2=1$.
Then,
\begin{equation}\label{mec00}
\inf_D \frac{u_1}{u_2}\leq C\left(\inf_{B_{1/2}}\frac{u_1}{u_2}+C_0\right).
\end{equation}
The constant $C$ depends only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{lem}
\begin{proof}
Dividing by $\inf_D u_1$ if necessary, we may assume $\inf_D u_1=1$.
By the interior Harnack inequality, $1=\inf_D u_2\leq \sup_D u_2\leq C$ (provided that $\delta$ is small enough).
Thus,
\[\inf_D \frac{u_1}{u_2}\leq C_1,\]
with $C_1$ independent of $C_0$.
Now, if $C_0\leq \delta$, then by Theorem \ref{thm-main} we have $u_2\leq C_2u_1$ in $B_{1/2}$, and therefore
\[\inf_D \frac{u_1}{u_2}\leq C_1\leq C_1C_2\left(\inf_{B_{1/2}}\frac{u_1}{u_2}\right).\]
If $C_0\geq\delta$, then we simply have
\[\inf_D \frac{u_1}{u_2}\leq C_1\leq \frac{C_1}{\delta}C_0=CC_0.\]
In any case, \eqref{mec00} is proved.
\end{proof}
Second, we need the following consequence of the interior Harnack.
\begin{lem}\label{lem2}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any open set.
Assume that there is $x_0\in B_{1/2}$ and $\varrho>0$ such that $B_{2\varrho}(x_0)\subset \Omega\cap B_{1/2}$.
Denote $D=B_\varrho(x_0)$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$, and ellipticity constants, such that the following statement holds.
Let $u_1$ and $u_2$ be two functions satisfying $u_1,u_2\geq0$ in $\mathbb{R}^n$, \eqref{pb3}, and $\inf_D u_2=1$.
Then,
\begin{equation}\label{mec2}
\sup_D \frac{u_1}{u_2}\leq C\left(\inf_D\frac{u_1}{u_2}+C_0\right).
\end{equation}
The constant $C$ depends only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{lem}
\begin{proof}
Notice that $M^+u_1\geq-C_0$ and $M^-u_1\leq C_0$ in $\Omega\cap B_1$, while $M^+u_2\geq-\delta$ and $M^-u_2\leq \delta$ in $\Omega\cap B_1$.
By interior Harnack inequality, we have $1=\inf_D u_2\leq \sup_D u_2\leq C$ (provided that $\delta$ is small enough).
Moreover, for $u_1$ we have $\sup_D u_1\leq C(\inf_D u_1+C_0)$, and thus
\[ \sup_D \frac{u_1}{u_2}\leq C\sup_D u_1\leq C\left(\inf_D u_1+C_0\right)\leq C\left(\inf_D\frac{u_1}{u_2}+C_0\right),\]
as desired.
\end{proof}
We will also need the following rescaled versions of the previous Lemmas.
\begin{cor}\label{cor1}
Let $s\in(0,1)$, $r\in(0,1)$, and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$ in \eqref{P}, and ellipticity constants, such that the following statement holds.
Let $u_1$ and $u_2$ be two functions satisfying, for all $a,b\in\mathbb{R}$,
\begin{equation}\label{pb7}
\left\{ \begin{array}{rcll}
M^+(au_1+bu_2) &\geq&-|a|K-|b|\delta/C_1&\textrm{in }B_r\cap \Omega\\
u_1=u_2&=&0&\textrm{in }B_r\setminus\Omega,
\end{array}\right.\end{equation}
with $C_1>0$ and $u_1,u_2\geq0$ in $\mathbb{R}^n$.
Assume in addition that
\begin{equation}\label{asst}
\frac{r^{2s}}{\inf_{D_r}u_2}\leq C_1.
\end{equation}
Then,
\begin{equation}\label{mec}
\inf_{D_r} \frac{u_1}{u_2}\leq C\left(\inf_{B_{r/2}}\frac{u_1}{u_2}+K\,\frac{r^{2s}}{\inf_{D_r}u_2}\right).
\end{equation}
The constant $C$ depends only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{cor}
\begin{proof}
The functions $v_1(x):=u_1(rx)/\inf_{D_r}u_2$ and $v_2(x):=C_1 u_2(rx)/\inf_{D_r}u_2$ satisfy
\[\left\{ \begin{array}{rcll}
M^+(av_1+bv_2) &\geq&-|a|K\frac{r^{2s}}{\inf_{D_r}u_2}-|b|\delta &\textrm{in }B_1\cap \Omega\\
v_1=v_2&=&0&\textrm{in }B_1\setminus\Omega.
\end{array}\right.\]
Thus, the result follows from Lemma \ref{lem1}.
\end{proof}
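Here we used the standard scaling property of the extremal operators: if $v(x)=u(rx)$, then, changing variables in the integral defining each operator in the class,
\[M^\pm v(x)=r^{2s}\,(M^\pm u)(rx),\]
which is why, after normalizing by $\inf_{D_r}u_2$, the right-hand side in \eqref{pb7} is multiplied by the factor $r^{2s}/\inf_{D_r}u_2$.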
\begin{cor}\label{cor2}
Let $s\in(0,1)$, $r\in(0,1)$, and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$ in \eqref{P}, and ellipticity constants, such that the following statement holds.
Let $u_1$ and $u_2$ be two functions satisfying $u_1,u_2\geq0$ in $\mathbb{R}^n$, and \eqref{pb7}.
Assume in addition \eqref{asst}.
Then,
\begin{equation}\label{mec2resc}
\sup_{D_r} \frac{u_1}{u_2}\leq C\left(\inf_{D_r}\frac{u_1}{u_2}+K\,\frac{r^{2s}}{\inf_{D_r}u_2}\right).
\end{equation}
The constant $C$ depends only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{cor}
\begin{proof}
Setting $v_1(x):=u_1(rx)/\inf_{D_r}u_2$ and $v_2(x):=C_1u_2(rx)/\inf_{D_r}u_2$, the result follows from Lemma \ref{lem2}.
\end{proof}
We will also need the following.
\begin{lem}\label{lem-Lip-dom}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
There exist $\delta>0$, $\gamma\in(0,1)$, and $c_0>0$, depending only on $n$, $s$, $\Omega$, and ellipticity constants, such that the following statement holds.
Let $u$ be a viscosity solution of $M^+u\geq -\delta$ and $M^-u\leq \delta$ in $B_1\cap \Omega$, with $u=0$ in $B_1\setminus\Omega$.
Assume in addition that $u\geq0$ in $\mathbb{R}^n$ and $\inf_{D_1}u=1$.
Then, $u\geq c_0d^{2s-\gamma}$ in $B_{1/2}$,
where $d(x)={\rm dist}(x,B_1\setminus\Omega)$.
In particular,
\[\inf_{D_r}u\geq c_0r^{2s-\gamma}\qquad \textrm{for all}\quad r\in(0,1).\]
The constants $\gamma$ and $c_0$ depend only on $n$, $s$, $\Omega$, and ellipticity constants.
\end{lem}
\begin{proof}
We defer the proof to the Appendix.
\end{proof}
As a consequence, we find the following.
\begin{cor}\label{cor7}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
There exists $\delta>0$, depending only on $n$, $s$, $\Omega$, and ellipticity constants, such that the following statement holds.
Let $u_2$ be a viscosity solution of $M^+u_2\geq -\delta$ and $M^-u_2\leq \delta$ in $B_1\cap \Omega$, with $u_2=0$ in $B_1\setminus\Omega$.
Assume in addition that $u_2\geq0$ in $\mathbb{R}^n$.
Then, there is $\gamma\in(0,1)$ such that
\[\sup_{B_{2r|z|}}u_2\leq C|z|^{2s-\gamma}\inf_{D_r}u_2\qquad \textrm{whenever}\quad |z|\geq\frac12\quad \textrm{and}\quad r|z|\leq \frac14.\]
The constants $\gamma$ and $C$ depend only on $n$, $s$, $\Omega$, and ellipticity constants.
\end{cor}
\begin{proof}
We use the previous Lemma with
\[v(x):=\frac{u_2(4r|z|x)}{\inf_{D_{4r|z|}}u_2},\]
to find
\[c|z|^{\gamma-2s}=t^{2s-\gamma} \leq C\inf_{D_t}v = C\frac{\inf_{D_r} u_2}{\inf_{D_{4r|z|}} u_2},\]
where $t=\frac14|z|^{-1}$.
Thus,
\[\inf_{D_{4r|z|}} u_2 \leq C|z|^{2s-\gamma}\inf_{D_r} u_2.\]
Moreover, by Lemma \ref{lem-use} we have
\[\sup_{B_{2r|z|}}u_2\leq C\inf_{D_{4r|z|}} u_2,\]
then
\[\sup_{B_{2r|z|}}u_2\leq C|z|^{2s-\gamma}\inf_{D_r} u_2,\]
and we are done.
\end{proof}
Using the previous results, we now prove the following.
\begin{lem}\label{lem-main}
Let $s\in(0,1)$ and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in \partial\Omega$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$ in \eqref{P}, and ellipticity constants, such that the following statement holds.
Let $u_1,u_2\in C(B_1)$ be viscosity solutions of \eqref{pb} satisfying \eqref{u-is-nonneg}.
Then,
\begin{equation}\label{mec0}
\sup_{\Omega\cap B_r} \frac{u_1}{u_2}-\inf_{\Omega\cap B_r}\frac{u_1}{u_2}\leq Cr^\alpha
\end{equation}
for all $r\leq 3/4$.
The constants $C$ and $\alpha\in(0,1)$ depend only on $n$, $s$, $\varrho$, and ellipticity constants.
\end{lem}
\begin{proof}
We will prove that there exist constants $C_1>0$ and $\alpha>0$, and monotone sequences $\{m_k\}_{k\geq1}$ and $\{\bar m_k\}_{k\geq1}$, such that
\[\bar m_k-m_k=4^{-\alpha k},\quad 0\leq m_k\leq m_{k+1}<\bar m_{k+1}\leq \bar m_k\leq 1,\]
and
\begin{equation}\label{C1}
m_ku_2\leq C_1^{-1}u_1\leq \bar m_k u_2\quad \textrm{in}\ B_{r_k}, \qquad r_k=4^{-k}.
\end{equation}
Clearly, if such sequences exist, then \eqref{mec0} holds for all $r\leq \frac14$.
We will construct such sequences inductively.
First notice that, by Theorem \ref{thm-main} (and a covering argument), we have
\begin{equation}\label{bound}
0\leq u_1\leq \tilde C_1u_2\quad \textrm{in}\ B_{3/4},
\end{equation}
for some constant $\tilde C_1$.
Thus, it follows that \eqref{mec0} holds for $\frac14\leq r\leq \frac34$, and that we may take $m_1=0$, $\bar m_1=1$.
Furthermore, by taking $C_1\geq \tilde C_1 4^{\alpha k_0}$ we see that \eqref{C1} holds for all $k\leq k_0$, with $m_k=0$ and $\bar m_k=4^{-\alpha k}$ for $1\leq k\leq k_0$, where $k_0$ is to be chosen later.
Assume now that we have sequences up to $m_k$ and $\bar m_k$ (with $k\geq k_0$), and let
\[v_k:=C_1^{-1}u_1-m_ku_2.\]
Notice that by induction hypothesis we have $v_k\geq0$ in $B_{r_k}$ (but not in all of $\mathbb{R}^n$).
Moreover, since $C_1^{-1}u_1\geq m_j u_2$ in $B_{r_j}$ for $j\leq k$, then
\[v_k\geq (m_j-m_k)u_2\geq (m_j-\bar m_j+\bar m_k-m_k)u_2=-(4^{-\alpha j}-4^{-\alpha k})u_2\quad \textrm{in}\ B_{r_j},\]
for every $j\leq k$.
Using now that for every $x\in B_1\setminus B_{r_k}$ there is $j<k$ such that $|x|<r_j=4^{-j}\leq 4|x|$, we find
\[v_k(x)\geq -u_2(x)\bigl(|4x|^\alpha-r_k^\alpha\bigr)\quad \textrm{in}\ B_{1/4}\setminus B_{r_k}.\]
Thanks to this, and since $v_k\geq0$ in $B_{r_k}$, for every $x\in B_{r_k/2}$ we have that the negative part of $v_k$ satisfies
\[\begin{split}
0&\leq M^- v_k^-(x)\leq M^+ v_k^-(x)=\Lambda\int_{x+y\notin B_{r_k}} v_k^-(x+y)\frac{dy}{|y|^{n+2s}}\\
& \leq C\int_{\frac{r_k}{2}\leq |y|\leq \frac14} u_2(x+y)\bigl(|4y|^\alpha-r_k^\alpha\bigr)\frac{dy}{|y|^{n+2s}}
+\int_{\mathbb{R}^n\setminus B_{1/4}}C_1^{-1}u_1(x+y)\frac{dy}{|y|^{n+2s}}\\
&= Cr_k^{\alpha-2s} \int_{\frac12\leq |z|\leq \frac{1}{4r_k}}
\frac{\bigl(|4z|^\alpha-1\bigr)u_2(x+r_k z)}{|z|^{n+2s}}\,dz +
CC_1^{-1}\int_{\mathbb{R}^n}u_1(y)\frac{dy}{1+|y|^{n+2s}}\\
&\le Cr_k^{\alpha-2s} \int_{\frac12\leq |z|\leq \frac{1}{4r_k}}
\frac{\bigl(|4z|^\alpha-1\bigr)\sup_{B_{2r_k|z|}}u_2}{|z|^{n+2s}}\,dz + CC_1^{-1}.
\end{split}\]
Now, by Corollary \ref{cor7} there is $\gamma>0$ such that
\[\sup_{B_{2r_k|z|}}u_2\leq C|z|^{2s-\gamma}\bigl(\inf_{D_{r_k}}u_2\bigr)\]
for every $|z|\geq\frac12$ and $r_k|z|\leq \frac14$, and thus
\[\begin{split}
Cr_k^{\alpha-2s}\int_{\frac12\leq |z|\leq \frac{1}{4r_k}}\frac{\bigl(|4z|^\alpha-1\bigr)\sup_{B_{2r_k|z|}}u_2}{|z|^{n+2s}}\,dz&\leq
Cr_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr) \int_{\frac12\leq |z|\leq \frac{1}{4r_k}} \frac{\bigl(|4z|^\alpha-1\bigr)|z|^{2s-\gamma}}{|z|^{n+2s}}\,dz \\
&\leq \varepsilon_0 r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr),
\end{split}\]
with
\[\varepsilon_0:=C\int_{|z|\geq \frac12}\frac{\bigl(|4z|^\alpha-1\bigr)|z|^{2s-\gamma}}{|z|^{n+2s}}\,dz\longrightarrow0\qquad \textrm{as}\quad \alpha\rightarrow0.\]
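For completeness, we note why $\varepsilon_0\to0$: for $0<\alpha\leq\gamma/2$ and $|z|\geq\frac12$ one has $|4z|^\alpha-1\leq C|z|^{\gamma/2}$, so the integrand satisfies
\[\frac{\bigl(|4z|^\alpha-1\bigr)|z|^{2s-\gamma}}{|z|^{n+2s}}\leq \frac{C}{|z|^{n+\gamma/2}}\in L^1\bigl(\{|z|\geq\tfrac12\}\bigr),\]
and the claim follows from the dominated convergence theorem, since $|4z|^\alpha-1\to0$ pointwise as $\alpha\to0$.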
This means that
\[0\leq M^- v_k^-\leq M^+ v_k^-\leq \varepsilon_0 r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr)+CC_1^{-1}\quad\textrm{in}\ B_{r_k/2}.\]
Therefore, since $v_k^+=C_1^{-1}u_1-m_ku_2+v_k^-$, we have
\[\begin{split} M^-v_k^+&\leq C_1^{-1}M^-(u_1-m_ku_2)+M^+v_k^-\leq C_1^{-1}(1+m_k)\delta+\varepsilon_0r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr)+CC_1^{-1}\\
&\leq \delta+\varepsilon_0r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr)+CC_1^{-1}\end{split}\]
in $\Omega\cap B_{r_k/2}$.
Also,
\[M^+v_k^+\geq M^+v_k\geq -(C_1^{-1}+m_k)\delta\geq-\delta\quad \textrm{in}\ \Omega\cap B_{r_k/2}.\]
Similarly, we have
\[M^+(a v_k^+ + bu_2)\geq -|a|\left(\delta+\varepsilon_0r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr)+CC_1^{-1}\right)-|b|\delta \qquad\textrm{in}\quad \Omega\cap B_{r_k/2}.\]
Now, recall that by Corollary \ref{cor7} we have
\[\frac{r_k^{2s}}{\inf_{D_{r_k}}u_2}\leq Cr_k^\gamma\leq C_1.\]
Thus, we can apply Corollaries \ref{cor1} and \ref{cor2} to the functions $v_k^+$ and $u_2$, to obtain
\[\begin{split}
\inf_{D_{r_k}}\frac{v_k^+}{u_2}& \leq
C\inf_{B_{r_k/2}}\frac{v_k^+}{u_2}+ C\left(\delta+\varepsilon_0r_k^{\alpha-2s}\bigl(\inf_{D_{r_k}}u_2\bigr)+CC_1^{-1}\right) \frac{r_k^{2s}}{\inf_{D_{r_k}}u_2} \\
&\leq C\inf_{B_{r_k/2}}\frac{v_k^+}{u_2}+ C(\delta+C_1^{-1})r_k^\gamma+C\varepsilon_0 r_k^\alpha,\end{split}\]
and
\[\sup_{D_{r_k/2}}\frac{v_k^+}{u_2}\leq C\inf_{D_{r_k/2}} \frac{v_k^+}{u_2}+C(\delta+C_1^{-1})r_k^\gamma+C\varepsilon_0r_k^\alpha.\]
Recalling that $v_k^+=v_k=C_1^{-1}u_1-m_ku_2$ in $B_{r_k/2}$, we find
\[\inf_{D_{r_k/2}}(C_1^{-1}u_1/u_2-m_k)\leq C\inf_{B_{r_k/4}}(C_1^{-1}u_1/u_2-m_k)+C(\delta+C_1^{-1})r_k^\gamma +C\varepsilon_0r_k^\alpha,\]
and
\[\sup_{D_{r_k/2}}(C_1^{-1}u_1/u_2-m_k)\leq C\inf_{D_{r_k/2}} (C_1^{-1}u_1/u_2-m_k)+C(\delta+C_1^{-1}) r_k^\gamma +C\varepsilon_0r_k^\alpha.\]
Therefore, we deduce
\[\sup_{D_{r_k/2}}(C_1^{-1}u_1/u_2-m_k)\leq C\inf_{B_{r_k/4}}(C_1^{-1}u_1/u_2-m_k)+C(\delta+C_1^{-1}) r_k^\gamma +C\varepsilon_0r_k^\alpha.\]
Repeating the same argument with $\bar v_k:=\bar m_k-C_1^{-1}u_1$ instead of $v_k$, we find
\[\sup_{D_{r_k/2}}(\bar m_k-C_1^{-1}u_1/u_2)\leq C\inf_{B_{r_k/4}}(\bar m_k-C_1^{-1}u_1/u_2)+C(\delta+C_1^{-1}) r_k^\gamma +C\varepsilon_0r_k^\alpha.\]
Thus, combining the previous estimates, we get
\[\begin{split}
\bar m_k-m_k&\leq C\inf_{B_{r_k/4}}(C_1^{-1}u_1/u_2-m_k)+C\inf_{B_{r_k/4}}(\bar m_k-C_1^{-1}u_1/u_2) + C(\delta+C_1^{-1}) r_k^\gamma +C\varepsilon_0r_k^\alpha \\
&=C\left(\inf_{B_{r_k/4}}(C_1^{-1}u_1/u_2)-\sup_{B_{r_k/4}}(C_1^{-1}u_1/u_2)+\bar m_k-m_k + (\delta+C_1^{-1}) r_k^\gamma +\varepsilon_0r_k^\alpha\right).
\end{split}\]
Using that $\bar m_k-m_k=4^{-\alpha k}$, $r_k=4^{-k}$, and $k\geq k_0$, we obtain
\[\sup_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)-\inf_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)\leq \left(\frac{C-1}{C}+(\delta+C_1^{-1}) 4^{-(\gamma-\alpha)k_0} +\varepsilon_0\right)4^{-\alpha k}.\]
Taking $\alpha$ small enough and $k_0$ large enough, we get
\[\sup_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)-\inf_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)\leq 4^{-\alpha (k+1)}.\]
This means that we can choose $m_{k+1}$ and $\bar m_{k+1}$, and thus we are done.
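For instance, one admissible choice is
\[m_{k+1}:=\min\Bigl\{\max\Bigl\{m_k,\ \sup_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)-4^{-\alpha(k+1)}\Bigr\},\ \bar m_k-4^{-\alpha(k+1)}\Bigr\},\qquad \bar m_{k+1}:=m_{k+1}+4^{-\alpha(k+1)},\]
which satisfies $m_k\leq m_{k+1}\leq \inf_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)$ and $\sup_{B_{r_{k+1}}}(C_1^{-1}u_1/u_2)\leq \bar m_{k+1}\leq \bar m_k$.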
\end{proof}
We finally give the:
\begin{proof}[Proof of Theorem \ref{thm-Lip}]
We will combine Lemma \ref{lem-main} with interior estimates in order to get the desired result.
Let $x,y\in \Omega\cap B_{1/2}$, let
\[r=|x-y|\qquad \textrm{and}\qquad d=\min\{d(x),d(y)\},\]
where $d(x)=\textrm{dist}(x,\partial\Omega)$.
Let $x_*\in\partial\Omega$ be such that $d(x)=|x-x_*|$.
We need to show that $\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq Cr^{\alpha'}$, with $\alpha'>0$.
Since $u_1/u_2$ is bounded in $B_{3/4}$, we may assume that $0<r\leq r_0$, with $r_0$ small enough.
If $r\leq d/2$, then by interior estimates \cite{CS} we have
\[\|u_i\|_{C^\alpha(B_{d/2}(x))}\leq Cd^{-\alpha}.\]
Since $\inf_{B_{d/2}(x)} u_2\geq c_0d^{2s-\gamma}$, then
\[\|u_2^{-1}\|_{C^\alpha(B_{d/2}(x))}\leq Cd^{\gamma-\alpha-2s}.\]
Therefore, for $r\leq d/2$ we have
\[\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq Cr^\alpha d^{\gamma-2\alpha-2s}\leq Cr^\alpha d^{-2s},\]
provided that $\alpha\leq \gamma/2$.
In particular, if $r\leq d^\theta/2$, with $\theta>2s/\alpha>1$, then
\begin{equation}\label{ineq1}
\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq Cr^{\alpha-2s/\theta}.
\end{equation}
On the other hand, for all $r\in(0,r_0)$ we have $x,y\in B_{d+r}(x_*)$, and thus by Lemma~\ref{lem-main} we have
\[\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq \sup_{B_{d+r}(x_*)\cap\Omega}\frac{u_1}{u_2}-\inf_{B_{d+r}(x_*)\cap\Omega}\frac{u_1}{u_2}\leq C(d+r)^\alpha.\]
In particular, if $r\geq d^\theta/2$ then
\begin{equation}\label{ineq2}
\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq Cr^{\theta\alpha}.
\end{equation}
Combining \eqref{ineq1} and \eqref{ineq2}, we find
\[\bigl|(u_1/u_2)(x)-(u_1/u_2)(y)\bigr|\leq Cr^{\alpha'}\qquad \textrm{for all}\quad r\in(0,1),\]
with $\alpha'=\min\{\alpha-2s/\theta,\,\theta\alpha\}>0$.
Thus, the Theorem is proved.
\end{proof}
\section{Non-symmetric operators with drift}
\label{sec5}
The above proofs of Theorems \ref{thm-main} and \ref{thm-Lip} work as well for operators of the form
\[\tilde Lu(x)=\int_{\mathbb{R}^n}\bigl(u(x+y)-u(x)-\nabla u(x)\cdot y\chi_{B_1}(y)\bigr)K(x,y)dy+b(x)\cdot\nabla u,\]
provided that $s\geq\frac12$.
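The restriction $s\geq\frac12$ is natural here: under the scaling $u_r(x)=u(rx)$ the integro-differential part scales with the factor $r^{2s}$, while the drift term scales with the factor $r$, so the gradient term is of lower order (subcritical) when $s>\frac12$ and of the same order as the diffusion (critical) when $s=\frac12$.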
Namely, consider the class of nonlocal and non-symmetric operators
\begin{equation}\label{drift1}
\tilde Lu(x)=\int_{\mathbb{R}^n}\bigl(u(x+y)-u(x)-\nabla u(x)\cdot y\chi_{B_1}(y)\bigr)K(y)dy+b\cdot\nabla u,
\end{equation}
with $K$ satisfying \eqref{ellipt-const-linear} and
\begin{equation}\label{drift2}
|b|+\left|r^{2s-1}\int_{B_1\setminus B_r} y\,K(y)dy\right|\leq \beta.
\end{equation}
Given $\lambda$, $\Lambda$, and $\beta$, we define the class $\mathcal L(\lambda,\Lambda,\beta)$ as the set of all linear operators \eqref{drift1} satisfying \eqref{ellipt-const-linear} and \eqref{drift2}.
Then, we may define $\widetilde M^\pm$ as
\[\widetilde M^+ u=\widetilde M^+_{\mathcal L(\lambda,\Lambda,\beta)}u=\sup_{\tilde L\in\mathcal L(\lambda,\Lambda,\beta)} \tilde Lu,\qquad
\widetilde M^- u=\widetilde M^-_{\mathcal L(\lambda,\Lambda,\beta)}u=\inf_{\tilde L\in\mathcal L(\lambda,\Lambda,\beta)} \tilde Lu.\]
For such operators, Theorems \ref{half-Harnack-sub} and \ref{half-Harnack-sup} were established in \cite{CD}; see Corollaries 4.3 and 6.2 therein.
Using such results, and with the exact same proofs given in the previous Sections, we find the following.
\begin{thm}\label{thm-main-drift}
Let $s\in[\frac12,1)$ and $\Omega\subset\mathbb{R}^n$ be any open set.
Assume that there is $x_0\in B_{1/2}$ and $\varrho>0$ such that $B_{2\varrho}(x_0)\subset \Omega\cap B_{1/2}$.
Then, there exists $\delta>0$, depending only on $n$, $s$, $\varrho$, $\lambda$, $\Lambda$, and $\beta$, such that the following statement holds.
Let $u_1,u_2\in C(B_1)$ be viscosity solutions of
\begin{equation}\label{pb-drift}
\left\{ \begin{array}{rcll}
\widetilde M^+(au_1+bu_2) &\geq&-\delta(|a|+|b|)&\textrm{in }B_1\cap \Omega\\
u_1=u_2&=&0&\textrm{in }B_1\setminus\Omega
\end{array}\right.\end{equation}
for all $a,b\in\mathbb{R}$, and such that
\begin{equation}\label{u-is-nonneg-drift}
u_i\geq0\quad\mbox{in}\quad \mathbb{R}^n, \qquad \int_{\mathbb{R}^n}\frac{u_i(x)}{1+|x|^{n+2s}}\,dx=1.
\end{equation}
Then,
\[ C^{-1}u_2\leq u_1\leq C\,u_2\qquad\textrm{in}\ B_{1/2}.\]
The constant $C$ depends only on $n$, $s$, $\varrho$, $\lambda$, $\Lambda$, and $\beta$.
\end{thm}
Moreover, we also have the following.
\begin{thm}\label{thm-Lip-drift}
Let $s\in[\frac12,1)$ and $\Omega\subset\mathbb{R}^n$ be any Lipschitz domain, with $0\in\partial\Omega$.
Then, there is $\delta>0$, depending only on $n$, $s$, $\Omega$, $\lambda$, $\Lambda$, and $\beta$, such that the following statement holds.
Let $u_1,u_2\in C(B_1)$ be viscosity solutions of \eqref{pb-drift} satisfying \eqref{u-is-nonneg-drift}.
Then, there is $\alpha\in(0,1)$ such that
\[ \left\|\frac{u_1}{u_2}\right\|_{C^{0,\alpha}(\overline\Omega\cap B_{1/2})}\leq C.\]
The constants $\alpha$ and $C$ depend only on $n$, $s$, $\Omega$, $\lambda$, $\Lambda$, and $\beta$.
\end{thm}
To the best of our knowledge, Theorems \ref{thm-main-drift} and \ref{thm-Lip-drift} are new even for the linear operator $(-\Delta)^{1/2}+b\cdot\nabla$.
Those results will be used in the forthcoming paper \cite{FR-drift}.
\section{Appendix: Subsolution in Lipschitz domains}
We prove here a lower bound for positive solutions $u$ in Lipschitz domains, namely $u\geq cd^{2s-\gamma}$ in $\Omega$ for some small $\gamma>0$.
This is stated in Lemma \ref{lem-Lip-dom}, which we prove below.
For this, we need to construct the following subsolution.
\begin{lem}\label{homog-subsol}
Let $s\in(0,1)$, and $e\in S^{n-1}$.
Given $\eta>0$, there is $\epsilon>0$ depending only on $n$, $s$, $\eta$ and ellipticity constants such that the following holds.
Define
\[\Phi(x) := \left( e\cdot x- \eta |x| \left(1- \frac{(e\cdot x)^2}{|x|^2} \right)\right)_+^{2s-\epsilon}.\]
Then,
\[
\begin{cases}
M^- \Phi \ge 0 \quad & \mbox{in }\mathcal C_\eta \\
\Phi = 0 \quad & \mbox{in }\mathbb{R}^n \setminus \mathcal C_\eta
\end{cases}
\]
where $\mathcal C_{\eta}$ is the cone defined by
\[\mathcal C_{\eta}: = \left\{ x \in \mathbb{R}^n\ : e\cdot \frac{x}{|x|} > \eta \left( 1 - \left( e\cdot \frac{ x }{|x|} \right)^2\right) \right\}.\]
The constant $\epsilon$ depends only on $\eta$, $s$, and ellipticity constants.
In particular $\Phi$ satisfies $M^-\Phi\ge 0$ in all of $\mathbb{R}^n$.
\end{lem}
\begin{proof}
By homogeneity it is enough to prove that, for $\epsilon$ small enough, we have $M^-\Phi \ge 1$ at points belonging to $e + \partial \mathcal{C}_\eta$, since all the positive dilations of this set with respect to the origin cover the interior of $\mathcal{C}_\eta$.
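Indeed, $\Phi$ is homogeneous of degree $2s-\epsilon$, and hence, by the scaling properties of $M^-$,
\[M^-\Phi(\lambda x)=\lambda^{-\epsilon}\,M^-\Phi(x)\qquad \textrm{for all}\quad \lambda>0,\]
so a positive lower bound at the points of $e+\partial\mathcal C_\eta$ propagates to all their dilations.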
Let thus $P\in \partial \mathcal{ C}_\eta$, that is,
\[ e\cdot P- \eta \left( |P| - \frac{(e\cdot P)^2}{|P|} \right) =0.\]
Consider
\[\begin{split}
\Phi_{P}(x) &:= \Phi(P+e+x)
\\
&= \left( e\cdot (P+e+x)- \eta \left( |P+e+x| - \frac{(e\cdot (P+e+x))^2}{|P+e+x|} \right)\right)_+^{2s-\epsilon}
\\
&= \left( 1 +e\cdot x- \eta \left( |P+e+x| -|P|- \frac{(e\cdot (P+e+x))^2}{|P+e+x|} +\frac{(e\cdot P)^2}{|P|} \right)\right)_+^{2s-\epsilon}
\\
&=\bigl( 1 +e\cdot x- \eta \psi_P(x) \bigr)_+^{2s-\epsilon},
\end{split}\]
where we define
\[\psi_P(x) := |P+e+x| -|P|- \frac{(e\cdot (P+e+x))^2}{|P+e+x|} +\frac{(e\cdot P)^2}{|P|} .\]
Note that the functions $\psi_P$ satisfy
\[
|\nabla \psi_P(x)| \le C \quad \mbox{in } \mathbb{R}^n \setminus \{ -P-e\},
\]
and
\begin{equation} \label{esthess}
|D^2 \psi_P(x)| \le C \quad \mbox{for }x \in B_{1/2},
\end{equation}
where $C$ does not depend on $P$ (recall that $|e|=1$).
Now for fixed $\tilde e \in \partial \mathcal{ C}_\eta\cap \partial B_1$ let us compute
\[
\lim_{t\uparrow +\infty} \psi_{t\tilde e}(x) = \lim_{t\uparrow +\infty} (|t\tilde e +e+x|-|t\tilde e|) - \lim_{t\uparrow +\infty} \left(\frac{(e\cdot (t\tilde e +e+x))^2}{|t\tilde e +e+x|} -\frac{(e\cdot t \tilde e)^2}{|t\tilde e |} \right).
\]
On the one hand, we have
\[
\lim_{t\uparrow +\infty} (|\tilde e t+e+x|-|\tilde e t|) = \tilde e\cdot(e+x).
\]
On the other hand, setting $f_t(y) := \frac{(e\cdot (t\tilde e + y))^2}{|t\tilde e +y|}$, we compute
\[
\partial_{y_i} f_t(y) = \frac{2(e\cdot (t\tilde e+ y)) e_i}{|t\tilde e +y|} - \frac{(e\cdot (t\tilde e + y))^2}{|t\tilde e +y|^3} (t\tilde e +y )_i
\]
and hence
\[
\lim_{t\uparrow +\infty} \partial_{y_i} f_t(y) = \big(2(e\cdot \tilde e) e_i- (e\cdot\tilde e)^2 \tilde e _i\big).
\]
Therefore,
\[
\lim_{t\uparrow +\infty} \left(\frac{(e\cdot (t\tilde e +e+x))^2}{|t\tilde e +e+x|}-\frac{(e\cdot t \tilde e)^2}{|t\tilde e |}\right) = \big(2(e\cdot \tilde e)e - (e\cdot\tilde e)^2\tilde e\big) \cdot(e+x) .
\]
We have thus found
\[
\lim_{t\uparrow +\infty} \psi_{P}(x) = \big(\tilde e-2(e\cdot \tilde e)e + (e\cdot\tilde e)^2\tilde e\big)\cdot(e+x)
\]
and
\[
\lim_{t\uparrow +\infty} \big(1+e\cdot x-\eta \psi_{P}(x)\big) = \big(e -\eta \tilde e+ 2\eta(e\cdot \tilde e)e-\eta (e\cdot\tilde e)^2\tilde e\big)\cdot(e+x).
\]
Note that, for $\delta$ small enough (depending only on $\eta$), the cone
\[ \mathcal C_{\tilde e}:= \left\{x\in \mathbb{R}^n \ :\ \frac{x+e}{|x+e|}\cdot \frac{e-(e\cdot\tilde e)\tilde e}{|e-(e\cdot\tilde e)\tilde e|} \ge 1-\delta\right\} \]
satisfies
\begin{equation}\label{Plarge}
\lim_{t\uparrow +\infty} \big(1+e\cdot x-\eta \psi_{P}(x)\big) \ge c|x|\quad \mbox{for all }x\in \mathcal C_{\tilde e}
\end{equation}
where $c>0$. Indeed, the vector $e' :=e-(e\cdot\tilde e)\tilde e$ is perpendicular to $\tilde e$ and has positive scalar product with $e$. Thus, we have
\[
\big(e -\eta \tilde e+ 2\eta(e\cdot \tilde e)e-\eta (e\cdot\tilde e)^2\tilde e\big)\cdot e'>0.
\]
Let us show now that for $\epsilon>0$ small enough the function $\Phi_P$ satisfies
\begin{equation} \label{goal}
M^-\Phi_P(0) \ge 1.
\end{equation}
We first prove \eqref{goal} in the case $|P|\ge R$ with $R$ large enough.
Indeed let $P= t\tilde e$ for $t\uparrow +\infty$ and $\tilde e \in \partial \mathcal{ C}_\eta\cap \partial B_1$.
Let us denote
\[
\delta^2 u(x,y) = \frac{u(x+y)+u(x-y)}{2}-u(x).
\]
Using \eqref{esthess}, \eqref{Plarge}, and $\Phi_{P} \ge 0$, we obtain
\[
\begin{split}
\lim_{t\to \infty} M^-\Phi_{P} (0)
&\ge
\int_{\mathbb{R}^n} \left( \bigl(\delta^2 \Phi_{P}(0,y)\bigr)_+ \frac{\lambda}{|y|^{n+2s}} - \bigl(\delta^2 \Phi_{P}(0,y)\bigr)_- \frac{\Lambda}{|y|^{n+2s}}\right) \,dy
\\
&\ge
\int_{ \mathcal C_{\tilde e}} (c|y| -C)_+^{2s-\epsilon}\,\frac{dy}{|y|^{n+2s}} - C \int_{\mathbb{R}^n} \min\{1,|y|^2\} \frac{dy}{|y|^{n+2s}}
\\
&\ge \frac{c}{\epsilon}-C.
\end{split}
\]
Thus \eqref{goal} follows for $|P|\ge R$ with $R$ large, provided that $\epsilon$ is taken small enough.
We now concentrate on the case $|P|<R$. In this case we use that, taking $\delta>0$ small enough (depending on $\eta$) and defining the cone
\[ \mathcal C_{e}:= \left\{x\in \mathbb{R}^n \ :\ \frac{x}{|x|}\cdot e \ge (1-\delta)\right\} \]
we have
\[
e\cdot (P+e+x)- \eta \left( |P+e+x| - \frac{(e\cdot (P+e+x))^2}{|P+e+x|} \right) \ge c |x|
\]
for $x\in \mathcal C_e$ with $|x|\ge L$ with $L$ large enough (depending on $R$).
Thus, reasoning similarly as above, but now integrating on $\mathcal C_e \cap \{ |x|>L\}$ instead of on $\mathcal C_{\tilde e}$, we prove
\eqref{goal} also in the case $|P|<R$, provided that $\epsilon$ is small enough. Therefore the lemma is proved.
\end{proof}
Finally, we give the:
\begin{proof}[Proof of Lemma \ref{lem-Lip-dom}]
Note that we only need to prove the conclusion of the Lemma for $r>0$ small enough, since the conclusion for non-small $r$ follows from the interior Harnack inequality.
Recall that $\Omega\subset\mathbb{R}^n$ is assumed to be a Lipschitz domain, with $0\in\partial\Omega$.
Then, for some $e\in S^{n-1}$, $\eta>0$ (typically large), and ${r_0}>0$ depending on (the Lipschitz regularity of) $\Omega$ we have
\[
\mathcal C_\eta \cap B_{2r_0}\subset\Omega,
\]
where $\mathcal C_\eta$ is the cone of Lemma \ref{homog-subsol}, which is very sharp for $\eta$ large.
Let $\Phi$ and $\epsilon>0$ be the subsolution and the constant in Lemma \ref{homog-subsol}. We now take
\[
\tilde \Phi = \big( \Phi -(|x|/r_0)^2 \big) \chi_{B_{2r_0}}.
\]
By Lemma \ref{homog-subsol} we have
\[
M^-\tilde \Phi \ge - C \quad \mbox{in }B_{r_0}
\]
while clearly $\tilde \Phi\le 0$ outside $B_{r_0}$.
Now we observe that, for $c_1>0$ small enough, we have
\[
M^-(c_1\tilde\Phi + \chi_{D_1}) \ge -c_1 C + c \ge c/2>0
\]
in $B_{r_0}$; note that $B_{r_0}\cap D_1 = \varnothing$ since $r_0$ is small.
Then, taking $\delta\in(0,c/2)$
we have
\[
M^-(u-c_1\tilde\Phi - \chi_{D_1}) \le 0 \quad \mbox{ in }B_{r_0},
\]
while
\[
u-c_1\tilde\Phi - \chi_{D_1} \ge 0-c_1\tilde\Phi - 0 \ge 0\quad\mbox{in }(\mathbb{R}^n\setminus B_{r_0})\setminus {D_1}
\]
and
\[
u-c_1\tilde\Phi - \chi_{D_1} = (u-1)-c_1\tilde\Phi \ge 0-c_1\tilde\Phi \ge 0\quad \mbox{in } (\mathbb{R}^n\setminus B_{r_0})\cap {D_1}.
\]
Then, by the maximum principle we obtain
\[
u-c_1\tilde\Phi = u-c_1\tilde\Phi - \chi_{D_1} \ge 0 \quad \mbox{in } B_{r_0}
\]
and hence
\[
u(x)\ge c_1\Phi(x) - C |x|^2 \quad\mbox{for }x\in B_{r_0}
\]
which clearly implies the Lemma (taking $\gamma=\epsilon$).
\end{proof}
\section{Introduction}
In many practical applications the ability to observe and control polarization is critical,
as polarization is one of the basic characteristics of transverse light waves \cite%
{Hecht,Wolf,Azzam,Goldstein,Duarte}. Two prominent optical devices for controlling light's polarization state are optical
polarization rotators and optical polarization retarders \cite%
{Hecht,Wolf,Azzam,Goldstein,Duarte}. A polarization rotator rotates the plane
of linear light polarization at a specified angle \cite{Pye,Damask,Rangelov,Dimova,Stojanova},
while a retarder (or a waveplate) introduces a phase difference between two orthogonal
polarization components of a light wave \cite{Pye,Damask,Ardavan,Ivanov,Peters}.
Retarders are usually made from birefringent materials. Fresnel rhombs \cite{Bennett,Bakhouche} are also widely used
and, based on total internal reflections, they achieve retardation at a broader range of wavelengths.
Two common types of retarders are the half-wave plate
and the quarter-wave plate. By introducing a phase shift of $\pi$ between the two orthogonal
polarization components for a particular wavelength, the half-wave plate effectively rotates the polarization vector
to a predefined angle.
The quarter-wave plate introduces a shift of $\pi/2$, and thereby converts linearly polarized light into circularly
polarized light and vice versa \cite{Hecht,Wolf,Azzam,Goldstein,Duarte}.
Although half-wave and quarter-wave plates clearly dominate in practice,
retarders of \emph{any} desired retardation can be designed and successfully applied.
Tuning the retardance is a valuable feature because in some practical settings one may need a half-wave plate,
while in others a quarter-wave plate is required.
Tunable retardance can be achieved by liquid-crystals \cite{Sharp,Sit,Ye}.
Alternatively, one can use the technique recently suggested by Messaadi et al. \cite{Messaadi},
which is based on two half-wave plates cascading between two quarter-wave
plates. This basic optical system functions as an adjustable retarder that
can be controlled by spinning one of the half-wave plates.
A polarization rotator may employ Faraday rotation or birefringence. The Faraday
rotator consists of a magnetoactive material that is put inside a powerful
magnet \cite{Hecht,Wolf,Azzam,Goldstein,Duarte}. The magnetic field causes a
circular anisotropy (Faraday effect), which makes left- and right-circular
polarized waves ``feel'' different refraction indices.
As a result, the linear polarization plane is rotated. A birefringent rotator
can be achieved as a combination of two half-wave plates. Such a rotator would
have an angle of rotation equal to twice the angle between the optical axes of
the two half-wave plates \cite{Zhan}.
Wave plates and rotators are basic building blocks for polarization
manipulation. Indeed, every reversible polarization transformation
(i.e., any reversible change of the polarization vector from one state to another)
can be achieved using a composition of a retarder and a rotator \cite{Hurvitz}.
An arbitrary transformation requires one half-wave plate and two quarter-wave plates \cite{Simon},
or just two quarter-wave plates \cite{Bagini,Zela,Damask} if the
apparatus itself is rotated. Arbitrary polarization transformations, however,
require one to perform rotations on individual plates which may turn out to be very
impractical in particular applications where one needs to change the angles with certain frequency and speed.
In this paper we attempt to solve this problem by substituting mechanical rotations of the plates with
variation of magnetic fields.
The device we propose is a modified version of Simon-Mukunda's controller and consists of two
quarter-wave plates and two rotators. In this setting we can perform fast and continuous variation
of the polarization vector simply by changing the magnetic fields in each Faraday rotator.
\section{Preliminaries}
The Jones matrix that describes a rotator with rotation angle $\theta$ is
\begin{equation}
\mathbf{R}(\theta )=\left[
\begin{array}{cc}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta%
\end{array}%
\right] ,
\end{equation}%
while the Jones matrix that represents a retarder is
\begin{equation}
\mathbf{J}(\varphi )=\left[
\begin{array}{cc}
e^{i\varphi /2} & 0 \\
0 & e^{-i\varphi /2}%
\end{array}%
\right].
\end{equation}%
Here $\varphi $ is the phase shift between the two orthogonal
polarization components of the light wave. The most widely-used retarders are the half-wave plate ($%
\varphi =\pi $) and the quarter-wave plate ($\varphi =\pi /2$) \cite%
{Pye,Damask}.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.8\columnwidth]{fig1.eps}}
\caption{(Color online)(a) The Simon--Mukunda polarization controller in
a configuration of the form QW-HW-QW (b) The scheme of arbitrary to arbitrary
polarization transformation device, composed by two quarter-wave plates and
two Faraday rotators. The orientation of the quarter-wave plates is fixed.
(c) Polarization evolution on the Poincar\'e sphere. The initial polarization is at
point A and the final polarization is at point B. As can be seen from Eq. (\protect\ref%
{arbitrary-to-arbitrary polarization}) the first part of the evolution is
rotation at angle $\protect\beta$ between points A and C, followed by a retardation
at angle $\protect\alpha$ from point C to point B.}
\label{fig1}
\end{figure}
Consider now a single polarizing birefringent plate of phase shift $\varphi$, whose fast
axis is rotated to an angle of $\theta$ relative to the vertical axis (azimuth angle).
In a 2-dimensional rectangular coordinate system whose axes are aligned with the horizontal
and the vertical directions, the Jones matrix is given by
\begin{equation}
\mathbf{J}_{\theta }(\varphi )=\mathbf{R}(-\theta )\mathbf{J}(\varphi )%
\mathbf{R}(\theta ). \label{retarder}
\end{equation}
If behind such a plate with $\varphi =\pi $ (a half-wave plate) one places a second half-wave plate with azimuth angle $\theta +\alpha /2$,
the resulting Jones matrix is given by the product
\begin{equation}
\mathbf{J}_{\theta +\alpha /2}(\pi )\mathbf{J}_{\theta}(\pi )=-\left[
\begin{array}{cc}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha%
\end{array}%
\right] , \label{rotator}
\end{equation}%
that represents a Jones rotator matrix (up to an unimportant phase of $\pi$) \cite{Zhan,Rangelov}.
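This identity is easy to verify numerically from the definitions of $\mathbf{R}(\theta )$, $\mathbf{J}(\varphi )$ and $\mathbf{J}_{\theta }(\varphi )$ above; a short sketch:

```python
import numpy as np

def R(t):
    """Jones rotator matrix, in the conventions of the text."""
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

def retarder(phi):
    """Jones retarder matrix with phase shift phi."""
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

def plate(theta, phi):
    """Retarder with fast axis at azimuth theta, Eq. (\\ref{retarder})."""
    return R(-theta) @ retarder(phi) @ R(theta)

rng = np.random.default_rng(0)
for theta, alpha in rng.uniform(0.0, 2 * np.pi, size=(100, 2)):
    lhs = plate(theta + alpha / 2, np.pi) @ plate(theta, np.pi)
    rhs = -np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha), np.cos(alpha)]])
    assert np.allclose(lhs, rhs)
print("two half-wave plates act as a rotator: verified")
```

The check passes for random plate orientations, confirming that the rotation angle depends only on the angle between the two fast axes.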
\section{Modified Simon--Mukunda polarization controller}
A general scheme of a device capable of arbitrary polarization
transformations is the Simon--Mukunda
polarization controller \cite{Simon}. It consists of one half-wave plate
(HW) and two quarter-wave plates (QW) in one of the following arrangements
QW-HW-QW, QW-QW-HW, or HW-QW-QW. The first one is the most popular (Fig. \ref{fig1} a)
and is also used as a fiber polarization controller \cite{Thorlabs}. The
Simon--Mukunda polarization controller \cite{Simon} operates in the following way: the
first quarter-wave plate turns the input elliptical polarization into a linear
polarization. Then the half-wave plate rotates the obtained linear polarization vector,
which is finally transformed into the required elliptical polarization output by the second quarter-wave plate.
The Jones matrix for the Simon--Mukunda polarization controller is
\begin{equation}
\mathbf{J}=\mathbf{J}_{\theta _{3}}(\pi /2)\mathbf{J}_{\theta _{2}}(\pi )%
\mathbf{J}_{\theta _{1}}(\pi /2).
\end{equation}%
Because the unit matrix can be written as
\begin{equation}
\mathbf{\hat{1}}=\mathbf{J}_{\theta _{1}}(\pi /2)\mathbf{J}_{\theta
_{1}}(-\pi /2),
\end{equation}%
we obtain
\begin{equation}
\mathbf{J=J}_{\theta _{3}}(\pi /2)\mathbf{J}_{\theta _{2}}(\pi )\mathbf{J}%
_{\theta _{1}}(\pi /2)\mathbf{J}_{\theta _{1}}(\pi /2)\mathbf{J}_{\theta
_{1}}(-\pi /2). \label{polarization controller}
\end{equation}%
Next we use that
\begin{equation}
\mathbf{J}_{\theta _{1}}(\pi )\mathbf{=J}_{\theta _{1}}(\pi /2)\mathbf{J}%
_{\theta _{1}}(\pi /2)
\end{equation}%
to simplify Eq. (\ref{polarization controller}):
\begin{equation}
\mathbf{J=J}_{\theta _{3}}(\pi /2)\mathbf{J}_{\theta _{2}}(\pi )\mathbf{J}%
_{\theta _{1}}(\pi )\mathbf{J}_{\theta _{1}}(-\pi /2).
\end{equation}%
Next we make use of Eq. (\ref{rotator}) for the combination of two half-wave
plates
\begin{equation}
\mathbf{R}(2(\theta _{1}-\theta _{2}))=-\mathbf{J}_{\theta _{2}}(\pi )\mathbf{%
J}_{\theta _{1}}(\pi ),
\end{equation}%
to get the final expression of the Jones matrix:
\begin{equation}
\mathbf{J}=-\mathbf{J}_{\theta _{3}}(\pi /2)\mathbf{R}(\alpha )\mathbf{J}%
_{\theta _{1}}(-\pi /2), \label{Simon--Mukunda}
\end{equation}%
where the rotator angle is $\alpha =2(\theta _{1}-\theta _{2})$. Therefore the
Simon--Mukunda polarization controller can be constructed as combination of
two quarter-wave plates along with a rotator between them. This device would operate
in a similar way as the one before: the first quarter-wave plate turns the
input elliptical polarization into a linear polarization vector, which is then rotated
by the rotator element, and is finally transformed into the required elliptical output
polarization by the second quarter-wave plate.
\section{Arbitrary retarder as a special case of the modified Simon--Mukunda
polarization controller}
In the special case when the two quarter-wave plates are oriented such that
their fast optical axes are perpendicular to each other (cf. Eq. (\ref%
{Simon--Mukunda}) with $\theta _{1}=\theta _{3}=\pi /4$), and the rotator is set to
the angle $-\alpha $, we obtain, up to an unimportant overall sign, a retarder
with retardation $2\alpha $ \cite{Messaadi}:
\begin{equation}
\mathbf{J}_{0}(2\alpha )=\mathbf{J}_{\pi /4}(\pi /2)\mathbf{R}(-\alpha )%
\mathbf{J}_{\pi /4}(-\pi /2)=\left[
\begin{array}{cc}
e^{i\alpha } & 0 \\
0 & e^{-i\alpha }%
\end{array}%
\right] . \label{tunable retarder}
\end{equation}%
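Setting overall-phase and sign conventions aside, it is easy to check numerically that the quarter-wave-plate/rotator/quarter-wave-plate composite is a pure retarder whose retardation magnitude equals $2\alpha$; a short sketch:

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def plate(theta, phi):
    ret = np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])
    return R(-theta) @ ret @ R(theta)

Qp = plate(np.pi / 4, np.pi / 2)    # output quarter-wave plate
Qm = plate(np.pi / 4, -np.pi / 2)   # input quarter-wave plate

for alpha in np.linspace(0.05, 1.5, 30):
    M = Qp @ R(alpha) @ Qm
    # off-diagonal elements vanish: the composite is a pure retarder
    assert abs(M[0, 1]) < 1e-12 and abs(M[1, 0]) < 1e-12
    # phase difference between the diagonal entries has magnitude 2*alpha
    delta = np.angle(M[0, 0]) - np.angle(M[1, 1])
    assert np.isclose(abs(delta), 2 * alpha)
print("QWP-rotator-QWP acts as a tunable retarder: verified")
```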
If the two quarter-wave plates are achromatic (for example as in
\cite{Ardavan,Ivanov,Peters}, or if Fresnel rhombs are used as quarter-wave
plates), then one can achieve a wavelength-tunable half-wave or quarter-wave plate that
operates differently from previously suggested tunable wave plates \cite%
{Goltser,Darsht}. In contrast to those designs, here we do not need to rotate the
wave plates, but rather change the magnetic field in the Faraday rotator. Therefore, the
suggested tunable retarder has no moving mechanical parts and can be used as a fast
switch, where the switching on/off time of the optical activity is on the
order of microseconds, with the state of the art approaching subnanosecond times
\cite{Didosyan,Shaoying}.
\section{Arbitrary to arbitrary polarization converter}
Based on the fact that combining an arbitrary rotator with an arbitrary
retarder allows to achieve any polarization transformation \cite{Hurvitz},
we can combine the tunable retarder from Eq. (\ref{tunable retarder}) with
an additional rotator to get an arbitrary-to-arbitrary polarization
manipulation device. Its Jones matrix is
\begin{equation}
\mathbf{J}=\mathbf{J}_{0}(2\alpha )\mathbf{R}(\beta ),
\end{equation}%
or
\begin{equation}
\mathbf{J}=\mathbf{J}_{\pi /4}(\pi /2)\mathbf{R}(-\alpha )\mathbf{J}_{\pi
/4}(-\pi /2)\mathbf{R}(\beta ). \label{arbitrary-to-arbitrary polarization}
\end{equation}%
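As a brute-force numerical sanity check of universality (our own sketch, not part of the original analysis; the sign convention of $\alpha$ is immaterial since the scan covers the full circle), one can scan $\alpha ,\beta \in [0,2\pi ]$ and verify that the composite maps a horizontally polarized input onto an arbitrarily chosen target polarization with near-unit fidelity:

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def plate(theta, phi):
    ret = np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])
    return R(-theta) @ ret @ R(theta)

Qp = plate(np.pi / 4, np.pi / 2)   # output quarter-wave plate
Qm = plate(np.pi / 4, -np.pi / 2)  # input quarter-wave plate

h_in = np.array([1.0, 0.0])                                   # horizontal input
target = np.array([np.cos(0.4), np.exp(1.1j) * np.sin(0.4)])  # arbitrary elliptical target

angles = np.linspace(0.0, 2 * np.pi, 241)
rotators = [R(a) for a in angles]
best = 0.0
for Rb in rotators:                    # scan the first Faraday rotator angle beta
    probe = Qm @ (Rb @ h_in)
    for Ra in rotators:                # scan the second Faraday rotator angle alpha
        out = Qp @ (Ra @ probe)
        best = max(best, abs(np.vdot(target, out)) ** 2)
print(f"best fidelity on the grid: {best:.6f}")
```

On this coarse $241\times 241$ grid the best fidelity already exceeds $0.999$, illustrating that the two magnetic-field settings suffice for an arbitrary-to-arbitrary transformation.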
The proposed optical device, illustrated schematically in Fig. \ref{fig1} b, has potential
advantages over the Simon--Mukunda polarization controller
\cite{Simon,Damask}, where one has to adjust the spatial orientation of all three wave plates.
The proposed device (cf. Eq. (\ref{arbitrary-to-arbitrary polarization})) is more convenient to use
in the sense that the rotator angle and the retardation are obtained by changing the magnetic field of the first
and the second Faraday rotator, respectively. Furthermore, our scheme
is fast switchable on and off, because in the absence of a magnetic field the
polarization is not changed.
Finally, we investigate the possibility of achieving any pair of rotation angles $%
\alpha $ and $\beta $ (cf. \eqref{arbitrary-to-arbitrary polarization}) for
the proposed arbitrary-to-arbitrary polarization conversion device.
The rotation angle of a Faraday rotator is given by%
\[
\theta (\lambda )=V(\lambda )BL\,,
\]%
where $B$ is the external magnetic field, $L$ is the length of the magneto-optical
element, and $V(\lambda )$ is the Verdet constant. We carry out the calculation for the
terbium gallium garnet (TGG) crystal, which is widely used because of its high
Verdet constant. The dispersion of the Verdet constant of the TGG crystal
has been extensively studied \cite{Bozinis1978, Jannin1998,
Yoshida,Villora2011} and the wavelength dependence has been
shown to be described by the formula
\begin{equation}
V(\lambda )=\frac{E}{\lambda _{0}^{2}-\lambda ^{2}}\,, \label{dispersion}
\end{equation}%
where $E=4.45\cdot 10^{7}\,\frac{\text{rad $\cdot $ nm}^{2}}{\text{T $%
\cdot $ m}}$ and $\lambda _{0}=257.5$~nm is an effective resonance wavelength close to
the 4f--5d transition of the terbium ion. In the range 400--1100~nm,
excluding the absorption window at 470--500~nm \cite{Villora2011}, the TGG crystal has
optimal material properties for a Faraday rotator. For most materials the Verdet
constant decreases in absolute value with increasing wavelength:
for the TGG crystal it equals $475\,\frac{\text{rad}}{\text{T $\cdot $ m}}$ at $400$%
~nm and $41\,\frac{\text{rad}}{\text{T $\cdot $ m}}$ at $1064$~nm \cite%
{Villora2011}. Our simulations were carried out for three different values
of the magnetic field, $B_{1}=0.5$~T, $B_{2}=1$~T and $B_{3}=2$~T, at a fixed length $%
L=0.05$~m of the TGG crystal. As can be seen from Fig. \ref{fig2}, any pair of
rotation angles $\alpha $ and $\beta $ in the interval $[0,2\pi ]$ can be achieved with a magnetic field smaller than 1~T over the visible spectrum.
Therefore, with commercial Faraday rotators available on the market, the
practical realization of the proposed polarization control device should be
straightforward.
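The numbers quoted above follow directly from Eq. (\ref{dispersion}); a minimal sketch (units as in the text: $E$ in rad$\,\cdot\,$nm$^2/($T$\,\cdot\,$m$)$, $\lambda$ in nm, $L$ in m):

```python
import math

E = 4.45e7        # rad nm^2 / (T m), from Eq. (dispersion)
LAM0 = 257.5      # nm

def verdet(lam_nm):
    """Verdet constant of TGG in rad/(T m); the sign reflects the dispersion formula."""
    return E / (LAM0 ** 2 - lam_nm ** 2)

def rotation_angle(lam_nm, b_tesla, length_m):
    """Magnitude of the Faraday rotation angle theta = V * B * L, in rad."""
    return abs(verdet(lam_nm)) * b_tesla * length_m

print(abs(verdet(400.0)))                  # about 475, as quoted in the text
print(rotation_angle(632.8, 1.0, 0.05))    # about 6.66 rad, i.e. > 2*pi at B = 1 T
```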
\begin{figure}[tbh]
\centerline{\includegraphics[width=0.8\columnwidth]{fig2.eps}}
\caption{(Color online) The Faraday rotation angle $\protect\theta $ vs the
light wavelength $\protect\lambda $ for three different magnetic fields $%
B_{1}=0.5$ T (red dotted), $B_{2}=1$ T (green solid) and $B_{3}=2$ T (blue
dashed).}
\label{fig2}
\end{figure}
\section{Conclusion}
In conclusion, we have suggested two useful polarization manipulation
devices. The first device is the modified Simon--Mukunda polarization
controller, which in contrast to the traditional controller
is constructed as a combination of two quarter-wave plates and a Faraday
rotator between them. Our second device for arbitrary to arbitrary
polarization transformation is composed of two Faraday rotators and two
quarter-wave plates, where the retardance and the rotation can be continuously
modified merely by changing the magnetic fields of the two Faraday rotators.
Because they use Faraday rotation, the suggested schemes are non-reciprocal.
We hope the proposed methods for polarization control would be cost-effective and useful in any scientific laboratory.
\section*{Acknowledgment}
This work was supported by Sofia University Grant 80-10-191/2020.
\section{Introduction}
\label{Sec:Introduction}
Optimization is ubiquitous in the pursuit of quantum technologies for molecular transformations~\cite{Magann2021}, high-precision sensing~\cite{Degen2017,Poggiali2018}, secure communications~\cite{Waks2002}, revolutionary computing~\cite{Resch2019}, etc. At the physical level, the classical electromagnetic fields for controlling quantum dynamics need to be optimized for improving performance~\cite{Brif2010,Glaser2015}. The past two decades have witnessed a large number of successes in quantum control, and new applications are still emerging. In a different but closely related area, variational algorithms operating on quantum circuits~\cite{Cerezo2021} also involve optimization of classical control parameters, which are expected to achieve computational advantage on current Noisy Intermediate-Scale Quantum (NISQ) devices~\cite{Preskill2018} with applications in many areas including {quantum simulation, combinatorial optimization, quantum chemistry and quantum machine learning~\cite{Lubasch2020,Bharti2021,Benedetti2021,Plekhanov2022,Amaro2022,Dou2022}.}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{schematics.pdf}
\caption{(a) The general setup of hybrid quantum-classical optimization system; (b) the optimization of classical fields in the control of a molecule or other modest sized quantum system; (c) the training of variational quantum circuits using a classical optimizer.}
\label{fig:schematics}
\end{figure}
Quantum optimal control (QOC) and the variational quantum algorithm (VQA), as shown in Figs.~\ref{fig:schematics}(b) and (c), respectively, can be categorized as hybrid quantum-classical optimization algorithms in Fig.~\ref{fig:schematics}(a). Both QOC and VQA settings fit the framework in Fig.~\ref{fig:schematics}(a) which can be understood as special cases of an early paradigm~\cite{Judson1992}, differing in whether the quantum system is natural (e.g., a molecule) or engineered (e.g., coupled qubits). The objective function to be optimized is evaluated by a quantum ansatz $U(\theta)$, but the optimizer for updating the control variables $\theta$ is classical. Here we use the word ansatz to generally encompass natural or engineered quantum systems described by their respective unitary evolution operator $U$. Since greedy algorithms (e.g., gradient-based algorithms) are a frequent choice, it is fundamental to investigate whether the optimization possesses a nice landscape, i.e., whether the designed algorithm can successfully and efficiently reach favored solutions, with the landscape being the physical objective as a function of the control. Otherwise, a bad landscape (e.g., with many traps) may nullify the prospect of achieving optimal quantum performance.
Suppose that the quantum ansatz is defined on an $N$-dimensional Hilbert space, and the real variables $\theta$ to be optimized are in an $M$-dimensional space $\mathcal{X}$. In most existing quantum control applications, the number of control parameters is much larger than the effective dimension of the system, i.e., $M\gg N$; creating an arbitrary unitary $U$ generally requires $M$ suitable controls satisfying $M\geq N^2$. By contrast, the opposite situation $M\ll N$ may be encountered in NISQ applications, because the dimensionality of the engineered quantum system increases exponentially and will soon far exceed the number of practically tunable circuit parameters. The discussions in this review will show that the optimization landscapes of hybrid quantum-classical algorithms will experience morphological transitions when the size of quantum systems grows from small to large with respect to the available resources of the classical optimizer.
The early investigation of quantum control revealed that the landscape is almost always devoid of traps when the control fields are unlimited~\cite{Rabitz2004}, as is schematically shown in Fig.~\ref{fig:schematics_form}(a). There are additional saddle points on the landscape that may slow down the optimization, but they will not halt even a greedy algorithm search for global optimal solutions~\cite{Riviello2017}. When the control resources are insufficient, e.g., when the control pulses have very limited time duration and bandwidth, false traps will emerge to likely halt the optimization procedure in a suboptimal minimum (Fig.~\ref{fig:schematics_form}(b)). Stochastic algorithms can be effective in this circumstance, but they are often costly to run. Recently, it was discovered that barren plateaus~\cite{McClean2018}, which refer to exponentially vanishing gradients with the growth of the qubit number as a consequence of the measure concentration, may become dominant on the optimization landscapes (see Fig.~\ref{fig:schematics_form}(c)), on which the gradient-based searches cannot find any effective descending directions to follow.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{schematics_form.pdf}
\caption{General forms of the landscapes: (a) a trap-free landscape with non-trapping saddles; (b) a rugged landscape with many false traps; (c) a landscape with a barren plateau.}
\label{fig:schematics_form}
\end{figure}
In this review, we will give a unified survey over the existing results on these different types of optimization landscapes. The remainder of this paper is organized as follows. Section~\ref{Sec:Problems} will formulate the landscape problems and how they arise in QOC and VQA applications. Section~\ref{Sec:Unlimited Systems} will analyze the landscape topology under unlimited resources, followed by Sec.~\ref{Sec:Limited Systems} which will introduce the change of topology when the resources are limited. In Sec.~\ref{Sec:BP}, the interesting Barren Plateau (BP) phenomena will be discussed as well as possible ways to avoid them. Finally, conclusions are drawn in Sec.~\ref{Sec:Conclusion}.
\section{Hybrid Quantum-Classical Optimization Problems}
\label{Sec:Problems}
As is schematically shown in Fig.~\ref{fig:schematics}, many quantum science and technology goals of fundamental and practical interest involve the directing quantum dynamics (e.g., molecules, spin ensembles, superconducting quantum devices) via iterative optimization of control fields, or variational optimization of parameterized quantum circuits. These problems commonly lead to the maximization or minimization of the expectation value of some objective observable $O$, which can be written as follows~\cite{Rabitz2006}
\begin{equation}
J(\theta) =\langle O\rangle = {\rm tr}\left[\rho(\theta)O\right],
\end{equation}
where $\rho(\theta)$ is the parameterized quantum state that encodes the solution to the problem. Here $\theta$ represents the practically manipulatable control variables for engineering the quantum system. Since the quantum state $\rho(\theta)$ is always prepared by some unitary transformation $U(\theta)$ on the system (assumed to be closed), we can alternatively write the objective function as
\begin{equation}\label{eq:landscape function}
J(\theta) = {\rm tr}\left[U(\theta)\rho_0 U^\dag(\theta)O\right],
\end{equation}
where $\rho_0$ is the initial state of the system. Here, the unitary transformation $U(\theta)$ represents the controlled dynamical propagator in QOC or the parameterized quantum circuit in VQA. In the following, we will show how these problems appear in quantum optimal control systems and variational quantum algorithms.
\subsection{Quantum optimal control systems}
The control of quantum systems exists in almost all quantum technologies. For example, shaped femtosecond laser pulses can be applied to manipulate chemical reactions by selectively breaking or forming chemical bonds~\cite{Assion1998,Daniel2003}. In quantum metrology, a quantum sensor can be optimally tuned to improve its sensitivity~\cite{Xu2021}. In quantum computing, such problems are prevalent because state initialization, gate operation and the suppression of noises can all be treated as control problems~\cite{Dong2010}.
Consider an ideal closed system where the general controlled quantum dynamics can be described by the following Schr\"{o}dinger equation:
\begin{equation}\label{eq:quantum control system}
\dot{U}(t; \vec{u}) =-i\left[H_0+\sum_{k=1}^m u_k(t)H_k\right]U(t;\vec{u}).
\end{equation}
Here, $U(t;\vec{u})$ is the system's unitary propagator and $H_0$ is the internal Hamiltonian. The control function $\vec{u}=\{u_1(t),\cdots,u_m(t)\}$ can be freely varied to manipulate the system via the respective control Hamiltonians $H_1,\cdots,H_m$. In many control problems, it is desired to find proper control functions such that
\begin{equation}\label{eq:QOC}
J(\vec{u}) = {\rm tr}\left[U(T;\vec{u})\rho_0 U^\dag(T;\vec{u})O\right]
\end{equation}
is extremized at some prescribed time $t=T$, where $\rho_0$ is the system's initial state. The observable may be a projector $O=|0\rangle\langle 0|$ onto the ground state, which appears in the initialization of a quantum information system. Let $\theta$ be the free parameters involved in the control function $\vec{u}$ (e.g., amplitudes and phases of piecewise-constant pulses), then the propagator $U(T;\vec{u})$ is implicitly parameterized as $U(T;\vec{u}(\theta))$ by $\theta$, and hence we can formulate (\ref{eq:QOC}) into the standard form (\ref{eq:landscape function}).
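As a minimal sketch of evaluating the objective (\ref{eq:QOC}) for a piecewise-constant parameterization $\theta =(u_1,\dots ,u_n)$, the following toy model (our own choice for illustration) takes a single qubit with drift $H_0=\sigma_z$ and a single control Hamiltonian $H_1=\sigma_x$, using the exact $2\times 2$ propagator in place of a generic ODE solver:

```python
import numpy as np

SX = np.array([[0., 1.], [1., 0.]], dtype=complex)   # control Hamiltonian H_1
SZ = np.array([[1., 0.], [0., -1.]], dtype=complex)  # drift Hamiltonian H_0

def step(u, dt):
    # exact propagator of H = SZ + u*SX over dt, using H^2 = (1 + u^2) I:
    # exp(-i dt H) = cos(r dt) I - i sin(r dt) H / r, with r = sqrt(1 + u^2)
    r = np.sqrt(1.0 + u * u)
    return np.cos(r * dt) * np.eye(2) - 1j * np.sin(r * dt) * (SZ + u * SX) / r

def J(controls, dt=0.5):
    """Objective tr[U rho0 U^dag O]: population transferred from |1> to |0>."""
    U = np.eye(2, dtype=complex)
    for u in controls:              # piecewise-constant pulse theta = (u_1, ..., u_n)
        U = step(u, dt) @ U
    rho0 = np.diag([0.0, 1.0])      # initial state |1><1|
    O = np.diag([1.0, 0.0])         # objective observable |0><0|
    return float(np.real(np.trace(U @ rho0 @ U.conj().T @ O)))

print(J([0.0] * 5))   # -> 0.0: the drift alone preserves populations
print(J([1.0] * 5))   # a nonzero pulse transfers population
```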
In quantum optimal control, the resources available for the optimizer depends on the number and range of variables in the control pulses, e.g., the time duration, the sampling rate, the power and the bandwidth, etc. The resources are also implicitly dependent on the coherence time of the system within which the propagator $U(T;\vec{u})$ in (\ref{eq:QOC}) is at least approximately unitary.
\subsection{Variational quantum algorithms}
Variational quantum algorithms can be deployed on any quantum system realization while exploiting quantum control resources. However, NISQ computers may have a long-standing role, or at least a shorter-term period of utility, before fault-tolerant quantum computers become available. For intermediate-scale or even large-scale quantum computers, parameterized quantum circuits (PQC), also referred to as quantum neural networks (QNN)~\cite{Benedetti2019}, are broadly adopted as the computational ansatz with the assistance of a classical optimizer. Such computational models are expected to achieve a quantum advantage when there are sufficiently many qubits whose noise is sufficiently low and adequate control resources are available~\cite{Bishop2006}. A PQC usually consists of layers of elementary single- or two-qubit gates, some of which are tunable. For example, the single-qubit gates can be chosen as rotations around the $x$-axis on the Bloch sphere with the rotation angle being tunable. Let $\theta_1,\cdots,\theta_n$ be the parameters in these layers, and we have
\begin{equation}\label{}
U(\theta)=U_n(\theta_n)\cdots U_1(\theta_1).
\end{equation}
The architecture of the parameterized quantum circuits is determined by the qubit connectivity topology of the hardware device, and two-qubit gates are most convenient between directly coupled qubits. The selection of the quantum ansatz is sometimes inspired by the problem itself. For example, the quantum approximate optimization algorithm (QAOA)~\cite{Farhi2014,Farhi2019} uses alternating evolution of the initial and problem Hamiltonians $H_0$ and $H_P$, respectively, such that
\begin{equation}\label{}
U(\theta)=e^{-it_nH_P}e^{-i\tau_nH_0}\cdots e^{-it_1H_P}e^{-i\tau_1H_0},
\end{equation}
which is actually a quantum control system under bang-bang controls. The parameters $\theta$ consist of the evolution times $t_1,\tau_1,\cdots,t_n,\tau_n$. In practice, the problem-inspired ansatz may need to be transformed to hardware-inspired ones by decomposition and Trotterization.
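As an illustration (a toy two-qubit instance of our own choosing, with $H_P=\sigma_z\otimes \sigma_z$ and $H_0=\sigma_x\otimes I+I\otimes \sigma_x$), the alternating ansatz and its objective can be assembled as follows:

```python
import numpy as np

SX = np.array([[0., 1.], [1., 0.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

HP_diag = np.array([1.0, -1.0, -1.0, 1.0])   # H_P = sigma_z (x) sigma_z, diagonal

def U_mixer(tau):
    # exp(-i tau (X (x) I + I (x) X)) factorizes since the two terms commute
    u1 = np.cos(tau) * I2 - 1j * np.sin(tau) * SX
    return np.kron(u1, u1)

def U_problem(t):
    return np.diag(np.exp(-1j * t * HP_diag))  # H_P is diagonal

def qaoa(params):
    """params = [tau_1, t_1, ..., tau_n, t_n]; rightmost factors act first, as in the text."""
    U = np.eye(4, dtype=complex)
    for tau, t in zip(params[0::2], params[1::2]):
        U = U_problem(t) @ U_mixer(tau) @ U
    return U

def J(params):
    plus = np.full(4, 0.5, dtype=complex)      # |++>, the usual QAOA initial state
    psi = qaoa(params) @ plus
    return float(np.real(np.vdot(psi, HP_diag * psi)))

U = qaoa([0.3, 0.7, 0.2, 0.5])
print(np.allclose(U @ U.conj().T, np.eye(4)))  # the ansatz is unitary
print(J([0.3, 0.7, 0.2, 0.5]))                 # energy lies within the spectrum [-1, 1]
```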
The objective observable chosen for VQA depends on the specific applications. For a variational quantum eigensolver~\cite{Cao2019} or QAOA, the observable may be chosen as a non-local Hamiltonian encoding the problem that involves many qubit-qubit interactions. For machine learning tasks (e.g., classification), the observable can be defined locally on a few qubits whose states indicate the candidate output~\cite{Chen2021}. Later we will see that the choice of observables affects the landscape geometry.
Similar to QOC applications, the resources available for VQA are correlated with the tunable parameters involved in the quantum circuit, which are jointly determined by its width (i.e., the number of qubits) and its number of layers. Due to the noise in NISQ devices, only shallow circuits can be properly utilized by the objective function (\ref{eq:landscape function}); otherwise any useful information will be buried in the noise.
\section{The Trap-free Landscape with Abundant Resources}
\label{Sec:Unlimited Systems}
In this section, we will show that the landscape is almost always trap-free under appropriate conditions and when the control resources are abundant. The analysis has been applied to explain the large number of successes in QOC experiments and simulations. The same method can be naturally generalized to small-scale quantum circuits by which any unitary can be achieved~\cite{Magann2021b}.
\subsection{Basic assumptions}
Suppose that the resources are abundant in the sense that any unitary $U$ can be realized by some properly chosen parameter $\theta\in\mathcal{X}$, i.e., the mapping $U(\theta)$ from $\mathcal{X}$ to the unitary group $\mathcal{U}(N)$ is surjective. This means that $M\geq N^2$ for physically suitable controls, and under many practical circumstances we actually have $M\gg N^2$. In control systems, this implies that the system is fully controllable over $\mathcal{U}(N)$, namely any unitary matrix can be produced by some control fields.
The presence of surjectivity makes it possible to transfer the landscape analysis to the following kinematic landscape
\begin{equation}\label{}
J(U)={\rm tr}[U\rho_0 U^\dag O],
\end{equation}
which is defined on the image of $\mathcal{X}$ that fills up $\mathcal{U}(N)$ under the resource-abundance assumption. This kinematic landscape is relatively easy to analyze because it is only quadratically dependent on $U$, while $J(\theta)$ may involve very complicated nonlinearities. The connection between the two landscapes can be understood from the chain rule for the $\alpha$-th control $\theta_\alpha$:
\begin{equation}\label{eq:chain1}
\frac{\partial J}{\partial \theta_\alpha} = \sum_{i,j=1}^N\left[ \frac{\partial J}{\partial U_{ij}}\frac{\partial U_{ij}}{\partial \theta_\alpha}\right].
\end{equation}
In addition, at kinematic critical points where $\frac{\partial J}{\partial U(\theta)}=0$, the second-order derivatives are connected by
\begin{equation}\label{eq:chain2}
\frac{\partial^2 J}{\partial \theta_\alpha\partial\theta_\beta} = \sum_{i,j=1}^N \sum_{k,l=1}^N\left[\frac{\partial U_{ij}}{\partial \theta_\alpha} \frac{\partial^2 J}{\partial U_{ij}\partial U_{kl}}\frac{\partial U_{kl}}{\partial \theta_\beta}\right].
\end{equation}
Clearly, the vanishing of the kinematic gradient $\frac{\partial J}{\partial U(\theta)}$ must lead to $\frac{\partial J}{\partial \theta} \equiv 0$, which implies that $\theta$ must be critical if the corresponding $U(\theta)$ is a critical point of $J(U)$. However, not all critical controls $\theta$ come from kinematic points, because the kinematic gradient can be nonzero when the Fr\'{e}chet derivative $\frac{\partial U(\theta)}{\partial \theta}$ is rank-deficient. Therefore, we conclude that the landscape topology is equivalent to the kinematic one under the following assumptions.
(1) The mapping from $\theta$ to $U$ is globally surjective, i.e., any unitary $U$ can be realized by some admissible $\theta$~\cite{HUANG1983}.
(2) The mapping from $\theta$ to $U$ is everywhere locally surjective, i.e., the Jacobian $\frac{\partial U}{\partial \theta}$ is full rank for all admissible $\theta$~\cite{Wu2012a}.
The assumptions also guarantee that, up to second order, any locally maximal (locally minimal) or saddle critical point must correspond to a kinematic critical point of the same type, because Eq.~(\ref{eq:chain2}) defines a congruent transformation that preserves the sign of non-zero Hessian eigenvalues, as long as the Jacobian mapping is full rank at the critical point. Theoretically, the second-order analysis is incomplete because higher-order variations can matter along directions associated with zero Hessian eigenvalues, implying that additional critical points that are unseen in the kinematic picture may exist due to the violation of local regularity. Nevertheless, both theoretical analysis and empirical studies indicate that such critical points are rare and have negligible influence on the search for globally optimal controls~\cite{Wu2012a,Riviello2014}.
\subsection{The critical topology of the landscape}
The above analysis shows that, as long as the classical optimizer has sufficiently abundant resources, generic landscape features can be extracted from the kinematic landscape. It is easy to prove that the condition for a unitary transformation $U$ to be a kinematic critical point is
\begin{equation}\label{}
[U\rho_0U^\dag,O]=0.
\end{equation}
At the critical point, the Hessian form is
\begin{equation}\label{}
\mathcal{H}(A)={\rm tr}(UA\rho_0AU^\dag O-UA^2\rho_0U^\dag O),
\end{equation}
which is obtained by Taylor expanding $J(U)$ in the neighborhood of $U$, parameterized as $Ue^{iA}$ with $A$ Hermitian and of small norm.
These conditions provide the basis for extracting all possible kinematic critical points and the curvature near them via Hessian analysis.
It is revealed that the kinematic landscape possesses a number of critical submanifolds, among which only one is locally maximal (or minimal)~\cite{Rabitz2006,Rabitz2005}. The absence of other locally suboptimal extrema indicates that a gradient-based optimization of $\theta$ starting from an arbitrary point should almost always reach the top (or bottom) of the landscape without being trapped at lower (or upper) suboptimal values. There is no definite conclusion on the connectedness of the dynamical submanifolds, but most simulations appear to support that the maximal (minimal) submanifold is very likely connected, which can be numerically probed by the level-set exploration technique~\cite{Beltrani2007,Beltrani2011a} using homotopy algorithms~\cite{Rothman2005,Dominy2008}.
In addition to the unique maximal and minimal submanifolds, there are usually multiple saddle submanifolds of $\mathcal{U}(N)$. Generically, there are fewer saddle submanifolds when $\rho_0$ or $O$ is highly degenerate, and these submanifolds tend to be high dimensional. Typically, when $\rho_0$ and $O$ are both fully non-degenerate, there is a total of $N!$ critical submanifolds, among which $N!-2$ are saddle submanifolds~\cite{Rabitz2006,Wu2008a}. However, when $\rho_0$ is a pure state, there are at most $N$ critical submanifolds, among which $N-2$ are saddles. For general cases, the contingency table technique~\cite{Wu2008a} was proposed to explicitly enumerate the critical submanifolds. Although the resulting combinatorial problem has no general analytic solutions, some special cases can be approximately estimated to get a good understanding of the distribution of critical manifolds~\cite{Wu2008a,Wu2008}.
\subsection{Fundamental bounds}
The bounds on the value of the objective function and the curvature are fundamental for the geometric understanding of the optimization landscape. They scale with the dimensionality of the quantum ansatz as well as the available resources of the classical optimizer. Here, we introduce some existing results on such bounds for landscapes with unlimited control resources.
It is clear that the value of the objective function is ultimately bounded by the maximal and minimal eigenvalues of the observable $O$. These bounds are achievable when $\rho_0$ is a pure state and any unitary $U$ can be realized, but not necessarily when $\rho_0$ is mixed. Nevertheless, the presence of an ancillary system acting as a quantum controller (e.g., an engineerable environment for a control system, or ancilla qubits for a PQC) may purify the system's mixed state and thus broaden the achievable bounds of the landscape. When the ancillary system starts in thermal equilibrium, it was proven that the bounds limited by the purity of $\rho_0$ can be surpassed once the temperature falls below a threshold value determined by the minimal energy gap of the environment Hamiltonian $H_E$, and that the ultimate bound is approached as the temperature goes to absolute zero. The threshold temperature can be taken as a witness of the quantum effect of the environment, and the minimal energy gap of $H_E$ can be treated as the ``bandwidth'' of the quantum controller, quantifying its capacity for performance improvement~\cite{Wu2015aa}.
For the optimal control of quantum systems, the norm of the gradient is upper bounded by the product of the operator norms of $O$ and the control Hamiltonians, implying that the gradient-based search will never explode~\cite{Ho2006}. In other words, the landscape has limited slope. Moreover, when the system's dimension increases, any gradient component may shrink into an extremely narrow distribution centered at zero, leading to the so-called barren-plateau landscape~\cite{McClean2018}. On the plateau, it is exponentially expensive to precisely evaluate the gradient by sampling. Also, moving along the gradient will be very slow unless exponentially many control resources are available. More details of barren plateaus will be discussed in Sec.~\ref{Sec:BP}.
The flatness of the landscape can be also observed from the curvature associated with the Hessian form~\cite{Beltrani2011}, as gradient-based algorithms converge faster along Hessian eigenvectors associated with large Hessian eigenvalues. However, large Hessian eigenvalues also imply that the control is less robust to noise varying along the associated eigenvectors. Thus, a trade-off needs to be made between the convergence speed and the noise robustness~\cite{Hocker2014}.
\section{The Rugged Landscape with Limited Resources}
\label{Sec:Limited Systems}
The above analysis indicates that false traps (see Fig.~\ref{fig:schematics_form}(b)) will likely emerge when the classical optimizer does not have sufficiently many parameters to generate arbitrary quantum unitaries. In this section, we discuss how the landscape is reshaped under insufficient optimization resources.
From the control system point of view, the emergence of false traps is ascribed to the loss of controllability and regularity (regularity means that the Jacobian $\partial U/\partial \theta$ is full rank). The controllability of quantum systems can be examined by the rank of the Lie algebra generated by the drift and control Hamiltonians via their nested commutators. The system is controllable when the generated Lie algebra is identical with the Lie algebra ${\bf u}(N)$ of $\mathcal{U}(N)$~\cite{HUANG1983}. When the system possesses certain dynamic symmetry, i.e., when the generated Lie algebra is a proper Lie subalgebra of ${\bf u}(N)$, the system will become uncontrollable even when the associated control field resources are unlimited.
In the literature, several cases of dynamical symmetry have been proven to introduce no traps for gate control landscapes, defined as $J(U)={\rm Re}\,{\rm tr}(W^\dag U)$ with $W$ being the target gate, including the set of symmetric unitary transformations, the set of symplectic dual transformations~\cite{Hsieh2010}, and the set of symplectic transformations in continuous-variable quantum computing systems~\cite{Wu2010}. However, false traps may appear when the generated Lie algebra is relatively small. In Ref.~\cite{Wu2011}, it is explicitly shown that $N/2$ physically nontrivial traps exist under $\mathcal{SU}(2)$ dynamical symmetry when the target gate $W$ is reachable (see Fig.~\ref{fig3}(a)), and the landscape usually becomes more rugged when $W$ is not reachable (see Fig.~\ref{fig3}(b)).
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{traps.pdf}
\caption{Local traps in the landscape induced by ${\bf SU}(2)$ dynamical symmetry reproduced with permission from Ref.~\cite{Wu2011}: (a) when the target gate is inside the ${\bf SU}(2)$ subgroup; (b) when the target gate is outside the ${\bf SU}(2)$ subgroup.}
\label{fig3}
\end{figure}
The loss of controllability and regularity is more commonly caused by physical constraints on the control field, even when the controllability Lie algebra is full rank. The constraints may take the form of bounded power, finite bandwidth, etc., or a pulse duration limited by the coherence time. A simple way to systematically explore the landscape with constrained control resources is to restrict kinematic controls (e.g., entries of $U(T)$) that can be mapped to corresponding dynamic controls via a topology-preserving transformation. Suboptimal dynamic controls are identified as isolated points on the landscape, and they are shown to have rich and complex features~\cite{Donovan2013,Donovan2014a,Donovan2015}. Numerical simulations~\cite{Riviello2015} show that the search for a globally optimal solution may be prevented when constraints exceed certain thresholds, so a careful choice of the relevant control parameters helps to eliminate such traps and facilitate successful optimization.
In variational quantum algorithms, uncontrollable quantum dynamics generally corresponds to under-parameterized quantum circuits whose reachability is deficient. Hence, one can expect that the optimization landscape will likely be rugged as well. Indeed, under-parameterized quantum circuits commonly have spurious local optima (i.e., traps), and thus classical gradient-based optimizers may fail to reach a globally optimal solution. For example, in Ref.~\cite{You2021}, a class of simple QNNs is identified to be hard to train: there exist datasets that induce exponentially many (in the number of control variables) spurious local optima. In such circumstances, the optimizer and the relevant algorithmic hyperparameters (e.g., the learning rate) can be carefully chosen to escape traps~\cite{Wierichs2020}. The undesired local optima can also be avoided by connecting the quantum circuit to a classical feedforward neural network, which is expected to modify the landscape itself~\cite{Rivera2021}. In noisy quantum devices, the landscape suffers more heavily from local traps, because the noise (e.g., non-unital Pauli noise) can break the symmetries in under-parameterized quantum circuits and lift the degeneracy of minima, making many of them false traps. Hence, novel optimization methods, e.g., the Symmetry-based Minima Hopping (SYMH) optimizer, are required to mitigate the effect of noise and guide the search to more noise-resilient minima~\cite{Fontana2020}.
In the literature, the influence of the number of quantum circuit parameters on the landscape has been studied. A few specifically structured QNNs exhibit the over-parametrization phenomenon~\cite{Lee2021,Anschuetz2021,Larocca2021}, which is commonly observed in classical neural networks. Over-parametrization means that the QNN has more than a critical number of parameters, which guarantees that the achievable rank of the quantum Fisher information matrix (QFIM) is saturated at least at one point of the landscape. Numerical simulations show that the QFIM rank is then saturated almost everywhere simultaneously. That is to say, the mapping from the parameters to the final state is locally surjective almost everywhere, leading to a favorable landscape. Thus, as the number of parameters increases, the QNN experiences a phase transition in trainability. For periodic-structured QNNs, it has been demonstrated that the critical threshold on the parameter number is related to the dimension of the dynamical Lie algebra (DLA) generated by the QNN generators~\cite{Larocca2021}. For a quantum ansatz, however, the deep layers needed for over-parametrization may lead to barren plateaus. Therefore, the structure of the quantum ansatz should be carefully designed to guarantee both scalability and trainability.
\section{The Barren-plateau Landscape with Scarce Resources}
\label{Sec:BP}
In practical applications (especially with VQAs), for an $n$-qubit system the number of tunable parameters is generally $\mathcal{O}({\rm poly}(n))$ for the sake of computational efficiency, which is scarce compared with the quantum system dimension as $n$ scales up. Under such circumstances, false traps may no longer be the major obstacle for optimization, because the presence of a barren plateau (BP) brings up greater challenges. In this section, we discuss how BPs arise, their origins, and the ways to escape them.
\subsection{The effects of barren plateaus}
The BP phenomenon was first noticed in the study of quantum neural networks~\cite{McClean2018}. A BP means that every gradient component has zero mean, i.e., ${\rm E}_{\theta} \left[\frac{\partial J}{\partial \theta_i}\right]=0$ for all $1\leq i\leq M$, over the parameter space $\mathcal{X}$ of the quantum ansatz, and that its variance is exponentially bounded by
\begin{equation}
{\rm Var}_\theta\left(\frac{\partial J}{\partial \theta_i}\right)\leq e^{-\beta n}
\end{equation}
for some constant $\beta >0$. From Chebyshev's inequality, this further implies that the probability that a partial derivative deviates from its zero mean by at least $\epsilon$ vanishes exponentially as
\begin{equation}
P\left(\left|\frac{\partial J}{\partial \theta_i}\right|\geq \epsilon\right)\leq \epsilon^{-2}e^{-\beta n}.
\end{equation}
This behavior indicates that, for sufficiently large qubit number $n$, the gradient is almost always vanishing at any randomly chosen $\theta$. In other words, almost all $\theta$ look like critical points, and a search that follows such gradients can hardly move. Consequently, a large number of iterations is required to train optimal parameters even in the best of circumstances, when noise is very weak. Additionally, the exponential suppression of the gradient is often accompanied by exponentially narrowed minima~\cite{Arrasmith2021}, forcing the learning rate to be exponentially small so as not to overstep the narrow-gorge solutions. All these factors make the training extremely hard when the number of qubits is large.
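The variance suppression can be illustrated with a minimal statevector simulation in the spirit of the random-circuit setup of Ref.~\cite{McClean2018} (our own sketch; the circuit family, depth, and sample counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def rot(P, theta):
    # single-qubit rotation exp(-i theta P / 2)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P

def apply_1q(psi, gate, q, n):
    psi = np.moveaxis(psi.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=[[1], [0]])
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(psi, q1, q2, n):
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def objective(thetas, axes, n):
    # layers of random-axis rotations followed by a CZ ladder; J = <Z_0>
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for l in range(thetas.shape[0]):
        for q in range(n):
            psi = apply_1q(psi, rot(PAULIS[axes[l, q]], thetas[l, q]), q, n)
        for q in range(n - 1):
            psi = apply_cz(psi, q, q + 1, n)
    p = np.abs(psi.reshape(2, -1)) ** 2
    return p[0].sum() - p[1].sum()

def grad_samples(n, depth=20, samples=100):
    # parameter-shift derivative w.r.t. the first angle over random circuits
    g = np.empty(samples)
    for s in range(samples):
        thetas = rng.uniform(0, 2 * np.pi, size=(depth, n))
        axes = rng.integers(0, 3, size=(depth, n))
        tp = thetas.copy(); tp[0, 0] += np.pi / 2
        tm = thetas.copy(); tm[0, 0] -= np.pi / 2
        g[s] = 0.5 * (objective(tp, axes, n) - objective(tm, axes, n))
    return g

for n in (2, 4, 6):
    g = grad_samples(n)
    print(f"n={n}: mean={g.mean():+.4f}, var={g.var():.5f}")
```

With a fixed seed, the estimated variance shrinks rapidly with $n$ while the sample mean stays near zero, in line with the bound above.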
Another side-effect of BPs is on the estimation of gradients (especially in VQA applications), which is often done via the parameter shift rule~\cite{Mitarai2018} as follows
\begin{equation}
\frac{\partial J(\theta)}{\partial \theta_j}=\frac{1}{2}\left[J\left(\theta_1,\cdots,\theta_j+\frac{\pi}{2},\cdots,\theta_M\right)-J\left(\theta_1,\cdots,\theta_j-\frac{\pi}{2},\cdots,\theta_M\right)\right].
\end{equation}
To guarantee a reliable gradient direction, an exponential number of repeated measurements of the objective function $J(\theta)$ is needed to overcome sampling noise; otherwise, the optimization performs no better than a random walk. This effect cannot be avoided by simply changing the classical optimizer. In Ref.~\cite{Cerezo2021a}, it was shown that Newton-type algorithms make no difference, because exponentially many measurements are still required to evaluate the Hessian matrix obtained by applying the parameter shift rule twice. In the presence of BPs, the landscape value exhibits an exponential concentration about its mean~\cite{Arrasmith2021,Arrasmith2021b}, i.e., the variance ${\rm Var}_\theta[J(\theta)]$ decays exponentially with the qubit number $n$. Gradient-free optimizers (e.g., Nelder-Mead, Powell, and COBYLA) cannot improve the efficiency of optimization either, because decisions made in these algorithms are based on comparing objective function values between different points~\cite{Arrasmith2021b}.
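For a single Pauli-rotation gate, the rule is easy to verify against a finite difference. The minimal single-qubit sketch below is our own; it assumes the common gate convention $e^{-i\theta P/2}$, for which the shifts are $\pm\pi/2$ with prefactor $1/2$:

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def ry(theta):
    # exp(-i theta Y / 2), a real rotation matrix
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y

def J(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])   # rotate |0>
    return (psi.conj() @ Z @ psi).real       # <Z> = cos(theta)

theta = 0.7
shift = 0.5 * (J(theta + np.pi / 2) - J(theta - np.pi / 2))  # parameter shift
fd = (J(theta + 1e-6) - J(theta - 1e-6)) / 2e-6              # finite difference
print(shift, fd, -np.sin(theta))  # all three agree
```

Unlike the finite difference, the parameter-shift value is exact (here $-\sin\theta$), since the objective is a trigonometric polynomial in $\theta$.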
It should be noted that the exponential vanishing of the gradient components can be compensated for by adding exponentially many control parameters. In such a case, a gradient of suitable norm can provide an effective direction for updating the parameters $\theta$~\cite{Kim2021}. In addition, the landscape then need not exhibit the narrow gorges or the value-concentration phenomena that generally accompany BPs when control resources are scarce.
\subsection{The origins of barren plateaus}
There are multiple factors that form barren plateaus on the optimization landscapes of hybrid quantum-classical algorithms, including high expressibility of the quantum ansatz~\cite{McClean2018}, a non-locally defined objective function~\cite{Cerezo2021b,Sharma2020}, as well as excess entanglement~\cite{Patti2021,Marrero2021} and noise in the quantum ansatz~\cite{Wang2021,Franca2021}. Besides VQA landscapes, quantum optimal control landscapes also suffer from BPs as the system dimension scales up. As shown in Ref.~\cite{Arenz2020}, for the uniformly random target-state generation problem, the control landscape is exponentially flat as a consequence of the concentration of measure.
The first discovery of the BP phenomenon was in random and deep parameterized quantum circuits~\cite{McClean2018}, where BP was proven to exist when the unitary transformations produced by the PQC form a $2$-design, i.e., they are sufficiently random so that the average
\begin{equation}
\int_{\mathcal{X}} {\rm d}U(\theta) U(\theta)^{\otimes 2}\rho (U^\dagger(\theta))^{\otimes 2}=\int_{\mathcal{U}(N)}{\rm d}\mu (U)U^{\otimes 2}\rho (U^\dagger)^{\otimes 2}
\end{equation}
holds for any $\rho$, where $\mu (U)$ denotes the Haar distribution on the unitary group. In practice, a PQC can approximate a $2$-design when it is sufficiently deep, e.g., a hardware-efficient ansatz with depth $\mathcal{O}({\rm poly}(n))$~\cite{Renes2004,Harrow2009}. Since the distance of the ansatz from a $2$-design measures its expressibility, in analogy with the controllability of a quantum control system, the appearance of BPs indicates that a highly expressive quantum ansatz exhibits a flat optimization landscape and is harder to train~\cite{Holmes2022}.
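The flavor of such Haar averages can be checked numerically at the first-moment level (a simplified analogue of the $2$-design condition, our own sketch): sampling Haar-random unitaries via a phase-corrected QR decomposition, the twirl ${\rm E}[U\rho U^\dag]$ converges to ${\rm tr}(\rho)\,I/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

def haar_unitary(N):
    # QR of a complex Gaussian matrix, with column phases fixed by diag(R)
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Q, R = np.linalg.qr(A)
    d = np.diag(R)
    return Q * (d / np.abs(d))

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                     # pure state |0><0|

M = 4000
avg = np.zeros((N, N), dtype=complex)
for _ in range(M):
    U = haar_unitary(N)
    avg += U @ rho @ U.conj().T / M

# first-moment twirl: E[U rho U^dag] = tr(rho) I / N
print(np.linalg.norm(avg - np.eye(N) / N))  # small, shrinking like 1/sqrt(M)
```

The second-moment ($2$-design) average in the displayed equation can be probed the same way, at the cost of working on the doubled space.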
The BP phenomenon can also be observed in shallow quantum circuits with certain objective functions~\cite{Cerezo2021b,Uvarov2021}. As shown in Fig.~\ref{fig:locality}, global objective functions (i.e., when the observable $O$ acts non-trivially on all qubits) can lead to BPs for circuits at any depth, as long as the layered hardware-efficient ansatz consists of blocks of local $2$-designs. When the objective function is locally defined (i.e., when $O$ acts only on a few qubits), the quantum ansatz is trainable when the circuit depth is at the level of $\mathcal{O}({\rm log}(n))$, because the gradient vanishes at worst polynomially, while BPs appear when the depth is polynomial in $n$. The dependence on the locality of the observable extends to more general objective functions consisting of Pauli strings~\cite{Uvarov2021}, but the structure of the ansatz has an even more subtle influence.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{locality.pdf}
\caption{Trainability of the hardware-efficient ansatz in terms of the circuit depth for the global cost function (upper) and the local cost function (lower), respectively~\cite{Cerezo2021b}.}
\label{fig:locality}
\end{figure}
The connection between BPs and non-local objective functions can be understood from another perspective. Recently, it was found that BPs can also be induced by strong entanglement between qubits~\cite{Patti2021,Marrero2021}, the non-local correlation deemed the most valuable resource for achieving quantum computational advantage. For example, a QNN that satisfies a volume law in the entanglement entropy is difficult to train, because the gradient vanishes exponentially with the number of hidden qubits for any bounded observable. Like the conflict with expressibility above, this implies a trade-off between trainability and the entanglement resource that has to be made in practice.
On NISQ devices, noise also induces BPs~\cite{Wang2021}, because the final quantum state processed by the ansatz converges exponentially to the maximally mixed state, which flattens the optimization landscape. Complexity analysis shows that, for local Pauli noise acting throughout the PQC, the gradient vanishes exponentially in the number of qubits $n$ when the depth of the ansatz is linear in $n$. It should be noted that this exponential decay applies to the gradient itself, not to the variance of the gradient discussed above, and that the decay is purely a decoherence effect, independent of the parameter initialization strategy, the locality of the objective function, and the structure of the ansatz.
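The flattening mechanism can be seen in a one-qubit caricature (our own sketch; the depolarizing-noise model, strength, and depths are illustrative assumptions): with a depolarizing channel of strength $p$ after each of $L$ layers, the landscape $J(\theta)=\langle Z\rangle$ keeps its shape but its amplitude decays as $(1-p)^L$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(t):
    # exp(-i t Y / 2)
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Y

def depolarize(rho, p):
    # rho -> (1-p) rho + p tr(rho) I/2
    return (1 - p) * rho + p * np.trace(rho).real * I2 / 2

def landscape(theta, layers, p):
    # split RY(theta) into `layers` slices, each followed by depolarizing noise
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    for _ in range(layers):
        U = ry(theta / layers)
        rho = depolarize(U @ rho @ U.conj().T, p)
    return np.trace(rho @ Z).real

thetas = np.linspace(0.0, 2.0 * np.pi, 50)
for L in (1, 10, 50):
    amp = max(abs(landscape(t, L, p=0.05)) for t in thetas)
    print(f"L={L}: amplitude={amp:.4f}, (1-p)^L={0.95 ** L:.4f}")
```

Because the depolarizing channel is unitarily covariant, the Bloch vector is shrunk by $(1-p)$ per layer, so $J(\theta)=(1-p)^L\cos\theta$: the deeper the noisy circuit, the flatter the landscape.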
\subsection{Algorithm design for mitigating barren-plateau effects}
Based on the above understanding of BPs, several strategies have been proposed to avoid or mitigate the effects of the BPs on optimization.
Well-designed PQC architectures can be resilient to the BP phenomenon. In Ref.~\cite{Nakaji2021}, it was shown that expressibility and trainability can coexist for a class of shallow alternating layered ansatzes. As shown in Fig.~\ref{fig:QNN}, high trainability can be achieved in QNNs with an at worst polynomially decaying gradient~\cite{Pesah2021,Zhang2020}, such as the convolutional QNN and QNNs with a tree tensor structure or with a step-controlled structure. Another special structure, namely system-agnostic ansatzes based on trainable Fourier coefficients of Hamiltonian system parameters, has also been shown to exhibit mild or entirely absent BPs~\cite{Broers2021}. In addition, for a periodic-structured problem-inspired ansatz, the variance of the partial derivative is inversely proportional to the dimension of the DLA, so the gradient scaling can be diagnosed via the degree of system controllability~\cite{Larocca2021b}. This provides a better understanding and prediction of the presence or absence of BPs in problem-inspired ansatzes such as the QAOA and the Hamiltonian variational ansatz (HVA)~\cite{Larocca2021b,Wiersema2020}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{QNN.pdf}
\caption{Quantum neural networks with special structure: (a) quantum convolutional neural network involving a sequence of convolutional and pooling layers~\cite{Pesah2021}; (b) quantum neural network with a tree tensor structure~\cite{Zhang2020}; (c) quantum neural network with a step controlled structure~\cite{Zhang2020}.}
\label{fig:QNN}
\end{figure}
In addition to structure design, BPs can also be avoided by specially designed initialization schemes. For the example of the QAOA, a high-quality parameter initialization can be obtained by solving a similar but smaller-size problem, owing to the widespread parameter-concentration phenomenon~\cite{Brandao2018,Rudolph2021}. For general PQCs, it has been proposed to randomly select some of the initial parameter values and fit the remaining values such that the resulting circuit is a fixed unitary matrix~\cite{Grant2019}. The main idea is to restrict the randomness and circuit depth so that the circuit cannot approach a $2$-design. Alternatively, one can reduce the dimensionality of the parameter space by using random PQC architectures containing correlated parameters, even when the objective function is defined by a global operator~\cite{Volkoff2021}, although expressibility is sacrificed in this case.
Similar to the training of classical deep neural networks, the layerwise training strategy has also been proposed to avoid the problem of BPs because each training stage addresses only a low-depth circuit~\cite{Skolik2021}.
In Ref.~\cite{Verdon2019}, the idea of pretraining is introduced, i.e., the parameters are pretrained by classical neural networks, which are then transferred to the quantum neural network and fine tuned. In this way, the total number of optimization iterations required to reach a given accuracy can be significantly improved.
Neither initialization nor ansatz-design strategies help against noise-induced BPs. In such a case, efficient error-mitigation techniques~\cite{Temme2017,Endo2021} become an indispensable ingredient for improving trainability. As an example, a VQE combined with advanced error-mitigation strategies, using up to a dozen qubits on the Sycamore quantum processor, was applied to accurately model the binding energy of hydrogen chains~\cite{Arute2020}.
\section{Conclusion}
\label{Sec:Conclusion}
To conclude, we have reviewed existing studies on the optimization landscape of hybrid quantum-classical algorithms that consist of a quantum ansatz and a classical optimizer. Viewed across scales, from small quantum control systems to intermediate- and then large-scale quantum circuits, the landscape undergoes a morphological transition as the optimizer goes from resource-abundant to resource-scarce relative to the exponentially increasing size of the ansatz.
These results explain why the control of small-size quantum systems was so successful over the past decades, and how hard it will be to work with NISQ devices and algorithms. In particular, the established landscape picture for high-dimensional quantum systems shows that a compromise must be made between the expressibility (or, in the context of control, the controllability) of the quantum ansatz and the trainability (or efficiency) of the optimizer. This conclusion implies that, with regard to practical applications, quantum advantages expected with NISQ devices may not be easy to find. A better understanding of the ansatz-optimizer trade-off between the quantum and classical players may provide a clearer path towards maximally extracting the power of NISQ algorithms. Many open challenges lie ahead.
Further into the future, the hybrid quantum-classical algorithms may gradually evolve into full quantum-quantum algorithms where the optimizer is realized by a fault-tolerant programmable quantum computer. In this scenario, the optimizer may possess equivalently as many resources as the ansatz, and the underlying landscape may look very different. Some relevant investigations have been done from the view of convex optimization~\cite{Wu2019,Banchi2020}, but there are many more directions ahead to explore.
\section*{References}
\section*{Nomenclature}
\section{Introduction}
Space settlements need a nearly closed ecosystem for food production. One of
the fundamental parts of a closed ecosystem is the carbon cycle. In
the carbon cycle (Fig.~\ref{fig:earth}), plants fix carbon from
atmospheric CO$_2$ by photosynthesis, producing biomass [approximately
sugar, net formula \textit{n}(CH$_2$O)] and liberating oxygen,
\begin{equation}
\mathrm{CO}_2 + \mathrm{H}_2\mathrm{O} + \mathrm{light} \to \mathrm{CH}_2\mathrm{O} + \mathrm{O}_2.
\label{eq:photosynthesis}
\end{equation}
The biomass is consumed and metabolised by decomposers, animals and
people. Metabolism is the reverse reaction of photosynthesis,
\begin{equation}
\mathrm{CH}_2\mathrm{O} + \mathrm{O}_2 \to \mathrm{CO}_2 +
\mathrm{H}_2\mathrm{O} + \mathrm{energy}.
\label{eq:burning}
\end{equation}
\begin{figure}
\includegraphics[width=0.49\textwidth]{earth.pdf}
\caption{Carbon cycle on Earth.}
\label{fig:earth}
\end{figure}
On Earth, the atmospheric CO$_2$ and the biospheric CH$_2$O contain
comparable amounts of carbon. This is so because the amount of carbon in the atmospheric CO$_2$
is 1.66 kgC/m$^2$, while the world average biospheric carbon is 1.08
kgC/m$^2$ \citep{Bar-OnEtAl2018}\footnote{550 billion tonnes of
carbon~\citep[Table 1]{Bar-OnEtAl2018} is 1.08 kgC/m$^2$.}.
Because the atmospheric carbon buffer is large, on Earth the atmospheric CO$_2$ level is not sensitive to
fluctuations in the primary production of the biosphere.
The Earth's atmosphere is massive (10 tonnes per square metre),
while most of Earth's surface area is open ocean, desert or glacier
so that the globally averaged biomass
areal density is only moderate. For example in
average African tropical rainforest, the
carbon stock is 18.3 kgC/m$^2$ i.e.~183 Mg/ha \citep[Table 2]{SullivanEtAl2017},
which is as much as 17 times larger than the global average.
In a space settlement, the atmosphere mass is likely to be much less
than 10 tonnes/m$^2$. In O'Neill's original large habitat concepts
\citep{ONeill1974,ONeill1977}, the atmosphere had several kilometres
depth. However, a massive atmosphere includes a lot of
nitrogen. Nitrogen is not too abundant on asteroids, and would only be
widely available in the outer solar system. One way to avoid the
nitrogen supply problem would be to use a reduced pressure pure oxygen
atmosphere, but then the risk of fire would be increased since the
flame is not cooled by inert gas. Also birds and insects (needed for
pollination) would have difficulty in flying in a pure oxygen
atmosphere, because its mass density would be several times less than
on Earth. Hence it is likely that most settlements would prefer to use
a shallower N$_2$/O$_2$ atmosphere of e.g.~$\sim 50$ m depth
\citep{Janhunen2018}. A 50 m height allows forests with maximum tree
height of $\sim 30$ m plus some room for horizontal winds to mix gases
above the treetops. The nitrogen (47 kg/m$^2$) can be obtained
from the asteroids, as a byproduct of the mining that produces
the combined structures and radiation shielding of the settlement ($10^4$ kg/m$^2$).
Carbon dioxide is necessary for plants to grow. To maintain good
growth, the concentration should be at least $\sim 300$ ppmv (parts
per million by volume). The pre-industrial level on Earth
was 280 ppmv, which, as we know, already allowed plants to grow reasonably well. On the
other hand, for human safety the amount should not exceed $\sim 2000$
ppmv. The U.S.~occupational safety limit for a full working day is
5000 ppmv. The atmospheric concentration must be clearly less, however,
since local concentration near sources is always higher than the
atmospheric one. An example of a local source is indoor air, where people continuously produce CO$_2$ by breathing.
A shallow atmosphere is unable to absorb fluctuations in the biomass
carbon pool while keeping the CO$_2$ level within safe bounds. The
timescales can be rather fast. A tropical rainforest can bind 2.0
kgC/m$^2$/year \citep{wikipediaBiomass,RicklefsAndMiller2000}, so in a
shallow 50 m atmosphere, maximal plant growth could reduce the
concentration of CO$_2$ by 1000 ppmv in as short a time as 4.5 days. In
temperate forest the rate of biomass production is somewhat less (1.25
kgC/m$^2$/year) and in cultivated areas even less (0.65
kgC/m$^2$/year)\citep{wikipediaBiomass,RicklefsAndMiller2000}, but the
timescales are still only weeks. Hence the atmospheric CO$_2$ must be
controlled by technical means, which is the topic of this paper.
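These timescales follow from simple column arithmetic. The sketch below (our own; it assumes sea-level air density 1.2 kg/m$^3$ and a mean molar mass of 29 g/mol) reproduces the figures quoted above:

```python
# Time for plant growth to draw down 1000 ppmv of CO2 in a 50 m atmosphere.
RHO_AIR = 1.2    # kg/m^3, sea-level air density (assumed)
H = 50.0         # atmosphere depth, m
M_AIR = 0.029    # kg/mol, mean molar mass of air
M_C = 0.012      # kg/mol, molar mass of carbon

n_air = RHO_AIR * H / M_AIR        # mol of air in a 1 m^2 column (~2070 mol)
ppmv = 1000e-6
carbon = n_air * ppmv * M_C        # kgC per m^2 held in 1000 ppmv of CO2

for name, rate in [("rainforest", 2.0), ("temperate forest", 1.25),
                   ("cropland", 0.65)]:  # biomass binding rates, kgC/m^2/year
    days = carbon / (rate / 365.0)
    print(f"{name}: {days:.1f} days")
```

The rainforest case gives about 4.5 days, matching the estimate in the text; temperate forest and cropland stretch the timescale to roughly one and two weeks, respectively.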
\rfoot{\textit{NSS Space Settlement Journal}}
\pagestyle{fancy}
\section{Feasibility of a closed ecosystem}
\label{sect:feasibility}
There are many examples of nearly semi-closed small ecosystems that
interact with the rest of Earth's biosphere mainly via air only:
a potted flower, a vivarium, a fenced garden, a small island, etc. To
turn a semi-closed system into a fully closed one, one only needs to
worry about a few gases. This
is an engineering task, where the complexity of biology has been factored out. More specifically, there are five
parameters to consider:
\begin{enumerate}
\item O$_2$ partial pressure. Oxygen is needed for humans and animals to breathe, and the partial pressure should be about 0.21 bar.
\item N$_2$ partial pressure. Nitrogen is needed for fire safety and for birds and insects to fly, and the partial pressure should be about 0.79 bar.\footnote{We do not consider argon and other noble gases because they are even less abundant on asteroids than N$_2$. Also, at high concentrations some noble gases have narcotic effects.}
\item CO$_2$ concentration. Carbon dioxide is needed by plants to
grow, but too high a value is unsafe to people. The allowed range is 300--2000 ppmv.
\item CH$_4$ concentration. Methane is not needed so the lower limit is zero, but if generated by the biosphere, it is tolerable up to 30 mbar, which is well below the ignition limit of 44 mbar. Methane's only health effect is oxygen displacement, which is however negligible at 0.03 bar.
\item Other gases should remain at low concentration.
\end{enumerate}
Considering oxygen, a biosphere does not fix it from the
atmosphere. The oxygen atoms that biomass CH$_2$O contains originate
from the water that enters photosynthesis. When organisms
do metabolism and breathe (Eq.~\ref{eq:burning}), they transform O$_2$ molecules into
CO$_2$ molecules, but the process involves no net transfer of O atoms
from the atmosphere into the body. Hence one does not need to do
anything special to maintain the right O$_2$ partial
pressure.
Considering N$_2$, a biosphere fixes some of it since nitrogen is a
key nutrient, present in proteins and DNA. The C:N ratio of cropland
soil is 13.2 and for other biomes it varies between 10.1 and
30 \citep[Table 1]{WangEtAl2010}. For leaves, wood and roots the C:N
ratio is higher \citep{WangEtAl2010}. To get an upper limit, the
carbon stock of average African rainforest is 18.3 kgC/m$^2$
\citep{SullivanEtAl2017}. With the minimal soil C:N ratio across
biomes of 10.1, this corresponds to 1.81 kgN/m$^2$ of fixed
nitrogen. But the mass of nitrogen in a 50 m high atmosphere is 46
kgN/m$^2$, so clearly the biosphere can assimilate only a small
fraction of atmospheric N$_2$. Hence one does not need to do anything
special with N$_2$, either. Its partial pressure will remain
sufficiently close to the initial value. Circulation of
nitrogen from the point of view of nutrient supply is a related
topic \citep{JewellAndValentine2010}, which is however outside
the scope of this paper.
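The nitrogen budget above is simple arithmetic, reproduced in the sketch below. The air density and N$_2$ mass fraction are standard Earth-air values assumed here, not figures from the text:

```python
# Nitrogen budget check using the figures quoted above. Assumed (not from
# the text): sea-level air density ~1.225 kg/m^3 and an N2 mass fraction
# of ~0.755 for Earth-like air at 1 bar.

carbon_stock = 18.3                   # kgC/m^2, average African rainforest
cn_ratio = 10.1                       # minimal soil C:N ratio across biomes
fixed_n = carbon_stock / cn_ratio     # kgN/m^2 fixed in biomass and soil

atm_height = 50.0                     # m
air_density = 1.225                   # kg/m^3 (assumption)
n2_mass_fraction = 0.755              # mass fraction of N2 in air (assumption)
atm_n = atm_height * air_density * n2_mass_fraction  # kgN/m^2 in the air column

print(f"fixed nitrogen: {fixed_n:.2f} kgN/m^2")   # ~1.81
print(f"atmospheric N2: {atm_n:.0f} kgN/m^2")     # ~46
print(f"fraction fixed: {fixed_n / atm_n:.1%}")   # a few per cent
```

The exact air density depends on temperature and pressure, but the conclusion, that the biosphere can fix only a few per cent of the atmospheric N$_2$, is insensitive to it.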
Thus, since N$_2$ and O$_2$ are not changed too much by the
biosphere, the task of maintaining a good atmosphere
is reduced to three issues:
\begin{enumerate}
\item Maintaining CO$_2$ within the 300--2000 ppmv bounds. This is treated
in the next section.
\item Ensuring that if net methane is emitted by the biosphere, its
concentration does not increase beyond $\sim 3$\,\% by volume.\footnote{On Earth the methane concentration is 1.8 ppmv, which is responsible for part of the terrestrial greenhouse effect. For an atmospheric height of 50 m, a similar greenhouse effect arises at 200 times higher concentration, i.e.~at 360 ppmv. Thus a 3\,\% (30,000 ppmv) methane concentration would cause a significant greenhouse effect for a 50 m atmosphere, which should be taken into account in the settlement's heat budget. Greenhouse effects are nonlinear so quantitative prediction would need modelling.}
\item Ensuring that the concentrations of other gases stay low. This may
possibly happen automatically, because plants are known to
remove impurities from air \citep{WolvertonEtAl1989}. We shall say a bit more on this in the Discussion section below.
\end{enumerate}
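The ``200 times higher concentration'' in the methane footnote follows from equating absorbing column densities. A sketch, in which the effective terrestrial column height of 10 km is an assumed round number, not a figure from the text:

```python
# Equal greenhouse-relevant methane columns: height * concentration is
# held constant. The 10 km effective terrestrial column height is an
# assumption used for illustration.

earth_ch4_ppmv = 1.8
earth_column_m = 10_000.0      # assumed effective height of Earth's atmosphere
habitat_column_m = 50.0

equivalent_ppmv = earth_ch4_ppmv * earth_column_m / habitat_column_m
print(equivalent_ppmv)         # 360.0 ppmv, i.e. 200x the terrestrial value
```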
\section{Biomass burning}
Above we described the carbon cycle problem of the orbital space
settlement. The problem is that the settlement's atmosphere is much
shallower than on Earth, and hence the atmospheric carbon buffer is
much smaller than the biospheric carbon stock. Fluctuations in the
amount of biospheric carbon can occur for many reasons, and the
fluctuations would cause the atmospheric CO$_2$ concentration to go off bounds.
A way to solve the problem is to store some biomass and to burn it
when the atmosphere needs more CO$_2$
(Fig.~\ref{fig:carbcycle}). Agricultural waste is a necessary
byproduct of food production. One stores the waste biomass in such a
way that it does not decompose and then burns it at a controlled
rate. Methods to store biomass include drying, freezing and
freeze-drying. Drying is feasible at least if the relative humidity
is not too high.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{carbcycle.pdf}
\caption{Carbon cycle in the settlement.}
\label{fig:carbcycle}
\end{figure*}
It is sufficient for only part of the biomass to go through the
storing and burning pathway. The higher the burned fraction is, the
larger is the CO$_2$ control authority of the scheme. The control
authority is sufficient if the total amount of carbon in the
settlement exceeds the maximum mass of carbon that can be fixed in
living organisms at any one time. When the atmospheric CO$_2$ drops
below a target value, one burns some stored biomass. If there is too
much CO$_2$ in the atmosphere, one pauses the burning for a while;
after some delay, plant growth will bring the concentration back down.
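The control loop described above is essentially on/off (bang-bang) regulation. A toy sketch, in which every rate and threshold is hypothetical, only the 300--2000 ppmv bounds come from the text:

```python
# Toy bang-bang CO2 controller (all rates and thresholds hypothetical).
# Plants draw CO2 down at a steady rate; a burning session adds it back
# whenever the concentration sits below the target.

LOW, HIGH = 300.0, 2000.0   # allowed ppmv range from the text
TARGET = 800.0              # burn trigger, ppmv (hypothetical)
uptake = 5.0                # ppmv/day removed by plant growth (hypothetical)
burn = 8.0                  # ppmv/day added by burning (hypothetical)

co2 = 1000.0
history = []
for day in range(365):
    if co2 < TARGET:
        co2 += burn         # burn some stored biomass
    co2 -= uptake           # photosynthetic drawdown
    history.append(co2)

print(min(history), max(history))  # settles into a band just below the target
assert all(LOW < c < HIGH for c in history)
```

The point of the sketch is that as long as the burn rate exceeds the uptake rate, the concentration stays pinned near the target without any fine-tuned modelling of the biosphere.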
Burning consumes oxygen, but the same amount of oxygen is liberated
into the atmosphere when the CO$_2$ is used by photosynthesis
(Eqs.~\ref{eq:photosynthesis} and \ref{eq:burning}). Thus the O$_2$
concentration stays constant, apart from an insignificant
part that exists temporarily as CO$_2$. This is especially advantageous in the
build-up phase of the biosphere. In the build-up phase, one needs to
add carbon constantly to the atmosphere, as trees and other plants are
growing. Depending on the type of ecosystem we are building,
the growth phase might last up to tens or even hundreds of years as
trees grow and the soil builds up. It is not necessary to wait for
the growth phase to finish until people can move in, but while the
growth phase is ongoing, one must be
prepared to put in new carbon as needed to avoid CO$_2$ starvation. If this carbon were added in the form of new CO$_2$ from
an external tank, for example, the level of atmospheric oxygen would
build up. However, if one adds the carbon by burning biomass, sugar or
carbon, the O$_2$ level stays constant.\footnote{Burning hydrocarbons
($\sim$CH$_2$) in the buildup phase is not recommended, because then
net consumption of O$_2$ would take place as oxygen would be bound
with hydrogen to make water.} Carbon can be sourced from
carbonaceous asteroids. Possibly sugar [net formula \textit{n}(CH$_2$O)] can be synthesised from
C-type asteroids as well. Thus the biosphere can be bootstrapped
without massive importing of biomass from Earth.
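The oxygen bookkeeping above reduces to balancing the combustion reactions. A sketch (elementary stoichiometry; the reaction balancing is standard chemistry, not a figure from the text):

```python
# O2 consumed per carbon atom when burning each fuel, from balancing:
#   CH2O + O2     -> CO2 + H2O
#   C    + O2     -> CO2
#   CH2  + 1.5 O2 -> CO2 + H2O
# Photosynthesis later returns exactly 1 O2 per CO2 fixed, so only fuels
# using 1 O2 per carbon keep the habitat's O2 level constant.

o2_per_carbon = {"CH2O (sugar/biomass)": 1.0,
                 "C (carbon)": 1.0,
                 "CH2 (hydrocarbon)": 1.5}
o2_returned = 1.0   # mol O2 released per mol CO2 fixed by photosynthesis

for fuel, used in o2_per_carbon.items():
    print(f"{fuel}: net O2 change per carbon = {o2_returned - used:+.1f}")
# CH2O and C close the loop; hydrocarbons lose 0.5 O2 per carbon to water,
# which is why burning them in the buildup phase is not recommended.
```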
When burning biomass, the rate must be controllable and fire safety
must be maintained. One also wants to minimize smoke production (particulate
emission), because otherwise the settlement's sunlight-passing windows
would need frequent washing and because we want to avoid
atmospheric pollution \citep{SoilleuxAndGunn2018}. One way to facilitate clean burning is to
mechanically process the biomass (or part of it which is used in the
ignition phase) into some standardised form such as pellets
\citep{ThomsonAndLiddell2015} or wood chips. It is also possible to use a
bioreactor to turn the biomass into biogas (methane) which burns
without smoke. To further reduce smoke, one might add an electrostatic smoke precipitator in
the smokestack.
A combination of approaches is also possible. One can
ignite the flame using easy fuel and then continue with more
unprocessed material. The burning could be continuous, but in a
50 m high atmosphere, a sufficiently steady CO$_2$ level is achieved
with a daily burning session.
Atmospheric pollution should be avoided, so smoke production
should be minimised. However,
plants and soil are known to clean up the atmosphere rather well
\citep{WolvertonEtAl1989}. Hopefully, if
the above measures to promote clean burning are used, the plants can
accomplish the rest so that the atmosphere remains clean. To investigate the question
experimentally, one could burn
biomass inside a greenhouse by different methods, while using
standard air quality monitoring equipment for measuring the atmosphere.
In a rainforest, the maximum carbon fixation rate is 2
kgC/m$^2$/year and in a cultivated area it is 0.65 kgC/m$^2$/year (see
last paragraph of Introduction). If the average is $\sim 1$
kgC/m$^2$/year and if 50\,\% of it is burned while the remaining
part is decomposed naturally or eaten as crop, then the burned
amount is 0.5 kgC/m$^2$/year, which corresponds to 34 kg of dry
biomass per hectare per day. When wood is burned, the mass fraction
of ash varies between 0.43 and 1.8 per cent \citep[Table
1]{MisraEtAl1993}, so that the ash produced is
a few hundred grams per day per hectare. The ash must be
distributed evenly back into the environment. The amount of ash is
modest enough that the settlers could even do the spreading manually
if they wish. The heat produced by the
burning is of the order of 0.8 W/m$^2$ as a temporal average, which
is two orders of magnitude less than the heat dissipation of sunlight, or
artificial light if that is employed.
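The burn-rate arithmetic above can be reproduced in a few lines. The carbon mass fraction of dry biomass (taken here as $12/30$, the value for CH$_2$O) and the $\sim$20 MJ/kg heating value are assumptions consistent with the quoted figures, not numbers from the text:

```python
# Reproducing the burn-rate estimate. Assumed (not from the text): carbon
# mass fraction of dry biomass = 12/30 (the CH2O value) and a heating
# value of 20 MJ/kg of dry biomass.

burned_c = 0.5                  # kgC/m^2/year: 50% of ~1 kgC/m^2/year fixation
c_fraction = 12.0 / 30.0        # carbon mass fraction of CH2O (assumption)
biomass = burned_c / c_fraction # kg dry biomass per m^2 per year

per_ha_day = biomass * 10_000 / 365
print(f"burned biomass: {per_ha_day:.0f} kg/ha/day")         # ~34

ash_fraction = (0.0043, 0.018)  # ash mass fraction range, Misra et al. (1993)
ash = tuple(per_ha_day * a for a in ash_fraction)
print(f"ash: {1000*ash[0]:.0f}-{1000*ash[1]:.0f} g/ha/day")  # a few hundred grams

heating_value = 20e6            # J/kg dry biomass (assumption)
heat = biomass * heating_value / (365 * 24 * 3600)
print(f"waste heat: {heat:.1f} W/m^2")                       # ~0.8
```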
In reality, a smaller burning rate than
this calculation would probably suffice. It is only necessary to
burn enough biomass to maintain sufficient control authority of the
CO$_2$ level. Burning as much as 50\,\% of the growth is likely to
be overkill, but we assume it to arrive at a conservative estimate.
Animal and human wastes are not burned, but composted to make
leaf mold which is spread onto the fields. Our recommendation is to
primarily burn agricultural plant waste which is poor in non-CHO
elements, comprising substances such as cellulose, lignin and
starch. In this way we avoid unnecessarily releasing fixed nitrogen
and other valuable nutrients into the atmosphere, where they would
also be pollutants.
\section{Backup techniques}
As was pointed out above, typically the biosphere is not able to fix
so much oxygen or nitrogen that it would change the atmospheric
concentrations of these gases too much. However, to facilitate dealing
with accident scenarios like air leakages or atmospheric poisonings,
having compressed or liquefied O$_2$ and N$_2$ available could be
desirable.\footnote{In addition, one probably wants to divide the
settlement into separately pressurisable sectors
\citep{Janhunen2018} so that people can be evacuated from a sector
that suffered an accident.} If so, it may make sense to also have
a mechanism available for moving O$_2$ and N$_2$ selectively from the
habitat into the tanks by e.g.~cryogenic distillation of air \citep{SoilleuxAndGunn2018}. If such
a process is implemented, then CO$_2$ is also separable. For managing
CO$_2$, such a process would be energetically inefficient:
e.g.~to reduce the CO$_2$ concentration to one half, one has to
process 50\,\% of the air by liquefaction, separating out the CO$_2$
and returning the O$_2$ and N$_2$ back into the settlement. However,
if energy is available, energy efficiency is not a requirement for
backup strategies. Chemical scrubbing of CO$_2$ into amines or
hydroxides is another possible backup strategy for emergency removal
of CO$_2$. Table \ref{tab:alternatives} lists these
alternatives and their potential issues.
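The inefficiency argument can be made explicit: if all CO$_2$ is stripped from a processed air fraction $f$ and the O$_2$/N$_2$ returned, a fraction $1-f$ of the CO$_2$ remains. A sketch, idealised in assuming perfect CO$_2$ removal from the processed stream:

```python
# Idealised cryo-separation: strip all CO2 from a fraction of the air and
# return the rest. Halving the concentration requires processing half of
# the atmosphere, since CO2 is only ~0.1% of the processed mass.

def co2_after_processing(c0_ppmv, fraction_processed):
    """Remaining CO2 concentration (ppmv), assuming perfect removal from
    the processed fraction and negligible change in total air mass."""
    return c0_ppmv * (1.0 - fraction_processed)

print(co2_after_processing(2000, 0.5))   # 1000.0: 50% of the air for a 2x cut
```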
\begin{table}[htb]
\caption{Some alternatives of habitat CO$_2$ control.}
\label{tab:alternatives}
\begin{tabular}{|l|l|}
\hline
\textbf{Method} & \textbf{Potential issues} \\
\hline
Biomass burning & --Smoke \\
& --Need to handle fire \\
\hline
Cryo-distillation & --Power-intensive \\
of air & --Reliability concern/moving parts \\
& --Mass overhead of CO$_2$ tanks \\
\hline
Scrubbing & --Reliability concern/moving parts\\
into amines & --Safety concern due to chemicals \\
or hydroxides & \\
\hline
\end{tabular}
\end{table}
\section{Discussion}
As described in Section \ref{sect:feasibility}, gardens,
vivariums and other widespread examples of semi-closed (i.e., only
gases exchanged) ecosystems show that closed biospheres are
feasible. The only issue is to maintain the right atmospheric
composition, but this is only a technical problem to which there are
many solutions. Biomass burning is one of them. The complexity of
biology cannot spoil the feasibility of closed biospheres. If it
could, it would already have been seen in gardens and vivariums. The
complexity of biology is factored out of the feasibility equation.
The biomass burning method works, as such, only in a tropical climate
with no dark season. During a dark season, photosynthesis stops and
the CO$_2$ level would probably build up too high in the
atmosphere. Therefore, if seasons are wanted, one has to use sectoring
such as discussed in \citet{Janhunen2018}. Different sectors must
then be phased in different seasons and air must be exchanged between
sectors.
Biomass burning seems to be a straightforward, scalable, low-tech
and reliable solution. A possible drawback is the production of smoke.
As on Earth, plants and soil are absorbers of air pollution, but production
of smoke should nevertheless be minimised to prevent health issues.
In addition, smoke in a settlement environment is more harmful than on Earth,
because the settlement has windows through which sunlight enters, or
if it is artificially lighted, the lamps have cover glasses.
The production of smoke can be minimised by technical means such as
igniting the fire by a biogas flame or by mechanically making the
biomass into pellets or other granular form.
Biomass burning involves fire, and fire is in principle a risk because
conflagration in a space settlement would be very dangerous.
Concerning fire risk in general, it is not feasible to eliminate
it entirely by removing all possible ignition sources, e.g.~because
electric equipment is necessary and malfunctioning electric
equipment is a potential ignition source. The risk of wildfire can be
lowered by having frequent artificial rain so that the environment is
fresh and green. Lush nature also boosts agricultural output and is
good for aesthetic reasons. However, not everything can be humid since
the stored biomass must be dry in order to burn cleanly. Thus
the relative humidity should be less than 100\,\%, which is
also convenient for people. To reduce the fire risk further, an easy
way is to store the dry biomass far from the locations where it is
burned. It must be possible to turn on artificial rain or a sprinkler
system quickly in case a fire breaks out.
Also other approaches for reducing the fire risk are possible. For
example, one can freeze-dry the biomass and store it in a refrigerated
space. Storage under nitrogen-enriched atmosphere is another
possibility, which eliminates the fire risk during
storage. Nitrogen-enriched gas can be made e.g.~by filtering air
through certain polymeric membranes.
The methods discussed in this paper do not involve moving materials
through airlocks. Thus there is no issue of losing atmospheric gases
into space.
After O$_2$, N$_2$ and CO$_2$ are controlled, the remaining issue is
how to keep the level of other volatile compounds low. Plants remove
harmful impurities \citep{WolvertonEtAl1989}, but they also produce some volatile organic
compounds (VOCs) of their own, such as isoprene and terpenes. This
smell of plants can be experienced e.g.~in greenhouses and it is
generally considered pleasant. However, too much of a good thing is potentially
a bad thing, so let us briefly discuss loss mechanisms of VOCs. It is
thought that the hydroxyl radical OH is an important ``detergent'' of
the troposphere that oxidises VOCs \citep{LelieveldEtAl2004}. On
Earth, the primary formation of OH is by solar UV
and is highest in the tropics where the solar zenith angle is
smallest, the stratospheric ozone layer is thinnest and the humidity
is highest \citep{LelieveldEtAl2004}. Thus, in the settlement it might
be a good idea not to filter out the solar UV entirely, but let a
small part of it enter so that the UV radiation level mimics the
conditions in Earth's troposphere, thus maintaining some OH
to remove VOCs and also methane by oxidation.
One of the referees pointed out that the carbon
stock of soil might potentially grow in time due to incomplete
decomposing. While certain biomes like some wet peatlands
exhibit slow continuous carbon accumulation, typical biomes such as forests
have moderate carbon stocks \citep{WangEtAl2010} that have
presumably not grown appreciably even over millions of
years. Earth's significant fossil coal deposits are thought to
have been accumulated before lignin-degrading organisms developed
around the end of the Carboniferous period
\citep{FloudasEtAl2012}. On modern Earth, termites are good
lignin decomposers \citep{ButlerAndBuckerfield1979} so their presence in the habitat ecosystem
could be beneficial for efficient carbon circulation.
\section{Summary and conclusions}
Controlling a habitat's carbon dioxide level is a nontrivial problem
because the atmospheric volume per biosphere area is typically much
smaller than on Earth. The problem is important because too low CO$_2$
($\lessapprox 300$ ppmv) slows down plant growth and thus food
production while too high concentration ($\gtrapprox 2000$ ppmv) begins
to cause health problems for people.
The problem can be solved by biomass burning. In particular,
agricultural waste is a necessary byproduct of food production. One
can dry and store this biomass and burn some of it when the CO$_2$
level in the settlement's atmosphere drops too low. The method is
straightforward, robust and low-tech. It ensures large control
authority of the CO$_2$ while keeping the O$_2$ partial pressure
unchanged. The method scales to habitats of all sizes.
In the initial growth phase of the biosphere, one can obtain the
CO$_2$ by burning sugar or carbon. They can be sourced from
carbonaceous asteroid materials so that bootstrapping the biosphere
does not require lifting large masses from Earth.
Closed ecosystems in habitats are feasible. We know this because there
are many examples of semi-closed ecosystems such as gardens --
and because it has been done e.g.~in Biosphere-II and BIOS-1, 2 and 3 \citep{SalisburyEtAl1997}.
Maintaining the atmosphere is an engineering problem that can
be solved. For gases other than CO$_2$, the problem is in fact
solved automatically. For the control of CO$_2$, the biomass burning
method seems simple and effective.
\section{Acknowledgement}
The results presented have been achieved under the framework of the
Finnish Centre of Excellence in Research of Sustainable Space (Academy
of Finland grant number 312356). I am grateful to journalist Hanna
Nikkanen for providing her compilation of papers on the topic.
A smooth metric measure space is a triple $(M, g, e^{-f}dvol)$, where $M$ is a smooth manifold; $g$ is the
Riemannian metric
on $M$; $f$ is a smooth function and $dvol$ is the volume form induced by $g$. This object has been studied
extensively
in geometric analysis in recent years, e.g., \cite{[P1]}\cite{[Lo]}\cite{[WW]}\cite{[MW]}\cite{[MW1]}\cite{[MW2]}.
Perelman \cite{[P1]} introduces
a functional which involves an integral of the scalar curvature with respect to a weighted measure. The Ricci flow is thus
a gradient
flow of such a functional. Metric measure spaces also arise in smooth collapsed Gromov-Hausdorff limits. In the
physics
literature, $f$ is referred to as the dilation field. On the smooth metric measure space, there is an important
curvature quantity
called the Bakry-Emery Ricci curvature, which is defined in \cite{[BE]} by
$$Ric_f = Ric + \nabla^2 f.$$
One observes that $Ric_f = \lambda g$ for some constant $\lambda$ is exactly the gradient Ricci soliton equation,
which
plays an essential role in the analysis of the singularities of the Ricci flow.
A lower bound for Bakry-Emery curvature is a natural assumption to make and it has significant geometric
consequences. More generally, $Ric_f$ has a natural extension to metric measure spaces, see \cite{[LV]}\cite{[S1]}\cite{[S2]}.
Recently, in \cite{[WW]}, G. F. Wei and W. Wylie proved the weighted volume comparison theorems; O. Munteanu and J. Wang
established
the gradient estimate for positive weighted harmonic functions. It should be noted that a while back, Lichnerowicz
\cite{[Lc]}
has generalized the classical Cheeger-Gromoll splitting theorem \cite{[CG]} to metric measure spaces with $Ric_f \geq 0$ and
$f$ bounded (see \cite{[FLZ]} for further generalizations).
In Riemannian geometry, minimal surfaces arise naturally in the variation of the area functional.
A minimal surface is called stable if the second variation of the area is nonnegative for any compactly
supported variations.
Minimal surfaces have their own beauties, e.g, Bernstein's theorem.
Moreover, they have important applications to the geometry and topology of manifolds.
For example, more than 60 years ago, the Synge theorem and the Bonnet-Myers theorem were proved by the variation of geodesics
(one-dimensional minimal surfaces).
More recently, by using minimal surfaces, Schoen and Yau proved the famous positive mass conjecture \cite{[SY2]}\cite{[SY3]}.
Meeks and Yau \cite{[MY1]}\cite{[MY2]} proved the loop theorem, the sphere theorem
and Dehn lemma together with the equivariant forms. In \cite{[SY1]}, Schoen and Yau proved that a complete
noncompact 3-manifold with positive Ricci curvature is diffeomorphic to $\mathbb{R}^3$.
Anderson \cite{[An1]} studied the restriction of the first betti number for manifolds with nonnegative Ricci curvature;
the author \cite{[L]} used the minimal surface theory to classify complete three dimensional
manifolds with nonnegative Ricci curvature.
In the study of smooth metric measure spaces, it is natural to add a weight $e^{-f}$ on the area functional
of the surface.
The critical points of the weighted area functional are called weighted minimal surfaces. A weighted minimal
surface is called stable if the second
variation of the weighted area is nonnegative.
Very recently, X. Cheng, T. Mejia and D. T. Zhou \cite{[CZ]} studied the stability condition and compactness of
$f$-minimal surfaces. They \cite{[CZ1]} also gave eigenvalue estimates for certain closed $f$-minimal surfaces.
In this paper, we will investigate some geometric and topological results for smooth metric measure spaces via
analyzing stable weighted minimal surfaces. We shall assume that the Bakry-Emery Ricci curvature is nonnegative.
Below is the organization of this paper.
In section 2, we will derive the second variation formula for the weighted area(see also \cite{[CZ]} and \cite{[Bay]} for the derivation).
We give an application to compact stable $f$-minimal surfaces in section 3. This generalizes some previous works of Heintze and Karcher \cite{[HK]}.
An example is given in section 4 to show that a result of Schoen and Fischer-Colbrie \cite{[FS]} cannot
be extended to the case when Bakry-Emery Ricci curvature is nonnegative.
In section 5 we give an application of the stability inequality to noncompact case.
In section 6, we study the topology of complete 3-manifolds with nonnegative Bakry-Emery Ricci curvature.
\section*{Acknowledgements}
The author would like to express his deep gratitude to his advisor, Professor Jiaping Wang, for his interest in this problem and useful suggestions.
He also thanks Professor Frank Morgan for pointing out more references.
\section{\bf{Second variation formula}}
\begin{definition}
Let $(M^m, g, e^{-f}dv)$ be a complete smooth metric measure space and $\Sigma$ be a complete submanifold in $M$. We say $\Sigma$ is $f$-minimal in $M$, if the first variation of the $e^{-f}$ weighted area functional vanishes at $\Sigma$. $\Sigma$ is called stable $f$-minimal if the second variation of the $e^{-f}$ weighted area functional is nonnegative along any compactly supported variational normal vector field.
\end{definition}
\begin{prop}
Let $(M^m, g, e^{-f}dv)$ be a complete smooth metric measure space and $\Sigma^n$ be a complete $f$-minimal submanifold in $M$.
Let $e_i(0\leq i\leq n)$ be an orthonormal frame in an open set of $\Sigma$. Define $\nabla^T$ and $\nabla^\perp$ to be the connections
projected to the tangential and normal spaces on $\Sigma$.
Then $$H = \nabla^{\perp}f$$ where $H = -\sum\limits_{i}\nabla_{e_i}^\perp e_i$ is the mean curvature vector. If $\Sigma_t$($-\epsilon < t < \epsilon$) is a smooth family of the submanifolds such that $\Sigma_0 = \Sigma$ and the variational normal vector field $\nu$ is compactly supported on $\Sigma_t$, then at $t=0$,
$$\frac{d^2\int_{\Sigma_t}e^{-f}}{dt^2} = \int_\Sigma e^{-f}(-\sum\limits_{i=1}^nR_{i\nu\nu i}-\frac{1}{2}\Delta_\Sigma(|\nu|^2)+|\nabla_\Sigma \nu|^2-2|A^\nu|^2
-f_{\nu\nu}+\frac{1}{2}\langle\nabla^Tf, \nabla^T(|\nu|^2)\rangle)$$ where $A^\nu_{ij} = -\langle \nabla_{e_i}e_j, \nu\rangle$.
\end{prop}
\begin{proof}
For any point $p \in \Sigma_0$, consider a local frame $e_i(1\leq i \leq n)$ near $p$ such that they are tangential to $\Sigma_t$ and $[e_i, \nu] = 0$ for all small $t$. We can also assume that at $p$, $e_i$ is an orthonormal frame and $\nabla^{T}_{e_i}e_j = 0$. Let $g_{ij} = \langle e_i, e_j\rangle$ and $g^{ij}$ be the inverse matrix of $g_{ij}$. We have
$$\frac{d\int_{\Sigma_t}e^{-f}}{dt} = \int_{\Sigma_t}e^{-f}\langle H - \nabla^{\perp}f, \nu\rangle$$
where $$H = -(\nabla_{e_i}e_j)^{\perp}g^{ij}.$$ Thus if $\Sigma_0$ is $e^{-f}$ minimal, $$H = \nabla^{\perp}f.$$
At $p$, we have
\begin{equation}
\begin{aligned}
\frac{d\langle H, \nu\rangle}{dt}& = -(\langle\nabla_{\nu}\nabla_{e_i}e_j, \nu\rangle g^{ij} + \langle\nabla_{e_i}e_j, \nabla_\nu\nu\rangle g^{ij} + \langle\nabla_{e_i}e_j, \nu\rangle\nu(g^{ij}))\\&= -(\sum\limits_{i=1}^nR_{\nu ii\nu}
+ \langle\nabla_{e_i}\nabla_\nu e_i, \nu\rangle -\langle H, \nabla_\nu\nu\rangle - \sum\limits_{i, j = 1}^n
\langle\nabla_{e_i}e_j, \nu\rangle(\langle \nabla_{\nu}e_i, e_j\rangle + \langle \nabla_{\nu}e_j, e_i\rangle))\\&=
-(\sum\limits_{i=1}^nR_{\nu ii\nu}+\frac{1}{2}\Delta_\Sigma(|\nu|^2)-\sum_{i=1}^{n}|\nabla_{e_i}\nu|^2+2\sum\limits_{i, j = 1}^n|\langle\nabla_{e_i}e_j, \nu\rangle|^2 -\langle H, \nabla_\nu\nu\rangle).
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\frac{d\langle \nabla^\perp f, \nu\rangle}{dt}& =\nu\nu (f)\\&=f_{\nu\nu} + \langle\nabla^T f, \nabla_\nu\nu\rangle + \langle\nabla^\perp f, \nabla_\nu\nu\rangle\\&=f_{\nu\nu}+\sum\limits_{i=1}^ne_i(f)\langle e_i, \nabla_\nu\nu\rangle +
\langle\nabla^\perp f, \nabla_\nu\nu\rangle\\&=f_{\nu\nu} - \frac{1}{2}\langle\nabla^T f, \nabla^T(|\nu|^2)\rangle+
\langle\nabla^\perp f, \nabla_\nu\nu\rangle.
\end{aligned}
\end{equation}
Since $\Sigma_0$ is $f$ minimal, by the two equalities above, we have
\begin{equation}
\begin{aligned}
\frac{d^2\int_{\Sigma_t}e^{-f}}{dt^2}& = \frac{d\int_{\Sigma_t}e^{-f}\langle H - \nabla^{\perp}f, \nu\rangle}{dt}
\\&=\int_\Sigma e^{-f}(-\sum\limits_{i=1}^nR_{i\nu\nu i}-\frac{1}{2}\Delta_\Sigma(|\nu|^2)+|\nabla_\Sigma\nu|^2-2|A^\nu|^2
-f_{\nu\nu}+\frac{1}{2}\langle\nabla^Tf, \nabla^T(|\nu|^2)\rangle).
\end{aligned}
\end{equation}
\end{proof}
\begin{cor}
Let $(M^m, g, e^{-f}dv)$ be a complete oriented Riemannian manifold and $\Sigma_t$ be a smooth family of oriented
hypersurfaces in $M$. Let $N$ be the unit normal vector field on $\Sigma_t$. Suppose the variational vector field
for $\Sigma_t$ is given by $\lambda N$ where $\lambda$ is smooth function with compact support on $\Sigma_t$.
If $\Sigma_0$ is $e^{-f}$ minimal, then the mean curvature of $\Sigma_0$ satisfies $$H = f_n$$ where $f_n$ is the normal derivative of $f$.
Moreover,
$$\frac{d^2\int_{\Sigma_t}e^{-f}}{dt^2}|_{t=0} =
\int_{\Sigma_0}(|\nabla \lambda|^2 - \lambda^2(Ric_f(n,n) + |A|^2))e^{-f}$$ where $Ric_f = Ric + \nabla^2 f$, $A$ is the second fundamental form.
Therefore, the stability inequality is $$\int_{\Sigma_0}(|\nabla \lambda|^2 - \lambda^2(Ric_f(n,n) + |A|^2))e^{-f}\geq 0$$
for any compactly supported function $\lambda$ on $\Sigma_0$.
\end{cor}
\begin{proof}
Since $\Sigma_0$ is weighted minimal, according to Proposition 1, $$H = \langle\nabla^\perp f, N\rangle = f_n.$$
Let $\nu = \lambda N$. For an orthonormal frame $e_i$ at a point on $\Sigma_0$,
\begin{equation}
\begin{aligned}
|\nabla_\Sigma\nu|^2 &= |\langle\nabla_{e_i}(\lambda N), \nabla_{e_i}(\lambda N)\rangle|^2
\\&=|\nabla\lambda|^2+\sum\limits_{i, j}|\langle\nabla_{e_i}(\lambda N), e_j\rangle|^2\\&=
|\nabla\lambda|^2 + \lambda^2|A|^2.
\end{aligned}
\end{equation}
Therefore
\begin{equation}
\begin{aligned}
\frac{d^2\int_{\Sigma_t}e^{-f}}{dt^2}& =\int_{\Sigma_0} e^{-f}(-\sum\limits_{i=1}^nR_{i\nu\nu i}-\frac{1}{2}\Delta_\Sigma(|\nu|^2)+|\nabla_\Sigma\nu|^2-2|A^\nu|^2
-f_{\nu\nu}+\frac{1}{2}\langle\nabla^Tf, \nabla^T(|\nu|^2)\rangle)\\&=\int_{\Sigma_0} e^{-f}
(-\lambda^2Ric_f(n, n)-\lambda\Delta\lambda-\lambda^2|A|^2+\langle\nabla f, \nabla\lambda\rangle\lambda)
\\&=\int_{\Sigma_0}(|\nabla \lambda|^2 - \lambda^2(Ric_f(n,n) + |A|^2))e^{-f}.
\end{aligned}
\end{equation}
In the last step, we have used the integration by parts.
\end{proof}
\section{\bf{An application to the compact case}}
In \cite{[S]}, Simons observed that there are no closed, stable minimal 2-sided hypersurfaces in a manifold with positive Ricci
curvature. Later Heintze and Karcher \cite{[HK]} proved that the exponential map of the normal bundle of a hypersurface
$\Sigma\subset M$ is area decreasing, if $\Sigma$ is stable, minimal and $M$ has nonnegative Ricci curvature.
Anderson extended this result; he also proved a version of the Cheeger-Gromoll splitting theorem in the compact
case, see \cite{[An2]}. More recently, F. Morgan \cite{[M]} obtained an upper bound on the weighted volume of one side of a hypersurface, which generalizes some works in \cite{[HK]}. See also Chapter 18 in \cite{[M1]} for more discussion.
In this section, we shall prove the following:
\begin{theorem}
{Let $(M^m, g, e^{-f}dv)$ be an oriented complete Riemannian manifold and $\Sigma$ be
a closed oriented stable $f$-minimal hypersurface in $M$.
If $Ric_f \geq 0$, then $\Sigma$ is totally geodesic and $Ric_f(n, n) = 0$. If $\Sigma$ is
weighted $f$-area-minimizing in its homology class, then $M^m$ is isometric to a quotient of
$\Sigma \times \mathbb{R}$. In this case, if $m = 3$, then topologically $\Sigma$ is either a sphere or a torus.
In the torus case, $M^3$ is flat.}
\end{theorem}
\begin{proof}
The first conclusion follows if we take $\lambda = 1$ in corollary $1$. Let $N$ be the unit normal
vector field on $\Sigma$. For $x$ close to $\Sigma$ in $M$, consider the oriended distance function
$d(x)=Sign(x)dist(x, \Sigma)$, where $Sign(x)$ is $1$ if $x$ is on one side of $\Sigma$; $Sign(x) = -1$ if $x$
is on the other side of $\Sigma$. Then $d(x)$ is smooth near $\Sigma$ and let $\Sigma_t$ be the level set
of $d(x)$. Then for $t$ small, $\Sigma_t$ is a smooth family of hypersurfaces on $M$ and we have
$$\frac{d(H-f_n)}{dt} = -Ric(n, n)-|A|^2-f_{nn} = -Ric_f(n, n)-|A|^2 \leq 0.$$ Note that $\Sigma_0$
is totally geodesic and $f_n = H = 0$ at $t=0$. Therefore $$H-f_n \leq 0$$ for
all $t$ and $$\frac{d\int_{\Sigma_t}e^{-f}}{dt} = \int_{\Sigma_t}(H-f_n)e^{-f} \leq 0.$$ Since $\Sigma_0$ is
weighted $f$-area-minimizing in its homology class, $\Sigma_t$ are all totally geodesic. By induction, one can easily
show that $M$ is isometric to the quotient of $\Sigma_0 \times \mathbb{R}$. Therefore
$$f_n = H = 0, f_{nn} = \frac{\partial f_n}{\partial t} =0, Ric_{nn} = 0$$ for all $t$.
Now consider the case when $m = 3$. Let $e_1, e_2$ be a local orthonormal frame on $\Sigma_0$. Let $S$ be
the scalar curvature on $M$; $S_f = S +\Delta f$; $K_\Sigma$ be the Gaussian curvature on $\Sigma$. Since
$\Sigma_0$ is totally geodesic,
$$2K_{\Sigma_0} = 2R_{1221} = S - 2Ric_{nn} = S_f - f_{11}- f_{22} = S_f - \Delta_{\Sigma_0}f.$$ In
the above equality, we have used the fact that $f_{nn} = 0$. Since $S_f\geq 0$,
the Gauss-Bonnet theorem says that $\Sigma_0$ is either a sphere or a torus. In the torus case, $S_f = 0$
everywhere, thus on $\Sigma$, $Ric + \nabla^2 f = 0$. So $\Sigma$ is a $2$-dimensional steady soliton.
Thus the Gaussian curvature on $\Sigma$ is nonnegative. This means that $\Sigma$ and $M$ are flat.
\end{proof}
\section{\bf{An example}}
In \cite{[FS]}, R. Schoen and D. Fischer-Colbrie proved the following theorem:
\begin{theorem}[R. Schoen and D. Fischer-Colbrie]
Let $M$ be a complete oriented 3-manifold with nonnegative scalar curvature, and let $\Sigma$ be an oriented complete stable minimal surface in $M$.
If $\Sigma$ is compact, then it is conformal to $\mathbb{S}^2$ or a torus $\mathbb{T}^2$; if $\Sigma$ is not compact, it is conformally covered by $\mathbb{C}$.
\end{theorem}
In view of Theorem 2, it is natural to ask whether its curvature condition can be replaced by nonnegative Bakry-Emery Ricci curvature when $\dim(M) = 3$. We will show that, at least locally,
even if the Bakry-Emery Ricci curvature is nonnegative, the stability of a weighted minimal surface $\Sigma$ does not provide any information on the conformal structure of $\Sigma$.
Let $M^3$ be an oriented manifold with nonnegative Bakry-Emery Ricci curvature and $\Sigma$ be an oriented stable $f$-minimal surface in $M$.
In this section we will give an explicit example so that $\Sigma$ is hyperbolic.
\bigskip
Let $(\Sigma, ds_{\Sigma}^2)$ be a complete surface with curvature $-1$. Let $M = (-\frac{1}{2}, \frac{1}{2}) \times \Sigma$ and define a metric on $M$ by $$ds^2 = dt^2 + g(t)ds_{\Sigma}^2.$$ Note that the metric on $M$ is not complete. Let $p\in M$ and consider a product chart $U\ni p$ such that $e_1=\frac{\partial}{\partial x_1}, e_2=\frac{\partial}{\partial x_2}$ are tangential to $\Sigma_t$ and $\frac{\partial}{\partial t} = e_3$ on $U$.
We may assume that $e_1, e_2, e_3$ is an orthogonal frame in $U$ and $ds_\Sigma^2(e_1, e_1) = ds_\Sigma^2(e_2, e_2)= 1$. Then $$\langle\nabla_{e_1}e_3, e_1\rangle=\langle\nabla_{e_2}e_3, e_2\rangle=\frac{1}{2}g'(t),$$ $$\langle\nabla_{e_1}e_3, e_2\rangle = \langle\nabla_{e_2}e_3, e_1\rangle = 0.$$ Therefore, $\nabla^{\Sigma_t}A = 0$ for all $t$.
By the Gauss equation, $$K_{\Sigma_t} - \frac{R_{1221}}{g^2} = \frac{A_{11}A_{22}}{g^2} = \frac{\langle\nabla_{e_1}e_3, e_1\rangle\langle\nabla_{e_2}e_3, e_2\rangle}{g^2}.$$ Since the Gaussian curvature $K_{\Sigma_t} = -\frac{1}{g}$, $$R_{1221} = -g-\frac{1}{4}g'^2.$$
It is easy to see that $\nabla_{e_3}e_3 \equiv 0$, thus
\begin{equation}
\begin{aligned}
R_{1331} &= \langle\nabla_{e_1}\nabla_{e_3}e_3, e_1\rangle-\langle\nabla_{e_3}\nabla_{e_1}e_3,e_1\rangle\\&=-\langle\nabla_{e_3}\nabla_{e_1}e_3,e_1\rangle\\&=
-(e_3(\langle\nabla_{e_1}e_3, e_1\rangle)-|\nabla_{e_3}e_1|^2)\\&=
-\frac{1}{2}g''+\frac{1}{4}\frac{g'^2}{g}.
\end{aligned}
\end{equation}
From the same computation, we see that $R_{1332} = 0$.
By the Codazzi equation, $$R_{1223} = (\nabla^{\Sigma_t}_{e_1}A)(e_2, e_2) - (\nabla^{\Sigma_t}_{e_2}A)(e_1, e_2) = 0.$$ Therefore
$$Ric_{11} = \frac{R_{1221}}{g} + R_{1331} = -1-\frac{1}{2}g'' = Ric_{22},$$ $$Ric_{33} = -\frac{g''}{g}+\frac{1}{2}(\frac{g'}{g})^2,$$
$$Ric_{12} = Ric_{13} = Ric_{23} = 0.$$
Let $f =f(t)$ be a function of $M$, then $$f_{11} = - \langle\nabla f, \nabla_{e_1}e_1\rangle=\frac{g'f'}{2}=f_{22},$$ $$f_{12}=f_{13}=f_{23}=0, f_{33} = f''.$$
Therefore $$Ric_f(e_1, e_1) = -1-\frac{g''}{2}+\frac{f'g'}{2}, Ric_f(e_3, e_3) = \frac{-2g''g+g'^2+2g^2f''}{2g^2}$$
If $f=-2t^2, g = 1-2t^2$, then
one gets that $$Ric_f(e_2, e_2) = Ric_f(e_1, e_1) = 1+8t^2 \geq 0, Ric_f(e_3, e_3) = 4(\frac{1}{(1-2t^2)^2}-1)\geq 0.$$
Therefore, $M$ has nonnegative Bakry-Emery Ricci curvature.
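The closed-form components above can be checked symbolically. The following sketch (SymPy, not part of the paper) verifies the tangential and normal components of $Ric_f$ for $f = -2t^2$, $g = 1 - 2t^2$:

```python
# Symbolic check (SymPy) of the Bakry-Emery Ricci components of the
# warped metric dt^2 + g(t) ds_Sigma^2 for f = -2t^2, g = 1 - 2t^2,
# using the closed-form expressions derived above.
import sympy as sp

t = sp.symbols('t', real=True)
f = -2 * t**2
g = 1 - 2 * t**2

# tangential component: Ric_f(e_1, e_1) = -1 - g''/2 + f' g'/2
ric_f_11 = sp.simplify(-1 - sp.diff(g, t, 2) / 2
                       + sp.diff(f, t) * sp.diff(g, t) / 2)

# normal component: Ric_f(e_3, e_3) = (-2 g'' g + g'^2 + 2 g^2 f'') / (2 g^2)
ric_f_33 = sp.simplify((-2 * sp.diff(g, t, 2) * g + sp.diff(g, t)**2
                        + 2 * g**2 * sp.diff(f, t, 2)) / (2 * g**2))

assert sp.simplify(ric_f_11 - (1 + 8 * t**2)) == 0
assert sp.simplify(ric_f_33 - 4 * (1 / (1 - 2 * t**2)**2 - 1)) == 0
```

Both assertions pass, confirming $Ric_f(e_1,e_1) = 1 + 8t^2 \geq 0$ and $Ric_f(e_3,e_3) = 4\left(\frac{1}{(1-2t^2)^2}-1\right) \geq 0$ on $|t| < \frac{1}{2}$.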
Moreover, the second fundamental form and $Ric_f(e_3, e_3)$ vanish at $t = 0$. According to corollary 1, $\Sigma_0$ is a stable $f$-minimal surface in $M$. However, $\Sigma$ is hyperbolic.
\section{\bf{Applications to the noncompact case}}
Now consider the case when $\Sigma$ is noncompact. The following proposition follows from a simple cut-off argument:
\begin{prop}
{Let $(M^m, g, e^{-f}dv)$ be an oriented complete Riemannian manifold and $\Sigma$ be a complete noncompact oriented stable $f$-minimal hypersurface in $M$.
If $Ric_f \geq 0$ on $\Sigma$ and the weighted volume growth of $\Sigma$ with respect to its intrinsic distance to a point $p \in \Sigma$ satisfies $$V_{\Sigma, f}(B_p(r)) \leq Cr^2$$ for all large $r$, then $\Sigma$ is totally geodesic and $Ric_f(n, n) = 0$.}
\end{prop}
\begin{proof}
Let $r$ be a distance function to $p\in M$. Given any $a > 1$, consider the cut-off function
\begin{equation}
\lambda(r) = \left\{
\begin{array}{rl} 1 & 0 \leq r \leq a \\
\frac{2\log a - \log r}{\log a} & a< r < a^2 \\ 0 & r\geq a^2.
\end{array}\right.
\end{equation}
Define $V(r) = \int_{B_{\Sigma}(p,r)}e^{-f}$. Plugging the cut-off $\lambda$ into the stability inequality of corollary $1$, we find that
\begin{equation}
\begin{aligned}
&\int_{B_{\Sigma}(a)}(Ric_f(n,n)+|A|^2)|\lambda|^2e^{-f} \\&\leq \int_{B_{\Sigma}(a^2)}|\nabla\lambda|^2e^{-f}\\&=\int_{a}^{a^2}\frac{V'(r)}{r^2\log^2a}dr
\\&=\frac{V(r)}{r^2\log^2a}|_{r=a}^{r=a^2}-\int_{a}^{a^2}V(r)(\frac{1}{r^2\log^2a})'dr\\&
\leq \frac{C}{\log^2a}+C\frac{1}{\log^2a}\int_{a}^{a^2}\frac{dr}{r} \\&
=O(\frac{1}{\log a}).
\end{aligned}
\end{equation}
The proposition follows by taking $a\to \infty$.
\end{proof}
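The decay of the right-hand side can be made explicit: with the model growth $V(r) = Cr^2$ assumed in the proposition, the energy of the logarithmic cut-off evaluates exactly to $2C/\log a$. A small SymPy sketch (an illustration, not part of the proof):

```python
# Model computation (SymPy): with the quadratic growth V(r) = C r^2 assumed
# in the proposition, the cut-off energy on the right-hand side equals
# 2C / log(a), which tends to zero as a -> infinity.
import sympy as sp

r, a, C = sp.symbols('r a C', positive=True)
V = C * r**2                       # model weighted volume growth
rhs = sp.integrate(sp.diff(V, r) / (r**2 * sp.log(a)**2), (r, a, a**2))

# closed form 2C / log(a): compare numerically at a = 7, C = 1
assert abs(float((rhs - 2 * C / sp.log(a)).subs({a: 7, C: 1}))) < 1e-9
assert sp.limit(rhs.subs(C, 1), a, sp.oo) == 0
```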
Now recall the following lemma from \cite{[WW]}\cite{[MW]}:
\begin{lemma}
{Let $(M^m, e^{-f}dv)$ be a smooth metric measure space with $Ric_f \geq 0$. Then along any minimizing geodesic starting from $x\in B_p(R)$ we have
$$\frac{J_f(x, r_2, \xi)}{J_f(x, r_1, \xi)}\leq e^{4A(R)}(\frac{r_2}{r_1})^{m-1}$$
for $0 < r_1<r_2<R$. In particular, for $0 < r_1<r_2<R$, the weighted area of the geodesic spheres satisfies $$\frac{ A_f(\partial B_x(r_2))}{A_f(\partial B_x(r_1))} \leq e^{4A(R)}\frac{r_2^{m-1}}{r_1^{m-1}}.$$ Here $A(R) = \sup_{x \in B_x(3R)}|f|(x)$} and $J_f(x, r, \xi)=e^{-f}J(x, r, \xi)$ is the $e^{-f}$-weighted volume element in geodesic polar coordinates.
\end{lemma}
If $f$ is bounded, $Vol_f(B_x(r))$ has polynomial growth of order at most $m$.
\begin{prop}
{Let $(M^3, e^{-f}dv)$ be a smooth metric measure space with $Ric_f \geq 0$ and bounded $f$.
If $\Sigma$ is a complete weighted area-minimizing hypersurface which is the boundary of least weighted area in $M$, then $\Sigma$ is totally geodesic and $Ric_f(n, n) = 0$.}
\end{prop}
\begin{proof}
According to lemma 1, the weighted area of geodesic spheres has at most quadratic growth. Since $\Sigma$ is weighted area minimizing and is a
boundary of least weighted area in $M$, $vol_f(\Sigma\cap B_x(r)) \leq A_f(\partial B_x(r))\leq Cr^2$. Proposition 3 now follows from Proposition 2.
\end{proof}
\section{\bf{Application to complete 3-manifolds with nonnegative Bakry-Emery Ricci curvature}}
The classification of complete 3-manifolds with nonnegative Ricci curvature has been completed through the works of various authors.
By using the Ricci flow, Hamilton \cite{[H1]}\cite{[H2]} classified all compact 3-manifolds with nonnegative Ricci curvature.
He proved that the universal cover is either diffeomorphic to $\mathbb{S}^3$, $\mathbb{S}^2\times\mathbb{R}$ or $\mathbb{R}^3$.
In the latter two cases, the manifold splits.
In \cite{[SY1]}, Schoen-Yau proved that a complete noncompact 3-manifold with positive Ricci curvature is diffeomorphic to the Euclidean space.
Anderson-Rodriguez \cite{[AR]} and Shi \cite{[Sh]} classified complete noncompact 3-manifolds with nonnegative Ricci curvature by assuming an upper bound of the
sectional curvature. Very recently, the author \cite{[L]} classified all complete noncompact 3-manifolds with nonnegative Ricci curvature.
In view of the results above, it is natural to ask what happens to a 3-manifold when the Bakry-Emery Ricci curvature is nonnegative.
Below is a partial classification when $f$ is bounded.
\begin{theorem}
Let $(M^3,g, e^{-f}dv)$ be a complete 3-manifold with bounded $f$
and $Ric_f\geq 0$.
\begin{itemize}
\item If $M$ is noncompact, then either $M$ is contractible or through each point in $M$, there exists a
totally geodesic surface with $Ric_f(n, n) = 0$.
If in addition the rank of $Ric_f$ is at least $2$ everywhere,
then the universal cover of $M$ splits as a Riemannian product $\Sigma \times \mathbb{R}$.
In particular, if the Bakry-Emery Ricci curvature is positive, then $M$ is contractible.
\item If $M$ is compact, then either it is a quotient of $\mathbb{S}^3$ or the universal cover splits as a product $\Sigma\times \mathbb{R}$.
\end{itemize}
In each splitting case, $\Sigma$ is conformal to $\mathbb{S}^2$ or $\mathbb{C}$ and $f$ is constant along the $\mathbb{R}$ factor.\end{theorem}
\begin{proof}
First we consider the case when $M$ is noncompact.
The argument is similar to \cite{[SY1]}\cite{[L]}.
Assume first that $M$ is simply connected. If $\pi_2(M) \neq 0$, then according to Lemma 2 in \cite{[SY1]}, $M$ must have at least
two ends. From Lichnerowicz's extension of the Cheeger-Gromoll splitting theorem \cite{[Lc]}, the universal cover
splits. So we assume $\pi_2(M) = 0$. Therefore, the universal cover of $M$ is contractible. If $M$ is not simply
connected, Schoen and Yau \cite{[SY1]} proved that $\pi_1(M)$ must have no torsion elements. Thus, after
replacing $M$ by a suitable covering, we may assume that $\pi_1(M) = \mathbb{Z}$ and that $M$ is orientable.
Recall lemma $2.2$ in \cite{[An1]} by Anderson:
\begin{lemma}(Anderson)
{ Let $M$ be a complete Riemannian manifold with finitely generated homology $H_1(M ,\mathbb{Z})$. Then any
non-zero line $\mathbb{R}\cdot\alpha, \alpha\in H_1(M, \mathbb{Z})$ gives rise to a complete homologically area-minimizing
hypersurface $\Sigma_{\alpha}$, which is the boundary of least area in a cover $\mathbb{Z} \rightarrow \overline{M}\rightarrow M$. Moreover,
the volume growth of $\Sigma_{\alpha}$ satisfies $vol(\Sigma\cap B^{\overline M}(r)) \leq vol(\partial B^{\overline M}(r))$ and the intersection
number $I(\Sigma, \alpha) \neq 0$.
}
\end{lemma}
The proof of the above lemma in \cite{[An1]} can be carried out without any modification to weighted volume case.
Taking $\alpha$ to be the generator of $H_1(M, \mathbb{Z})$, we can find a complete oriented boundary $\Sigma$ of least weighted area in the universal cover
$\tilde{M}$. By proposition 3, $\Sigma$ is totally geodesic and $Ric_f(n, n) = 0$. If $Ric_f > 0$ on $M$, then this is a contradiction.
Now consider the case when $Ric_f \geq 0$.
We shall use a perturbation argument in \cite{[E]}\cite{[L]}.
For any point $p\in M$, consider a family of metrics $g(t) = e^{2t\lambda}g_0$, where $\lambda=\lambda(x)$ is a function on $M$.
Let $(U, g_{ij}, x_i)$ be a normal coordinate chart for $g_0$ at $p$ such that $\frac{\partial}{\partial x_i} = e_i$. We have $$f^t_{ij} = e_je_i(f) -(\nabla^t_{e_j}e_i) f,$$ $$\Gamma^s_{ij}(g(t)) = \frac{1}{2}g^{sl}(t)(\frac{\partial g_{il}(t)}{\partial x_j} + \frac{\partial g_{jl}(t)}{\partial x_i} - \frac{\partial g_{ij}(t)}{\partial x_l}).$$ Then, at $p$, $$\Gamma^s_{ij}(g(t)) = t(\lambda_j\delta_{is}+\lambda_i\delta_{js}-\lambda_s\delta_{ij}).$$
Therefore,
\begin{equation}
\begin{aligned}
f^t_{ij}-f_{ij} &= -\Gamma^s_{ij}(g(t))f_s \\&= t(f_s\lambda_s\delta_{ij}-\lambda_if_j-\lambda_jf_i)\\& \geq-3t|\nabla f||\nabla\lambda|.
\end{aligned}
\end{equation}
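The final inequality is simply the triangle and Cauchy--Schwarz bounds applied to each of the three terms (for $t > 0$); a quick numerical sanity check with random gradient vectors (illustrative only):

```python
# Numerical sanity check: for t > 0 and arbitrary gradients of f and lambda,
# t (f_s lambda_s delta_ij - lambda_i f_j - lambda_j f_i) >= -3 t |df| |dl|
# for every index pair (i, j), by Cauchy-Schwarz applied to each term.
import numpy as np

rng = np.random.default_rng(0)
t = 0.37                                          # any t > 0
for _ in range(1000):
    df, dl = rng.normal(size=3), rng.normal(size=3)
    bound = -3 * t * np.linalg.norm(df) * np.linalg.norm(dl)
    for i in range(3):
        for j in range(3):
            val = t * (np.dot(df, dl) * (i == j) - dl[i] * df[j] - dl[j] * df[i])
            assert val >= bound - 1e-12
```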
Let $m = dim(M) = 3$.
Recall that
$$Ric^t(v, v) = Ric(v, v) - t(m-2)\lambda_{vv} - t\Delta\lambda + t^2(m-2)(v(\lambda)^2-|\nabla\lambda|^2)$$ for $|v|_{g_0} = 1.$
Let $r(x) = dist(x, p)$ on $M$.
For a very small $R > 0$, consider the function
$\rho = R-r$ for $\frac{R}{2}< r < R$. Then we extend $\rho$ to be a positive smooth function for $0 \leq r < \frac{R}{2}$.
Define $\lambda = -\rho^5$.
Now $$\nabla^2 (\rho^5)(v, v) = 20\rho^3v(\rho)^2+ 5\rho^4\nabla^2(\rho)(v, v).$$ For $aR < r < R$, we have
\begin{equation}
\begin{aligned}
Ric^t(v, v)+f^t_{vv} &\geq Ric^0(v, v)+f^0_{vv} + 20t\rho^3+ 5t\rho^4(\Delta \rho+ \\&(m-2)\nabla^2 (\rho)(v, v)) -25(m-2)t^2\rho^8-15t\rho^4|\nabla f|.
\end{aligned}
\end{equation} Using the fact that the manifold is almost Euclidean near $p$, for small $R$, we have $$|\Delta \rho + (m-2)\nabla^2 \rho(v, v)| \leq \frac{9(2m-3)}{8(R-\rho)}.$$ Therefore, there exists small $R > 0$ such that for all small $t$, $Ric^t_f(v, v) > 0$ in an annulus $B_p(R)\backslash B_p(aR)$ for $a = \frac{7}{8}$. The metric remains the same outside $B_p(R)$. The deformation is $C^4$ continuous with respect to the metric and $C^{\infty}$ with respect to $t$.
Let $\gamma$ be a closed curve in $M$ which represents the generator of $\pi_1(M)$. We can apply the perturbation finitely many times such that $Ric_f > 0$ on $\gamma$ and $Ric_f$ is nonnegative on $M$ except a small neighborhood $U$ of $p$. Then for the perturbed metric $g_t$, we can apply lemma 2 to obtain a complete oriented boundary $\Sigma$ of least weighted area in the universal cover
$\tilde{M}$. Since $g_t$ is uniformly equivalent to $g_0$, we can show $\Sigma_t$ has quadratic weighted volume growth. Let $q \in \Sigma_t$, then for any $r > 0$,
\begin{equation}
\begin{aligned}
vol_{g(t)}(\Sigma_t\cap B_{g(t)}(q, \tilde{M})(r)) &\leq vol_{g(t)}(\Sigma_t\cap B_{g(0)}(q, \tilde{M})(Cr)) \\&\leq vol_{g(t)}(\partial B_{g(0)}(q, \tilde{M})(Cr))\\&\leq Cvol_{g(0)}(\partial B_{g(0)}(q, \tilde{M})(Cr))\\&\leq C_1r^2.
\end{aligned}
\end{equation}
If $\Sigma_t$ does not intersect the preimage of $U$ in $\tilde{M}$, then on $\Sigma_t$, $Ric_f \geq 0$ and $Ric_f>0$ at $\Sigma_t \cap \gamma$. This contradicts proposition 2.
For each $\Sigma_t$, we can find deck transformation $l_t$ on $\tilde{M}$ such that $l_t(\Sigma_t)$ intersects the preimage of $U$ at some fixed compact set in $\tilde{M}$.
Therefore, if we shrink the size of the neighborhood of $p$ and let $t\to 0$ sufficiently fast, a subsequence of $\Sigma_t$ will converge to a weighted area minimizing surface $\Sigma$ satisfying $$vol_{g(0)}(\Sigma\cap B_{g(0)}(q, \tilde{M})(r)) \leq Cr^2.$$
Thus, by proposition 2, $\Sigma$ is totally geodesic and $Ric_f(n, n) = 0$. Since $p$ is arbitrary,
through each point there exists a totally geodesic surface with $Ric_f(n, n) = 0$.
\bigskip
Now we use the assumption that the rank of $Ric_f$ is at least $2$ everywhere.
Then through each point $p\in \tilde{M}$, there exists a unique totally geodesic surface.
Therefore we have a foliation on $\tilde{M}$. We can parametrize the surfaces as $\Sigma_t$.
Let $N$ be the unit normal vector and $\lambda N$ be the variational vector field of $\Sigma_t$.
Since the surfaces of the smooth family $\Sigma_t$ never intersect each other, $\lambda$ is nonnegative. A simple computation shows that the
variational vector field of these totally geodesic surfaces satisfies $$\Delta\lambda + \lambda Ric(n, n) = 0.$$ Since
$$H = f_n = 0,$$
\begin{equation}
\begin{aligned}
0&=\frac{df_n}{dt}\\&= \lambda f_{nn}+\langle \nabla f, \nabla_{\lambda N} N\rangle\\&
=\lambda f_{nn} + \sum\limits_{i=1}^{2}\langle\nabla f, e_i\rangle\langle e_i, \nabla_{\lambda N} N\rangle\\&
=\lambda f_{nn} - \langle \nabla f, \nabla\lambda\rangle.
\end{aligned}
\end{equation}
In the above computation, $e_i$ is an orthonormal frame on an open set of $\Sigma$.
But $$0 = Ric_f(n, n) = Ric(n, n) + f_{nn},$$
thus we have $$\Delta_f\lambda = \Delta\lambda - \langle\nabla\lambda, \nabla f\rangle= 0$$ on $\Sigma$.
The lemma below is close to corollary 1 in \cite{[CY]}.
\begin{lemma}
For a smooth metric measured space $(M, g, e^{-f}dv)$ with quadratic weighted volume growth, if $\lambda$ is a positive function which satisfies $\Delta_f\lambda = 0$, then $\lambda$ is a constant.
\end{lemma}
\begin{proof}
Let $\lambda = e^h$, then $$\Delta h + |\nabla h|^2 -\langle\nabla h, \nabla f\rangle=0.$$
Let $\varphi$ be a cut-off function, we find
$$\int\varphi^2\Delta he^{-f} + \int\varphi^2|\nabla h|^2e^{-f} - \int\varphi^2\langle\nabla h, \nabla f\rangle e^{-f} = 0.$$
By integration by parts, $$\int\varphi^2(\Delta h)e^{-f} = -\int h_i2\varphi\varphi_ie^{-f}+\int h_i\varphi^2f_ie^{-f}.$$
Therefore $$\int\varphi^2|\nabla h|^2e^{-f} = 2\int\varphi_ih_i\varphi e^{-f}\leq 2(\int\varphi^2|\nabla h|^2e^{-f})^{\frac{1}{2}}(\int|\nabla\varphi|^2e^{-f})^{\frac{1}{2}}.$$ Thus $$\int\varphi^2|\nabla h|^2e^{-f} \leq 4\int|\nabla\varphi|^2e^{-f}.$$
Now we can use the same cut-off function in proposition 2 to show that $\nabla h \equiv 0$. Thus $\lambda$ is a constant.
\end{proof}
Since $\lambda$ is nonnegative and not identically zero, the strong maximum principle implies $\lambda > 0$; by lemma 3, $\lambda$ is then constant. After a reparametrization of $\Sigma_t$, we may assume $\lambda = 1$.
Now for $X\in T\Sigma_t$, $\nabla_XN = 0$, since $\Sigma_t$ is totally geodesic. Since $\lambda$ is a constant, we may assume $[X, N] = 0$. $\langle\nabla_NN, X\rangle = -\langle N, \nabla_NX\rangle
=-\langle N, \nabla_XN\rangle = 0$. Thus $\nabla N \equiv 0$.
Therefore $M$ is locally isometric to $\Sigma\times\mathbb{R}$.
$f$ is constant along the $\mathbb{R}$ factor, since $f_n = 0$.
\bigskip
Now consider the case when $M$ is compact. If the universal cover is compact, then according to Perelman's
solution to the Poincaré conjecture, $M$ is covered by $\mathbb{S}^3$. If the universal cover $\tilde{M}$ is noncompact, then according
to Theorem 6.6 in \cite{[WW]}, $\tilde{M}$ splits as a product $\Sigma\times \mathbb{R}$.
Finally, we show that in the splitting case, $\Sigma$ is conformal to $\mathbb{C}$ or $\mathbb{S}^2$.
There are two methods to do this. Note that on $\Sigma$, $$Ric_\Sigma + \nabla^2 f \geq 0.$$
Consider the conformal change of the metric $\tilde{g} = e^{-f}g$ on $\Sigma$, then the tensor $$Ric_\Sigma(\tilde{g}) = Ric_\Sigma(g)
+ \frac{1}{2}(\Delta_\Sigma f) g\geq 0.$$
As $f$ is bounded, $\tilde{g}$ is complete. Since $\Sigma$ is simply connected, $\Sigma$ is conformal to $\mathbb{C}$ or $\mathbb{S}^2$.
The second way is this: By lemma 1, the weighted volume growth of $\Sigma$ is at most quadratic. Since $f$ is bounded, the volume growth of $\Sigma$
is at most quadratic. If $\Sigma$ is conformal to the Poincaré disk, then there exists a nontrivial bounded harmonic function on $\Sigma$.
But according to corollary 1 in \cite{[CY]}, the function is a constant. This is a contradiction.
\end{proof}
\begin{remark}
The boundedness condition on $f$ cannot be dropped in the above theorem. For example, consider the warped product metric
$ds^2 = dt^2 + g(t)ds_{\Sigma}^2$ on
$M = \mathbb{S}^2 \times \mathbb{R}$. Here $ds_{\Sigma}^2$ is the standard metric on $\mathbb{S}^2$ with curvature $1$.
Consider an orthogonal frame $e_1, e_2, e_3$ on $M$ such that $ds_{\Sigma}^2(e_1, e_1) = ds_{\Sigma}^2(e_2, e_2) = 1$ and $\frac{\partial}
{\partial t} = e_3$.
If we take $f$ as a function of $t$ on $M$, then by similar computations in section $4$, we see
$$Ric_f(e_1, e_1) = 1-\frac{g''}{2}+\frac{f'g'}{2}, Ric_f(e_3, e_3) = \frac{-2g''g+g'^2+2g^2f''}{2g^2}.$$
If $f(t) = t^2$, $g(t) = e^t$, then one can check that $Ric_f > 0$; however, $M$ is neither a Riemannian product nor a contractible manifold.
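The claimed positivity in this example can again be checked symbolically; the sketch below assumes the conventions of Section 4, with the fiber curvature $+1$ replacing $-1$ in the tangential component:

```python
# Symbolic check (SymPy) of the remark's example on S^2 x R: with fiber
# curvature +1 the tangential formula from Section 4 becomes
# Ric_f(e_1, e_1) = 1 - g''/2 + f' g'/2, while Ric_f(e_3, e_3) is unchanged.
import sympy as sp

t = sp.symbols('t', real=True)
f, g = t**2, sp.exp(t)

ric_f_11 = sp.simplify(1 - sp.diff(g, t, 2) / 2 + sp.diff(f, t) * sp.diff(g, t) / 2)
ric_f_33 = sp.simplify((-2 * sp.diff(g, t, 2) * g + sp.diff(g, t)**2
                        + 2 * g**2 * sp.diff(f, t, 2)) / (2 * g**2))

assert sp.simplify(ric_f_33 - sp.Rational(3, 2)) == 0     # constant 3/2 > 0
# Ric_f(e_1, e_1) = 1 + e^t (t - 1/2): its only critical point is t = -1/2,
# where the value is 1 - e^(-1/2) > 0, so it is positive on all of R.
crit = sp.solve(sp.diff(ric_f_11, t), t)
assert all(float(ric_f_11.subs(t, c)) > 0 for c in crit)
```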
\end{remark}
\section{Introduction}
\label{sec:intro}
Magnetic reconnection is a fundamental problem essential for understanding
magnetohydrodynamic (MHD) flows. Within such flows magnetic flux tubes cross
each other, and therefore the properties of the flow depend on whether the tubes
can or cannot pass through each other.
The answer that follows from the Sweet--Parker theory of magnetic reconnection
\citep{Parker:1957, Sweet:1958} is that in typical astrophysical situations the
magnetic flux tubes cannot reconnect and change the magnetic field topology.
Indeed, the Sweet-Parker reconnection rate is $V_\mathrm{rec, SP} \approx
V_\mathrm{A} S^{-1/2} \ll V_\mathrm{A}$ with $S = L V_\mathrm{A} / \eta$ being
the Lundquist number, where $L$ is a scale of the reconnecting flux tube and
$V_A$ is the Alfv\'en speed. Given the large scales of magnetic fields involved
in astrophysical flows and the highly conductive nature of astrophysical
plasmas, it is obvious that $S$ is so large that the rates predicted by the
Sweet--Parker mechanism are absolutely negligible. This, however, is in gross
contradiction with observational data, e.g., the data on Solar flares. The
Sweet--Parker reconnection is an example of a slow reconnection, while one
requires much faster reconnection to explain observations. Formally, fast
reconnection is reconnection whose rate does not depend on $S$ or, if it does,
depends on it only logarithmically.
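To make these magnitudes concrete, a minimal numerical illustration of the Sweet--Parker rate $V_\mathrm{rec}/V_\mathrm{A} = S^{-1/2}$ (the Lundquist numbers below are illustrative orders of magnitude, not values quoted in the text):

```python
# Order-of-magnitude illustration of the Sweet-Parker rate
# V_rec / V_A = S^(-1/2); the Lundquist numbers are illustrative values.
S_values = [1e4, 1e9, 1e16]
rates = [S ** -0.5 for S in S_values]     # V_rec / V_A
assert all(r2 < r1 for r1, r2 in zip(rates, rates[1:]))
assert rates[-1] < 1e-7                   # negligible for astrophysical S
```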
For years the fast reconnection research was focused on the X-point reconnection,
i.e. the reconnection at which magnetic field is brought at a sharp angle in the
reconnection zone. This is opposed to the Sweet-Parker reconnection which is an
example of the Y-point reconnection. The X-point reconnection was proposed by
\cite{Petschek:1964} and required that the scales of the inflow and outflow of
the matter in the reconnection zone are comparable. Indeed, the slow rate of
reconnection with a Y-point can be viewed as a direct consequence of the
disparity between the scale of the astrophysical inflow of the fluid and the
scale of the outflow determined by microphysics, i.e., by resistivity or plasma
effects. This picture was, however, challenged by \citet[][henceforth LV99]{LazarianVishniac:1999}.
The most significant point of the LV99 theory was that in the presence of the 3D
turbulence, the reconnection outflow is determined by the magnetic field
wandering, and the width of the outflow is a function of the turbulence intensity
rather than of the resistivity or plasma effects\footnote{The LV99 proposal was
radically different from earlier suggestions of enhancing the reconnection rate
by turbulence. For instance, \cite{JacobsonMoses:1984} considered effect of
turbulence on micro-scales by increasing Ohmic resistivity. Obviously, this
could provide the change of the Sweet-Parker rate only by an insignificant
factor. Similarly, 2D simulations of turbulence in \cite{MatthaeusLamkin:1985,
MatthaeusLamkin:1986} could not shed light on the actual 3D physics of magnetic
reconnection. Indeed, the component of magnetic field responsible for the
wandering in the LV99 model is the Alfv\'enic mode. This mode is absent in 2D
simulations. Thus the authors were appealing to the X-points that are produced
by turbulence.}. The dependence of the reconnection rate on the level of
turbulence predicted by the LV99 theory was successfully tested in the
numerical studies of \cite{Kowal_etal:2009, Kowal_etal:2012}. More recently,
these predictions received an additional support from relativistic MHD
simulations by \cite{Takamoto_etal:2015}. The most important consequence of the
LV99 theory, contrary to all the previous theories of fast reconnection, was the
prediction that the reconnection does not require any special settings, but
happens everywhere in turbulent media. As a result, this violates flux freezing
in astrophysical fluids, which are generically turbulent \citep{Eyink:2011,
Eyink_etal:2011}. This remarkable break down of the classical magnetic flux
freezing \citep{Alfven:1942} theorem was numerically demonstrated in
\citep{Eyink_etal:2013}\footnote{The violation of flux freezing in turbulent
fluids entails an important effect of reconnection diffusion that has big
consequences, changing the paradigm of magnetically controlled star formation
\cite[see][]{Lazarian:2005, Santos-Lima_etal:2010, Lazarian_etal:2012}.}.
Turbulence can be externally driven, as testified by the
observations of the ISM and molecular clouds \cite[see][]{Armstrong_etal:1995,
Padoan_etal:2009, ChepurnovLazarian:2010, Chepurnov_etal:2015}, but it can
also be driven by the reconnection process itself, as first discussed in LV99 and further
elaborated in \cite{LazarianVishniac:2009}. The first numerical study of
magnetic reconnection induced by turbulence that is generated by reconnection
was performed in \cite{Beresnyak:2013} with an incompressible code, and later
in \cite{Oishi_etal:2015} and \cite{HuangBhattacharjee:2016} taking into account
compressibility. A detailed numerical study of reconnection with self-generated
turbulence was performed in \cite{Kowal_etal:2017}.
One of the most important questions of the current research in 3D reconnection
concerns the nature of turbulence in reconnection events. Our earlier study in
\cite{Kowal_etal:2017} demonstrated that the turbulence generated in the
reconnection events follows the Goldreich--Sridhar statistics
\citep{GoldreichSridhar:1995}. However, an open issue is related to the driving
mechanism of the observed turbulent motions. The literature has suggested that
tearing modes, plasmoid instabilities, and shear-induced instabilities could
mediate the energy transfer from coherent to turbulent flows. The issue of the
relative importance of different driving processes has not been explored
quantitatively.
In our numerical experiments we do not identify tearing modes, although
filamentary plasmoid-like structures are present. Visual inspection shows,
however, that the filling factor of these is small. Sheared flows, on the other
hand, are present around and within the whole current sheet. As the field lines
reconnect, the $\vec{v} \times \vec{B} + \vec{E}$ force increases, accelerating
the plasma and creating the current sheet. This process is, in three
dimensions, patchy and bursty. Therefore, the accelerated flows are strongly
sheared. The statistical importance of these bursty flows is large, as already
shown in the previous work \citep{Kowal_etal:2017}, as we compared the velocity
anisotropy of reconnecting events to that of decaying turbulence without the
reversed field. Kelvin--Helmholtz instability due to the sheared velocities in
reconnecting layers has already been conjectured as possible origin of
turbulence by \cite{Beresnyak:2013}. In \cite{Kowal_etal:2017}, we provided the
solid evidence for the self-generated turbulence driven by the velocity shear.
Here we perform a proper analysis of the growth-rates of such instabilities.
Velocity shear is a global process that occurs in regular magnetized and
unmagnetized fluids. The nonlinear evolution of the related instabilities, such
as Kelvin--Helmholtz instability, is known to be one of the main contributors to
the energy transfer between wave modes, i.e., the energy cascade. If the energy
cascade in reconnection layers is led by similar mechanisms, it is
straightforward to understand why the statistics observed resemble those of
Kolmogorov-like turbulence, and Goldreich--Sridhar anisotropy scaling. In other
words, our claim is that the turbulent onset and cascade in reconnection are not
different from those found in regular MHD and hydrodynamic systems.
In what follows we provide the analysis of Kelvin-Helmholtz and tearing
instabilities and define the conditions for their suppression in \S
\ref{sec:instabilities}, describe our approach and numerical simulations in \S
\ref{sec:model}, compare the rates of the two instabilities at different times
in \S \ref{sec:results}, discuss and state our conclusions in \S
\ref{sec:discussion} and \S \ref{sec:conclusions}.
\section{Analyzed Instabilities}
\label{sec:instabilities}
\subsection{Tearing Mode Instability}
\label{ssec:tr-analysis}
In the following analysis we consider two possible instabilities, namely the
tearing mode instability \citep{Furth_etal:1963}, which naturally develops in a
thin elongated current sheet, and Kelvin--Helmholtz instability
\citep[e.g.][]{Chandrasekhar:1961}, which could result from the local shear
produced by the outflows from reconnection sites. Both instabilities are able
to generate turbulence near current sheets; however, there is no clear answer
as to which one is responsible for or dominates the generation of turbulence in
stochastic reconnection, i.e. reconnection without an externally imposed
turbulence that results from weak initial plasma irregularities.
Following the analytic work by \cite{Furth_etal:1963}, which investigated the
finite--resistivity instabilities of a sheet pinch, we know that the tearing
instability develops under condition $k \delta < 1$, where $k$ is the
perturbation wave number (in the sheet plane) and $\delta$ is the
current sheet half-width \cite[see Table 1 in][]{Furth_etal:1963}. When this
condition is satisfied, the growth rate of the instability $\omega \tau_A$
within one Alfvén time $\tau_A = L/v_A$, where $L$ is the current sheet length
and $v_A = |\vec{B}|/\sqrt{\mu_0 \rho}$ is the Alfvén speed, is given by
\begin{equation}
\omega \tau_A = p \frac{\tau_A}{\tau_R}
= \left( \frac{2 S_\delta}{\pi k \delta} \right)^{2/5} S_\delta^{-1}
= \left( \frac{2}{\pi} \right)^{2/5} \left( k \delta \right)^{-2/5} S_\delta^{-3/5},
\label{eq:tr_grate}
\end{equation}
where $p = \omega \tau_R = \left( \frac{2 S_\delta}{\pi k \delta} \right)^{2/5}$
is the growth rate in terms of the resistive time scale $\tau_R$ \cite[as
provided in][]{Furth_etal:1963}, $S_\delta = \frac{v_A \delta}{\eta} = \frac{v_A
L}{\eta} \frac{\delta}{L} = S_L \frac{\delta}{L}$ is the specific Lundquist
number related to $\delta$. The regular Lundquist number $S_L={v_A L}/{\eta}$
is typically much larger than unity, e.g. $S_L \approx 10^{3} - 10^{4}$ in
numerical simulations and $S_L \gg 10^9$ in astrophysical plasmas. Also, the
current sheet thickness is typically much smaller than its length, i.e.
$\frac{L}{\delta} \gg 1$. These conditions indicate that, both in numerical
simulations and in astrophysical plasmas, the tearing instability should be common.
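As an illustration, the growth-rate expression above can be evaluated directly; the values of $S_L$ and $\delta/L$ used below are assumptions of the order typical for simulations, not values quoted from the runs:

```python
# Numerical sketch of the tearing growth-rate estimate derived above,
# omega tau_A = (2/pi)^(2/5) (k delta)^(-2/5) S_delta^(-3/5), for k delta < 1.
# The values of S_L and delta/L are illustrative assumptions.
import numpy as np

def tearing_growth_rate(k_delta, S_delta):
    """Tearing growth rate in units of 1/tau_A, valid for k_delta < 1."""
    return (2 / np.pi) ** 0.4 * k_delta ** -0.4 * S_delta ** -0.6

S_L, delta_over_L = 1.0e4, 1.0e-2
S_delta = S_L * delta_over_L              # S_delta = S_L * (delta / L)
rate = tearing_growth_rate(0.5, S_delta)
assert rate > 0
# at fixed k delta, the rate decreases with increasing Lundquist number
assert tearing_growth_rate(0.5, 10 * S_delta) < rate
```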
However, the tearing instability is subject to suppression under some
circumstances, in particular in the presence of turbulence. For instance,
\cite{SomovVerneta:1993} performed an analytic derivation of the instability in
the presence of the transverse component of magnetic field, which could be
easily generated by turbulence. They have shown that the expression for the
growth rate, once the transverse component is taken into account, changes to
\begin{equation}
\left( \omega \tau_A \right)^5 = \left( \frac{2}{\pi} \right)^2
\left( k \delta \right)^{-2} S_\delta^{-3} - \xi^2 S_\delta
\left( \omega \tau_A \right)^4,
\label{eq:tr_grate_xi}
\end{equation}
where $\xi = B_n / B$ is the ratio of the transverse component of magnetic field
to the reconnecting one. It can be seen from the equation above that for $\xi
> S_\delta^{-3/4}$ the tearing instability can be partially or completely
stabilized. Moreover, turbulent shearing should destroy the tearing
instability, i.e., if the shearing rate $v_l/l$ is larger than the tearing
instability growth rate, the instability should not appear.
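The modified dispersion relation above is a quintic in $\omega\tau_A$ and can be solved numerically; the sketch below (with illustrative parameters) shows the suppression of the mode as $\xi$ grows past roughly $S_\delta^{-3/4}$:

```python
# Numerical sketch: the dispersion relation above is a quintic in
# w = omega tau_A; solving it shows the suppression of the tearing mode
# once xi exceeds roughly S_delta^(-3/4). Parameters are illustrative.
import numpy as np

def growth_rate(k_delta, S_delta, xi):
    c = (2 / np.pi) ** 2 * k_delta ** -2 * S_delta ** -3
    # unique positive real root of w^5 + xi^2 S_delta w^4 - c = 0
    roots = np.roots([1.0, xi ** 2 * S_delta, 0.0, 0.0, 0.0, -c])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0].max()

S_delta = 100.0
w0 = growth_rate(0.5, S_delta, xi=0.0)
w1 = growth_rate(0.5, S_delta, xi=S_delta ** -0.75)        # threshold xi
w2 = growth_rate(0.5, S_delta, xi=10 * S_delta ** -0.75)
assert w1 < w0 and w2 < w1     # a growing transverse field suppresses the mode
```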
\subsection{Kelvin--Helmholtz Instability}
\label{ssec:kh-analysis}
In the presence of a velocity shear, the Kelvin--Helmholtz instability can
develop. Following \cite{Chandrasekhar:1961}, we can write its growth rate for
incompressible flow as
\begin{equation}
\omega \tau_A = 2 \pi k \sqrt{ \frac{1}{4} \frac{\Delta U^2}{v_A^2} - 1},
\label{eq:kh_grate}
\end{equation}
where $\Delta U$ is the shear velocity, i.e. the change of velocity in its
perpendicular direction, $v_A$ is the Alfvén speed and $k$ is the wave number of
the perturbation. Once the compressibility is taken into account
\cite[see][]{MiuraPritchett:1982}, the instability condition changes to
\begin{equation}
\omega^2 = \left( k_x^2 + k_y^2 \right) - \frac{1}{4} k_y^2 \frac{{\cal M}_s^2
{\cal M}_A^2}{\left( {\cal M}_A^2 + {\cal M}_s^2 - \frac{4 \left( \vec{k}
\cdot \vec{B} \right)^2}{k_y^2} \right)},
\label{eq:kh_grate_comp}
\end{equation}
where ${\cal M}_s = \Delta U / a$ and ${\cal M}_A = \Delta U / v_A$ are sonic
and Alfvénic Mach numbers related to the shear strength, respectively, $a$ is
the sound speed, and $k_x$ and $k_y$ are the wave numbers of perturbation in two
directions perpendicular to the shear. The analysis in
\cite{MiuraPritchett:1982} has shown that the Kelvin--Helmholtz instability is
completely suppressed for ${\cal M}_s > 2$ or ${\cal M}_A < 2$. We should
stress that these numbers are related to $\Delta U$ and not to the absolute
value of the velocity. However, it is important to notice that even though the
system may be strongly magnetized overall, in the regions where reconnection
occurs the local degree of magnetization decreases considerably, allowing the
growth of KH-unstable modes. Moreover, if the direction of the perturbation
propagation is perpendicular to the local magnetic field, the stabilization
effect of the magnetic field is negligible.
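A minimal numerical sketch of the incompressible growth rate above; it encodes only the threshold $\Delta U > 2 v_A$ and the linear dependence on $k$:

```python
# Minimal sketch of the incompressible criterion above: the shear is
# unstable only for Delta U > 2 v_A, with a growth rate linear in k.
import numpy as np

def kh_growth_rate(k, dU, vA):
    disc = 0.25 * (dU / vA) ** 2 - 1.0
    return 2 * np.pi * k * np.sqrt(disc) if disc > 0 else 0.0

vA = 1.0
assert kh_growth_rate(1.0, 1.5 * vA, vA) == 0.0     # dU < 2 vA: stable
assert kh_growth_rate(1.0, 3.0 * vA, vA) > 0.0      # dU > 2 vA: unstable
r1, r2 = kh_growth_rate(1.0, 3.0, vA), kh_growth_rate(2.0, 3.0, vA)
assert np.isclose(r2, 2 * r1)                       # linear in k
```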
\section{Methodology and Modeling}
\label{sec:model}
\subsection{Numerical Simulations}
\label{ssec:simulations}
In this work we analyze numerical simulations obtained in studies on the
statistics of the reconnection-driven turbulence, presented in
\cite{Kowal_etal:2017}. The simulations were done in a 3D domain with physical
dimensions $1.0 \times 4.0 \times 1.0$ using adaptive mesh with the effective
grid size $h = 1/1024$ (the same along each direction), by solving the
isothermal compressible magnetohydrodynamic equations using a high-order
shock-capturing Godunov-type code AMUN\footnote{The code is freely available at
\href{http://amuncode.org}{http://amuncode.org}}. The magnitude of the initial
$X$-component of magnetic field was set to $1.0$ with opposite signs above and
below the $XZ$ plane at $y=0$. A uniform guide field of $0.1$ was also set
along the $Z$ direction. The density was initially set to $1.0$ in the whole
computational domain. The initial velocity perturbation, with a random
distribution of directions and small amplitude, was set in the region $y \le
0.1$. For the analysis here, we have selected models with sound speed $a = 1.0$
($\beta = 2.0$) only. For more details about the numerical setup, boundary
conditions, and methods please refer to \citet{Kowal_etal:2017}. In all these
models we did not set explicit viscosity or resistivity. In addition, we
performed one numerical simulation of stochastic reconnection with sound speed
$a = 1.0$ and finite viscosity $\nu$ and resistivity $\eta$, both equal to
$10^{-5}$, in order to better control the effects of numerical diffusion. The
additional model was run with the effective cell size $h = 1 / 1024$ up to about
$t = 7.0$.
\subsection{Shear Detection in Vector Fields}
\label{ssec:shear_detection}
In order to detect the locations of current sheets we analyze a quantity which
is correlated with the local change of the polarization of magnetic field, or
simply the magnetic shear. There are several techniques proposed in the
literature to determine locations where reconnection takes place. The most
straightforward one is the amplitude of the current density $|\vec{J}|$. We
can also use the magnetic shear angle, i.e., the rotation of the magnetic
field vector across the current sheet, or the Partial Variance of Increments
(PVI) method, which measures the variation of the magnetic field across the
current sheet \cite[see][]{Greco_etal:2008,Servidio_etal:2011}. Similarly, for
the velocity shear, we can consider, for example, the vorticity $\vec{\omega}
= \nabla \times \vec{v}$ as the shear detector. In this work we analyze the
local maxima of the Euclidean norm of the shear rate tensor $S_{ij} =
{\partial u_i}/{\partial x_j} + {\partial u_j}/{\partial x_i}$, $S =
\|S_{ij}\| \equiv \sqrt{\sum_{ij} S_{ij}^2}$, where $x_i, x_j \in \{x, y,
z\}$ and $\vec{u}$ denotes either $\vec{B}$ or $\vec{v}$, depending on the
analyzed instability. Clearly, $S = S(x,y,z)$ is a function of position.
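As a concrete illustration, the shear detector $S$ can be evaluated on a
discretized vector field with a few lines of NumPy (a minimal sketch with our
own naming and array layout, not taken from the AMUN code):

```python
import numpy as np

def shear_rate_norm(u, h):
    """Euclidean norm S = sqrt(sum_ij S_ij^2) of the shear rate tensor
    S_ij = du_i/dx_j + du_j/dx_i, for a vector field u of shape
    (3, nx, ny, nz) sampled on a uniform grid with cell size h."""
    # grad[i][j] = partial u_i / partial x_j (2nd-order central differences)
    grad = [np.gradient(u[i], h) for i in range(3)]
    S2 = np.zeros(u.shape[1:])
    for i in range(3):
        for j in range(3):
            S2 += (grad[i][j] + grad[j][i]) ** 2
    return np.sqrt(S2)
```

For a uniform shear $u_x = a y$ this gives $S = a\sqrt{2}$ everywhere, coming
from the two off-diagonal terms $S_{xy} = S_{yx} = a$.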
Our algorithm to determine the local geometry of the shear consists of the
following steps:
\begin{enumerate}
\item At each domain point $(x,y,z)$ (e.g., cell) we calculate the 2$^{nd}$
order partial derivatives of $S(x,y,z)$ along each direction. If at least one
of these second directional derivatives is negative, the current position is
selected as belonging to a shear ridge.
\item At each selected ridge position we calculate the Hessian of the analyzed
shear detector $S$,
\begin{equation}
H_{ij} = \frac{\partial^2 S}{\partial x_i \partial x_j},
\end{equation}
where again $x_i, x_j \in \{x, y, z\}$, and solve its eigenproblem. The
minimum eigenvalue $\lambda_\mathrm{min} = \min{\{\lambda_i\}}$, if negative,
gives the steepest decay of the shear detector $S$ and the corresponding
eigenvector $\hat{e}_{n} = \hat{e}(\lambda_\mathrm{min})$ indicates the
direction of this decay. If the minimum eigenvalue is not negative, the
location is skipped.
\item The eigenvector $\hat{e}_{n}$ is perpendicular to the shear plane. In
order to determine the direction of the shear we use the fact that the vector of
the curl of the analyzed vector field (current density $\vec{J}$ for $\vec{B}$
or vorticity $\vec{\omega}$ for $\vec{v}$), which can be easily obtained, is
perpendicular to $\hat{e}_{n}$. Therefore, the direction of the sheared
component $\hat{e}_{s}$ is
\begin{equation}
\hat{e}_s = \hat{e}_n \times \hat{w},
\end{equation}
where $\hat{w} = \vec{w} / |\vec{w}|$ and $\vec{w} = \nabla \times \vec{u}$.
\item Next, we perform the interpolation of all three components of the
analyzed vector field, $u_x$, $u_y$, and $u_z$, along the vector $\hat{e}_{n}$
within a distance of several cell sizes (e.g., $s \in [-20h, 20h]$, where $s$
is the distance in units of the cell size $h$), and project the resulting
vectors on the direction of the shear component $\hat{e}_{s}$,
\begin{equation}
u_s(s) = \hat{e}_{s} \cdot \vec{u}(s \hat{e}_{n}).
\end{equation}
We use a piecewise quintic Hermite interpolation which preserves continuity of
the first and second derivatives \cite[see, e.g.,][]{Dougherty_etal:1989}.
\item At this point we perform fitting of a function $f(s) = u_a \tanh\left[(s
- s_0) / \delta \right] + u_0$ to the obtained profile $u_s(s)$. The estimated
value $u_a$ corresponds to the amplitude of the sheared component of vector
field and $u_0$ is the uniform component across the shear plane, but
perpendicular to the guide field.
\item Along the normal direction we can also project other quantities, for
example, in the case of the tearing mode we estimate the transverse component of
magnetic field $B_n(s) = \hat{e}_{n} \cdot \vec{B}(s \hat{e}_{n})$ or Alfvén
speed $v_A(s) = \hat{e}_{n} \cdot \vec{v}_A(s \hat{e}_{n})$. By averaging them
within the local current sheet, i.e. within the interval $s \in (-\delta,
\delta)$ of the fitted function, we can get the mean value of the transverse
component of magnetic field $\langle B_n \rangle$ and the Alfvén speed $\langle
v_A \rangle$, which is necessary to estimate the specific Lundquist number
$S_{\delta}$.
\item Finally, in order to estimate the length of the current sheet, i.e., the
longitudinal dimension of the local sheet plane, we project the shear detector
$S$ along the vector parallel to the reconnecting component of magnetic field,
$S(s) = S(s \, \hat{e}_s)$, and analyze the decay of $S$ along $s$. We measure
the distance $l$ between the points where $S$ drops to half of its central
value, treating $l$ as the longitudinal length of the current sheet.
\end{enumerate}
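Steps 2 and 3 of the algorithm can be sketched in NumPy as follows (a minimal
illustration with our own naming, assuming $S$ is given on a uniform grid and
the curl of the analyzed field is already known at the ridge cell):

```python
import numpy as np

def sheet_frame(S, curl_u, idx, h):
    """Sketch of steps 2-3: at a ridge cell `idx` build the Hessian of the
    shear detector S by central differences, take the eigenvector of the
    most negative eigenvalue as the sheet normal e_n, and get the shear
    direction as e_s = e_n x w_hat, with w the curl of the analyzed field."""
    p = np.array(idx)
    H = np.empty((3, 3))
    for a in range(3):
        for b in range(3):
            ea = np.zeros(3, int); ea[a] = 1
            eb = np.zeros(3, int); eb[b] = 1
            # mixed second derivative d^2 S / (dx_a dx_b)
            H[a, b] = (S[tuple(p + ea + eb)] - S[tuple(p + ea - eb)]
                       - S[tuple(p - ea + eb)] + S[tuple(p - ea - eb)]) / (4.0 * h * h)
    lam, vec = np.linalg.eigh(H)       # eigenvalues in ascending order
    if lam[0] >= 0.0:
        return None                    # no negative curvature: skip this cell
    e_n = vec[:, 0]                    # normal to the local sheet plane
    w_hat = curl_u / np.linalg.norm(curl_u)
    e_s = np.cross(e_n, w_hat)         # direction of the sheared component
    return e_n, e_s
```

For a ridge of the form $S \propto -(y - y_0)^2$ with the curl along
$\hat{z}$, this recovers $\hat{e}_n \parallel \hat{y}$ and
$\hat{e}_s \parallel \hat{x}$, as expected.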
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f1a}
\includegraphics[width=0.48\textwidth]{f1b}
\caption{{\it Left:} Sketch of a shear region with the local reference frame
indicating directions of the shear $\hat{e}_s$, transverse (normal) $\hat{e}_n$,
and guide components $\hat{e}_g$ of the analyzed vector field (magnetic field
$\vec{B}$ or velocity $\vec{v}$). The length $l$ of the shear region and its
thickness $\delta$ are determined along $\hat{e}_s$ and $\hat{e}_n$ axes,
respectively.
{\it Right:} Example profiles of the shear (reconnecting), transverse and guide
components of magnetic field ($B_s$, $B_n$, and $B_g$, respectively) projected
on the direction normal to current sheet in one of the selected points of the
detected current sheet. The estimated parameters of the fitting of the
reconnecting component $B_s(s)$ are shown in the title. The horizontal scale is
in the units of the effective cell size $h$.
\label{fig:fit}}
\end{figure*}
The procedure described above allows us to estimate the thickness $\delta$ and
the longitudinal dimension $l$ of a shear region at an arbitrary position, to
determine the direction of the shear (e.g., the reconnecting component in the
case of magnetic field) $\hat{e}_s$, and to estimate other growth-rate-related
parameters, such as the mean transverse and guide magnetic components, $B_n$
and $B_g$, respectively, the relative strength of the transverse component
$\xi = B_n / B_a$, the maximum current density $J_m = B_p^2 / \delta$, or the
specific Lundquist number $S_\delta = {v_A \delta}/{\eta}$ in the case of the
tearing mode, or the shear strength $\Delta U$ and Alfvén speed $v_A$ in the
case of the Kelvin--Helmholtz instability.
In the left panel of Figure~\ref{fig:fit} we show a sketch of a shear region
(with arbitrary orientation) with vector field lines of opposite polarization
(red and blue) and the local reference frame used to project the field
components on the three axes $\hat{e}_s$, $\hat{e}_n$, and $\hat{e}_g$,
corresponding to the shear, transverse, and guide components. For the case of
magnetic shear, the extracted profiles of the shear (reconnecting) $B_s(s)$,
transverse $B_n(s)$, and guide $B_g(s)$ components (blue, orange, and green,
respectively) along the direction normal to the current sheet are shown in the
right panel of Figure~\ref{fig:fit}. This panel also shows the fitting of the
shear component (red dashed line) with the estimated parameters $B_a$, $B_0$,
and $\delta$, whose values are shown in the title together with the estimated
stabilizing parameter $\xi$.
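The fitting of step 5 can be illustrated with a small, dependency-free sketch
(the function name and grid-search strategy are our own; in practice a
standard routine such as `scipy.optimize.curve_fit` would be used). For each
trial pair $(\delta, s_0)$ on a coarse grid, the optimal amplitude $u_a$ and
offset $u_0$ follow from linear least squares:

```python
import numpy as np

def fit_tanh_profile(s, u_s, deltas, s0s):
    """Fit f(s) = u_a * tanh((s - s0)/delta) + u_0 to a projected shear
    profile u_s(s) by a grid search over (delta, s0); for each trial the
    best (u_a, u_0) is a linear least-squares problem."""
    best = None
    for delta in deltas:
        for s0 in s0s:
            t = np.tanh((s - s0) / delta)
            A = np.stack([t, np.ones_like(t)], axis=1)
            coef, *_ = np.linalg.lstsq(A, u_s, rcond=None)
            r = np.sum((A @ coef - u_s) ** 2)   # residual of this trial
            if best is None or r < best[0]:
                best = (r, coef[0], coef[1], delta, s0)
    _, u_a, u_0, delta, s0 = best
    return u_a, u_0, delta, s0
```

For the magnetic shear, $u_a$ and $u_0$ then play the roles of the fitted
$B_a$ and $B_0$ shown in the right panel of Figure~\ref{fig:fit}.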
\subsection{Extracted Parameters for Tearing Instability}
\label{ssec:tr-tests}
The estimation of the growth rate of the tearing mode in fluid simulations is
not trivial. First of all, it is necessary to detect the locations of current
sheets using, for example, the algorithm presented in the previous subsection.
Once this is done, one has to estimate the length $l$ and thickness $\delta$
of the local sheet. Within the local sheet we can estimate the strength
of the transverse and reconnecting components, $B_n$ and $B$, respectively, in
order to determine $\xi$, and the specific Lundquist number $S_\delta$. It is
especially difficult to characterize the local perturbations. Usually we
have a situation where several perturbation waves of different amplitudes,
travelling in different directions, are present in the analyzed region. It is
enough, however, to determine the limit on the minimum wave number $k_{min}$,
which can be obtained from the already estimated length of the current sheet,
i.e., $k_{min} \approx L / l$. The maximum wave number is determined by the
resolution of the simulation, $k_{max} \approx L / h$.
\subsection{Extracted Parameters for Kelvin--Helmholtz Instability}
\label{ssec:kh-tests}
The Kelvin--Helmholtz instability is analyzed in a similar manner as the
tearing mode. Here we determine the positions of the velocity shear using the
algorithm described in this section. Once a shear region is detected, its
thickness $\delta$ and longitudinal dimension $l$ are estimated. These two
parameters allow us to estimate the permitted range of perturbation wave
numbers. In order to determine the growth rate of the Kelvin--Helmholtz
instability in each detected region, we calculate the shear velocity $\Delta U
= v_s(s_0 +\delta) - v_s(s_0-\delta)$ from the interpolated transverse
component of velocity, where $s_0$ and $\delta$ are obtained from the fitting
of the function $f(s)$ (see step 5 in \S\ref{ssec:shear_detection}). Similarly
to the tearing mode, we estimate the mean Alfvén speed $v_A$ across the shear
region. In this way we build a vector of samples of the shear width $\delta$,
the shear velocity $\Delta U$, and the Alfvén speed $v_A$, necessary to verify
the stability conditions and estimate the growth rate from
Eq.~\ref{eq:kh_grate}, whose statistics we analyze in the next section.
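The per-sample stability check built from these vectors can be sketched as
follows. The instability condition $\Delta U > 2 v_A$ is the one used in the
analysis below; the growth rate here uses the textbook vortex-sheet relation
$\gamma = k\sqrt{(\Delta U/2)^2 - v_A^2}$ as a hedged stand-in for
Eq.~\ref{eq:kh_grate}, whose exact form may differ:

```python
import numpy as np

def kh_growth_rates(delta_u, v_a, k):
    """Kelvin-Helmholtz stability check and growth rate for sample vectors
    of the shear strength Delta U and Alfven speed v_A. A cell is unstable
    only when Delta U > 2 v_A; for unstable cells the textbook vortex-sheet
    rate gamma = k * sqrt((Delta U / 2)^2 - v_A^2) is returned (an assumed
    form, standing in for the dispersion relation used in the paper)."""
    delta_u = np.asarray(delta_u, float)
    v_a = np.asarray(v_a, float)
    unstable = delta_u > 2.0 * v_a
    gamma = np.zeros_like(delta_u)
    gamma[unstable] = k * np.sqrt((delta_u[unstable] / 2.0) ** 2
                                  - v_a[unstable] ** 2)
    return unstable, gamma
```

Applied to the sample vectors, this yields the filling factor of unstable
cells and a growth-rate histogram for each assumed wave number $k$.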
\section{Analysis and Results}
\label{sec:results}
\subsection{Tearing instability analysis}
Before we estimate the growth rate of the tearing mode instability, we should
analyze the properties of the current sheets in the system which influence the
growth rate. From Eq.~\ref{eq:tr_grate} we see that the growth rate increases
with the decrease of the current sheet thickness $\delta$ and the increase of
the perturbation wavelength (i.e., the decrease of the wave number $k$). These
two quantities also determine the instability condition $k \delta < 1$.
Therefore, we will analyze them first.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f2a}
\includegraphics[width=0.48\textwidth]{f2b}
\caption{Statistics of the current sheet thickness $\delta$ (left) and length
$l$ (right) for the model with sound speed $a=1.0$, explicit resistivity
$\eta = 10^{-5}$, and effective cell size $h = 1/2048$ at different evolution
times, $t = 0.1$, $1.0$, $3.0$, $5.0$, $7.0$ (blue, orange, green, red, and
purple, respectively). The right vertical dashed line (teal) shows the initial
thickness of the current sheet, $\delta_{ini} = 3.16\times10^{-3}$, and the
left line (grey) shows the effective grid size $h$.
\label{fig:csheet_dimensions}}
\end{figure*}
In the left plot of Figure~\ref{fig:csheet_dimensions} we show the
distribution of current sheet thicknesses $\delta$ in all detected current
sheet cells in the model with sound speed $a = 1.0$ at times $t = 0.1$, $1.0$,
$3.0$, $5.0$, and $7.0$. The two vertical lines correspond to the effective
cell size $h = 1/2048$ (left) and the initial current sheet thickness
$\delta_{ini} = 3.16\times10^{-3}$ (right). We see that at initial times there
are two populations of samples corresponding to current sheet regions: the
first, dominating one, in which the thickness broadens to values several times
larger than the initial thickness $\delta_{ini}$, and the second one
characterized by very broad current sheets with $\delta$ comparable to a
fraction of the unit length $L$ (see the green line in
Fig.~\ref{fig:csheet_dimensions} corresponding to $t = 3.0$). This population
of broad current sheets seems to be transient, since at $t = 5.0$ (red line in
Fig.~\ref{fig:csheet_dimensions}) it is significantly reduced. For $t \ge
5.0$, the distributions are no longer characterized by two populations, and
they shift to much smaller values of $\delta$, with a fraction of the detected
current sheet samples becoming comparable to or below the effective cell size
$h$, indicating a sharp change of the magnetic field orientation across the
sheet (only two cells to change the polarization of the magnetic field lines),
probably related to the turbulent dynamics near the sheet plane. On the other
hand, the number of current sheet samples quickly decays with the value of
$\delta$, indicating that thick current sheets are no longer common in the
system.
Correspondingly, in the right plot of Figure~\ref{fig:csheet_dimensions} we
show the evolution of the distribution of the current sheet region lengths for
the same model at the same times. As expected, initially we have one current
sheet plane, extended over the whole box. This is indicated by a significant
number of samples with $l \approx 1.0$ at $t = 0.1$. However, we can also see
that a less significant population of samples shows lengths equal to a
fraction of $L$. We interpret them as points belonging to parts of the current
sheet that are already strongly deformed, since our analysis cannot determine
whether these points belong to the same or to separate current sheets.
Importantly, this population increases with time, as seen at times $t = 1.0$
and $3.0$ (orange and green curves, respectively). At later times, $t > 3.0$,
nearly all points belong to significantly shorter current sheet regions than
initially, with values mostly spread between $l = 10^{-2}$ and $10^{-1}$ at $t
= 7.0$ (purple line).
Analyzing Figure~\ref{fig:csheet_dimensions} we can deduce that the tearing
mode should be the preferential instability at initial times, which are
characterized by relatively thin and extended current sheets with $\delta
\approx 0.005 - 0.05$ and $k = 1/l \approx 1 - 10$, resulting in the
instability condition $k \delta \lesssim 0.5 < 1$. At later times the
thickness decreases to values $\delta \approx 0.001 - 0.1$, which should
support the development of the tearing mode; however, the fragmentation and
deformation of the current sheet significantly decrease the length of the
current sheet regions, somewhat increasing the product $k \delta$.
Nevertheless, we should remember that the initial perturbations were imposed
at very small scales, $k > 100$, resulting in a relatively inefficient
development of the tearing mode during the first stage, reduced even more by
the initial broadening of the current sheet thickness. At later times the
situation could improve, since the developed turbulence generates fluctuations
at larger scales and helps to decrease the thickness of the current sheet.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f3a}
\includegraphics[width=0.48\textwidth]{f3b}
\caption{Correlations between the ratio of the transverse component of
magnetic field to the magnetic field amplitude within the current sheet, $\xi
= B_n / B$, and the Lundquist number $S_\delta = \delta \, v_A / \eta$ at two
evolution times, $t = 3.0$ (left) and $t = 7.0$ (right). The red dashed line
divides the unstable (below) and stable (above) regions.
\label{fig:correlations}}
\end{figure*}
The analysis above does not answer clearly whether the turbulence can
initially be generated by the tearing mode. At later times, the turbulence
develops in regions where the current sheet is thinner, potentially increasing
the growth rate of the instability. At the same time, however, it is possible
that the same turbulence generates a component of magnetic field normal to the
current sheet, which, according to Equation~\ref{eq:kh_grate_comp}, may
suppress the instability. In order to analyze the stabilizing effect of this
component, we show the correlations between the normalized transverse
component of magnetic field $\xi = B_n/B$ and the specific Lundquist number
$S_\delta = v_A \delta / \eta$ in Figure~\ref{fig:correlations} at two times,
$t = 3.0$ (left panel) and $t = 7.0$ (right panel). The red line,
corresponding to the relation $\xi = S_\delta^{-3/4}$, divides the plot into
two regions: one below the line, where $\xi$ has a negligible effect, and
another above the line, where the stabilization by $\xi$ takes place and is
significant. From the distribution shown in the left panel we see that most of
the detected cells are unstable at $t = 3.0$, with values of $S_\delta$
concentrated slightly below $10^3$ and the stabilization parameter $\xi$
spreading up to a value of $10^{-2}$. A partial stabilization in the upper
tail, i.e., for $\xi \approx 10^{-2} - 10^{-1}$, already takes place. It has a
characteristic extension toward larger values of $S_\delta$ at $\xi \approx
10^{-1}$. This is probably due to the broadening of the current sheet seen in
the left panel of Figure~\ref{fig:csheet_dimensions}. The points above $\xi =
1.0$ are statistically insignificant.
At the later time, $t = 7.0$, shown in the right panel of
Figure~\ref{fig:correlations}, the situation is very different. The points of
the distribution are spread toward lower values of $S_\delta$, roughly between
$10^1$ and $10^3$, and across many orders of magnitude along the stabilization
parameter $\xi$, nearly up to $10^2$. We see a significant concentration of
detected samples slightly above the red line dividing the two stability
regions. The spread along the horizontal direction should be attributed to the
decrease of the current sheet thicknesses due to the action of turbulence,
which is also responsible for generating the transverse component stabilizing
the tearing instability. From the distributions shown we see that the
generation of $\xi$ by turbulence cannot be ignored in any analysis of the
growth rate of the tearing mode. If this effect does not completely stabilize
the instability, it can at least significantly suppress its growth (see the
second term on the right hand side of Eq.~\ref{eq:tr_grate_xi}).
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f4a}
\includegraphics[width=0.48\textwidth]{f4b}
\caption{Distribution of the magnetic shear direction in unstable cells for the
same model and times as shown in Fig.~\ref{fig:correlations}. The horizontal
angle corresponds to the azimuthal angle projected on the XZ plane with respect
to the X axis. The vertical angle is the angle between the shear direction and
the XZ plane.
\label{fig:tr_angular}}
\end{figure*}
An interesting question to ask is what the principal direction of magnetic
shear in the unstable cells is at different times, considering the presence of
a guide field and weak initial perturbations. In Figure~\ref{fig:tr_angular}
we show the angular distribution of the shear measure for the unstable cells
only at two times, $t = 3.0$ and $7.0$. We notice that at $t=3.0$ the shear
direction is still strongly concentrated along the X direction, with a spread
roughly from $-20^\circ$ to $20^\circ$ in the azimuthal and from $-10^\circ$
to $10^\circ$ in the vertical direction, with some very rare events reaching
even higher angles. At the final time, $t = 7.0$, we notice that the
distribution of directions, even though still concentrated along the X axis,
has this time a much larger spread in both the azimuthal and vertical
directions. This indicates that the turbulence acting on the current sheet can
significantly bend it, modifying its local topology.
\subsection{Kelvin--Helmholtz instability analysis}
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f5a}
\includegraphics[width=0.48\textwidth]{f5b}
\caption{Statistics of the velocity shear region thickness $\delta$ (left) and
length $l$ (right) for the same model as shown in
Fig.~\ref{fig:csheet_dimensions} at different evolution times, $t = 0.1$,
$1.0$, $3.0$, $5.0$, $7.0$ (blue, orange, green, red, and purple,
respectively). Solid lines represent Kelvin--Helmholtz unstable cells only,
while the dashed lines show the distribution for all detected shear regions.
The vertical dashed line (grey) shows the effective grid size $h$.
\label{fig:kh_shear_dimensions}}
\end{figure*}
Similarly to the tearing mode analysis, in
Figure~\ref{fig:kh_shear_dimensions} we first show the evolution of the
distributions of the thickness and length of the velocity shear regions used
to analyze the Kelvin--Helmholtz instability. The first interesting
observation from these plots is that there are no detected velocity shear
regions at times $t = 0.1$ and $1.0$, or the shear strength was too weak,
below the threshold value $\Delta U_{min} = 10^{-4}$ set in the shear
detection algorithm. These distributions are for all cells with detected
velocity shear, not only the unstable ones. At $t = 3.0$ we already see a
number of cells belonging to shear regions with thicknesses between
$2\times10^{-3}$ and a fraction of the length unit, with a peak value shifting
from values below $10^{-2}$ at $t = 3.0$ to values larger than $10^{-2}$ at $t
= 7.0$. Looking at the right panel of Figure~\ref{fig:kh_shear_dimensions} we
see that these shear regions spread in longitudinal dimension from several
cells to the length unit, indicating the generation of a nearly global shear
in the computational domain. Transforming these lengths into wave numbers
indicates that perturbations of any $k$, from $k = 1$ up to nearly $k \sim
1000$, may grow due to the Kelvin--Helmholtz instability, if appropriate
conditions are fulfilled in the local shear region. The peak value of the
longitudinal dimension of the shear regions is about $l \sim 0.1$, decreasing
slightly at later times, corresponding to a wave number of $k \sim 10$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f6a}
\includegraphics[width=0.48\textwidth]{f6b}
\caption{{\it Left:} Evolution of the distribution of the velocity shear
strength $\Delta U$ for different times. Dashed lines correspond to all cells
where shear was detected, while solid lines show only Kelvin--Helmholtz unstable
cells.
{\it Right:} Distribution of sonic (blue) and Alfvénic (grey) Mach numbers
related to the velocity shear in Kelvin--Helmholtz unstable samples for the
model with sound speed $a = 1.0$ at $t = 7.0$. The red dashed line corresponds
to a Mach number equal to $2.0$. As predicted by \cite{MiuraPritchett:1982}, for all
unstable cells ${\cal M}_s < 2$ and ${\cal M}_A > 2$.
\label{fig:kh-mach}}
\end{figure*}
The most important parameter in the development of the Kelvin--Helmholtz
instability is the shear strength $\Delta U$. In the left panel of
Figure~\ref{fig:kh-mach} we show the evolution of the distribution of $\Delta
U$ for all cells where shear was detected (dashed lines) and only for the
cells which are unstable, i.e., where $\Delta U > 2 v_A$. We notice that even
though shear is relatively common after $t = 3.0$, only the cells with the
strongest $\Delta U$ are in fact unstable. We see that for these unstable
cells the shear strength spreads from $10^{-2}$ to nearly $2.0$, measured in
units of the Alfvén speed $v_A$. At later times, the distribution peaks at
values close to $v_A$. This plot clearly indicates that strong shear can be
generated in such systems in a relatively short time.
In the right panel of Figure~\ref{fig:kh-mach} we verify the prediction for
the compressible system by \cite{MiuraPritchett:1982}, that the instability is
stabilized if the sonic Mach number ${\cal M}_s > 2.0$ or the Alfvénic Mach
number ${\cal M}_A < 2.0$. We show the distribution of both Mach numbers in
the last snapshot of our simulation, at $t = 7.0$. Clearly, all unstable cells
have sonic Mach number ${\cal M}_s < 2.0$ and Alfvénic Mach number ${\cal M}_A
> 2.0$, in perfect agreement with the theoretical prediction.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f7a}
\includegraphics[width=0.48\textwidth]{f7b}
\caption{Distribution of the velocity shear direction in unstable cells for the
same model and times as shown in Fig.~\ref{fig:correlations}. The horizontal
angle corresponds to the azimuthal angle projected on the XZ plane with respect
to the X axis. The vertical angle is the angle between the shear direction and
the XZ plane.
\label{fig:kh_angular}}
\end{figure*}
Similarly to the tearing mode analysis, in Figure~\ref{fig:kh_angular} we show
the distribution of the shear direction, this time for the velocity field, at
two times: $t = 3.0$ (left), when the Kelvin--Helmholtz unstable cells appear,
and the final time $t = 7.0$ (right). We see that at $t = 3.0$ the
Kelvin--Helmholtz instability is still insignificant, with only around 900
unstable cells detected. However, these unstable cells have a relatively large
angular spread around the X direction. The velocity shear directions are
scattered over all angles in both the azimuthal and vertical directions,
although the statistically significant part is within $20^\circ$ of the X
axis, elliptically elongated in the azimuthal direction.
\subsection{Evolution of the Growth Rates: Tearing vs. Kelvin--Helmholtz}
Supported by the results of the previous subsections, which analyzed the
factors that drive or suppress both instabilities, we can now estimate the
growth rate of each instability for an assumed perturbation wave number $k$.
As we already showed, the range of possible wave numbers $k$ for both
instabilities can be estimated from the distributions of the longitudinal
dimension of the shear regions $l$. For both instabilities, the minimum wave
number is $k_{min} = 1$ due to the size of the box. The estimated maximum wave
number $k_{max}$ is, however, slightly different for the two instabilities,
with $k_{max} \approx 300$ for the tearing mode and $k_{max} \approx 800$ for
Kelvin--Helmholtz, with peak values between $k = 1$ and $k = 100$ for both.
Therefore, the estimation of the growth rates was done for three assumed
values of the perturbation wave number, $k = 1$, $10$, and $100$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{f8a}
\includegraphics[width=0.48\textwidth]{f8b}
\includegraphics[width=0.48\textwidth]{f8c}
\includegraphics[width=0.48\textwidth]{f8d}
\includegraphics[width=0.48\textwidth]{f8e}
\includegraphics[width=0.48\textwidth]{f8f}
\caption{Evolution of the growth rates for the tearing mode (left column) and
Kelvin--Helmholtz instability (right column) at three different times, $t =
3.0$, $5.0$, and $7.0$ (upper, middle, and lower rows, respectively), for
three selected wave numbers of perturbations, $k = 1$, $10$, and $100$ (grey,
blue, and green histograms, respectively).
\label{fig:grate-evol}}
\end{figure*}
In Figure~\ref{fig:grate-evol} we show the estimated growth rate
distributions for both instabilities. The statistics for the tearing mode and
the Kelvin--Helmholtz instability are shown in the left and right columns,
respectively. Three different times were chosen, $t = 3.0$, $5.0$, and $7.0$,
shown in the upper, middle, and lower rows, respectively. We see that at
earlier times the Kelvin--Helmholtz instability is relatively negligible. The
number of detected cells is orders of magnitude lower compared to the case of
the tearing mode. However, even though the tearing mode is widespread, its
growth rates are very small, meaning that it would need several Alfvén times
to develop, considering that the initial perturbations have mostly small
scales. On the contrary, the Kelvin--Helmholtz instability, even with an
insignificant filling factor at $t=3.0$, can develop in a fraction of the
Alfvén time, since its estimated growth rates are much larger than the tearing
mode ones, as seen in the upper right panel of Figure~\ref{fig:grate-evol}.
At $t=5.0$, the number of samples of the Kelvin--Helmholtz instability is
comparable to that of the tearing mode (see the middle row of
Fig.~\ref{fig:grate-evol}). The tearing mode growth rates extend to higher
values, roughly by an order of magnitude, compared to the earlier time shown,
$t = 3.0$. The Kelvin--Helmholtz growth rates, however, extend toward both
smaller and higher values, reaching values of $10^4$ for $k=100$, three orders
of magnitude higher compared to the tearing mode.
For the final time shown, $t = 7.0$, the distributions of growth rates for
both instabilities look very similar to those at $t=5.0$. Both instabilities
have slightly above $10^5$ samples at the peak values of their distributions;
however, the tearing mode peaks at growth rates $\omega \tau_A \lesssim 1$,
while the Kelvin--Helmholtz instability peaks at $\omega \tau_A \gg 1$. This
indicates that if any favorable velocity shear is formed by the turbulence,
the instability can grow nearly instantly.
\section{Discussion}
\label{sec:discussion}
\subsection{Limitations of our approach}
Our approach to analyzing the tearing and Kelvin--Helmholtz instabilities is
robust; however, it has its drawbacks, which should be pointed out. First of
all, in the presence of growing turbulent fluctuations, it is nearly
impossible to determine the characteristics of the perturbations present in
the analyzed local shear region. In order to determine the growth rate
precisely, we would have to possess information about the wave number and
direction of each local perturbation. To compensate for the lack of these
data, we assumed a few typical wave numbers ($k = 1, 10, 100$) and the
direction which produces the smallest growth rate (e.g., through the term
$\vec{k} \cdot \vec{B}$ in the Kelvin--Helmholtz case).
Our results are based on statistics extracted from a cell-by-cell analysis.
We do not determine the individual shear regions and analyze each region
separately. For example, in the case of the tearing instability analysis, we
initially have one current sheet crossing the whole computational box. Due to
the developing turbulence, this current sheet is deformed, and eventually
fragmented into a number of current sheet regions, not necessarily separated,
but interlinked in a complex manner. Therefore, our analysis should be
understood in terms of volume (or filling factor) rather than individual
structures. This should be kept in mind especially when interpreting the
statistics of the longitudinal dimensions of the shear regions.
The original derivation of the growth rate of the Kelvin--Helmholtz
instability considered a discontinuous velocity shear
\cite[see][]{Chandrasekhar:1961}. As a result, the growth rate depends
linearly on the perturbation wave number. Later, a number of works analyzed
this instability taking into account a smooth transition of velocity within
the shear region, concluding that there exists a wave number $k_{max}$ at
which the growth rate is maximal, typically related to the thickness of the
region $\delta$ \cite[see][]{OngRoderick:1972, Walker:1981,
MiuraPritchett:1982, Chen_etal:1997, BerlokPfrommer:2019}. This maximum growth
rate is usually a fraction of the growth rate corresponding to the
discontinuous shear. Still, it is much larger than the growth rates for the
tearing mode obtained in our analysis. These works also demonstrated
stabilization of the instability for $k \delta \gg 1$. In our models $\delta$
varies between $10^{-3}$ and $10^{-1}$, which confirms that the selected
values of the wave number $k$ are reasonable. Nevertheless, we aim to study
the effects of the finite thickness of the velocity shear region in a
forthcoming paper.
\subsection{Turbulent reconnection as a dominant process}
Suggested 20 years ago, the turbulent reconnection model has gotten significant
support both from subsequent numerical \cite[see][]{Kowal_etal:2009,
Kowal_etal:2012, Kowal_etal:2017, Eyink_etal:2013, Oishi_etal:2015,
Takamoto_etal:2015, Beresnyak:2017, Takamoto:2018}, theoretical
\citep{Eyink_etal:2011, Eyink:2011, Eyink:2015, Lazarian_etal:2015,
Lazarian_etal:2019}, as well as observational
\cite[see][]{CiaravellaRaymond:2008, Sych_etal:2009, Sych_etal:2015,
KhabarovaObridko:2012, Lazarian_etal:2012, Santos-Lima_etal:2013,
Leao_etal:2013, Gonzales-Casanova_etal:2018} studies. At the moment of its
introduction, the model was an alternative to the Hall-MHD models predicting a
Petschek X-point geometry of the reconnection region, i.e., a very regular type of
reconnection. The latter models required the plasma to be collisionless, in
contrast to the turbulent one, which does not depend on any plasma microphysics
and is applicable to both collisional and collisionless media. It was later
understood that the X-point geometry is not tenable in realistic settings.
Instead, the tearing reconnection \cite[see][]{Syrovatskii:1981,
Loureiro_etal:2007, Bhattacharjee_etal:2009} became the main alternative
scenario for the turbulent model. So far, two-dimensional simulations have
demonstrated fast reconnection for both MHD and kinetic regimes. Compared to the
earlier Hall-MHD reconnection, which necessarily required a collisionless
plasma, this was definitely an important improvement. The tearing reconnection
shares many features with the turbulent one. For instance, Hall-MHD
reconnection required a particular set of boundary conditions that was
difficult to preserve in a realistic setting with random external
perturbations; such conditions are not needed in the tearing case.
With two reconnection processes providing fast reconnection, it is important to
understand the applicability of each. It has been numerically demonstrated in
\cite{Kowal_etal:2009} that including additional microscopic effects simulating
enhanced plasma resistivity does not change the turbulent reconnection rate.
This agrees well with the theoretical expectations in turbulent reconnection
(see LV99 and \cite{Eyink:2011}), in particular with the generalized Ohm's law
derived in \cite{Eyink:2015}. As a result, if the medium is already turbulent,
one does not expect to see effects of tearing reconnection. Given the existing
observational evidence for the turbulence of astrophysical fluids, this means
that the turbulent reconnection is dominant in most cases. For instance, we
expect the turbulent reconnection to govern the violation of flux freezing in
turbulent fluids. This results in reconnection diffusion, which governs star
formation \citep{Lazarian_etal:2012} and induces violations of the structure of
the heliospheric current sheet and the Parker spiral \citep{Eyink:2015}.
The numerical results on flux freezing violation that follow from the LV99
theory cannot possibly be explained with the tearing reconnection. This clearly
demonstrates that there are situations when the turbulent reconnection is at
work, while tearing reconnection is not expected.
The “pure” problem of self-driven turbulent reconnection was the focus of our
study in \cite{Kowal_etal:2017}. There we showed that, in the absence of
external turbulence driving, turbulence develops in the reconnection region,
and this turbulence has the properties corresponding to the expectations of
MHD turbulence theory. This was in contrast to \cite{HuangBhattacharjee:2016}, who
claimed that turbulence produced in reconnection regions is radically different
from the \cite{GoldreichSridhar:1995} one. The properties of turbulence are
important, as the LV99 magnetic reconnection and the closely connected
Richardson dispersion \citep{Eyink_etal:2011} are proven to work in conditions
where no tearing instability is expected. Therefore, if this type of turbulence
is present in the reconnection regions it is expected to induce fast
reconnection. The correspondence of the reconnection rates in self-driven
reconnection with the expectations of the LV99 theory was demonstrated in
\cite{Lazarian_etal:2015}, where the results of earlier simulations, e.g.
\cite{Beresnyak:2013}, were analyzed.
The present paper is a step forward in understanding the process of self-induced
fast reconnection. Here we explore the nature of turbulence driving in the
reconnection region. If the tearing mode were absolutely essential for driving
turbulence, one might still argue that the actual reconnection happens via
tearing, while the turbulence plays only an auxiliary role in the process.
Our results, in fact, testify that the actual picture is very different. In 3D,
the tearing mode plays a role only at the earliest stage of reconnection. As
the system evolves in time, the outflows induced by the reconnection region
become turbulent, with the Kelvin--Helmholtz instability playing the dominant
role. As the reconnection grows, the region becomes more and more turbulent,
with the tearing instability being overtaken or even suppressed and not
playing a role in the reconnection process overall.
Our simulations are performed in the high plasma beta regime, however, and in
such conditions the reconnection outflow does not induce sufficient turbulence
to trigger the self-accelerating process of "reconnection instability"
\cite[see][]{LazarianVishniac:2009}.
While the MHD simulations show a very different picture for 2D and 3D
self-driven reconnection, the particles-in-cell (PIC) simulations tend to show
similar tearing patterns both in 3D and 2D. One possible explanation is related
to the limitations of present-day PIC simulations, which do not include enough
particles in the reconnection regions to develop turbulence. Therefore, in
such a "viscous" regime the Kelvin--Helmholtz instability is suppressed and
cannot operate, and the only signatures that can be seen arise from the
tearing instability. In other words, the "viscous" outflow does not feel the
additional degrees of freedom that would allow high-Reynolds-number turbulent
behavior to take place. Nevertheless, the high-resolution PIC simulations
presented by Hui Li in a reconnection review by \cite{Lazarian_etal:2019} show
the signatures of developing turbulence, e.g., the Richardson dispersion of
magnetic field lines was reported. Therefore, we expect that the results we
have obtained with MHD modelling can also be confirmed with very high
particle number PIC simulations.
There is, however, another puzzle that is presented by the comparison of the 3D
kinetic and MHD simulations. The kinetic simulations show higher reconnection
rates, and it is important to understand whether these differences persist for
reconnection at all scales or are just a transient feature of small-scale
reconnection processes. This issue was recently
addressed in \cite{Beresnyak:2018} using Hall-MHD code. The results there
testify that the reconnection rates for self-driven 3D turbulent reconnection
obtained with Hall-MHD gradually converge to the results obtained for the 3D MHD
self-driven reconnection. This is what one expects from the theory \cite[see
LV99;][]{Eyink_etal:2011, Eyink:2015}. Nevertheless, for our present study,
the convergence of the results obtained with the MHD and Hall-MHD codes
testifies that the results in the present paper will not change in the
presence of additional plasma effects.
Our confirmation of the predictions of the turbulent reconnection theory
formulated in LV99, and in subsequent theoretical studies, also has bearing on
the ongoing discussion of the so-called "reconnection-mediated turbulence"
idea presented in
a number of theoretical papers \cite[see][]{LoureiroBoldyrev:2017a,
LoureiroBoldyrev:2017b, BoldyrevLoureiro:2017, BoldyrevLoureiro:2018,
Walker_etal:2018, Mallet_etal:2017a, Mallet_etal:2017b, Comisso_etal:2018,
Vech_etal:2018a}. For sufficiently large Reynolds numbers, due to both the
process of "dynamical alignment" and the effect of magnetic fluctuations getting
more anisotropic with the decrease of the scale, the current sheets prone to
tearing instability can develop. Such changes of the turbulence at the scale
$\lambda_c$ in the vicinity of the dissipation scale do not change the nature of
turbulent cascade, which lies on scales of the inertial range $\gg \lambda_c$.
Our study therefore suggests that if reconnection takes place at small
scales, comparable to $\lambda_c$, it will also be turbulent, as demonstrated
by our simulations. However, since reconnection does not happen at the scales
of the larger eddies, it is preferable to refer to this hypothetical regime as
"tearing-mediated turbulence" instead. The objective reality is, however, that
historically the reconnection community has considered only bursts of
reconnection.
\subsection{Our advance of the field}
One of the most important results of the work on reconnection-driven turbulence
is the confirmation of the self-generation of turbulence in reconnection
events. The other is that this turbulence follows standard Kolmogorov and
Goldreich--Sridhar
statistics. However, a still open issue is related to the driving mechanism of
the observed turbulent motions. What drives the turbulence in reconnection
events? The literature has suggested, without any quantitative proof, that
tearing modes, plasmoid instabilities, and shear-induced instabilities could
mediate the energy transfer from coherent to turbulent flows.
In our numerical experiments we do not identify tearing modes, although
filamentary plasmoid-like structures are present. Their filling factor is,
however, visually recognized as very small. Sheared flows, on the other
hand, are present around and within the whole current sheet. As the field lines
reconnect, the $\vec{v} \times \vec{B} + \vec{E}$ force increases, accelerating
the plasma and creating the current sheet. This process is, in three
dimensions, patchy and bursty. Therefore, the accelerated flows are strongly
sheared. The statistical importance of these bursty flows is large, as we
compared the velocity anisotropy of reconnecting events to that of decaying
turbulence without the reversed field. Kelvin--Helmholtz instability due to the
sheared velocities in reconnecting layers has already been conjectured as a
possible origin of turbulence by \cite{Beresnyak:2017}. In the previous work,
we therefore provided real evidence for self-generated turbulence driven by
the velocity shear. Here we have performed a proper analysis of the growth
rates of such instabilities.
Another consequence of the mechanism responsible for turbulence self-generation
concerns the statistics of perturbations. Velocity shear is a global process that
occurs in regular magnetized and unmagnetized fluids. The nonlinear evolution
of related instabilities, such as Kelvin--Helmholtz instability, is known to be
one of the main contributors to the energy transfer rate between wave modes,
i.e., the energy cascade. If the energy cascade in reconnection layers is led
by similar mechanisms, it is straightforward to understand why the statistics
observed resemble those of Kolmogorov-like turbulence, and Goldreich--Sridhar
anisotropy scaling. In other words, our claim is that the turbulent onset and
cascade in reconnection are not different from those found in regular MHD and
hydrodynamic systems.
\section{Conclusions}
\label{sec:conclusions}
In this work we analyzed two MHD instabilities: tearing mode and
Kelvin--Helmholtz, which are candidates for processes responsible for turbulence
generation in spontaneous reconnection, i.e., reconnection without externally
imposed turbulent driving. The generated turbulence is due to the initially
imposed weak noise present in the vicinity of the Harris current sheet. We
analyzed factors important for growth of both instabilities, but also those
which suppress them. The analysis presented in this work has shown important
results which can be synthesized in the following:
\begin{itemize}
\item The current sheet region, in the presence of the initial noise, develops
conditions favorable for the development of MHD instabilities, such as the
tearing mode or the Kelvin--Helmholtz instability. Although the tearing
instability is natural for thin elongated current sheets, the conditions for
the Kelvin--Helmholtz instability have never been verified before in systems
with stochastic reconnection.
\item The evolution of stochastic reconnection leads to the formation of shear
regions, both magnetic and velocity, with a broad range of thicknesses and
longitudinal dimensions. We estimated that the maximum perturbation wave
number is smaller for the tearing mode than for Kelvin--Helmholtz. Since the
growth rate is proportional to $k$, it increases more quickly for the latter.
\item The tearing instability is expected to develop at earlier stages; once a
sufficient amplitude of turbulence is generated near the current sheet, it can
be suppressed due to the presence of the transverse component of the magnetic
field $B_n$. As shown in \cite{SomovVerneta:1993}, this instability is
suppressed for $\xi = B_n / B > S_\delta^{-3/4}$. We demonstrate that in our
models $\xi$ can be sufficiently large to shut the instability down in most of
the simulated volume. Still, taking into account the contribution of the
transverse component $B_n$, the instability can develop within a dynamical
time shorter than the Alfvénic time $t_A$ under favorable circumstances.
\item Due to the misalignment of the outflows from neighboring reconnection
events, they can generate enough sheared flow to induce the Kelvin--Helmholtz
instability. The Mach numbers calculated with respect to the shear velocity
$\Delta U$ satisfy the necessary conditions for the instability to develop.
Our analysis indicates the presence of sheared regions with a broad range of
amplitudes, $10^{-2} \le \Delta U \le 1$, and thicknesses, $10^{-3} \le \delta
\le 0.5$. The estimated growth rates $\omega \tau_A$, larger than $10^2$ for
$k \ge 10$, suggest that the Kelvin--Helmholtz instability grows within a
dynamical time much shorter than the Alfvénic time $t_A$ itself.
\end{itemize}
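The two quantitative criteria summarized above can be illustrated with a short numerical sketch. This is our illustration only, not part of the paper's analysis pipeline: the function names are ours, and the Kelvin--Helmholtz growth rate assumes the idealized discontinuous, equal-density, incompressible shear with a field-aligned Alfv\'en speed (Chandrasekhar 1961), not the smooth profiles used in the simulations.

```python
import math

def tearing_suppressed(b_n, b, s_delta):
    # Somov & Verneta (1993) criterion: the tearing mode is suppressed
    # when xi = B_n / B exceeds S_delta^(-3/4).
    return (b_n / b) > s_delta ** (-0.75)

def kh_growth_rate(k, delta_u, v_a=0.0):
    # Idealized Kelvin--Helmholtz growth rate for a discontinuous,
    # equal-density, incompressible shear with field-aligned Alfven
    # speed v_a: omega = k * sqrt(dU^2/4 - v_a^2); zero if stable.
    arg = 0.25 * delta_u ** 2 - v_a ** 2
    return k * math.sqrt(arg) if arg > 0.0 else 0.0

# For S_delta = 1e4, a transverse-field ratio xi = 0.1 already exceeds
# S_delta^(-3/4) = 1e-3 and shuts the tearing mode down, while a shear
# dU = 1 at k = 10 still grows at omega = 5 in these code units.
print(tearing_suppressed(0.1, 1.0, 1e4), kh_growth_rate(10.0, 1.0))
```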
\acknowledgments
G.K. acknowledges support from CNPq (no. 304891/2016-9). D.F.G. thanks the
Brazilian agencies CNPq (no. 311128/2017-3) and FAPESP (no. 2013/10559-5) for
financial support. A.L. acknowledges the NSF grant 1816234 and NASA TCAN
144AAG1967. E.T.V. acknowledges the support of the AAS. This work has made use
of the computing facilities the Academic Supercomputing Center in Krak\'ow,
Poland (Supercomputer Prometheus at ACK CYFRONET AGH) and of the Laboratory of
Astrophysics (EACH/USP, Brazil).
\section{Introduction}
\label{sec:intro}
Autonomous driving is believed \cite{geiger2012we} to have a tremendous positive impact on human society. To ensure a high degree of safety even in uncertain or dynamically changing environments, an autonomous vehicle should be able to anticipate the future trajectories of the surrounding agents (\emph{e.g.} vehicles, pedestrians, and cyclists) in advance and plan a plausible path in response to the behaviour of other agents such that the probability of collision is minimized. However, the motion trajectories of the surrounding agents are often hard to predict without explicitly knowing their intentions. In this case, we need to utilize other useful information to improve the safety and efficacy of the planned path of the ego-vehicle, including the observed current status of notable surrounding agents, possible physically acceptable routes in the current traffic scenario, and possible interaction outcomes with their likelihoods. Unfortunately, several challenges still exist that prevent us from utilizing this information to achieve reliable trajectory prediction. In this paper, five main challenges in trajectory prediction for autonomous driving are summarized and discussed as follows:
\noindent \textbf{Considering surrounding traffic environments}. In real-world traffic scenarios, traffic agents must obey traffic rules while avoiding surrounding obstacles. Such useful information can be found in the high-definition (HD) map.
\noindent \textbf{Dealing with social interactions}. To avoid collisions, the tendency to interact with surrounding traffic agents needs to be captured. However, interactions between different types of traffic agents are very different; \emph{e.g.}{} the interaction between pedestrians differs from the interaction between a car and a pedestrian.
\noindent \textbf{Handling traffic of multi-class movement}. The movement patterns of different types of traffic agents need to be considered for autonomous driving, including cars, buses, trucks, motorcycles, bicycles, and pedestrians. In this paper, these types of traffic agents are divided into three categories, namely vehicles (cars, buses, and trucks), cyclists (motorcycles and bicycles), and pedestrians.
\noindent \textbf{Predicting multi-modal trajectories with probability}. In reality, people may follow any of several plausible paths when navigating crowds and traffic. To avoid potential collisions, the most probable future movements should be considered.
\noindent \textbf{Probability awareness}. The probability value of each possible path of the surrounding obstacles is an important factor in the planning and control of the autonomous driving car.
State-of-the-art methods solve only some, but not all, of these challenges at once, as shown in Table \ref{tab:challenges}. In this paper, we present a multi-modal trajectory prediction method that tackles all of these challenges, modelling the dynamic social interactions among agents using a Graph Attention Network (GAT) \cite{velivckovic2017graph} and a semantic map. The contributions of our proposed method are summarized as follows:
\begin{itemize}
\item[$\bullet$]
The proposed method is designed to achieve multi-modal predictions while simultaneously considering traffic environments, dealing with social interactions, and predicting multi-class movement patterns with probability values.
\item[$\bullet$]
In the proposed Dynamic Graph Attention Network (DGAN), Dynamic Attention Zone and GAT are combined to model the intention and habit of human driving in heterogeneous traffic scenarios.
\item[$\bullet$]
To capture complex social interactions among road agents, we combine different types of information, including a semantic HD map, observed trajectories of road agents, and the current status of the traffic.
\end{itemize}
\section{Related Work}
\label{sec:relatedwork}
\begin{table}[t]
\footnotesize
\centering
\setlength{\abovecaptionskip}{0cm}
\setlength{\belowcaptionskip}{0cm}
\caption{Comparison of the challenges handled by different trajectory prediction methods.}
\begin{tabular}{l|c|c|c|c|c}
\toprule
\rowcolor{mylavender}
Methods & Traffic Environments & Social & Multi-class & Multi-modal & Probability \\
\midrule
Social LSTM \cite{SocialLstm} & & \checkmark & & & \\
\rowcolor{mygray}
Social GAN \cite{SocialGan} & & \checkmark & & \checkmark & \\
PECNet \cite{mangalam2020not} & & \checkmark & & \checkmark & \\
\rowcolor{mygray}
Argoverse \cite{Argoverse} & \checkmark & & & & \\
Trajectron++ \cite{salzmann2020trajectron} & & \checkmark & \checkmark & \checkmark & \\
\rowcolor{mygray}
Multipath \cite{Multipath}& \checkmark & & & \checkmark & \checkmark \\
DGAN (ours) & \checkmark & \checkmark& \checkmark& \checkmark& \checkmark \\
\bottomrule
\end{tabular}
\label{tab:challenges}
\end{table}
Here, we review recent literature on trajectory prediction with social interactions.
\textbf{RNN-related methods}. The recurrent neural network (RNN) \cite{mikolov2010recurrent} and long short-term memory (LSTM) \cite{hochreiter1997long} have proven to be very effective in time-related prediction tasks. To capture social interactions between pedestrians in crowds, Alahi \emph{et al}. \cite{SocialLstm} used a social pooling layer in LSTMs to model interactions based on the relative distances between different pedestrians. Chandra \emph{et al}. \cite{Traphic} introduced an LSTM-CNN hybrid method with a weighted horizon and local relative interactions in heterogeneous traffic. However, these previous studies focus only on predicting future trajectories for one class, \emph{e.g.}{} pedestrians or vehicles.
\textbf{GAN-related methods}. As there are multiple plausible paths that people could take in the future, several methods \citep{SocialGan, Social-bigat, Social-STGCNN} were proposed using the GAN framework to generate multiple trajectories for a given input. However, to generate multiple results for one target in practice, the generative model should be executed repeatedly with a latent vector randomly sampled from $\mathcal{N}(0, 1)$ as input. Randomly initialised inputs will generate random outcomes, which may lead to large margins between the generated results and the ground truth. To cover the most likely future paths, the number of executions has to be increased.
\textbf{Methods that encode traffic rules}. To predict trajectories that obey traffic rules, several methods used features learned from a customised semantic HD map or static-scene images to encode prior knowledge on traffic rules. Chai \emph{et al.} \cite{Multipath} proposed a multipath model to predict parametric distributions of future trajectories with an HD map. It regresses offsets for each predefined anchor and predicts a Gaussian Mixture Model (GMM) at each time step. Meanwhile, with a birds-eye-view (BEV) binary image, probabilities are predicted over the fixed set of $K$ predefined anchor trajectories. Cui \emph{et al}. introduced a multi-modal architecture using a raster image from an HD map with each agent's surrounding content encoded. In \cite{Argoverse}, lane sequences were extracted from rich maps as reference lines to predict cars' trajectories. Sadeghian \emph{et al.} \cite{Sophie} presented a GAN framework integrating features encoded from static-camera frames as traffic rule constraints using the attention mechanism. However, those works only encode car lanes, without simultaneously considering pedestrian crossings, cycle lanes, and other static obstacles labeled in the HD map.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.3cm}
\centering
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{images/simulatedmap1.jpg}
\caption{Dynamic attention zone and graph modelling for simulating the interaction pattern in real world traffic scenario.}
\label{fig:1a}
\end{minipage}
\hspace{0.1in}
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[width=0.65\textwidth]{images/simulatedmap3.jpg}
\caption{RGB image representation of semantic HD map for encoding the real world traffic environments.}
\label{fig:1b}
\end{minipage}
\end{figure}
\section{Methodology}
\label{sec:methodology}
\subsection{Problem Definition}
Given a set of $N$ agents in a scenario with their corresponding observed information over a time period $T_{ob}$ from time steps $1,...,t_{ob}$, our goal is to predict the future trajectories $\hat{\textbf{Y}} = \{\hat{Y_1},...,\hat{Y_N}\}$ of all agents involved in the scenario over a time period $T_{f}$ from time steps $t_{ob}+1,...,t_{f}$. The $N$ agents belong to multiple classes, \emph{e.g.}{} vehicle, cyclist, and pedestrian. Similarly, the ground truth of the future trajectory is defined as $\textbf{Y} = \{Y_1,...,Y_N\}$, where $Y_i =\{p^{t}_{i}=(x^t_i, y^t_i) | t\in\{t_{ob}+1,...,t_{f}\}\}$, and $i\in\{1,...,N\}$. There are three different kinds of observed information as inputs to our model, including the semantic map $map^{t_{ob}}$ of the current scenario at time stamp $t_{ob}$, the traffic state $S^{t_{ob}}_i$ of agent $i$ at the current time stamp $t_{ob}$, and the observed trajectories of all agents $\textbf{X} = \{X_{1}, ..., X_{N}\}$, where $X_{i} = \{p^{t}_{i}=(x^t_i, y^t_i) | t\in\{1,...,t_{ob}\}\}$.
\subsection{Dynamic Graph Attention Network}
\label{subsec:model}
\subsubsection{Dynamic Attention Zone and Graph Modelling}
\label{subsubsec:zone}
Inspired by real-world traffic moving patterns, a dynamic attention zone is designed to capture the natural way people attend to others in traffic. Human beings instinctively choose which surrounding moving agents should be noticed by judging their current status, such as distance, heading, velocity, and size. We therefore model each agent in the scenario as having an attention circle. Based on the intersection status of the attention circles, we can easily select the surrounding agents with which to model social interactions. The radius $r$ of the circle is defined as follows:
\begin{equation}
r^{t}_{i} = velocity^{t}_{i} * T_{f} + \lambda * length_{i},
\end{equation}
\noindent where $T_{f}$ represents the period of future time for prediction, and $\lambda$ is a constant value. The $velocity^{t}_{i}$ and $length_{i}$ represent the speed at time $t$ and the length of agent $i$, respectively. The attention zone at time $t$ covers all potential future positions over a time period $T_{f}$, based on the observed speed at the current time step and the length of the agent. If the agent accelerates or decelerates, the attention zone is enlarged or reduced accordingly to predict the future movement at the next time step.
As illustrated in Figure \ref{fig:1a}.(a), based on the current position and radius of each agent, the attention zones of all agents are first drawn. Then, the graph of the current scenario at time step $t$ is generated based on the intersection relations of the attention zones.
We define $G$ as $(V,E)$, in which $V = \{v_i| i\in\{1,..,N\}\}$ and $E = \{e_{ij}| \forall i,j\in\{1,..,N\}\}$, where $V$ and $E$ denote the vertexes and edges of the graph $G$, respectively. As shown in Figure \ref{fig:1a}.(b), the graph represents the relations in the whole scenario, while in Figure \ref{fig:1a}.(c) we only focus on the partial graph related to the target in red. The value of $e_{ij}$ will be calculated and updated in the GAT model in Section \ref{subsubsec:GAT}. Each node in $V$ denotes feature embeddings calculated from three different sources: the semantic map, the observed trajectory, and the traffic state.
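A minimal sketch of this construction follows (our illustration, not the paper's implementation; the $\lambda$ value and the data layout are assumed). It computes the attention radius defined above and adds an edge whenever two attention circles intersect:

```python
import math

def attention_radius(velocity, length, t_future, lam=0.5):
    # Radius definition above: r = velocity * T_f + lambda * length
    return velocity * t_future + lam * length

def build_graph(agents, t_future, lam=0.5):
    # agents: list of dicts with keys "pos" (x, y), "velocity", "length".
    # An undirected edge (i, j) exists whenever the two attention
    # circles intersect, i.e. the centre distance <= r_i + r_j.
    radii = [attention_radius(a["velocity"], a["length"], t_future, lam)
             for a in agents]
    edges = set()
    for i in range(len(agents)):
        for j in range(i + 1, len(agents)):
            dx = agents[i]["pos"][0] - agents[j]["pos"][0]
            dy = agents[i]["pos"][1] - agents[j]["pos"][1]
            if math.hypot(dx, dy) <= radii[i] + radii[j]:
                edges.add((i, j))
    return radii, edges
```

The resulting edge set is exactly the adjacency used to restrict the attention computation to intersecting neighbours.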
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.3cm}
\begin{center}
\fbox{\includegraphics[width=0.98\textwidth]{images/simulatedmap3-2.jpg}}
\end{center}
\caption{Dynamic Graph Attention Network.}
\label{fig:network}
\end{figure*}
\subsubsection{Feature Extraction}
\label{subsubsec:feature}
To make the best use of the available information, three types of features are jointly extracted from the semantic map, observed history trajectories, and current moving status.
\textbf{Semantic Map}. In autonomous driving applications, the semantic HD map contains valuable traffic rule information. We create an RGB image representation to encode the traffic rule information contained in the semantic HD map. In the RGB image representation of the semantic HD map (Figure \ref{fig:1b}), pink regions represent commonly seen immovable road obstacles, \emph{e.g.}{} median strips or barriers. Yellow lines represent road boundaries. Grey and white regions represent pedestrian crossings and bicycle lanes, respectively. The green lines are the centre lines of lanes. Blue boxes denote movable obstacles (\emph{i.e.} they can move even though they may be stationary) in the current traffic scenario. Dotted white lines and solid white lines are the traffic lane lines and edge lines, respectively. The middle-layer output of the CNN is extracted as the visual feature $V^{t_{ob}}_{map}$ to represent the traffic rule information in $map^{t_{ob}}$:
\begin{equation}
V^{t_{ob}}_{map} = CNN(map^{t_{ob}};W_{cnn}).
\end{equation}
\textbf{Observed Trajectory}. An LSTM is used to extract joint features from the observed trajectories of all involved agents. Similar to \cite{SocialGan}, we first embed the location using a single-layer multilayer perceptron (MLP) to get a fixed-length vector $e_i^t$ as the input of the LSTM cell:
\begin{equation}
\begin{split}
e_i^t &= \phi_{ot}(X_i^t;W_{e}),\\
V^{t}_{oti} &= LSTM(V^{t-1}_{oti}, e_i^t; W_{ot}),
\end{split}
\end{equation}
\noindent where $\phi_{ot}$ is an embedding function with a rectified linear unit (ReLU) nonlinearity, and $W_{e}$ is the embedding weight. The LSTM weight $W_{ot}$ is shared among all agents.
\textbf{Traffic State}. The traffic state $S$ is very important for capturing extra information to predict the future trajectories, where $S_{i}^{t} = (velocity_{i}^{t}, acceleration_{i}^{t}, heading_{i}^{t}, width_{i}, length_{i}, c_{i})$ represents the velocity, acceleration, heading, width, length, and class of agent $i$, respectively. A simple MLP is used for encoding to get the embedding feature $V^{t}_{ts}$ of the traffic state.
\begin{equation}
V^{t}_{tsi} = \phi_{ts}(S_{i}^{t}; W_{ts}),
\end{equation}
\noindent where $W_{ts}$ is the embedding weight of the MLP.
The final embedding feature is defined as $V^{t_{ob}}_{i}$, which concatenates the three types of embedding calculated from the semantic map, observed trajectory, and agent status at the current time step:
\begin{equation}
V^{t_{ob}}_{i} = concatenate(V^{t_{ob}}_{map}, V^{t_{ob}}_{oti}, V^{t_{ob}}_{tsi}).
\end{equation}
\subsubsection{Graph Attention Network}
\label{subsubsec:GAT}
The attention mechanism has been found to be extremely powerful for drawing global dependencies between inputs and outputs \cite{vaswani2017attention}. Among attention-related methods, the GAT \cite{velivckovic2017graph} naturally works with our proposed dynamic attention zone and graph modelling described in Section \ref{subsubsec:zone}. In the graph, the vertex $V_{i}$ represents the embedding feature of agent $i$, and $e_{ij}$ represents the relative weight between agent $i$ and its neighbour $j$ according to the graph generated from the dynamic attention zone. We use multiple stacked graph attention layers, and for each layer $l$, $W_{gat}$ is updated during training.
\begin{equation}
\begin{split}
e_{ij}&=a(W_{gat} V^{t_{ob}}_{i},W_{gat}V^{t_{ob}}_{j}),\\
a_{ij}&=softmax(e_{ij}),\\
P^{l}(i)&=\sum_{j\in N_{i}}a_{ij}W_{gat}V^{t_{ob}}_{j},
\end{split}
\end{equation}
\noindent where $e_{ij}$ indicates the importance of node $j$'s feature to node $i$, $a$ is the shared attention mechanism described in \cite{velivckovic2017graph}, and $P^l$ is the output of the $l$th layer, obtained by summing the corresponding weighted features over the set $N_{i}$ of neighbours of agent $i$. We define $P^L$, the output of the last GAT layer $L$, as the final feature.
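The graph attention update above can be sketched, dependency-free, as follows (our illustration only; for brevity it omits the LeakyReLU on $e_{ij}$ and the output nonlinearity used in the full GAT, and all names are our own):

```python
import math

def gat_layer(features, neighbors, W, a_vec):
    # features: list of input vectors h_i; W: weight matrix (list of rows);
    # a_vec: attention vector applied to the concatenation [W h_i || W h_j].
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    Wh = [matvec(W, h) for h in features]
    out = []
    for i, nbrs in enumerate(neighbors):
        # e_ij = a([W h_i || W h_j]), restricted to the neighbors of i
        e = [sum(av * x for av, x in zip(a_vec, Wh[i] + Wh[j])) for j in nbrs]
        m = max(e)
        w = [math.exp(v - m) for v in e]
        s = sum(w)
        alpha = [v / s for v in w]          # a_ij = softmax over neighbors
        # P^l(i) = sum_j a_ij * (W h_j)
        out.append([sum(alpha[k] * Wh[j][d] for k, j in enumerate(nbrs))
                    for d in range(len(Wh[i]))])
    return out
```

With uniform attention (zero $a$ vector), each output reduces to the mean of the transformed neighbour features, which makes the normalization easy to check.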
Finally, the feature $P^L$ and the original feature $V^{t_{ob}}_{i}$ are concatenated as the input of the final MLP layers $\phi_{f}$ to predict the future trajectories. We follow the idea of hierarchical classification \cite{redmon2017yolo9000} to calculate the probabilities of belonging to class $c$ and anchor $k_c$.
\begin{equation}
(prob(c)_{i}, prob(k_c|c)_{i}), \mathbf{\mu}_{ik_{c}} = \phi_{f}(concatenate(P^L, V^{t_{ob}}_{i}); W_{ac}, W_{or}),
\end{equation}
\noindent where $W_{ac}$ and $W_{or}$ are the weights of the MLPs for the two parallel heads, anchor classification and offset regression, respectively; $prob(c)_{i}$ and $prob(k_c|c)_{i}$ are the hierarchical probabilities for agent $i$ being classified into class $c$ and anchor $k_c$; and $\mathbf{\mu}_{ik_{c}}$ is the predicted future trajectory offset based on the $k_c$-th anchor for the $i$-th agent.
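The hierarchical factorization $prob(c)_{i} \cdot prob(k_c|c)_{i}$ can be sketched as follows (our illustration of the YOLO9000-style decomposition; logits and layout are assumed, not taken from the paper):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def hierarchical_probs(class_logits, anchor_logits_per_class):
    # Joint probability of (class c, anchor k_c): p(c) * p(k_c | c).
    pc = softmax(class_logits)
    return [[pc[c] * pk for pk in softmax(logits)]
            for c, logits in enumerate(anchor_logits_per_class)]
```

By construction, the joint probabilities over all (class, anchor) pairs sum to one, which is what makes the per-anchor outputs directly usable as mode probabilities.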
\subsection{Multi-modal Trajectory Prediction}
\label{subsec:multimodal}
The proposed method is capable of predicting multiple possible future trajectories with corresponding probabilities using pre-defined anchor trajectories. In this section, we present the details of multi-modal trajectory prediction.
For the anchor and loss design, we follow the methods described in \cite{Multipath} and \cite{Uber-multimodal}, respectively. First, all ground-truth future trajectories in the training dataset are normalized. Then, an unsupervised clustering algorithm \cite{Multipath}, such as k-means or uniform sampling depending on the dataset, is applied to obtain a fixed number of anchors using the squared distance $dist(Y_{i}, Y_{j})$ between future trajectories.
\begin{equation}
dist(Y_{i}, Y_{j}) = \sum_{t=t_{ob}+1}^{t_f}||M_{i}p^t_i - M_{j}p^t_j||^2_2,
\end{equation}
\noindent where $M_i$ and $M_j$ are transform matrices that transform the trajectories into the agent-centric coordinate frame with the same orientation at time step $t_{ob}$.
However, those unsupervised classification algorithms often generate redundant anchors for a heavily skewed distribution. In practice, we therefore manually select anchors based on the normalized ground-truth trajectories. For each class $c$, we extract $K_c$ anchors; in total, we have $K$ anchors for anchor classification and the corresponding offset regression.
The final loss consists of anchor classification loss and trajectory offset loss:
\begin{equation}
\mathcal{L}_{\theta} = \sum_{i=1}^{N}[\mathcal{L}^{class}_{i} + \alpha \sum_{c=1}^{C}\sum_{k_{c}=1}^{K_{c}}I_{c=c^{*}}I_{k_{c}=k_{c}^{*}} L(\hat{Y}_{ik_{c}}, Y_{i})].
\end{equation}
$L(\hat{Y}_{ik_{c}}, Y_{i})$ represents the single-mode loss $L$ of the $i$th agent's $k_{c}$th anchor, where:
\begin{equation}
L(\hat{Y}_{ik_{c}}, Y_{i}) = \frac{1}{T_{f}} \sum_{t=t_{ob}+1}^{t_{f}}\| a^{t}_{ik_{c}} + \mu_{ik_{c}}^{t} - M_{i}p^{t}_{i} \|_{2},
\end{equation}
\noindent where $a^{t}_{ik_{c}}$, $\mu_{ik_{c}}^{t}$, and $p^{t}_{i}$ are points at each time step $t$ of the $k_{c}$th anchor, corresponding offset based on the $k_{c}$th anchor, and $Y_{i}$, respectively.
$\mathcal{L}^{class}_{i}$ is the hierarchical classification loss \cite{redmon2017yolo9000}:
\begin{equation}
\mathcal{L}^{class}_{i} = -\sum_{c=1}^{C}\sum_{k_{c}=1}^{K_{c}}I_{c=c^{*}}I_{k_{c}=k_{c}^{*}}\log (prob(c)_{i}\,prob(k_{c}|c)_{i}),
\end{equation}
\noindent where $I$ is the indicator function; $c^{*}$ is the ground-truth class of agent $i$; and $k_{c}^{*}$ is the index of the anchor trajectory closest to the ground-truth trajectory according to the squared distance function $dist(\hat{Y}_{ik_{c}}, Y_{i})$:
\begin{equation}
k_{c}^* = \mathop{\arg\min}_{k_{c}\in\{1,...,K_{c}\}} dist(\hat{Y}_{ik_{c}}, Y_{i}).
\end{equation}
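The anchor assignment and the single-mode regression loss above can be sketched as follows (an illustrative NumPy snippet, assuming trajectories are already expressed in the agent-centric frame):

```python
import numpy as np

def closest_anchor(anchors, gt):
    """k_c^*: index of the anchor minimizing the squared distance
    to the ground-truth future trajectory."""
    d = [np.sum((a - gt) ** 2) for a in anchors]
    return int(np.argmin(d))

def single_mode_loss(anchor, offset, gt):
    """Mean L2 error of (anchor + predicted offset) against the
    ground truth, averaged over the future time steps."""
    return float(np.mean(np.linalg.norm(anchor + offset - gt, axis=1)))
```

During training, only the anchor selected by `closest_anchor` receives the regression loss; a perfectly predicted offset drives that loss to zero.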
\section{Experiments}
\label{sec:experiments}
In this section, we evaluate the proposed methods on three datasets: our internal proprietary logistic delivery dataset and two publicly available benchmarks, the Stanford drone dataset \cite{stanforddrone} and the ETH/UCY datasets. All three datasets include trajectories of multiple agents in social interaction scenarios and bird's-eye-view RGB frames used for semantic maps.
The commonly used metrics \cite{SocialLstm, SocialGan, Traphic, Multipath}, including Average Displacement Error (ADE), Final Displacement Error (FDE), and Minimum Average Displacement Error (minADE$_{N}$), are used to assess the performance of the proposed trajectory prediction method. minADE$_{N}$ \cite{Multipath}, the displacement error against the closest trajectory in a set of $N$ predictions, is computed to evaluate the multi-modal property of the method.
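For reference, the three metrics can be computed as in the following minimal NumPy sketch, where `preds` is a set of $N$ candidate trajectories for one agent:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error over all predicted time steps."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error at the last predicted time step."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def min_ade(preds, gt):
    """minADE_N: ADE of the closest of the N predicted trajectories."""
    return min(ade(p, gt) for p in preds)
```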
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.5cm}
\begin{center}
\fbox{\includegraphics[width=0.98\textwidth]{images/trajectory-results.jpg}}
\end{center}
\caption{Logistic delivery dataset examples and results using our proposed method DGAN. Left: logistic delivery dataset example, consisting of three-dimensional point clouds with manually labeled information, the front camera image, and the semantic map. Middle: observed trajectories in dashed yellow and ground-truth future trajectories in red. Right: prediction results of our DGAN method, showing the two most likely future trajectories with the corresponding probabilities encoded by the color map on the right. The green box on the semantic map represents our autonomous driving vehicle, and only agents around it are evaluated using the proposed method.}
\label{fig:results}
\end{figure*}
\subsection{Implementation Details}
The proposed learning framework is implemented using PyTorch
Library \cite{paszke2017automatic}. For the base CNN model, we follow a setting similar to the Multipath \cite{Multipath} method. First, the base CNN model is a ResNet-50 network with a depth multiplier of 25\%, followed by a depth-to-space operation to restore the spatial resolution of the feature map to 200$\times$200. We then extract patches of size 11$\times$11 centered on agent locations in this feature map, followed by a single-layer MLP, as the representation of the traffic rules. The 640-dimensional feature embedding computed by the feature extraction block is the concatenation of 256-, 256-, and 128-dimensional embeddings from the semantic map, observed trajectory, and current status, respectively. For the dynamic attention zone, we set the parameter $\lambda=0.5$. We train one model per class for the baseline methods, and only one model for all classes with our method.
\subsection{Logistic Delivery Dataset}
Our autonomous driving dataset for logistic delivery is collected by a vehicle equipped with multiple RGB cameras, Lidar, and radar in several regions of Beijing. We benchmark the proposed method against several baselines, including a linear model, a basic LSTM, Social LSTM (S-LSTM) \cite{SocialLstm}, Social GAN (S-GAN) \cite{SocialGan}, and Multipath \cite{Multipath}. For this dataset, we sample time steps every 0.2 s (5 Hz) from the original data and use 2 seconds of history (10 frames) to predict 3 seconds (15 frames) into the future. The dataset contains around 0.8 million agents. We extract approximately 2 million trajectories and use 90\% for training and the rest for testing.
\begin{table}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{0.cm}
\footnotesize
\centering
\caption{Comparison of our proposed method (DGAN) and baselines on our logistic delivery dataset. The suffix kS denotes a method evaluated with $K=k$ anchors and our semantic map.}
\begin{tabular}{l|c|c|c|c|c|c}
\toprule
\rowcolor{mylavender}
Methods & ADEv & FDEv & ADEc & FDEc & ADEp & FDEp \\
linear & 3.8809 & 6.7718 & 3.7221 & 6.0352 & 1.5334 & 3.2096 \\
\rowcolor{mygray}
LSTM &3.2296 & 5.1659 & 3.0519 & 4.8564 & 1.3536 & 2.7642 \\
S-LSTM \cite{SocialLstm} & 2.9196 & 5.0659 & 2.9519 & 4.7145 & 1.2561 & 2.6018 \\
\rowcolor{mygray}
S-GAN 20VP \cite{SocialGan} & 2.7276 & 4.5493 & 2.7567 & 4.1431 & 1.0305 & 2.2416\\
Multipath 20S \cite{Multipath} &1.9366 & 3.2300 & 1.8573 & 2.9416 & 0.9416 & 1.8603 \\
\rowcolor{mygray}
DGAN 20S (ours) &\textbf{1.8398} & \textbf{3.0685} & \textbf{1.7593}& \textbf{2.7945} & \textbf{0.9312} & \textbf{1.8314} \\
\rowcolor{mylavender}
Methods & minADE$_5$v & minFDE$_5$v & minADE$_5$c & minFDE$_5$c & minADE$_5$p & minFDE$_5$p\\
S-GAN 20VP \cite{SocialGan} & 1.6840 & 2.8835 & 1.6511 & 2.6134 & 0.6645 & 1.2848 \\
\rowcolor{mygray}
Multipath 20S \cite{Multipath} & 1.4595 &2.5293& 1.1391 & 2.2136 & 0.5534 & 1.1590 \\
DGAN 20 (ours) & 1.4697 & 2.5531 & 1.1415 & 2.1918 & 0.5530 & 1.1153\\
\rowcolor{mygray}
DGAN 20S (ours) &\textbf{1.4323} & \textbf{2.3946} & \textbf{1.1309}& \textbf{2.1636} & \textbf{0.5521} & \textbf{1.1134} \\
\bottomrule
\end{tabular}
\label{tab:resultADD}
\end{table}
We compare our method on ADE, FDE, and minADE$_{5}$ against different baselines and other state-of-the-art methods. We define ADEv, FDEv, ADEc, FDEc, ADEp, and FDEp as the ADE and FDE of vehicles, cyclists, and pedestrians, respectively. The experimental results for the logistic delivery dataset are shown in Table \ref{tab:resultADD}. As expected, the linear method performs worst, as it only predicts straight paths. Our method DGAN with setting 20S ($K_c=20$ with the semantic map) performs best among the compared methods.
Figure \ref{fig:results} illustrates the original labeled dataset, ground-truth trajectories, and the top two generated results with probabilities using our method. We compare different settings of our method, including using or not using the semantic map (Table \ref{tab:resultADD}) and different numbers of anchors $K_c$ (Figure \ref{fig:anchorAnalysis}). The proposed method with the semantic map performs significantly better than without it for the vehicle and cyclist classes. However, due to the unpredictability of pedestrian movements and the absence of traffic marks for pedestrians in the HD map, the influence of the semantic map is small for the pedestrian class. The results demonstrate that our method can handle complex situations at traffic intersections. They also indicate that the predicted trajectory with the maximum probability is more likely to follow the lane center lines, guided by the semantic map.
\subsection{Stanford Drone Dataset}
The Stanford drone dataset \cite{stanforddrone} is collected by drones in college campus scenarios for trajectory prediction applications, consisting of bird's-eye-view videos and labels of multi-class agents, including pedestrians, cyclists, and vehicles. The RGB camera frames encode traffic rule information in a semantic HD map and can serve as input to our method without any modification. For the Stanford drone dataset, we use the direction calculated from positions at the latest two observed time steps as the heading information, and the length of the labeled bounding box as the length of the agent. In addition to pedestrians, the largest category in this dataset, we treat cyclists and skateboarders as one class, and the rest (carts, cars, and buses) as another class. We sample the dataset every 0.4 s (2.5 Hz) and use five frames of observations to predict the trajectory over the next 12 frames. We evaluate ADE, FDE, and minADE$_5$ for all agents in the test dataset against several state-of-the-art methods; results are shown in Table \ref{tab:resultSDD}.
\makeatletter\def\@captype{figure}\makeatother
\begin{minipage}[ht]{0.45\linewidth}
\setlength{\abovecaptionskip}{-0.1cm}
\centering
\includegraphics[width=0.98\textwidth]{images/anchorAnalysis.jpg}
\caption{The impact of the number of anchors K$_c$ on the final ADE result for each class.}
\label{fig:anchorAnalysis}
\end{minipage}
\hspace{0.05in}
\makeatletter\def\@captype{table}\makeatother
\begin{minipage}[ht]{0.5\linewidth}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{0.1cm}
\footnotesize
\centering
\caption{Comparison of our proposed method (DGAN) and other state-of-the-art methods on the Stanford Drone Dataset. Following a setting similar to the Multipath \cite{Multipath} method, distance metrics are reported in pixels at the original resolution.}
\begin{tabular}{l|c|c|c}
\toprule
\rowcolor{mylavender}
Methods & ADE & FDE & minADE$_5$\\
Linear & 26.14 & 53.24 & - \\
\rowcolor{mygray}
CVAE \cite{lee2017desire} & 30.91 & 61.40 & 26.29 \\
DESIRE-SI-IT0 \cite{lee2017desire} & 36.48 & 61.35 & 30.78\\
\rowcolor{mygray}
Social Forces \cite{yamaguchi2011you} & 36.48 & 58.14 & - \\
S-LSTM \cite{SocialLstm} & 31.19 & 56.97 & -\\
\rowcolor{mygray}
Multipath $\mu, \Sigma$ \cite{Multipath} & 28.32 & 58.38 & 17.51 \\
CAR-Net \cite{sadeghian2018car} & 25.72 & 51.80 & - \\
\rowcolor{mygray}
DGAN (ours) & \textbf{24.53} & \textbf{50.78} & \textbf{17.28} \\
\bottomrule
\end{tabular}
\label{tab:resultSDD}
\end{minipage}
\subsection{ETH and UCY Datasets}
The ETH \cite{pellegrini2009you} and UCY \cite{lerner2007crowds} datasets, which contain pedestrian trajectories only, include five scenes in total: ETH, HOTEL, ZARA1, ZARA2, and UNIV. The trajectories are sampled every 0.4 seconds. Eight frames (3.2 seconds) are observed, and the model predicts the trajectories for the next 12 frames (4.8 seconds). We follow a setting similar to other relevant works \cite{SocialLstm, SocialGan} for evaluating these two datasets. Results are shown in Table \ref{tab:eth}.
\begin{table}[ht]
\setlength{\abovecaptionskip}{-0.1cm}
\setlength{\belowcaptionskip}{0.1cm}
\centering
\caption{ADE/FDE metrics for several methods on the ETH and UCY datasets.}
\begin{tabular}{l|c|c|c|c|c|c}
\toprule
\rowcolor{mylavender}
Methods & ETH & HOTEL & UNIV & ZARA1 & ZARA2 & AVG \\
Linear & 1.33/2.94 & 0.39/0.72 & 0.82/1.59 & 0.62/1.21 & 0.77/1.48 & 0.79/1.59 \\
\rowcolor{mygray}
LSTM & 1.09/2.41 & 0.86/1.91 & 0.61/1.31 & 0.41/0.88 & 0.52/1.11 & 0.72/1.52 \\
S-LSTM \cite{SocialLstm} & 1.09/2.35 &0.79/1.76 & 0.67/1.40 & 0.47/1.00 & 0.56/1.17 & 0.72/1.54\\
\rowcolor{mygray}
S-GAN \cite{SocialGan} & 0.81/1.52 & 0.72/1.61 & 0.60/\textbf{1.26}& 0.34/0.69 & 0.42/0.84 & 0.58/\textbf{1.18} \\
S-GAN-P \cite{SocialGan} & 0.87/1.62 & \textbf{0.67}/\textbf{1.37} & 0.76/1.52 & 0.35/0.68 & 0.42/0.84 & 0.61/1.21 \\
\rowcolor{mygray}
Ours &\textbf{0.78}/\textbf{1.50} & 0.80/1.71 & \textbf{0.59}/\textbf{1.26} & \textbf{0.31}/\textbf{0.64} & \textbf{0.39}/\textbf{0.79} &\textbf{0.57}/\textbf{1.18} \\
\bottomrule
\end{tabular}
\label{tab:eth}
\end{table}
\vspace{-0.8cm}
\section{Conclusion}
\label{conclusion}
We have introduced a dynamic social interaction-aware model that predicts the future trajectories of agents in real-world settings while addressing several challenges simultaneously. In the proposed framework, we use an encoded semantic map, the observed history trajectories, and the current status of agents as the input of the GAT. To generate the graph at the current time step, we use the dynamic attention zone to simulate the intuitive ability of people to navigate roads in real-world traffic. The proposed method is evaluated on different datasets, including our internal logistic delivery dataset and two publicly available datasets. Through experiments on these real-world datasets, we have shown the benefits of the proposed method over previous methods and its potential for trajectory prediction in real-world settings.
\subsection{References}
\small
\label{ref}
\bibliographystyle{plain}
\section{Introduction}
Recently, much attention has been focused on the new idea suggested by
Verlinde~\cite{verlinde} in which gravity can be explained as an
emergent phenomenon originating from the statistical properties of the
unknown microstructure of spacetime. This idea rests on two key
ingredients: the holographic principle and the equipartition rule of
energy. With the help of these principles, Newton's law of gravity was
derived by interpreting it as an entropic force, i.e., the force on a
test particle at some point was defined as the product of the entropy
gradient and the temperature at that point, and its relativistic
generalization leads to the Einstein equations.
This entropic formulation of gravity has been used to study
thermodynamics at the apparent horizon of the
Friedmann-Robertson-Walker universe~\cite{Shu:2010nv}, the Friedmann
equations~\cite{Cai:2010hk,Pad:10013380},
entropic corrections to Newtonian gravity~\cite{Smolin:2010kk, PNicolini, LModesto},
and holographic dark
energy~\cite{Li:2010cj, efs, danielson, YFCai}. There have also been many works
on the entropic force in cosmological models~\cite{Gao:2010fw, zgz,
wang:y, Wei:2010ww} and in black hole
backgrounds~\cite{Myung:2010jv,mk,Liu:2010na, Cai:2010sz, Tian:2010uy,
Kuang:2010gs, EEKL, RAKonoplya, FCaravelli}.
In a spacetime admitting a timelike Killing vector one can define a
gravitational potential, and the holographic screen is given by an
equipotential surface. In general, the holographic screen can have
multiple disconnected parts depending on the matter distribution.
The temperature on the holographic screen is given by the Unruh-Verlinde
temperature associated with the proper acceleration of a particle near
the screen. This prescription works well for spacetimes with a single
holographic screen; however, no work on multiple holographic screens
has been reported so far.
On the other hand, the observational evidence for late-time
cosmological acceleration~\cite{Perl:1998, Reiss:1998}
gave much impetus to studying de Sitter space with black holes.
Since a Schwarzschild-de Sitter black hole is asymptotically de Sitter
space, it has a cosmological event horizon in addition to the black hole
horizon, and these horizons can form holographic screens. In fact, the
potential of Schwarzschild-de Sitter space has two equipotential
surfaces for a given potential value, and the two horizons correspond
to equipotential surfaces. In this paper we investigate the entropic
formulation in the background geometry of Schwarzschild-de Sitter
space, which provides a model for multiple holographic screens.
In the Verlinde's formalism, two equipotential holographic screens in
the Schwarzschild-de Sitter space have different temperatures.
Thus the whole system cannot be treated as a thermodynamical system in
equilibrium. In Ref.~\cite{Bousso:1996au}, Bousso and Hawking set up
a reference point in the radial direction at which the force
vanishes. They pointed out that this reference point can play the
role of a point at infinity in an asymptotically flat space. Besides,
the temperature at this reference point is zero, so no thermal
exchange can occur across this point. This makes the reference point
behave like a thermally insulating wall. Therefore, we can regard
Schwarzschild-de Sitter space as two thermally independent systems:
the inner system on the black hole side and the outer system on the
cosmological horizon side. Gibbons and Hawking also considered a similar
construction in a slightly different context~\cite{GH_desitter}: they
constructed two separated thermal equilibrium systems by introducing a
perfectly reflecting wall in Schwarzschild-de Sitter space for the
calculation of the Hawking temperatures of the black hole and
cosmological horizons.
Based on the above consideration, we apply Verlinde's formalism to
each system. In the Schwarzschild-de Sitter case we choose
holographic screens that are equipotential surfaces with spherical
symmetry. With this choice of holographic screen we show that the
thermodynamic relation $E = 2TS$ holds for each holographic
screen, where $E$, $T$, and $S$ are the quasilocal energy given by the
Komar mass, the temperature, and the entropy, respectively. We then check
this result against the known cases: i) when the holographic screens lie
at the black hole and cosmological horizons, and ii) in the Nariai limit.
In the following section, we briefly review Verlinde's formalism
of the entropic approach to gravitational interaction. In
section~\ref{sec:sch.dS}, we apply Verlinde's formalism to a
Schwarzschild-de Sitter space which provides a prototype of multiple
holographic screens. Finally, we summarize our results. In this
paper, we adopt the convention $c=k_B=\hbar =1$.
\section{Verlinde's entropic formalism}
\label{sec:setup}
According to Verlinde's formalism~\cite{verlinde}, gravity is an
entropic force emerging from a coarse-graining process of information
for a given energy distribution. In this process, information is
stored on holographic screens. In the nonrelativistic case, the
holographic screens correspond to Newtonian equipotential surfaces, and
the holographic direction is given by the gradient of the potential.
In a curved spacetime with a timelike Killing vector $\xi^\mu$, the
generalized Newton's potential is given by
\begin{equation}
\label{potential}
\phi = \frac12 \ln (-\xi^\mu \xi_\mu ).
\end{equation}
This potential can be used to define a foliation of space. For a
particle with a four velocity $u^\mu$, its proper acceleration is
given by $a^\mu = u^\nu \nabla_\nu u^\mu$. In terms of the potential
$\phi$ and the Killing vector $\xi^\mu$, the velocity and the
acceleration can be written as
\begin{align}
u^\mu &= e^{-\phi} \xi^\mu, \label{u:killing} \\
a^\mu &= - \nabla^\mu \phi, \label{a:killing}
\end{align}
where the Killing equation has been used to derive
Eq.~(\ref{a:killing}). In Eq.~(\ref{a:killing}), the acceleration is
normal to holographic screen. The Unruh-Verlinde temperature on
the screen is defined as
\begin{equation}
\label{T:def}
T = \frac{1}{2\pi} e^\phi n^\mu \nabla_\mu \phi,
\end{equation}
where $n^\mu$ is the unit outward pointing vector normal to the screen
and to the Killing vector.
The ``outward'' indicates that the potential increases along $n^\mu$,
\textit{i.e.,} the normal vector can be written as
\begin{equation}
\label{def:n}
n_\mu = \frac{\nabla_\mu \phi}{\sqrt{\nabla_\nu \phi \nabla^\nu \phi}}.
\end{equation}
In Eq.~(\ref{T:def}), a redshift factor $e^\phi$ is inserted because
the temperature is measured with respect to the reference point. For
asymptotically flat space this reference point corresponds to spatial
infinity. In the Schwarzschild-de Sitter case, we choose this
reference point as the Bousso-Hawking reference
point~\cite{Bousso:1996au} to be explained in the next section.
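As a sanity check of Eq.~(\ref{T:def}), the following numerical sketch (illustrative Python in geometric units, with the hypothetical choice $G=M=1$) evaluates the Unruh-Verlinde temperature for the asymptotically flat Schwarzschild case, where $\gamma=1$; near the horizon $r=2GM$ it reproduces the Hawking temperature $1/(8\pi GM)$.

```python
import numpy as np

G, M = 1.0, 1.0                      # geometric units, c = hbar = k_B = 1

def f(r):                            # Schwarzschild lapse; gamma = 1 here
    return 1.0 - 2.0 * G * M / r

def unruh_verlinde_T(r, eps=1e-7):
    """T = e^phi n^mu grad_mu phi / (2 pi), evaluated numerically;
    analytically this equals f'(r) / (4 pi)."""
    phi = lambda x: 0.5 * np.log(f(x))
    dphi_dr = (phi(r + eps) - phi(r - eps)) / (2 * eps)
    n_dot_grad = np.sqrt(f(r)) * dphi_dr        # n^r = sqrt(f)
    return np.exp(phi(r)) * n_dot_grad / (2 * np.pi)

T_hawking = 1.0 / (8.0 * np.pi * G * M)
# unruh_verlinde_T(r) approaches T_hawking as the screen nears r = 2GM.
```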
We denote the number of bits on the holographic screen $\mathcal{S}$
by $N$ which is assumed to be proportional to the area of the
screen~\cite{verlinde},
\begin{equation}
\label{N:assume}
N = \frac{A}{G}.
\end{equation}
Applying the equipartition rule of the energy, each bit of holographic
screen contributes an energy $T/2$ to the system, and the total energy
on the holographic screen can be written as
\begin{equation}
\label{E:equipartition}
E = \frac12 \oint_{\mathcal{S}} T dN.
\end{equation}
Note that in the above expression the temperature $T$ on the screen is not
constant in general. Substituting Eqs.~(\ref{T:def})
and (\ref{N:assume}) into Eq.~(\ref{E:equipartition}), the energy
associated with the holographic screen can
be rewritten as
\begin{equation}
\label{E:pot.area}
E = \frac{1}{4\pi G} \oint_{\mathcal{S}} n^\mu \nabla_\mu e^\phi dA.
\end{equation}
This expression is the conserved Komar mass associated with
timelike Killing vector $\xi^\mu$.
\section{Schwarzschild-de Sitter black hole}
\label{sec:sch.dS}
Now, we consider a spherically symmetric Schwarzschild-de Sitter black
hole as a model of multiple holographic screens. The Schwarzschild-de
Sitter space is described locally by the line element,
\begin{equation}
\label{metric:ds}
ds^2 =
-f(r) dt^2 + \frac{dr^2}{f(r)} + r^2 (d\theta^2 + \sin^2
\theta \, d\varphi^2),
\end{equation}
with
\begin{equation}
\label{f}
f(r) = 1 - \frac{2GM}{r} - \frac13\Lambda r^2,
\end{equation}
where $G$ and $M$ are the gravitational Newton's constant and the
mass parameter, respectively, and the cosmological constant will be
taken as $\Lambda = 3/\ell^2$. When $0 < M < M_{\mathrm{max}}\equiv
\ell/(3\sqrt{3}\, G)$, a static region exists between two horizons with radii
$r_b$ and $r_c$, the black hole and cosmological event horizons. For
$M = M_{\mathrm{max}}$ the two horizons coincide; this is the Nariai
limit. In the Nariai limit, there exists no timelike Killing vector.
\begin{figure}[pbt]
\centering
\includegraphics[width=0.5\textwidth]{frEP}
\caption{The Schwarzschild-de Sitter space has the event horizon of
the black hole at $r=r_b$ and the cosmological event horizon at
$r=r_c$. At $r=r_g$, the proper acceleration vanishes. For a given
potential value, there exist two screens at $r=r_1$ and $r=r_2$,
and each screen has different temperature. Note that the unit
normal vectors on both screens direct to the surface $r=r_g$.}
\label{fig:screen}
\end{figure}
In order to get the potential of the Schwarzschild-de Sitter spacetime,
we first consider a timelike Killing vector of Eq.~(\ref{metric:ds}),
given by
\begin{equation}
\label{xi}
\xi^\mu = \gamma\left( \partial/\partial t \right)^\mu,
\end{equation}
where $\gamma$ is a normalization constant. If the spacetime were
asymptotically flat, we could choose the standard Killing vector
normalization, $\gamma = 1$. Since Schwarzschild-de Sitter space is
not asymptotically flat, we encounter a difficulty in choosing the
normalization of the Killing vector. To avoid this, Bousso and
Hawking~\cite{Bousso:1996au} chose a normalization such that the norm
of the Killing vector becomes unity at the point where the force
vanishes, i.e., where the gravitational attraction is exactly balanced
by the cosmological repulsion. Adopting this normalization corresponds
to choosing a special observer who follows a geodesic.
Since the magnitude of the proper acceleration of a static particle in the
Schwarzschild-de Sitter spacetime is obtained as $a = \sqrt{a^\mu
a_\mu} = |f'(r)|/\bigl(2\sqrt{f(r)}\bigr)$, the geodesic point with no
acceleration is given by
\begin{equation}
\label{def:rg}
r_g = (GM\ell^2)^{1/3}.
\end{equation}
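Indeed, Eq.~(\ref{def:rg}) follows directly from requiring $f'(r_g)=0$ for the metric function~(\ref{f}):
\begin{equation}
f'(r) = \frac{2GM}{r^2} - \frac{2r}{\ell^2} = 0
\quad \Longrightarrow \quad
r^3 = GM\ell^2 .
\end{equation}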
With this normalization, the gravitational potential is obtained from
Eq.~(\ref{potential}),
\begin{equation}
\label{potential:f}
\phi = \frac12 \ln (\gamma^2 f)
= \frac12 \ln \frac{f(r)}{f(r_g)}.
\end{equation}
For a given potential value $\phi_s$, there exist two equipotential
surfaces at $r=r_1$ and $r=r_2$ as shown in Fig.~\ref{fig:screen}.
Then, the Unruh-Verlinde temperature on each screen is given by
\begin{equation}
\label{T}
T = \frac{1}{2\pi} e^\phi n^\mu \nabla_\mu \phi =
\gamma \frac{|f'(r)|}{4\pi},
\end{equation}
where the unit normal vector $n^\mu$ is given by $n^\mu = \delta^\mu_r
\sqrt{f}$ for $r<r_g$ and $n^\mu = -\delta^\mu_r \sqrt{f} $ for
$r>r_g$. Note that the temperature of the holographic screen at
$r=r_1$ is different from that of the screen at $r=r_2$. The
temperature on each screen is given by
\begin{equation}
\label{T:r}
T_i = \frac{\gamma}{2\pi} \left| \frac{GM}{r_i^2} - \frac{r_i}{\ell^2} \right|,
\end{equation}
where $i = 1, 2$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{spherical_shell}
\caption{
We consider the force-free reference point of Bousso and Hawking
as a separating boundary dividing the system into two subsystems.
Since the temperature of each subsystem is above zero
while the boundary between them remains at zero temperature,
no thermal exchange occurs between the two subsystems.}
\label{fig:tempq}
\end{figure}
The temperature becomes zero at the Bousso-Hawking reference point
$r=r_g$ by Eq.~(\ref{T}). Now, assume that the region between the
black hole and cosmological horizons is separated by a boundary at the
reference point $r=r_g$, as shown in Fig.~\ref{fig:tempq}.
Then the two regions divided by this boundary
cannot exchange heat, because the temperature on
this boundary always remains zero in our static geometry
setup. Thus, we can regard this boundary as a thermally insulating
wall. Therefore, the two regions separated by the surface at $r=r_g$
can be treated as independent systems: the total system becomes the
sum of two independent systems, the inner ($r<r_g$) and outer
($r>r_g$) regions. The concept of a thermally insulating wall in our
construction is similar to that of the perfectly reflecting wall in
Gibbons and Hawking's work~\cite{GH_desitter}: they constructed two
separated thermal equilibrium systems by introducing a perfectly
reflecting wall in Schwarzschild-de Sitter space for the calculation
of the Hawking temperatures of the black hole and cosmological horizons.
This can also be understood as follows. The line
element~(\ref{metric:ds}) approaches pure de Sitter spacetime as
$M$ goes to zero, and the behavior of the metric for $r>r_g$ resembles
that of pure de Sitter spacetime (see
Fig.~\ref{fig:mass}). Likewise, the spacetime approaches the
asymptotically flat Schwarzschild black hole as $\Lambda$ goes to
zero, and the behavior of the metric for $r<r_g$ resembles
that of the Schwarzschild black hole (see Fig.~\ref{fig:cosmo}). This
suggests that the whole system has the characteristics of both a
Schwarzschild black hole and pure de Sitter spacetime.
\begin{figure}[pbt]
\centering
\includegraphics[width=0.5\textwidth]{frM}
\caption{The geodesic point with no acceleration is plotted by the
dashed line. The metric approaches the pure dS spacetime as the
mass parameter of Schwarzschild-de Sitter space goes to
zero. ($0<M_1 < M_2 < M_3 <M_{\mathrm{max}}$)}
\label{fig:mass}
\end{figure}
\begin{figure}[pbt]
\centering
\includegraphics[width=0.5\textwidth]{frl}
\caption{The geodesic point with no acceleration is plotted by the
dashed line. The metric approaches the Schwarzschild black hole as
the cosmological constant $\Lambda = 3/\ell^2$ goes to
zero. ($\ell_{\mathrm{min}} < \ell_1 < \ell_2 < \ell_3 < \infty$)}
\label{fig:cosmo}
\end{figure}
Plugging the potential~(\ref{potential:f}) into the
energy~(\ref{E:pot.area}) gives the same result from the Komar energy
for the Schwarzschild-de Sitter black hole,
\begin{equation}
\label{E:g}
E = \frac{1}{4\pi G} \oint_{\mathcal{S}} \nabla^\mu \xi^\nu
\sigma_\mu n_\nu dA,
\end{equation}
where $\sigma_\mu$ is the unit normal timelike vector perpendicular to
the hypersurface surrounded by the screen $\mathcal{S}$.
Since $\sigma_\mu = - \sqrt{f} \, \delta_\mu^t$, the Komar
energy~(\ref{E:g}) becomes
\begin{equation}
E_i = \gamma \frac{r_i^2 |f'(r_i)|}{2G} = \gamma \left| M -
\frac{r_i^3}{G\ell^2} \right|, \label{E:r}
\end{equation}
for each screen at $r=r_i$ ($i = 1, 2$).
If the associated holographic entropy is given by
\begin{equation}
S_i = \frac{A_i}{4G} = \frac{\pi r_i^2}{G}, \label{S:r}
\end{equation}
then with Eqs.~(\ref{T:r}) and
(\ref{E:r}) the thermodynamic relation $E_i = 2 T_i
S_i$ holds for each system.
This relation certainly holds for event horizons.\footnote{
In Refs.~\cite{Pad:0912,RBan}, it was shown that this relation holds when
the equipartition rule
of energy is assumed for event horizons of stationary spacetimes.}
When the
spacetime is static and spherically symmetric,
we can also get this relation directly from Eq.~(\ref{E:equipartition})
with the relation \eqref{S:r},
since the temperature on the holographic screen is constant.
Note that the thermodynamic relation $E=2TS$ does not hold for the
whole system,
since the energy and entropy are additive and the temperatures on the holographic screens are different.
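The relation $E_i = 2T_iS_i$ is easily checked numerically; the following Python sketch (illustrative, with hypothetical parameter values $G=M=1$, $\ell=10$ in units $c=k_B=\hbar=1$) evaluates Eqs.~(\ref{T:r})--(\ref{S:r}) for screens on both sides of $r_g$.

```python
import numpy as np

G, M, ell = 1.0, 1.0, 10.0   # hypothetical values; M < ell/(3*sqrt(3)*G)

f = lambda r: 1.0 - 2.0 * G * M / r - r**2 / ell**2
fp = lambda r: 2.0 * G * M / r**2 - 2.0 * r / ell**2   # f'(r)

r_g = (G * M * ell**2) ** (1.0 / 3.0)   # Bousso-Hawking reference point
gamma = 1.0 / np.sqrt(f(r_g))           # Killing-vector normalization

def screen_quantities(r):
    T = gamma * abs(fp(r)) / (4.0 * np.pi)      # Unruh-Verlinde temperature
    E = gamma * abs(M - r**3 / (G * ell**2))    # Komar energy
    S = np.pi * r**2 / G                         # holographic entropy A/4G
    return T, E, S

# Screens at r = 3 (< r_g) and r = 6 (> r_g): E = 2 T S holds on both.
```

Since $E_i = \gamma r_i^2|f'(r_i)|/2G$ and $2T_iS_i = \gamma|f'(r_i)|/(4\pi)\cdot 2\pi r_i^2/G$, the relation is in fact an algebraic identity for any screen radius in the static region.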
Now, we check the validity of our formulation in two specific
cases. First, we consider the case when the holographic inner and
outer screens become the event horizon of black hole and the
cosmological horizon, respectively. As the locations of the
holographic screens, $r_1$ and $r_2$, move to the two roots of $f(r) =
0$, $r_b$ and $r_c$, as shown in Fig.~\ref{fig:screen}, the inner
screen becomes the black hole event horizon and the outer one becomes
the cosmological event horizon. The temperatures on the screens seen
by an observer located at the Bousso-Hawking reference point are given by
\begin{eqnarray}
T_{b/c}= \frac{1}{\sqrt{1-(9G^2M^2\Lambda)^{1/3}}}
\frac{1}{4\pi}\left| \frac{2GM}{r_{b/c}^2} - \frac{2\Lambda r_{b/c}}{3}\right|.
\end{eqnarray}
Since the system is composed of the sum of two independent systems, the total
entropy is given by the sum of the entropies of subsystems,
\begin{equation}
\label{S:total}
S = S_1 + S_2.
\end{equation}
In the present case, $S_1$ and $S_2$ correspond to the usual entropy
of the black hole and cosmological horizons, respectively. And thus,
our result agrees with the previously obtained entropy of
Schwarzschild-de Sitter space~\cite{GH_desitter, Kas:1996,SdS_entropy,
BHlee:jkps,SdS_nar_ent}.
Next, we consider the case when the two event horizons, $r_b$ and
$r_c$, approach each other, the Nariai limit~\cite{nariai_bh}. In this
case, the temperature and the energy on each horizon become
\begin{align}
T_i &\longrightarrow T^{\rm Nariai} = \frac{\sqrt{3}}{2\pi \ell}, \label{T:Nariai}\\
E_i &\longrightarrow E^{\rm Nariai} = \sqrt{3} \left(\frac{M^2 \ell}{G}
\right)^{1/3}. \label{E:Nariai}
\end{align}
In this limit, the entropy of each system becomes
\begin{equation}
S_i \longrightarrow\ \frac{\pi r_g^2}{G}.\label{S:i:N}
\end{equation}
The total entropy is the sum of the two subsystems' entropies, and is
thus twice the entropy~(\ref{S:i:N}) given above. This agrees with the
entropy of the Schwarzschild-de Sitter black hole in the Nariai limit
obtained in Refs.~\cite{BHlee:jkps,SdS_nar_ent}.
In summary, we applied Verlinde's entropic formalism of gravity to
Schwarzschild-de Sitter space as a model of multiple holographic
screens. Since the Unruh-Verlinde temperature vanishes at the
Bousso-Hawking reference point, we can regard the two regions separated by
this zero-temperature barrier as thermodynamically isolated systems and
independently apply the entropic formalism to each region. We
confirmed that Verlinde's formalism agrees with the conventional
results at least in the following cases: i) when the holographic
screens become event horizons, and ii) in the Nariai limit.
\section*{Acknowledgments}
This work was supported by the National Research Foundation (NRF) of
Korea grants funded by the Korean government (MEST)
[R01-2008-000-21026-0 and NRF-2009-0075129 (E.\ C.-Y.\ and K.\ K.),
NRF-2009-351-C00109 (M.\ E.), and NRF-2009-351-C00111 (D.\ L.)].
\section{Introduction}
The increasing observational evidence that the expansion of the
universe is accelerating (see Refs.~\cite{refsn,refdata} and
references therein) has stimulated a rising interest in the
reconstruction of its expansion history. An important outcome of
these theoretical studies is to clarify the sensitivity of
observational tests to dark energy properties and to assess how
each could be corrupted by extra-noise from other cosmological
effects. Comprehensive investigations of these nuisances have been
carried out on ``standard" tests, like SNIa, weak lensing, BAO,
CMB, ISW or clusters of galaxies.
In contrast, very few focussed on the time drift effect that
changes the observed redshift of an object as a function of time.
The recent claim that it may drive the conceptual design of
instruments for next-generation giant telescopes has raised the need
for similar attention to be paid to the theoretical grounds of
this novel technique. Interestingly, such an observation may lead
to a better understanding of the physical origin of the recent
acceleration~\cite{refde,refde2} and to a determination of the
dark energy equation of state~\cite{Lake07} as well as
constraints on dark energy models~\cite{demodels} or tests of the
variation of fundamental constants~\cite{constant,Molaro}.
As first pointed out by Sandage~\cite{Sandage}, in a homogeneous
and isotropic spacetime, the time drift of the observed redshift
is directly related to the Hubble function by
\begin{equation}\label{SLformula}
\dot z = (1+z)H_0-H(z)\equiv \dot{\bar z}(\eta_0,z)\ .
\end{equation}
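To fix orders of magnitude, Eq.~(\ref{SLformula}) can be evaluated directly for a flat $\Lambda$CDM background. The sketch below uses illustrative parameter values ($h=0.7$, $\Omega_{{_{\rm m}} 0}=0.3$), which need not match those adopted for the figure:

```python
import math

SEC_PER_YR = 3.156e7       # seconds per year
MPC_KM = 3.086e19          # kilometres per megaparsec

def redshift_drift(z, h=0.7, om=0.3, dt_yr=10.0):
    """Sandage formula dz/dt = (1+z) H0 - H(z) for a flat LambdaCDM
    cosmology, accumulated over dt_yr years of observer proper time."""
    H0 = 100.0 * h / MPC_KM                              # s^-1
    Ez = math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))     # H(z)/H0
    zdot = H0 * ((1.0 + z) - Ez)                         # s^-1
    return zdot * dt_yr * SEC_PER_YR
```

Note that the drift is positive at low redshift (where the cosmological constant dominates) and negative at high redshift, with $|\delta z|$ of a few $10^{-10}$ at $z=4$ over a decade.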
Given the most likely ranges of cosmological parameter values
derived from observations, in a $\Lambda$CDM model the typical
amplitude of the redshift drift is of order $\delta z\sim
-4\times10^{-10}$ on a time scale of $\delta t= 10$~yr, for a
source at redshift $z=4$. This corresponds to a tiny
spectroscopic velocity shift, $\delta v \equiv c\delta z/(1+z)$, of $\delta
v\sim 2.5\ {\rm cm/s}$. Fig.~\ref{AmpError} (left panel) shows the
time drift as a function of redshift for the standard $\Lambda$CDM
model and a dark energy model with an equation of state changed
by only 10\% ($w=-0.9$), all other parameters being kept constant.
Both curves have similar shapes, but the difference between the
drifts of a standard $\Lambda$CDM model and other cosmological models
tends to zero at high redshift. Fig.~\ref{AmpError} (right panel)
depicts this difference for two models with either $w = -0.95$ or
$w = -0.98$.
Such a measurement is most challenging, and impossible with
present-day astronomical facilities. However, it
was recently revisited~\cite{Loeb} in the context of the new
generation of Extremely Large Telescopes\footnote{ {\tt
http://www.eso.org/projects/e-elt/Publications/
ELT\_SWG\_apr30\_1.pdf}} (ELT), arguing that with such outstanding
collecting areas one could measure velocity shifts of order
$\delta v\sim 1-10\ {\rm cm/s}$ over a 10 year period from the
observation of the Lyman-alpha forest on QSO absorption spectra.
In particular, it is one of the main science drivers for the design
of the COsmic Dynamics EXperiment (CODEX)
spectrograph~\cite{Pasquini1,Pasquini2} for the future European
ELT (E-ELT).
The performances of CODEX and its capability to measure a time
drift of very distant objects were estimated using Monte-Carlo
simulations of quasar absorption spectra. The expected velocity
accuracy of this experiment can be written as follows (see
Ref.~\cite{Pasquini1})
$$
\sigma_v=1.4\left(\frac{S/N}{2350}\right)^{-1}
\left(\frac{N_{\rm QSO}}{30}\right)^{-1/2}
\left(\frac{1+z}{5}\right)^{-1.8}\,{\rm cm/s}\ ,
$$
provided the absorption lines are resolved. $S/N$ denotes the
signal-to-noise ratio, for a pixel scale of $0.0125\, {\rm\AA}$
and $N_{\rm QSO}$ is the number of quasars. Thus, spectroscopic
measurements of about 40 quasars with $S/N\sim2000$ ten years
apart can reach a $1.5\,{\rm cm/s}$ accuracy. This is within the
reach of a CODEX instrument mounted on a $60-80$ meter ELT by
observing a 16.5$^{\rm th}$ magnitude QSO during 2000
hrs~\cite{Pasquini1}.
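The quoted scaling is easy to wrap in a one-liner; for instance, 40 quasars at $S/N\sim2000$ and $z=4$ indeed land near the $1.5\,{\rm cm/s}$ figure quoted above:

```python
def codex_sigma_v(snr, n_qso, z):
    """Expected CODEX velocity accuracy in cm/s, following the scaling
    relation quoted above (Pasquini et al.)."""
    return (1.4
            * (snr / 2350.0) ** (-1)
            * (n_qso / 30.0) ** (-0.5)
            * ((1.0 + z) / 5.0) ** (-1.8))
```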
Many systematic effects that may spoil the time drift signal, such
as Earth rotation, proper motion of the source, relativistic
corrections etc., are discussed in Ref.~\cite{Pasquini1}. The
acceleration of the Sun in the Galaxy seems a more serious problem
because its amplitude may be of the same order as the cosmic
signal. However, it has not been measured yet, so its nuisance is
still unknown. On the other hand, subtle contaminations like
accelerations produced by large scale structures have never been
estimated in the error budget. The purpose of this work is to
address this issue and to estimate whether it may hamper the
cosmological interpretation of the time drift.
\section{Cosmological perturbations}
Eq.~(\ref{SLformula}) relates the time drift of the observed
redshift to the Hubble function, assuming a perfectly homogeneous
and isotropic Friedmann-Lema\^{\i}tre spacetime. In the real
universe, however, velocity terms arising from cosmological
perturbations add up as noise contributions and increase the
scatter of the redshift drift around its mean value.
The distribution of the redshift drift can be predicted using
the expression of $\dot z$ to first order in the cosmological
perturbations. It is derived in the Appendix A of this work.
At first order in the metric perturbations and in $v/c$ it writes
\begin{equation}
\dot z=\dot{\bar
z}(\eta_0,z) + \zeta({\bm{x}}_{_O},\eta_0,\bm{e};z) \ ,
\label{eq2}
\end{equation}
with
\begin{eqnarray}
\zeta({\bm{x}}_0,\eta_0,\bm{e};z)
& = & -\Phi_{_O}\dot{\bar z}(\eta_0,z)+(1+z)\left[\bm{e}.\dot{\bm{v}}-\dot\Psi
\right]^{^O}_{_E} \!\!\!\!\!.
\label{eq3}
\end{eqnarray}
This formula involves both Bardeen potentials, $\Phi$ and $\Psi$,
and the peculiar acceleration, $\dot v$. A dot denotes a
derivative with respect to observer proper time and $\bm{e}$ is
the direction of observation. $O$ and $E$ refer to the observer
and emitter respectively (see the Appendix for more precise
definitions of all the variables involved in this equation).
The first term on the right-hand side of Eq.~(\ref{eq3}) clearly
arises from the local position of the observer. The second term of
Eq.~(\ref{eq3}) encodes the Doppler effect due to the relative motion
of the observer and the source, as well as the equivalent of the
integrated Sachs-Wolfe term~\cite{SW,pubook}. Eq.~(\ref{eq3}) is
the analog of the (direction dependent) temperature anisotropy of
the cosmic microwave background (CMB) compared to the mean CMB
temperature.
\section{Estimate of the variance}
The variance of $\zeta(\bm{e})$ can be split into contributions
coming from the time dependence of the gravitational potentials,
$\zeta_{\dot\Phi}\equiv(1+z)\left[\dot\Psi\right]^{^O}_{_E}$, and
from the peculiar acceleration, $\zeta_{\dot
v}=(1+z)\left[\bm{e}.\dot{\bm{v}}\right]^{^O}_{_E}$.
The estimation of $\zeta_{\dot\Phi}$ demands a full description of
the time evolution of the potential, both at emission and
observing times. In the following we derive it and discuss its
properties using the linear cosmological perturbation theory. The
validity of this approach will be more thoroughly addressed in the
next section.
Using the linear theory of structure growth, the density contrast can be
split as $\delta= D_+(t)\, \varepsilon({\bf x})$, where
$\varepsilon({\bf x})$ comprises all details of the initial conditions. The growth
factor $D_+$ is the growing solution of the equation
\begin{equation}\label{Dequation}
\ddot D(t)+2H\dot D(t)=\frac{3}{2}H^2\Omega_{_{\rm m}}(t)D(t),
\end{equation}
where $\Omega_{_{\rm m}}(t)$ is the time dependent reduced density
parameter for the gravitating matter (see ref.~\cite{revue} for
details). On sub-Hubble scales, Einstein equations imply that
$\Psi=\Phi$ and $\Delta\Phi=\frac{3}{2}H^2 \Omega_{_{\rm m}}\,a^2\,
\delta$.
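Eq.~(\ref{Dequation}) can be integrated numerically. The sketch below rewrites it in the variable $x=\ln a$ for a flat $\Lambda$CDM background, where it becomes $D''+(2-\tfrac{3}{2}\Omega_{_{\rm m}}(a))D'=\tfrac{3}{2}\Omega_{_{\rm m}}(a)D$, and returns the growth rate $f={\rm d}\ln D_+/{\rm d}\ln a$; the matter-era initial conditions ($D=a$, $D'=D$) and parameter values are illustrative assumptions:

```python
import math

def growth_f(om0=0.3, z_out=0.0, z_init=100.0, n=2000):
    """RK4 integration of the linear growth ODE, recast in x = ln a for
    flat LambdaCDM:  D'' + (2 - 1.5*Om(a)) D' = 1.5*Om(a) D.
    Returns f = dlnD/dlna at redshift z_out."""
    Om = lambda x: om0 * math.exp(-3 * x) / (om0 * math.exp(-3 * x) + 1.0 - om0)
    x0, x1 = math.log(1.0 / (1.0 + z_init)), math.log(1.0 / (1.0 + z_out))
    h = (x1 - x0) / n
    y = (math.exp(x0), math.exp(x0))   # matter-era growing mode: D = a, D' = D

    def deriv(x, y):
        d, dp = y
        return (dp, 1.5 * Om(x) * d - (2.0 - 1.5 * Om(x)) * dp)

    x = x0
    for _ in range(n):
        k1 = deriv(x, y)
        k2 = deriv(x + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = deriv(x + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = deriv(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        x += h
    return y[1] / y[0]
```

For $\Omega_{{_{\rm m}} 0}=0.3$ this reproduces the familiar $f(0)\simeq\Omega_{{_{\rm m}} 0}^{0.55}\simeq0.52$, while $f\to1$ at high redshift.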
As the redshift increases, the dynamics of the universe gets closer
and closer to that of an Einstein-de Sitter universe. $\Phi$ is
therefore almost constant and $\dot\Phi$ is expected to vanish.
This is no longer true at low redshift, when the cosmological
constant (or the spatial curvature) starts to dominate. Instead,
the time evolution of the potential writes
\begin{equation}\label{phidotphirelation}
\dot\Phi=H \Phi \left[f(t)-1\right] \ ,
\end{equation}
where $f(t)={\rm d}\ln D_+/{\rm d} \ln a$ comprises the intrinsic
evolution of the potential produced by the growing perturbations.
In a flat $\Lambda$CDM, $f$ is explicitly given by
\begin{equation}
\!f(t)=\!1-\!\frac{6}{11}\frac{
_2F_1\left[2,\frac{4}{3};\frac{17}{6};-\sinh ^2\left(\frac{3
\alpha t}{2}\right)\right] \sinh ^2\left(\frac{3 \alpha t}{2}\right)}
{ _2F_1\left[1,\frac{1}{3};\frac{11}{6};-\sinh ^2\left(\frac{3 \alpha
t}{2}\right)\right]}
\end{equation}
where $\alpha\equiv H_0\sqrt{\Omega_{\Lambda0}}$ and where $_2F_1$
is a hypergeometric function.
Using Eq. (\ref{phidotphirelation}) it is then easy to express the
r.m.s. of $\dot\Phi$ from the r.m.s. of the mass density
fluctuations, $\sigma_\delta$, as derived from the Poisson
equation. More precisely
\begin{equation}
\sigma_\delta=\left[\int{{\rm d}^3{\bm{k}}\over(2\pi)^3}\,P_\delta(k)\right]^{1/2}\ ,
\end{equation}
where $P_\delta$ is the power spectrum of the density contrast,
$\langle\delta_{{\bm{k}}}\delta_{{\bm{k}}'}\rangle = P_\delta(k)
\delta_{\rm D}({\bm{k}}+{\bm{k}}')$, and $\delta_{\bm{k}}$ are the Fourier modes of
$\delta$. To estimate $P_\delta$, we adopt the prescription by
Bond {\it et al.}~\cite{bbks} for the transfer function and the
normalization $\sigma_{8}=1$. The redshift dependence of the power
spectrum is then the one of the growing mode, $D_+(z)$, normalized
to unity at $z=0$.
Turning to the gravitational potential, it appears that, in the
standard model of cosmology with a primordial spectrum
of index $n_s\sim0.95$, the amplitude of the potential
fluctuations is IR divergent. However, since the previous
calculation is only valid for sub-horizon modes, it is necessary
to introduce a cut-off for modes typically beyond the Hubble
scale. The expected potential fluctuations then drop to more
realistic amplitudes of $\sigma_{\Phi}\simeq5\times10^{-5}$. It
follows that, for a source at redshift $z$, the r.m.s. of $\dot z$
induced by the time variation of the gravitational potential is
\begin{equation}
\langle\zeta_{\dot\Phi}^2\rangle^{1/2}\left(z\right) =
\frac{3}{2}(1+z)\Omega_{{_{\rm m}}
0} \ \left[f(0)-1\right]\sigma_{\Phi}\ ,
\label{zetadotphi}
\end{equation}
which is of order $\zeta_{\dot\Phi}\sim (1+z) \times 10^{-5} H_{0}$, a
small number indeed.\\
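Plugging numbers into Eq.~(\ref{zetadotphi}) confirms this order of magnitude. The snippet below takes the magnitude of the bracket (since $f(0)<1$) and adopts the assumed values $f(0)\simeq0.51$ for $\Omega_{{_{\rm m}} 0}=0.3$ and $\sigma_\Phi\simeq5\times10^{-5}$:

```python
def zeta_phidot_rms(z, om0=0.3, f0=0.51, sigma_phi=5e-5):
    """R.m.s. of the gravitational-potential term, in units of H0.
    f0 ~ 0.51 is the growth rate today for Omega_m0 = 0.3 (assumed value);
    the absolute value keeps the r.m.s. positive since f0 < 1."""
    return 1.5 * (1.0 + z) * om0 * abs(f0 - 1.0) * sigma_phi
```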
\begin{widetext}
\begin{figure}[htb]
\centerline{\epsfig {figure=dzdtPlots.eps,width=16cm}}
\caption{(left) The time drift of the redshift as a function of
the redshift of the source obtained from Eq.~(\ref{SLformula}) for
a $\Lambda$CDM model (solid line) and a model with a constant
equation of state $w = -0.9$ for the dark energy (dashed line).
(right) Amplitude of the r.m.s. of the systematic errors
$\zeta_{\dot v}$ due to cosmic acceleration effects. The
contribution of $\zeta_{_O}$ (dashed line) is subdominant compared
to the one of $\zeta_{_E}$ (dotted line). The solid lines
represents the difference between a standard $\Lambda$CDM model
and cosmological models with either $w = -0.95$ (upper solid line)
or $w = -0.98$ (lower solid line).}
\label{AmpError}
\end{figure}
\end{widetext}
The contribution of the peculiar acceleration, $\zeta_{\dot v}$,
is less obvious to derive because we do not have a complete theory
that describes the expected distribution of the local
line-of-sight acceleration. However, in the cosmological linear
theory not only are the metric components supposed to be small (as
explicitly used above), but also the density contrast and the
velocity gradients (compared respectively to unity and $H$), see
Ref.~\cite{revue}.
The Lyman-alpha forest is believed to be dominated by low density
clouds of intergalactic medium, with individual accelerations
primarily triggered by large-scale structures. Assuming then
linear theory holds in our context, the local acceleration writes
$\dot{v}_{i}=-H v_{i}-\partial_{i}\Phi/a$, so that
\begin{equation}\label{zetaac2}
\zeta_{\dot v}(\bm{e},z)=(1+z)\ e^{i}\left[H(z)
v_{i}+\frac{1}{a}\partial_{i}\Phi\right]^{^O}_{_E}\ .
\end{equation}
In terms of the dimensionless divergence $\theta({\bm{x}})=\partial_{i}
v_{i}/aH$, the linear continuity equation reduces to
$\theta(t,{\bm{x}}) = -f(t)\ \delta({\bm{x}})$ at linear order. This implies
that the Fourier components of the velocity, density contrast and
potential are related by $k^2 H v_{i}({\bm{k}})= f(t) a H^2
k_i\delta_{\bm{k}}$ and $k^2\Phi_{,i}({\bm{k}})/a = 3\Omega_{{_{\rm m}}} a\,H^2
k_{i} \delta_{\bm{k}}/2$. Using our previous estimate of $P_\delta$,
one easily derives the r.m.s. of the two contributions to
$\zeta_{\dot v}$,
\begin{eqnarray}\label{zetaO}
\langle\zeta_{_O}^2\rangle^{1/2} &=&
(1+z)\left[\frac{3}{2}\Omega_{{_{\rm m}}
0}-f(0)\right]H_0^2 \ \hat\sigma\,
\end{eqnarray}
that depends on the emission time only through the factor $(1+z)$,
and
\begin{eqnarray}\label{zetaE}
\langle\zeta_{_E}^2\rangle^{1/2} &=&
\left[\frac{3}{2}\Omega_{{_{\rm m}}}(t)-f(t)\right]H^2(t)D_{+}(t)\ \hat\sigma
\end{eqnarray}
where $\hat\sigma^2\equiv\displaystyle{\int\frac{{\rm d}^3{\bm{k}}}{(2\pi)^3}\frac{1}{3
k^2}P(k,z=0)}$.
These two terms are independent and should be summed
quadratically. The resulting r.m.s. of $\zeta_{\dot v}$ depicted
in Fig.~\ref{AmpError} (right panel) shows that $\zeta_{_E}$ is the
dominant contribution at all redshifts. It rises to a percent
level from $z=0$ to $z=4$. At $z\sim4$, $\zeta_{\dot v} \sim 0.5\%
$, while $\zeta_{_O}$ is ten times smaller. Both
terms have similar behaviour and are basically unchanged for any
realistic flat cosmology having an effective $w$ close to $-1$,
but note that this is {\em a priori} not the case for any model.
\section{Discussion}
Assuming the cosmological time drift derived from QSO absorption
lines by the Lyman-alpha forest may be contaminated by
extra-acceleration of clouds by massive structures, it is
legitimate to question the validity of the linear regime
approximation used throughout this work.
Let us first consider the acceleration of an absorbing Lyman-alpha
cloud. On large scales, clouds are located inside filaments
infalling towards massive clusters or super-clusters of galaxies.
Assume, then, the acceleration is due to the gravitational
attraction of a super-cluster with typical mass of order
$10^{15}M_\odot$, localized at 10~Mpc from the cloud. The
Newtonian acceleration is about $a_{\rm
N}\sim1.45\times10^{-15}\,{\rm km}/{\rm s}^2$. In comparison, the
Hubble acceleration $c H_0$ is $a_{\rm
H}\sim6.8\times10^{-13}\,{\rm km}/{\rm s}^2$ so that $a_{\rm
N}/a_{\rm H}\sim 2\times10^{-3}$. This ratio may change by one
order of magnitude, depending on the mass and length scales one
may consider for clusters, super-clusters or filaments, but is
always sufficiently small to keep the linear approximation valid.
It is also worth noticing its amplitude is close to theoretical
expectations derived in the previous Section. We therefore
speculate the simple interpretation of our theoretical estimate as
being primarily due to the acceleration of the nearest rich
cluster is pertinent\footnote{Liske {\em et al.} (in preparation)
also estimated the contamination of the drift signal produced by
peculiar motions. In contrast to our analysis done in a full
General Relativity context, they simply used Special Relativity
formalism. Note that \cite{peculaccel} derived the peculiar
acceleration of strong gravitational potentials like clusters of
galaxies but to predict the peculiar velocity drift over several
decades produced by nearby systems on a test particle. Both
results agree with our predictions.}. To confirm this, and to get a more
sophisticated description of the accelerations, the
use of numerical simulations is indeed necessary.\\
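The order-of-magnitude estimate above is easy to reproduce with SI constants; the quoted $a_{\rm N}\sim1.45\times10^{-15}\,{\rm km/s^2}$ presumably used slightly different constant values, so the check below is only at the $\sim10\%$ level:

```python
import math

G_SI  = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
MPC_M = 3.086e22    # m
C_SI  = 2.998e8     # m/s

def acceleration_ratio(mass_msun=1e15, dist_mpc=10.0, h=0.7):
    """Newtonian pull of a super-cluster on a Lyman-alpha cloud versus
    the Hubble acceleration c*H0 (order-of-magnitude check)."""
    a_newton = G_SI * mass_msun * M_SUN / (dist_mpc * MPC_M) ** 2   # m/s^2
    H0 = 100.0 * h * 1000.0 / MPC_M                                 # s^-1
    a_hubble = C_SI * H0                                            # m/s^2
    return a_newton, a_hubble, a_newton / a_hubble
```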
In practice, a time drift is not measured from a single absorption
line but by averaging several lines spread over a spectral range
$\Delta \lambda$ defined by the spectrograph. If the acceleration
of Lyman-alpha clouds is primarily driven by clusters of galaxies
located around their neighborhood, then clouds are not dynamically
independent and accelerations of closeby clouds are correlated. We
are thus interested in the variance of $\dot z$, averaged over a
bound comoving distance $\Delta\chi$ along the line of sight,
$$
\bar{\dot z} = \frac{1}{\Delta\chi}\int_\chi^{\chi+\Delta\chi}\dot z(\chi'){\rm d}\chi'\, .
$$
It is related to the variance from correlations obtained on a
single line by
$$
\langle\bar{\dot z}^2\rangle = \alpha^2(\bar z,\Delta z) \langle
\zeta^2(z)\rangle\ ,
$$
where the coefficient $\alpha(\bar z,\Delta z)$ depends on the
physical size over which the average is performed. $\Delta z$ is
the redshift range explored by the spectrograph at the mean
redshift $\bar z$: $\Delta z = (1+\bar z) \Delta\lambda/ \lambda$.
For a $\Lambda$CDM universe, it corresponds to a comoving distance
of $\Delta\chi=D_{H_0}[\Omega_{{_{\rm m}} 0}(1+\bar
z)^3+\Omega_{\Lambda0}]^{-1/2}$, with $D_{H_0}=3000h^{-1}$~Mpc.
$\alpha$ can be computed from the correlation of the acceleration
field,
$$
\langle a(\chi_1)a(\chi_2)\rangle = \int \frac{{\rm d}^3{\bm{k}}}{(2\pi)^3}
\hbox{e}^{i k_z(\chi_1-\chi_2)}\frac{P(k)}{3k^2}\ ,
$$
as
\begin{equation}
\alpha^2 = \frac{1}{\hat\sigma^2}\int \frac{{\rm d} k_z{\rm d}^2{\bm{k}}_\perp}{3 k^2}
\frac{\sin k_z\Delta\chi}{k_z\Delta\chi} P(k) \ ,
\end{equation}
where $\Delta\chi$ is the size of the comoving radial distance over
which the average is performed.
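After the azimuthal integration (measure factors cancelling in the ratio), $\alpha^2$ reduces to the $P(k)\,{\rm d}k$-weighted average of $\int_0^1 {\rm sinc}(k\mu\Delta\chi)\,{\rm d}\mu$. The sketch below evaluates this for a toy power spectrum, not the BBKS transfer function used in the text, so only the qualitative behaviour should be trusted: $\alpha\to1$ as $\Delta\chi\to0$, and $\alpha$ decreases as the averaging length grows:

```python
import math

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def alpha(delta_chi, k_eq=0.02, kmax=1.0, nk=400, nmu=200):
    """alpha as the P(k)dk-weighted average of the mu-integral of
    sinc(k*mu*delta_chi), for a toy spectrum P(k) ~ k / (1 + (k/k_eq)^4)
    (k in 1/Mpc, delta_chi in Mpc; all values illustrative)."""
    P = lambda k: k / (1.0 + (k / k_eq) ** 4)
    num = den = 0.0
    for i in range(1, nk + 1):
        k = kmax * i / nk
        w = P(k) * (kmax / nk)
        # midpoint rule for the mu-integral over [0, 1]
        inner = sum(sinc(k * ((j + 0.5) / nmu) * delta_chi)
                    for j in range(nmu)) / nmu
        num += w * inner
        den += w
    return math.sqrt(max(num / den, 0.0))
```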
\begin{figure}[htb]
\centerline{\epsfig {figure=alphaplot.eps,width=8cm}}
\caption{The coefficient $\alpha$ that enters in Eq.~(\ref{zf}) as
a function of the width of wavelengths, $\Delta\lambda$, on which
the observations are average for several source redshifts, $\bar
z=4$ (solid line), $\bar z=3$ (long dashed line), $\bar z=2$
(dashed line) and $\bar z=1$ (dotted line) for $\Lambda$CDM with
$h=0.7$, $\Omega_{{_{\rm m}} 0}=0.3$ and $\Omega_{\Lambda0}=0.7$.}
\label{f2}
\end{figure}
If we could naively split the Lyman-alpha forest along a line of
sight into radial bunches of physically decoupled cloud systems,
without correlated accelerations, $1/\alpha^2$ would provide an
estimate of the number of bunches. From an observational point of
view, $1/\alpha^2$ expresses the effective number of absorption
line systems without correlated acceleration probed by a
spectrograph covering a wavelength range $\Delta \lambda$ around
the mean redshift $\bar z$. It increases when the spectral
coverage of the spectrograph increases (see Fig. \ref{f2}). For
example, if $\bar z=4$, and $\Delta\lambda= 100~{\rm\AA},
200~{\rm\AA}, 500~{\rm\AA}$, that is $\Delta z=0.1, 0.2, 0.5$ (see
Fig.~\ref{f2}) respectively, then $\alpha$ is 0.69, 0.55, and
0.38, and $1/\alpha^2=$ 2.1, 3.3 and 6.9. However, $1/\alpha^2$
only takes into account one line of sight. If we average over
$N_{\rm QSO}$ randomly selected lines of sight, then we expect
that
\begin{equation}\label{zf}
\sigma_{\dot z} = \alpha(\Delta\lambda,\bar z)\,N_{\rm
QSO}^{-1/2}\, \ \zeta(\bar z)\ .
\end{equation}
Hence, the spectral range of
the spectrograph together with the number of lines of sight can easily drop the contribution of
cosmological perturbations to the variance budget below a $0.1\%$ level.
It is interesting to notice that the theoretical values of
$1/\alpha^2$ derived in the previous paragraph can easily be
interpreted and predicted from simple physical arguments. The
spectral range of a CODEX-like spectrograph as described in
Ref.~\cite{Pasquini1}, is $\Delta\lambda\sim 500{\rm\AA}$. At a
redshift of $\bar z\sim 4$, it corresponds to $\Delta z \sim
(1+\bar z)\Delta\lambda/ 5000{\rm\AA}\sim 0.5$ and to a comoving
length of $\Delta\chi\sim 300$~Mpc. If we assume that the
coherence scale of the velocity field is the typical size of the
super-cluster ($\sim $30~Mpc), then CODEX can probe about 10
independent systems per line of sight. This is of the same order
as $1/\alpha^2=6.9$ for $\Delta\lambda=500~{\rm\AA}$ discussed
above, which confirms its interpretation as an effective number of
uncorrelated cloud systems. It also simply explains why
$\sigma_{\dot z} \propto \alpha$, which is nothing but the inverse
square root of this number.
\section{Conclusion}
In order to measure the cosmological time drift of the redshift,
many systematic effects will have to be understood. Besides the
systematic errors of astrophysical origin that may affect the
observation of the Lyman-alpha forest, large scale structures will
induce a dispersion of $\dot z$. This work addresses this issue.
First, we have derived the expression of the time drift of the
cosmological redshift at first order in the perturbations. This was
then used to estimate its variance and to demonstrate that it
depends on two main effects: the accelerations and the local
gravitational potentials at both the source and observer positions.
The contributions at the observer position have not been discussed
further. High precision astrometric observations with GAIA will
soon provide exquisite knowledge of the motion of the Earth in
the Milky Way. It will pinpoint all local acceleration terms with
enough accuracy to remove this contribution easily~\cite{gaiaref}.
In contrast, the contributions at the source position are much
more difficult to subtract. In the linear regime, we have shown
that the gravitational potential contribution is negligible while
the acceleration of the source is typically of the order of 1\% at
$z=2-4$. We argue a dominant contribution to this term is the
acceleration of galaxy clusters near the source.
In order to understand whether the amplitude of this variance can
be reduced, we have estimated the effect of averaging the signal
over several absorption lines. One can either profit from the
total spectral range covered by the spectrograph to measure the
drift from all lines detected along a line of sight, provided
correlated acceleration contributions are taken into account, or
use the mean drift over many randomly selected lines of sight.
The first option reduces the variance by the square root of the
number of uncorrelated clouds systems along a line of sight, the
second by the square root of the number of independent lines of
sight. For an instrument having the current specifications of the
CODEX spectrograph, it is then easy to drop the contribution of
large-scale structures to the total variance of the time drift
down to a 0.1\% level.
\noindent{\bf Acknowledgements:} We thank Jochen Liske and Luca
Pasquini for providing Ref.~\cite{Pasquini1} before publication
and for their comments, Patrick Petitjean for useful discussions,
and Eric Linder for his useful comments. We also thank St\'ephane
Charlot and Jean-Gabriel Cuby for organizing the Programme
National de Cosmologie discussion on ELT which triggered these
questions.
\section{Introduction}
\label{sec:intro}
Speech enhancement is an important problem in audio signal processing \cite{loizou2007speech}. The objective is to recover a clean speech signal from a noisy mixture signal.
In this work, we focus on single-channel (i.e. single-microphone) speech enhancement.
Discriminative approaches based on deep neural networks have been extensively used for speech enhancement. They try to estimate a clean speech spectrogram or a time-frequency mask from a noisy speech spectrogram, see e.g. \cite{xu2015regression, weninger2015speech, wang2017supervised, Li_SPL_2019, Li_WASPAA2019}. Recently, deep generative speech models based on variational autoencoders (VAEs) \cite{kingma2014auto} have been investigated for single-channel \cite{bando2017statistical,Leglaive_MLSP18,Leglaive_ICASSP2019b,parienteInterspeech19} and multi-channel speech enhancement \cite{BayesianMVAE,Leglaive_ICASSP2019a,fontaine_cauchy_MVAE}. A pre-trained deep generative speech model is combined with a nonnegative matrix factorization (NMF) \cite{ISNMF} noise model whose parameters are estimated at test time, from the observation of the noisy mixture signal only. Compared with discriminative approaches, these generative methods do not require pairs of clean and noisy speech signals for training. This setting was referred to as ``semi-supervised source separation'' in previous works \cite{smaragdis2007supervised, mysore2011non, mohammadiha2013supervised}, which should not be confused with the supervised/unsupervised terminology of machine learning.
To the best of our knowledge, the aforementioned works on VAE-based deep generative models for speech enhancement have only considered an independent modeling of the speech time frames, through the use of feed-forward and fully connected architectures. In this work, we propose a recurrent VAE (RVAE) for modeling the speech signal. The generative model is a special case of the one proposed in \cite{chung2015recurrent}, but the inference model for training is different. At test time, we develop a variational expectation-maximization algorithm (VEM) \cite{neal1998view} to perform speech enhancement. The encoder of the RVAE is fine-tuned to approximate the posterior distribution of the latent variables, given the noisy speech observations. This model induces a posterior temporal dynamic over the latent variables, which is further propagated to the speech estimate. Experimental results show that this approach outperforms its feed-forward and fully-connected counterpart.
\section{Deep generative speech model}
\subsection{Definition}
\label{subsec:VAE_speech_models}
Let $\mbf{s} = \{\mbf{s}_n \in \mathbb{C}^F \}_{n=0}^{N-1}$ denote a sequence of short-time Fourier transform (STFT) speech time frames, and $\mbf{z} = \{\mbf{z}_n \in \mathbb{R}^L \}_{n=0}^{N-1}$ a corresponding sequence of latent random vectors. We define the following hierarchical generative speech model independently for all time frames $n \in \{0,...,N-1\}$:
\begin{align}
\mbf{s}_n \mid \mbf{z} \sim \mathcal{N}_c\left(\mbf{0}, \diag\left\{ \mbf{v}_{\mbf{s},n}(\mbf{z}) \right\}\right), \hspace{.25cm} \text{with} \hspace{.25cm} \mbf{z}_n \overset{\text{i.i.d}}{\sim} \mathcal{N}\left(\mbf{0}, \mbf{I}\right),
\label{speech_generative_model}
\end{align}
and where $\mbf{v}_{\mbf{s},n}(\mbf{z}) \in \mbb{R}_+^F$ will be defined by means of a \emph{decoder} neural network. $\mathcal{N}$ denotes the multivariate Gaussian distribution for a real-valued random vector and $\mathcal{N}_c$ denotes the multivariate complex proper Gaussian distribution \cite{properComplex}. Multiple choices are possible to define the neural network corresponding to $\mbf{v}_{\mbf{s},n}(\mbf{z})$, which will lead to different probabilistic graphical models represented in Fig.~\ref{fig:speech_models}.
\paragraph{FFNN generative speech model} $\mbf{v}_{\mbf{s},n}(\mbf{z}) = \bs{\varphi}_{\text{dec}}^{\text{FFNN}}(\mbf{z}_n ; \bs{\theta}_{\text{dec}})$ where $\bs{\varphi}_{\text{dec}}^{\text{FFNN}}(\cdot \,; \bs{\theta}_{\text{dec}}) : \mathbb{R}^L \mapsto \mathbb{R}_+^F$ denotes a feed-forward fully-connected neural network (FFNN) of parameters $\bs{\theta}_{\text{dec}}$. Such an architecture was used in \cite{bando2017statistical,Leglaive_MLSP18, Leglaive_ICASSP2019b, parienteInterspeech19, BayesianMVAE, Leglaive_ICASSP2019a,fontaine_cauchy_MVAE}. As represented in Fig.~\ref{fig:FFNN_speech_model}, this model results in the following factorization of the complete-data likelihood:
\begin{equation}
p(\mathbf{s}, \mathbf{z}; \bs{\theta}_{\text{dec}}) = \prod\nolimits_{n=0}^{N-1} p(\mbf{s}_n | \mbf{z}_n; \bs{\theta}_{\text{dec}}) p(\mbf{z}_n).
\end{equation}
Note that in this case, the speech STFT time frames are not only conditionally independent, but also marginally independent, i.e. $p(\mathbf{s}; \bs{\theta}_{\text{dec}}) = \prod\nolimits_{n=0}^{N-1} p(\mathbf{s}_n; \bs{\theta}_{\text{dec}})$.
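As an illustration, the following pure-Python sketch samples one STFT frame from the FFNN generative model of Eq.~(\ref{speech_generative_model}). The toy dimensions and the random decoder weights are assumptions made for the example only (the paper's decoder is learned, not random); a softplus output guarantees a positive variance:

```python
import math
import random

random.seed(0)

L, F, H = 16, 32, 64   # latent dim, frequency bins, hidden units (toy sizes)

# Random stand-ins for trained decoder weights (assumption for this sketch).
W1 = [[random.gauss(0, 0.1) for _ in range(L)] for _ in range(H)]
W2 = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(F)]

def decoder_variance(z):
    """FFNN decoder: z in R^L -> v_s in R_+^F (softplus output)."""
    h = [math.tanh(sum(w * zi for w, zi in zip(row, z))) for row in W1]
    return [math.log1p(math.exp(sum(w * hi for w, hi in zip(row, h))))
            for row in W2]

def sample_frame():
    """Draw z ~ N(0, I), then s | z ~ CN(0, diag(v_s(z))): for a proper
    complex Gaussian, real and imaginary parts are i.i.d. N(0, v/2)."""
    z = [random.gauss(0.0, 1.0) for _ in range(L)]
    return [complex(random.gauss(0.0, math.sqrt(v / 2.0)),
                    random.gauss(0.0, math.sqrt(v / 2.0)))
            for v in decoder_variance(z)]
```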
\paragraph{RNN generative speech model} $\mbf{v}_{\mbf{s},n}(\mbf{z}) = \bs{\varphi}_{\text{dec},n}^{\text{RNN}}(\mbf{z}_{0:n} ; \bs{\theta}_{\text{dec}})$ where $\bs{\varphi}_{\text{dec},n}^{\text{RNN}}(\cdot \,; \bs{\theta}_{\text{dec}}) : \mathbb{R}^{L\times (n+1)} \mapsto \mathbb{R}_+^F$ denotes the output at time frame $n$ of a recurrent neural network (RNN), taking as input the sequence of latent random vectors $\mbf{z}_{0:n} = \{\mbf{z}_{n'} \in \mathbb{R}^L \}_{n'=0}^{n}$. As represented in Fig.~\ref{fig:RNN_speech_model}, we have the following factorization of the complete-data likelihood:
\begin{equation}
p(\mathbf{s}, \mathbf{z}; \bs{\theta}_{\text{dec}}) = \prod\nolimits_{n=0}^{N-1} p(\mbf{s}_n | \mbf{z}_{0:n}; \bs{\theta}_{\text{dec}}) p(\mbf{z}_n).
\end{equation}
Note that for this RNN-based model, the speech STFT time frames are still conditionally independent, \emph{but not marginally independent}.
\paragraph{BRNN generative speech model} $\mbf{v}_{\mbf{s},n}(\mbf{z}) = \bs{\varphi}_{\text{dec},n}^{\text{BRNN}}(\mbf{z} ; \bs{\theta}_{\text{dec}})$ where $\bs{\varphi}_{\text{dec},n}^{\text{BRNN}}(\cdot \,; \bs{\theta}_{\text{dec}}) : \mathbb{R}^{L\times N} \mapsto \mathbb{R}_+^F$ denotes the output at time frame $n$ of a bidirectional RNN (BRNN) taking as input the complete sequence of latent random vectors $\mbf{z}$. As represented in Fig.~\ref{fig:BRNN_speech_model}, we end up with the following factorization of the complete-data likelihood:
\begin{equation}
p(\mathbf{s}, \mathbf{z}; \bs{\theta}_{\text{dec}}) = \prod\nolimits_{n=0}^{N-1} p(\mbf{s}_n | \mbf{z}; \bs{\theta}_{\text{dec}}) p(\mbf{z}_n).
\end{equation}
As for the RNN-based model, the speech STFT time frames are conditionally independent but not marginally.
Note that for avoiding cluttered notations, the variance $\mbf{v}_{\mbf{s},n}(\mbf{z})$ in the generative speech model \eqref{speech_generative_model} is not made explicitly dependent on the decoder network parameters $\bs{\theta}_{\text{dec}}$, but it clearly is.
\begin{figure}[t]
\centering
\subfloat[FFNN]{
\resizebox{.22\linewidth} {!} {
\begin{tikzpicture}
\node[obs, minimum size=.8cm] (s0) {$\mbf{s}_0$};
\node[latent, above=1cm of s0, minimum size=.8cm] (z0) {$\mbf{z}_0$};
\node[obs, right=.25cm of s0, minimum size=.8cm] (s1) {$\mbf{s}_1$};
\node[latent, above=1cm of s1, minimum size=.8cm] (z1) {$\mbf{z}_1$};
\node[obs, right=.25cm of s1, minimum size=.8cm] (s2) {$\mbf{s}_2$};
\node[latent, above=1cm of s2, minimum size=.8cm] (z2) {$\mbf{z}_2$};
\edge {z0} {s0} ; %
\edge {z1} {s1} ; %
\edge {z2} {s2} ; %
\end{tikzpicture}
\label{fig:FFNN_speech_model}
}
}\hfill
\subfloat[RNN]{
\resizebox{.22\linewidth} {!} {
\begin{tikzpicture}
\node[obs, minimum size=.8cm] (s0) {$\mbf{s}_0$};
\node[latent, above=1cm of s0, minimum size=.8cm] (z0) {$\mbf{z}_0$};
\node[obs, right=.25 of s0, minimum size=.8cm] (s1) {$\mbf{s}_1$};
\node[latent, above=1cm of s1, minimum size=.8cm] (z1) {$\mbf{z}_1$};
\node[obs, right=.25 of s1, minimum size=.8cm] (s2) {$\mbf{s}_2$};
\node[latent, above=1cm of s2, minimum size=.8cm] (z2) {$\mbf{z}_2$};
\edge {z0.south} {s0.90} ;
\edge {z0.south} {s1.100} ;
\edge {z0.south} {s2.120} ; %
\edge {z1.south} {s1.90} ; %
\edge {z1.south} {s2.100} ; %
\edge {z2.south} {s2.90} ; %
\end{tikzpicture}
\label{fig:RNN_speech_model}
}
}\hfill
\subfloat[BRNN]{
\resizebox{.22\linewidth} {!} {
\begin{tikzpicture}
\node[obs, minimum size=.8cm] (s0) {$\mbf{s}_0$};
\node[latent, above=1cm of s0, minimum size=.8cm] (z0) {$\mbf{z}_0$};
\node[obs, right=.25 of s0, minimum size=.8cm] (s1) {$\mbf{s}_1$};
\node[latent, above=1cm of s1, minimum size=.8cm] (z1) {$\mbf{z}_1$};
\node[obs, right=.25 of s1, minimum size=.8cm] (s2) {$\mbf{s}_2$};
\node[latent, above=1cm of s2, minimum size=.8cm] (z2) {$\mbf{z}_2$};
\edge {z0.south} {s0} ; %
\edge {z0.south} {s1.100} ; %
\edge {z0.south} {s2.120} ; %
\edge {z1.south} {s0.80} ; %
\edge {z1.south} {s1} ; %
\edge {z1.south} {s2.100} ; %
\edge {z2.south} {s0.60} ; %
\edge {z2.south} {s1.80} ; %
\edge {z2.south} {s2} ; %
\end{tikzpicture}
\label{fig:BRNN_speech_model}
}
}
\caption{Probabilistic graphical models for $N=3$.}
\label{fig:speech_models}
\end{figure}
\subsection{Training}
\label{subsec:training}
We would like to estimate the decoder parameters $\bs{\theta}_{\text{dec}}$ in the maximum likelihood sense, i.e. by maximizing $\sum\nolimits_{i=1}^{I} \ln p\left(\mbf{s}^{(i)} ; \bs{\theta}_{\text{dec}}\right)$, where $\{\mbf{s}^{(i)} \in \mathbb{C}^{F \times N}\}_{i=1}^{I}$ is a training dataset consisting of $I$ i.i.d. sequences of $N$ STFT speech time frames. In the following, because it simplifies the presentation, we simply omit the sum over the $I$ sequences and the associated subscript $(i)$.
Due to the non-linear relationship between $\mbf{s}$ and $\mbf{z}$, the marginal likelihood $p(\mbf{s} ; \bs{\theta}_{\text{dec}}) = \int p(\mbf{s} | \mbf{z} ; \bs{\theta}_{\text{dec}}) p(\mbf{z}) d\mbf{z}$ is analytically intractable, and it cannot be straightforwardly optimized. We therefore resort to the framework of variational autoencoders \cite{kingma2014auto} for parameter estimation, which builds upon stochastic fixed-form variational inference \cite{Jordan1999,honkela2010approximate,Salimans2013, Hoffman2013, blei2017variational}.
This latter methodology first introduces a variational distribution $q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})$ (or inference model) parametrized by $\bs{\theta}_\text{enc}$, which is an approximation of the true intractable posterior distribution $p(\mbf{z} | \mbf{s}; \bs{\theta}_{\text{dec}})$. For any variational distribution, we have the following decomposition of the log-marginal likelihood:
\begin{equation}
\ln p(\mbf{s}; \bs{\theta}_{\text{dec}}) = \mathcal{L}_{\mbf{s}}(\bs{\theta}_\text{enc}, \bs{\theta}_{\text{dec}}) + D_{\text{KL}}\big( q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc}) \parallel p(\mbf{z} | \mbf{s}; \bs{\theta}_{\text{dec}}) \big),
\label{log_likelihood_decomposition}
\end{equation}
where $\mathcal{L}_{\mbf{s}}(\bs{\theta}_\text{enc}, \bs{\theta}_{\text{dec}})$ is the \emph{variational free energy} (VFE) (also referred to as the evidence lower bound) defined by:
\begin{align}
\mathcal{L}_{\mbf{s}}(\bs{\theta}_\text{enc}, \bs{\theta}_{\text{dec}}) &= \mbb{E}_{q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})} \left[ \ln p(\mbf{s},\mathbf{z} ; \bs{\theta}_{\text{dec}}) - \ln q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc}) \right] \nonumber \\
&\hspace{-1.5cm}= \mbb{E}_{q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})} \left[ \ln p(\mbf{s} | \mathbf{z} ; \bs{\theta}_{\text{dec}}) \right] - D_{\text{KL}}\big( q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc}) \parallel p(\mbf{z} ) \big),
\label{varFreeEnergy}
\end{align}
and $D_{\text{KL}}(q \parallel p) = \mbb{E}_q[\ln q - \ln p]$ is the Kullback-Leibler (KL) divergence. As the latter is always non-negative, we see from \eqref{log_likelihood_decomposition} that the VFE is a lower bound of the intractable log-marginal likelihood. Moreover, we see that it is tight if and only if $q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc}) = p(\mbf{z} | \mbf{s}; \bs{\theta}_{\text{dec}})$. Therefore, our objective is now to maximize the VFE with respect to (w.r.t) both $\bs{\theta}_\text{enc}$ and $\bs{\theta}_{\text{dec}}$. But in order to fully define the VFE in \eqref{varFreeEnergy}, we have to define the form of the variational distribution $q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})$.
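As a side note, for a diagonal Gaussian variational distribution and a standard Gaussian prior, the KL term of the VFE admits a well-known closed form, consistent with the per-dimension terms appearing later in \eqref{varFreeEnergy_developed_IS}. A minimal NumPy sketch (the helper name is ours, not part of the paper's code):

```python
import numpy as np

def kl_gauss_std(mu, v):
    """KL( N(mu, diag(v)) || N(0, I) ) = 0.5 * sum(mu^2 + v - ln(v) - 1)."""
    return 0.5 * np.sum(mu**2 + v - np.log(v) - 1.0)

# The KL divergence vanishes iff q equals the prior, and is positive otherwise.
assert kl_gauss_std(np.zeros(4), np.ones(4)) == 0.0
assert kl_gauss_std(np.ones(4), np.ones(4)) > 0.0
```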
Using the chain rule for joint distributions, the posterior distribution of the latent vectors can be exactly expressed as follows:
\begin{equation}
p(\mathbf{z} | \mathbf{s}; \bs{\theta}_{\text{dec}}) = \prod\nolimits_{n=0}^{N-1} p(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_{\text{dec}}),
\label{factorized_posterior_VAE}
\end{equation}
where we considered $p(\mathbf{z}_0 | \mathbf{z}_{-1}, \mathbf{s} ; \bs{\theta}_{\text{dec}}) = p(\mathbf{z}_0 | \mathbf{s} ; \bs{\theta}_{\text{dec}})$. The variational distribution $q(\mathbf{z} | \mathbf{s}; \bs{\theta}_\text{enc})$ is naturally also expressed as:
\begin{equation}
q(\mathbf{z} | \mathbf{s}; \bs{\theta}_\text{enc}) = \prod\nolimits_{n=0}^{N-1} q(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_\text{enc}).
\label{factorized_variational_distribution_VAE}
\end{equation}
In this work, $q(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_\text{enc})$ denotes the probability density function (pdf) of the following Gaussian \emph{inference model}:
\begin{equation}
\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} \sim \mathcal{N}\big(\bs{\mu}_{\mbf{z},n}(\mathbf{z}_{0:n-1}, \mathbf{s}), \diag\big\{ \mbf{v}_{\mbf{z},n}(\mathbf{z}_{0:n-1}, \mathbf{s}) \big\}\big),
\label{variational_distribution_VAE}
\end{equation}
where $\{ \bs{\mu}_{\mbf{z},n}, \mbf{v}_{\mbf{z},n} \}(\mathbf{z}_{0:n-1}, \mathbf{s}) \in \mbb{R}^L \times \mbb{R}_+^L$
will be defined by means of an \emph{encoder} neural network.
\paragraph{Inference model for the BRNN generative speech model} For the BRNN generative speech model, the parameters of the variational distribution in \eqref{variational_distribution_VAE} are defined by
\begin{align}
\{ \bs{\mu}_{\mbf{z},n}, \mbf{v}_{\mbf{z},n} \}(\mathbf{z}_{0:n-1}, \mathbf{s}) = \bs{\varphi}_{\text{enc},n}^{\text{BRNN}}(\mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_\text{enc}),
\end{align}
where $\bs{\varphi}_{\text{enc},n}^{\text{BRNN}}(\cdot, \cdot \,; \bs{\theta}_\text{enc}) : \mathbb{R}^{L \times n} \times \mathbb{C}^{F \times N} \mapsto \mathbb{R}^L \times \mathbb{R}_+^L$ denotes the output at time frame $n$ of a neural network whose parameters are denoted by $\bs{\theta}_\text{enc}$. It is composed of:
\begin{enumerate}[leftmargin=*]
\item \emph{``Prediction block''}: a causal recurrent block processing $\mbf{z}_{0:n-1}$;
\item \emph{``Observation block''}: a bidirectional recurrent block processing the complete sequence of STFT speech time frames $\mbf{s}$;
\item \emph{``Update block''}: a feed-forward fully-connected block processing the outputs at time-frame $n$ of the two previous blocks.
\end{enumerate}
If we want to sample from $q(\mathbf{z} | \mathbf{s}; \bs{\theta}_\text{enc})$ in \eqref{factorized_variational_distribution_VAE}, we have to sample each $\mathbf{z}_n$ recursively, starting from $n=0$ up to $N-1$. Interestingly, the posterior is formed by running \emph{forward} over the latent vectors, and both \emph{forward} and \emph{backward} over the input sequence of STFT speech time-frames. In other words, the latent vector at a given time frame is inferred by taking into account not only the latent vectors at the previous time steps, but also all the speech STFT frames at the current, past and future time steps. The anti-causal relationships were not taken into account in the RVAE model \cite{chung2015recurrent}.
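A minimal sketch of this recursive (ancestral) sampling, with a toy stand-in for the encoder (in the paper, the prediction/observation/update blocks are neural networks; the toy function and dimensions below are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 2, 5  # toy latent dimension and sequence length

def encoder(z_past, s):
    """Toy stand-in for phi_enc,n: returns (mu, v) from past latents and s."""
    m = z_past.sum(axis=0) if len(z_past) else np.zeros(L)
    return 0.1 * m, np.ones(L)

s = rng.standard_normal((4, N))  # stand-in for the STFT frames
z = []
for n in range(N):  # ancestral sampling: draw z_0, then z_1 | z_0, ...
    mu, v = encoder(np.array(z).reshape(-1, L), s)
    z.append(mu + np.sqrt(v) * rng.standard_normal(L))
z = np.stack(z)
assert z.shape == (N, L)
```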
\paragraph{Inference model for the RNN generative speech model} Using the fact that $\mathbf{z}_n$ is conditionally independent of all other nodes in Fig.~\ref{fig:RNN_speech_model} given its Markov blanket (defined as the set of parents, children and co-parents of that node) \cite{Bishop}, \eqref{factorized_posterior_VAE} can be simplified as:
\begin{equation}
p(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_{\text{dec}}) = p(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s}_{n:N-1} ; \bs{\theta}_{\text{dec}}),
\end{equation}
where $\mbf{s}_{n:N-1} = \{\mbf{s}_{n'} \in \mathbb{C}^F \}_{n'=n}^{N-1}$. This conditional independence also applies to the variational distribution in \eqref{variational_distribution_VAE}, whose parameters are now given by:
\begin{align}
\{ \bs{\mu}_{\mbf{z},n}, \mbf{v}_{\mbf{z},n} \}(\mathbf{z}_{0:n-1}, \mathbf{s}) = \bs{\varphi}^{\text{RNN}}_{\text{enc},n}(\mathbf{z}_{0:n-1}, \mathbf{s}_{n:N-1} ; \bs{\theta}_\text{enc}),
\end{align}
where $\bs{\varphi}^{\text{RNN}}_{\text{enc},n}(\cdot, \cdot \,; \bs{\theta}_\text{enc}) : \mathbb{R}^{L \times n} \times \mathbb{C}^{F \times (N-n)} \mapsto \mathbb{R}^L \times \mathbb{R}_+^L$ denotes the same neural network as for the BRNN-based model, except that the observation block is not a bidirectional recurrent block anymore, but an \emph{anti-causal} recurrent one.
The full approximate posterior is now formed by running \emph{forward} over the latent vectors, and \emph{backward} over the input sequence of STFT speech time-frames.
\paragraph{Inference model for the FFNN generative speech model} For the same reason as before, by studying the Markov blanket of $\mathbf{z}_n$ in Fig.~\ref{fig:FFNN_speech_model}, the dependencies in \eqref{factorized_posterior_VAE} can be simplified as follows:
\begin{equation}
p(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{s} ; \bs{\theta}_{\text{dec}}) = p(\mathbf{z}_n | \mathbf{s}_n ; \bs{\theta}_{\text{dec}}).
\end{equation}
This simplification also applies to the variational distribution in \eqref{variational_distribution_VAE}, whose parameters are now given by:
\begin{equation}
\{ \bs{\mu}_{\mbf{z},n}, \mbf{v}_{\mbf{z},n} \}(\mathbf{z}_{0:n-1}, \mathbf{s}) = \bs{\varphi}^{\text{FFNN}}_{\text{enc}}(\mbf{s}_n ; \bs{\theta}_\text{enc}),
\end{equation}
where $\bs{\varphi}^{\text{FFNN}}_{\text{enc}}(\cdot \,; \bs{\theta}_\text{enc}) : \mathbb{C}^F \mapsto \mathbb{R}^L \times \mathbb{R}_+^L$ denotes the output of an FFNN. Such an architecture was used in \cite{bando2017statistical,Leglaive_MLSP18, Leglaive_ICASSP2019b, parienteInterspeech19, BayesianMVAE, Leglaive_ICASSP2019a,fontaine_cauchy_MVAE}.
This is the only case where, from the approximate posterior, we can sample all latent vectors in parallel for all time frames, without further approximation.
Here also, the mean and variance vectors in the inference model \eqref{variational_distribution_VAE} are not made explicitly dependent on the encoder network parameters $\bs{\theta}_{\text{enc}}$, but they clearly are.
\paragraph{Variational free energy} Given the generative model \eqref{speech_generative_model} and the general inference model \eqref{variational_distribution_VAE}, we can expand the VFE defined in \eqref{varFreeEnergy} as follows (derivation details are provided in Appendix~\ref{appendix:VFE}):
\begin{align}
\mathcal{L}_{\mbf{s}}(\bs{\theta}_\text{enc}, \bs{\theta}_{\text{dec}}) \overset{c}{=}& -\sum\limits_{f=0}^{F-1} \sum\limits_{n=0}^{N-1} \mbb{E}_{q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})} \Big[ d_{\text{IS}} \left( |s_{fn}|^2, v_{\mbf{s},fn}(\mbf{z}) \right) \Big] \nonumber \\
&\hspace{0cm}+ \frac{1}{2}\sum\limits_{l=0}^{L-1} \sum\limits_{n=0}^{N-1} \mbb{E}_{q(\mbf{z}_{0:n-1} | \mbf{s} ; \bs{\theta}_\text{enc})} \Big[ \ln\big(v_{\mbf{z},ln}(\mathbf{z}_{0:n-1}, \mathbf{s}) \big) \nonumber \\
&\hspace{0cm} \qquad - \mu_{\mbf{z},ln}^2(\mathbf{z}_{0:n-1}, \mathbf{s}) - v_{\mbf{z},ln}(\mathbf{z}_{0:n-1}, \mathbf{s}) \Big],
\label{varFreeEnergy_developed_IS}
\end{align}
where $\overset{c}{=}$ denotes equality up to an additive constant w.r.t $\bs{\theta}_\text{enc}$ and $\bs{\theta}_{\text{dec}}$, $d_{\text{IS}}(a, b) = a/b -\ln(a/b) - 1$ is the Itakura-Saito (IS) divergence \cite{ISNMF}, $s_{fn} \in \mbb{C}$ and $v_{\mbf{s},fn}(\mbf{z}) \in \mbb{R}_+$ denote respectively the $f$-th entries of $\mbf{s}_n$ and $\mbf{v}_{\mbf{s},n}(\mbf{z})$, and $\mu_{\mbf{z},ln}(\mathbf{z}_{0:n-1}, \mbf{s}) \in \mbb{R}$ and $v_{\mbf{z},ln}(\mathbf{z}_{0:n-1}, \mbf{s}) \in \mbb{R}_+$ denote respectively the $l$-th entry of $\bs{\mu}_{\mbf{z},n}(\mathbf{z}_{0:n-1}, \mbf{s})$ and $\mbf{v}_{\mbf{z},n}(\mathbf{z}_{0:n-1}, \mbf{s})$.
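For concreteness, the IS divergence above can be implemented and sanity-checked in a few lines of NumPy (the helper name is ours):

```python
import numpy as np

def d_is(a, b):
    """Itakura-Saito divergence d_IS(a, b) = a/b - ln(a/b) - 1, elementwise."""
    r = a / b
    return r - np.log(r) - 1.0

x = np.array([1.0, 2.0, 4.0])
# Zero iff a == b, positive otherwise.
assert np.allclose(d_is(x, x), 0.0)
assert np.all(d_is(2 * x, x) > 0)
# Scale invariance, a property that makes it well suited to spectrogram fitting:
assert np.allclose(d_is(3.0 * x, 3.0 * 2.0 * x), d_is(x, 2.0 * x))
```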
The expectations in \eqref{varFreeEnergy_developed_IS} are analytically intractable, so we compute unbiased Monte Carlo estimates using a set $\{ \mbf{z}^{(r)} \}_{r=1}^R$ of i.i.d. realizations drawn from $q(\mbf{z} | \mbf{s} ; \bs{\theta}_\text{enc})$. For that purpose, we use the ``reparametrization trick'' introduced in \cite{kingma2014auto}.
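A minimal sketch of the reparametrization trick for the diagonal Gaussian inference model \eqref{variational_distribution_VAE}: the sample is written as a deterministic, differentiable function of the mean and variance, with the randomness pushed into an auxiliary standard Gaussian variable (dimensions below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparametrize(mu, v, rng):
    """Draw z ~ N(mu, diag(v)) as z = mu + sqrt(v) * eps, eps ~ N(0, I).

    Written this way, z is a differentiable function of (mu, v), so the
    gradient of a Monte Carlo VFE estimate can flow back into the encoder.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.sqrt(v) * eps

mu = np.zeros(16)
v = np.full(16, 4.0)
samples = np.stack([reparametrize(mu, v, rng) for _ in range(20000)])
# Empirical moments match N(0, diag(4)) up to Monte Carlo error.
assert abs(samples.mean()) < 0.1
assert abs(samples.var() - 4.0) < 0.2
```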
The obtained objective function is differentiable w.r.t. both $\bs{\theta}_{\text{dec}}$ and $\bs{\theta}_\text{enc}$, and it can be optimized using gradient-ascent-based algorithms.
Finally, we recall that in the final expression of the VFE, there should actually be an additional sum over the $I$ i.i.d. sequences in the training dataset $\{\mbf{s}^{(i)}\}_{i=1}^{I}$. For stochastic or mini-batch optimization algorithms, we would only consider a subset of these training sequences for each update of the model parameters.
\section{Speech enhancement: Model and algorithm}
\subsection{Speech, noise and mixture model}
The deep generative \emph{clean} speech model along with its parameters learning procedure were defined in the previous section. For speech enhancement, we now consider a Gaussian noise model based on an NMF parametrization of the variance \cite{ISNMF}. Independently for all time frames $n \in \{0,...,N-1\}$, we have:
\begin{equation}
\mbf{b}_n \sim \mathcal{N}_c(\mbf{0}, \diag\{ \mbf{v}_{\mbf{b}, n} \}),
\end{equation}
where $\mbf{v}_{\mbf{b}, n} = (\mbf{W}_b \mbf{H}_b)_{:,n}$ with $\mbf{W}_b \in \mbb{R}_+^{F \times K}$ and $\mbf{H}_b \in \mbb{R}_+^{K \times N}$.
The noisy mixture signal is modeled as $\mbf{x}_n = \sqrt{g_n}\mbf{s}_n + \mbf{b}_n$, where $g_n \in \mbb{R}_+$ is a gain parameter scaling the level of the speech signal at each time frame \cite{Leglaive_MLSP18}. We further consider the independence of the speech and noise signals so that the likelihood is defined by:
\begin{equation}
\mbf{x}_n \mid \mbf{z} \sim \mathcal{N}_c\left(\mbf{0}, \diag\{ \mbf{v}_{\mbf{x},n}(\mbf{z}) \} \right),
\label{mixture_likelihood}
\end{equation}
where $\mbf{v}_{\mbf{x},n}(\mbf{z}) = g_n \mbf{v}_{\mbf{s},n}(\mbf{z}) + \mbf{v}_{\mbf{b}, n}$.
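The shapes involved in the mixture variance model can be sketched as follows (all arrays are random placeholders with toy dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
F, N, K = 6, 4, 3

v_s = rng.random((F, N)) + 0.1   # stand-in for the decoder output v_s,n(z)
g = rng.random(N) + 0.1          # per-frame gains g_n
W_b = rng.random((F, K))
H_b = rng.random((K, N))
v_b = W_b @ H_b                  # NMF noise variance v_b,n = (W_b H_b)_{:,n}
v_x = g[None, :] * v_s + v_b     # v_x,n(z) = g_n v_s,n(z) + v_b,n
assert v_x.shape == (F, N)
assert np.all(v_x > 0)           # variances stay positive by construction
```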
\subsection{Speech enhancement algorithm}
We consider that the speech model parameters $\bs{\theta}_{\text{dec}}$ which have been learned during the training stage are fixed, so we omit them in the rest of this section. We now need to estimate the remaining model parameters $\bs{\phi} = \left\{ \mbf{g}=[g_0,...,g_{N-1}]^\top, \mbf{W}_b, \mbf{H}_b \right\}$ from the observation of the noisy mixture signal $\mbf{x} = \{\mbf{x}_n \in \mathbb{C}^F \}_{n=0}^{N-1}$. However, very similarly as for the training stage (see Section~\ref{subsec:training}), the marginal likelihood $p(\mbf{x}; \bs{\phi})$ is intractable, and we resort again to variational inference. The VFE at test time is defined by:
\begin{equation}
\mathcal{L}_{\mbf{x}}(\bs{\theta}_\text{enc}, \bs{\phi}) \hspace{-.03cm}=\hspace{-.03cm} \mbb{E}_{q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc})} \hspace{-.03cm}\left[\hspace{-.03cm} \ln p(\mbf{x} | \mathbf{z} ; \bs{\phi}) \hspace{-.03cm}\right] - D_{\text{KL}}\big(\hspace{-.03cm} q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc}) \parallel p(\mbf{z} ) \hspace{-.05cm}\big).
\label{VFE_test_time}
\end{equation}
Following a VEM algorithm \cite{neal1998view}, we will maximize this criterion alternately w.r.t $\bs{\theta}_\text{enc}$ at the E-step, and $\bs{\phi}$ at the M-step. Note that here also, we have $\mathcal{L}_{\mbf{x}}(\bs{\theta}_\text{enc}, \bs{\phi}) \le \ln p(\mbf{x}; \bs{\phi})$ with equality if and only if $q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc}) = p(\mbf{z} | \mbf{x}; \bs{\phi})$.
\paragraph{Variational E-Step with fine-tuned encoder} We consider a fixed-form variational inference strategy, reusing the inference model learned during the training stage. More precisely, the variational distribution $q(\mathbf{z} | \mathbf{x}; \bs{\theta}_\text{enc})$ is defined exactly as $q(\mathbf{z} | \mathbf{s}; \bs{\theta}_\text{enc})$ in \eqref{variational_distribution_VAE} and \eqref{factorized_variational_distribution_VAE} except that $\mbf{s}$ is replaced with $\mbf{x}$. Remember that the mean and variance vectors $\bs{\mu}_{\mbf{z},n}(\cdot, \cdot)$ and $\mbf{v}_{\mbf{z},n}( \cdot, \cdot )$ in \eqref{variational_distribution_VAE} correspond to the VAE encoder network, whose parameters $\bs{\theta}_\text{enc}$ were estimated along with the parameters $\bs{\theta}_{\text{dec}}$ of the generative speech model. During the training stage, this encoder network took \emph{clean speech} signals as input. It is now \emph{fine-tuned} with a \emph{noisy speech} signal as input. For that purpose, we maximize $\mathcal{L}_{\mbf{x}}(\bs{\theta}_\text{enc}, \bs{\phi})$ w.r.t $\bs{\theta}_\text{enc}$ only, with fixed $\bs{\phi}$. This criterion takes the exact same form as \eqref{varFreeEnergy_developed_IS} except that $|s_{fn}|^2$ is replaced with $|x_{fn}|^2$ where $x_{fn} \in \mbb{C}$ denotes the $f$-th entry of $\mbf{x}_n$, $\mbf{s}$ is replaced with $\mbf{x}$, and $v_{\mbf{s},fn}(\mbf{z})$ is replaced with $v_{\mbf{x},fn}(\mbf{z})$, the $f$-th entry of $\mbf{v}_{\mbf{x},n}(\mbf{z})$ which was defined along with \eqref{mixture_likelihood}. Exactly as in Section~\ref{subsec:training}, intractable expectations are replaced with a Monte Carlo estimate and the VFE is maximized w.r.t. $\bs{\theta}_\text{enc}$ by means of gradient-based optimization techniques. 
In summary, we use the framework of VAEs \cite{kingma2014auto} both at training for estimating $\bs{\theta}_{\text{dec}}$ and $\bs{\theta}_\text{enc}$ from clean speech signals, and at testing for fine-tuning $\bs{\theta}_\text{enc}$ from the noisy speech signal, and with $\bs{\theta}_{\text{dec}}$ fixed. The idea of refitting the encoder was also proposed in \cite{mattei2018refit} in a different context.
\paragraph{Point-estimate E-Step} In the experiments, we will compare this variational E-step with an alternative proposed in \cite{KameokaNeuralComputation2019}, which consists in relying only on a point estimate of the latent variables. In our framework, this approach can be understood as assuming that the approximate posterior $q(\mathbf{z} | \mathbf{x}; \bs{\theta}_\text{enc})$ is a Dirac delta function centered at the maximum a posteriori estimate $\mathbf{z}^\star$. Maximization of $p(\mbf{z} | \mbf{x} ; \bs{\phi}) \propto p(\mbf{x} | \mbf{z} ; \bs{\phi}) p(\mbf{z})$ w.r.t $\mbf{z}$ can be achieved by means of gradient-based techniques, where backpropagation is used to compute the gradient w.r.t. the input of the generative decoder network.
\paragraph{M-Step} For both the VEM algorithm and the point-estimate alternative, the M-Step consists in maximizing $\mathcal{L}_{\mbf{x}}(\bs{\theta}_\text{enc}, \bs{\phi})$ w.r.t. $\bs{\phi}$ under a non-negativity constraint and with $\bs{\theta}_\text{enc}$ fixed. Replacing intractable expectations with Monte Carlo estimates, the M-step can be recast as minimizing the following criterion \cite{Leglaive_MLSP18}:
\begin{align}
\mathcal{C}(\bs{\phi}) &= \sum\nolimits_{r=1}^{R} \sum\nolimits_{f=0}^{F-1} \sum\nolimits_{n=0}^{N-1} d_{\text{IS}}\left(|x_{fn}|^2, v_{\mbf{x},fn}\left(\mathbf{z}^{(r)}\right)\right),
\label{cost_M_Step}
\end{align}
where $v_{\mbf{x},fn}(\mathbf{z}^{(r)})$ implicitly depends on $\bs{\phi}$. For the VEM algorithm, $\{\mbf{z}^{(r)}\}_{r=1}^R$ is a set of i.i.d. sequences drawn from $q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc})$ using the current value of the parameters $\bs{\theta}_\text{enc}$. For the point estimate approach, $R=1$ and $\mbf{z}^{(1)}$ corresponds to the maximum a posteriori estimate. This optimization problem can be tackled using a majorize-minimize approach \cite{hunter2004tutorial}, which leads to the multiplicative update rules derived in \cite{Leglaive_MLSP18} using the methodology proposed in \cite{fevotte2011algorithms} (these updates are recalled in Appendix~\ref{appendix:Mstep}).
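The exact multiplicative updates used in the paper (which also involve the gains $\mbf{g}$) are recalled in its appendix. As an illustration of the majorize-minimize principle on a simpler problem, here is a sketch of multiplicative updates for plain IS-NMF, i.e.\ fitting $\mbf{W}_b\mbf{H}_b$ alone to a power spectrogram, using the exponent-$1/2$ variant of the updates for which a monotonicity guarantee is known; this is not the paper's exact M-step:

```python
import numpy as np

rng = np.random.default_rng(3)
F, N, K = 8, 10, 2
V = rng.random((F, N)) + 0.1  # stand-in for the power spectrogram |x_fn|^2
W = rng.random((F, K)) + 0.1
H = rng.random((K, N)) + 0.1

def is_cost(V, W, H):
    Vh = W @ H
    return np.sum(V / Vh - np.log(V / Vh) - 1.0)

costs = [is_cost(V, W, H)]
for _ in range(50):
    Vh = W @ H
    W *= (((V / Vh**2) @ H.T) / ((1.0 / Vh) @ H.T)) ** 0.5
    Vh = W @ H
    H *= ((W.T @ (V / Vh**2)) / (W.T @ (1.0 / Vh))) ** 0.5
    costs.append(is_cost(V, W, H))
# The IS cost decreases monotonically under these MM-derived updates.
assert all(c1 <= c0 + 1e-9 for c0, c1 in zip(costs, costs[1:]))
```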
\paragraph{Speech reconstruction} Given the estimated model parameters, we want to compute the posterior mean of the speech coefficients:
\begin{align}
\hat{s}_{fn} &= \mathbb{E}_{p(s_{fn} \mid x_{fn} ; \bs{\phi})} [s_{fn}] = \mathbb{E}_{p(\mathbf{z} \mid \mathbf{x} ; \bs{\phi}) }\left[\frac{\sqrt{g_n}v_{\mbf{s},fn}(\mathbf{z})}{v_{\mbf{x},fn}(\mathbf{z})}\right]x_{fn}.
\label{Wiener_filtering}
\end{align}
In practice, the speech estimate is actually given by the scaled coefficients $\sqrt{g_n}\hat{s}_{fn}$. Note that \eqref{Wiener_filtering} corresponds to a Wiener-like filtering, averaged over all possible realizations of the latent variables according to their posterior distribution. As before, this expectation is intractable, but we approximate it by a Monte Carlo estimate using samples drawn from $q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc})$ for the VEM algorithm. For the point-estimate approach, $p(\mbf{z}|\mbf{x}; \bs{\phi})$ is approximated by a Dirac delta function centered at the maximum a posteriori estimate.
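A sketch of the Monte Carlo estimate of this Wiener-like filter, with random placeholders standing in for the decoder outputs $\mbf{v}_{\mbf{s},n}(\mbf{z}^{(r)})$ on the posterior samples (all dimensions are toy values):

```python
import numpy as np

rng = np.random.default_rng(4)
F, N, R = 5, 3, 8
x = rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N))
g = np.full(N, 0.5)                      # per-frame gains
v_b = np.full((F, N), 1.0)               # noise variances

# Stand-in for the decoder output on R posterior samples z^{(r)}.
v_s_samples = rng.random((R, F, N)) + 0.1
# Average the Wiener-like gain over the posterior samples, then filter x.
gains = np.mean(
    [np.sqrt(g) * v_s / (g * v_s + v_b) for v_s in v_s_samples], axis=0
)
s_hat = gains * x
assert s_hat.shape == (F, N)
assert np.all(np.abs(gains) < 1.0)       # the gain attenuates, never amplifies
```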
In the case of the RNN- and BRNN-based generative speech models (see Section~\ref{subsec:VAE_speech_models}), it is important to remember that sampling from $q(\mathbf{z} | \mathbf{x}; \bs{\theta}_\text{enc})$ is actually done recursively, by sampling $q(\mathbf{z}_n | \mathbf{z}_{0:n-1}, \mathbf{x} ; \bs{\theta}_\text{enc})$ from $n=0$ to $N-1$ (see Section~\ref{subsec:training}). Therefore, there is a \emph{posterior temporal dynamic} that will be propagated from the latent vectors to the estimated speech signal, through the expectation in the Wiener-like filtering of \eqref{Wiener_filtering}. This temporal dynamic is expected to be beneficial compared with the FFNN generative speech model, where the speech estimate is built independently for all time frames.
\section{Experiments}
\paragraph{Dataset} The deep generative speech models are trained using around 25 hours of clean speech data, from the ``si\_tr\_s'' subset of the Wall Street Journal (WSJ0) dataset \cite{WSJ0}. Early stopping with a patience of 20 epochs is performed using the subset ``si\_dt\_05'' (around 2 hours of speech). We removed the trailing and leading silences for each utterance. For testing, we used around 1.5 hours of noisy speech, corresponding to 651 synthetic mixtures. The clean speech signals are taken from the ``si\_et\_05'' subset of WSJ0 (unseen speakers), and the noise signals from the ``verification'' subset of the QUT-NOISE dataset \cite{dean2015qut}. Each mixture is created by uniformly sampling a noise type among \{``café'', ``home'', ``street'', ``car''\} and a signal-to-noise ratio (SNR) among \{-5, 0, 5\}~dB. The intensity of each signal for creating a mixture at a given SNR is computed using the ITU-R BS.1770-4 protocol \cite{ITU}. Note that an SNR computed with this protocol is here 2.5~dB \emph{lower} (on average) than with a simple sum of the squared signal coefficients. Finally, all signals have a 16~kHz-sampling rate, and the STFT is computed using a 64-ms sine window (i.e.~$F=513$) with 75\%-overlap.
\paragraph{Network architecture and training parameters} All details regarding the encoder and decoder network architectures and their training procedure are provided in Appendix~\ref{appendix:architectures}.
\paragraph{Speech enhancement parameters} The dimension of the latent space for the deep generative speech model is fixed to $L = 16$. The rank of the NMF-based noise model is fixed to $K = 8$. $\mbf{W}_b$ and $\mbf{H}_b$ are randomly initialized (with a fixed seed to ensure fair comparisons), and $\mathbf{g}$ is initialized with an all-ones vector. For computing \eqref{cost_M_Step}, we fix the number of samples to $R=1$, which is also the case for building the Monte Carlo estimate of \eqref{Wiener_filtering}. The VEM algorithm and its ``point-estimate'' alternative (referred to as PEEM) are run for 500 iterations. We used Adam \cite{kingma2014adam} with a step size of $10^{-2}$ for the gradient-based iterative optimization technique involved at the E-step. For the FFNN deep generative speech model, it was found that an insufficient number of gradient steps had a strong negative impact on the results, so it was fixed to 10. For the (B)RNN model, this choice had a much lesser impact, so it was fixed to 1, thus limiting the computational burden.
\begin{table}[t]
\centering
\resizebox{.95\linewidth}{!}{
\begin{tabular}{cc|ccc}
Algorithm & Model & SI-SDR (dB) & PESQ & ESTOI \\\hline
MCEM \cite{Leglaive_MLSP18} & FFNN & 5.4 $\pm$ 0.4 & 2.22 $\pm$ 0.04 & 0.60 $\pm$ 0.01 \\\hline
\multirow{3}{*}{PEEM} & FFNN & 4.4 $\pm$ 0.4 & 2.21 $\pm$ 0.04 & 0.58 $\pm$ 0.01 \\
& RNN & 5.8 $\pm$ 0.5 & {\color{gray}\textbf{2.33}} $\pm$ 0.04 & 0.63 $\pm$ 0.01 \\
& BRNN & 5.4 $\pm$ 0.5 & {\color{gray}\textbf{2.30}} $\pm$ 0.04 & 0.62 $\pm$ 0.01 \\\hline
\multirow{3}{*}{VEM} & FFNN & 4.4 $\pm$ 0.4 & 1.93 $\pm$ 0.05 & 0.53 $\pm$ 0.01 \\
& RNN & {\color{gray}\textbf{6.8}} $\pm$ 0.4 & {\color{gray}\textbf{2.33}} $\pm$ 0.04 & \textbf{0.67} $\pm$ 0.01 \\
& BRNN & \textbf{6.9} $\pm$ 0.5 & \textbf{2.35} $\pm$ 0.04 & \textbf{0.67} $\pm$ 0.01 \\\hline
\multicolumn{2}{c|}{noisy mixture} & -2.6 $\pm$ 0.5 & 1.82 $\pm$ 0.03 & 0.49 $\pm$ 0.01 \\
\multicolumn{2}{c|}{oracle Wiener filtering} & 12.1 $\pm$ 0.3 & 3.13 $\pm$ 0.02 & 0.88 $\pm$ 0.01 \\
\end{tabular}
}%
\caption{Median results and confidence intervals.}
\label{table:results}
\vspace{-.5cm}
\end{table}
\paragraph{Results} We compare the performance of the VEM and PEEM algorithms for the three types of deep generative speech model. For the FFNN model only, we also compare with the Monte Carlo EM (MCEM) algorithm proposed in \cite{Leglaive_MLSP18} (which cannot be straightforwardly adapted to the (B)RNN model). The enhanced speech quality is evaluated in terms of scale-invariant signal-to-distortion ratio (SI-SDR) in dB \cite{LeRoux_ICASSP19}, perceptual evaluation of speech quality (PESQ) measure (between -0.5 and 4.5) \cite{rix2001perceptual} and extended short-time objective intelligibility (ESTOI) measure (between 0 and 1) \cite{taal2011algorithm}. For all measures, the higher the better. The median results for all SNRs along with their confidence interval are presented in Table~\ref{table:results}. Best results are in black-color-bold font, while gray-color-bold font indicates results that are not significantly different. As a reference, we also provide the results obtained with the noisy mixture signal as the speech estimate, and with oracle Wiener filtering. Note that oracle results are here particularly low, which shows the difficulty of the dataset. Oracle SI-SDR is for instance 7~dB lower than the one in \cite{Leglaive_MLSP18}. Therefore, the VEM and PEEM results should not be directly compared with the MCEM results provided in \cite{Leglaive_MLSP18}, but only with the ones provided here.
From Table~\ref{table:results}, we can draw the following conclusions: First, we observe that for the FFNN model, the VEM algorithm performs poorly. In this setting, the performance measures actually strongly decrease after the first 50-to-100 iterations of the algorithm. We did not observe this behavior for the (B)RNN model. We argue that the posterior temporal dynamic over the latent variables helps the VEM algorithm find a satisfactory estimate of the overparametrized posterior model $q(\mbf{z} | \mbf{x} ; \bs{\theta}_\text{enc})$. Second, the superiority of the RNN model over the FFNN one is confirmed for all algorithms in this comparison. However, the bidirectional model (BRNN) does not perform significantly better than the unidirectional one. Third, the VEM algorithm outperforms the PEEM one, which shows the interest of using the full (approximate) posterior distribution of the latent variables and not only the maximum-a-posteriori point estimate for estimating the noise and mixture model parameters. Audio examples and code are available online \cite{companion_website}.
\vspace{-.15cm}
\section{Conclusion}
\vspace{-.15cm}
In this work, we proposed a recurrent deep generative speech model and a variational EM algorithm for speech enhancement. We showed that introducing a temporal dynamic is clearly beneficial in terms of speech enhancement. Future works include developing a Markov chain EM algorithm to measure the quality of the proposed variational approximation of the intractable true posterior distribution.
\balance
\bibliographystyle{IEEEbib_initial}
\section*{Introduction}
There are two particular situations where the period map plays an essential role for studying moduli spaces, namely principally polarized abelian varieties and (lattice) polarized $K3$ surfaces. In these cases, the period domains are Hermitian symmetric domains and the period maps are both injective and dominant. It is an interesting problem to find more examples where the period maps are injective and the images lie in certain Mumford-Tate subdomains which are locally Hermitian symmetric (but the Griffiths infinitesimal period relations may be non-trivial on the ambient period domains). We mention the examples previously studied by Allcock, Carlson, and Toledo \cite{ACT2, ACT}, Kondo \cite{Kondo}, Borcea \cite{Borcea}, Voisin \cite{Voisin_cy}, Rohde \cite{Rohde}, Garbagnati and van Geemen (\cite{vGG_cyshimura}).
The general problem of determining whether the period map is injective is called the Torelli problem. There are four types of Torelli problem (we follow the terminology of \cite{Catanese_torelli}): local Torelli (whether the differential of the period map is injective), infinitesimal Torelli (local Torelli for the semi-universal deformation), global Torelli (whether the period map is injective) and generic global Torelli (whether the period map is injective over an open dense subset).
Various Torelli theorems have been proved for a large class of varieties (see for example \cite{Catanese_torelli}). However, Kunev \cite{Kynev} constructed a counterexample to the infinitesimal Torelli and generic global Torelli problems (see also \cite{Catanese_kunev, Catanese_torellifail, Todorov_kunev}). Let us briefly recall the construction. Let $C_1$, $C_2$ be two smooth plane cubic curves intersecting transversely and $L$ be a general line. Let $X$ be the $(\mathbb{Z}/2\mathbb{Z})^2$-cover of $\mathbb{P}^2$ branched along $C_1+C_2+L$. Then $X$ is a minimal algebraic surface with $q(X)=0$, $p_g(X)=1$ and $K_X^2=1$. Following \cite{Catanese_kunev}, such surfaces $X$ are {\sl special Kunev surfaces} whose bicanonical maps are Galois covers of $\mathbb{P}^2$. The infinitesimal period map and period map for special Kunev surfaces both have $2$-dimensional fibers (the rough reason is that the period map only sees the Hodge structures of the intermediate $K3$ surfaces obtained as the desingularizations of the double planes along $C_1+C_2$).
One may ask whether it is possible to modify the construction or the period map for Kunev surfaces to obtain a Torelli theorem. Usui \cite{Usui_vmhs} and Letizia \cite{Letizia} considered the complement of the canonical curve $\Lambda \subset X$ and the mixed Hodge structure $H^2(X-\Lambda)$, and proved infinitesimal mixed Torelli and generic global mixed Torelli for special Kunev surfaces $X$ respectively. Our idea is to modify the branch data (see also \cite[Def. 2.1]{Pardini_cover}) $C_1+C_2+L$. Specifically, we consider the $(\mathbb{Z}/2\mathbb{Z})^2$-cover $S$ of $\mathbb{P}^2$ along a smooth quintic $C$ together with two generic lines $L_0$ and $L_1$. The surfaces $S$ are minimal surfaces with $q(S)=0$, $p_g(S)=2$ and $K_S^2=1$ which have been studied by Horikawa \cite{Horikawa_II}. We shall call such bidouble covers $S$ {\sl special Horikawa surfaces}. We should remark that these surfaces $S$ are also mentioned in the recent preprint \cite{Garbagnati_doublecoverk3} by Garbagnati. But her perspective (classify the possible branch loci of a smooth double cover of a $K3$ surface) is quite different from ours.
Let us explain why we want to modify the branch data in this way. On one hand, there are two (lattice polarized) $K3$ surfaces hidden in the construction of a special Horikawa surface $S$ (this is also observed in \cite{Garbagnati_doublecoverk3}). Namely, one resolves the singularities of the double cover of $\mathbb{P}^2$ branched along $C+L_1$ (resp. $C+L_0$) and gets $X_0$ (resp. $X_1$) which is a $K3$ surface. On the other hand, it is also natural to study the action of the Galois group $(\mathbb{Z}/2\mathbb{Z})^2$ on the periods of the bidouble covers $S$. We shall show that the eigenperiods (cf. \cite[\S7]{DK_ball}) of $S$ are determined by the Hodge structures of the $K3$ surfaces $X_0$ and $X_1$, and apply the global Torelli theorem for (lattice polarized) $K3$ surfaces (a modified version is needed, see the next paragraph) to prove a generic global Torelli theorem for special Horikawa surfaces.
By the work of Catanese \cite{Catanese_bidouble} and Pardini \cite{Pardini_cover} the isomorphism classes of the bidouble covers $S$ are determined by the branch data $C+L_0+L_1$. This can be used to construct the coarse moduli space $\mathcal{M}$ for special Horikawa surfaces (the moduli of special Kunev surfaces has been constructed in a similar manner, see \cite{Usui_type1kunev}). It also follows that we need the global Torelli theorem \cite[Thm. 4.1]{Laza_n16} for degree $5$ pairs $(C,L)$ consisting of plane quintics $C$ and lines $L$ (up to projective equivalence). A key point is that one needs to choose a suitable arithmetic group as explained in \cite[Prop. 4.22]{Laza_n16}.
A typical way to prove a generic global Torelli is to study the infinitesimal variation of Hodge structure and first prove a variational Torelli theorem (cf. \cite{CDT_variational}). Our approach is different. One advantage is that we are able to describe explicitly the open dense subset over which the period map is injective. We suspect that variational Torelli fails for special Horikawa surfaces (otherwise by op. cit. the global Torelli would hold for any discrete group of the automorphism group of the period domain, as long as the period map is well-defined, which does not seem to be the case; see also \cite{Hayashi_variational}).
After ``labeling'' the lines $L_0$ and $L_1$, we obtain a period map (using the period maps for the degree $5$ pairs $(C,L_0)$ and $(C, L_1)$) from (a double cover of) the moduli $\mathcal{M}$ of special Horikawa surfaces $S$ to a product of two arithmetic quotients of Type IV domains. The period map is generically injective. Therefore, special Horikawa surfaces are along the lines of the examples mentioned at the beginning of the paper. In an ongoing project with Gallardo and Laza, we use this period map as our guide to compactify the moduli space of special Horikawa surfaces.
A few words on the structure of the paper. The construction of special Horikawa surfaces is given in Section \ref{sec_construction}. As is well-known (cf. \cite{Horikawa_II}) the canonical model of an algebraic surface with $q=0$, $p_g=2$ and $K^2=1$ is a degree $10$ hypersurface in the weighted projective space $\mathbb{P}(1,1,2,5)$. We shall give the equations for (the canonical models of) special Horikawa surfaces and use Griffiths residue to study the decomposition of the Hodge structures. The infinitesimal Torelli problem will be discussed in Section \ref{sec_inftorelli}. Usui \cite{Usui_wci} has proved the infinitesimal Torelli theorem for nonsingular weighted complete intersections satisfying certain conditions which will be checked for special Horikawa surfaces. A second proof will also be included which can be viewed as a boundary case of \cite[Thm. 3.1]{Pardini_surface} or \cite[Thm. 4.2]{Pardini_torelli} and might be of independent interest. In Section \ref{sec_globaltorelli} we discuss the generic global Torelli problem for special Horikawa surfaces and prove the main result.
\subsection*{Acknowledgement} The work is partly motivated by the recent project by Green, Griffiths, Laza and Robles on studying degenerations of ``H-surfaces" (which are of general type with $p_g=K^2=2$) using Hodge theory. We thank Griffiths for his interest in this paper. We also thank the referee for the valuable comments. Finally, we are grateful to P. Gallardo and R. Laza for several useful discussions.
\section{Special Horikawa surfaces} \label{sec_construction}
Let $C$ be a smooth plane quintic curve. Let $L_0$, $L_1$ be two distinct lines which intersect $C$ transversely and satisfy $C \cap L_0 \cap L_1 = \emptyset$. We are interested in the bidouble cover $S$ of $\mathbb{P}^2$ branched along $C+L_0+L_1$.
Specifically, the surface $S$ can be constructed in the following way. Take the double cover $\bar{X}_0$ of $\mathbb{P}^2$ branched along the sextic curve $C+ L_1$. The surface $\bar{X}_0$ is a singular $K3$ surface with five $A_1$ singularities. Let $X_0$ be the $K3$ surface obtained by blowing up the singularities (i.e. take the canonical resolution of $\bar{X}_0$). Denote by $E_1, \cdots, E_5$ the exceptional curves on $X_0$ with self-intersection $(-2)$. Set $D_0$ to be the preimage of $L_0$ in $X_0$ and let $D_1\subset X_0$ be the strict transform of $L_1$. Choose $\mathcal{L} := \mathcal{O}_{X_0}(D_1 + E_1 + \cdots + E_5)$. By computing the pull-back of $\mathcal{O}_{\mathbb{P}^2}(1)$ one sees that $D_0 \sim 2D_1 + E_1 + \cdots + E_5$ and hence $\mathcal{L}^{\otimes 2} = \mathcal{O}_{X_0}(D_0 + E_1 + \cdots + E_5)$. Now we take the branched double cover $S_0$ of $X_0$ along $D_0 + E_1 + \cdots + E_5$. The exceptional curves $E_1, \cdots, E_5$ become $(-1)$-curves on $S_0$. Contract these $(-1)$-curves and one obtains a surface $S$. For later use let us denote by $\sigma_0$ the involution of $S$ so that $\bar{X}_0 = S/\sigma_0$. To summarize we have the following diagram (the left one).
\begin{equation} \label{diagram}
\begin{tikzcd}
S_0 \arrow{r}{f_0} \arrow{d}{\varphi_0}
&S \arrow{d}{\psi_0} \arrow[loop right]{}{\sigma_0}\\
X_0 \arrow{r}{g_0} \arrow{d}{\tau_0}
&\bar{X}_0 \arrow{d}{\delta_0} \\
\widetilde{\mathbb{P}}^2 \arrow{r}{h_0} & \mathbb{P}^2
\end{tikzcd}
\hspace{1.5 cm}
\begin{tikzcd}
S \arrow[leftarrow]{r}{f_1} \arrow{d}{\psi_1} \arrow[loop left]{}{\sigma_1}
&S_1 \arrow{d}{\varphi_1}\\
\bar{X}_1 \arrow[leftarrow]{r}{g_1} \arrow{d}{\delta_1}
&X_1 \arrow{d}{\tau_1} \\
{\mathbb{P}^2} \arrow[leftarrow]{r}{h_1} & \widetilde{\mathbb{P}}^2
\end{tikzcd}
\end{equation}
\begin{lemma}
The map $\delta_0 \circ \psi_0: S \rightarrow \mathbb{P}^2$ is a Galois cover with group $(\mathbb{Z}/2\mathbb{Z})^2$.
\end{lemma}
\begin{proof}
By a result of Zariski the fundamental group $\pi_1(\mathbb{P}^2-(C+L_0+L_1))$ is abelian (see for example \cite{Fulton_complement} or \cite[Thm. 1.6]{Catanese_bidouble}). Therefore, the covering map $\delta_0 \circ \psi_0$ corresponds to a normal subgroup and is Galois. Clearly, the Galois group is an abelian group of order $4$. But it can not be $\mathbb{Z}/4\mathbb{Z}$, because otherwise the branch locus $C+L_0+L_1$ would be divisible by $4$ in $\operatorname{Pic}(\mathbb{P}^2)$ (or one can directly check that $\sigma_0^2=\mathrm{id}$ in the group of deck transformations $\mathrm{Deck}(S/\mathbb{P}^2)$).
\end{proof}
Since the Galois group of the bidouble cover $S$ is $(\mathbb{Z}/2\mathbb{Z})^2$, there is a symmetric construction for $S$. Namely, one takes the double cover $\delta_1: \bar{X}_1 \rightarrow \mathbb{P}^2$ branched along $C+L_0$ and resolves the five $A_1$ singularities to obtain a $K3$ surface $g_1: X_1 \rightarrow \bar{X}_1$. Call the exceptional curves $F_1, \cdots, F_5 \subset X_1$. It can be shown that $(\delta_1\circ g_1)^{-1}(L_1) + F_1 + \cdots + F_5$ is divisible by $2$ in $\operatorname{Pic}(X_1)$. Let $S_1$ be the double cover of $X_1$ along $(\delta_1\circ g_1)^{-1}(L_1) + F_1 + \cdots + F_5$. The surface $S$ is obtained by contracting the $(-1)$-curves on $S_1$. Let us use $\sigma_1$ to denote the involution of $S$ with $\bar{X}_1 = S/\sigma_1$. (Note that $\sigma_0$ and $\sigma_1$ generate the Galois group $(\mathbb{Z}/2\mathbb{Z})^2$.) See the right part of Diagram (\ref{diagram}).
\begin{proposition} \label{Hodge numbers}
Let $C$ be a smooth quintic curve and $L_0$, $L_1$ be two distinct lines intersecting $C$ transversely with $C \cap L_0 \cap L_1 = \emptyset$. Let $S$ be the $(\mathbb{Z}/2\mathbb{Z})^2$-cover of $\mathbb{P}^2$ branched along $C+L_0+L_1$. Then the surface $S$ is a minimal algebraic surface of general type with $p_g(S) =2$ and $K_S^2=1$. Moreover, $S$ is simply connected and the canonical bundle $K_S$ is ample.
\end{proposition}
\begin{proof}
Notations as above. Denote by $\pi: S \rightarrow \mathbb{P}^2$ the covering map $\delta_0 \circ \psi_0 = \delta_1 \circ \psi_1$. It is clear from the construction that $S$ is smooth. By \cite[Lem. 3.2]{Morrison_todorov} the canonical bundle $K_S$ of $S$ can be computed as $2K_S \sim \pi^*\mathcal{O}_{\mathbb{P}^2}(1)$. It follows that $K_S$ is ample and $K_S^2=1$. In particular, $K_S$ is big and nef and hence $S$ is a minimal surface of general type. Now let us compute $h^{2,0}(S)$ which clearly equals $h^0(S_0, \mathcal{O}_{S_0}(K_{S_0}))$. Let $\mathcal{L} = \mathcal{O}_{X_0}(D_1 + E_1 + \cdots + E_5)$ be the line bundle associated with the double cover $\varphi_0: S_0 \rightarrow X_0$ of the $K3$ surface $X_0$. Note that $H^0(S_0, \mathcal{O}_{S_0}(K_{S_0})) = H^0(X_0, \varphi_{0*}\mathcal{O}_{S_0}(K_{S_0})) = H^0(X_0, \varphi_{0*}\varphi_0^*(\mathcal{L}))$. Since $\varphi_{0*}\varphi_0^*(\mathcal{L}) = \mathcal{L} \otimes \varphi_{0*}(\mathcal{O}_{S_0}) = \mathcal{L} \otimes (\mathcal{O}_{X_0} \oplus \mathcal{L}^{-1}) = \mathcal{L} \oplus \mathcal{O}_{X_0}$, we have $h^0(S_0, \mathcal{O}_{S_0}(K_{S_0})) = h^0(X_0, \mathcal{O}_{X_0}) + h^0(X_0, \mathcal{L})$. Because $D_1 + E_1 + \cdots + E_5$ is effective and $(D_1 + E_1 + \cdots + E_5)^2 = -2$, the space $H^0(X_0, \mathcal{L})$ is $1$-dimensional. Thus $h^{2,0}(S)=h^0(S_0, K_{S_0})=2$ (this is also mentioned in \cite[Rmk. 8]{Catanese_kunev}). By \cite[Thm. 11, Thm. 14]{Bombieri} or \cite[Prop. 2.7]{Catanese_bidouble} the surface $S$ is simply connected.
\end{proof}
Algebraic surfaces of general type with $p_g=2$ and $K^2=1$ have been studied by Horikawa \cite{Horikawa_II}. We call the bidouble covers $S$ constructed above {\sl special Horikawa surfaces}.
\begin{proposition}
The canonical model of an algebraic surface $Y$ of general type with $q(Y)=0$, $p_g(Y)=2$ and $K_Y^2 = 1$ is a hypersurface of degree $10$ in $\mathbb{P}(1,1,2,5)$. If $K_Y$ is ample, then $Y$ is isomorphic to a quasi-smooth hypersurface of degree $10$ in $\mathbb{P}(1,1,2,5)$.
\end{proposition}
\begin{proof}
This has been proved in \cite[\S 2]{Horikawa_II}. See also \cite[\S VII.7]{BPV}.
\end{proof}
\begin{remark}
A weighted hypersurface $Y \subset \mathbb{P}$ is {\sl quasi-smooth} if the associated affine quasicone is smooth outside the vertex $0$ (cf. \cite{Dolgachev_weighted}). If in addition $\mathrm{codim}_{Y}(Y \cap \mathbb{P}_{\mathrm{sing}}) \geq 2$, then $Y_{\mathrm{sing}} = Y \cap \mathbb{P}_{\mathrm{sing}}$. (In our case, we have $\mathbb{P} = \mathbb{P}(1,1,2,5)$ which has two singular points $[0,0,1,0]$ and $[0,0,0,1]$ and hence $S_{\mathrm{sing}} = S \cap \mathbb{P}_{\mathrm{sing}}$.) Moreover, the cohomology $H^k(Y, \mathbb{C})$ of a quasi-smooth hypersurface $Y$ admits a pure Hodge structure and explicit calculation can be done using (a slightly generalized version of) Griffiths residue.
\end{remark}
We have shown that special Horikawa surfaces $S$ have ample canonical bundles. A natural question is which degree $10$ quasi-smooth hypersurfaces in $\mathbb{P}(1,1,2,5)$ they correspond to.
\begin{proposition} \label{S equation}
Let $S$ be a $(\mathbb{Z}/2\mathbb{Z})^2$-cover of $\mathbb{P}^2$ branched over a smooth quintic $C$ and two general lines $L_0$, $L_1$ (i.e. $S$ is a special Horikawa surface). Then $S$ is isomorphic to a quasi-smooth hypersurface $z^2 = F(x_0^2, x_1^2, y)$ in $\mathbb{P}(1,1,2,5) = \mathrm{Proj}(\mathbb{C}[x_0, x_1, y, z])$ where $F$ is a quintic polynomial.
\end{proposition}
\begin{proof}
Denote by $\pi: S \rightarrow \mathbb{P}^2$ the covering map $\delta_0 \circ \psi_0 = \delta_1 \circ \psi_1$. For $i=0,1$ we let $\Lambda_i$ be the reduced inverse image $\pi^{-1}(L_i)$ of $L_i$ in $S$. By the proof of Proposition \ref{Hodge numbers} we have $\Lambda_i \in |K_S|$. Choose a section $x_i \in H^0(S, \mathcal{O}_S(K_S))$ which cuts out $\Lambda_i$. Clearly $\{x_0,x_1\}$ forms a basis of $H^0(S, \mathcal{O}_S(K_S))$. Since $2K_S \sim \pi^*\mathcal{O}_{\mathbb{P}^2}(1)$, the covering map $\pi$ is defined by a subspace of $|2K_S|$. Choose $y \in H^0(S, \mathcal{O}_S(2K_S))$ so that the subspace of $|2K_S|$ is generated by $x_0^2$, $x_1^2$ and $y$. Note that $y \neq x_0x_1$. By choosing a suitable $z \in H^0(S, \mathcal{O}_S(5K_S))$ we assume that the equation for $S$ is $z^2 = F'(x_0,x_1,y)$ where $F'$ is a weighted homogeneous polynomial of degree $10$ in $\mathbb{P}(1,1,2,5)$. (The defining equation must contain $z^2$ otherwise $S$ is not quasi-smooth. Then we complete the square for $z$ which does not affect the other coordinates.) The ramification locus of $\pi$ consists of three components $(x_0=0)$, $(x_1=0)$ and $(z=0)$ which are mapped to $L_0$, $L_1$ and $C$ respectively. The proposition then follows.
Alternatively, one considers the action of $\sigma_i$ (see Diagram (\ref{diagram})) on the canonical ring $\bigoplus_{m\geq0} H^0(S, \mathcal{O}_S(mK_S))$, hence on $\mathbb{P}(1,1,2,5)$. Note that the involution $\sigma_0$ fixes $\Lambda_0$ pointwise and $\sigma_0(\Lambda_1) = \Lambda_1$. (Similarly for the involution $\sigma_1$.) As in the proof of \cite[Thm. $3$]{Catanese_kunev} one can choose $y, z$ such that $\sigma_0$ acts on $\mathbb{P}(1,1,2,5)$ by $[x_0,x_1,y,z] \mapsto [-x_0,x_1,y,z]$. Since $\sigma_0$ acts on $S$ we assume that the defining equation for $S$ is an eigenvector for $\sigma_0$. It is not difficult to see that only even powers of $x_0$ can appear in the defining equation. Next one normalizes the equation and gets $z^2 = F'(x_0,x_1,y)$. Since $F'$ is a weighted homogeneous polynomial of degree $10$ and only even powers of $x_0$ appear, only even powers of $x_1$ appear as well.
\end{proof}
\begin{remark}
\leavevmode
\begin{enumerate}
\item Because $S$ is quasi-smooth, the quintic $F$ must contain $y^5$ (more generally, see \cite[Thm. 3.3]{FPR}).
\item We have shown that a special Horikawa surface $S$ is isomorphic to a quasi-smooth hypersurface in $\mathbb{P}(1,1,2,5)$ with the equation $z^2 = F(x_0^2, x_1^2, y)$. The covering map $S \rightarrow \mathbb{P}^2$ is given by $[x_0, x_1, y, z] \mapsto [x_0^2, x_1^2, y]$. The Galois group is generated by $\sigma_{x_0}: [x_0, x_1, y, z] \mapsto [-x_0, x_1, y, z]$ and $\sigma_{x_1}: [x_0, x_1, y, z] \mapsto [x_0, -x_1, y, z]$ which correspond to $\sigma_0$ and $\sigma_1$ respectively.
\item Special Horikawa surfaces have moduli dimension $({2+5 \choose 2}-1) + 2 + 2 - 8 = 16$ (dimension of moduli for plane quintics together with two lines minus dimension of $\mathrm{PGL}(3)$). The dimension of the moduli for degree $10$ quasi-smooth hypersurfaces in $\mathbb{P}(1,1,2,5) = \mathrm{Proj}(\mathbb{C}[x_0, x_1, y, z])$ cut out by $z^2 = F(x_0^2, x_1^2, y)$ is ${2+5 \choose 2} - 5 = 16$ (dimension of the quintic polynomials $F$ minus dimension of the subgroup of the automorphism group of $\mathbb{P}(1,1,2,5)$ acting on $z^2 = F(x_0^2, x_1^2, y)$: a semidirect product of the group consisting of the elements $[x_0, x_1, y, z] \mapsto [ax_0, bx_1, cy+dx_0^2+ex_1^2, z]$ with the group generated by $[x_0, x_1, y, z] \mapsto [x_1, x_0, y, z]$).
\end{enumerate}
\end{remark}
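The two dimension counts in the preceding remark can be verified mechanically. The following script is purely a sanity check of the arithmetic (the binomial coefficients and group dimensions are exactly those stated in the remark), not part of the argument:

```python
from math import comb

# Count (i): branch data C + L_0 + L_1 modulo PGL(3).
quintics = comb(2 + 5, 2) - 1   # projective space of plane quintics has dim 20
lines = 2 + 2                   # each line in P^2 varies in a 2-dimensional family
pgl3 = 8                        # dim PGL(3) = 3^2 - 1
count_branch_data = quintics + lines - pgl3

# Count (ii): equations z^2 = F(x0^2, x1^2, y), with F a quintic in three
# variables, modulo the 5-dimensional automorphism group described above.
count_equations = comb(2 + 5, 2) - 5

print(count_branch_data, count_equations)  # both equal 16
```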
Now let us study how the Galois group $(\mathbb{Z}/2\mathbb{Z})^2$ acts on the Hodge structures of special Horikawa surfaces $S$. We view $S$ as a degree $10$ quasi-smooth hypersurface in $\mathbb{P}(1,1,2,5)$ cut out by the equation $z^2 = F(x_0^2, x_1^2, y)$. Choose the K\"ahler form corresponding to the canonical curve $(x_0=0)$ or $(x_1=0)$ (which are the reduced inverse images of $L_0$ and $L_1$ respectively) and the primitive cohomology can be described using Griffiths residue.
\begin{proposition} \label{residue}
Let $Y$ be a quasi-smooth hypersurface of degree $d$ in a weighted projective space $\mathbb{P}(a_0,a_1,\cdots,a_n)$. That is, $Y$ is given by a weighted homogeneous polynomial $G(z_0, z_1, \cdots, z_n)$ of degree $d$ whose partial derivatives have no common zero other than the origin. Let $$E = \sum \limits_{i=0}^n z_i\dfrac{\partial}{\partial z_i}$$ be the Euler vector field. Let $dV = dz_0 \wedge \cdots \wedge dz_n$ be the Euclidean volume form, and let $\Omega = i(E)dV$ (where $i$ denotes interior multiplication) be the projective volume form (which has degree $a_0 + \cdots + a_n$). Consider expressions of the form $$\Omega(A) = \dfrac{A \cdot \Omega}{G^{q}}$$ where $A$ is a homogeneous polynomial whose degree is such that $\Omega(A)$ is homogeneous of degree $0$. Then the Poincar\'{e} residues of $\Omega(A)$ span $F^{n-q}H_{\mathrm{prim}}^{n-1}(Y, \mathbb{C})$ where $F^{\bullet}$ denotes the Hodge filtration. Moreover, the residue lies in $F^{n-q+1}$ if and only if $A$ lies in the Jacobian ideal $J_G$ of $G$ (the ideal generated by the first partial derivatives of $G$).
\end{proposition}
\begin{proof}
This is \cite[Prop. 1.2]{ACT}. See also \cite{Dolgachev_weighted}.
\end{proof}
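For later use, note that in our situation $\mathbb{P} = \mathbb{P}(1,1,2,5)$ and $d = 10$, so $\Omega$ has degree $1+1+2+5 = 9$ and the condition that $\Omega(A)$ be homogeneous of degree $0$ forces
$$\deg A = dq - \sum_{i=0}^{3} a_i = 10q - 9 = \begin{cases} 1, & q=1, \\ 11, & q=2, \\ 21, & q=3. \end{cases}$$
This degree count is used in the proof of Proposition \ref{decomposition} below.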
\begin{proposition} \label{decomposition}
Let $S \subset \mathbb{P}(1,1,2,5)$ be a quasi-smooth hypersurface of degree $10$ given by $z^2 = F(x_0^2, x_1^2, y)$. Set $\sigma_0$ and $\sigma_1$ to be the automorphisms defined by $[x_0, x_1, y, z] \mapsto [-x_0, x_1, y, z]$ and $[x_0, x_1, y, z] \mapsto [x_0, -x_1, y, z]$ respectively. Let $\chi_0$ and $\chi_1$ be the corresponding characters of the Galois group defined by $\chi_0(\sigma_0) = 1$, $\chi_0(\sigma_1) = -1$ and similarly for $\chi_1$. Then we have the following decomposition of Hodge structures ($H^2_{\chi_l}(S,\mathbb{Q})$ is the eigenspace corresponding to $\chi_l$ for $l=0,1$) $$H^2_{\mathrm{prim}}(S,\mathbb{Q}) = H^2_{\chi_0}(S,\mathbb{Q}) \oplus H^2_{\chi_1}(S,\mathbb{Q}).$$ Moreover, $H^2_{\chi_l}(S,\mathbb{Q})$ ($l=0,1$) has Hodge numbers $[1,14,1]$.
\end{proposition}
\begin{proof}
Notations as in Proposition \ref{residue}. The decomposition is obtained by a Griffiths residue calculus. Specifically, let $G= F(x_0^2, x_1^2, y) - z^2$. Take a basis for $H^2_{\mathrm{prim}}(S,\mathbb{C})$ consisting of residues $\mathrm{Res} \frac{A \cdot \Omega}{G^{q}}$ with $A$ certain monomials $x_0^ix_1^jy^k$. The cohomology group $H^{2,0}(S)$ (resp. $H_{\mathrm{prim}}^{1,1}(S)$, $H^{0,2}(S)$) corresponds to $q=1$ (resp. $q=2$, $q=3$) and hence $i+j+2k=1$ (resp. $i+j+2k=11$, $i+j+2k=21$). In particular, $i+j$ must be odd. It follows that only the characters $\chi_0$ and $\chi_1$ appear in the decomposition of $H^2_{\mathrm{prim}}(S,\mathbb{C})$ (and hence $H^2_{\mathrm{prim}}(S,\mathbb{Q})$). The eigenspace $H^2_{\chi_0}(S,\mathbb{Q})$ is a sub-Hodge structure because $H^2_{\chi_0}(S,\mathbb{Q}) = \ker(\sigma_0^* -\mathrm{id})$ $(= \ker(\sigma_1^* +\mathrm{id}))$. Similarly for $H^2_{\chi_1}(S,\mathbb{Q})$. The claim on Hodge numbers can be checked for the special surface $z^2=x_0^{10} + x_1^{10} + y^5$ (consider the induced action of $\sigma_0$ or $\sigma_1$ on the polarized variation of rational Hodge structure with fibers $H^2_{\mathrm{prim}}(S,\mathbb{Q})$).
\end{proof}
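For the special member $z^2 = x_0^{10} + x_1^{10} + y^5$ the Jacobian ideal is the monomial ideal $(x_0^9, x_1^9, y^4, z)$, so the eigenspace Hodge numbers can be confirmed by a direct monomial count. The following script is a sanity check only; it presupposes the residue description above, with the two eigenspaces distinguished by the parity of the exponent of $x_0$:

```python
from itertools import product

# Jacobian ring of G = x0^10 + x1^10 + y^5 - z^2: the partial derivatives
# generate (x0^9, x1^9, y^4, z), so a monomial basis is
# x0^i x1^j y^k with i, j <= 8 and k <= 3.
def count(degree):
    """Count basis monomials of weighted degree i + j + 2k = degree,
    split by the parity of i (the two Galois eigenspaces)."""
    odd = even = 0
    for i, j, k in product(range(9), range(9), range(4)):
        if i + j + 2 * k == degree:
            if i % 2 == 1:
                odd += 1
            else:
                even += 1
    return odd, even

# q = 1, 2, 3 give deg A = 10q - 9 = 1, 11, 21 (H^{2,0}, H^{1,1}_prim, H^{0,2}).
hodge = [count(10 * q - 9) for q in (1, 2, 3)]
print(hodge)  # each eigenspace has Hodge numbers [1, 14, 1]
```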
\begin{remark}
Let $X_0$ and $X_1$ be the $K3$ surfaces associated to a special Horikawa surface $S$ in Diagram (\ref{diagram}). One can show that $T_{\mathbb{Q}}(S) \cong T_{\mathbb{Q}}(X_0) \oplus T_{\mathbb{Q}}(X_1)$ where $T_\mathbb{Q} = \mathrm{Tr} \otimes \mathbb{Q}$ denotes the rational transcendental lattice. Specifically, we consider the induced action $\sigma_0^*$ and $\sigma_1^*$ on $T_{\mathbb{Q}}(S)$. The space $T_{\mathbb{Q}}(S)$ decomposes as a direct sum of the eigenspaces of $\sigma_0^*$ with eigenvalues $1$ and $-1$. By \cite[Prop. $5$]{Shioda} we have $T_{\mathbb{Q}}(X_0) \cong T_{\mathbb{Q}}(S)^{\sigma_0^*}$ and $T_{\mathbb{Q}}(X_1) \cong T_{\mathbb{Q}}(S)^{\sigma_1^*}$ (where $T_{\mathbb{Q}}(S)^{\sigma_l^*}$ denotes the invariant part). Proposition \ref{decomposition} allows us to identify $T_{\mathbb{Q}}(S)^{\sigma_1^*}$ with the $(-1)$-eigenspace of $\sigma_0^*$ which completes the proof. Generically, $T_{\mathbb{Q}}(S)$ has Hodge numbers $[2,28,2]$ (one applies \cite[Prop. 4]{Moonen} to $z^2 = x_0^{10} + x_1^{10}+ y^5$ to see the generic Picard number is $1$) and $T_{\mathbb{Q}}(X_l)$ has Hodge numbers $[1,14,1]$ (cf. \cite[Cor. 4.15]{Laza_n16}) for $l=0,1$.
\end{remark}
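As a consistency check, the total Hodge numbers quoted above can be recovered from standard invariants: $\chi(\mathcal{O}_S) = 1 - q(S) + p_g(S) = 3$, so Noether's formula gives
$$e(S) = 12\chi(\mathcal{O}_S) - K_S^2 = 36 - 1 = 35, \qquad b_2(S) = 33, \qquad h^{1,1}(S) = b_2(S) - 2p_g(S) = 29.$$
Hence $H^2_{\mathrm{prim}}(S,\mathbb{Q})$ has Hodge numbers $[2,28,2]$, in agreement with the decomposition into the two eigenspaces $H^2_{\chi_0}$ and $H^2_{\chi_1}$ of type $[1,14,1]$.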
\section{The infinitesimal Torelli theorem} \label{sec_inftorelli}
We shall show in this section that (unlike for special Kunev surfaces \cite{Kynev} \cite{Catanese_kunev} \cite{Todorov_kunev}) the infinitesimal Torelli theorem holds for special Horikawa surfaces $S$.
\begin{theorem}
Let $S$ be a bidouble cover of $\mathbb{P}^2$ branched along a smooth quintic $C$ and two distinct lines $L_0$ and $L_1$ intersecting $C$ transversely with $C \cap L_0 \cap L_1 = \emptyset$. The natural map
\begin{equation} \label{inf torelli}
p: H^1(S, \calT_S) \rightarrow \operatorname{Hom}(H^0(S, \omega_S), H^1(S, \Omega_S^1)),
\end{equation}
given by cup product, is injective.
\end{theorem}
\begin{proof}
Usui \cite{Usui_wci} has proved the infinitesimal Torelli theorem for the periods of holomorphic $d$-forms on certain $d$-dimensional complete intersections ($d \geq 2$) in certain weighted projective spaces. One can check that the conditions in op. cit. Theorem $2.1$ are satisfied for the special Horikawa surface $S$ and then apply the theorem. Specifically, by \propositionref{S equation} the surface $S \subset \mathbb{P}(1,1,2,5)$ is defined by $z^2 = F(x_0^2, x_1^2, y)$ and it is not difficult to verify that $z^2 - F(x_0^2, x_1^2, y), x_0, x_1, y$ forms a regular sequence in $\mathbb{C}[x_0,x_1,y,z]$. (One can apply op. cit. Proposition 3.1 to check the conditions and prove the infinitesimal Torelli for the periods of holomorphic $2$-forms on any smooth hypersurface with ample canonical bundle in $\mathbb{P}(1,1,2,5)$.)
\end{proof}
Pardini \cite[Thm. 3.1]{Pardini_surface} \cite[Thm. 4.2]{Pardini_torelli} has considered the infinitesimal Torelli problem for certain abelian covers (including bidouble covers). The conditions of the theorems do not hold for special Horikawa surfaces $S$ but the strategy still works. (In particular, we need the notion of the prolongation bundle discussed in \cite[\S 2]{Pardini_torelli}.) Let us sketch the proof which might be useful for finding more boundary cases of Pardini's theorems.
\begin{proof}
The idea is to decompose the infinitesimal Torelli map $$p: H^1(S, \calT_S) \rightarrow \operatorname{Hom}(H^0(S, \omega_S), H^1(S, \Omega_S^1))$$ using the Galois group action. The first step is to determine the building data (see \cite[\S 2]{Catanese_bidouble} and \cite[Def. 2.1]{Pardini_cover}) of the bidouble cover $\pi: S \rightarrow \mathbb{P}^2$. The (reduced) branch locus of $\pi$ consists of three irreducible components: $$D_0 := L_0, \,\, D_1 := L_1, \,\, \text{and} \,\, D_z := C.$$ Let $\sigma_0$, $\sigma_1$ be the involutions in Diagram (\ref{diagram}). Let $\sigma_z := \sigma_0 \circ \sigma_1$. The Galois group $G$ of the abelian cover $S \rightarrow \mathbb{P}^2$ consists of $\mathrm{id}$, $\sigma_0$, $\sigma_1$, $\sigma_z$. Let $\chi_0$, $\chi_1$, $\chi_z$ be the corresponding nontrivial characters of $G$ (and we shall denote the character group by $G^*$). Write $$\pi_*\mathcal{O}_S = \mathcal{O}_{\mathbb{P}^2} \oplus \mathcal{L}_{\chi_0}^{-1} \oplus \mathcal{L}_{\chi_1}^{-1} \oplus \mathcal{L}_{\chi_z}^{-1}$$ where $\mathcal{L}_{\chi}^{-1}$ denotes the eigensheaf on which $G$ acts via the character $\chi$. One easily verifies that $\mathcal{L}_{\chi_0} = \mathcal{L}_{\chi_1} = \mathcal{O}_{\mathbb{P}^2}(3)$ and $\mathcal{L}_{\chi_z} = \mathcal{O}_{\mathbb{P}^2}(1)$.
Next we compute the direct images of various sheaves. Let $D = D_0 + D_1 + D_z$. For the trivial character $\chi=1$, define $\Delta_{1} = D$. Set $$\Delta_{\chi_0} := D_0, \,\, \Delta_{\chi_1} := D_1, \,\, \Delta_{\chi_z} := D_z.$$ We also let $$D_{1, 1^{-1}} := \emptyset, \,\, D_{\chi_0, \chi_0^{-1}} := D_1 + D_z, \,\, D_{\chi_1, \chi_1^{-1}} := D_0 + D_z, \,\, D_{\chi_z, \chi_z^{-1}} := D_0 + D_1. $$ (For every pair of characters $\chi, \phi \in G^*$, $D_{\chi,\phi}$ is defined in \cite{Pardini_surface} and \cite{Pardini_torelli}. The fundamental relations of the building data are $\mathcal{L}_{\chi} + \mathcal{L}_{\phi} \equiv \mathcal{L}_{\chi\phi} + D_{\chi,\phi}$, see \cite[Thm. 2.1]{Pardini_cover}.) For any character $\chi$, we have (see \cite[Prop. 4.1]{Pardini_cover})
\begin{itemize}
\item $(\pi_*\calT_S)^{(\chi)} = \calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1}$ (In particular, $(\pi_*\calT_S)^{(\mathrm{inv})} = \calT_{\mathbb{P}^2}(-\log D)$);
\item $(\pi_*\Omega^1_S)^{(\chi)} = \Omega_{\mathbb{P}^2}^1(\log D_{\chi, \chi^{-1}}) \otimes \mathcal{L}_{\chi}^{-1}$ (In particular, $(\pi_*\Omega^1_S)^{(\mathrm{inv})} = \Omega^1_{\mathbb{P}^2}$);
\item $(\pi_*\omega_S)^{(\chi)} = \omega_{\mathbb{P}^2}\otimes \mathcal{L}_{\chi^{-1}}$ (In particular, $(\pi_*\omega_S)^{(\mathrm{inv})} = \omega_{\mathbb{P}^2}$).
\end{itemize}
Since the map $\pi$ is finite, for every coherent sheaf $\mathcal{F}$ on $S$ one has $H^k(S, \mathcal{F}) = H^k(\mathbb{P}^2, \pi_*\mathcal{F})$ ($k=0,1,2$). In particular, we have $$H^1(S, \calT_S) = H^1(\mathbb{P}^2, \pi_*\calT_S), \,\, H^1(S, \Omega_S) = H^1(\mathbb{P}^2, \pi_*\Omega_S), \,\, H^0(S, \omega_S) = H^0(\mathbb{P}^2, \pi_*\omega_S).$$ Combining with the splittings of $\pi_*\calT_S$, $\pi_*\Omega_S$ and $\pi_*\omega_S$, we obtain the following decompositions:
\begin{equation}
H^1(S, \calT_S) = H^1(\mathbb{P}^2, \calT_{\mathbb{P}^2}(-\log D)) \oplus (\mathop{\oplus} \limits_{\chi \in G^* \backslash \{1\}} H^1(\mathbb{P}^2, \calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1}))
\end{equation}
\begin{equation}
H^1(S, \Omega^1_S) = H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}) \oplus (\mathop{\oplus} \limits_{\chi \in G^* \backslash \{1\}} H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}(\log D_{\chi, \chi^{-1}}) \otimes \mathcal{L}_{\chi}^{-1}))
\end{equation}
\begin{equation}
H^0(S, \omega_S) = H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2}) \oplus (\mathop{\oplus} \limits_{\chi \in G^* \backslash \{1\}} H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\chi^{-1}}))
\end{equation}
Since the cup product (and hence the infinitesimal Torelli map $p: H^1(S, \calT_S) \rightarrow \operatorname{Hom}(H^0(S, \omega_S), H^1(S, \Omega_S^1))$) is compatible with the group action, for characters $\chi, \phi \in G^*$ we consider
\begin{equation}
\begin{split}
p_{\chi,\phi}: & H^1(\mathbb{P}^2, \calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1}) \\
& \rightarrow \operatorname{Hom}(H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\phi^{-1}})
,H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}(\log D_{\chi\phi, (\chi\phi)^{-1}}) \otimes \mathcal{L}_{\chi\phi}^{-1}))
\end{split}
\end{equation}
Clearly, one has
\begin{equation}
p = \mathop{\oplus} \limits_{\chi, \phi \in G^*} p_{\chi, \phi}.
\end{equation}
\begin{lemma} \label{p_{chi, phi}}
The infinitesimal Torelli holds for $S$ (i.e. the map $p$ is injective) if and only if $\mathop{\cap} \limits_{\phi \in G^*} \ker p_{\chi, \phi} = \{0\}$ for any character $\chi$.
\end{lemma}
Let us take a closer look at the maps $p_{\chi,\phi}$. For every pair of characters $\chi, \phi \in G^*$ and every section $\xi \in H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2}\otimes \mathcal{L}_{\phi^{-1}})$, consider the following diagram
\begin{equation*}
\begin{CD}
\calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1} @>>> \Omega^1_{\mathbb{P}^2}(\log D_{\chi\phi, (\chi\phi)^{-1}}) \otimes \omega_{\mathbb{P}^2}^{-1} \otimes (\mathcal{L}_{\chi\phi} \otimes \mathcal{L}_{\phi^{-1}})^{-1}\\
@VVV @VVV\\
\calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1} \otimes \omega_{\mathbb{P}^2}\otimes \mathcal{L}_{\phi^{-1}} @>>> \Omega^1_{\mathbb{P}^2}(\log D_{\chi\phi, (\chi\phi)^{-1}}) \otimes \mathcal{L}_{\chi\phi}^{-1}
\end{CD}
\end{equation*}
where the vertical maps are given by multiplication by $\xi$ and the horizontal maps are defined by contraction of tensors and by the fundamental relations \cite[Thm. 2.1]{Pardini_cover} of the building data (N.B. there is a canonical isomorphism between $\calT_{\mathbb{P}^2}(-\log D_{\chi\phi, (\chi\phi)^{-1}})$ and $\Omega^1_{\mathbb{P}^2}(\log D_{\chi\phi, (\chi\phi)^{-1}}) \otimes (\omega_{\mathbb{P}^2}(D_{\chi\phi, (\chi\phi)^{-1}}))^{-1}$). Consider
\begin{equation}
\begin{split}
q_{\chi,\phi}: &H^1(\mathbb{P}^2, \calT_{\mathbb{P}^2}(-\log \Delta_{\chi}) \otimes \mathcal{L}_{\chi}^{-1}) \\
& \rightarrow H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}(\log D_{\chi\phi, (\chi\phi)^{-1}}) \otimes \omega_{\mathbb{P}^2}^{-1} \otimes (\mathcal{L}_{\chi\phi} \otimes \mathcal{L}_{\phi^{-1}})^{-1})
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
r_{\chi,\phi}: &H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}(\log D_{\chi, \chi^{-1}}) \otimes (\omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\chi} \otimes \mathcal{L}_{\phi^{-1}})^{-1}) \\
& \rightarrow \operatorname{Hom}(H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\phi^{-1}})
,H^1(\mathbb{P}^2, \Omega^1_{\mathbb{P}^2}(\log D_{\chi, \chi^{-1}}) \otimes \mathcal{L}_{\chi}^{-1}))
\end{split}
\end{equation}
Obviously we have $p_{\chi,\phi} = r_{\chi\phi,\phi} \circ q_{\chi, \phi}$.
Let us analyze the maps $r_{\chi,\phi}$.
\begin{itemize}
\item The maps $r_{1,1}$, $r_{1,\chi_0}$, $r_{1,\chi_1}$, $r_{1,\chi_z}$ are injective: by explicit computation and Bott's vanishing theorem.
\item The maps $r_{\chi_0,1}$, $r_{\chi_1,1}$, $r_{\chi_z,1}$, $r_{\chi_0,\chi_z}$, $r_{\chi_1,\chi_z}$ are zero maps: in these cases $H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\phi^{-1}}) = 0$.
\item The maps $r_{\chi_0,\chi_0}$, $r_{\chi_0,\chi_1}$, $r_{\chi_1,\chi_0}$, $r_{\chi_1,\chi_1}$ are injective: we use the prolongation bundle of the irreducible components of $D_{\chi,\chi^{-1}}$ defined in \cite[\S 2]{Pardini_torelli} and show that if the multiplication map \begin{equation*}
\begin{split}
& H^0(\mathbb{P}^2, \omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\phi^{-1}}) \otimes (\mathop{\oplus} \limits_{
\substack{
B \, \text{irreducible} \\
\text{components of} \, D_{\chi,\chi^{-1}}}} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(B) \otimes\omega_{\mathbb{P}^2} \otimes \mathcal{L}_{\chi})) \\
& \rightarrow \mathop{\oplus} \limits_{
\substack{
B \, \text{irreducible} \\
\text{components of} \, D_{\chi,\chi^{-1}}}} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(B) \otimes \omega_{\mathbb{P}^2}^{\otimes 2} \otimes \mathcal{L}_{\chi} \otimes \mathcal{L}_{\phi^{-1}})
\end{split}
\end{equation*}
is surjective, then the map $r_{\chi,\phi}$ is injective (cf. \cite[\S 3]{Pardini_torelli}). As argued in \cite[Prop. 3.5]{Pardini_torelli} the surjectivity of the multiplication map follows from a special case of \cite[Thm. 2.1]{EL}.
\item The maps $r_{\chi_z,\chi_0}$, $r_{\chi_z,\chi_1}$, $r_{\chi_z,\chi_z}$ are injective: we again consider the prolongation bundle, and note that $H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(B) \otimes \omega_{\mathbb{P}^2}^{\otimes 2} \otimes \mathcal{L}_{\chi} \otimes \mathcal{L}_{\phi^{-1}}) = 0$ (with $B$ any irreducible component of $D_{\chi,\chi^{-1}}$) in these cases.
\end{itemize}
We use the method of \cite[\S 3]{Pardini_surface} to study the maps $q_{\chi,\phi}$ (especially Diagram $(3.4)$ and Lemma $3.1$). The results are as follows.
\begin{itemize}
\item For $\chi = 1$, we have $\ker q_{1,\chi_0} \cap \ker q_{1, \chi_1} = \{0\}$ (and hence $\ker p_{1,\chi_0} \cap \ker p_{1, \chi_1} = \{0\}$).
\item For $\chi = \chi_0$, both $q_{\chi_0,\chi_1}$ and $q_{\chi_0,\chi_z}$ are injective (and hence $p_{\chi_0,\chi_1} = r_{\chi_z,\chi_1} \circ q_{\chi_0, \chi_1}$ is injective).
\item For $\chi = \chi_1$, both $q_{\chi_1,\chi_0}$ and $q_{\chi_1,\chi_z}$ are injective (and hence $p_{\chi_1,\chi_0} = r_{\chi_z,\chi_0} \circ q_{\chi_1, \chi_0}$ is injective).
\item For $\chi = \chi_z$, both $q_{\chi_z,\chi_0}$ and $q_{\chi_z,\chi_1}$ are injective (and hence $p_{\chi_z,\chi_0} = r_{\chi_1,\chi_0} \circ q_{\chi_z, \chi_0}$ is injective, and $p_{\chi_z,\chi_1} = r_{\chi_0,\chi_1} \circ q_{\chi_z, \chi_1}$ is also injective).
\end{itemize}
The theorem clearly follows from these observations.
\end{proof}
\section{Degree $5$ pairs and a generic global Torelli theorem} \label{sec_globaltorelli}
Let us review the period map for degree $5$ pairs which will be used later in this section to prove a generic global Torelli theorem for special Horikawa surfaces $S$.
Following \cite[Def. 2.1]{Laza_n16} we call a pair $(C,L)$ consisting of a plane quintic curve $C$ and a line $L \subset \mathbb{P}^2$ a {\sl degree $5$ pair}. Two such pairs are equivalent if they are projectively equivalent. We are interested in the degree $5$ pairs $(C, L)$ with $C+L$ defining a sextic curve admitting at worst $ADE$ singularities. The coarse moduli space $\mathcal{M}_{ADE}$ is contained in the GIT quotient $(\mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(5)) \times \mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(1))) \git \mathrm{SL}_3(\mathbb{C})$ (with respect to the linearization $\pi_1^*\mathcal{O}_{\mathbb{P}^2}(1) \otimes \pi_2^*\mathcal{O}_{\mathbb{P}^2}(1)$).
To every degree $5$ pair $(C,L)$ such that $C+L$ has at worst $ADE$ singularities, we associate a $K3$ surface $X_{(C,L)}$ obtained by taking the canonical resolution of the double cover $\bar{X}_{(C,L)}$ of $\mathbb{P}^2$ along the sextic $C+L$. The period map for degree $5$ pairs $(C,L)$ is defined via the periods of $X_{(C,L)}$. Specifically, one first considers a generic pair $(C,L)$ where $C$ is smooth and $L$ is a transversal. The $K3$ surface $X_{(C,L)}$ contains $5$ exceptional curves $e_1, \cdots, e_5$ corresponding to the five intersection points $C \cap L$ and the strict transform $l'$ of $L$. By \cite[Prop. 4.12, Def. 4.17]{Laza_n16}, the lattice generated by $\{l', e_1, \cdots, e_5\}$ is a primitive sublattice of $\operatorname{Pic}(X_{(C,L)})$ and hence $X_{(C,L)}$ is a lattice polarized $K3$ surface (note that the lattice polarization depends on the labeling of the points of intersection $C \cap L$).
\begin{notation}
Let $\Lambda$ be an even lattice. We define:
\begin{itemize}
\item $\Lambda^*$: the dual lattice;
\item $A_{\Lambda} = \Lambda^*/\Lambda$: the discriminant group endowed with the induced quadratic form $q_{\Lambda}$;
\item $O(\Lambda)$: the group of isometries of $\Lambda$;
\item $O(q_\Lambda)$: the automorphisms of $A_\Lambda$ that preserve the quadratic form $q_{\Lambda}$;
\item $O_{-}(\Lambda)$: the subgroup of isometries of $\Lambda$ of spinor norm $1$ (see also \cite[\S 3.6]{Scattone_bb});
\item $\widetilde{O}(\Lambda) =\ker(O(\Lambda) \rightarrow O(A_{\Lambda}))$: the group of isometries of $\Lambda$ that induce the identity on $A_{\Lambda}$;
\item $O^*(\Lambda): = O_-(\Lambda) \cap \widetilde{O}(\Lambda)$.
\end{itemize}
We also introduce:
\begin{itemize}
\item $\Lambda_{K3}$: the $K3$ lattice $U^{\oplus 3} \oplus E_8^{\oplus 2}$ (we denote the bilinear form by $(\cdot,\cdot)_{K3}$);
\item $M$: the abstract lattice generated by $\{l', e_1, \cdots, e_5\}$ which admits a unique primitive embedding into $\Lambda_{K3}$ (one can also show that $M$ is the generic Picard group of $K3$ surfaces $X_{(C,L)}$ and $M \cong U(2) \oplus D_4$, see \cite[Cor. 4.15, Lem. 4.18]{Laza_n16});
\item $T = M^{\perp}_{\Lambda_{K3}}$: the orthogonal complement of $M$ in $\Lambda_{K3}$ (which is isomorphic to $U \oplus U(2) \oplus D_4 \oplus E_8$);
\item $\calD_M := \{\omega \in \mathbb{P}(T\otimes \mathbb{C}) \mid (\omega, \omega)_{K3} = 0, (\omega, \bar{\omega})_{K3} > 0\}$ which is the period domain for $M$-polarized $K3$ surfaces;
\item $\calD_M^0:$ a connected component of $\calD_M$ which is a type IV Hermitian symmetric domain.
\end{itemize}
\end{notation}
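As a small computational aside (not part of the original argument), one can sanity-check the lattice $M \cong U(2) \oplus D_4$ numerically: using standard Gram matrices for $U(2)$ and the root lattice $D_4$, the discriminant group $A_M$ has order $|\det| = 4 \cdot 4 = 16$.

```python
import numpy as np

# Gram matrix of U(2) (hyperbolic plane with form scaled by 2) and of the
# root lattice D4 in a standard root basis. The orthogonal direct sum
# M = U(2) + D4 then has |det(Gram)| = 4 * 4 = 16, so |A_M| = 16.
U2 = np.array([[0, 2],
               [2, 0]])
D4 = np.array([[ 2, -1,  0,  0],
               [-1,  2, -1, -1],
               [ 0, -1,  2,  0],
               [ 0, -1,  0,  2]])
gram_M = np.block([[U2, np.zeros((2, 4), dtype=int)],
                   [np.zeros((4, 2), dtype=int), D4]])
order_A_M = abs(round(np.linalg.det(gram_M)))
print(order_A_M)  # 16
```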
Let $\mathcal{U}$ be an open subset of $\mathcal{M}_{ADE}$ parameterizing the generic degree $5$ pairs $(C, L)$ with $C$ smooth and with transverse intersections $C \cap L$. Let $\widetilde{\mathcal{U}}$ be the $\mathfrak{S}_5$-cover of $\mathcal{U}$ that consists of triples $(C,L,\sigma)$ with $\sigma: \{1,\cdots, 5\} \rightarrow C \cap L$ labelings of $C \cap L$. By \cite{Dolgachev_latticek3} there is a period map $\widetilde{\mathcal{U}} \rightarrow \calD_M^0/O^*(T)$ sending $(C,L,\sigma)$ to the periods of $X_{(C,L)}$ with the $M$-polarization determined by $\sigma$. By the global Torelli theorem and surjectivity of the period map for $K3$ surfaces and \cite[Prop. 4.14]{Laza_n16} the period map is birational. Note that there is a natural $\mathfrak{S}_5$-action on $\widetilde{\mathcal{U}}$. Moreover, the group $O^*(T)$ is a normal subgroup of $O_-(T)$ with $O_-(T)/O^*(T) \cong \mathfrak{S}_5$, and the residual $\mathfrak{S}_5$-action on $M$ is the permutation of the five points of intersection $C \cap L$ (op. cit. Proposition 4.22). (In fact, $O_-(T)$ is the monodromy group for the degree $5$ pairs.) Thus, the period map is $\mathfrak{S}_5$-equivariant and descends to a birational map $\mathcal{U} \rightarrow \calD_M^0/O_-(T)$.
The birational map can be extended to a morphism $\mathcal{M}_{ADE} \rightarrow \calD_M^0/O_-(T)$ using normalized $M$-polarizations. In particular, one needs to construct an $M$-polarization in the case of non-transversal intersections $C \cap L$. We briefly summarize the construction and refer the reader to \cite[\S4.2.3]{Laza_n16} for the details. The construction is a modification of the canonical resolution of singularities of double covers (see \cite[Thm. III.7.2]{BPV}). The role of the modification is to keep track of the points of intersection $C \cap L$. More precisely, one chooses a labeling of the intersection $\sigma: \{1,2,3,4,5\} \twoheadrightarrow C \cap L$ such that for any $p \in C \cap L$ we have $|\sigma^{-1}(p)| = \mathrm{mult}_p(C \cap L)$. Set $Y_0 = \mathbb{P}^2$ and $B_0 = C+L$. We blow up one singularity at a time (instead of doing simultaneous blow-ups) and perform the first five blow-ups at points belonging to $L$. The new branched divisor $B_{i}$ is the strict transform of $B_{i-1}$ together with the exceptional divisor of the blow-up reduced mod $2$. The process is repeated until the resulting divisor $B_N$ is smooth. Denote the blow-up sequence by $Y_N \rightarrow \cdots \rightarrow Y_{i} \rightarrow Y_{i-1} \rightarrow \cdots \rightarrow Y_0=\mathbb{P}^2$. The double cover $X_{(C,L)}$ of $Y_N$ along $B_N$ is a minimal resolution of $\bar{X}_{(C,L)}$. Let $p_i \in Y_{i-1}$ ($1 \leq i \leq 5$) be the centers of the blow-ups, which lie on the corresponding strict transforms of $L$. Now we construct a primitive embedding of $M$ into $\operatorname{Pic}(X_{(C,L)})$ by sending $l'$ to the class of the reduced preimage of $L$ and sending $e_i$ ($1 \leq i \leq 5$) to the fundamental cycle associated to the simple singularity of $B_{i-1}$ at the point $p_i$. The embedding is normalized in the sense of \cite[Def. 4.24]{Laza_n16} and the construction fits well in families.
By the global Torelli theorem for $K3$, the surface $X_{(C,L)}$ is unique up to isomorphism. Moreover, one can recover the degree $5$ pair $(C,L)$ because the classes $2l'+e_1+\cdots+e_5$ (which corresponds to the pull-back of $\mathcal{O}_{\mathbb{P}^2}(1)$ and determines the covering map and the branched curve) and $l'$ (which determines the line $L$ and hence the residual quintic $C$) are fixed by the monodromy group $O_-(T)$. It follows that the period map $$\mathcal{M}_{ADE} \hookrightarrow \calD_M^0/O_-(T)$$ for degree $5$ pairs $(C, L)$ with $C+L$ admitting at worst $ADE$ singularities is injective. This is the part we shall need later. For completeness, let us mention that one can verify that the period map is surjective (see \cite[\S4.3.1]{Laza_n16}, especially Proposition 4.31). By Zariski's main theorem, the bijective birational morphism between two normal varieties $\mathcal{M}_{ADE} \rightarrow \calD^0_M/O_-(T)$ is an isomorphism (op. cit. Theorem 4.1).
Let us focus on the generic global Torelli problem for special Horikawa surfaces. By \cite{Catanese_bidouble} or \cite[Thm. 2.1]{Pardini_cover} we construct the coarse moduli space $\mathcal{M}$ for special Horikawa surfaces as the open subset of the quotient\footnote{for the linearization induced by $\pi_1^*\mathcal{O}_{\mathbb{P}^2}(1) \otimes \pi_2^*\mathcal{O}_{\mathbb{P}^2}(1) \otimes \pi_3^*\mathcal{O}_{\mathbb{P}^2}(1)$} $$(\mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(5)) \times \operatorname{Sym}^2 (\mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(1)))) \git \mathrm{SL}_3(\mathbb{C})$$ corresponding to triples $(C, L, L')$ which consist of smooth quintics $C$ and transversals $L$ and $L'$ with $C \cap L \cap L' = \emptyset$. It is more convenient to work with a double cover $\mathcal{M}'$ of $\mathcal{M}$. Specifically, $\mathcal{M}'$ is the open subset of the GIT\footnote{with respect to the linearization $\pi_1^*\mathcal{O}_{\mathbb{P}^2}(1) \otimes \pi_2^*\mathcal{O}_{\mathbb{P}^2}(1) \otimes \pi_3^*\mathcal{O}_{\mathbb{P}^2}(1)$} $$(\mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(5)) \times \mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(1)) \times \mathbb{P} H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(1))) \git \mathrm{SL}_3(\mathbb{C})$$ which parameterizes (up to projective equivalence) triples $(C, L_0, L_1)$ with $C$ smooth quintics and $L_0$, $L_1$ ``labeled" lines which intersect $C$ transversely and satisfy $C \cap L_0 \cap L_1 = \emptyset$.
Choose a sufficiently general reference point $b \in \mathcal{M}'$ (in particular, we label the two lines) and let $S_b$ be the corresponding bidouble cover. Let $V = H_{\mathrm{prim}}^2(S_b, \mathbb{R})$ (with respect to the class of a canonical curve or equivalently a hyperplane section in $\mathbb{P}(1,1,2,5)$). Let $Q$ be the polarization on $V$ defined using cup product. We also write $V_{\mathbb{Q}} = H_{\mathrm{prim}}^2(S_b, \mathbb{Q})$. Consider the action of the Galois group $(\mathbb{Z}/2\mathbb{Z})^2$ on $S_b$ and define $\rho: (\mathbb{Z}/2\mathbb{Z})^2 \rightarrow \mathrm{Aut}(V, Q)$ to be the corresponding representation. Notations as in Diagram (\ref{diagram}). The Galois group $(\mathbb{Z}/2\mathbb{Z})^2$ is generated by $\sigma_0$ and $\sigma_1$. Denote the corresponding characters by $\chi_0$ and $\chi_1$.
\begin{notation}
We shall use the following notations:
\begin{itemize}
\item $\calD = \calD(V, Q)$: the period domain parameterizing $Q$-polarized Hodge structures of weight $2$ on $V$ with Hodge numbers $[2,28,2]$;
\item $\calD^{\rho} = \{x \in \calD \mid \rho(a)(x) = x, \forall a \in (\mathbb{Z}/2\mathbb{Z})^2\}$;
\item $V(\chi)$: the eigenspace of $V$ corresponding to the character $\chi$ (it is not difficult to see that the eigenspaces $V(\chi)$ and $V(\chi')$ are orthogonal with respect to $Q$ if $\chi \neq \chi'$);
\item $V(\chi)_{\mathbb{Q}} := V(\chi) \cap V_{\mathbb{Q}}$;
\item $\calD(\chi)$: the period domain $\calD(V(\chi), Q|_{V(\chi)})$ of type $[1,14,1]$.
\end{itemize}
\end{notation}
\begin{lemma}
There is a natural map $\calD^{\rho} \rightarrow \calD(\chi_0) \times \calD(\chi_1)$ which is injective.
\end{lemma}
\begin{proof}
The lemma follows from \propositionref{decomposition} and \cite[\S7]{DK_ball}. Specifically, only the characters $\chi_0$ and $\chi_1$ appear in the decomposition of the vector space $V$. Let $V \otimes \mathbb{C} = V^{2,0} \oplus V^{1,1} \oplus V^{0,2}$ be a $Q$-polarized Hodge structure on $V$. The map is defined by sending the Hodge structure to the induced $Q$-polarized Hodge structures on $V(\chi_0)$ and $V(\chi_1)$ which is clearly injective.
\end{proof}
Now we show that the period spaces $\calD(\chi_0)$ and $\calD(\chi_1)$ are both isomorphic to the period space $\calD_M$ for $M$-polarized $K3$ surfaces.
\begin{lemma} \label{reference point}
There exists an isomorphism ($l=0 \,\, \text{or}\,\,1$) $$(V(\chi_l)_{\mathbb{Q}}, \frac12 Q) \cong (T \otimes \mathbb{Q}, (\cdot,\cdot)_{K3} \otimes \mathbb{Q}).$$
\end{lemma}
\begin{proof}
We take $\chi_0$ as an example. Let $S= S_b$ be the bidouble cover corresponding to the reference point $b \in \mathcal{M}'$. Notations as in Diagram (\ref{diagram}). In particular, by abuse of notation $\sigma_0$ also denotes the involution relative to $\varphi_0: S_0 \rightarrow X_0$. Label the points of intersection $C \cap L_1$ (the isomorphism we shall describe does not depend on the labeling) and there is a primitive embedding of $M$ (and hence $T$) into $H^2(X_0,\mathbb{Z})$. Since $b$ is sufficiently general, $M$ is the Picard lattice of the $K3$ surface $X_0$ (\cite[Cor. 4.15]{Laza_n16}). Consider the composition of linear maps $$T \otimes \mathbb{Q} \hookrightarrow H^2(X_0,\mathbb{Q}) \stackrel{\varphi_0^*}{\rightarrow} H^2(S_0,\mathbb{Q})^{\sigma_0^*}.$$ The map $\varphi_0^*$ is an isomorphism of vector spaces. Let $D_1 \subset X_0$ be the strict transform of $L_1$ and set $E_1, \cdots, E_5$ to be the exceptional curves. Clearly, $T \otimes \mathbb{Q}$ is the orthogonal complement of $\mathbb{Q}[D_1] \oplus \mathbb{Q}[E_1] \oplus \cdots \oplus \mathbb{Q}[E_5]$ in $H^2(X_0,\mathbb{Q})$. Thus, $T \otimes \mathbb{Q}$ is mapped onto $V(\chi_0)_\mathbb{Q} = H^2_{\mathrm{prim}}(S, \mathbb{Q})^{\sigma_0^*}$ by $\varphi_0^*$. The claim on the bilinear forms is clear.
\end{proof}
\begin{corollary}
The period domain $\calD(\chi_l)$ has two connected components which are both isomorphic to the $14$-dimensional type IV Hermitian symmetric domain $\mathrm{SO}(2,14)/\mathrm{S}(\mathrm{O}(2) \times \mathrm{O}(14))$.
\end{corollary}
To formulate the theorem we also need to choose a discrete group. Let $\Gamma_0$ (resp. $\Gamma_1$) be the discrete subgroup of $\mathrm{Aut}(V(\chi_0)_{\mathbb{Q}}, Q)$ (resp. $\mathrm{Aut}(V(\chi_1)_{\mathbb{Q}}, Q)$) corresponding to $O_{-}(T)$ (using \lemmaref{reference point}). Set $\Gamma$ to be the discrete subgroup in $\mathrm{Aut}(V_{\mathbb{Q}}, Q)$ which projects onto $\Gamma_0$ and $\Gamma_1$ under the isomorphism $V_{\mathbb{Q}} = V(\chi_0)_{\mathbb{Q}} \oplus V(\chi_1)_{\mathbb{Q}}$. Now we consider the period map $$\mathcal{P}: \mathcal{M} \rightarrow \calD^{\rho}/\Gamma$$ for special Horikawa surfaces (which are canonically polarized). Because the monodromy group (for the very general base point) is contained in the generic Mumford-Tate group and $\sigma_0^*$ and $\sigma_1^*$ are Hodge tensors for every member of the family, the monodromy representation commutes with the representation $\rho$ (see also \cite[pp.67-68]{GGK_mt} and \cite[\S 7]{DK_ball}). By \cite[Prop. 4.22]{Laza_n16} the discrete subgroup $\Gamma$ contains the image of the monodromy representation.
To prove the generic global Torelli theorem for $\mathcal{P}: \mathcal{M} \rightarrow \calD^{\rho}/\Gamma$ we consider the map $$\mathcal{P}_0 \times \mathcal{P}_1: \mathcal{M}' \rightarrow \calD^0_M/O_{-}(T) \times \calD^0_M/O_{-}(T)$$ which is defined using the period maps for the degree $5$ pairs $(C, L_0)$ and $(C, L_1)$.
\begin{proposition} \label{P0P1}
The map $\mathcal{P}_0 \times \mathcal{P}_1: \mathcal{M}' \rightarrow \calD^0_M/O_{-}(T) \times \calD^0_M/O_{-}(T)$ is generically injective.
\end{proposition}
\begin{proof}
By \cite{Laza_n16} Section 4.2.3 or Theorem 4.1, one can recover a degree $5$ pair $(C, L)$ (up to projective equivalence) from the periods of the $K3$ surface $X_{(C,L)}$. As a result, the isomorphism class of a triple $(C,L_0,L_1)$ is determined by the periods of the $K3$ surfaces $X_{(C,L_0)}$ and $X_{(C,L_1)}$ provided that the quintic $C$ has no nontrivial automorphism. More specifically, we assume that $(\mathcal{P}_0 \times \mathcal{P}_1)(C,L_0,L_1) = (\mathcal{P}_0 \times \mathcal{P}_1)(C',L'_0,L'_1)$. Then there exist $f,g \in \mathrm{PGL}_3(\mathbb{C})$ such that $f(C)=C'$, $f(L_0) = L_0'$, $g(C)=C'$, and $g(L_1) = L_1'$. In particular, one has $(g^{-1} \circ f)(C) = C$. Because $\mathrm{Aut}(C) = \{\mathrm{id}\}$ we get $f=g$. Thus, the map $\mathcal{P}_0 \times \mathcal{P}_1: \mathcal{M}' \rightarrow \calD^0_M/O_{-}(T) \times \calD^0_M/O_{-}(T)$ is generically injective.
\end{proof}
\begin{theorem}
The period map $\mathcal{P}: \mathcal{M} \rightarrow \calD^{\rho}/\Gamma$ is generically injective.
\end{theorem}
\begin{proof}
Let $\mathcal{P}': \mathcal{M}' \rightarrow \calD^{\rho,0}/\Gamma \hookrightarrow \calD^{0}(\chi_0)/\Gamma_0 \times \calD^{0}(\chi_1)/\Gamma_1$ (where the superscript $^0$ denotes the choice of a connected component) be the map sending a labeled triple $(C,L_0,L_1)$ to the periods of the corresponding special Horikawa surface $S$ (and to the eigenperiods on the underlying eigenspaces $V(\chi_0)$ and $V(\chi_1)$). We would like to compare $\mathcal{P}'$ and $\mathcal{P}_0 \times \mathcal{P}_1$. Note that the transcendental lattice $T$ has been identified with the invariant parts of the underlying vector space $V_\mathbb{Q} = H^2_{\mathrm{prim}}(S_b, \mathbb{Q})$ for the involutions $\sigma_l^*$ ($l=0,1$) via the natural pull-backs (see \lemmaref{reference point}). We claim that under these identifications $\mathcal{P}'$ coincides with $\mathcal{P}_0 \times \mathcal{P}_1$ up to the order of the periods (after relabeling the two lines one needs to switch the periods of the two $K3$ surfaces, but the period of the special Horikawa surface and the eigenperiods remain the same). The reason is that the eigenperiods are obtained by pulling back the holomorphic $2$-forms of the two $K3$ surfaces $X_0 = X_{(C,L_1)}$ and $X_1 = X_{(C,L_0)}$ (see Diagram (\ref{diagram}) and \propositionref{decomposition}). Specifically, we choose the orderings for the intersection points $C \cap L_1$ (resp. $C \cap L_0$) and define a primitive embedding of $M$ (and also $T$) into $H^2(X_0, \mathbb{Z})$ (resp. $H^2(X_1, \mathbb{Z})$) accordingly. Let $\phi_l: \Lambda_{K3} \stackrel{\cong}{\rightarrow} H^2(X_l, \mathbb{Z})$ ($l=0,1$) be the markings compatible with the primitive embeddings of $M$. The map $\mathcal{P}_0 \times \mathcal{P}_1$ is defined by considering (the $O_-(T)$-orbits of) the $K3$ periods $\phi_{0, \mathbb{C}}^{-1}(H^{2,0}(X_0))$ and $\phi_{1, \mathbb{C}}^{-1}(H^{2,0}(X_1))$. Now let us discuss the (eigen)period map $\mathcal{P}'$. We denote $\sigma_0$ (resp. 
$\sigma_1$) in Diagram (\ref{diagram}) by $\sigma$ (resp. $\sigma'$) to reflect the fact that they do not depend on how one labels the lines. Let $\chi$ and $\chi'$ be the corresponding characters. We now construct a natural $\rho$-marking (cf. \cite[\S 7]{DK_ball}) on $H_{\mathrm{prim}}^2(S, \mathbb{Q})$ using the $K3$ markings $\phi_0$ and $\phi_1$. On one hand, \lemmaref{reference point} gives us $V(\chi)_{\mathbb{Q}} \cong T \otimes \mathbb{Q}$ and $V(\chi')_{\mathbb{Q}} \cong T \otimes \mathbb{Q}$. On the other hand, one has $T \otimes \mathbb{Q} \stackrel{\phi_{0}}{\rightarrow} H^2(X_0, \mathbb{Q}) \stackrel{\cong}{\rightarrow} H^2(S_0, \mathbb{Q})^{\sigma^*}$ where the second homomorphism is induced by the natural pull-back map. As in the proof of \lemmaref{reference point} one can show that the image of the composition (which is clearly injective) is $H^2_{\mathrm{prim}}(S, \mathbb{Q})^{\sigma^*}$. In other words, we get $T \otimes \mathbb{Q} \stackrel{\cong}{\rightarrow} H^2_{\mathrm{prim}}(S, \mathbb{Q})^{\sigma^*}$. Similarly, the $K3$ marking $\phi_1$ and the natural pull-back map allow us to identify $T \otimes \mathbb{Q}$ and $H^2_{\mathrm{prim}}(S, \mathbb{Q})^{\sigma'^*}$. Combining these observations, we get the markings $V(\chi)_\mathbb{Q} \stackrel{\cong}{\rightarrow} H^2(S_0, \mathbb{Q})^{\sigma^*}$ and $V(\chi')_\mathbb{Q} \stackrel{\cong}{\rightarrow} H^2(S_0, \mathbb{Q})^{\sigma'^*}$. Taking the preimage of the holomorphic $2$-form invariant for $\sigma^*$ (resp. $\sigma'^*$) in $V(\chi)_\mathbb{Q} \otimes \mathbb{C}$ (resp. $V(\chi')_\mathbb{Q} \otimes \mathbb{C}$) one obtains the eigenperiod map $\mathcal{P}'$. Now our claim is clear. By \propositionref{P0P1} generically $\mathcal{P}'$ has degree $2$ (depending on the labelings of the lines). As a result, the period map $\mathcal{P}: \mathcal{M} \rightarrow \calD^{\rho}/\Gamma$ is generically injective.
\end{proof}
\begin{remark}
Assume the quintic curve $C$ admits a nontrivial automorphism $\sigma$ satisfying $\sigma(L_0) \neq L_0$ and $\sigma(L_1) \neq L_1$. Then the triples $(C,L_0,L_1)$ and $(C, \sigma(L_0), L_1)$ are mapped by the period map $\mathcal{P}$ to the same point.
\end{remark}
\begin{remark}
Let $\mathcal W$ be the subset of $|\mathcal{O}_{\mathbb{P}^2}(5)| \times |\mathcal{O}_{\mathbb{P}^2}(1)| \times |\mathcal{O}_{\mathbb{P}^2}(1)|$ corresponding to triples $(C, L_0, L_1)$ with $C+L_0$ and $C+L_1$ admitting at worst $ADE$ singularities and $C \cap L_0 \cap L_1 = \emptyset$. By taking bidouble covers we obtain a family of surfaces $\mathcal S \rightarrow \mathcal W$ with only du Val singularities. By applying a simultaneous resolution to the family $\mathcal S$ we obtain a family $\widetilde{\mathcal S} \rightarrow\mathcal W$ (after a finite base change of $\mathcal W$) of Horikawa surfaces (which are surfaces of general type with $p_g=2$ and $K^2=1$). Consider the period map $\mathcal{P}_0 \times \mathcal{P}_1: \mathcal W \rightarrow \calD^0_M/O_{-}(T) \times \calD^0_M/O_{-}(T)$. The generic global Torelli theorem holds for this family. Namely, if two generic points in $\mathcal W$ have the same image in $\calD^0_M/O_{-}(T) \times \calD^0_M/O_{-}(T)$ then the corresponding triples are projectively equivalent.
\end{remark}
\section{Introduction}
Imbalanced data are ubiquitous in scientific fields and applications with binary response outputs, where the number of instances in the positive class is much smaller than that in the negative class. For example, in online search or recommendation systems where billions of impressions appear each day, non-click impressions usually dominate.
Using these non-click impressions as negative instances is prevalent and has proved useful, especially in modern machine learning systems~\citep{mcmahan2013ad,he2014practical,chen2016deep,guo2017deepfm,huang2020embedding}.
A common approach to address imbalanced data is to balance it by subsampling the negative class \citep{drummond2003c4,Zhou2009Exploratory} and/or up-sampling the positive class \citep{chawla2002smote,Han2005Borderline,mathew2017classification,douzas2017self}, and a great deal of attention has been drawn to this problem; see \cite{Japkowicz2000,king2001logistic,chawla2004editorial,estabrooks2004multiple,owen2007infinitely,sun2007cost,chawla2009data,rahman2013addressing,fithian2014local,lemaitre2017imbalanced,Wang2020RareICML} and the references therein.
In this paper, we focus on subsampling the negative class, i.e., negative sampling, while keeping all rare positive instances.
This is beneficial for modern online learning systems, where negative sampling significantly reduces data volume in distributed storage while saving training time for multiple downstream models. %
Rare events data and negative sampling are studied in \cite{Wang2020RareICML}, but that work focused on linear logistic regression and only considered uniform sampling. We approach this problem under general binary response models from the perspective of optimal subsampling, which aims to minimize the asymptotic variance of the inverse probability weighted (IPW) estimator \citep{WangZhuMa2018,ting2018optimal, wang2021optimal,yu2020quasi}. This topic has not been well investigated for either imbalanced data or negative sampling. In addition, existing optimal probability formulations minimize the conditional asymptotic variance, ignoring the variation due to the randomness of the data. We fill these gaps by deriving the asymptotic distributions for both full data and subsample estimators under general binary response models, and we find the unconditional optimal probability for negative sampling. %
In addition, since IPW may inflate the variance %
\citep{hesterberg1995weighted, owen2000safe}, we develop a more efficient likelihood-based estimator through nonuniform log odds correction to avoid IPW.
Our main contributions are summarized as follows.
\begin{itemize} %
\item Under a general binary response model with rare events data, we find that the difference between the full data estimator and the true parameter converges to a normal distribution at a rate that is tied to the number of rare positive instances only.
This indicates the possibility of throwing away the majority of negative instances without information loss, i.e., it justifies the use of negative sampling.
\item We show that there is no asymptotic information loss under aggressive negative sampling as long as the retained negative instances still dominate the positive instances. However, there is information loss, expressed as a variance inflation, if the negative instances are brought down to the same level as the positive instances. For this case, we obtain optimal subsampling probabilities by minimizing the unconditional asymptotic variance of a general IPW estimator.
\item We develop a likelihood estimator through nonuniform log odds correction for sampled data, and prove that it has the smallest asymptotic variance %
among a large class of estimators. %
\item We apply the proposed method to a real online streaming click-through rate (CTR) dataset with more than 0.3 trillion instances and demonstrate its effectiveness.
\end{itemize}
The rest of paper is organized as follows. Section~\ref{sec:problem} defines the problem this paper focuses on and shows the full data results. Section~\ref{sec:weight-estim-optim} presents the optimal probability for the IPW estimator. Section~\ref{sec:more-effic-estim} proposes the likelihood-based estimator and %
establishes its asymptotic optimality. Section~\ref{sec:practical} discusses practical implementation %
and the theoretical properties of the resulting estimators. %
Section~\ref{sec:numer-exper} presents numerical experiments with simulated data, and applies the proposed method to a real online streaming CTR dataset with more than 0.3 trillion instances.
Section~\ref{sec:conclusions} concludes the paper and points out limitations of our investigation. Proofs and additional experiments with misspecifications are in the supplementary materials.
\section{Problem setup and full data results}\label{sec:problem}
Let $\{(\x_i,y_i)\}_{i=1}^N$ be training data of size $N$ that satisfies the binary response model,
\begin{equation}\label{eq:1}
\Pr(y=1\mid\x)=p(\x;\btheta),
\end{equation}
where $y\in\{0,1\}$ is the binary class label ($y=1$ for the positive and $y=0$ for the negative), $\x$ is the vector of features, and $\btheta$ is the unknown $d$-dimensional parameter. %
Let $N_1$ be the number of positive instances ($N_1=\sumN y_i$) and $N_0$ be the number of negative instances ($N_0=N-N_1$). We consider the scenario of massive imbalanced data, i.e., $N_1\ll N_0$. %
For asymptotic investigations when $N_1$ is much smaller than $N_0$, it is more appropriate to assume that $N_1$ increases at a slower rate than $N_0$, i.e., $N_1/N_0\rightarrow0$ in probability as $N\rightarrow\infty$ \citep{Wang2020RareICML}.
This requires $\Pr(y=1)=\Exp\{p(\x;\btheta^{*})\}\rightarrow0$ as $N\rightarrow\infty$ on the model side. %
We assume that the parameter $\btheta$ contains two components $\btheta=(\alpha,\bbeta\tp)\tp$ and the log odds can be written as
\begin{equation*}
g(\x;\btheta):=
\log\Big\{\frac{p(\x;\btheta)}{1-p(\x;\btheta)}\Big\}
=\alpha+f(\x;\bbeta),
\end{equation*}
where the true parameter, denoted as $\btheta^*=(\alpha^{*},{\bbeta^{*}}\tp)\tp$, satisfies that $\alpha^{*}\rightarrow-\infty$ as $N\rightarrow\infty$ and $\bbeta^{*}$ is fixed. Here $f(\x;\bbeta)$ is a smooth function of $\bbeta$, such as a linear function or a nonlinear neural network model. If it is linear, %
the model reduces to the logistic regression model.
A diverging $\alpha^*$ and a fixed $\bbeta^{*}$ indicate that both the marginal and conditional probabilities for a positive instance are small. Heuristically, this implies that a change in the feature value does not make a rare event become a large probability event. We can also allow $\bbeta^{*}$ to change with $N$, but as long as $\bbeta^{*}$ has a finite limit, the situation is essentially the same as a fixed $\bbeta^{*}$. So here we assume $\bbeta^{*}$ to be fixed to simplify the presentation.
Throughout this paper, we use $\dot{g}(\x;\btheta)%
$ and $\ddot{g}(\x;\btheta)%
$ to denote the gradient and Hessian of $g(\x;\btheta)$ with respect to (w.r.t.) $\btheta$, respectively. Let $\|\v\|%
$ be the Frobenius norm and $\v^{\otimes2}=\v\v\tp$ for a vector or a matrix $\v$. %
We denote $\pi(\x,y)$ as the sampling probability for an instance $(\x,y)$. For negative cases, we sometimes use a shorthand notation $\pi(\x)$ to denote its sampling probability.
We use $o_P(1)$ to represent a random quantity that converges to $0$ in probability as $N\rightarrow\infty$. %
\subsection{It is the number of positive instances that matters}\label{sec:it-number-positive}
We show that for rare events data, the estimation error rate for $\btheta$ is related to $N_1$ instead of $N$. Using the full data, a commonly used estimator of $\btheta$ is the maximum likelihood estimator (MLE)
\begin{equation*}%
{\hat{\boldsymbol{\theta}}}_{{\mathrm{f}}}
:=\arg\max_{\btheta}\sumN
\big[y_ig(\x_i;\btheta)-\log\big\{1+e^{g(\x_i;\btheta)}\big\}\big].
\end{equation*}
To investigate the theoretical properties of the estimator ${\hat{\boldsymbol{\theta}}}_{{\mathrm{f}}}$, we make the following assumptions.
\begin{assumption}\label{asmp:1}
The first, second, and third derivatives of $f(\x;\bbeta)$ and $e^{f(\x;\bbeta)}f(\x;\bbeta)$ w.r.t. any component of $\bbeta$
are bounded by a square integrable random variable $B(\x)$.
\end{assumption}
\begin{assumption}\label{asmp:2}
The matrix $\Exp\{\dot{g}^{\otimes2}(\x;\btheta^{*})\}$ is finite and positive definite.
\end{assumption}
Assumption~\ref{asmp:1} imposes constraints on the smoothness of $f(\x;\bbeta)$ w.r.t. $\bbeta$ and on the tightness of the distribution of $\x$. %
Assumption~\ref{asmp:2} ensures that all components of $\btheta$ are estimable.
\begin{theorem}\label{thm:1}
Let $\mathbf{V}_{{\mathrm{f}}}=\Exp\{e^{f(\x;\bbeta^{*})}\}\M_{{\mathrm{f}}}^{-1}$ where
$\M_{{\mathrm{f}}}=\Exp\{e^{f(\x;\bbeta^{*})}\dot{g}^{\otimes2}(\x;\btheta^{*})\}$.
Under Assumptions~\ref{asmp:1} and \ref{asmp:2}, %
as $N\rightarrow\infty$,
\begin{equation*}%
\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_{{\mathrm{f}}}-\btheta^{*}) \longrightarrow \mathbb{N}(\0,\ \mathbf{V}_{{\mathrm{f}}}),
\quad\text{ in distribution.}
\end{equation*}
\end{theorem}
This result shows that even if all the $N$ instances are used for model training, the resulting estimator ${\hat{\boldsymbol{\theta}}}_{{\mathrm{f}}}$ converges to the true parameter $\btheta^{*}$ at the rate of $N_1^{-1/2}$, which is much slower than the usual rate of $N^{-1/2}$ for regular cases. This indicates that with rare events data, the available information about unknown parameters is at the scale of $N_1$.
Although $N\rightarrow\infty$ much faster than $N_1\rightarrow\infty$ (in terms of $N_1/N\rightarrow0$), $N$ does not explicitly appear in the asymptotic normality result of Theorem~\ref{thm:1}.
For the specific case of linear logistic regression with $g(\x;\btheta)=\alpha+\x\tp\bbeta$, our Theorem~\ref{thm:1} reduces to Theorem 1 of \cite{Wang2020RareICML}.
\subsection{Negative sampling}
Since the available information is tied to the number of positive instances instead of the full data size, one can keep all the positive instances and significantly subsample the negative instances to reduce the computational cost. For better estimation efficiency, we consider nonuniform sampling. Let $\varphi(\x)>0$ be an integrable function and $\rho$ be the sampling rate on the negative class. %
Without loss of generality, assume $\Exp\{\varphi(\x)\}=1$ so that $\pi(\x_i):=\rho\varphi(\x_i)$ is the sampling probability for the $i$-th data point if $y_i=0$. %
We present the nonuniform negative sampling procedure in Algorithm~\ref{alg1}. %
\begin{algorithm}[H]
\caption{Negative sampling}\label{alg1}
For $i=1, ..., N$:\\
\-\hspace{0.05cm} if $y_i=1$,\; include $\{\x_i,y_i, \pi(\x_i,y_i)=1\}$ in the sample;\\
\-\hspace{2.55cm} if $y_i=0$,\;
calculate $\varphi(\x_i)$ and generate $u_i\sim \mathbb{U}(0,1)$;\\
\-\hspace{4.1cm} if $u_i\le\rho\varphi(\x_i)$, include $\{\x_i,y_i, \pi(\x_i,y_i)=\rho\varphi(\x_i)\}$ in the sample.
\end{algorithm}
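For concreteness, Algorithm~\ref{alg1} can be sketched in a few lines of vectorized code. This is a minimal illustration rather than a production implementation; the function name and interface are ours, and $\varphi$ is passed in as an arbitrary positive function with unit mean.

```python
import numpy as np

def negative_sampling(X, y, rho, varphi, seed=None):
    """One pass of Algorithm 1: keep every positive instance, and keep
    negative instance i with probability pi(x_i) = min(rho*varphi(x_i), 1).
    Returns the indices of sampled points and their inclusion probabilities."""
    rng = np.random.default_rng(seed)
    pi = np.ones(len(y))                       # positives: pi = 1
    neg = (y == 0)
    pi[neg] = np.minimum(rho * varphi(X[neg]), 1.0)
    u = rng.uniform(size=len(y))               # u_i ~ U(0, 1)
    keep = (y == 1) | (u <= pi)
    return np.flatnonzero(keep), pi[keep]
```

With $\varphi\equiv1$ this reduces to uniform negative sampling at rate $\rho$.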
\section{Weighted estimation and its optimal negative sampling probability}
\label{sec:weight-estim-optim}
Let $\delta_i=1$ if the $i$-th data point is selected and $\delta_i=0$ otherwise. %
Given a subsample taken according to $\pi(\x_i,y_i)=y_i+(1-y_i)\rho\varphi(\x_i)$, $i=1, ..., N$, the IPW estimator of $\btheta$ is %
\begin{equation}\label{eq:4}
{\hat{\boldsymbol{\theta}}}_w %
=\arg\max_{\btheta}\sumN\delta_i\pi^{-1}(\x_i,y_i)
\big[y_ig(\x_i;\btheta)-\log\big\{1+e^{g(\x_i;\btheta)}\big\}\big].
\end{equation}
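A direct way to compute ${\hat{\boldsymbol{\theta}}}_w$ in (\ref{eq:4}) is to maximize the weighted log-likelihood numerically. The sketch below assumes the linear logistic case $g(\x;\btheta)=\alpha+\x\tp\bbeta$ and uses a generic quasi-Newton solver; it is illustrative, and the function name is ours.

```python
import numpy as np
from scipy.optimize import minimize

def ipw_logistic(X, y, pi, theta0=None):
    """IPW estimator of eq. (4) for g(x; theta) = alpha + x'beta.
    X, y, pi: features, labels, and inclusion probabilities of the
    *sampled* points only (those with delta_i = 1)."""
    Z = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    w = 1.0 / pi                                  # inverse probability weights
    def negloglik(theta):
        g = Z @ theta
        # log(1 + e^g) computed stably as logaddexp(0, g)
        return -np.sum(w * (y * g - np.logaddexp(0.0, g)))
    theta0 = np.zeros(Z.shape[1]) if theta0 is None else theta0
    return minimize(negloglik, theta0, method="BFGS").x
```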
We need an assumption on the subsampling rate to investigate the subsample estimators.
\begin{assumption}\label{asmp:3}
The subsampling rate $\rho$ satisfies that
$c_N:=e^{\alpha^*}/\rho\rightarrow c$ for a constant $c$ such that $0\le c<\infty$. %
\end{assumption}
Note that $c_N$ cannot be zero but its limit $c$ can be exactly zero.
It can be shown that $c_N\Exp\{e^{f(\x;\bbeta^{*})}\} = {N_1}{(N_0\rho)^{-1}}\{1+o_P(1)\}$, %
so $c\Exp\{e^{f(\x;\bbeta^{*})}\}$ is the asymptotic positive/negative ratio for the sample. %
The theorem below shows the asymptotic distribution of ${\hat{\boldsymbol{\theta}}}_w$.
\begin{theorem}\label{thm:2}
Under Assumptions \ref{asmp:1}-\ref{asmp:3},
if $\Exp[\{\varphi(\x)+\varphi^{-1}(\x)\}B^2(\x)]<\infty$ then
as $N\rightarrow\infty$,
\begin{equation*}
\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_w-\btheta^{*}) \longrightarrow \mathbb{N}(\0,\ \mathbf{V}_w),
\quad\text{ in distribution,}
\end{equation*}
where $\mathbf{V}_w=\mathbf{V}_{{\mathrm{f}}}+\mathbf{V}_{\mathrm{sub}}$ and $\mathbf{V}_{\mathrm{sub}}=c\Exp\{e^{f(\x;\bbeta^{*})}\}\M_{{\mathrm{f}}}^{-1}
\Exp\{\varphi^{-1}(\x)e^{2f(\x;\bbeta^{*})}\dot{g}^{\otimes2}(\x;\btheta^{*})\}
\M_{{\mathrm{f}}}^{-1}$.
\end{theorem}
The subsampled estimator ${\hat{\boldsymbol{\theta}}}_w$ has the same convergence rate as the full data estimator. However, the asymptotic variance $\mathbf{V}_w$ may be inflated by the term $\mathbf{V}_{\mathrm{sub}}$ from subsampling the negative cases. Here $\mathbf{V}_{\mathrm{sub}}$ is zero if $c=0$.
If we keep many more negative instances than positive instances in the subsample ($c=0$), then there is no asymptotic information loss %
due to subsampling ($\mathbf{V}_w=\mathbf{V}_{{\mathrm{f}}}$). In this scenario, the number of negative instances can still be aggressively reduced ($\rho\rightarrow0$) so that the computational efficiency is significantly improved, and the sampling function $\varphi(\x)$ does not play a significant role. If we have to reduce the negative instances to the same level as the positive instances ($0<c<\infty$), then the variance inflation $\mathbf{V}_{\mathrm{sub}}$ is not negligible. In this scenario, %
a well-designed sampling function $\varphi(\x)$ is more relevant and critical.
Theorem~\ref{thm:2} shows that the distribution of the estimation error $err:=\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_w-\btheta^{*})$ is approximated by a normal distribution $\mathbb{N}(\0, \mathbf{V}_w)$. Thus for any $\epsilon>0$ the probability of excess error $\Pr(\|err\|>\epsilon)$ is %
approximated by $\Pr\{\|\mathbb{N}(\0,\mathbf{V}_w)\|>\epsilon\}=\Pr\big(\sum_{j=1}^d\lambda_jz_j^2>\epsilon^2\big)$, where $\lambda_j$'s are eigenvalues of $\mathbf{V}_w$ and $z_j$'s are independent standard normal random variables. Therefore, a smaller trace of $\mathbf{V}_w$ means a smaller probability of excess error of the same level since $\tr(\mathbf{V}_w)=\sum_{j=1}^d\lambda_j$.
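The excess-error probability above is easy to approximate by Monte Carlo once an estimate of the asymptotic variance is available; the sketch below (function name ours) draws the $z_j$'s directly.

```python
import numpy as np

def excess_error_prob(V, eps, m=200000, seed=None):
    """Monte Carlo estimate of P(||N(0, V)|| > eps), i.e. of
    P(sum_j lambda_j z_j^2 > eps^2) with lambda_j the eigenvalues of V."""
    rng = np.random.default_rng(seed)
    lam = np.linalg.eigvalsh(V)                      # eigenvalues of V
    z2 = rng.standard_normal((m, len(lam))) ** 2     # z_j^2 draws
    return np.mean(z2 @ lam > eps ** 2)
```

Shrinking $\tr(\mathbf{V}_w)$ shrinks this probability, which motivates the optimality criterion in Theorem~\ref{thm:3}.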
The following theorem gives the optimal negative sampling function for the IPW estimator.
\begin{theorem}\label{thm:3}
For a given sampling rate $\rho$, the asymptotically optimal $\varphi(\x)$ %
that minimizes $\tr(\mathbf{V}_w)$ is
\begin{equation}\label{eq:11}
\varphi_{\os}(\x)=
\frac{\min\{t(\x;\btheta^{*}),T\}}{\Exp[\min\{t(\x;\btheta^{*}),T\}]},
\end{equation}
where $t(\x;\btheta)=p(\x;\btheta)\|\M_{{\mathrm{f}}}^{-1}\dot{g}(\x;\btheta)\|$
and $T$ is the maximum number so that $\rho\min\{t(\x;\btheta^{*}),T\}\le\Exp[\min\{t(\x;\btheta^{*}),T\}]$ with probability one.
\end{theorem}
\begin{remark}\label{remark:2}
If $\rho\rightarrow0$ then $\rho t(\x;\btheta^{*})\le\Exp\{t(\x;\btheta^{*})\}$ almost surely, so $T$ can be dropped (i.e., $T=\infty$).
If $\lim_{N\rightarrow\infty}\rho>0$ then $c=0$, %
so the variance inflation due to subsampling is negligible. Thus, the truncation term $T$ can be ignored in practice with imbalanced data; it only plays a role for not very imbalanced data when the sampling ratio is not very small.
\end{remark}
In $\varphi_{\os}(\x)$, $t(\x;\btheta^{*})$ consists of two components, $p(\x;\btheta^{*})$ and $\|\M_{{\mathrm{f}}}^{-1}\dot{g}(\x;\btheta^{*})\|$. The term $p(\x;\btheta^{*})$ is from the binary response model structure, and it gives a higher preference for a data point with larger $p(\x;\btheta^{*})$ in the negative class.
The term $\|\M_{{\mathrm{f}}}^{-1}\dot{g}(\x;\btheta^{*})\|$ corresponds to the A optimality criterion \citep{pukelsheim2006optimal}.
For computational benefit in optimal sampling, the A optimality is often replaced by the L optimality \citep[e.g.,][]{WangZhuMa2018,ting2018optimal}.
Under the L optimality, $\|\M_{{\mathrm{f}}}^{-1}\dot{g}(\x;\btheta^{*})\|$ is replaced by $\|\dot{g}(\x;\btheta^{*})\|$, and this minimizes the asymptotic mean squared error (MSE) of $\M_{{\mathrm{f}}}{\hat{\boldsymbol{\theta}}}_w$.
In case the gradient is difficult to obtain, one could ignore the gradient term and simply use $p(\x;\btheta^{*})$ to replace $t(\x;\btheta^{*})$. Although this does not give an optimal sampling probability, it outperforms the uniform subsampling because it takes into account the information of the model structure like the local case-control (LCC) sampling.
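Putting the pieces together, the optimal score of Theorem~\ref{thm:3} is straightforward to evaluate on a dataset. The sketch below again assumes $g(\x;\btheta)=\alpha+\x\tp\bbeta$, replaces the expectation in (\ref{eq:11}) by a sample mean, and drops the truncation $T$, as justified in Remark~\ref{remark:2} for small $\rho$; names and interface are illustrative.

```python
import numpy as np

def optimal_varphi(X, theta, Mf_inv):
    """Optimal negative-sampling function varphi_os of eq. (11) for
    g(x; theta) = alpha + x'beta, normalized to have sample mean 1.
    Mf_inv: inverse Hessian M_f^{-1} (identity gives the L-optimal score)."""
    Z = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-(Z @ theta)))           # p(x; theta)
    t = p * np.linalg.norm(Z @ Mf_inv.T, axis=1)     # p(x) * ||Mf^{-1} g_dot(x)||
    return t / t.mean()
```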
\section{More efficient estimation based on the likelihood with log odds correction}
\label{sec:more-effic-estim}
\subsection{Nonuniform log odds correction}
The optimal function $\varphi_{\os}(\x)$ assigns larger probabilities to more informative instances. However, the IPW estimator in (\ref{eq:4}) assigns smaller weights to more informative instances in estimation, so the resulting estimator can be improved to have higher estimation efficiency.
A naive unweighted estimator is biased, and the bias correction approach in \cite{fithian2014local,wang2019more} does not work for the sampling procedure in Algorithm~\ref{alg1} even when the underlying model is the logistic regression. We adopt the idea of \cite{han2020local} and seek to define a more efficient estimator through finding the corrected likelihood of the sampled data based on any negative sampling probability $\pi(\x)$.
For data included in the subsample (where $\delta=1$), the conditional probability of $y=1$ is
\begin{equation}\label{eq:2}
p_{\pi}(\x;\btheta)
:=\Pr(y=1\mid\x,\delta=1)
=\frac{1}{1+e^{-g(\x;\btheta)-l}},
\end{equation}
where $l=-\log\{\pi(\x)\}$. Please see the detailed derivation of (\ref{eq:2}) in the supplement.
To avoid the IPW, we propose the estimator based on the log odds corrected likelihood in (\ref{eq:2}), namely,
\begin{equation}\label{eq:5}
{\hat{\boldsymbol{\theta}}}_{\lik}=\arg\max_{\btheta}\ell_{\lik}(\btheta),
\quad\text{where}\quad
\ell_{\lik}(\btheta)
=\sumN\delta_i\big[y_ig(\x_i;\btheta)
-\log\big\{1+e^{g(\x_i;\btheta)+l_i}\big\}\big].
\end{equation}
With ${\hat{\boldsymbol{\theta}}}_w$ in (\ref{eq:4}), $\pi(\x_i)$ is in the inverse. If an instance with much smaller $\pi(\x_i)$ is selected in the sample, it dominates the objective function, making the resulting estimator unstable. With ${\hat{\boldsymbol{\theta}}}_{\lik}$, this problem is ameliorated because $\pi(\x_i)$ is in the logarithm in the log-likelihood of (\ref{eq:5}) and $\log(v)$ goes to infinity much slower than $v^{-1}$ as $v\downarrow0$.
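Computationally, ${\hat{\boldsymbol{\theta}}}_{\lik}$ only changes the objective by an additive offset $l_i=-\log\pi(\x_i)$ inside the logistic link, as sketched below for the linear case (illustrative interface, function name ours).

```python
import numpy as np
from scipy.optimize import minimize

def lik_logistic(X, y, pi, theta0=None):
    """Log-odds-corrected estimator of eq. (5): the sampling probability
    enters as an offset l_i = -log(pi_i), not as an inverse weight."""
    Z = np.column_stack([np.ones(len(y)), X])
    l = -np.log(pi)                               # l = 0 for positives (pi = 1)
    def negloglik(theta):
        g = Z @ theta
        return -np.sum(y * g - np.logaddexp(0.0, g + l))
    theta0 = np.zeros(Z.shape[1]) if theta0 is None else theta0
    return minimize(negloglik, theta0, method="BFGS").x
```

When $\pi\equiv1$ (no subsampling) the offset vanishes and the estimator reduces to the ordinary MLE.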
\subsection{Theoretical analysis of the likelihood-based estimator}
The following shows the asymptotic distribution of ${\hat{\boldsymbol{\theta}}}_{\lik}$.
\begin{theorem}\label{thm:4}
Under Assumptions \ref{asmp:1}-\ref{asmp:3},
if $\Exp\{e^{f(\x;\bbeta^{*})}\varphi^{-1}(\x)B(\x)\}<\infty$,
then
as $N\rightarrow\infty$,
\begin{equation*}
\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_{\lik}-\btheta^{*}) \longrightarrow \mathbb{N}(\bm{0},\mathbf{V}_{\lik})
\quad\text{ in distribution,}
\end{equation*}
where $\mathbf{V}_{\lik}=\Exp\{e^{f(\x;\bbeta^{*})}\}\boldsymbol{\Lambda}_{\lik}^{-1}$ and
$\boldsymbol{\Lambda}_{\lik}=\Exp\Big[\frac{e^{f(\x;\bbeta^{*})}\dot{g}^{\otimes2}(\x;\btheta^{*})}
{1+c\varphi^{-1}(\x)e^{f(\x;\bbeta^{*})}}\Big]$.
\end{theorem}
Theorem~\ref{thm:4} shows that the proposed estimator ${\hat{\boldsymbol{\theta}}}_{\lik}$ is asymptotically normal with variance $\mathbf{V}_{\lik}$. We compare $\mathbf{V}_{\lik}$ with $\mathbf{V}_w$ to show that ${\hat{\boldsymbol{\theta}}}_{\lik}$ has a higher estimation efficiency than ${\hat{\boldsymbol{\theta}}}_w$.
\begin{theorem}\label{thm:5}
If $\mathbf{V}_w$ and $\mathbf{V}_{\lik}$ are finite and positive definite, then
$\mathbf{V}_{\lik}\le\mathbf{V}_w$, i.e., $\mathbf{V}_w-\mathbf{V}_{\lik}$ is non-negative definite. The equality holds when $c=0$ and in this case $\mathbf{V}_{\lik}=\mathbf{V}_w=\mathbf{V}_{{\mathrm{f}}}$.
\end{theorem}
Since the estimator ${\hat{\boldsymbol{\theta}}}_{\lik}$ in Eq.~(\ref{eq:5}) is based on the conditional log-likelihood of the sampled data, it actually has the highest estimation efficiency among a class of asymptotically unbiased estimators. %
\begin{theorem}\label{thm:6}
Let $\X=(\x_1,...,\x_N)\tp$ be the feature matrix and $\Dd$ denote the sampled data. Let
${\hat{\boldsymbol{\theta}}}_U$ be a subsample estimator with the following asymptotic representation:
\begin{equation}\label{eq:24}
{\hat{\boldsymbol{\theta}}}_U=\mathbf{U}(\btheta^{*};\Dd)+N_1^{-1/2}o_P(1),
\end{equation}
where the variance $\Var\{\mathbf{U}(\btheta^{*};\Dd)\}$ exists, and $\mathbf{U}(\btheta^{*};\Dd)$ satisfies that
$\Exp\{\mathbf{U}(\btheta^{*};\Dd)\mid\X\}=\btheta^{*}$ and
$\Exp\{\dot{\mathbf{U}}(\btheta^{*};\Dd)\mid\X\}=\0$ with $\dot{\mathbf{U}}(\btheta^{*};\Dd)=\partial\mathbf{U}(\btheta^{*};\Dd)/\partial{\btheta^{*}}\tp$. If $N_1\Var\{\mathbf{U}(\btheta^{*};\Dd)\}\rightarrow\mathbf{V}_U$ in probability, then $\mathbf{V}_U\ge\mathbf{V}_{\lik}$.
\end{theorem}
\begin{remark}
Theorem~\ref{thm:6} tells us that ${\hat{\boldsymbol{\theta}}}_{\lik}$ is statistically the most efficient among a class of estimators, which includes both ${\hat{\boldsymbol{\theta}}}_w$ and ${\hat{\boldsymbol{\theta}}}_{\lik}$ as special cases.
The condition $\Exp\{\mathbf{U}(\btheta^{*};\Dd)\mid\X\}=\btheta^{*}$ ensures that the estimator is asymptotically unbiased. The condition $\Exp\{\dot\mathbf{U}(\btheta^{*};\Dd)\mid\X\}=\0$ can be intuitively interpreted as the requirement that the derivative of the $o_P(1)$ term in (\ref{eq:24}) is also negligible.
Clearly, this is satisfied by all unbiased estimators for which the $o_P(1)$ term in (\ref{eq:24}) is $\0$.
\end{remark}
\section{Practical considerations}
\label{sec:practical}
In this section, we first show how we design a more practical estimator based on the previous results and then present the theoretical analysis behind these improvements.
\subsection{Making estimators more practical}
Like the LCC sampling \citep{fithian2014local} and the A-optimal sampling \citep{wang2019more}, the $\varphi_{\os}(\x)$ in (\ref{eq:11}) depends on $\btheta$, and thus a pilot value of $\btheta$, say $\tilde{\bm{\theta}}$, is required in practice. Here, $\tilde{\bm{\theta}}$ can be constructed from a pilot sample, say $(\tilde{\x}_i,\tilde{y}_i)$, $i=1, ..., \tilde{n}$, taken by uniform sampling from both classes with equal expected sample sizes.
With $\tilde{\bm{\theta}}$ obtained, calculate
$\tilde{t}_i=p(\tilde\x_i;\tilde\btheta)\|\tilde\M_{{\mathrm{f}}}^{-1}\dot{g}(\tilde\x_i;\tilde\btheta)\|$
for $i=1, ..., \tilde{n}$, where $\tilde\M_{{\mathrm{f}}}$ is the Hessian matrix of the pilot sample objective function.
In case the Hessian matrix and gradients are hard to record, the gradient term can be dropped, i.e., use $\tilde{t}_i=p(\tilde\x_i;\tilde{\bm{\theta}})$. As mentioned in Remark~\ref{remark:2}, $T$ can be ignored for very imbalanced data.
The expectation in the denominator of (\ref{eq:11}) can be approximated by mimicking the method of moment estimator, namely by
\begin{equation}\label{eq:9}\textstyle
\tilde\omega= 2N_1(\tilde{n}N)^{-1}\sum_{\tilde{y}_i=1}\tilde{t}_i
+2N_0(\tilde{n}N)^{-1}\sum_{\tilde{y}_i=0}\tilde{t}_i.
\end{equation}
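As a sketch (function name ours), $\tilde\omega$ in (\ref{eq:9}) is computed from the pilot scores as follows.

```python
import numpy as np

def omega_tilde(t_pilot, y_pilot, N1, N0, n_tilde):
    """Method-of-moments normalizer of eq. (9), computed from a pilot
    sample drawn with equal expected sizes from the two classes."""
    N = N1 + N0
    return (2.0 * N1 / (n_tilde * N)) * t_pilot[y_pilot == 1].sum() \
         + (2.0 * N0 / (n_tilde * N)) * t_pilot[y_pilot == 0].sum()
```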
In practice, it is common to set a lower threshold, say $\varrho$, on sampling probabilities, to ensure that all instances have positive probabilities to be selected. This is more critical for the IPW estimator while the likelihood estimator is not very sensitive to very small sampling probabilities.
Denote the pilot value as $\tilde{\bm{\vartheta}}=(\tilde{\bm{\theta}}, \tilde\omega)$.
The following probabilities can be practically implemented
\begin{equation}\label{eq:8}
\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})
=\min[\max\{\rho\tilde\varphi_{\os}(\x),\varrho\},1],
\quad\text{ where }\quad
\tilde\varphi_{\os}(\x)=\tilde\omega^{-1}t(\x;\tilde{\bm{\theta}}),
\end{equation}
and we use this in our experiments in Section~\ref{sec:numer-exper}. The $\tilde\omega$ in (\ref{eq:9}) is essentially a normalizing term so that the expected negative sampling rate is $\rho$. For some practical systems such as online models, %
$\tilde\omega$ is often treated as a tuning parameter. We will illustrate this in our real data example in Section~\ref{sec:exper-real-data}.
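The truncated probability in (\ref{eq:8}) is essentially a one-liner. The sketch below (name ours) drops the gradient term so that $t(\x;\tilde{\bm{\theta}})=p(\x;\tilde{\bm{\theta}})$, as discussed above.

```python
import numpy as np

def practical_pi(p_tilde, rho, omega_tilde, varrho):
    """Practical sampling probability pi_rho^os of eq. (8), with the
    pilot score t(x) = p(x; theta~) and lower truncation varrho."""
    # np.clip(x, lo, hi) = min(max(x, lo), hi), matching eq. (8)
    return np.clip(rho * p_tilde / omega_tilde, varrho, 1.0)
```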
\subsection{Theoretical analysis of practical estimators}
Denote by ${\hat{\boldsymbol{\theta}}}_w^{\tilde{\bm{\vartheta}}}$ and ${\hat{\boldsymbol{\theta}}}_{\lik}^{\tilde{\bm{\vartheta}}}$ the IPW estimator and the likelihood estimator, respectively, with the practically estimated optimal probability in (\ref{eq:8}). We have the following results.
\begin{theorem}\label{thm:7}
Assume that the pilot estimator $\tilde{\bm{\vartheta}}$ is independent of the data, %
$\tilde\varphi_{\os}(\x)\rightarrow\varphi_{\mathrm{plt}}(\x)$ in probability, and %
$\rho^{-1}\varrho\rightarrow c_l>0$. Under Assumptions \ref{asmp:1}-\ref{asmp:3}, as $N\rightarrow\infty$,
the ${\hat{\boldsymbol{\theta}}}_w^{\tilde{\bm{\vartheta}}}$ satisfies that,
\begin{equation*}
\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_w^{\tilde{\bm{\vartheta}}}-\btheta^{*})
\rightarrow \mathbb{N}(\bm{0},\mathbf{V}_w^{\mathrm{plt}}) \quad\text{in distribution,}
\end{equation*}
where $\mathbf{V}_w^{\mathrm{plt}}$ has the same expression as $\mathbf{V}_w$ except that $\varphi(\x)$ is replaced by $\max\{\varphi_{\mathrm{plt}}(\x),c_l\}$. If $\Pr\{\varphi_{\mathrm{plt}}(\x)\ge c_l\}=1$ and the pilot estimator is consistent, then $\mathbf{V}_w^{\mathrm{plt}}$ achieves the optimal variance.
\end{theorem}
\begin{theorem}\label{thm:8}
Assume that the pilot estimator $\tilde{\bm{\vartheta}}$ is independent of the data, %
$\tilde\varphi_{\os}(\x)\rightarrow\varphi_{\mathrm{plt}}(\x)$ in probability, and %
$\varrho=o(\rho)$.
Under Assumptions \ref{asmp:1}-\ref{asmp:3}, as $N\rightarrow\infty$,
${\hat{\boldsymbol{\theta}}}_{\lik}^{\tilde{\bm{\vartheta}}}$ satisfies that,
\begin{equation*}
\sqrt{N_1}({\hat{\boldsymbol{\theta}}}_{\lik}^{\tilde{\bm{\vartheta}}}-\btheta^{*})
\rightarrow \mathbb{N}\big(\bm{0},\mathbf{V}_{\lik}^{\mathrm{plt}}\big)
\quad\text{in distribution,}
\end{equation*}
where
$\mathbf{V}_{\lik}^{\mathrm{plt}}$ has the same expression as $\mathbf{V}_{\lik}$ except that $\varphi(\x)$ is replaced by $\varphi_{\mathrm{plt}}(\x)$. If the pilot estimator is consistent, then $\mathbf{V}_{\lik}^{\mathrm{plt}}$ achieves the variance with the optimal probability.
\end{theorem}
\begin{remark}
Theorems~\ref{thm:7} and \ref{thm:8} do not require $\tilde{\bm{\vartheta}}$ to be consistent, i.e., the pilot estimator can be misspecified. However, $\tilde{\bm{\vartheta}}$ has to be consistent in order to achieve the asymptotic variances with the optimal probability. Compared with the likelihood estimator ${\hat{\boldsymbol{\theta}}}_{\lik}^{\tilde{\bm{\vartheta}}}$, the IPW estimator ${\hat{\boldsymbol{\theta}}}_w^{\tilde{\bm{\vartheta}}}$ requires a stronger condition on $\varrho$ to have asymptotic normality, because $\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})$ is in the denominator of the objective function for ${\hat{\boldsymbol{\theta}}}_w^{\tilde{\bm{\vartheta}}}$, so any fluctuation of $\tilde{\bm{\vartheta}}$ is amplified. %
The requirement $c_l>0$ means that $\varrho$ cannot be too small %
in practice. The likelihood estimator does not have this constraint.
\end{remark}
\section{Numerical experiments}
\label{sec:numer-exper}
We present simulated results and report the performance on a CTR dataset with more than 0.3 trillion instances. More experiments with model and pilot misspecifications are available in the supplement.
\subsection{Experiments on simulated data}
\label{sec:exper-simul-data}
We implement the following negative sampling methods for logistic regression: 1) uniW: uniform sampling with the IPW estimator; 2) uniLik: uniform sampling with the likelihood estimator; 3) optW: optimal sampling with the IPW estimator; 4) optLik: optimal sampling with the likelihood estimator. We also implement the full data MLE (Full) and the LCC for comparison.
In each repetition of the simulation, we generate full data of size $N=5\times10^5$ from the logistic regression with $g(\x;\btheta)=\alpha+\x\tp\bbeta$. We set the true $\bbeta^{*}$ to be a $6\times1$ vector of ones and set different values for $\alpha$ for different feature distributions so that the positive/negative ratio is close to 1:400.
We consider the following feature distributions: %
(a) Normal distribution which is symmetric with light tails; the true intercept is $\alpha=-7.65$.
(b) Log-normal distribution which is asymmetric and positively skewed; the true intercept is $\alpha=-0.5$.
(c) $t_3$ distribution which is symmetric with heavier tails; the true intercept is $\alpha=-7$.
We set the sampling rate as $\rho= 0.002, 0.004, 0.006, 0.01$, and $0.02$
for all sampling methods. %
In each repetition, uniform samples of average size $100$ are selected from each class to calculate the pilot estimates, so the uncertainty due to the pilot estimates is taken into account.
We also consider pilot misspecification by adding a uniform random number from $\mathbb{U}(0,1.5)$ to the pilot so that the pilot is systematically different from the true parameter.
We repeat the simulation for %
$R=1000$ times to calculate the MSE as $R^{-1}\sum_{r=1}^R\|{\hat{\boldsymbol{\theta}}}^{(r)}-\btheta^{*}\|^2$, where ${\hat{\boldsymbol{\theta}}}^{(r)}$ is the estimate at the $r$-th repetition.
\begin{figure}[htb]\centering
\begin{minipage}{0.32\textwidth}\centering
\includegraphics[width=\textwidth,page=1]{figures/00mse.pdf}
\includegraphics[width=\textwidth,page=1]{figures/00mseMisPilot.pdf}
(a) {$\x$'s are normal}
\end{minipage} %
\begin{minipage}{0.32\textwidth}\centering
\includegraphics[width=\textwidth,page=2]{figures/00mse.pdf}
\includegraphics[width=\textwidth,page=2]{figures/00mseMisPilot.pdf}
(b) {$\x$'s are lognormal}
\end{minipage} %
\begin{minipage}{0.32\textwidth}\centering
\includegraphics[width=\textwidth,page=3]{figures/00mse.pdf}
\includegraphics[width=\textwidth,page=3]{figures/00mseMisPilot.pdf}
(c) {$\x$'s are $t_3$}
\end{minipage}
\caption{Log(MSEs) for different estimators (the smaller the better). The top row uses consistent pilot estimators; the bottom uses misspecified pilot estimators.}
\label{fig:logistic1}
\end{figure}
Results are presented in Figure~\ref{fig:logistic1}. The MSEs of all estimators decrease as the sampling rate $\rho$ increases. The optLik outperforms other sampling methods in general, and its performance is close to the full data MLE when $\rho=0.02$, especially if the pilot is consistent.
The advantage of optLik over optW is more evident for smaller $\rho$.
If the pilot is misspecified, optLik is quite robust but optW and LCC significantly deteriorate. Since uniLik and uniW do not require a pilot, they are not affected by pilot misspecification, but they have much larger MSEs than optLik and optW when the pilot is consistent, especially uniW.
Different feature distributions also affect the performances of different sampling methods.
To verify Theorem~\ref{thm:1} numerically, we let the full data size $N$ increase at a faster rate than the average number of positive instances $N_1^a=\Exp(N_1)$ so that the probability $\Pr(y=1)$ decreases towards zero. As $N$ increases, we fix the value of $\bbeta^*$ and set decreasing values of $\alpha^*$ to mimic our scaling regime. %
We consider two covariate distributions: 1) a multivariate normal distribution, for which the logistic regression model is correct; and 2) a multivariate log-normal distribution, for which the logistic regression model is misspecified. %
We repeat the simulation 100 times to calculate the empirical variance $\hat{\mathbf{V}}_f$ of the full data MLE, and report the results in Table~\ref{tab:1}.
It is seen that $\tr(\hat{\mathbf{V}}_f)$ is decreasing towards zero.
According to Theorem~\ref{thm:1}, $\tr(\hat{\mathbf{V}}_f)$ should converge at a rate of $1/N_1$, so $N_1^a\tr(\hat{\mathbf{V}}_f)$ should be relatively stable as $N$ increases. This is indeed the case as seen in the third and sixth columns of Table~\ref{tab:1}. On the other hand, $N\tr(\hat{\mathbf{V}}_f)$ increases dramatically as $N$ increases. The aforementioned observations confirm the theoretical result in Theorem~\ref{thm:1}. Furthermore, Table~\ref{tab:1} shows that the convergence rate in Theorem~\ref{thm:1} may also hold for some misspecified models.
\begin{table}[htbp]%
\caption{Empirical variances of the full data MLE for different full data size and average number of positive instances combinations.}\label{tab:1}
\centering
\begin{tabular}{lccrcccr}\hline
& \multicolumn{3}{c}{Correct model} & & \multicolumn{3}{c}{Misspecified model} \\
\cline{2-4} \cline{6-8}\\[-3.5mm]
($N$, $N_1^a$) & $\tr(\hat{\mathbf{V}}_f)$ & $N_1^a\tr(\hat{\mathbf{V}}_f)$ & $N\tr(\hat{\mathbf{V}}_f)$
&& $\tr(\hat{\mathbf{V}}_f)$ & $N_1^a\tr(\hat{\mathbf{V}}_f)$ & $N\tr(\hat{\mathbf{V}}_f)$\\\hline
($10^3$, { } $32$) & 0.169 & 5.41 & 169.17 && 0.969 & 30.99 & 968.70 \\
($10^4$, { } $64$) & 0.097 & 6.20 & 969.29 && 0.322 & 20.59 & 3217.12\\
($10^5$, $128$) & 0.045 & 5.76 & 4497.24 && 0.135 & 17.32 & 13527.60\\
($10^6$, $256$) & 0.018 & 4.62 & 18048.40&& 0.046 & 11.74 & 45847.40\\\hline
\end{tabular}
\end{table}
\subsection{Experiments on real data}
\label{sec:exper-real-data}
We conduct experiments on {an internal CTR dataset from our products} %
with more than 0.3 trillion instances. There are 299 input features, including user and ad profiles together with rich context features. We concatenate all features into a single high-dimensional vector $\x$, whose length is greater than $5,000$. We use a 3-layer fully connected deep neural network as a feature extractor that maps $\x$ into a 256-dimensional dense feature vector $h(\x)$. The downstream model is a linear logistic regression taking $h(\x)$ as input to predict the binary output of whether the user clicks the ad or not.
The positive/negative ratio is around 1:80.
Due to limited storage, we first uniformly drop 80\% of the negative instances and try various negative sampling algorithms on the remaining 60 billion instances. On this pre-processed dataset, we then split out 1 percent of the data for testing, and do negative sampling on the rest of the data. The entire training procedure scans the subsampled data set from beginning to end in one pass according to the timestamp. The testing is done all the way through training. We calculate the area under the ROC curve (AUC) for testing instances. The entire model is trained on a distributed machine learning platform using 1,200 CPU cores and can finish within two days.
We adopt the sampling probability $\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})$
as demonstrated in~(\ref{eq:8}) and use the empirical sampling rate to calibrate the normalizing term $\tilde{\omega}$. As mentioned before, we remove %
the gradient term for implementation tractability, so $t(\x;\tilde{\bm{\theta}})=p(\x;\tilde{\bm{\theta}})$. As the results will show, this approximate score produces results consistent with the simulation experiments. We compare four methods: 1) uniW and 2) uniLik are the same as in the simulated experiments; and 3) optW and 4) optLik represent nonuniform sampling probability using $\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})$
with the IPW estimator and with the likelihood estimator. We use ``opt" to refer to both optW and optLik when focusing on the sampling.
We study two sets of negative sample rates (w.r.t.\ the original 0.3 trillion data): (i) $[0.01, 0.05]$. In this case, negative instances still outnumber positive instances. (ii) $[0.001, 0.008]$. In this case, negative subsampling is so aggressive that positive instances dominate negative instances. We demonstrate the result in Figure~\ref{fig:lcc1}. When the sample rate is moderate (Figure~\ref{fig:lcc1}(b)), opt is nearly optimal and can significantly outperform uniform subsampling. There is still a small gap between the IPW estimator and the likelihood estimator. When the sample rate is extremely small (Figure~\ref{fig:lcc1}(a)), %
{opt becomes sub-optimal and its gap w.r.t.\ uniform sampling is smaller. Still, we see a clear difference between the IPW estimator and the likelihood estimator. This is due to a larger asymptotic variance for uniform sampling when the sampling rate goes to zero.} Note that on this huge data set, a small relative AUC gain (e.g., 0.0005) usually could bring about significant revenue gain in products.
\begin{figure}[htb]
\centering
\begin{minipage}{0.49\linewidth}\centering
\includegraphics[width=\linewidth]{figures/case_lcc_small_sample_rate_flat.pdf}
(a) {\small Extremely small sampling rates.}
\end{minipage}
\begin{minipage}{0.49\linewidth}\centering
\includegraphics[width=\linewidth]{figures/case_lcc_moderate_sample_rate_flat.pdf}
(b) {\small Moderate sampling rates.}
\end{minipage}
\caption{Empirical testing AUC of subsample estimators for different sample sizes (the larger the better).}
\label{fig:lcc1}
\end{figure}
\subsection{Sensitivity analysis}\label{sec:sensitivity-analysis}
In practice, we find it necessary to use cross-validation to find the optimal $\varrho$ in (\ref{eq:8}) given a fixed sample rate. As demonstrated in Figure~\ref{fig:sensitivity}, for each sampling rate we tune $\varrho$ from a very small value (in this case, $10^{-6}$) all the way to its largest value. %
When $\varrho$ reaches its largest value, the nonuniform term $\rho\tilde{\omega}^{-1}t(\x;\tilde{\bm{\theta}})$ never exceeds $\varrho$, and $\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})$ reduces to uniform negative sampling. We observe that $\pi_{\varrho}^{\os}(\x;\tilde{\bm{\vartheta}})$ with a moderate $\varrho$ achieves the best results for various sampling rates. Note that $\varrho$ is the lower truncation level for negative sampling probabilities. Thus, a larger $\varrho$ means more ``simple" negative instances (those whose $\rho\tilde{\omega}^{-1}p(\x_i;\tilde{\bm{\theta}})$ is smaller than $\varrho$) will have better chances to be selected. This coincides with recent empirical results for negative subsampling in large search systems~\citep{huang2020embedding}, where they mix ``simple" (smaller $p(\x_i;\tilde{\bm{\theta}})$) and ``hard" (larger $p(\x_i;\tilde{\bm{\theta}})$) negative instances. We also observe an empirical variance of around $0.0001$
for each setup, demonstrating that the relative
improvement is consistent.
\begin{figure*}[htp]
\centering%
\includegraphics[width=\linewidth]{figures/sensitivity_analysis_v2.pdf}\vspace{-20pt}
\caption{AUC for IPW and likelihood estimators when tuning the truncation lower bound $\varrho$.}
\label{fig:sensitivity}
\end{figure*}
\section{Conclusions, discussion, and limitations}
\label{sec:conclusions}
In this paper, we have derived asymptotic distributions for full data and subsample estimators with rare events data. We have also found the optimal sampling probability that minimizes the unconditional asymptotic variance of the IPW estimator, and proposed the likelihood estimator through nonuniform log odds correction to further improve the estimation efficiency. The optimal probability depends on unknown parameters so we have discussed practical implementations and investigated the convergences of the resultant estimators. Experiments on both simulated and real big online streaming data confirm our theoretical findings.
Heuristically, our nonuniform negative sampling method gives preference to data points that are harder to observe, so it would give preference to sub-populations that are difficult to observe. We do not think this would bring any negative societal impacts. We have also examined our real data application and did not notice discrimination against any sub-populations.
This paper has the following limitations: 1) We assume that the model is correctly specified. Theoretical properties under model misspecification are not considered, and this important question requires future studies. 2) Oversampling the rare positive instances is another common practice to deal with imbalanced data. Its application together with negative sampling is not considered. 3) The assumption of a fixed $\bbeta^{*}$ means that both the marginal and conditional probabilities for a positive instance to occur are small. If the proportion of the positive instances for a given feature in a dataset is large, this assumption may not be appropriate. Further investigation is needed to see if our results are still applicable.
In the following, we present three scenarios in which the scaling regime may or may not fit:
Scenario 1 (phone call while driving) Car crashes occur with small probabilities, and making phone calls while driving significantly increases the probability of a car crash. However, the probability of car crashes among people making phone calls while driving is still small, so for these types of features, our scaling regime is appropriate to model the rare events.
Scenario 2 (anti-causal learning) Anti-causal learning \cite{scholkopf2012causal,kilbertus2018generalization} assumes that label ($y$) causes observations ($x$). Thus $\Pr(x|y)$ represents the causal mechanism and is fixed. One standard example is that diseases ($y$) cause symptoms ($x$). Our scaling regime fits the framework of anti-causal learning. To see this, using Bayes Theorem we write the log odds as
\begin{equation*}
\alpha+f(\x;\bbeta)=\log\frac{\Pr(y=1)}{\Pr(y=0)} + \log\frac{\Pr(\x\mid y=1)}{\Pr(\x\mid y=0)}.
\end{equation*}
In anti-causal learning, the marginal distribution of $y$ changes while the conditional distribution of $\x$ given $y$ is fixed. Thus only the scale factor $\alpha$ changes, and $f(\x;\bbeta)$ is fixed.
Scenario 3 (local transmission of COVID-19) Our scaling regime has some limitations and does not apply to all types of rare events data. For example, although the COVID-19 infection rate for the whole population is low, the infection rate for people whose family members in the same house have positive test results is high. This means the change of a family member's test result converts a small-probability event into a large-probability event, and our scaling regime would not be appropriate.
\begin{ack}
HaiYing Wang's research was partially supported by NSF grant CCF 2105571.
\end{ack}
\newpage
\begin{center}\Large\bf
Proofs and Additional Numerical Experiments
\end{center}
\section{Introduction}
The use of curved crystals to diffract and focus x-rays comes as a natural extension of the mirror and grating technology for radiation of longer wavelength. Some fundamental concepts, like the Rowland circle, date back to the 19$^\text{th}$ century \cite{rowland1882}.
The fundamental setups using bent crystals to focus X-rays were proposed in the early 1930’s. Some systems use meridional focusing (in the diffraction plane), like i) Johann spectrometer \cite{Johann1931}, using a cylindrically bent crystal, ii) Johansson spectrometer \cite{Johansson1933} using a ground and cylindrically bent crystal and iii) the Cauchois spectrometer \cite{cauchois1933} in transmission (Laue) geometry. The von Hamos spectrometer \cite{V.Hamos1933} applies sagittal focusing in the plane perpendicular to the diffraction plane.
With the advent of synchrotron radiation, the concepts of ``geometrical focusing'' were applied to design instruments such as polychromators for energy-dispersive extended x-ray absorption fine structure (EXAFS) \cite{Tolentino:ms0206}, monochromators with sagittal focusing for bending magnet beamlines \cite{Sparks1980}, or several types of crystal analyzers used at inelastic x-ray scattering beamlines. Bent crystals in transmission or Laue geometry are often employed in beamlines operating at high photon energies. The crystal curvature is used for focusing or collimating the beam in the meridional \cite{Suortti1988,SuorttiShulze} or sagittal \cite{Zhong2001} plane, or just to enlarge the energy bandwidth and improve the luminosity. The crystal bandwidth was optimized and aberrations reduced thanks to the high collimation and small source size of synchrotron beams. Curved crystal monochromators work in off-Rowland condition, whereas crystal analyzers for inelastic scattering studies work in the Rowland setting.
A ``Crystal Lens Equation'' (CLE) was formulated by \cite{CK} for the focusing properties of a cylindrically bent crystal plate diffracting monochromatic x-rays or neutrons, in Laue (transmission) or Bragg (reflection) geometries. The crystal is bent around an axis perpendicular to the diffraction plane (meridional focusing). This CLE is based on a purely geometric approach in which multiple Bragg scattering (dynamical effects) is neglected.
The CLE is revisited in Section~\ref{sec:CLE}, in order to correct errors found in \cite{CK} for the Laue geometry. A new formula valid in Bragg and Laue geometry is obtained, using the same geometrical approach as in \cite{CK}.
The CLE has wide applicability in Bragg geometry. However, its use for Laue geometry is limited to very thin crystals, because
it ignores a basic dynamical focusing effect also found in flat crystals,
as described in section~\ref{sec:dynamlicalLaue}.
The applicability of the lens equation in symmetrical Bragg geometry is discussed in appendix~\ref{sec:BraggGeometry}.
The CLE concerns the focusing of monochromatic radiation, and is in general different from the condition of polychromatic focusing. The particular cases where these two different focusing conditions coincide are discussed in section~\ref{sec:polychromatic}. A final summary is given in section~\ref{sec:summary}.
\section{The crystal lens equation revisited}
\label{sec:CLE}
The lens equation will be derived in Bragg or Laue geometry, with source $S$ and focus $F$ in real or virtual positions (see Fig.~\ref{fig:geometries}). Consider a monochromatic x-ray or neutron beam from a real or virtual point-source $S$. The origin of coordinates $O$ is chosen at the point of the crystal surface such that the ray $\overline{SO}$, of wavevector $\vec k_0$, is in \inblue{geometrical} Bragg incidence. \inblue{It gives rise outside the crystal} to a diffracted ray of wavevector $\vec k_h = \vec k_0 + \vec h$, where $\vec h$ is the reciprocal lattice vector in $O$, and \inblue{$|\vec k_h|=|\vec k_0|$} (see Fig.~\ref{fig:vectors}). This is valid in both transmission geometry (Laue) or reflection geometry (Bragg) for both plane and curved crystals\footnote{\inblue{Because of refraction effects, this choice implies that $\overline{SO}$ is, in general, not exactly in the direction of the diffraction profile peak, except for the symmetric Laue case}.}.
\begin{figure}
\label{fig:geometries}
\caption{Schematic representation of the different diffraction setups with real or virtual source in Bragg or Laue cases:
a) real source, real focus (red) in Laue case or virtual focus (blue) in Bragg case,
b) real source, virtual focus (red) in Laue case or real focus (blue) in Bragg case,
c) virtual source, real focus (red) in Laue case or virtual focus (blue) in Bragg case,
d) virtual source, virtual focus (red) in Laue case or real focus (blue) in Bragg case.
$L_0=\overline{SO}$ is the distance source to crystal and $L_h=\overline{OF}$ is the distance crystal to focus.}
\includegraphics[width=0.99\textwidth,trim=4cm 9cm 6cm 9cm,clip=true]{fig1.pdf}
\end{figure}
The inward normal to the crystal surface in $O$ is $\vec n$, and $\varphi_0 = (\vec n, \vec k_0)$ is the oriented angle from the vector $\vec n$ to the vector $\vec k_0$. Similarly, $\varphi_h = (\vec n, \vec k_h)$. Without loss of generality $\varphi_0$ is positive; $\theta_B$ is the Bragg angle (always positive).
In the case of symmetric geometry (asymmetry angle $\alpha=0$) we find $\varphi_{0,h}=\pm\theta_B$ in Laue or $\varphi_{0,h}=(\pi/2)\mp\theta_B$ in Bragg. Otherwise, the asymmetry angle $\alpha$ is defined as the angle of rotation of the vector $\vec h$ from its direction in the symmetrical case.
In Laue case $\varphi_{0,h}=\alpha \pm\theta_B$; in Bragg case $\varphi_{0,h}=\alpha\mp\theta_B+\pi/2$, therefore $2\theta_B=|\varphi_0-\varphi_h|$ in both cases, $2\alpha=\varphi_0+\varphi_h$ in Laue case and $2\alpha=\varphi_0+\varphi_h-\pi$ in Bragg case.
When moving the point of incidence $O$ to $P$ over an arbitrary small distance $s$ along the curved crystal surface (see Fig.~\ref{fig:vectors}),
$\vec h$ and $\vec n$ are changed into $\vec h'$
and $\vec n'$, respectively. The incident wavevector $\vec k'_{0}$ has the direction of $\overline{SP}$. It is diffracted into $\vec k'_{h}$.
The projections of the vectors $\vec k'_{h}$ and $\vec k'_{0}+\vec h'$ on the crystal surface are equal (conservation of the parallel components of wave-vectors).
$\varphi_{0,h}$ are changed into $\varphi'_{0,h}=\varphi_{0,h}+\Delta \varphi_{0,h}$.
Furthermore, in the present case of cylindrical bending of a very thin crystal, the surface projection of $\vec h'$ is constant (the angle between $\vec h$ and $\vec n$ is constant).
This implies that $(\sin \varphi_h - \sin \varphi_0)$ is invariant, therefore
\begin{equation}
\label{eq:invariant}
\Delta \varphi_h \cos\varphi_h = \Delta \varphi_0 \cos\varphi_0.
\end{equation}
\begin{figure}
\label{fig:vectors}
\caption{Schematic view of the relevant parameters in focusing by a bent crystal in Bragg geometry.
}
\includegraphics[width=0.99\textwidth,trim=4cm 6cm 5cm 10cm,clip=true]{fig2.pdf}
\end{figure}
The source distance $L_0=\overline{SO}$ is set as positive if the source is on the incidence side of the crystal (real source) or negative if the source is on the other side (virtual source) (see Fig.~\ref{fig:geometries}). The radius of curvature $R_c$ is set as positive if the beam is incident on the concave side of the bent crystal. The focus distance $L_h$ is set as positive if the (real or virtual) focus $F$ is situated on the incidence side on the crystal. With these conventions, $(\vec n,\vec n')=s/R_c$, $\epsilon_0 L_0 = s \cos\varphi_0$, $\epsilon_h L_h = s |\cos\varphi_h|$,
where $\epsilon_{0,h}$ are the angles between $\vec k_{0,h}$ and $\vec k'_{0,h}$.
Using the relationship
\begin{equation}
\varphi'_{0,h} =
(\vec n', \vec k'_{0,h}) =
(\vec n', \vec n) + (\vec n,\vec k_{0,h}) + (\vec k_{0,h}, \vec k'_{0,h}) = -\frac{s}{R_c} + \varphi_{0,h} + \epsilon_{0,h},
\end{equation}
we obtain
\begin{equation}
\label{eq:angles}
\Delta \varphi_0 = - \frac{s}{R_c} + s \frac{\cos\varphi_0}{L_0}
\end{equation}
and
\begin{equation}
\label{eq:angles2}
\Delta \varphi_h = - \frac{s}{R_c} + s \frac{|\cos\varphi_h|}{L_h}.
\end{equation}
The crystal lens equation, valid in both Bragg and Laue cases, is finally obtained by inserting these expressions in equation~(\ref{eq:invariant})
\begin{equation}
\label{eq:CLE}
\frac{|\cos\varphi_h| \cos\varphi_h}{L_h} - \frac{\cos^2\varphi_0}{L_0} = \frac{\cos\varphi_h - \cos\varphi_0}{R_c}.
\end{equation}
In the Laue symmetrical case ($\cos\varphi_h=\cos\varphi_0$) it predicts $L_h=L_0$ (for a real source, the focus is virtual at the same distance as the source) and, in the particular case of $L_0=+\infty$, a plane incident wave is diffracted into a plane wave.
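As a sanity check, equation~(\ref{eq:CLE}) can be solved numerically for $L_h$. A minimal Python sketch (function name hypothetical) recovers $L_h=L_0$ in the symmetric Laue case, and the Rowland-circle result $L_h=L_0=R_c\sin\theta_B$ in the symmetric Bragg case:

```python
import math

def cle_focal_distance(L0, Rc, phi0, phih):
    """Solve the crystal lens equation
        |cos(phi_h)| cos(phi_h)/L_h - cos^2(phi_0)/L_0 = (cos(phi_h) - cos(phi_0))/R_c
    for L_h, with the sign conventions of the text (L0 > 0 for a real source,
    Rc > 0 for incidence on the concave side)."""
    c0, ch = math.cos(phi0), math.cos(phih)
    return abs(ch) * ch / ((ch - c0) / Rc + c0 * c0 / L0)

theta_B = math.radians(13.78)   # e.g. Si 111 near 8.3 keV

# Symmetric Laue: phi_0 = +theta_B, phi_h = -theta_B  ->  L_h = L_0
Lh_laue = cle_focal_distance(L0=30.0, Rc=2.0, phi0=theta_B, phih=-theta_B)

# Symmetric Bragg on the Rowland circle: L_0 = R_c sin(theta_B)  ->  L_h = L_0
Rc = 2.0
Lh_bragg = cle_focal_distance(L0=Rc * math.sin(theta_B), Rc=Rc,
                              phi0=math.pi / 2 - theta_B,
                              phih=math.pi / 2 + theta_B)
```
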
The crystal lens equation~(\ref{eq:CLE}) obtained here is different from the equation given in \cite{CK}\footnote{The CLE given in \cite{CK} is
$
\cos^2\varphi_0/L_0 + \cos^2\varphi_h/L_h = (\cos\varphi_0 + |\cos\varphi_h|)/R_c$.
We think this is due to mistakes in their calculations, especially sign errors in their equation (9) as compared to our equation~(\ref{eq:angles}).
}.
Both equations are equivalent for the Bragg case ($\cos\varphi_h<0$), which is also considered by \citeasnoun{snigirevkohn1995}. They are not equivalent in the Laue case.
Note that we used in this section the same notation as \cite{CK}, where $R_c$ is positive for a concave surface, used to focus in the Bragg case. For the rest of the paper, we also use the notation $p \leftarrow L_0$, $q \leftarrow -L_h$, $R \leftarrow -R_c$, $\theta_1 \leftarrow \varphi_0$ and $\theta_2 \leftarrow \varphi_h$, which is more convenient for Laue crystals, because real focusing is obtained when the beam coming from a real source is incident on the convex side of the bent crystal (with positive $R$).
Equation~(\ref{eq:CLE}) is obtained here using a geometrical ray optics approach. It can also be deduced from a wave-optics approach as shown in Appendix~\ref{appendix:CLE}.
\section{Dynamical focusing in Laue geometry}
\label{sec:dynamlicalLaue}
The applicability of the CLE for the Laue case is limited to very thin crystals. The dynamical theory (see the book \cite{authierbook})
predicts ``new'' focal conditions, even for flat Laue crystals.
This is analyzed here in the framework of the Takagi-Taupin equations, hereafter TTE \cite{Takagi1962, Takagi, Taupin, Taupin1967}.
Section~\ref{sec:influence} deals with the derivation of the ``influence functions'' (Green functions), which represent the wavefield generated in the crystal by a point-source on the crystal entrance surface.
In section~\ref{sec:LaueFlat}, the approach to dynamical focusing in the symmetric Laue case
\cite{kushnir, GuigayFerrero2013}
is extended to asymmetric geometry. The effects of anomalous absorption (Borrmann effect) are obtained in parallel. The new concept of ``numerically determined focal length'' of a flat crystal, denoted as $q_{dyn}$, is introduced.
In section~\ref{sec:LaueNewCLE}, a lens equation for a bent Laue symmetrical crystal of finite thickness, expressed in terms of $q_{dyn}$, is established. Its predictions are shown to be in agreement with numerical calculations.
In section~\ref{sec:LaueCompatibilityCLE}, we verify that the formulation for the Laue asymmetric case by
\cite{GuigayFerrero2016} agrees with the CLE (equation (\ref{eq:CLE})) in the limit of vanishing crystal thickness.
\subsection{Influence function derived from Takagi-Taupin equations}
\label{sec:influence}
The x-ray wavefield inside the crystal is expressed as the sum of two modulated plane waves
\begin{equation}
\Psi(\vec x) = D_0(\vec x) e^{i \vec k_0 . \vec x} + D_h(\vec x) e^{i \vec k_h . \vec x},
\end{equation}
with slowly varying amplitudes $D_{0,h}(\vec x)$.
The spatial position $\vec x$ is expressed in oblique coordinates $(s_0,s_h)$ along the directions of the $\vec k_0$ and $\vec k_h=\vec k_0 + \vec h$ vectors, which are the in-vacuum wave-vectors of modulus $k=2\pi/\lambda$, where $\lambda$ is x-ray wavelength. $\vec{h}$ is the Bragg diffraction vector of the undeformed crystal. In such conditions, the differential TTE are
\begin{subequations}
\label{eq:TT}
\begin{align}
\frac{\partial D_0}{\partial s_0} =& \frac{ik}{2} \left[ \chi_0 D_0(\vec x)+c \chi_{\bar h} e^{i \vec h . \vec u (\vec x)} D_h(\vec x) \right]; \\
\frac{\partial D_h}{\partial s_h} =& \frac{ik}{2} \left[ \chi_0 D_h(\vec x)+c \chi_{h} e^{-i \vec h . \vec u (\vec x)} D_0(\vec x) \right],
\end{align}
\end{subequations}
where $\chi_0$, $\chi_h$, and $\chi_{\bar h}$ are the Fourier coefficients of order 0, $\vec h$ and $-\vec h$ of the undeformed crystal polarisability. The polarization factor $c$ ($c=1$ for $\sigma$-polarization and $c=\cos2\theta_B$ for $\pi$-polarization) is omitted from now on.
$\vec u (\vec x)$ is the displacement field of the deformed crystal.
In the case of cylindrical bending we have
\begin{equation}
\label{eq:cylinder}
\vec h . \vec u = -A s_0 s_h + \phi_1(s_0) - \phi_2(s_h)
\end{equation}
where $A$ and the $\phi_{1,2}$ functions are defined in Appendix~\ref{appendix:Deformation}.
This is a ``constant strain gradient'' case \cite{authierbook}, meaning that $\partial^2(\vec h . \vec u)/(\partial s_0 \partial s_h)$ is constant. In terms of the functions $G_{0,h}(s_0,s_h)$ defined by
\begin{subequations}
\label{eq:functionsG}
\begin{align}
D_0(s_0,s_h) &= G_0(s_0,s_h) \exp[i\frac{k}{2}\chi_0 (s_0+s_h)-i \phi_2(s_h)]\\
D_h(s_0,s_h) &= G_h(s_0,s_h) \exp[i\frac{k}{2}\chi_0 (s_0+s_h)-i \phi_1(s_0)+iAs_0s_h],
\end{align}
\end{subequations}
the TTE have a simpler form
\begin{subequations}
\label{eq:TTEsimple}
\begin{align}
\frac{\partial G_0}{\partial s_0} &= i \frac{k}{2}\chi_{\bar{h}} G_h
\\
\frac{\partial G_h}{\partial s_h} &= i \frac{k}{2}\chi_{h} G_0 - i A s_0 G_h.
\end{align}
\end{subequations}
An incident monochromatic wave of any form can be expressed as a modulated plane wave $D_{inc}(\vec x)\exp(i \vec k_0 . \vec x)$ defining a continuous distribution of coherent elementary point-sources on the crystal surface, according to the general Huygens principle in optics. The ``influence functions'' or Green functions, hereafter IF, are the TTE solutions for these point-sources.
The IF for point-sources of oblique coordinates $(\sigma_0,\sigma_h)$
are derived in \cite{GuigayFerrero2016} by formulating the TTE as integral equations in the case of an incident amplitude of the form $D_{inc}=\delta(s_h-\sigma_h)$.
The calculations (see appendix~\ref{appendix:TTEintegral}) result in the diffracted amplitude\footnote{The result for the transmitted amplitude $D_0(s_0,s_h)$ is not needed in what follows and is not presented here, but it is easily obtained by using equation~(\ref{eq:kummer}) in (\ref{eq:TTEsimple}).}
\begin{equation}
\label{eq:kummer}
D_h(s_0,s_h) = \frac{i k }{2} \chi_h e^{(ik/2) \chi_0 (s'_0 + s'_h)} e^{-i \vec h . \vec u (s_0,\sigma_h)} M(\frac{i\Omega}{A},1,iA s'_0 s'_h)
\end{equation}
where the first exponential term stands for the effects of refraction and normal absorption, $s'_{0,h}=s_{0,h}-\sigma_{0,h}$; $\Omega=k^2\chi_h\chi_{\bar{h}}/4$ and the $M$-function is the Kummer function (a confluent hypergeometric function) defined by the convergent infinite series
\begin{equation}
\label{eq:kummerSeries}
M(a,b,z) = 1 + \frac{a}{b} z + \dots + \frac{a(a+1)\cdots(a+n-1)}{n!\,b(b+1)\cdots(b+n-1)}z^n+\dots
\end{equation}
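For numerical work with equation~(\ref{eq:kummer}), the series (\ref{eq:kummerSeries}) can be summed directly. A minimal sketch (function name and truncation length are our choices), checked against the identities $M(a,b,0)=1$ and $M(1,1,z)=e^{z}$:

```python
import math, cmath

def kummer_M(a, b, z, terms=80):
    """Kummer confluent hypergeometric function M(a, b, z), summed from
    its power series; a and z may be complex, as in the expression of D_h."""
    total = term = 1.0 + 0.0j
    for n in range(terms):
        # ratio of consecutive series terms: (a+n) z / ((b+n)(n+1))
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
    return total
```
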
This type of TTE solution was already obtained by different methods \cite{Petrashen1974,Katagawa1974,Litzmann1974,Chukhovski1977}.
It is noticeable that the term $\exp[-i\vec h . \vec u (s_0,\sigma_h)]$ in equation~(\ref{eq:kummer}) is the phase shift acquired by scattering at the point of coordinates $(s_0,\sigma_h)$ along the incident ray. We can say that the kinematical (single-scattering) approximation of equation~(\ref{eq:kummer}) is
\begin{equation}
\label{eq:kummerapprox}
D_{h,kin}(s_0,s_h) = \frac{i k }{2} \chi_h e^{(ik/2) \chi_0 (s'_0 + s'_h)} e^{-i \vec h . \vec u (s_0,\sigma_h)}
\end{equation}
and the full multiple scattering is $D_h=D_{h,kin} M$.
\subsection{Dynamical focusing and Borrmann effect in a flat, asymmetric, Laue crystal}
\label{sec:LaueFlat}
Dynamical focusing by flat Laue crystals (without bending) was predicted by \citeasnoun{AfanasevKohn1977} and verified experimentally by \cite{Aristov1978,Aristov1980PhysStatSol,Aristov1980}
in the case of symmetrical geometry. The theory was extended to the asymmetric case by \citeasnoun{Kohn2000}. The application of dynamical focusing to high-resolution spectrometry was proposed by \citeasnoun{KohnGorobtsov2013}.
\begin{figure}
\label{fig:laue}
\caption{Schematic representation of the relevant parameters in Laue asymmetrical diffraction.
}
\includegraphics[width=0.99\textwidth,trim=3cm 10cm 5cm 10cm,clip=true]{fig3.pdf}
\end{figure}
The basic case of dynamical focusing is that of a point-source in $O$ ($\sigma_0=\sigma_h=0$) on the entrance surface of the crystal of thickness $t$.
$O'$ is the middle of the basis of the influence region (Borrmann fan) on the exit surface (see Fig.~\ref{fig:laue}).
The amplitude of the diffracted wave along the axis $O'\xi \perp \vec k_h$ is the value of the IF at the point of coordinates
\begin{equation}
\label{eq:s0andsh}
s_0 = \frac{a+\xi}{\sin2\theta_B } ; \:\:
s_h = \gamma\frac{a-\xi}{\sin2\theta_B},
\end{equation}
with $a=t \sin2\theta_B/(2\cos\theta_1)$ and $\gamma=\cos\theta_1/\cos\theta_2$.
The amplitude $D_h(\xi)$ is zero outside the interval $-a<\xi<a$,
and is proportional to the Bessel function $J_0(k \sqrt{\chi_h \chi_{\bar h} s_0 s_h})=J_0(Z\sqrt{a^2-\xi^2})$ in this interval \cite{kato1961}, with $Z=k\sqrt{\gamma\chi_h\chi_{\bar h}}/\sin2\theta_B$. In the case $|Z a| \gg 1$ the asymptotic approximation
\begin{equation}
J_0(Z\sqrt{a^2-\xi^2})\approx \left(\frac{2}{\pi Z \sqrt{a^2-\xi^2}}\right)^{1/2} \cos(Z\sqrt{a^2-\xi^2}-\pi/4)
\end{equation}
can be used in the central region $|\xi|\ll a$ where $\sqrt{a^2-\xi^2} \approx a - \frac{\xi^2}{2a}$.
We thus obtain in this central region the approximation
\begin{equation}
\label{eq:approximatedDiffractedField}
J_0(Z\sqrt{a^2-\xi^2})\approx \left(\frac{2}{i \pi Z a}\right)^{1/2} \left( e^{iZa-i Z\frac{ \xi^2}{2a}} + i
e^{-i Z a+i Z\frac{\xi^2}{2a}} \right),
\end{equation}
where the two exponential terms are related to the two sheets of the dispersion surface.
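The leading-order asymptotic form used above can be checked numerically, for a real argument, against the integral representation of $J_0$. A minimal Python sketch (function names hypothetical; the text's $Z$ is in general complex, so this checks only the real-argument formula):

```python
import math

def bessel_j0(x):
    """J0 via its integral representation J0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
    evaluated with a simple midpoint rule (adequate for this check)."""
    n = 20000
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def j0_asymptotic(x):
    """Leading-order asymptotic form sqrt(2/(pi x)) cos(x - pi/4)."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - math.pi / 4)

# With |Z a| >> 1 and |xi| << a the two expressions agree to a few parts in 10^4
Za, xi_over_a = 50.0, 0.1
arg = Za * math.sqrt(1.0 - xi_over_a**2)
```
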
The function $\exp(- i Z \xi^2 / (2 a))$
represents a converging wave if $\operatorname{Re}(Z)>0$ (divergent if $\operatorname{Re}(Z)<0$). A double, real and virtual, focusing effect is thus expected at opposite distances $\pm q_0$ from the crystal, with
\begin{equation}
\label{eq:q0}
q_0 = \frac{k a}{|\operatorname{Re}(Z)|}= \frac{a \sin2\theta_B}{|\operatorname{Re}(\sqrt{\gamma\chi_h\chi_{\bar h}})|}.
\end{equation}
This equation is present in \cite{Kohn2000, KohnGorobtsov2013} in a different form and from a different point of view. These authors consider a point-source at a finite distance and their equation determines the value of the crystal thickness needed to focus the diffracted wave on the back crystal surface. A noticeable difference is that our equation is expressed in terms of $\chi_h \chi_{\bar h}$ without approximations concerning the real and imaginary parts of the crystal polarizability. In the works cited above $\operatorname{Re} (\sqrt{\chi_h \chi_{\bar h}})$ is approximated by $|\chi_{hr}|$ or $|\chi_h|$.
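Equation~(\ref{eq:q0}) can be evaluated directly from the parameters of Table~\ref{table:example}. The sketch below (variable names are ours) reproduces the tabulated $q_0$ at 8.3~keV for the symmetric case ($\gamma=1$):

```python
import math, cmath

# Si 111 at 8.3 keV, symmetric Laue (gamma = 1); values from Table 1
theta_B = math.radians(13.78)
t = 250e-6                                # crystal thickness [m]
chi_h_chi_hbar = (58.06 - 3.416j) * 1e-12

# half-base of the Borrmann fan: a = t sin(2 theta_B) / (2 cos(theta_1))
a = t * math.sin(2 * theta_B) / (2 * math.cos(theta_B))

# q0 = a sin(2 theta_B) / |Re(sqrt(chi_h chi_hbar))|
q0 = a * math.sin(2 * theta_B) / abs(cmath.sqrt(chi_h_chi_hbar).real)
q0_mm = q0 * 1e3    # Table 1 quotes 3615 mm
```
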
The moduli of the two terms in equation~(\ref{eq:approximatedDiffractedField}) are proportional to $\exp(\mp a \operatorname{Im}(Z))$, respectively. This is the expression of anomalous absorption (Borrmann effect). Two focal positions will be observed for small absorption, but only one for strong absorption, as shown in Fig.~\ref{fig:flatLaue}.
The reflected amplitude at any distance $q$ from the crystal can be calculated numerically, without the approximations used above, by the Fresnel diffraction integral
\begin{equation}
\label{eq:Fresnel}
D_h(\xi; q) = (\lambda q)^{-1/2} \int_{-a}^a d\xi' \, e^{i k
\frac{(\xi-\xi')^2}{2 q}}
J_0(Z\sqrt{a^2-\xi'^2}).
\end{equation}
The ``axial intensity profile'' $|D_h(0,q)|^2$ shows in general two strong maxima at distances $q_{1,2}=\pm q_{dyn}$, with $q_{dyn} < q_0$ (Fig.~\ref{fig:flatLaue}). This difference is a cylindrical aberration effect related to the approximations used to obtain equation (\ref{eq:q0}). The parameter $q_{dyn}$, which depends on the crystal thickness, is the ``dynamical focal length'' obtained numerically, thus non-approximated (contrary to $q_0$).
As an example, some numerical values are given in Table~\ref{table:example}.
\begin{table}
\caption{Parameters for symmetrical Laue silicon crystal in 111 reflection and thickness $t$~= \SI{250}{\micro\meter}.}
\begin{tabular}{llccccc}
\makecell{Photon \\ energy \\ (keV)}& \makecell{$\theta_B$ \\ (deg)} & $\chi_0$ & $\chi_h\chi_{\bar h}$ & \makecell{$a$ \\ (\SI{}{\micro\meter})}& \makecell{$q_0$ \\ (mm)} & \makecell{$q_{dyn}$ \\ (mm)} \\
\hline
8.3 & 13.78 & \makecell{(-14.24 + 0.317 i) 10$^{-6}$} & \makecell{(58.06 - 3.416 i) 10$^{-12}$} & 59 & 3615 & 2860 \\
17 & 6.68 & \makecell{(-3.36 + 0.018 i) 10$^{-6}$} & \makecell{(3.20 - 0.046 i) 10$^{-12}$} & 29 & 3753 & 2535
\end{tabular}
\label{table:example}
\end{table}
\begin{figure}
\label{fig:flatLaue}
\caption{Numerical evaluation of on-axis intensity for a \SI{250}{\micro\meter} thick flat Si111 crystal ($R=\infty$) with source at the crystal entrance surface ($p=0$) calculated using equation~(\ref{eq:Fresnel}).
a) Simulation for a photon energy of 8.3 keV.
b) Simulation for a photon energy of 17 keV.
Numerical values of these simulations are in Table~\ref{table:example}.
}
\includegraphics[width=1\textwidth]{fig4.pdf}
\end{figure}
The focusing condition for a source at a finite distance $p$ from the crystal can be obtained by considering that propagation in free-space and propagation in the flat crystal are space-invariant, therefore expressed as convolutions in direct space or simple multiplications in reciprocal space. Therefore, they can be commuted. This makes it possible to merge the free-space propagation before and after the crystal. The focusing condition is therefore
\begin{equation}
p + q = q_{dyn}.
\end{equation}
On the contrary, propagation through a bent crystal is not space-invariant, because the IF depends not only on the variables $(s'_0, s'_h)$, but also on the variables $(\sigma_0, \sigma_h)$ through the factor
$\exp[-i \vec h . \vec u (s_0, \sigma_h)]$ in
equation~(\ref{eq:kummer}).
\subsection{A new lens equation for a bent crystal of finite thickness in symmetrical Laue geometry}
\label{sec:LaueNewCLE}
In symmetrical Laue geometry, the factor $\exp(i \chi_0 (s'_0+s'_h))$ in equation~(\ref{eq:kummer}) is constant on the crystal exit surface and will be omitted. Equation~(\ref{eq:kummer}) is (see Appendix~\ref{appendix:TTEintegral})
\begin{equation}
\label{eq:DhSymmetricalLaue}
D_h(s_0,s_h) = \frac{i k}{2} \chi_h e^{-i \vec h . \vec u(s_0,\sigma_h)}
J_0(2\sqrt{\Omega s'_0 s'_h}).
\end{equation}
Let us consider the incident amplitude $D_{inc}(\tau)=\exp(i k \tau^2/(2p))$, where $\tau$ is a coordinate along the axis $O\tau$ normal to $\vec k_0$ (see Fig.~\ref{fig:laue}). On the exit surface, using $s_0=(\xi+a)/\sin2\theta_B$ and $\sigma_h=-\tau/\sin2\theta_B$, and the notation $R'=R\cos\theta_B$ we obtain from equations in appendix \ref{appendix:Deformation}, in the case $\alpha=0$
\begin{equation}
\vec h . \vec u(s_0,\sigma_h) = k \frac{\tau(\tau+a)-\xi(\xi+a)}{ 2R'}.
\end{equation}
Using the integration variable $\eta=\xi-\tau$, the amplitude along the $\xi$-axis is, with omission of $i(k/2)\chi_h$,
\begin{equation}\label{eq:blabla}
D_h(\xi,0)=\int_{-a}^{+a}\frac{d\eta}{\sqrt{\lambda p}}
e^{\frac{ik}{2}\left[\frac{(\xi-\eta)^2}{p}+\frac{\eta^2-2\eta\xi-a\eta}{R'}\right]}
J_0(Z\sqrt{a^2-\eta^2}).
\end{equation}
The wave amplitude at a distance $q$ downstream from the crystal is obtained using a Fresnel diffraction integral similar to equation~(\ref{eq:Fresnel}). We thus have a double integral over $\eta$ and $\xi'$. The $\xi'$ integration is performed analytically \cite{GuigayFerrero2013} and it turns out that
\begin{equation}
\label{eq:Dhpropagated}
D_h(\xi,q)=
\frac{e^{i k \frac{\xi^2}{2L}}}{\sqrt{\lambda L}}
\int_{-a}^{+a} d\eta
e^{\frac{ik}{2}
[\frac{\eta^2}{L_e}-\eta(
\frac{2\xi q_e}{q L_e}+
\frac{a}{R'}
)]}
J_0(Z\sqrt{a^2-\eta^2}),
\end{equation}
where $L=p+q$, $p_e^{-1}=p^{-1}+R'^{-1}$, $q_e^{-1}=q^{-1}-R'^{-1}$ and $L_e=p_e+q_e$. The focal positions are given by $L_{e}=\pm q_{dyn}$.
This can be written as
\begin{equation}
\label{eq:preLaueCLE}
\frac{R'}{R'-q} - \frac{R'}{R' + p} = \pm \frac{q_{dyn}}{R'}.
\end{equation}
Translating equation~(\ref{eq:preLaueCLE}) in the notation of section~\ref{sec:CLE} ($p \to L_0$, $q \to -L_h$, $R \to -R_c$), we obtain
\begin{equation}
\label{eq:newCLE}
\frac{1}{L_h-R_c \cos\theta_B} -
\frac{1}{L_0 - R_c \cos\theta_B} =
\pm \frac{q_{dyn}}{(R_c \cos\theta_B)^2}.
\end{equation}
If $q_{dyn}$ is set to zero, we obtain $L_h=L_0$, the same result as the lens equation~(\ref{eq:CLE}).
Equation~(\ref{eq:newCLE}) can be considered as a ``modified lens equation'' which takes dynamical diffraction effects into account in symmetric Laue geometry.
We are not aware of an equation analogous to equation~(\ref{eq:newCLE}) for the general case of asymmetrical Laue diffraction. However, numerical simulations can be done to obtain the focal positions \cite{Nesterets,GuigayFerrero2016}.
Examples of numerical calculations using equation~(\ref{eq:Dhpropagated}) are shown in Fig.~\ref{fig:8keV}, for the case of the 111 reflection of a \SI{250}{\micro\meter} thick cylindrically bent symmetric Laue silicon crystal, with a curvature radius of $R$~= \SI{1}{\meter}, at a source distance $p$~= \SI{30}{\meter} and for x-ray photon energies of 8.3~keV and 17~keV.
Alternatively, provided that the parameter $q_{dyn}$ has been previously determined numerically by a plot similar to Fig.~\ref{fig:flatLaue}, the focal positions can be given directly by equation~(\ref{eq:newCLE}). The results are in very good agreement with the focal positions obtained numerically in Fig.~\ref{fig:8keV}. An important advantage of using the new CLE is that the same value of $q_{dyn}$ can be used for any value of the radius of curvature and for any value of the source distance.
We are often interested in real focusing ($q>0$) of an incident beam from a very distant real source, for instance in dispersive EXAFS beamlines.
Suppose $0<R'\le q_{dyn}$. When $p$ increases from zero to infinity, $q_1$ decreases from $q_1=R'q_{dyn}/(q_{dyn}+R')$ to
$q_1=R'(q_{dyn}-R')/q_{dyn}$.
Simultaneously, $q_2$ decreases from $q_2=R'q_{dyn}/(q_{dyn}-R')$ to
$q_2=R'(q_{dyn}+R')/q_{dyn}$.
For very large $p$-values, we have the simple relation $q_1+q_2\approx 2R'$ in good agreement with the numerical results in Fig.~\ref{fig:8keV}.
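These focal positions follow from $L_e = p_e + q_e = \pm q_{dyn}$. A short Python sketch (function name hypothetical), using the 8.3~keV value of $q_{dyn}$ from Table~\ref{table:example} and the geometry of Fig.~\ref{fig:8keV} ($R$~= 1~m, $p$~= 50~m as in the figure caption), gives $q_1\approx0.64$~m and $q_2\approx1.30$~m, close to the numerically obtained 0.651~m and 1.330~m, with $q_1+q_2\approx2R'$:

```python
import math

def laue_focal_distances(p, R, theta_B, q_dyn):
    """Focal distances q1, q2 of a bent symmetric Laue crystal from
    L_e = p_e + q_e = +/- q_dyn, with p_e^-1 = p^-1 + R'^-1,
    q_e^-1 = q^-1 - R'^-1 and R' = R cos(theta_B)."""
    Rp = R * math.cos(theta_B)
    pe = p * Rp / (p + Rp)
    out = []
    for sign in (+1, -1):
        qe = sign * q_dyn - pe
        out.append(qe * Rp / (Rp + qe))   # invert q_e^-1 = q^-1 - R'^-1
    return tuple(out)

# Si 111 at 8.3 keV: q_dyn = 2.860 m (Table 1), R = 1 m, p = 50 m (Fig. 5 caption)
q1, q2 = laue_focal_distances(p=50.0, R=1.0,
                              theta_B=math.radians(13.78), q_dyn=2.860)
```
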
\begin{figure}
\label{fig:8keV}
\caption{Numerical evaluation of diffracted intensity by a \SI{250}{\micro\meter} thick Si 111 symmetric Laue crystal calculated using equation~(\ref{eq:Dhpropagated}) for a bent (R~= \SI{1}{\meter}) crystal and $p$~= \SI{50}{\meter}.
a) on-axis intensity for a photon energy of 8.3 keV.
Inset: transverse profile at the focal distances (maximum values):
$q_1$~= \SI{651}{\milli\meter} (blue), and
$q_2$~= \SI{1330}{\milli\meter} (red).
b) on-axis intensity for a photon energy of 17 keV.
Inset: transverse profile at the focal distances (maximum values):
$q_1$~= \SI{625}{\milli\meter} (blue), and
$q_2$~= \SI{1372}{\milli\meter} (red).
}
\includegraphics[width=1\textwidth]{fig5.pdf}
\end{figure}
It can be seen from equation~(\ref{eq:Dhpropagated}) that the intensity $|D_h(\xi,q)|^2$ as a function of $\xi$ is symmetric around $\xi_c=-a q L_e / (2 q_e R')$. This denotes a lateral shift of the intensity profile from its position for the unbent crystal (the axial intensity profiles of Fig.~\ref{fig:8keV}a and \ref{fig:8keV}b are actually plotted as a function of $(\xi-\xi_c)$).
\subsection{Semianalytical approach in asymmetric Laue geometry and its CLE limit}
\label{sec:LaueCompatibilityCLE}
The generalization of equation (\ref{eq:blabla}) to asymmetric Laue geometry is \cite{GuigayFerrero2016}
\begin{equation}
\label{eq:unpropagatedkummer}
D_h(\xi,0) =
\int_{-a}^{a} \gamma\frac{d\eta}{\sqrt{\lambda p}}
e^{i k \gamma^2
\frac{(\xi-\eta)^2}{2p}+i \phi(\xi,\eta)
}
M(\frac{i\Omega}{A},1,i g k \frac{a^2-\eta^2}{R}).
\end{equation}
Here, $\phi(\xi,\eta)$ is calculated from the term $\exp(-i\vec h . \vec u(s_0,\sigma_h))$ in equation~(\ref{eq:kummer}) with $s_0=(a+\xi)/\sin2\theta_B$ and $\sigma_h=\gamma(\xi-\eta)/\sin2\theta_B$, giving
\begin{multline}
\phi(\xi,\eta) =\frac{k}{2R}[-\mu_2\gamma^2(\xi-\eta)^2
+a_2\gamma(\eta-\xi) \\
-\mu_1(a+\xi)^2
+a_1(a+\xi)
-2g(a+\xi)(\xi-\eta)],
\end{multline}
with parameters $\mu_{1,2}$, $a_{1,2}$ and $g$ given in Appendix~\ref{appendix:Deformation}.
The reflected amplitude $D_h(\xi,q)$ at distance $q$ downstream from the crystal is again obtained as in equation~(\ref{eq:Fresnel}), therefore by double integration over $\eta$ and $\xi'$. The $\xi'$-integration can be again performed analytically. The remaining $\eta$-integration involving the Kummer function is carried out numerically \cite{GuigayFerrero2016}. We consider this approach as semi-analytical, in contrast to the approach based on a numerical solution of the TTE \cite{Nesterets}.
It is interesting to study analytically the limit of this semi-analytical formulation in the case of
vanishing crystal thickness ($a\rightarrow 0$), because the comparison with lens equation~(\ref{eq:CLE}) represents a validity test of the semi-analytical formulation.
In the limit $a\rightarrow 0$,
the Kummer function is equal to unity in equation~(\ref{eq:unpropagatedkummer}), and the integral can be replaced by $2a$ times the integrand evaluated at $\eta=a=0$, therefore
\begin{equation}
\label{eq:14reduced}
D_h(\xi,0) = \frac{2 a \gamma}{\sqrt{\lambda p}} e^{\frac{i k \xi^2}{2}(\frac{\gamma^2}{p}-\frac{\mu_2\gamma^2+\mu_1+2g}{R})}.
\end{equation}
This is the expression of the amplitude of a cylindrical wave focused at the distance $q$ such that
\begin{equation}
\frac{1}{q}+\frac{\gamma^2}{p}-\frac{\mu_2\gamma^2+\mu_1+2g}{R}=0.
\end{equation}
Using the identity
\begin{equation}
\label{eq:appendixIdentity}
\mu_1+\gamma^2\mu_2+2g=\frac{\cos\theta_2-\cos\theta_1}{\cos^2\theta_2},
\end{equation}
which is derived in Appendix~\ref{appendix:Deformation}, the focusing condition is
\begin{equation}
\frac{1}{q}+\frac{\gamma^2}{p}+\frac{\cos\theta_1-\cos\theta_2}{R\cos^2\theta_2}=0,
\end{equation}
or,
\begin{equation}
\frac{\cos^2\theta_2}{q}+\frac{\cos^2\theta_1}{p}+\frac{\cos\theta_1-\cos\theta_2}{R}=0,
\end{equation}
which is the CLE (equation~(\ref{eq:CLE})) for the Laue case, with the correspondence $p \rightarrow L_0$, $q \rightarrow -L_h$, $R \rightarrow -R_c$, $\theta_1 \rightarrow \varphi_0$ and $\theta_2 \rightarrow \varphi_h$.
\section{Polychromatic geometric focusing}
\label{sec:polychromatic}
As pointed out by \cite{CK}, the monochromatic focusing condition must not be confused with the polychromatic focusing condition \cite{handbook,Caciuffo1987,Schulze1998,Martinson,martinson2017}, obtained by varying the wavelength of the reflected rays in order to satisfy the exact Bragg condition on the whole crystal surface.
The equation $\varphi_0+\varphi_h=2\alpha$ in Laue, or $\varphi_0+\varphi_h=2\alpha+\pi$ in Bragg case, implies $\Delta\varphi_0+\Delta\varphi_h=0$. Using equations~(\ref{eq:angles}) and (\ref{eq:angles2}) we obtain
\begin{equation}
\label{eq:polychromaticfocusing}
\frac{\cos\varphi_0}{L_0} + \frac{|\cos\varphi_h|}{L_h} = \frac{2}{R_c}.
\end{equation}
Equation~(\ref{eq:polychromaticfocusing}) is usually referred to as the ``geometric focusing'' condition for bent crystals. It is also applied in the case of flat crystals \cite{sanchezdelrio1994}. As in equation~(\ref{eq:CLE}), the crystal thickness does not appear in equation~(\ref{eq:polychromaticfocusing}).
The combination of equations (\ref{eq:CLE}) and (\ref{eq:polychromaticfocusing}) gives
\begin{equation}
\label{eq:coincidence}
\frac{\cos\varphi_0}{L_0}(\cos\varphi_h+\cos\varphi_0) = \frac{|\cos\varphi_h|}{L_h}(\cos\varphi_h+\cos\varphi_0),
\end{equation}
which is verified either in the symmetric Bragg case ($\cos\varphi_h+\cos\varphi_0=0$), or if $\cos\varphi_0/L_0=|\cos\varphi_h|/L_h=1/R_c$, which is the Rowland condition. The Rowland condition is therefore necessary for the coincidence of equations (\ref{eq:CLE}) and (\ref{eq:polychromaticfocusing}) in Laue geometry.
\footnote{This is different from the statement of \cite{CK} that the coincidence is always realised under symmetrical reflection or the Rowland condition.}
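This coincidence can be verified numerically. The sketch below (Python; the angles and distances are hypothetical) checks that an asymmetric Laue setting satisfying the Rowland condition makes the residuals of both equations (\ref{eq:CLE}) and (\ref{eq:polychromaticfocusing}) vanish:

```python
import math

def cle_residual(L0, Lh, Rc, phi0, phih):
    """Residual of the monochromatic lens equation (CLE)."""
    c0, ch = math.cos(phi0), math.cos(phih)
    return abs(ch) * ch / Lh - c0 * c0 / L0 - (ch - c0) / Rc

def poly_residual(L0, Lh, Rc, phi0, phih):
    """Residual of the polychromatic (geometric) focusing condition."""
    return math.cos(phi0) / L0 + abs(math.cos(phih)) / Lh - 2.0 / Rc

# Asymmetric Laue: phi_{0,h} = alpha +/- theta_B, with the Rowland condition
# cos(phi_0)/L_0 = |cos(phi_h)|/L_h = 1/R_c  (hypothetical angles and radius)
theta_B, alpha, Rc = math.radians(10.0), math.radians(5.0), 2.0
phi0, phih = alpha + theta_B, alpha - theta_B
L0 = Rc * math.cos(phi0)
Lh = Rc * abs(math.cos(phih))
```
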
A narrow energy band is reflected in Rowland condition, because the angle of incidence on the local reflecting plane does not change along the bent crystal surface.
On synchrotron dispersive EXAFS beamlines, the use of a symmetric Bragg reflection by a bent polychromator at a large distance from the source guarantees the focusing of a broad bandwidth (up to $\sim 1$~keV) on a small spot \cite{Tolentino:ms0206} at a distance close to $L_h=(R_c\sin\theta_B)/2$.
Laue polychromators are also used in synchrotron beamlines.
In symmetric Laue geometry, condition (\ref{eq:CLE}) should be replaced by equation~(\ref{eq:newCLE}), which is $L_h \approx R_c \cos\theta_B + (R_c \cos\theta_B) ^2 / q_{dyn}$ if the source distance is very large.
Coincidence with (\ref{eq:polychromaticfocusing}) is then obtained if
$R_c=-q_{dyn}/(2\cos\theta_B)$,
which means real focusing at the distance $|L_h|=q_{dyn}/4$ with the beam incident on the convex side of the crystal ($R_c<0$).
If $|L_h|$ is fixed, the required conditions are $|R_c| = 2 |L_h| / \cos\theta_B$ and $q_{dyn}=4|L_h|$. The last condition should be fulfilled by choosing the crystal thickness, as in \cite{Mocella2004,Mocella2008}.
Another polychromatic condition for Laue geometry has been introduced more recently \cite{Martinson, PengQi, PengQi2021}.
The energy components of a polychromatic ray traversing a bent Laue crystal of finite thickness meet the Bragg condition at different positions along the ray path. They are diffracted with different Bragg angles and therefore exit in different directions, giving rise to a polychromatic focus from a single ray. The ``magic condition'', under which single-ray focusing and geometric focusing (equation (\ref{eq:polychromaticfocusing})) coincide, is achieved by an adequate choice of the asymmetry. The magic condition is independent of the crystal thickness \cite{PengQi2021}. We observe that the magic condition (equation (19) in \cite{PengQi2021}) and the modified lens equation (\ref{eq:newCLE}) are both satisfied in the particular case of symmetric Laue geometry in the Rowland configuration.
\section{Conclusions and future perspectives}
\label{sec:summary}
The crystal lens equation (CLE, equation~(\ref{eq:CLE})) based on the conservation of the parallel component of the wavevector in the diffraction process has been revisited. It includes all cases of symmetric and asymmetric Laue and Bragg geometries. It differs from the previous formulation \cite{CK} in the Laue case. However, in Laue geometry, the lens equation can only be applied if the crystal is so thin that important effects resulting from the dynamical theory of diffraction, like the focusing of the Borrmann triangle, can be neglected. We derived the modified lens equation (\ref{eq:newCLE}) which overcomes this restriction in the symmetric Laue case. Consistently, it converges to the CLE as the crystal thickness tends to zero. The generic case of arbitrary asymmetry is left for a future investigation.
The fact that dynamic focusing cannot be achieved in the Bragg case (see Appendix D) justifies in some way the larger applicability of the CLE in the Bragg case.
The application of the CLE (equation~\ref{eq:CLE}) is restricted to monochromatic focusing. Polychromatic focusing, as used in the polychromators of dispersive EXAFS beamlines, happens when the wavelength of the reflected rays changes to exactly match the Bragg angle. This condition is given by a different lens equation (\ref{eq:polychromaticfocusing}). It implies a specular reflection of the rays on the Bragg planes that is, in general, incompatible with the CLE or the results of dynamical theory, except for the symmetric Bragg case. It has been demonstrated that the foci predicted by the monochromatic and polychromatic focusing conditions coincide if the source is situated on the Rowland circle. Moreover, such coincidence also holds for any source position (off-Rowland) in symmetric Bragg geometry, but not in symmetric Laue geometry. For the symmetric Laue case, the polychromatic and monochromatic foci can still match if the modified lens equation~(\ref{eq:newCLE}) is used instead, but this requires a particular choice of the crystal thickness.
The additional effect of focusing a polychromatic ray \cite{PengQi2021} gives the ``magic condition'' for Laue focusing, which implies the coincidence of geometric and single-ray focusing. Further studies would be required to match the magic condition (which does not depend on the crystal thickness) with monochromatic focusing. This could be done by optimizing the crystal thickness numerically using the formulation in section~\ref{sec:LaueCompatibilityCLE}.
\section{Introduction}
\label{sec:introduction}
Discovering the latent structure from many observed variables is an important yet challenging learning task. The discovered structures can help better understand the domain and lead to potentially better predictive models. Many local search heuristics based on maximum parsimony and maximum likelihood methods have been proposed to address this problem~\citep{SemSte03,Zhang04,HelGha05,TehDauRoy08,HarWil10}. Their common drawback is that it is difficult to provide consistency guarantees. Furthermore, the number of hidden states often needs to be determined before the structure learning, or chosen by cross-validation, which can be very time consuming.
Efficient algorithms with provable performance guarantees have been explored in the phylogenetic tree reconstruction community. One popular algorithm is the neighbor-joining (NJ) algorithm~\citep{SaiNei87}, where pairs of variables are joined recursively according to a certain distance measure. The NJ algorithm is consistent when the distance measure satisfies the path additive property~\citep{MihLevPac2009}. For discrete random variables, the additive distance is defined using the determinant of the joint probability table of a pair of variables~\citep{Lake1994}. However, this definition only applies to the cases where the observed variables and latent variables have the same number of states. When the latent variables represent simpler factors with smaller number of states, the NJ algorithm can perform poorly.
Another family of provably consistent reconstruction methods is the quartet-based methods \citep{SemSte03,ErdSzeSteWar99b}. These methods first resolve a set of latent relations for quadruples of observed variables (quartets), and subsequently, stitch them together to form a latent tree. A good quartet test plays an essential role in these methods, as it is called repeatedly by the stitching algorithms. Recently,~\citep{AnaChaHsuKakSonZha2011} proposed a quartet test using the leading $k$ singular values of the joint probability table, where $k$ is the number of hidden states. This new approach allows $k$ to be different from the number of the observed states. However, it still requires $k$ to be given in advance.
Our goal is to design a latent structure discovery algorithm which is \emph{agnostic} to the number of hidden states, since in practice we rarely know this number. The proposed approach is quartet based, where the quartet relations are resolved based on rank properties of $4$th order tensors associated with the joint probability tables of quartets. The key insight is that rank properties of the tensor reveal the latent structure behind a quartet. Similar observations have been reported in the phylogenetic community~\citep{E05,AllRho06}, but they are concerned about the cases where the number of hidden states is larger or equal to the number of observed states.
We focus instead on the cases where the number of hidden states is smaller, representing simpler factors. Furthermore, if the joint probability tensor is only approximately given (due to sampling noise) the main rank condition has to be modified. In~\citet{AllRho06} such condition is missing and in~\citet{E05} the condition is heuristically translated to the distance of a matrix to its best rank-$k$ approximation. In contrast, we propose a novel nuclear norm relaxation of the rank condition, discuss its advantages, and provide recovery conditions and finite sample guarantees.
Our quartet test is easy to compute since it only involves singular value decomposition of unfolded $4$th order tensors.
Using the proposed quartet test as a subroutine, the latent tree structure can be recovered in a divide-and-conquer fashion~\citep{PeaTar86}. For $d$ observed variables, the computational complexity of the algorithm is $O(d \log d)$, making it scalable to large problems.
Under mild conditions, the tree construction algorithm using our quartet test is consistent and stable under estimation from a finite number of samples. In simulations, we compared with alternatives in terms of resolving quartet relations and building entire latent trees. The proposed approach is among the best performing ones while being agnostic to the number of hidden states $k$. The latter is an important improvement, since cross validation for finding $k$ is expensive while leading to similar final results. We also applied the new approach to a stock dataset, where it discovered meaningful groupings of stocks according to industrial sectors, and led to a latent variable model that fits the data better than the competitors.
\section{Latent Tree Graphical Models}
\label{sec:latent_tree}
In this paper, we focus on discrete latent variable models where the conditional independence structures are specified by trees.
We assume that the $d$ observed variables, $\Oscr = \cbr{X_1,\ldots,X_d}$, are leaves of the tree and that they all have the same number of states, $n$. We also assume the $d_h$ hidden variables, $\Hscr = \cbr{X_{d+1},\ldots,X_{d+d_h}}$,
have the same\footnote{Our results are easily generalizable to the case where all hidden variables have different number of states.},
\emph{but unknown}, number of states, $k$, ($k\leq n$). Furthermore, we use uppercase letters to denote random variables (\eg, $X_i$) and lowercase letters their instantiations (\eg, $x_i$).
{\bf Factorization of distribution.} The joint distribution of all variables, $\Xscr=\Oscr \cup \Hscr$, in a latent tree model is a multi-way table (tensor), $\Pcal$, with $d+d_h$ dimensions. Although the tensor has $O(n^d k^{d_h})$ number of entries, they can be computed from just a polynomial number of parameters due to the latent tree structure. That is $\Pcal(x_1,\ldots,x_{d+d_h}) = \prod_{i=1}^{d+d_h} P(x_i | x_{\pi_i})$ where each $P(X_i|X_{\pi_i})$ is a conditional probability table (CPT) of a variable $X_i$ and its parent $X_{\pi_i}$ in the tree.\footnote{For a latent tree, we can select a latent node as the root, and re-orient all edges away from it to induce consistent parent-child relations. For the root node $X_r$, $P(X_r | X_{\pi_r})=P(X_r)$.} This factorization leads to a significant saving in terms of tensor representation: we can represent exponential number of entries using just $O(d_h k^2 + dnk)$ parameters from the CPTs. Throughout the paper, we assume that {\bf (A1)} all CPTs have full column rank, $k$.
{\bf Structure learning.} Determining the tree topology $\Tcal$ is an important and challenging learning problem. The goal is to discover the latent structure based just on samples from observed variables. For simplicity and uniqueness of the tree topology~\citep{Pearl88}, we assume that {\bf (A2)} every latent variable has \emph{exactly} 3 neighbors.
{\bf Quartet.} A quadruple of observed variables from a latent tree $\Tcal$ is called a quartet (Figure~\ref{fig:quartet_tree}).
\begin{figure}[ht]
\centering
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{5pt}
\begin{tikzpicture}
[
scale=0.75,
observed/.style={circle,inner sep=0.01mm,draw=black,fill=MyBlue},
hidden/.style={circle,inner sep=0.01mm,draw=black},
hidden2/.style={circle,inner sep=0.3mm,draw=black},
hidden3/.style={circle,inner sep=1.2mm,draw=black},
]
\node [observed,name=z1] at (-3.1,0.8) {$\mathsmaller X_{i_1}$};
\node [observed,name=z2] at (-3.1,-0.8) {$\mathsmaller X_{i_2}$};
\node [observed,name=z3] at (1.4,0.8) {$\mathsmaller X_{i_3}$};
\node [observed,name=z4] at (1.4,-0.8) {$\mathsmaller X_{i_4}$};
\node [hidden,name=h] at ($(-1.7,0)$) {$\mathsmaller H_i$};
\node [hidden3,name=m] at ($(-1,0)$){$ $};
\node [hidden,name=g] at ($(0,0)$) {$\mathsmaller G_i$};
\node [hidden3,name=empty1] at ($(-2.4,0.4)$) {$ $};
\node [hidden3,name=empty2] at ($(-2.4,-0.4)$) {$ $};
\node [hidden3,name=empty3] at ($(0.7,0.4)$) {$ $};
\node [hidden3,name=empty4] at ($(0.7,-0.4)$) {$ $};
\draw [-] (z1) to (empty1);
\draw [line width=0.4mm,style=dotted] (empty1) to (h);
\draw [-] (z2) to (empty2);
\draw [line width=0.4mm,style=dotted] (empty2) to (h);
\draw [-] (z3) to (empty3);
\draw [line width=0.4mm,style=dotted] (empty3) to (g);
\draw [-] (z4) to (empty4);
\draw [line width=0.4mm,style=dotted] (empty4) to (g);
\draw [-] (h) to (m);
\draw [line width=0.4mm,style=dotted] (m) to (g);
\end{tikzpicture}
\caption{Quartet ($X_1$, $X_2$, $X_3$, $X_4$) from a tree.}
\label{fig:quartet_tree}
\end{figure}
Under assumption {\bf (A2)}, there are $3$ ways to connect a quartet, $X_1, X_2, X_3$, $X_4$, using $2$ latent variables $H$ and $G$ (Figure~\ref{fig:topologies}).
\begin{figure}[ht]
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{ccc}
\begin{tikzpicture}
[
scale=0.75,
observed/.style={circle,inner sep=0.3mm,draw=black,fill=MyBlue},
hidden/.style={circle,inner sep=0.3mm,draw=black}
]
\node [observed,name=z1] at (-1.2,0.5) {$\mathsmaller X_1$};
\node [observed,name=z2] at (-1.2,-0.5) {$\mathsmaller X_2$};
\node [observed,name=z3] at (1.2,0.5) {$\mathsmaller X_3$};
\node [observed,name=z4] at (1.2,-0.5) {$\mathsmaller X_4$};
\node [hidden,name=h] at ($(-0.4,0)$) {$\mathsmaller H$};
\node [hidden,name=g] at ($(0.4,0)$) {$\mathsmaller G$};
\draw [-] (z1) to (h);
\draw [-] (z2) to (h);
\draw [-] (z3) to (g);
\draw [-] (z4) to (g);
\draw [-] (h) to (g);
\end{tikzpicture}
&
\begin{tikzpicture}
[
scale=0.75,
observed/.style={circle,inner sep=0.3mm,draw=black,fill=MyBlue},
hidden/.style={circle,inner sep=0.3mm,draw=black}
]
\node [observed,name=z1] at (-1.2,0.5) {$\mathsmaller X_1$};
\node [observed,name=z3] at (-1.2,-0.5) {$\mathsmaller X_3$};
\node [observed,name=z2] at (1.2,0.5) {$\mathsmaller X_2$};
\node [observed,name=z4] at (1.2,-0.5) {$\mathsmaller X_4$};
\node [hidden,name=h] at ($(-0.4,0)$) {$\mathsmaller H$};
\node [hidden,name=g] at ($(0.4,0)$) {$\mathsmaller G$};
\draw [-] (z1) to (h);
\draw [-] (z3) to (h);
\draw [-] (z2) to (g);
\draw [-] (z4) to (g);
\draw [-] (h) to (g);
\end{tikzpicture}
&
\begin{tikzpicture}
[
scale=0.75,
observed/.style={circle,inner sep=0.3mm,draw=black,fill=MyBlue},
hidden/.style={circle,inner sep=0.3mm,draw=black}
]
\node [observed,name=z1] at (-1.2,0.5) {${\mathsmaller X_1}$};
\node [observed,name=z4] at (-1.2,-0.5) {$\mathsmaller X_4$};
\node [observed,name=z2] at (1.2,0.5) {$\mathsmaller X_2$};
\node [observed,name=z3] at (1.2,-0.5) {$\mathsmaller X_3$};
\node [hidden,name=h] at ($(-0.4,0)$) {$\mathsmaller H$};
\node [hidden,name=g] at ($(0.4,0)$) {$\mathsmaller G$};
\draw [-] (z1) to (h);
\draw [-] (z4) to (h);
\draw [-] (z2) to (g);
\draw [-] (z3) to (g);
\draw [-] (h) to (g);
\end{tikzpicture}\\
$\{\{1,2\},\{3,4\}\}$
& $\{\{1,3\},\{2,4\}\}$
& $\{\{1,4\},\{2,3\}\}$
\end{tabular}
\centering
\caption{Three fixed ways to connect $X_1$, $X_2$, $X_3$, $X_4$, with two latent variables $H$ and $G$.}
\label{fig:topologies}
\end{figure}
However, only one of the 3 quartet relations is consistent with $\,\,\Tcal$. The mapping between quartets and the tree topology
$\Tcal$ is captured in the following theorem~\citep{Buneman71}:
\begin{theorem}
\label{th:quartets_and_tree}
The set of all quartet relations $\Qcal_{\Tcal}$ is unique to a latent tree $\Tcal$, and furthermore, $\Tcal$ can be recovered from $\Qcal_{\Tcal}$ in polynomial time.
\end{theorem}
{\bf Quartet-based tree reconstruction.} Motivated by Theorem~\ref{th:quartets_and_tree}, a family of latent tree recovery algorithms has been designed based on resolving quartet relations. These algorithms first determine one of the $3$ ways how $4$ variables are connected, and then join together all quartet relations to form a consistent latent tree. For a model with $d$ observed variables, there are $O(d^4)$ quartet relations in total (taking all possible combinations of $4$ variables). However, we do not necessarily need to resolve all these quartet relations in order to reconstruct the latent tree. A small set of size $O(d\log d)$ will suffice for the tree recovery, which makes quartet based methods efficient even for problems with large $d$~\citep{PeaTar86,Pearl88}. In this paper, we design a new quartet based method. Our main contribution compared to previous approaches is that our method is \emph{agnostic} to the number of hidden states, $k$, which is usually unknown in practice.
\section{Resolving Quartet Relations without Knowing the Number of Hidden States}
In this section, we develop a test for resolving the latent relation of a quartet when the number of hidden states is unknown. Our approach makes use of information from the joint probability table of a quartet, which is a $4$-way table or $4$th order tensor. Suppose that the quartet relation of $4$ variables, $X_1, X_2, X_3$ and $X_4$, is $\{\{1,2\},\{3,4\}\},$ then the entries in this tensor are specified by
\begin{align}
\Pcal&(x_1, x_2, x_3, x_4) =
\sum\nolimits_{h, g} P(x_1 | h) P(x_2 | h) P(h,g) P(x_3|g) P(x_4 | g).
\label{def:P}
\end{align}
This factorization suggests that there exist some low rank structures in the $4$th order tensor.
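To make the factorization concrete, the tensor in (\ref{def:P}) can be assembled with a single \texttt{einsum} call. The following is a self-contained sketch (the sizes $n=4$, $k=2$ and the random tables are illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2  # illustrative sizes: n observed states, k hidden states

def random_cpt(rows, cols):
    # each column is a conditional distribution, e.g. P(x1 | h)
    M = rng.random((rows, cols))
    return M / M.sum(axis=0, keepdims=True)

P1H, P2H = random_cpt(n, k), random_cpt(n, k)
P3G, P4G = random_cpt(n, k), random_cpt(n, k)
PHG = rng.random((k, k))
PHG /= PHG.sum()  # joint table P(h, g)

# P(x1,x2,x3,x4) = sum_{h,g} P(x1|h) P(x2|h) P(h,g) P(x3|g) P(x4|g)
P = np.einsum('ah,bh,hg,cg,dg->abcd', P1H, P2H, PHG, P3G, P4G)
```

Since the CPT columns and the joint table $P(h,g)$ are normalized, the resulting tensor is a valid joint distribution (nonnegative, summing to one).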
To study the rank properties of $\Pcal(X_1,X_2,X_3,X_4)$, we first relate it to the conditional probability tables, $P(X_1|H)$, $P(X_2|H)$, $P(X_3|G)$, $P(X_4|G)$, and the joint probability table, $P(H,G)$ (we abbreviate them as $ P_{1|H}$, $P_{2|H}$, $P_{3|G}$, $P_{4|G}$ and $P_{HG}$, respectively). Using tensor algebra, we have
$${\cal P}(X_1,X_2,X_3,X_4)= \langle\Tcal_1 , \Tcal_2\rangle_3,$$
$$\begin{array}{ll}
\mbox{with} & \Tcal_1 = \Ical_H \times_1 P_{1|H} \times_2 P_{2|H},\\[1mm]
& \Tcal_2 = \Ical_G \times_1 P_{3|G} \times_2 P_{4|G} \times_3 P_{HG},
\end{array}$$
where ${\cal I}_H$ and ${\cal I}_G$ are $3$rd order diagonal tensors of size $k\times k \times k$ with diagonal elements equal to $1$. The multiplication $\times_i$ denotes a tensor-matrix multiplication with respect to the $i$-th dimension of the tensor and the rows of the matrix, and
$\langle\cdot,\cdot\rangle_3$ denotes tensor-tensor multiplication along the third dimension of both tensors\footnote{For formal definitions of tensor notations see appendix, \S\ref{sect:properties}.}. This formula can be schematically understood as Figure~\ref{fig:tensor}.
\begin{figure}[ht]
\centering
\psfrag{1}{\hspace*{-2mm}\begin{footnotesize}$P_{1|H}$\end{footnotesize}}
\psfrag{2}{\hspace*{-3mm}\begin{footnotesize}$P_{2|H}$\end{footnotesize}}
\psfrag{3}{\hspace*{-2mm}\begin{footnotesize}${\cal I}_H$\end{footnotesize}}
\psfrag{5}{\hspace*{-2mm}\begin{footnotesize}$P_{HG}$\end{footnotesize}}
\psfrag{6}{\begin{footnotesize}${\cal I}_G$\end{footnotesize}}
\psfrag{7}{\hspace*{-3mm}\begin{footnotesize}$P_{4|G}$\end{footnotesize}}
\psfrag{8}{\hspace*{-3mm}\begin{footnotesize}$P_{3|G}$\end{footnotesize}}
\includegraphics[width=.3\textwidth]{./GM_tensor_v4a}
\caption{Schematic diagram of the tensor $\Pcal(X_1,X_2,X_3,X_4)$.}
\label{fig:tensor}
\end{figure}
We will start by characterizing the rank properties of ${\cal P}$ and then exploit them to design a quartet test. Although the proposed approach involves unfolding the tensor and subsequent computation at the matrix level, modeling the problem using tensors provides a higher level conceptual understanding of the structure of ${\Pcal}$. The novelty here is the use of low rank tensors for latent structure discovery.
\subsection{Unfolding the $4$th Order Tensor}
Now we consider 3 different reshapings $A,\,B$ and $C$ of the tensor into matrices (``unfoldings''). These unfoldings contain exactly the same entries as $\Pcal$ but in different order. $A$ corresponds to the grouping $\{\{1,2\},\{3,4\}\}$ of the variables, \ie, the rows of $A$ correspond to dimensions $1$ and $2$ of $\Pcal$, and its columns to dimensions $3$ and $4$. $B$ corresponds to the grouping $\{\{1,3\},\{2,4\}\}$, and $C$ to the grouping $\{\{1,4\},\{2,3\}\}$. Using {\sc Matlab}'s notation (see appendix, \S\ref{sect:properties} for further explanation), \newpage\vspace*{-1.2cm}
\begin{align}
\label{def:A} A & = \mbox{reshape}({\Pcal},n^2,n^2);\\
\label{def:B} B & = \mbox{reshape}(\mbox{permute}({\Pcal},[1, 3, 2, 4]),n^2,n^2);\\
\label{def:C} C & = \mbox{reshape}(\mbox{permute}({\Pcal},[1, 4, 2, 3]),n^2,n^2).
\end{align}
Next we present useful characterizations of
$A,\,B$ and $C$, which will be essential for understanding their connection with the latent structure of a quartet. The {\it Kronecker product} of two matrices $M$ and $M'$ is denoted as $M\otimes M'$, and if they have the same number of columns, their {\it Khatri-Rao product} (column-wise Kronecker product), is denoted as $M\odot M'$. Then (see appendix \S\ref{sect:fromP_toABC} for proof),
\begin{lemma}
Assume that $\{\{1,2\},\{3,4\}\}$ is the correct latent structure. The matrices $A$, $B$ and $C$ can be factorized respectively as
(see Figure~\ref{fig:ABcompact}(a) and Figure~\ref{fig:ABcompact}(b) for schematic diagrams)
\begin{align}
\hspace*{-1mm}A &= \big(P_{2|H} \odot P_{1|H}\big)\,\,\, P_{HG} \,\,\, \big(P_{4|G} \odot P_{3|G}\big)^\top, \label{eq:Acompact} \\
\hspace*{-1mm}B &= \big(P_{3|G} \otimes P_{1|H}\big)\,\diag(P_{HG}(:))\,\big(P_{4|G} \otimes P_{2|H}\big)^\top, \label{eq:Bcompact} \\
\hspace*{-1mm}C &= \big(P_{4|G} \otimes P_{1|H}\big)\,\diag(P_{HG}(:))\,\big(P_{3|G} \otimes P_{2|H}\big)^\top. \label{eq:Ccompact}
\end{align}
\label{le:unfolding}
\end{lemma}\vspace*{-5mm}
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\hspace*{-20mm}
\psfrag{0}{\hspace*{-2mm}\begin{footnotesize}$~$\end{footnotesize}}
\psfrag{1}{\hspace*{-2mm}\begin{footnotesize}$P_{2|H}$\end{footnotesize}}
\psfrag{2}{\hspace*{-2mm}\begin{footnotesize}$P_{1|H}$\end{footnotesize}}
\psfrag{4}{\hspace*{-2.6mm}\begin{footnotesize}$P_{HG}$\end{footnotesize}}
\psfrag{5}{\hspace*{-2mm}\begin{footnotesize}$P_{4|G}$\end{footnotesize}}
\psfrag{6}{\hspace*{-2mm}\begin{footnotesize}$P_{3|G}$\end{footnotesize}}
\psfrag{T}{\begin{tiny}$\top$\end{tiny}}
\includegraphics[width=.40\textwidth]{./GM_Acompact_v2}
&
\hspace*{-5mm}
\psfrag{0}{\hspace*{-2mm}\begin{footnotesize}$~$\end{footnotesize}}
\psfrag{1}{\hspace*{-3mm}\begin{footnotesize}$P_{3|G}$\end{footnotesize}}
\psfrag{2}{\hspace*{-3mm}\begin{footnotesize}$P_{1|H}$\end{footnotesize}}
\psfrag{3}{\hspace*{-7mm}\begin{footnotesize}$\mathsmaller{\diag}(P_{HG}(:))$\end{footnotesize}}
\psfrag{5}{\hspace*{-1.5mm}\begin{footnotesize}$P_{4|G}$\end{footnotesize}}
\psfrag{6}{\hspace*{-1mm}\begin{footnotesize}$P_{2|H}$\end{footnotesize}}
\psfrag{T}{\begin{tiny}$\top$\end{tiny}}
\includegraphics[width=.40\textwidth]{./GM_Bcompact_v2} \\
\hspace*{6mm}(a)\hspace*{2mm} \begin{footnotesize}$A$\end{footnotesize}
& \hspace*{4mm}(b)\hspace*{2mm} \begin{footnotesize}$B$\end{footnotesize}
\end{tabular}
\caption{Schematic diagrams of the two unfoldings $A$ and $B$.}
\label{fig:ABcompact}
\end{figure}
The factorization of $A$ is very different from those of $B$ and $C$. First, in $A$, $P_{2|H}\odot P_{1|H}$ is a matrix of size $n^2\times k$, and the columns of $P_{2|H}$ interact only with their corresponding columns in $P_{1|H}$. However, in $B$, $P_{3|G}\otimes P_{1|H}$ is a matrix of size $n^2\times k^2$, and every column of $P_{1|H}$ interacts with every column of $P_{3|G}$ (similarly for $C$). Second, in $A$, the middle factor $P_{HG}$ has size $k\times k$, whereas in $B$, the entries of $P_{HG}$ appear as the diagonal of a matrix of size $k^2\times k^2$ (similarly for $C$). These differences result in different rank properties of $A,\,B$ and $C$, which we will exploit to discover the latent structure of a quartet.
\subsection{Rank Properties of the Unfoldings}
\label{sect:rank_properties}
Under assumption {\bf (A1)} that all CPTs have full column rank, the factorization of $A$, $B$ and $C$ in~\eq{eq:Acompact},~\eq{eq:Bcompact} and~\eq{eq:Ccompact} respectively suggest that (see appendix \S\ref{sect:fromP_toABC} for more details)
\begin{align}
\text{rank}(A) = \text{rank}(P_{HG}) = k
~\leq~ \text{rank}(B) = \text{rank}(C) = \text{nnz}(P_{HG}), \label{eq:rankA_}
\end{align}
where $\text{nnz}(\cdot)$ denotes the number of nonzero elements.
We note that the equality is attained if and only if the relationship between the hidden variables $G$ and $H$
is deterministic, \ie, there is a single nonzero element in each row and in each column
of $P_{HG}$. In this case, the grouping of variables in a quartet can be arbitrary, and we will not consider this case in the paper. More specifically, we have
\begin{theorem}
Assume $P_{HG}$ has only a few zero entries; then
$k \ll k^2 \approx \,\mbox{\textnormal{nnz}}(P_{HG})$
and thus
\begin{align}
{\boxed{
\mbox{\textnormal{rank}}(A) \ll \mbox{\textnormal{rank}}(B) = \mbox{\textnormal{rank}}(C).}
\label{eq:rank_condition}}
\end{align}
\label{th:rank}
\end{theorem}\newpage
The above theorem reveals a useful difference between the correct grouping of variables
and the two incorrect ones. Furthermore, this condition can be easily verified:
given $\Pcal$, we can check the ranks of its matrix representations
$A,\,B$ and $C$ and thus discover the latent structure of the quartet.
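The rank condition of Theorem~\ref{th:rank} is easy to check numerically. The sketch below (our own; the function name is hypothetical) builds a synthetic quartet with the correct grouping $\{\{1,2\},\{3,4\}\}$ and reports the ranks of the three unfoldings; for generic CPTs one observes $\mbox{rank}(A)=k$ and $\mbox{rank}(B)=\mbox{rank}(C)=\mbox{nnz}(P_{HG})$:

```python
import numpy as np

def quartet_ranks(P1H, P2H, P3G, P4G, PHG):
    """Ranks of the three unfoldings when {{1,2},{3,4}} is the
    true grouping.  Expected: rank(A) = rank(P_HG), while
    rank(B) = rank(C) = nnz(P_HG)."""
    n = P1H.shape[0]
    # assemble the joint tensor from the CPTs and the joint P(h, g)
    P = np.einsum('ah,bh,hg,cg,dg->abcd', P1H, P2H, PHG, P3G, P4G)
    A = P.reshape(n * n, n * n, order='F')
    B = np.transpose(P, (0, 2, 1, 3)).reshape(n * n, n * n, order='F')
    C = np.transpose(P, (0, 3, 1, 2)).reshape(n * n, n * n, order='F')
    return tuple(np.linalg.matrix_rank(M) for M in (A, B, C))
```

With random full-column-rank CPTs and a strictly positive $P_{HG}$ (so $\mbox{nnz}(P_{HG})=k^2$), the returned triple is $(k, k^2, k^2)$.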
\subsection{Nuclear Norm Relaxation for the Rank Condition}
In practice, due to sampling noise all unfolding matrices $A,\,B$ and $C$ would
be nearly full rank, so the rank condition cannot be applied directly.
To deal with this, we design a test based on a relaxation of the rank condition using the nuclear norm
\begin{align}
\|M\|_\ast = \sum\nolimits_{i=1}^{n} \sigma_i(M),
\end{align}
which is the sum of all singular values of an $(n\times n)$ matrix $M$. Instead of comparing the ranks of $A,\,B$ and $C$, we look for the one with the smallest nuclear norm and declare the latent structure corresponding to it. This simple quartet algorithm is summarized in Algorithm~\ref{alg:main}.
\begin{algorithm}[htb]
\caption{$i^\ast=$ Quartet($X_1$, $X_2$, $X_3$, $X_4$)}
\begin{algorithmic}[1]
\STATE Estimate $\widehat{\Pcal}(X_1,X_2,X_3,X_4)$ from a set of $m$~\iid~samples $\{(x_1^l, x_2^l, x_3^l, x_4^l)\}_{l=1}^{m}$.\;\\
\STATE Unfold $\widehat{\Pcal}$ in three different ways into matrices $\widehat{A}$, $\widehat{B}$ and $\widehat{C}$, and compute their nuclear norms\\
\hspace*{4mm}$a_1 = \|\widehat{A}\|_\ast,~a_2 = \|\widehat{B}\|_\ast$ and $a_3 = \|\widehat{C}\|_\ast$.\\
\STATE Return $i^\ast = \argmin\nolimits_{i\in\{1,2,3\}} a_i$.
\end{algorithmic}
\label{alg:main}
\end{algorithm}
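A direct implementation of Algorithm~\ref{alg:main} takes only a few lines of numpy (a sketch; the interface is our own). Note that no value of $k$ is supplied anywhere:

```python
import numpy as np

def quartet_test(samples, n):
    """Algorithm 1: return 1, 2 or 3 for the pairings
    {{1,2},{3,4}}, {{1,3},{2,4}}, {{1,4},{2,3}}.

    `samples` is an (m, 4) integer array with entries in {0,...,n-1}.
    """
    # empirical joint probability tensor
    P = np.zeros((n, n, n, n))
    for x1, x2, x3, x4 in samples:
        P[x1, x2, x3, x4] += 1
    P /= len(samples)
    # the three unfoldings and their nuclear norms
    A = P.reshape(n * n, n * n, order='F')
    B = np.transpose(P, (0, 2, 1, 3)).reshape(n * n, n * n, order='F')
    C = np.transpose(P, (0, 3, 1, 2)).reshape(n * n, n * n, order='F')
    nuc = lambda M: np.linalg.norm(M, 'nuc')
    return 1 + int(np.argmin([nuc(A), nuc(B), nuc(C)]))
```

For example, sampling independent hidden bits $H$ and $G$ and observing $X_1=X_2=H$, $X_3=X_4=G$ makes the test return $1$, the grouping $\{\{1,2\},\{3,4\}\}$.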
Note that Algorithm~\ref{alg:main} works even if the number of hidden states, $k$, is a priori unknown.
This is an important advantage over the idea of learning the structure
based on additive distance~\citep{Lake1994}, where $k$ is assumed to be the same as the number of states, $n$, of the observed variables, or over a recent approach based on quartet test~\citep{AnaChaHsuKakSonZha2011}, where $k$ needs to be specified in advance.
In our current context, nuclear norm has a few useful properties.
First, it is the tightest convex lower bound of the rank of a matrix~\citep{FazHinBoy01}.
This is why\footnote{Note that $A$, $B$ and $C$ consist of the same elements so their Frobenius norms are the same, \ie, the $3$ matrices are readily equally ``normalized''.} it is meaningful to compare nuclear norms instead of ranks.
Second, it is easy to compute: a standard singular value decomposition will
do the job. Third, it is stable under sampling noise: the nuclear norm of a probability matrix $\widehat{A}$~estimated from samples is nicely concentrated around its population quantity~\citep{RosBelVit2010}. Given a confidence level $1-2e^{-\tau}$, an estimate based on $m$ samples satisfies
\begin{align}
|& \|A\|_\ast - \|\widehat{A}\|_\ast | =
\abr{\sum\nolimits_i \sigma_i(A) - \sum\nolimits_i \sigma_i(\widehat{A})} \leq 2\sqrt{2\tau}/\sqrt{m}.
\label{eq:samplebound}
\end{align}
Fourth, the nuclear norm can be viewed as a measure of dependence between two pairs of variables. For instance, if $A$ corresponds to grouping $\{\{1,2\},\{3,4\}\}$, $\|A\|_\ast$ measures the dependence between the compound variables $\{X_1,X_2\}$ and $\{X_3,X_4\}$. In the community of kernel methods, $A$ is treated as a cross-covariance operator between $\{X_1,X_2\}$ and $\{X_3,X_4\}$, and its spectrum
has been used to design various dependence measures, such as the Hilbert-Schmidt Independence Criterion, which is the sum of squares of all singular values~\citep{GreBouSmoSch05}, and the kernel constrained covariance, which only takes the largest singular value~\citep{GreHerSmoBouetal05}. Intuitively, our quartet test says that if we group the variables correctly, then the cross-group dependence should be low, since the groups are separated by two latent variables; however, if we group the variables incorrectly, then the cross-group dependence should be high, since similar variables exist in the two groups.
\section{Recovery Conditions and Finite Sample Guarantee for Quartets}
\label{sec:nuclearnormconditions}
Since nuclear norm is just a convex lower bound of the rank, there might be situations where the nuclear norm does not satisfy the same relation as the rank. That is, it might happen that
$\mbox{rank}(A) \leq \mbox{rank}(B)$ but $\|A\|_\ast \geq \|B\|_\ast$. In this section, we present sufficient conditions under which the nuclear norm test returns the correct quartet relation.
{\bf When latent variables $H$ and $G$ are independent}, rank$(P_{HG})=1$,
since $P_{HG} = P_H P_G^\top$ ($P(h,g)=P(h)P(g)$). Let $\{\{1,2\},\{3,4\}\}$ be the correct quartet relation. We can obtain simpler characterizations of the 3 unfoldings of $\Pcal(X_1,X_2,X_3,X_4)$, denoted as $A_{\perp}$, $B_{\perp}$ and $C_{\perp}$ respectively. Using Lemma~\ref{le:unfolding} and the independence of $H$ and $G$, we have (see appendix, (\ref{eq:B_perp_nn})--(\ref{eq:A_perp_nn}))
\begin{equation}
\begin{array}{llcl}
\hspace*{-2mm}A_\perp\hspace*{-2mm}
& = (P_{2|H} \odot P_{1|H})\,\,\, P_H P_G^\top \,\,\, (P_{4|G} \odot P_{3|G})^\top\\[1mm]
& = P_{12}(:)~P_{34}(:)^\top, \\[2mm]
\hspace*{-2mm}B_\perp\hspace*{-2mm}
& = (P_{3|G} \otimes P_{1|H}) ({\mathsmaller\diag}(P_G) \otimes {\mathsmaller\diag}(P_H)) (P_{4|G}\otimes P_{2|H})^\top\\[1mm]
& = P_{34} \otimes P_{12},
\end{array}
\label{eq:B_perp}
\end{equation}
and $\mbox{rank}(A_{\perp})=1 \ll \rank(B_{\perp})$ which is consistent with Theorem~\ref{th:rank}. Furthermore, since $A_{\perp}$ has only one nonzero singular value, we have $\|A_{\perp}\|_\ast = \|A_{\perp}\|_F = \|B_{\perp}\|_F \leq \|B_{\perp}\|_\ast$ (using $\|M\|_F \leq \|M\|_\ast$ for any matrix $M$). Similarly, $C_\perp=P_{43} \otimes P_{12}$ and $\|A_{\perp}\|_\ast \leq \|C_{\perp}\|_\ast$. Then we know for sure that the nuclear norm quartet test will return the correct topology.
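The chain of (in)equalities above is easy to verify numerically. The sketch below instantiates equation~(\ref{eq:B_perp}) for arbitrary pairwise marginals $P_{12}$ and $P_{34}$ (random tables here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
P12 = rng.random((n, n)); P12 /= P12.sum()  # pairwise marginal of {X1, X2}
P34 = rng.random((n, n)); P34 /= P34.sum()  # pairwise marginal of {X3, X4}

# Independent H and G: A = vec(P12) vec(P34)^T and B = P34 (x) P12,
# where vec(.) stacks columns (Matlab's colon operator).
A = np.outer(P12.flatten(order='F'), P34.flatten(order='F'))
B = np.kron(P34, P12)

nuc = lambda M: np.linalg.norm(M, 'nuc')
fro = np.linalg.norm
assert np.isclose(nuc(A), fro(A))  # A is rank one
assert np.isclose(fro(A), fro(B))  # same entries, rearranged
assert nuc(A) <= nuc(B) + 1e-12    # hence the test picks A
```

The first assertion holds because a rank-one matrix has a single nonzero singular value; the second because $A$ and $B$ contain exactly the same entries.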
{\bf When latent variables $H$ and $G$ are not independent}, we treat this as a perturbation $\Delta$ away from the independent case,~\ie,~$\widetilde{P}_{HG} = P_H P_G^\top + \Delta$. The size of $\Delta$ quantifies the strength of dependence between $H$ and $G$. Obviously, when $\Delta$ is small,~\eg,~$\Delta=\zero$, we are back to the independent case and it is easy to discover the correct quartet relation; when it is large,~\eg,~$\Delta = I - P_H P_G^\top$, $H$ and $G$ are deterministically related and the different groupings are indistinguishable. The question is how large $\Delta$ can be while still allowing the nuclear norm quartet test to find the correct latent relation.
First, we require {\bf (A3)} $\Delta \one = \zero$, and $\Delta^\top \one = \zero$, where $\one$ and $\zero$ are vectors of all ones and all zeros. Such perturbation $\Delta$ keeps the marginal distributions $P_H$ and $P_G$ as in the independent case, since $\widetilde{P}_H=\widetilde{P}_{HG} \one = P_H P_G^\top \one + \Delta \one = P_H$. Assuming $\{\{1,2\},\{3,4\}\}$ is the correct quartet relation, $\Delta$ also keeps the pairwise marginal distribution $P_{12}$ as in the independent case, since $P_{12} = P_{1|H} \diag(P_H) P_{2|H}^\top$ and the marginal $P_H$ is the same before and after the perturbation. Similar reasoning also applies to $P_{34}=P_{3|G} \diag(P_G) P_{4|G}^\top$.
We define \emph{excessive dependence} of the correct and incorrect groupings as
$$\theta := \min \{\|B_\perp\|_\ast - \|A_\perp\|_\ast,~\|C_\perp\|_\ast - \|A_\perp\|_\ast\}.$$
It quantifies the changes in dependence when we switch from incorrect groupings to the correct one
(in the case when $H$ and $G$ are independent). Note that $\theta$ is measured only from pairwise marginals (\ref{eq:B_perp}), $P_{12}$ and $P_{34}$. Using matrix perturbation analysis we can show that (see appendix $\S$\ref{sect:perturbation} for proof)
\begin{lemma}
\label{le:deltacondition}
If $\nbr{\Delta}_F \leq \frac{\theta}{{k^2} +k}$, then Algorithm~\ref{alg:main} returns the correct quartet relation.
\end{lemma}
Thus, if the excessive dependence $\theta$ is large compared to the number of hidden states, the size of the allowable perturbation can be correspondingly larger. In other words, if the dependence between variables within the same group is strong enough compared to the dependence across groups, we allow for larger $\Delta$ and stronger dependence between hidden variables $H$ and $G$ (which is closer to the indistinguishable case).
Then under the recovery condition in Lemma~\ref{le:deltacondition}, and given $m$~\iid~observations, we can obtain the following guarantee for the quartet test (see appendix, $\S$\ref{app:stat:quartet} for proof). Let $ \alpha = \min \cbr{\|B\|_\ast - \|A\|_\ast, \|C\|_\ast - \|A\|_\ast}$.
\begin{lemma}
\label{le:quartetsuccess}
With probability $1-8 e^{-\frac{1}{32}m\alpha^2}$, Algorithm~\ref{alg:main} returns the correct quartet relation.
\end{lemma}
\section{Building Latent Tree from Quartets}
{\bf Algorithm.} We can use the resolved quartet relations (Algorithm~\ref{alg:main}) to discover the structure of the entire tree
via an incremental divide-and-conquer algorithm~\citep{PeaTar86,Pearl88}, summarized in Algorithm~\ref{alg:buildtree}
(further details in appendix \S\ref{app:build_tree}).
Joining variable $X_{i+1}$ to the current tree of $i$ leaves can be done with
$O(\log i)$ tests. This amounts to performing
$O(d\log d)$ quartet tests for building an entire tree of $d$ leaves, which is efficient even if $d$ is large.
Moreover, as shown in~\citep{PeaTar86}, this algorithm is consistent.
\begin{algorithm}[ht]
\caption{${\Tcal}$ = BuildTree$(X_1,\ldots, X_d)$}
\begin{algorithmic}[1]
\STATE Connect any $4$ variables $X_1$, $X_2$, $X_3$, $X_4$ with $2$ latent variables in a tree $\Tcal$ using Algorithm~\ref{alg:main}.
\FOR[insert $\mathsmaller{(i+1)}$-th leaf $X_{i+1}$]{$i=4,5,\ldots,d-1$}
\STATE Choose root $R$ that splits $\Tcal$ into sub-trees $\Tcal_1,\Tcal_2,\Tcal_3$ of roughly equal size.
\STATE Choose any triplet $(X_{i_1},X_{i_2},X_{i_3})$ of leaves from different sub-trees.
\STATE Test which sub-tree $X_{i+1}$ should be joined to:\\
$i^\ast \leftarrow$ Quartet($X_{i+1},X_{i_1},X_{i_2},X_{i_3}$).
\STATE Repeat recursively from step 3 with ${\Tcal} := {\Tcal}_{i^\ast}$.\\
This will eventually reduce to a tree with a single leaf. Join $X_{i+1}$ to it via hidden variable.
\ENDFOR
\end{algorithmic}
\label{alg:buildtree}
\end{algorithm}
{\bf Tree recovery conditions and guarantees.} How will the quartet recovery conditions translate to recovery conditions for the entire tree,
where each ``edge'' of a quartet is a path in the tree? What are the finite sample guarantees for the divide-and-conquer algorithm?
When a quartet is taken from a latent tree, each edge of the quartet corresponds to a path in the tree involving a chain of variables (Figure~\ref{fig:topologies}). We need to bound the perturbation to each single edge of the tree such that joint path perturbations
satisfy edge perturbation conditions from Lemma~\ref{le:deltacondition}. For a quartet $q=\{\{i_1,i_2\},\{i_3,i_4\}\}$ corresponding to a single edge between $H$ and $G$, denote the excessive dependence by $\theta_q$.
By adding perturbation $\Delta_q$ of size smaller than $\frac{\theta_q}{k^2+k}$ to $P_H P_G^\top$ we can still correctly recover $q$. Let $\theta_{\min}:=\min_{\text{quartet}~ q}\theta_q$. If we require $\|\Delta_q\|_F \leq \frac{\theta_{\min}} {k^2+k}$, all such quartet relations will be recovered successfully. If we further restrict the size of the perturbation by
the smallest value in a marginal probability distribution of a hidden variable, $\gamma_{\min}:=\min_{\text{hidden node}~H} \min_{i=1\ldots k} P_H(i)$, we can guarantee that all quartet relations corresponding to a path between $H$ and $G$ can also be successfully recovered by the nuclear norm test (see appendix \S\ref{app:recovery_tree}). Therefore, we assume that {\bf (A4)} $\nbr{\Delta_q}_F \leq \min\{\frac{\theta_{\min}}{{k^2} +k},\gamma_{\min}\}$ for all quartets $q$ in a tree.
\begin{theorem}
\label{th:treecondition}
Algorithm~\ref{alg:buildtree} returns the correct tree topology under assumptions {\bf (A1)--(A4)}.
\end{theorem}
The recovery conditions guarantee that all quartet relations can be resolved correctly and simultaneously. Then a consistent algorithm using a subset of the quartet relations should return the correct tree structure. Given $m$~\iid~samples, we have the following statistical guarantee for the tree building algorithm (see appendix, $\S$\ref{app:stat:tree} for proof). Let $\alpha_{\min}:=\min_{\text{quartet}~q}\alpha_q$.
\begin{theorem}
With probability $1-8\cdot c\cdot d\log d \cdot e^{-\frac{1}{32}m\alpha_{\min}^2}$,
Algorithm~\ref{alg:buildtree} recovers the correct tree topology for a constant $c$ under assumptions {\bf (A1)--(A4)} .
\end{theorem}
We note that there are better quartet-based algorithms for building latent trees with stronger statistical guarantees,~\eg~\citep{ErdSzeSteWar99b}. We can adapt our nuclear norm based quartet test to those algorithms as well. However, this is not the main focus of the paper. We choose the divide-and-conquer algorithm for its simplicity and ease of analysis, and because it illustrates well how our quartet recovery guarantee can be translated into a tree building guarantee.
\section{Experiments}
\label{sec:experiments}
\newcommand{{NJ}}{{NJ}}
\newcommand{{Spectral@$k$}}{{Spectral@$k$}}
We compared our algorithm with representative algorithms: the neighbor-joining algorithm ({NJ}) \citep{SaiNei87}, a quartet based algorithm of~\citet{AnaChaHsuKakSonZha2011} ({{Spectral@$k$}}), the Chow-Liu neighbor Joining algorithm (CLNJ)~\citep{Choi11}, and an algorithm of~\citet{HarWil10}~ (HW).
{{NJ}}~proceeds by recursively joining two variables that are closest according to an additive distance defined as $
d_{ij} = \smallfrac{1}{2} \log \det \diag P_i - \log |\det P_{ij}| + \smallfrac{1}{2} \log \det \diag P_j,
$
where ``det'' denotes determinant, ``diag'' is a diagonalization operator, $P_{ij}$ denotes the joint probability table $P(X_i,X_j)$, and $P_i$ and $P_j$ denote the probability vectors $P(X_i)$ and $P(X_j)$, respectively~\citep{Lake1994}. When $P_{ij}$ has rank $k < n$, $\log |\det P_{ij}|$ is not defined, and~{NJ}~can perform poorly. {{Spectral@$k$}}~uses the singular values of $P_{ij}$ to design a quartet test~\citep{AnaChaHsuKakSonZha2011}. For instance, if the true quartet configuration is $\{\{1,2\},\{3,4\}\}$ as in Figure~\ref{fig:topologies}, then the quartet needs to satisfy
$\prod\nolimits_{s=1}^k \sigma_s(P_{1 2}) \sigma_s(P_{34})>\max\{\prod\nolimits_{s=1}^k \sigma_s(P_{1 3}) \sigma_s(P_{2 4}),~
\prod\nolimits_{s=1}^k \sigma_s(P_{1 4}) \sigma_s(P_{2 3})\}
$. Based on this relation, a confidence-interval-based quartet test is designed and used as a subroutine for a tree reconstruction algorithm. {{Spectral@$k$}} can handle cases with $k < n$, but still requires $k$ as an input. We show in later experiments that its performance is sensitive to the choice of $k$.~CLNJ first applies the Chow-Liu algorithm~\citep{ChowLiu68} to obtain a fully observed tree and then proceeds by adding latent variables using the neighbor-joining algorithm. The HW algorithm is a greedy algorithm that learns binary trees by iteratively joining two nodes with high mutual information. The number of hidden states is automatically determined by the HW algorithm and can differ across latent variables.
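The additive distance used by {NJ}{} can be computed directly from the empirical tables. A small sketch (the function name is ours) also makes the rank-deficiency failure mode explicit:

```python
import numpy as np

def nj_distance(Pij, Pi, Pj):
    """Additive tree distance used by neighbor joining (Lake, 1994):
    d_ij = 0.5 log det diag(P_i) - log |det P_ij| + 0.5 log det diag(P_j).
    Pij is the joint probability table; Pi, Pj are the marginal vectors.
    The distance diverges when P_ij is rank deficient (det P_ij = 0)."""
    return (0.5 * np.log(np.linalg.det(np.diag(Pi)))
            - np.log(abs(np.linalg.det(Pij)))
            + 0.5 * np.log(np.linalg.det(np.diag(Pj))))
```

For two perfectly correlated variables, $P_{ij}=\mathrm{diag}(p)$ and the distance is zero; for a rank-deficient joint table, $\det P_{ij}$ vanishes and the distance blows up, which is precisely the regime where {NJ}{} performs poorly.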
\subsection{Resolving Quartet Relations}
We compared our method to NJ and Spectral@$k$ in terms of their ability to recover the quartet relation among four variables. We used quartets with three different configurations for the numbers of hidden states: (1) $k_H=2$ and $k_G=4$ (small difference); (2) $k_H=2$, $k_G=8$ (large difference); and (3) $k_H=4$, $k_G=4$ (no difference). In all cases, the number of states of the observed variables was fixed to $n=10$. In all cases we started from an independent $P_{HG}$ and identity $P_{X_i|H}$ and $P_{X_i|G}$, and perturbed them using the following formula
$
P(a=i|b) = \frac{P(a=i|b) + u_i}{\sum_i P(a=i|b) + u_i},
$
where all $u_i$ are~\iid~random variables drawn from $\text{Uniform}[0,\mu]$. We then drew random samples from the quartet according to these CPTs. We studied the percentage of correctly recovered quartet relations as we varied the sample size across $S=\{50,$ $100,$ $200,$ $300, 400, 500, 750, 1000, 1500, 2000\}$ and under two different levels of perturbation ($\mu = \{0.5,1\}$). We randomly initialized each experiment 1000 times, and report the average quartet recovery performance and the standard error in Figure~\ref{fig:quartet}.
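The CPT perturbation scheme can be written compactly. In the sketch below (function name ours) we read the formula as adding one noise draw $u_i$ per state of $a$ and renormalizing each conditional distribution; this reading of the indexing is our assumption:

```python
import numpy as np

def perturb_cpt(P, mu, rng):
    """Perturb a conditional probability table whose columns P(a | b)
    each sum to 1: add Uniform[0, mu] row noise u_i and renormalize
    every column, as in the perturbation formula above."""
    u = rng.uniform(0.0, mu, size=P.shape[0])   # one u_i per state of a
    Q = P + u[:, None]
    return Q / Q.sum(axis=0, keepdims=True)
```

Starting from identity CPTs, this smears probability mass off the diagonal while keeping each column a valid distribution.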
\begin{figure*}[t!]
\centering
\subfigure[$k=\{2,4\}, \mu=0.5$]{\label{fig:quartet:a}\includegraphics[width=.25\textwidth]{./experiment2_nlevel0_5_Kh2_Kg4_Ko_10_10_10_10}}
\subfigure[$k=\{2,8\}, \mu=0.5$]{\label{fig:quartet:b}\includegraphics[width=.25\textwidth]{./experiment2_nlevel0_5_Kh2_Kg8_Ko_10_10_10_10}}
\subfigure[$k =\{4,4\}, \mu=0.5$]{\label{fig:quartet:c}\includegraphics[width=.25\textwidth]{./experiment2_nlevel0_5_Kh4_Kg4_Ko_10_10_10_10}}
\\
\subfigure[$k=\{2,4\}, \mu=1$]{\label{fig:quartet:d}\includegraphics[width=.25\textwidth]{./experiment2_nlevel1_Kh2_Kg4_Ko_10_10_10_10}}
\subfigure[$k=\{2,8\}, \mu=1$]{\label{fig:quartet:e}\includegraphics[width=.25\textwidth]{./experiment2_nlevel1_Kh2_Kg8_Ko_10_10_10_10}}
\subfigure[$k=\{4,4\}, \mu=1$]{\label{fig:quartet:f}\includegraphics[width=.25\textwidth]{./experiment2_nlevel1_Kh4_Kg4_Ko_10_10_10_10}}
\\%\vspace*{-4mm}
\subfigure[$\mu=0.2, \beta=0.5$]{\label{fig:tree:d}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo0_2_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_5_quartet2quartet}}
\subfigure[$\mu=0.5, \beta=0.5$]{\label{fig:tree:e}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo0_5_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_5_quartet2quartet}}
\subfigure[$\mu=1, \beta=0.5$]{\label{fig:tree:f}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo1_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_5_quartet2quartet}} \\
\subfigure[$\mu=0.2, \beta=0.2$]{\label{fig:tree:a}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo0_2_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_2_quartet2quartet}}
\subfigure[$\mu=0.5, \beta=0.2$]{\label{fig:tree:b}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo0_5_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_2_quartet2quartet}}
\subfigure[$\mu=1, \beta=0.2$]{\label{fig:tree:c}\includegraphics[width=.25\textwidth]{./tree_nobserved16_nlevelo1_nlevelh0_2_Khlow2_Khhigh8_Ko10_splitprob0_2_quartet2quartet}}
\caption{(a)-(f) Quartet recovery results. (g)-(l) Tree recovery results. ``tensor'' is our method.}
\label{fig:quartet}
\vspace{-3mm}
\end{figure*}
The proposed method compares favorably to NJ and Spectral@$k$. The performance of Spectral@$k$ varies considerably depending on the chosen number of singular values $k$. Our method is free from tuning parameters and is often among the top performers. Especially when the numbers of hidden states are very different from each other ($k_H=2$ and $k_G=8$), our method leads the second best by a large margin~(Figures~\ref{fig:quartet:b} and~\ref{fig:quartet:e}). When both hidden variables have the same number of states ($k_H=k_G=4$), Spectral@$k$ achieves the best performance when the chosen number of singular values $k$ equals $k_H$. Note that allowing Spectral@$k$ to try different values of $k$ resembles using cross validation to find the best $k$; this is expensive, while our approach performs almost indistinguishably from Spectral@$k$ even when the latter uses the best $k$.
\subsection{Discovering Latent Tree Structure}
We used different tree topologies and sample sizes in this experiment. We generated tree topologies by randomly splitting 16 observed variables recursively into two groups. The recursive splitting stops when there are only two nodes left in a group. We introduced a hidden variable to join the two partitions in each recursion, and this gives a latent tree structure. The topology of the tree is controlled by a single splitting parameter $\beta$, which controls the relative size of the first partition versus the second. If $\beta$ is close to $0$ or $1$, we obtain trees of skewed shape, with long paths of hidden variables. If $\beta$ is close to $0.5$, the resulting latent trees are more balanced. In our experiments, we used skewed latent trees ($\beta=0.2$) and balanced trees ($\beta = 0.5$). We first generated a different random $k$ between $2$ and $8$ for each hidden variable, and then generated the probability models for each tree using the same scheme as in our previous experiment. Here we experimented with perturbation levels $\mu=\{0.2, 0.5, 1\}$.
We varied the sample size across $S=\{50, 100, 200, 500,$ $1000,$ $2000\}$, and measured the error of the constructed tree using Robinson-Foulds metric~\citep{RobFou1981}. This measure is a metric over trees of the same number of leaves. It is defined as $(a + b)$ where $a$ is the number of partitions of variables implied by the learned tree but not by the true tree and $b$ is the number of partitions of the variables implied by the true tree but not by the learned tree (in a sense similar to precision and recall score).
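Once each tree is summarized by its set of bipartitions (one per internal edge), the Robinson-Foulds metric is straightforward to compute. A minimal sketch (the representation and function name are our choices):

```python
def rf_distance(splits1, splits2, leaves):
    """Robinson-Foulds distance between two unrooted trees on the same
    leaf set.  Each tree is given as a collection of bipartitions, each
    bipartition being the frozenset of leaves on one side of an internal
    edge.  Returns a + b: splits in tree 1 not in tree 2, plus splits in
    tree 2 not in tree 1."""
    ref = min(leaves)   # canonicalize: always keep the side NOT containing ref
    def canon(splits):
        return {frozenset(leaves - s) if ref in s else frozenset(s)
                for s in splits}
    a, b = canon(splits1), canon(splits2)
    return len(a - b) + len(b - a)
```

Canonicalizing each split to the side not containing a fixed reference leaf ensures that the two equivalent encodings of the same bipartition compare equal.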
The tree recovery results are shown in Figures~\ref{fig:tree:d}-\ref{fig:tree:c}. Again we can see that our proposed method compares favorably to existing algorithms. Across all six experimental conditions, the tensor approach and Spectral@2 performed best given sufficiently large sample sizes. Note that we tried different values of $k$ for Spectral@$k$, which resembles using cross validation to find the best $k$. Even in this case, our approach performs comparably without having to know $k$.
The Harmeling-Williams algorithm performed well at small sample sizes, while CLNJ did not perform well under these experimental conditions.
\subsection{Understanding Latent Relations between Stocks}
We applied our algorithm to discover a latent tree structure from a stock dataset. Our goal is to understand how stock prices $X_i$ are related to each other. We acquired closing prices of 59 stocks from 1984 to 2011 (from www.finance.yahoo.com), which provides 6800 samples. The daily change of each stock price is discretized into 10 values, and we applied our algorithm to build a latent tree. A visualization of the learned tree topology and discovered groupings is shown in Figure~\ref{fig:stock}.
\begin{figure*}[htb]
\centering
\includegraphics[width=1\textwidth]{./stocktree-crop}
\caption{Latent tree estimated from stock data.}
\label{fig:stock}
\end{figure*}
We see nice groupings of stocks according to their industrial sectors. For instance, companies related to petroleum, such as CVX (Chevron), XOM (Exxon Mobil), APA (Apache), COP (ConocoPhillips), SLB (Schlumberger) and SUN (Sunoco), are grouped into a subtree. Pharmaceutical companies, such as MRK (Merck), PFE (Pfizer), BMY (Bristol Myers Squibb), LLY (Eli Lilly), ABT (Abbott Laboratories), JNJ (Johnson and Johnson) and BAX (Baxter International), are all grouped into a subtree. High-tech companies, such as AMD, MOT (Motorola), HPQ (Hewlett-Packard), and IBM, are grouped into another subtree. There are also a subtree for retailers, such as TGT (Target), WMT (Wal-Mart), and RSH (RadioShack); a subtree for utility service companies, such as DUK (Duke Energy), ED (Consolidated Edison), EIX (Edison), ECX (Exelon), and VZ (Verizon); and a subtree for financial companies, such as C (Citigroup), JPM (JPMorgan Chase), and AXP (American Express).
An interesting observation is that F (Ford Motor), which is well known for its car manufacturing, is also placed in the same branch as these financial companies. This seemingly abnormal structure can be explained by the fact that Ford Motor operates under two segments: Automotive and Financial Services.
Its financial services include the operations of Ford Motor Credit Company and other financial services including holding companies, and real estate. In this respect, it is quite interesting that our algorithm discovered this hidden information.
We also compared different algorithms in terms of held-out likelihood. We first randomized the data 10 times, and each time used half for training and half for computing the held-out likelihood. Then we estimated the latent binary tree structures using different algorithms. Finally, we fit latent variable models to the discovered structures.
The number of the states for all hidden variables, $k$, were the same in each latent variable model. We experimented with $k = 2, 4, 6, 8,10$ to simulate the process of using cross validation to select the best $k$. The results are presented in Table~\ref{tab}.
\begin{table*}[ht]
\setlength{\extrarowheight}{5pt}
\centering
\caption{Negative log-likelihood ($\times 10^5$) on test data. The smaller the number, the better the method.}
\begin{small}
\begin{tabular}{|c| c| c| c| c| c| c|}
\hline
& Tensor & Spectral@$k$ & Choi (CLNJ) & Neighbor-joining & Harmeling & Chow-Liu\\
\hline
$k = 2$ & $4.41$ & $4.44$ & $4.43$ & $4.43$
&\multirow{5}{*}{$4.31$} & \multirow{5}{*}{$4.41$}\\
\cline{1-5}
$k = 4$ & $4.30$ & $4.35$ & $4.33$ & $ 4.33$ & &\\
\cline{1-5}
$k = 6$ & $4.28$ & $4.35$ & $4.32$ & $4.31$ & &\\
\cline{1-5}
$k = 8$ & $\mathbf{4.28}$ & $4.35$ & $4.32$ & $4.31$ & &\\
\cline{1-5}
$k = 10$ & $4.29$ & $4.37$ & $4.32$ & $4.31$ & &\\
\hline
\end{tabular}
\end{small}
\label{tab}
\end{table*}
Note that the Harmeling-Williams algorithm automatically discovers $k$, so it does not use the experimental parameter $k$.
The Chow-Liu tree does not contain any hidden variables, hence only one number appears in the table.
CLNJ and neighbor-joining assume that the hidden and observed variables have the same number of states during structure learning. However, in parameter fitting, we can still use a different number of hidden states $k$. In this experiment, the structure produced by our tensor approach yielded the best held-out likelihood.
\section{Conclusion}
In this paper, we propose a quartet-based method for discovering the tree structures of latent variable models. The practical advantage of the new method is that we do not need to pre-specify the number of the hidden states, a quantity usually unknown in practice. The key idea is to view
the joint probability tables of quadruple of variables as $4$th order tensors and then use the
spectral properties of the unfolded tensors to design a quartet test. We provide conditions under which the algorithm is consistent and its error probability decays exponentially with increasing sample size. On both simulated and real datasets, we demonstrated the usefulness of our method for discovering latent structures. While in this study we focus on the properties of the $4$th order tensor and its various unfoldings, we believe that properties of tensors and methods and algorithms from multilinear algebra will make it possible to address many other problems arising from latent variable models.
\section{Introduction}
\label{sec-int}
One of the most important factors supporting progress in the miniaturization of
computers and other electronic devices is the continued
exponential increase in the density of data storage.\cite{MCDA05}
Currently, designs are being considered for magnetic recording devices that
have areal data densities of the order of terabits/cm$^2$ -- several orders
of magnitude more than only a decade ago. At
such densities, the size of the recording bit approaches the
superparamagnetic limit, where thermal fluctuations seriously degrade the
stability of the magnetization.\cite{BEAN59,RICH94} However,
current industry standards demand that bits should retain 95\% of their
magnetization over a period of ten years.\cite{MCDA05} Furthermore,
subnanosecond magnetization-switching times are required to achieve acceptable
read/write rates.
One suggested method to fulfill these requirements is to use ultrathin,
perpendicularly magnetized films of very high-coercivity materials,
such as FePt (coercive field about 50~kOe), or single-particle bits that are
expected to have even higher coercivities.\cite{MCDA05}
However, such high coercive fields at room temperature
are beyond what is achievable by modern write heads,
which are limited to about 17~kOe.\cite{MATS06}
A method suggested to overcome this problem is to exploit the temperature
dependence of the coercivity
through heat-assisted magnetization reversal, or HAMR (also known as thermally assisted
magnetization reversal or
TAMR).\cite{MCDA05,MATS06,SAGA99,KATA00,PURN07,WASE08,PURN09,CHAL09,STIP10,OCON10}
This is accomplished by increasing the temperature of the recording area
to a value close to, or above, the Curie temperature of the medium via a
localized heat source, such as a
laser.\cite{MATS06,KATA00,WASE08,CHAL09,STIP10,OCON10} Due to
the temperature dependence of the coercivity, the magnitude of the required
switching field is lowered at the elevated temperature, relaxing the
requirements for the write head.
An important consideration for the implementation of the HAMR technique is to
keep the heat input as low and as tightly focused as possible,
limiting energy transfer to neighboring recording bits. In order to reach the
desired high data densities, the laser spot must have a diameter less than
50~nm, much smaller than the wavelength.
This can be achieved using near-field optics, a technology which
currently is the objective of vigorous
research and development.\cite{MATS06,CHAL09,STIP10,OCON10}
Despite their simplicity, two-dimensional kinetic Ising models have been
shown to be useful for studying magnetization switching
in ultrathin films with strong anisotropy.\cite{RICH94}
Theoretical \cite{BAND88} and experimental \cite{BACK95}
work has shown that the equilibrium phase transition in such films
belongs to the universality class of the two-dimensional Ising model.
The dynamics of magnetization switching in ultrathin, perpendicularly
magnetized films has been studied using magneto-optical microscopies in
combination with Monte Carlo simulations of Ising-like models by,
among others, Kirilyuk et al. \cite{KIRI97} and Robb et al.\cite{ROBB08}
Systems that have been found to have strong Ising character include
Fe sesquilayers\cite{BACK95} and ultrathin films of Co,\cite{KIRI97}
Co/Pd,\cite{CARC85,PURN09} and Co/Pt.\cite{ROBB08,FERR02}
The strong anisotropy in
such systems limits the effects of transverse spin dynamics and ensures
that local spin reversals are thermally activated. The extreme thinness of the
films strongly reduces the demagnetization effects to which films with
out-of-plane magnetization are otherwise
subject.\cite{BAND88,BACK95,ROBB08}
For detailed reviews of experimental and simulational studies of magnetization
switching in ultrathin films with perpendicular magnetization, see Refs.\
\onlinecite{FERR02,LYBE00}.
In the present paper we use a two-dimensional Ising ferromagnet to model
the HAMR process by kinetic Monte Carlo (MC) simulation,
demonstrating enhanced nucleation of the switched magnetization state in the
heated area.
For simplicity and computational economy, we envisage an experimental setup
slightly different from others previously reported in the
literature.\cite{SAGA99,MATS06,WASE08,PURN09}
It most closely resembles the optical-dominant setup shown in Fig.~1(b) of
Ref.~\onlinecite{MATS06}.
The recording medium is placed in a constant
write field that is too weak to cause significant switching on an acceptable
time scale, and it is heated at its center by a transient heat pulse.
At a fixed superheating temperature we show that the
relative speed-up of the magnetization switching, compared to the
constant-temperature case, depends
nonmonotonically on the magnitude of the applied field. This relative
speed-up shows a pronounced
maximum at an intermediate value of the applied field.
We give a physical explanation for this
effect, based on the nucleation theory of magnetization switching
in finite-sized systems.\cite{RICH94,RICH95C,RIKV94A}
As magnetization switching is a special case of the decay of a metastable
phase (i.e., the medium in its state of magnetization opposite to the
applied field),\cite{RIKV94A,RIKV94}
this analysis is of general physical interest
beyond the specific technological application discussed here.
The rest of this paper is organized as follows.
Our model and methods are described in
Sec.~\ref{sec-mod}, the numerical results are described
and explained in Sec.~\ref{sec-res},
and our conclusions are stated in Sec.~\ref{sec-conc}.
\section{Model and Methods}
\label{sec-mod}
We use a square-lattice, nearest-neighbor
Ising ferromagnet with energy given by the Hamiltonian,
\begin{equation}
\mathcal{H} =
-J \sum_{\langle i,j \rangle} s_{i}s_{j} - H \sum_{i} s_{i} \;.
\label{eq:ham}
\end{equation}
Here, $s_i = \pm1$, $J>0$
is the strength of the spin interactions, and the first
sum runs over all nearest-neighbor pairs. For convenience we hereafter set
$J=1$. In the second term, which represents the Zeeman energy, $H$ is
proportional to a uniform external magnetic field, and the sum runs over all
lattice sites. We use a lattice of size $L^{2}=128\times128$, with periodic
boundary conditions.
The length unit used in this study is the computational lattice constant,
which should correspond to a few nanometers.
For simplicity, our model does not
include any explicit randomness, such as impurities or random interaction
strengths. As a result, pinning of
interfaces for very weak applied field,\cite{KIRI97,ROBB08}
as well as heterogeneous nucleation of spin reversal\cite{KIRI97}
are neglected. We further exclude
demagnetizing effects, which are very weak for
ultrathin films\cite{BAND88,BACK95,ROBB08}
and thus cause no qualitative
changes in Monte Carlo simulations of the switching process.\cite{RICH95C}
The stochastic spin dynamics are given by the single-spin-flip
Metropolis algorithm with transition probability\cite{METR53}
\begin{equation}
P(s_i \rightarrow -s_i) = \min[1, \exp(- \Delta E / T)] \;,
\label{eq:prob}
\end{equation}
where $\Delta E$ is the energy change that would result from
acceptance of the proposed
spin flip. The temperature, $T$, is given in energy units (i.e., Boltzmann's
constant is taken as unity). Updates are attempted for randomly chosen spins,
and $L^2$ attempts constitute one MC step per spin (MCSS), which is the time
unit used in this work. (We note that the Metropolis algorithm is not the
only Monte Carlo dynamics that could be used here. We have chosen it
because of its simplicity and ubiquity in the literature since we do not
expect that the inclusion of complications such
intrinsic barriers to single-spin flips would have significant effects
at this high temperature beyond a renormalization of the overall timescale.)
Following this algorithm and starting from $s_i=-1$ for all $i$,
we equilibrate the system over $4\times10^{4}$~MCSS at $H=0$
and temperature $T_0 = 0.8T_c \approx 1.82$,
where $T_c = 2/ \ln (1+\sqrt{2}) = 2.269...$ is the
exact critical temperature for the square-lattice Ising model.\cite{ONSA44}
Having achieved equilibrium with negative magnetization at zero field,
we then subject the system to a constant, uniform, positive magnetic
field, along with a transient heat pulse.
To simulate the heat pulse, we use a temperature profile given by
a time-dependent, Gaussian solution of a one-dimensional diffusion equation.
The profile is centered on the
mid-line of the Ising lattice, $\bar{x}=63.5$, and each spin in the $x$th column
of the lattice has the temperature
\begin{equation}
T(x,t) = T_0 + 0.3T_c\frac{t_0}{t+t_0}\exp\left( -\frac{(x -
\bar{x})^2}{4k(t+t_0)}\right), \;\; t \ge 0 \;.
\label{eq:temp}
\end{equation}
Here, $0.3T_c$ is the maximum
of the temperature pulse, which is attained at $t=0$. Therefore, the peak
temperature is $T_0 + 0.3T_c = 1.1T_c$.
The parameter $k$ is the thermal diffusivity,
which is also set to unity for convenience. The time $t_0 = \sigma^2/2k$
is related to the duration of
the heat-input process, such that $\sigma$ is the standard deviation that
governs the width of the temperature profile at $t=0$.\cite{THOM09B}
Here we use $\sigma=6$ for all simulations.
(Eq.~(\ref{eq:temp}) most likely underestimates the speed of
decay of the temperature pulse as it ignores heat conduction into the
substrate.)
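For reference, Eq.~(\ref{eq:temp}) with the parameter values used here can be written as a short sketch (function name ours):

```python
import numpy as np

Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))   # exact Onsager critical temperature
T0 = 0.8 * Tc                           # base temperature of the lattice

def temperature(x, t, xbar=63.5, k=1.0, sigma=6.0):
    """Column temperature T(x, t) of the Gaussian heat pulse, Eq. (3),
    with peak T0 + 0.3 Tc = 1.1 Tc at x = xbar, t = 0."""
    t0 = sigma**2 / (2.0 * k)
    return (T0 + 0.3 * Tc * (t0 / (t + t0))
            * np.exp(-(x - xbar)**2 / (4.0 * k * (t + t0))))
```

The pulse peaks at $1.1T_c$ on the center line at $t=0$, is essentially $T_0$ far from the center, and decays toward $T_0$ everywhere as $t$ grows.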
Figure~\ref{fig:pulse} displays the temperature
of each column at eight times between $t = 1$ and 500~MCSS.
By first promoting the center-most lattice sites to temperatures above $T_c$
before relaxing them back to $T_0$ according to Eq.~(\ref{eq:temp}), we expect
to initiate a magnetization-switching event that originates along
the center line of the lattice
and propagates outward. After the completion of this switching process,
almost all spins will be oriented up, $s_i = +1$. We define the
switching time $t_{\rm s}$ as the time until the system first
reaches a magnetization per spin,
\begin{equation}
m = \frac{1}{L^{2}} \sum_is_i \;,
\label{eq:magn}
\end{equation}
of zero or greater.
\section{RESULTS}
\label{sec-res}
We first performed a preliminary study
to confirm that magnetization switching can be induced by
the temperature profile, given the parameters used in Eq.~(\ref{eq:temp}).
For this purpose, we inspected snapshots of
the system during a single run at $H = 0.2$. In Fig.~\ref{fig:snap}
we display the configuration
of the system at six times between $t = 1$ and 125~MCSS
during this run. As expected, the switching begins near the center line
of the system, where the temperature
is above critical, and propagates outward. We note a strong similarity of
the simulated magnetization configurations to experimental images of ultrathin,
strongly anisotropic films undergoing magnetization reversal, such as
Figs.\ 3, 4, and 8 of Ref.~\onlinecite{KIRI97}
and Fig.\ 2 of Ref.~\onlinecite{ROBB08}. This observation further confirms the
ability of our simplified model to elucidate generic dynamical features
of real ultrathin films.
Having confirmed a switching event
at $H=0.2$, statistics were accumulated for 200 simulations at $H=0.2$ and
also at fifteen weaker fields down to $H=0.06$,
as detailed in Table~\ref{table}.
For each field, 100 simulations were performed at a constant, uniform
temperature
of $T_0 = 0.8T_c$, and 100 were performed using the time-dependent temperature
profile given by Eq.~(\ref{eq:temp}).
For each run, the average magnetization for each column at each time step was
recorded along with the switching time, $t_{\rm s}$.
To investigate the effect that the relaxing
temperature profile has on each column
of the Ising lattice,
we plotted the average magnetization per spin against the column number.
In Fig.~\ref{fig:mx}
we show this average magnetization for $H = $ 0.2, 0.08, and 0.06.
The plots on the left [Fig.~\ref{fig:mx}(a), (c), and (e)]
result from the 100 runs with the relaxing
temperature profile, and the ones on the
right [Fig.~\ref{fig:mx}(b), (d), and (f)] from the 100 runs at
the constant, uniform temperature of $T_0$. The
plots at $H = 0.2$ [(a) and (b)] show the average magnetization per spin at
eight different times between $t = 1$ and 300~MCSS.
The plots at $H = 0.08$ [(c) and (d)] show the average magnetization per spin
at ten different times between $t = 1$ and 5500~MCSS.
Finally, the plots at $H = 0.06$ [(e) and (f)] show the average magnetization
per spin at nine different times between $t = 1$ and 25000~MCSS. (For a full
listing of the times, see the figure caption.)
Again comparing the results with a
relaxing temperature profile to those realized at
constant, uniform temperature, in Fig.~\ref{fig:cum}
we show cumulative probability distributions for the switching times for fields
$H = 0.2$, 0.15, 0.08, 0.0725, 0.065, and 0.06. The black ``stairs'' are the
cumulative distributions for the
switching times in the 100 runs with the relaxing
temperature profile (hereafter referred to as $t_{\rm s}$). The gray (red
online) stairs are the cumulative distributions
for the switching times in the 100 runs at constant, uniform temperature
(hereafter referred to as $t_{\rm c}$).
Table~\ref{table} lists the median switching times for both the 100 runs with the
relaxing temperature profile ($t_{\rm s}$) and the 100 runs
at constant, uniform temperature $T_0$ ($t_{\rm c}$) for each value of $H$.
Also listed are the estimated errors $\Delta t_{\rm s}$ and
$\Delta t_{\rm c}$.
The last two columns give the ratio $t_{\rm s} / t_{\rm c}$
and the associated error $\Delta ( {t_{\rm s}}/{t_{\rm c}} )$.
The error $\Delta t_{\rm s}$ is defined as
$(t_{{\rm s}2} - t_{{\rm s}1})/2$,
where $t_{{\rm s}2}$ is the switching time with
a cumulative probability of $0.55$ and $t_{{\rm s}1}$ is the switching time
with a cumulative probability of $0.45$, and $\Delta t_{\rm c}$ is defined
analogously. The error in the ratio $ ( {t_{\rm s}}/{t_{\rm c}} )$ is
calculated in the standard way as
\begin{equation}
\Delta \left( \frac{t_{\rm s}}{t_{\rm c}} \right) = \sqrt{\left( \frac{\Delta t_{\rm s}}{t_{\rm c}}
\right)^2 + \left( \frac{t_{\rm s}}{t_{\rm c}^2}\Delta t_{\rm c} \right)^2} \;.
\label{eq:err}
\end{equation}
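Equation~(\ref{eq:err}) is the standard quadrature formula for the error of a ratio. As a quick sanity check (our own sketch, not part of the original analysis), the short Python snippet below, using illustrative made-up numbers rather than values from Table~\ref{table}, confirms that it agrees with the equivalent relative-error form $\Delta(t_{\rm s}/t_{\rm c}) = (t_{\rm s}/t_{\rm c})\sqrt{(\Delta t_{\rm s}/t_{\rm s})^2 + (\Delta t_{\rm c}/t_{\rm c})^2}$:

```python
import math

def ratio_error(ts, tc, dts, dtc):
    # Eq. (eq:err): propagated error of the ratio ts/tc
    return math.sqrt((dts / tc) ** 2 + (ts / tc ** 2 * dtc) ** 2)

def ratio_error_relative(ts, tc, dts, dtc):
    # Equivalent relative-error form of the same formula
    return (ts / tc) * math.sqrt((dts / ts) ** 2 + (dtc / tc) ** 2)

# Illustrative numbers only (NOT values from Table 1)
ts, tc, dts, dtc = 120.0, 310.0, 8.0, 15.0
assert math.isclose(ratio_error(ts, tc, dts, dtc),
                    ratio_error_relative(ts, tc, dts, dtc))
```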
The median switching time has the advantage over the mean
that it can be estimated even when only half of the 100 simulations
switch within the maximum number of time steps. This significantly reduces the
computational requirements, especially for weak fields.
The ratio $ ( {t_{\rm s}}/{t_{\rm c}} )$ is plotted vs.\ $H$
in Fig.~\ref{fig:ratio}. The minimum value of this ratio
signifies the maximum benefit from using the relaxing
temperature profile of the HAMR method. The corresponding field
value, $H=0.0725$, is the optimal field for this simulation.
To explain the nonmonotonic shape of the curve representing
$ ( {t_{\rm s}}/{t_{\rm c}} )$ in Fig.~\ref{fig:ratio}, it is necessary to
understand the two most important modes of nucleation-initiated
magnetization switching in finite-sized systems: multidroplet (MD) and
single-droplet (SD). (For more detailed discussions,
see Refs.\ \onlinecite{RIKV94A,RIKV94}.)
The average time between random nucleation events of
a growing droplet of the equilibrium phase in a $d$-dimensional system of
linear size $L$ has the strongly field-dependent form,
$\tau_{\rm n} \propto L^d \exp[\Xi(T) / (T |H|^{d-1})]$, where $\Xi(T)$ is a
measure of the free energy associated with the droplet surface.\cite{RIKV94A}
Once a droplet has nucleated,
for the weak fields and relatively high temperatures studied in this work
it grows with a near-constant and isotropic radial velocity
$v_{\rm g} \propto |H|/T$.\cite{RIKV00B}
The time it would take a newly
nucleated droplet to grow to fill half of a system of volume $L^d$ is
therefore $\tau_{\rm g} \propto L/v_{\rm g}$.
If $\tau_{\rm g} \gg \tau_{\rm n}$, many droplets will nucleate before the
first one grows to a size comparable to the system, and many droplets will
contribute to the switching process. This is the MD regime, which corresponds
to moderately strong fields and/or large systems.\cite{RIKV94A}
It is the switching mode shown in Fig.~\ref{fig:snap} for $H=0.2$.
In the limit of infinitely large systems it
is identical to the well-known Kolmogorov-Johnson-Mehl-Avrami (KJMA)
theory of phase transformations.\cite{KOLM37,JOHN39,AVRAMI,RAMO99}
If $\tau_{\rm g} \ll \tau_{\rm n}$, the first droplet to nucleate will
switch the system magnetization on its own. This is the SD regime, which
corresponds to weak fields and/or small systems.\cite{RIKV94A}
It is the switching mode shown in Fig.~\ref{fig:snap06A} for $H=0.06$.
The crossover region between the SD and MD regimes is known as the Dynamic
Spinodal (DSP).\cite{RIKV94A}
One aspect of the MD/SD picture that is particularly relevant to the current
problem is the fact that any switching event that takes place at a time
$t < \tau_{\rm g}$ cannot be accomplished by a single droplet,
and thus must be due to the MD mechanism.\cite{BROW01} For a circular
droplet in a square $L \times L$ system,
$\tau_{\rm g} \approx L/(\sqrt{2 \pi} v_{\rm g})$. Using results from
Ref.~\onlinecite{RIKV00B} (which, like the present model, neglects
pinning effects\cite{KIRI97,ROBB08}), we find that in the range of
moderately weak fields studied here, at $T = 0.8 T_c$
$v_{\rm g}$ can be well approximated as
$v_{\rm g} \approx 0.75 \tanh{(H/1.82)}$.
The resulting estimates for $\tau_{\rm g}$
in the simulations (which contain {\em no\/} adjustable parameters)
are shown as vertical lines in Fig.~\ref{fig:cum}(c-f).
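These estimates are simple to reproduce. The following Python sketch (our own restatement of the two approximations quoted above, not part of the original analysis) evaluates $v_{\rm g} \approx 0.75\tanh(H/1.82)$ and $\tau_{\rm g} \approx L/(\sqrt{2\pi}\,v_{\rm g})$, in MCSS, for the $L=128$ system used here:

```python
import math

def v_g(H):
    # Interface velocity at T = 0.8 T_c, fitted form quoted from Ref. RIKV00B
    return 0.75 * math.tanh(H / 1.82)

def tau_g(H, L=128):
    # Time for one circular droplet to fill half of an L x L system (in MCSS)
    return L / (math.sqrt(2.0 * math.pi) * v_g(H))

for H in (0.2, 0.0725, 0.06):
    print(f"H = {H}: tau_g ~ {tau_g(H):.0f} MCSS")
```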
A kink in the cumulative probability distribution for the
heat-assisted runs is observed at $\tau_{\rm g}$,
with significantly higher slopes in the MD regime on the short-time
side of $\tau_{\rm g}$ than in the SD regime on the long-time side.
From these figures
we see that the optimal field value for $L=128$, $H = 0.0725$,
corresponds to the situation where just above 50\% of the heat-assisted switching
events are caused by the MD mechanism, while essentially all the constant-temperature
switching events are SD. This situation is illustrated by the series of
snapshots in Fig.~\ref{fig:snap075}.
For significantly larger fields, both protocols lead to
all MD switching events [Fig.~\ref{fig:cum}(a,b)],
while for weaker fields, the great majority of the
switching events are SD for both protocols
[Fig.~\ref{fig:cum}(e,f)]. In both cases, the ratio
$t_{\rm s} / t_{\rm c}$ is larger than it is for fields near the
optimal value [Fig.~\ref{fig:cum}(c,d)].
We have confirmed these conclusions by additional simulations for $L = 64$ and 96
(not shown).
\section{Conclusions}
\label{sec-conc}
In this paper we have studied a kinetic Ising model of magnetization
reversal under the influence of a momentary, spatially
localized input of energy in the form
of heat (heat-assisted magnetization reversal, or HAMR). Our numerical
results indicate that the HAMR technique can significantly speed up the
magnetization reversal in a uniform, applied magnetic field, and we
find that this speed-up has its optimal
value at intermediate values of the field.
This effect is explained in terms of the MD and SD mechanisms of
nucleation-initiated magnetization switching in finite systems.\cite{RIKV94A}
The two-dimensional geometry chosen for this study is particularly appropriate
for thin films. We therefore expect that our predictions
should be experimentally observable for
ultrathin ferromagnetic films with strong perpendicular anisotropy,
such as Co/Pd\cite{CARC85,PURN09} or Co/Pt\cite{ROBB08,FERR02} multilayers.
\section*{Acknowledgments}
The authors acknowledge useful conversations with M.~A.\ Novotny and
comments on the manuscript by S.\ von~Moln{\'a}r.
This work was supported in part by U.S.\ NSF Grants No.\ DMR-0802288
and DMR-1104829, and
by the Florida State University Center for Materials Research and Technology
(MARTECH).
Computer resources were provided by the Florida State University
High-performance Computing Center.
\section{Introduction} \label{i}
Let $A_q(n,d;\mL)$ denote the maximum size of a code of length $n$, minimum distance at least $d$, and contained in a subset $\mL \subset \mF^n$, where
$\mF$ is an alphabet of finite size $q$. A central problem in coding theory is to obtain good upper and lower bounds for $A_q(n,d)=A_q(n,d;\mF^n)$.
The asymptotic version of this quantity is the asymptotic information rate function:
\beq \label{eq:alpha} \alpha(x) = \limsup_{n \to \infty} n^{-1} \log_q A_q(n, x n), \; x \in [0,1].\eeq
The quantities $A_q(n,d;\mL)$ and $A_q(n,d)$ are related by the inequality
\beq \label{eq:BE} A_q(n,d) \leq q^n A_q(n,d;\mL)/ |\mL|, \eeq
known as the Bassalygo-Elias lemma.
Taking $\mL$ to be a Hamming ball of diameter $w$, and choosing $w$ optimally, gives at the asymptotic level the Hamming and Elias upper bounds:
\beq \label{eq:sp} \alpha_{H}(x) = 1 - H_q(x/2), \; x \in [0,1]. \eeq
\beq \label{eq:E} \alpha_E(x)= \alpha_{H}(2 \theta(1 - \sqrt{1 - x/\theta} )), \; x \in [0, \theta]. \eeq
The bound $\alpha_E$ is better than $\alpha_H$ for all $x$. Here $H_q(x)$ is the entropy function \eqref{eq:H},
and $\theta := 1 - q^{-1}$.
An anticode of diameter $w$ in $\mF^n$ is any subset of $\mF^n$ with Hamming diameter $w$. Let $A^*_q(n,w)$ denote the maximum size of an anticode of diameter at most $w$ in $\mF^n$. In contrast to the situation with $A_q(n,d)$, the quantity $A^*_q(n,d)$ was explicitly determined by Ahlswede and Khachatrian in \cite{AK}. From their result, it is easy to determine the asymptotic quantity $ \alpha^*(x) = \lim_{ n \to \infty} n^{-1} \log_q A_q^*(n,xn)$.
We do not actually need the results of \cite{AK}; however, they are the main inspiration for this work.
Taking $\mL$ to be an $A^*_q(n,w)$ anticode in \eqref{eq:BE}, and choosing $w$ optimally, we get the following two bounds which improve $\alpha_H$ and $\alpha_E$ respectively.
\begin{theorem} \label{HSthm} (hybrid Hamming-Singleton bound)
\[ \alpha_{HS}(x) = \begin{cases}
1- H_q(\tfrac{x}{2})\!\! &\text{if $x \in [0,2/q]$} \\
(1-x) H_q(1)\!\! &\text{if $ x \in [2/q,1]$} . \end{cases}\]
The bound $\alpha_{HS}$ improves the Hamming and the Singleton bounds. It is $\cup$-convex and continuously differentiable.
\end{theorem}
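Theorem \ref{HSthm} is easy to check numerically. The following Python sketch (our own verification, not part of the proof) confirms that the two branches of $\alpha_{HS}$ match at $x = 2/q$ via the relation $1 - H_q(1/q) = (1-2/q)\log_q(q-1)$, and that $\alpha_{HS}$ never exceeds the Hamming or Singleton bounds:

```python
import math

def H_q(x, q):
    # q-ary entropy function of Eq. (eq:H); H_q(0) = 0, H_q(1) = log_q(q-1)
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return math.log(q - 1, q)
    return (x * math.log((q - 1) / x, q)
            + (1 - x) * math.log(1 / (1 - x), q))

def alpha_HS(x, q):
    # Hybrid Hamming-Singleton bound of Theorem HSthm
    if x <= 2 / q:
        return 1 - H_q(x / 2, q)
    return (1 - x) * H_q(1.0, q)

q = 5
# Continuity at x = 2/q: 1 - H_q(1/q) = (1 - 2/q) log_q(q-1)
assert math.isclose(1 - H_q(1 / q, q), (1 - 2 / q) * math.log(q - 1, q))
# alpha_HS improves both the Hamming bound and the Singleton bound
for i in range(1, 100):
    x = i / 100
    assert alpha_HS(x, q) <= 1 - H_q(x / 2, q) + 1e-12   # Hamming
    assert alpha_HS(x, q) <= 1 - x + 1e-12               # Singleton
```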
\begin{theorem} \label{EPthm} (hybrid Elias-Plotkin bound) Let $q >2$.
\[ \alpha_{EP}(x) =
\begin{cases} 1 -H_q(\theta - \sqrt{\theta^2 - x \theta})\!\!\! &\text{if $x \in [0,\tfrac{2q-3}{q(q-1)}]$}\\
(\theta - x) \tfrac{ (q-1) H_q(1)}{q-2}\!\!\! &\text{if $x \in[ \tfrac{2q-3}{q(q-1)},\theta]$}
\end{cases} \]
The bound $\alpha_{EP}$ improves the Elias and Plotkin bounds. It is $\cup$-convex and continuously differentiable.
\end{theorem}
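Theorem \ref{EPthm} can be checked numerically in the same spirit (again our own sketch, not part of the proof): $\alpha_{EP}$ agrees with the Elias bound $\alpha_E$ up to $x = \tfrac{2q-3}{q(q-1)}$ and lies strictly below it beyond that point:

```python
import math

def H_q(x, q):
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return math.log(q - 1, q)
    return (x * math.log((q - 1) / x, q)
            + (1 - x) * math.log(1 / (1 - x), q))

def alpha_E(x, q):
    # Elias bound, Eq. (eq:E)
    th = 1 - 1 / q
    return 1 - H_q(th * (1 - math.sqrt(1 - x / th)), q)

def alpha_EP(x, q):
    # Hybrid Elias-Plotkin bound of Theorem EPthm (q > 2)
    th = 1 - 1 / q
    xc = (2 * q - 3) / (q * (q - 1))
    if x <= xc:
        return alpha_E(x, q)
    return (th - x) * (q - 1) * H_q(1.0, q) / (q - 2)

q = 3
th = 1 - 1 / q
xc = (2 * q - 3) / (q * (q - 1))
for i in range(1, 100):
    x = i / 100 * th
    assert alpha_EP(x, q) <= alpha_E(x, q) + 1e-12
assert math.isclose(alpha_EP(xc, q), alpha_E(xc, q))
assert alpha_EP(0.6, q) < alpha_E(0.6, q)   # strict improvement beyond xc
```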
It is not known if the function $\alpha(x)$ itself is $\cup$-convex, although it is tempting to believe that it is. We propose a weaker conjecture:
\begin{conjecture} \label{conj1}
The function $\tfrac{\alpha(x)}{\theta-x}$ is decreasing. In other words
\[ \alpha(t x + (1-t) \theta) \leq t \alpha(x) + (1-t) \alpha(\theta), \; t \in [0,1].\]
\end{conjecture}
As evidence for this conjecture, we will show that Theorems \ref{HSthm} and \ref{EPthm} follow very easily if we admit the truth of the conjecture.
The bound $\alpha_{EP}$ in Theorem \ref{EPthm} is an elementary and explicit correction to the classical Elias bound.
It does not, however, improve the upper bounds obtained by the linear programming approach, such as the second MRRW bound $\alpha_{MRRW2}$ (due to Aaltonen \cite{Aalt1}) or the further improvement of $\alpha_{MRRW2}$ due to Ben-Haim and Litsyn \cite[Theorem 7]{BL}. The reasons for this are as follows: for small $\delta$ we have $\alpha_{EP}(\delta) = \alpha_E(\delta) \geq \alpha_{MRRW2}(\delta)$. For large $\delta$, the inequality $\alpha_{EP}(\delta) > \alpha_{MRRW2}(\delta)$ follows from the fact that $\alpha_{EP}(\delta)$ has a nonzero slope at $\delta=1-1/q$, whereas the actual function $\alpha(\delta)$ and the bound $\alpha_{MRRW2}$ have zero slope at $\delta=1-1/q$. \\
The paper is organized as follows. In section \ref{ac}, we collect some results on size of anticodes, which we use in section \ref{sec3} to prove Theorems \ref{HSthm} and \ref{EPthm}. We discuss Conjecture \ref{conj1} in section \ref{convx}.
\section{Size of anticodes} \label{ac}
We recall that $A^*_q(n,d)$ is the maximum size of an anticode of diameter at most $d$ in $\mF^n$. If we take $\mL$ to be an anticode of size $A_q^*(n,d-1)$ then clearly $A_q(n,d;\mL)=1$. Using this in \eqref{eq:BE}, we get a bound
\beq \label{eq:Del} A_q(n,d) \leq q^n/A_q^*(n,d-1), \eeq
known as Delsarte's code-anticode bound \cite{Delsarte_Philips}. Taking $d=xn$ where $x \in[0,1]$ we get \[ n^{-1} \log_q A_q(n,xn) \leq 1 - n^{-1} \log_q A_q^*(n,xn-1).\] Taking $\limsup_{n \to \infty}$ we get:
\beq \label{eq:Del1} \alpha(x) \leq 1 - \alpha^*(x),
\eeq
where
\beq \label{eq:alpha*} \alpha^*(x)= \liminf_{n \to \infty} n^{-1} \log_q A^*_q(n, xn).\eeq
This is the asymptotic form of \eqref{eq:Del}. We use the notation $B(r;n)$ and $V_q(n,r)$ to denote a Hamming ball of radius $r$ in $\mF^n$ and its volume, respectively.
The ball $B(t;n)$ where $t=\lfloor (d-1)/2 \rfloor$ in $\mF^n$ is an anticode of diameter at most $d-1$. Let $\mF^n =\mF^{d-1} \times \mF^{n-d+1}$ and let $v \in \mF^{n-d+1}$ be a fixed word. Sets of the form $\mF^{d-1} \times \{v\}$ of size $q^{d-1}$ are also anticodes of diameter $d-1$. It follows that:
\beq \label{eq:HS1} \alpha^*(x) \geq \text{max}\{ H_q(x/2), x \}.\eeq
Here, we have used the well known formula:
\[ \lim_{n \to \infty} n^{-1} \log_q V_q(n,t n) = H_q(t), \; t \in [0,\theta], \]
where,
\beq \label{eq:H} H_q(x) = x \log_q(\tfrac{q-1}{x}) +(1-x) \log_q( \tfrac{1}{1-x}), \; x \in [0,1]. \eeq
While the convexity of $\alpha(x)$ is an open question, it is quite easy to see that:
\begin{lemma} \label{cnvx1} The function $\alpha^*(x)$ is $\cap$-convex.
\end{lemma}
\bep
If $S_1 \subset \mF^{n_1}$ and $S_2 \subset \mF^{n_2}$ are anticodes of diameters $d_1$ and $d_2$ respectively, then $S_1 \times S_2 \subset \mF^{n_1} \times \mF^{n_2}$ is an anticode of diameter $d_1+d_2$.
Taking $S_i$ to be $A^*_q(n_i,d_i)$ anticodes, we immediately get
\[A^*_q(n_1+n_2,d_1+d_2) \geq A^*_q(n_1,d_1) A^*_q(n_2,d_2).\]
Let $n = n_1+n_2$ go to infinity with $n_1/n = t +o(1)$, $d_1/n_1 = x+o(1)$ and $d_2/n_2 = y+o(1)$. Applying $\liminf_{n \to \infty} n^{-1} \log_q$ to this inequality we get:
\[ \alpha^*(t x + (1-t) y) \geq t \alpha^*(x) + (1-t) \alpha^*(y), \; t \in [0,1].\]
\eep
We note that with codes we have $d(\mC_1 \times \mC_2) = \text{min}\{d(\mC_1), d(\mC_2)\}$, which is why the above proof method does not apply to the question of convexity of $\alpha(x)$.
From \eqref{eq:HS1} and Lemma \ref{cnvx1} we get:
\[ \alpha^*(t x + (1-t) y) \geq t H_q(x/2)+ (1-t) y, \; t \in [0,1].\]
Let $\delta = tx+(1-t) y$. We can rewrite this as
\[ \alpha^*(\delta) \geq f(x,y),\]
where $f : [0,\delta) \times (\delta,1] \to \mathbb{R}$ is defined by
\beq \label{eq:f} f(x,y)= \tfrac{y-\delta}{y-x} (H_q(x/2)-x) + \delta.\eeq
We note that
\begin{IEEEeqnarray*}{rCl}
&\tfrac{(y-x)^2}{\delta-x} \tfrac{\partial f}{\partial y} (x,y) = H_q(x/2) -x, \\
&\tfrac{x(y-x)^2}{y(y- \delta)} \tfrac{\partial f}{\partial x} (x,y) = H_q(\tfrac{x}{2}) -x +(1-\tfrac{x}{y}) \log_q(1-\tfrac{x}{2}).
\end{IEEEeqnarray*}
There is a unique positive number $b >0$ satisfying $H_q(b/2) = b$ (where the Hamming and Singleton bounds intersect). Therefore,
$H_q(x/2) -x$ has the same sign as $b-x$. Using this in \eqref{eq:f}, we see that $f(x,y) \leq \delta$ for $x \geq b$. Therefore, in order to maximize $f(x,y)$ it suffices to consider $x <b$.
We note that $\tfrac{\partial f}{\partial y} (x,y)$ has the same sign as $H_q(x/2) - x$ and hence that of $b-x$. Since $x <b$, we see that for fixed $x<b$, the function $f(x,y)$ is maximized for $y=1$. We are now reduced to maximizing
\[ f(x,1) = 1 - (1-\delta) \tfrac{1-H_q(x/2)}{1-x},\; x \in [0,\delta].\]
\begin{lemma} \label{sp'} Let $g(x) =\tfrac{1 - H_q(x/2)}{1-x}$ for $x \in [0,1]$.
\[{\rm sign} (g'(x)) = {\rm sign}(x - \tfrac{2}{q}).\]
\end{lemma}
\bep We calculate:
\[ g'(x) = \tfrac{1}{2(1-x)^2} \log_q ( \tfrac{q^2x(2-x)}{4(q-1)}). \]
Therefore ${\rm sign} (g'(x)) = {\rm sign}(\tfrac{q^2x(2-x)}{4(q-1)} - 1)$.
Next, we note that
\[ \tfrac{q^2x(2-x)}{4(q-1)} - 1 = q (x-\tfrac{2}{q}) \tfrac{(q-2) +q(1-x)}{4(q-1)} \]
has the same sign as $x-2/q$, as was to be shown. A stronger assertion is that $g(x)$ is in fact $\cup$-convex:
differentiating once more, we get:
\[ \ln(q) (1-x)^3 g''(x) = \ln(\tfrac{q^2/4}{q-1}) + (\tfrac{1}{2x-x^2} -1- \ln(\tfrac{1}{2x-x^2}))\]
We note that $q^2 \geq 4(q-1)$, and hence the first term is non-negative. The remaining parenthetical term is non-negative using the inequality
\beq \label{eq:log} t-1-\ln(t) \geq 0 \; \text{for } t \geq 1, \eeq
and the fact that $t = 1/(2x-x^2) \geq 1$ for $x \in (0,1]$. \eep
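Lemma \ref{sp'} is also easy to confirm numerically. The following Python sketch (our own check) estimates $g'(x)$ by central differences, verifies that its sign changes exactly at $x = 2/q$, and tests the midpoint convexity of $g$:

```python
import math

def H_q(x, q):
    return (x * math.log((q - 1) / x, q)
            + (1 - x) * math.log(1 / (1 - x), q))

def g(x, q):
    # g(x) = (1 - H_q(x/2)) / (1 - x), as in Lemma sp'
    return (1 - H_q(x / 2, q)) / (1 - x)

q, eps = 7, 1e-6
for i in range(2, 98):
    x = i / 100
    deriv = (g(x + eps, q) - g(x - eps, q)) / (2 * eps)
    if abs(x - 2 / q) > 0.01:          # stay away from the zero of g'
        assert (deriv > 0) == (x > 2 / q)
    # midpoint convexity: g is cup-convex
    assert g(x, q) <= (g(x - 0.01, q) + g(x + 0.01, q)) / 2 + 1e-12
```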
It follows from Lemma \ref{sp'} that
\beq \label{eq:HSmin} \text{argmin}_{x \in [0,\delta]} \tfrac{1 - H_q(x/2)}{1-x} = \text{min}\{\delta, 2/q\}. \eeq
Therefore we obtain the bound:
\begin{theorem} \label{thm3} $\alpha^*(x) \geq \beta(x)$ where
\beq \label{eq:beta} \beta(x) = \begin{cases}
H_q(x/2) \!\!&\text{if } x \in [0,2/q]\\
1 - (1-x) H_q(1) \!\!&\text{if } x \in [2/q,1]. \end{cases}\eeq
Moreover, $\beta(x)$ is continuously differentiable and $\cap$-convex.
\end{theorem}
We have used the relation
\beq \label{eq:HScts} \tfrac{1 - H_q(1/q)}{1-2/q} = H_q(1)=\log_q(q-1). \eeq
The function $\beta(x)$ is continuously differentiable because the component for $x \geq 2/q$ is just the tangent line at $x = 2/q$ to the component for $x \leq 2/q$, i.e. to $H_q(x/2)$. We note that $\beta'(x)$ equals $H_q'(x/2)/2$ for $x \leq 2/q$ and $H_q'(1/q)/2$ for $x \geq 2/q$. Since $\beta'(x)$ is non-increasing, it follows that $\beta(x)$ is $\cap$-convex.
In the next lemma, we show that there is a sequence of anticodes $S_n \subset \mF^n$ of diameter at most $\delta n$ such that $\lim_{n \to \infty} n^{-1} \log_q |S_n|$ equals $\beta(\delta)$, i.e., the lower bound on $\alpha^*(\delta)$ given in Theorem \ref{thm3}.
\begin{lemma} \label{aclem}
Consider the anticodes $S(d,n)$ of diameter $d$ in $\mF^n$ (taken from \cite{AK}) given by
\[S(d,n) = B(r_{d,n};n-d+2r_{d,n}) \times \mF^{d - 2r_{d,n}}, \, \text{where}\]
\[ r_{d,n} = {\rm max}\{0, {\rm min}\{ \lceil \tfrac{d-1}{2} \rceil, \lceil \tfrac{n-d-q+1}{q-2} \rceil\}\} .\]
Then $\lim_{n \to \infty} n^{-1}\log_q |S(\delta n,n)| = \beta(\delta)$.
\end{lemma}
\bep We note that \[ \rho=\lim_{n \to \infty} \tfrac{r_{\delta n,n}}{n} = \begin{cases} \tfrac{\delta}{2} \!\!&\text{if }\delta \in [0,2/q]\\
\tfrac{1- \delta}{q-2} \!\!&\text{if }\delta \in [2/q,1]. \end{cases}\]
Also $ \lim_{n \to \infty} n^{-1} \log_q |S(\delta n, n)|$ equals
\[ (1- \delta + 2 \rho) H_q(\tfrac{\rho}{1- \delta + 2 \rho}) +(\delta - 2\rho),\]
which simplifies to $H_q(\delta/2)$ if $\delta \leq 2/q$ and (on using \eqref{eq:HScts}) to $ 1 - (1-\delta) H_q(1)$
if $\delta \geq 2/q$. This is the same as $\beta(\delta)$.
\eep
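The convergence in Lemma \ref{aclem} can be observed directly at finite $n$. The sketch below (our own check, for $q = 3$) computes $n^{-1}\log_q |S(\delta n, n)|$ exactly and compares it with $\beta(\delta)$:

```python
import math
from math import comb, ceil

def ball_volume(m, r, q):
    # V_q(m, r): volume of a Hamming ball of radius r in F^m
    return sum(comb(m, j) * (q - 1) ** j for j in range(r + 1))

def size_S(d, n, q):
    # |S(d, n)| for the anticodes of Lemma aclem (q > 2)
    r = max(0, min(ceil((d - 1) / 2), ceil((n - d - q + 1) / (q - 2))))
    return ball_volume(n - d + 2 * r, r, q) * q ** (d - 2 * r)

def H_q(x, q):
    if x == 0.0:
        return 0.0
    return (x * math.log((q - 1) / x, q)
            + (1 - x) * math.log(1 / (1 - x), q))

def beta(x, q):
    # Lower bound on alpha^*(x) from Theorem thm3
    if x <= 2 / q:
        return H_q(x / 2, q)
    return 1 - (1 - x) * math.log(q - 1, q)

q, n = 3, 3000
for delta in (0.4, 0.8):
    d = int(delta * n)
    rate = math.log(size_S(d, n, q), q) / n
    # finite-n rate is already within O(log n / n) of beta(delta)
    assert abs(rate - beta(delta, q)) < 0.01
```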
We now have all the results we need for proving Theorems \ref{HSthm} and \ref{EPthm}. However, we will state a remarkable theorem due to
Ahlswede and Khachatrian \cite{AK}, which we will not need. We also record an asymptotic version of their result as a corollary, as it does not seem to have appeared in the literature. In brief, their theorem states that $A^*_q(n,d)$ equals $|S(d,n)|$. Moreover, any $A_q^*(n,d)$ anticode is Hamming isometric to the anticode $S(d,n)$ (with some exceptions). At the asymptotic level, the result is again remarkable: the lower bound $\beta(\delta)$ for $\alpha^*(\delta)$ given in Theorem \ref{thm3} is actually the exact value of $\alpha^*(\delta)$. Moreover, $\alpha^*(\delta)$ need not have been defined using $\liminf_{n \to \infty}$, as $\lim_{n \to \infty} n^{-1}\log_q A_q^*(n,\delta n)$ already exists.
\begin{theorem*} \cite{AK} \label{AKthm}
Given $q \geq 2$ and integers $0 \leq d \leq n$, let $r_{d,n}$ and $S(d,n)$ be as in Lemma \ref{aclem}. Then,
\[ A_q^*(n,d) = |S(d,n)|.\]
Moreover, up to a Hamming isometry of $\mF^n$ an anticode $S$ of size $A_q^*(n,d)$ must be:
\begin{itemize}
\item $S(d,n)$
\item or $S(d,n)$ with $r_{d,n}$ replaced with $r_{d,n}-1$. This case is possible only if $(n-d-1)/(q-2)$ is a positive integer not exceeding $d/2$.
\end{itemize}
\end{theorem*}
\begin{corollary} \label{alpha*}
\[ \alpha^*(x)=\begin{cases}
H_q (x/2) &\text{ if } 0 \leq x \leq 2/q\\
1 - (1-x) H_q(1) &\text{ if } 2/q \leq x \leq 1 . \end{cases} \] \end{corollary}
\bep It follows from the theorem of Ahlswede and Khachatrian, together with Lemma \ref{aclem} that
\[ \lim_{n \to \infty} \tfrac{\log_q A_q^*(n,\delta n)}{n} = \lim_{n \to \infty} \tfrac{\log_q |S(\delta n,n)|}{ n} = \beta_q(\delta).\]
Therefore
\[ \alpha^*(\delta) = \liminf_{n \to \infty} \tfrac{\log_q A_q^*(n,\delta n)}{n} = \lim_{n \to \infty} \tfrac{\log_q A_q^*(n,\delta n)}{n} = \beta(\delta).\]
\eep
\section{Proofs of Theorems \ref{HSthm} and \ref{EPthm}} \label{sec3}
\subsection{Proof of Theorem \ref{HSthm}}
If we use the bound $\alpha^*(x) \geq \beta(x)$ of Theorem \ref{thm3} in the inequality $\alpha(x) \leq 1 - \alpha^*(x)$ (see \eqref{eq:Del1}), we obtain the bound
\[ \alpha(x) \leq 1 - \beta(x) =:\alpha_{HS}(x). \]
Since $\beta(x)$ is $\cap$-convex and continuously differentiable (see Theorem \ref{thm3}), it follows that $\alpha_{HS}(x)$ is $\cup$-convex and continuously differentiable.
To show that $\alpha_{HS}(x) \leq \alpha_S(x) = 1-x$, we note that $\alpha_S(x)$, being the secant line to $\alpha_{HS}(x)$ between $(0,\alpha_{HS}(0))$ and $(1, \alpha_{HS}(1))$,
lies above the graph of $\alpha_{HS}(x)$, as the latter is $\cup$-convex. To prove that $\alpha_{HS}(x)$ improves $\alpha_{H}(x)$, we note that $\alpha_{HS}(x)$ coincides with $\alpha_H(x)$ for $x \leq 2/q$, and for $x \geq 2/q$, Lemma \ref{sp'} implies that $\alpha_H(x) \geq (1-x) H_q(1) = \alpha_{HS}(x)$. This finishes the proof of Theorem \ref{HSthm}.\\
It is worth noting that \eqref{eq:HSmin} implies the following formula for $\alpha_{HS}(\delta)$:
\beq \label{eq:HSmin1} \alpha_{HS}(\delta) = \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{H}(x) (1 - \delta)}{1-x}. \eeq
Since $\tfrac{\theta - \delta}{\theta-x} \leq \tfrac{1 - \delta}{1-x}$ for $x \in [0, \delta]$ and $\delta \leq \theta$, we get
\beq \label{eq:HSmin2} \alpha_{HS}(\delta) \geq \alpha_{HP}(\delta): =\min\limits_{x \in [0,\delta]} \tfrac{\alpha_{H}(x) (\theta - \delta)}{\theta-x}. \eeq
It can be shown (see subsection \ref{pf1'}) that $\alpha_{HP}(\delta)$ is an upper bound for $\alpha(\delta)$ which improves both the Hamming and Plotkin bounds.
\subsection{Proof of Theorem \ref{EPthm}}
It will be convenient to identify the alphabet $\mF$ with the abelian group $\bZ/q \bZ$. Given $0 \leq \delta \leq \omega$, let $w_n = \lfloor \omega n \rfloor$ and $d_n = \lfloor \delta n \rfloor$. We take $\mL_n \subset \mF^n$ to
be the anticode (from Lemma \ref{aclem}):
\beq \label{eq:1pf2} \mL_n=B(r_n;n-w_n+2r_{n}) \times \mF^{w_n - 2r_n}, \text{ where}\eeq
\[ r_{n}={\rm max}\{0, {\rm min}\{ \lceil \tfrac{w_n-1}{2} \rceil, \lceil \tfrac{n-w_n-q+1}{q-2} \rceil\}\}. \]
We will take the balls $B(r;m)$ to be centered at $(0, \dots,0) \in \mF^m$.
As in Lemma \ref{aclem}, we have
\beq \label{eq:rho} \rho :=\lim_{n \to \infty} \tfrac{r_n}{n} = \begin{cases}
\tfrac{\omega}{2} &\text{ if } \omega \in [0, 2/q]\\
\tfrac{1 - \omega}{q-2} &\text{ if } \omega \in [2/q,1] . \end{cases}\eeq
We also note that Lemma \ref{aclem} gives
\beq\label{eq:size_L_n} \lim_{n \to \infty} n^{-1} \log_q |\mL_n| = \beta(\omega) = 1 - \alpha_{HS}(\omega). \eeq
Let $A_q(n,d_n;\mL_n)$ be the maximum possible size of a code contained in $\mL_n$ and having minimum distance at least $d_n$.
\begin{theorem} \label{thm4} $ \lim_{n \to \infty} n^{-1} \log_q A_q(n,d_n;\mL_n) = 0$ if
\beq \label{eq:thm4} \tfrac{\rho}{\theta(1-\omega +2 \rho)} \leq 1 - \sqrt{\tfrac{ 1-\delta/\theta}{1- \omega + 2 \rho}}. \eeq
\end{theorem}
\bep Our proof is similar to the standard proof of the analogous result for the Elias bound (which corresponds to taking $\rho =\omega/2$ instead of the prescription \eqref{eq:rho}).
First let $\mC \subset \mL$ be a code of size $M = A_q(n,d;\mL)$,
where $\mL \subset \mF^n$ is the anticode $\mL= B(r;n-w+2r) \times \mF^{w- 2r}$
for some $r \leq w/2$. Let
\[ \gamma_1 (\mC)=( M n)^{-1} \sum_{i=1}^n \sum_{a \in \mF} m(i,a)^2, \]
where $m(i,a) = \# \{ c \in \mC : c_i = a\}$. We note that $M=\sum_{a \in \mF} m(i,a)$, and that
\[ M(M-1) d \leq \sum_{c\in \mC} \sum_{c' \in \mC} d(c,c') = n M^2(1 - \tfrac{\gamma_1}{M}).\]
We can rewrite this as:
\beq \label{eq:0thm4} M \leq \frac{d/n}{ \tfrac{\gamma_1}{M}- (1-\tfrac{d}{n})}, \; \text{provided } \tfrac{\gamma_1}{M}> 1-\tfrac{d}{n}. \eeq
For $n-w+2 r < i \leq n$ we use the Cauchy-Schwarz inequality to get $\sum_{a \in \mF} m(i,a)^2 \geq M^2/q$.
In particular
\beq \label{eq:1thm4} \tfrac{1}{M^2 (w - 2r)} \sum_{i=n-w +2 r +1}^{n} \sum_{a \in \mF} m(i,a)^2 \geq \tfrac{1}{q}.\eeq
Let $\pi_1$ be the projection of $\mF^n = \mF^{n-w+2r} \times \mF^{w-2r}$ on to the factor $\mF^{n-w+2r}$.
We note that for $c \in \mC$, we have wt$(\pi_1(c)) \leq r$ because $\pi_1(\mC) \subset B(r;n-w+2r)$. Here wt$(v)$ denotes the number of nonzero entries of $v$.
Therefore
\[ \sum_{i=1}^{n-w+2r} \sum_{a \neq 0} m(i,a) \leq Mr.\]
Since $\sum_{i=1}^{n-w+2r} \sum_{a} m(i,a) = M(n-w+2r)$ we get:
\[ S = \sum_{i=1}^{n-w+2r} m(i,0) \geq (n-w + r )M.\]
In particular
\beq \label{eq:2thm4} \tfrac{S}{M( n-w + 2 r)} - \tfrac{1}{q} \geq \tfrac{n-w+r}{n-w+2 r}- \tfrac{1}{q}= \theta -\tfrac{r}{n-w+2 r}.\eeq
We note that $\sum_{a \neq 0} m(i,a) = M- m(i,0)$.
By the Cauchy-Schwarz inequality:
\[ \sum_{i=1}^{n-w+2r} m(i,0)^2 \geq S^2/(n-w+2 r),\; \text{ and} \]
\[ \sum_{a \neq 0} m(i,a)^2 \geq (M - m(i,0))^2/(q-1). \]
Since $\sum_{i=1}^{n-w+2r} \sum_{a \in \mF} m(i,a)^2$ equals
\[ \sum_{i=1}^{n-w+2r}\left( m(i,0)^2 + \sum_{a \neq 0} m(i,a)^2 \right ),\]
we get: \[\sum_{i=1}^{n-w+2r} \sum_{a \in \mF} m(i,a)^2 \geq \!\!\! \sum_{i=1}^{n-w+2r} \!\! \left(
\tfrac{q m(i,0)^2 + M^2 - 2 M m(i,0)}{q-1} \right)\]
This can be rewritten as:
\[ \tfrac{1}{M^2(n-w+2r)} \sum_{i=1}^{n-w+2r} \sum_{a \in \mF} m(i,a)^2 \geq
\tfrac{1}{\theta}(\tfrac{S}{M(n-w+2 r)} - \tfrac{1}{q} )^2 + \tfrac{1}{q} \]
Combining this with \eqref{eq:1thm4} we get:
\[ \tfrac{\gamma_1}{M} \geq \tfrac{n-w +2r}{n \theta} (\tfrac{S}{M(n-w+2r)} - q^{-1})^2 + \tfrac{1}{q}.\]
Using \eqref{eq:2thm4} this can be written as:
\[ \tfrac{\gamma_1}{M} - \tfrac{1}{q} \geq \tfrac{n-w +2r}{n \theta} \, (\theta -\tfrac{r}{n-w+2 r})^2.\]
Now let $\mC_n \subset \mL_n$ be a sequence of codes of size $M_n = A_q(n,d_n;\mL_n)$.
The preceding inequality gives:
\[ \tfrac{\gamma_1(\mC_n)}{M_n} - \tfrac{1}{q} \geq \tfrac{1-\omega +2 \rho}{ \theta} \, (\theta -\tfrac{\rho}{1-\omega +2 \rho})^2 + o(1).\]
Using this in \eqref{eq:0thm4}, we get:
\[ M_n \leq \frac{\delta+o(1)}{ \tfrac{1-\omega +2 \rho}{ \theta} \, (\theta -\tfrac{\rho}{1-\omega +2 \rho})^2 - (\theta-\delta)+ o(1)}, \]
provided the denominator is a positive number.
Therefore, $\lim_{n \to \infty} n^{-1} \log_q M_n = 0$ provided
\[ \tfrac{1-\omega +2 \rho}{ \theta} \, (\theta -\tfrac{\rho}{1-\omega +2 \rho})^2 \geq \theta - \delta. \]
This condition is the same as
\[ \tfrac{\rho}{\theta(1-\omega +2 \rho)} \leq 1 - \sqrt{\tfrac{ 1-\delta/\theta}{1- \omega + 2 \rho}} \]
Since $M_n = A_q(n,d_n;\mL_n)$ this finishes the proof.
\eep
Using \eqref{eq:BE} we get:
\[ \tfrac{\log_q A_q(n,d_n)}{n} \leq 1 - \tfrac{\log_q |\mL_n|}{n} + \tfrac{\log_q A_q(n,d_n;\mL_n)}{n}.\]
Taking $\limsup$ as $n \to \infty$ and using the result of Theorem \ref{thm4} and \eqref{eq:size_L_n} we get:
\beq \label{eq:EPmax} \alpha(\delta) \leq \alpha_{HS}(\omega_{\text{max}}(\delta)), \eeq
where $\omega_{\text{max}}(\delta)$ is the largest value of $\omega$ for which the inequality \eqref{eq:thm4} holds.
In order to determine $\omega_{\text{max}}(\delta)$, we introduce functions $f_1, f_2$ on $[0,\theta]$ defined by:
\begin{IEEEeqnarray}{rcl}
f_1(\delta) &=2 \theta(1 - \sqrt{1 - \delta/\theta}) \\
f_2(\delta)&=1 - (1 - \delta/\theta) \tfrac{(q-1)^2}{q(q-2)}
\end{IEEEeqnarray}
\begin{lemma} \label{f12lem} Let $q >2$. \begin{enumerate}
\item $ f_1(\delta) \geq f_2( \delta)$ with equality only at $\delta = \tfrac{2q-3}{q(q-1)}$.
\item $f_2(\delta)$ is the tangent line to $f_1(\delta)$ at $\delta = \tfrac{2q-3}{q(q-1)}$.
\item $\text{sign}(f_1(\delta)- 2/q) =\text{sign}(f_2(\delta)- 2/q) = \text{sign}(\delta -\tfrac{2q-3}{q(q-1)})$.
\end{enumerate}
\end{lemma}
\bep Let $f_3(\delta):=1 - \tfrac{q-1}{q-2} \sqrt{1 - \delta/\theta}$. We observe that
\[ \text{sign}(f_3(\delta)) = \text{sign}(\delta -\tfrac{2q-3}{q(q-1)} ).\]
The three assertions to be proved follow respectively from the following three relations:
\begin{eqnarray*}
f_1(\delta) - f_2(\delta) &=& (1 - 2/q) \,f_3(\delta)^2, \\
f_1'(\delta) - f_2'(\delta)&=&f_3(\delta)/\sqrt{1 - \delta/\theta},\\
\tfrac{ f_1(\delta) - 2/q}{2(1 - 2/q)} = \tfrac{ f_2(\delta) - 2/q}{(1-2/q) + \theta \sqrt{1 - \delta/\theta}}&=& f_3(\delta) .
\end{eqnarray*}
\eep
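The three relations used in this proof can be double-checked numerically. A small Python sketch (our own verification) tests the first and third identities on a grid of $\delta$ and several alphabet sizes:

```python
import math

def f1(d, q):
    th = 1 - 1 / q
    return 2 * th * (1 - math.sqrt(1 - d / th))

def f2(d, q):
    th = 1 - 1 / q
    return 1 - (1 - d / th) * (q - 1) ** 2 / (q * (q - 2))

def f3(d, q):
    th = 1 - 1 / q
    return 1 - (q - 1) / (q - 2) * math.sqrt(1 - d / th)

for q in (3, 4, 7, 11):
    th = 1 - 1 / q
    for i in range(1, 20):
        d = i / 20 * th
        # f1 - f2 = (1 - 2/q) f3^2
        assert math.isclose(f1(d, q) - f2(d, q),
                            (1 - 2 / q) * f3(d, q) ** 2, abs_tol=1e-12)
        # (f1 - 2/q) / (2 (1 - 2/q)) = f3
        assert math.isclose((f1(d, q) - 2 / q) / (2 * (1 - 2 / q)),
                            f3(d, q), abs_tol=1e-12)
```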
\begin{proposition} \label{EPprop} \[ \omega_{\text{max}}(\delta) = \begin{cases} 2 \theta(1 - \sqrt{1 - \delta/\theta}) &\text{ if } \delta \in [0,\tfrac{2q-3}{q(q-1)}] \\ 1 - (1 - \delta/\theta) \tfrac{(q-1)^2}{q(q-2)} &\text{ if } \delta \in [\tfrac{2q-3}{q(q-1)},1]. \end{cases}\]
The function $\omega_{\text{max}}(\delta)$ is increasing, continuously differentiable, and $\cup$-convex on $[0, \theta]$.
\end{proposition}
\bep The inequality \eqref{eq:thm4} reduces to
\[ \omega \leq \begin{cases} f_1(\delta) &\text{ if $\rho=\omega/2$}\\
f_2(\delta) &\text{ if $\rho=(1-\omega)/(q-2)$}, \end{cases} \]
where $\rho$ is as given in \eqref{eq:rho}. Therefore, for a given $\delta \in [0,\theta]$, the quantity $\omega_{\text{max}}(\delta)$ is the maximum element of the set
\[ \{ \omega: \delta \leq \omega \leq \text{min}\{f_1(\delta), 2/q\} \} \cup \{ \omega: \text{max}\{\delta, 2/q \} \leq \omega \leq f_2(\delta) \}. \]
If $\delta \geq \tfrac{2q-3}{q(q-1)}$, then $f_2(\delta) \geq 2/q$ (by Lemma \ref{f12lem}) and hence, the maximum of this set is $f_2(\delta)$.
If $\delta \leq \tfrac{2q-3}{q(q-1)}$, then $f_2(\delta) \leq f_1(\delta) \leq 2/q$ (by Lemma \ref{f12lem}) and hence, the maximum of this set is $f_1(\delta)$. This proves the asserted formula for $\omega_{\text{max}}(\delta)$.\\
We note that the second component of $\omega_{\text{max}}(\delta)$ is the tangent line to the first component at $\delta = \tfrac{2q-3}{q(q-1)}$. Therefore $\omega_{\text{max}}(\delta)$ is continuously differentiable. The derivative of $\omega_{\text{max}}(\delta)$ is $1/ \sqrt{1 - \delta/\theta}$ for $\delta \leq \tfrac{2q-3}{q(q-1)}$, and constant at $\tfrac{q-1}{q-2}$ for $\delta \geq
\tfrac{2q-3}{q(q-1)}$. Since the derivative is positive, the function is increasing. Since the derivative is non-decreasing, we see that the function is $\cup$-convex.
\eep
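As a sanity check on Proposition \ref{EPprop} (our own numerical sketch, not part of the proof): the two branches of $\omega_{\text{max}}$ meet tangentially at $\delta = \tfrac{2q-3}{q(q-1)}$ with common slope $\tfrac{q-1}{q-2}$, and the function is increasing on $[0,\theta]$:

```python
import math

def omega_max(d, q):
    # omega_max(delta) of Proposition EPprop (q > 2)
    th = 1 - 1 / q
    dc = (2 * q - 3) / (q * (q - 1))
    if d <= dc:
        return 2 * th * (1 - math.sqrt(1 - d / th))
    return 1 - (1 - d / th) * (q - 1) ** 2 / (q * (q - 2))

q = 5
th = 1 - 1 / q
dc = (2 * q - 3) / (q * (q - 1))
eps = 1e-7
left = (omega_max(dc, q) - omega_max(dc - eps, q)) / eps
right = (omega_max(dc + eps, q) - omega_max(dc, q)) / eps
# one-sided slopes agree with the common value (q-1)/(q-2) at the breakpoint
assert math.isclose(left, (q - 1) / (q - 2), rel_tol=1e-4)
assert math.isclose(right, (q - 1) / (q - 2), rel_tol=1e-4)
# omega_max is increasing on [0, theta]
vals = [omega_max(i / 200 * th, q) for i in range(201)]
assert all(a < b for a, b in zip(vals, vals[1:]))
```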
\emph{Proof of $\alpha_{EP}$ being an upper bound}: We note from Lemma \ref{f12lem} that \[ \text{sign}(\omega_{\text{max}}(\delta) - 2/q) = \text{sign}(\delta - \tfrac{2q-3}{q(q-1)}).\]
Therefore $\alpha_{HS}(\omega_{\text{max}}(\delta))$ is just the function $\alpha_{EP}(\delta)$ defined in Theorem \ref{EPthm}. The bound $\alpha(x) \leq \alpha_{EP}(x)$ now follows from \eqref{eq:EPmax}. \\
\emph{Proof of $\alpha_{EP}$ being continuously differentiable}: The function $\alpha_{EP}(x) = \alpha_{HS}(\omega_{\text{max}}(x))$ being a composition of continuously differentiable functions, is itself continuously differentiable. \\
\emph{Proof of $\alpha_{EP}$ being $\cup$-convex}:
Both the functions $\alpha_{HS}$ and $\omega_{\text{max}}$ are $\cup$-convex, but $\alpha_{HS}$ is decreasing and hence it is not obvious that $\alpha_{EP}(x) = \alpha_{HS}(\omega_{\text{max}}(x))$ is $\cup$-convex. We will show instead that the derivative $\alpha_{EP}'$ is non-decreasing. Since $\alpha_{EP}'$ is constant for $x \geq \tfrac{2q-3}{q(q-1)}$, it suffices to show that $\alpha_E''(x) >0$ for $x \in (0, \tfrac{2q-3}{q(q-1)}]$. This follows from the next lemma.
\begin{lemma} \label{Econv} The Elias bound $\alpha_E(x)$ is $\cup$-convex on $[0,\delta_E]$ and $\cap$-convex on $[\delta_E,\theta]$ where $\delta_E$ satisfies:
\[ \tfrac{2q-3}{q(q-1)} < \delta_E < \tfrac{3}{4} (\tfrac{q - 4/3}{q-1}).\]
\end{lemma}
\bep Let $Z(x) = \theta(1 - \sqrt{1 - x/\theta})$. A calculation shows that
\[ 4 \theta \ln(q) (1 - \tfrac{Z(x)}{\theta})^3 \alpha_E''(x) = \varphi(Z(x)), \text{ where}\]
\[ \varphi(z) = \int_{\tfrac{1-\theta}{1-z}}^{\tfrac{\theta}{z}} (1 - 1/t) dt.\]
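Since the integrand $1-1/t$ has antiderivative $t - \ln t$, the function $\varphi$ admits the explicit closed form (recorded here for the reader's convenience; the evaluations of $\varphi(1/q)$ and $\varphi(1/2)$ later in this proof follow directly from it):
\[ \varphi(z) = \tfrac{\theta}{z} - \tfrac{1-\theta}{1-z} + \ln\Big( \tfrac{z(1-\theta)}{\theta(1-z)} \Big). \]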
To see this, we note that $\alpha_E(x) = 1 - H_q(Z(x))$ and hence
\[ \ln(q) \alpha_E''(x) = \tfrac{Z'^2}{Z(1-Z)} - Z''\, \ln(\tfrac{(q-1)(1-Z)}{Z}).\]
Since $Z' = 1/(2 (1 - Z/\theta))$ and $Z'' = Z'/(2 \theta (1 - Z/\theta)^2)$, we get
\[ 4 \theta \ln(q) (1 - \tfrac{Z(x)}{\theta})^3 \alpha_E''(x) =\int_{\tfrac{1-\theta}{1-Z(x)}}^{\tfrac{\theta}{Z(x)}} (1 - 1/t) dt,\]
as desired. It follows that $\text{sign}(\alpha_E''(x)) = \text{sign}(\varphi(Z(x)))$. Next we note that $Z(x)$ is increasing on $[0,\theta]$ and
\[ Z(\tfrac{2q-3}{q(q-1)}) = 1/q, \quad Z(\tfrac{3}{4} (\tfrac{q - 4/3}{q-1})) = 1/2.\]
It now suffices to show that
\[ \text{sign}(\varphi(z)) = \text{sign}(z_E - z), \text{ for some } z_E \in (\tfrac{1}{q},\tfrac{1}{2}).\]
We note that
\[ \varphi'(z) = (z - \tfrac{1}{2}) \, \tfrac{2(\theta-z)}{z^2(1-z)^2}.\]
Thus $\varphi(z)$ is decreasing on $[0,1/2]$ and increasing on $[1/2,\theta]$.
In order to show sign$(\varphi(z)) = \text{sign}(z_E - z)$ for some $z_E \in (1/q,1/2)$, it suffices to show that $\varphi(1/q) >0$ and $\varphi(1/2) <0$. We calculate
\[ \tfrac{1}{2} \varphi(1/q) = (\tfrac{q-1}{2} -1 -\ln(\tfrac{q-1}{2}) ) + (\tfrac{2q-3}{2q-2} - \ln(2) ).\]
Since $q \geq 3$, we have $\tfrac{q-1}{2} \geq 1$. The inequality \eqref{eq:log} implies that the first parenthetical term above is non-negative.
Again $q \geq 3$ implies \[\tfrac{2q-3}{2q-2} - \ln(2) \geq \tfrac{3}{4}-\ln(2) >0,\] and hence the second parenthetical term is positive. Thus $\varphi(1/q) >0$.
Next, we note that $\varphi(1/2) = 2 - 4/q - \ln(q-1)$. The function $a(t) = 2 - 4/t-\ln(t-1)$ satisfies
\[ a'(t) = - \tfrac{(t-2)^2}{t^2(t-1)},\]
and $a(3) = 2/3 - \ln(2) <0$. Therefore $a(t) <0$ for $t \geq 3$, and hence $\varphi(1/2) <0$ for all $q \geq 3$. \\
\eep
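As a quick numerical sanity check of the sign pattern established in Lemma \ref{Econv} (our own verification, complementing but not replacing the proof; $\varphi$ is evaluated via the antiderivative $t-\ln t$, and the grid size is an arbitrary choice):

```python
import numpy as np

def phi(z, q):
    # phi(z) = integral of (1 - 1/t) dt from (1-theta)/(1-z) to theta/z,
    # evaluated via the antiderivative t - ln(t), with theta = 1 - 1/q
    theta = 1 - 1/q
    return theta/z - (1 - theta)/(1 - z) + np.log(z*(1 - theta)/(theta*(1 - z)))

# sign(phi(z)) = sign(z_E - z) with z_E in (1/q, 1/2): positive at 1/q,
# negative at 1/2, and exactly one sign change in between
for q in [3, 5, 16]:
    assert phi(1/q, q) > 0
    assert phi(0.5, q) < 0
    zs = np.linspace(1/q, 0.5, 1000)
    assert np.count_nonzero(np.diff(np.sign(phi(zs, q)))) == 1
```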
\emph{Proof that $\alpha_{EP}$ improves the Plotkin bound}:
We have already shown that $\alpha_{EP}(x)$ is $\cup$-convex, and hence $\alpha_{EP}(x)$ lies below the secant line between $x=0$ and $x = \theta$, which is the Plotkin bound.\\
\emph{Proof that $\alpha_{EP}$ improves the Elias bound}: This does not readily follow from our results thus far, and requires more work. The
characterization of $\alpha_{EP}(x)$ given in the next theorem clearly implies $\alpha_{EP}(x) \leq \alpha_E(x)$.
\begin{theorem} \label{EPchar}
$\alpha_{EP}(\delta) = \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{E}(x) (\theta - \delta)}{\theta-x}.$
\end{theorem}
\bep The theorem immediately follows if we show that $\tfrac{\alpha_E(x)}{\theta-x}$ is decreasing on $[0, \tfrac{2q-3}{q(q-1)}]$ and increasing on $[\tfrac{2q-3}{q(q-1)}, \theta]$.
We will use the notation from the proof of Lemma \ref{Econv}.
Since $\alpha_E(x)$ is $\cap$-convex for $x \geq \delta_E$, it follows that the slope $\alpha_E(x)/(\theta-x)$ of the secant between $x$ and $\theta$ is increasing.
It remains to show that $\tfrac{\alpha_E(x)}{\theta-x}$ is decreasing on $[0, \tfrac{2q-3}{q(q-1)}]$ and increasing on $[\tfrac{2q-3}{q(q-1)}, \delta_E]$.
Since $Z(x)$ is an increasing function, with $Z(0)=0$, $Z(\tfrac{2q-3}{q(q-1)})=1/q$, and $(1-x/\theta) = (1 - z/\theta)^2$ where $z = Z(x)$, it suffices to show that
\[ h(z) =\tfrac{1 - H_q(z)}{(\theta-z)^2}, \]
is decreasing on $[0, 1/q]$ and increasing on $[1/q, z_E]$ where $z_E = Z(\delta_E)$. A calculation shows that
\[ \ln(q) (\theta-z)^3 h'(z) = \int_{1/q}^{z} \varphi(t) dt.\]
To see this we note that either side of this equation evaluates to $(\theta+z) \ln( \frac{z}{(1-z)(q-1)}) + 2 \ln(q(1-z))$.
Since $\varphi(t) >0$ for $t\in (0,z_E)$, we see
\[ \text{sign}(h'(z)) = \text{sign}(z - 1/q),\; z \in [0,z_E].\]
Thus we have also shown that $h(z)$ is decreasing for $z \in [0,1/q]$ and increasing on $[1/q, z_E]$ as required.
\eep
The bounds $\alpha_{HS}$, $\alpha_{HP}$ and $\alpha_{EP}$ are related as
\[ \alpha_{EP}(\delta) \leq \alpha_{HP}(\delta) \leq \alpha_{HS}(\delta).\]
We have already shown $\alpha_{HP}(\delta) \leq \alpha_{HS}(\delta)$ in \eqref{eq:HSmin2}. Since $\alpha_E(x) \leq \alpha_H(x)$ for all $x$, we note that
\[ \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{E}(x) (\theta - \delta)}{\theta-x} \leq \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{H}(x) (\theta - \delta)}{\theta-x}.\]
Thus $\alpha_{EP}(\delta) \leq \alpha_{HP}(\delta)$. We end this section with a plot comparing $\alpha_E(x)$ and $\alpha_{EP}(x)$ for $q=16$.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{EP.pdf}
\caption{$\alpha_E(\delta)$ and $\alpha_{EP}(\delta)$ for $q = 16$.}
\label{fig_sim}
\end{figure}
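The comparison in the figure can be reproduced numerically. The following sketch is our own code (NumPy and the grid resolution are arbitrary choices, not part of the paper); it evaluates $\alpha_E$ directly and $\alpha_{EP}$ via the minimization in Theorem \ref{EPchar}:

```python
import numpy as np

q = 16
theta = 1 - 1/q

def Hq(z):
    # q-ary entropy function (logs base q), with Hq(0) = 0
    if z <= 0:
        return 0.0
    return (z*np.log(q - 1) - z*np.log(z) - (1 - z)*np.log(1 - z)) / np.log(q)

def Z(x):
    return theta*(1 - np.sqrt(1 - x/theta))

def alpha_E(x):
    # Elias bound: alpha_E(x) = 1 - Hq(Z(x))
    return 1 - Hq(Z(x))

def alpha_EP(delta, n=2000):
    # Theorem EPchar: alpha_EP(delta) = min over x in [0, delta] of
    #     alpha_E(x) (theta - delta)/(theta - x)
    xs = np.linspace(0, delta, n)
    return min(alpha_E(x)*(theta - delta)/(theta - x) for x in xs)

# taking x = delta in the minimum shows alpha_EP <= alpha_E;
# taking x = 0 shows alpha_EP lies below the Plotkin line (theta - delta)/theta
for d in np.linspace(0.05, theta - 0.05, 10):
    assert alpha_EP(d) <= alpha_E(d) + 1e-12
    assert alpha_EP(d) <= (theta - d)/theta + 1e-12
```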
\subsection{Another proof of Theorem \ref{HSthm}} \label{pf1'}
Another proof of $\alpha_{HS}(x)$ being an upper bound for $\alpha(x)$ can be given using the following theorem of Laihonen and Litsyn:
\begin{theorem*} \cite{LL} \label{ac_thm}
Let $\delta_1, \delta_2, \mu \in [0,1]$. Then
\beq \label{eq:ac_thm} \alpha((1-\mu) \delta_1 + \mu \delta_2) \leq (1-\mu) \alpha_{H}( \delta_1) + \mu \alpha(\delta_2). \eeq
\end{theorem*}
\bep We give a quick proof.
The result follows from the inequality:
\[ A_q(n_1+n_2,d_1+d_2) \leq \frac{q^{n_1} A_q(n_2,d_2)}{V_q(n_1,d_1/2)},\]
by taking $n_1+n_2 = n \to \infty$, and $n_1/n, n_2/n, d_1/n_1$ and $d_2/n_2$ going to $1- \mu, \mu, \delta_1$ and $\delta_2$ respectively.
The above inequality in turn comes from the Bassalygo-Elias lemma \eqref{eq:BE}
\[ A_q(n_1+n_2,d_1+d_2) \leq \frac{q^ {n_1+n_2} A_q(n_1+n_2,d_1+d_2;\mL)}{|\mL|},\] by taking $\mL = B(d_1/2;n_1) \times \mF^{n_2}$, and observing that $A_q(n_1+n_2,d_1+d_2;\mL) \leq A_q(n_2,d_2)$. (If $\mC$ is a $A_q(n_1+n_2,d_1+d_2;\mL)$ code, and $\pi_2: \mF^{n_1} \times \mF^{n_2} \to \mF^{n_2}$ is the projection on the second factor, then
the restriction of $\pi_2$ to $\mC$ is injective, and $\pi_2(\mC)$ has minimum distance at least $d_2$.)
\eep
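Spelling out the final limit in the proof above: using the standard volume asymptotics $\tfrac{1}{m}\log_q V_q(m,\rho m) \to H_q(\rho)$ for $0 \le \rho \le \theta$, and writing the asymptotic Hamming bound as $\alpha_H(x) = 1 - H_q(x/2)$, taking $\log_q$ and dividing by $n$ in the first inequality of the proof gives
\begin{align*}
\tfrac{1}{n}\log_q A_q(n_1+n_2,d_1+d_2) &\leq \tfrac{n_1}{n}\Big(1 - \tfrac{1}{n_1}\log_q V_q(n_1,\tfrac{d_1}{2})\Big) + \tfrac{n_2}{n}\cdot\tfrac{1}{n_2}\log_q A_q(n_2,d_2) \\
&\longrightarrow (1-\mu)\big(1 - H_q(\tfrac{\delta_1}{2})\big) + \mu\,\alpha(\delta_2),
\end{align*}
which is precisely the right hand side of \eqref{eq:ac_thm}.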
If we set $\delta_1=x$, $\delta_2=1$ and $\mu =(y - x)/(1 - x)$ in \eqref{eq:ac_thm}, we get:
\[ \frac{\alpha(y)}{1- y} \leq \frac{\alpha_{H}(x)}{1-x} \; \text{ for }\, x \leq y .\]
Thus,
\[ \alpha(\delta) \leq \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{H}(x) (1 - \delta)}{1-x} = \alpha_{HS}(\delta),\]
where we have used \eqref{eq:HSmin1}. \\
We now prove that the function $\alpha_{HP}(\delta)$ defined in \eqref{eq:HSmin2} is an upper bound for $\alpha(\delta)$.
Taking $\delta_2=\theta$, $\delta_1=x$, and $\mu = (y - \delta_1)/(\theta - \delta_1)$ in \eqref{eq:ac_thm}, we get:
\[ \frac{\alpha(y)}{\theta- y} \leq \frac{\alpha_{H}(x)}{\theta-x} \; \text{ for }\, x \leq y. \]
Thus
\beq \label{eq:HP} \alpha(\delta) \leq \min\limits_{x \in [0,\delta]} \tfrac{\alpha_{H}(x) (\theta - \delta)}{\theta-x} = \alpha_{HP}(\delta).\eeq
It is not known if the inequality \eqref{eq:ac_thm} (the theorem of Laihonen-Litsyn) holds if we replace $\alpha_H$ by $\alpha_E$. If such a result were true, then
the derivation of the bound $\alpha_{HP}(x)$ above with $\alpha_H$ replaced with $\alpha_E$ would immediately yield Theorem \ref{EPchar}.
We believe that such an inequality
\beq \label{eq:E_ineq} \alpha((1-\mu) \delta_1 + \mu \delta_2) \leq (1-\mu) \alpha_{E}( \delta_1) + \mu \alpha(\delta_2), \eeq
must be true (it would surely be true if $\alpha(x)$ is $\cup$-convex), but we believe it cannot be obtained just by a simple application of the Bassalygo-Elias lemma \eqref{eq:BE}.
If \eqref{eq:E_ineq} holds, we can obtain an upper bound which improves the Laihonen-Litsyn bound \cite{LL}. We recall that the Laihonen-Litsyn bound, which we denote $\alpha_{HMRRW}$, is a hybrid of the Hamming and MRRW bounds. It coincides with the Hamming bound for $\delta \in [0,a]$ and with the MRRW bound for $\delta \in [b,\theta]$, where $a<b$ are points such that the straight line joining $(a,\alpha_H(a))$ and $(b,\alpha_{MRRW}(b))$ is a common tangent to both $\alpha_H$ at $a$ and $\alpha_{MRRW}$ at $b$. Since the Hamming bound is good for small $\delta$ and the MRRW bound good for large $\delta$, the Laihonen-Litsyn bound combines the best features of both bounds into a single bound. To obtain this bound, we note that \eqref{eq:ac_thm} implies the inequality
\[ \alpha((1-\mu) \delta_1 + \mu \delta_2) \leq (1-\mu) \alpha_{H}( \delta_1) + \mu \alpha_{MRRW}(\delta_2).\]
We fix $\delta = (1-\mu) \delta_1+ \mu \delta_2$ and choose $\delta_1$ and $\delta_2$ optimally in order to minimize
the right hand side. This yields the $\alpha_{HMRRW}$ bound. Since the second MRRW bound $\alpha_{MRRW2}$ improves the first MRRW bound $\alpha_{MRRW}$, a better version $\alpha_{HMRRW2}$ of the Laihonen-Litsyn bound (see \cite[Theorem 2]{BL}) can be obtained by using $\alpha_{MRRW2}$ in place of $\alpha_{MRRW}$. Since the Elias bound $\alpha_E(\delta)$ is better than the Hamming bound $\alpha_H(\delta)$ for all $\delta$, in case \eqref{eq:E_ineq} is true, repeating this procedure with $\alpha_E$ replacing $\alpha_H$ would yield the hybrid Elias-MRRW bounds $\alpha_{EMRRW}(\delta), \alpha_{EMRRW2}(\delta)$, which would improve the respective Laihonen-Litsyn bounds $\alpha_{HMRRW}(\delta), \alpha_{HMRRW2}(\delta)$. We leave the question of the truth of \eqref{eq:E_ineq} open.
\section{On the convexity of $\alpha(x)$} \label{convx}
A fundamental open question about the function $\alpha(x)$ is whether it is $\cup$-convex. In other words is it true that
\beq \label{eq:conv} \alpha((1-t) x + t y ) \leq (1-t) \alpha(x) + t \alpha(y), \; t \in [0,1].\eeq
It is worth noting that non-convex upper bounds like the Elias bound and the MRRW bound admit corrections to the non-convex part: the bound $\alpha_{EP}(x)$ for the Elias bound and the Aaltonen straight-line bound (see the theorem below and the Appendix) for the MRRW bound. This may be viewed as some kind of evidence supporting the truth of \eqref{eq:conv}.
It is known that \eqref{eq:conv} holds for $x=0$ (for example by taking $\delta_1=0$ in \eqref{eq:ac_thm}). Another way to state this is that
\[ (1-\alpha(x))/x\, \text{ is decreasing on } [0, 1]. \]
As a consequence, if $\alpha_u(x)$ is any upper bound for $\alpha(x)$ we obtain a better upper bound
\beq \label{eq:Aalt1} \alpha(\delta) \leq \tilde\alpha_u(\delta) = 1 - \max_{x \in [\delta,\theta]} \frac{(1 - \alpha_u(x)) \delta}{x} \eeq
To see this we use:
\[ \tfrac{1-\alpha(\delta)}{\delta} \geq \tfrac{1-\alpha(x)}{x} \geq \tfrac{1-\alpha_u(x)}{x}, \text{for } x \in [\delta,\theta].\]
Thus $\tfrac{1-\alpha(\delta)}{\delta} \geq \max_{x \in [\delta,\theta]} \frac{(1 - \alpha_u(x))}{x}$ as desired.
If $(1-\alpha_u(x))/x$ is a decreasing function then the improved bound $\tilde \alpha_u(x)$ coincides with $\alpha_u(x)$, but otherwise $\tilde \alpha_u(x)$ improves $\alpha_u(x)$.
For example let $\alpha_u(x)$ be the first MRRW bound $\alpha_{MRRW}(x)=$
\[ H_q( ( \sqrt{\theta (1-x)} - \sqrt{ x (1-\theta) })^2), \, x \in [0,\theta]. \]
It can be shown that $(1-\alpha_{MRRW}(x))/x$ fails to be decreasing near $x=0$, and similarly $\alpha_{MRRW}(x)$ fails to be $\cup$-convex near $x=0$.
This is immediately rectified by passing to the improved bound $\tilde \alpha_{MRRW}(x)$, resulting in the following theorem of Aaltonen.
\begin{theorem*} (Aaltonen bound) \cite{Aalt} \cite[p.53]{Tsfasman} Let $q>2$.
$\alpha(x) \leq \tilde\alpha_{MRRW}(x)$ where
\beq \label{eq:MRRW1} \tilde\alpha_{MRRW}(x) = \begin{cases}
1 - \tfrac{x H_q(1)}{1 - 2/q} &\text{ if } x \in [0, (1 - \tfrac{2}{q})^2] \\
\alpha_{MRRW}(x) &\text{ if } x \in [ (1 - \tfrac{2}{q})^2, \theta]
\end{cases} \eeq
This bound is $\cup$-convex, continuously differentiable, and improves the MRRW bound.
\end{theorem*}
We note that for $x \leq (1 - 2/q)^2$ the bound $\tilde \alpha_{MRRW}(x)$ coincides with the tangent line to $\alpha_{MRRW}(x)$ at $(1 - 2/q)^2$. In particular
$\tilde \alpha_{MRRW}(x)$ is continuously differentiable. The assertion that $\tilde \alpha_{MRRW}(x)$ improves $\alpha_{MRRW}(x)$ follows from the fact that $\tilde \alpha_u(x) \leq \alpha_u(x)$ for any upper bound $\alpha_u(x)$ for $\alpha(x)$. The other assertions are proved in the appendix.\\
On the other hand, it is not known if the convexity condition \eqref{eq:conv} holds for $y= \theta$, in other words if $\alpha(x)/(\theta-x)$ is a decreasing function of $x$. We conjecture that this is true (see Conjecture \ref{conj1}). As evidence for this conjecture, we now show that the bounds $\alpha_{EP}$, $\alpha_{HP}$ and $\alpha_{HS}$ can be obtained without doing any work, if we assume the truth of Conjecture \ref{conj1}: if $\alpha_u(x)$ is any upper bound for $\alpha(x)$ we obtain a better upper bound
\beq \label{eq:imp1} \alpha(\delta) \leq \alpha_u^ \dagger(\delta):= \min_{x \in [0,\delta]} \frac{\alpha_u(x)( \theta - \delta)}{\theta - x} \eeq
To see this we use:
\[ \tfrac{\alpha(\delta)}{\theta - \delta} \leq \tfrac{\alpha(x)}{\theta-x} \leq \tfrac{\alpha_u(x)}{\theta-x}\; \text{ for } x \in [0,\delta].\]
Thus $\alpha(\delta) \leq \min_{x \in [0,\delta]} \frac{\alpha_u(x)( \theta - \delta)}{\theta - x} $ as desired.
In case $\tfrac{\alpha_u(x)}{\theta-x} $ is a decreasing function then the improved bound $\alpha_u^ \dagger(x)$ coincides with $\alpha_u(x)$, but otherwise $\alpha_u^ \dagger(x)$ improves $\alpha_u(x)$. Taking $\alpha_u(x)$ to be the Elias bound, we get $\alpha_u^\dagger(x)$ to be the bound $\alpha_{EP}$. This is the content of Theorem \ref{EPchar}.
Taking $\alpha_u(x)$ to be the Hamming bound, we get $\alpha_u^\dagger(x)$ to be the bound $\alpha_{HP}$. This is the content of \eqref{eq:HP}.
Moreover, if $\alpha(x)/(\theta-x)$ is decreasing then $\alpha(x)/(1-x)$ being the product of the non-negative decreasing functions $\alpha(x)/(\theta-x)$ and $(\theta-x)/(1-x)$
is itself decreasing. Thus we obtain $\alpha(\delta) \leq \min_{x \in [0,\delta]} \frac{\alpha_u(x)( 1 - \delta)}{1 - x}$. Taking $\alpha_u(x)$ to be the Hamming bound, the bound $\min_{x \in [0,\delta]} \frac{\alpha_u(x)( 1 - \delta)}{1 - x}$ is $\alpha_{HS}(\delta)$. This is the content of \eqref{eq:HSmin1}.
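The improvement operator \eqref{eq:imp1} is straightforward to evaluate numerically. The sketch below is our own code (the asymptotic Hamming bound is taken in its standard form $\alpha_H(x) = 1 - H_q(x/2)$, and the grid resolution is an arbitrary choice); it applies the operator to $\alpha_H$, producing $\alpha_{HP}$:

```python
import numpy as np

q = 4
theta = 1 - 1/q

def Hq(z):
    # q-ary entropy function (logs base q), with Hq(0) = 0
    if z <= 0:
        return 0.0
    return (z*np.log(q - 1) - z*np.log(z) - (1 - z)*np.log(1 - z)) / np.log(q)

def alpha_H(x):
    # asymptotic Hamming bound, standard form
    return 1 - Hq(x/2)

def dagger(alpha_u, delta, n=2000):
    # the improvement operator:
    # alpha_u^dagger(delta) = min over x in [0, delta] of
    #     alpha_u(x) (theta - delta)/(theta - x)
    xs = np.linspace(0, delta, n)
    return min(alpha_u(x)*(theta - delta)/(theta - x) for x in xs)

# alpha_u^dagger never exceeds alpha_u (take x = delta in the minimum)
for d in np.linspace(0.05, theta - 0.05, 10):
    assert dagger(alpha_H, d) <= alpha_H(d) + 1e-12
```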
\appendices \label{App1}
\section{Aaltonen's straight-line bound}
The bound $\tilde \alpha_{MRRW}$ presented above was obtained by Aaltonen in \cite[p.156]{Aalt}. The bound follows from \eqref{eq:Aalt1} and the following result
\beq \label{eq:MRRWmin} \text{argmax}_{x \in [\delta,\theta]} \tfrac{1 - \alpha_{MRRW}(x)}{x} = \text{max}\{\delta, (1 - 2/q)^2\}. \eeq
The argmax above is not straightforward to obtain, and, to quote from \cite{Aalt}, was found by mere chance. The derivation is not presented in \cite{Aalt}. The purpose of this appendix is to i) record a proof of \eqref{eq:MRRWmin}, and ii) prove that $\tilde \alpha_{MRRW}(x)$ is $\cup$-convex. The author thanks
Tero Laihonen for providing a copy of Aaltonen's work \cite{Aalt}, which is not easily available.\\
Let $\xi:[0, \theta] \to [0,\theta]$ be the function defined by $\xi(x) = ( \sqrt{\theta (1-x)} - \sqrt{ x (1-\theta) })^2$. We note that $\alpha_{MRRW}(x) = H_q(\xi(x))$, and that $\xi(x)$ decreases from $\theta$ to $0$ as $x$ runs from $0$ to $\theta$. It is easy to check that $\xi( \xi (x)) = x$ for $x \in [0,\theta]$.
Therefore we can invert the relation $y = \xi(x)$ as $x = \xi(y)$. We also note that $\xi((1 - 2/q)^2) = 1/q$. Therefore \eqref{eq:MRRWmin} is equivalent to the assertion (with $t = \xi(\delta)$):
\beq \label{eq:MRRWmin1} \text{argmax}_{y \in [0,t]} \tfrac{1 - H_q(y)}{\xi(y)} = \text{min}\{t, 1/q\}. \eeq
In terms of $h_A(y) := \tfrac{1 - H_q(y)}{\xi(y)}$ we must show
\[ \text{sign}(h'_A(y)) = \text{sign}(1/q-y), \quad y \in (0,\theta).\]
A calculation shows that:
\[ h_A'(y) \xi(y)^{3/2} \sqrt{\tfrac{y(1-y)}{\theta(1-\theta)}} \ln(q) = \sqrt{\tfrac{y}{1- \theta}} \, \ln(\tfrac{y}{\theta}) + \sqrt{\tfrac{1- y}{\theta}} \, \ln(\tfrac{1-y}{1-\theta}):= G(y) \]
Clearly $\text{sign}(h'_A(y)) = \text{sign}(G(y))$. Therefore, we must show that $\text{sign}(G(y))= \text{sign}(1/q-y)$ for $y \in (0,\theta)$. Clearly $G(1/q)=G(\theta)=0$. First we will prove that $G(y) >0$ on $[0,1/q)$. We calculate:
\[ -\sqrt{\theta(1-\theta)}\, G'(y) = \int^{ \sqrt{\tfrac{\theta}{y}}}_{ \sqrt{\tfrac{1-\theta}{1-y}}} \ln(t) dt = \int^{1}_{ \sqrt{\tfrac{1-\theta}{1-y}}} \ln(t) dt + \int^{ \sqrt{\tfrac{\theta}{y}}}_{1} \ln(t) dt.\]
We make the substitution $t= 1/\tau$ in the first integral to obtain:
\[ -\sqrt{\theta(1-\theta)}\, G'(y) = \int^{\sqrt{\tfrac{1-y}{1-\theta}}}_{1} \ln(t) (1 - \tfrac{1}{t^2}) dt +\int^{ \sqrt{\tfrac{\theta}{y}}}_{ \sqrt{\tfrac{1-y}{1-\theta}}} \ln(t) dt. \]
We note that $t \geq 1$ in both the integrals, and hence both the integrands are non-negative. Consequently, the first integral is positive, and the second integral is also positive
when $\sqrt{\theta/y} > \sqrt{(1-y)/(1-\theta)}$. For $y \in [0,\theta]$,
this inequality is equivalent to $(\theta-y)(1-\theta-y) > 0$ which in turn is equivalent to $y < 1-\theta$ i.e. $y \in [0,1/q)$. Thus, for $y \in (0,1/q)$, we have shown that $G'(y)< 0$.
Since $G(0) = \ln(q) \sqrt{q/(q-1)} >0$ and $G(1/q)=0$, the fact that $G(y)$ is strictly decreasing on $[0,1/q]$ implies $G(y) >0$ on $[0,1/q)$. \\
Next we prove $G(y) < 0$ on $(1/q,\theta)$. Differentiating the expression for $G'(y)$ we get:
\[ 4 \sqrt{\theta(1- \theta)} \,G''(y) =
\ln (\tfrac{\theta}{y}) \tfrac{\sqrt{\theta}}{y^{3/2}} + \ln (\tfrac{1-\theta}{1-y}) \tfrac{\sqrt{1-\theta}}{(1-y)^{3/2}}.
\]
Differentiating once more, we get:
\[ 4 \sqrt{\theta(1- \theta)} \, G'''(y) =
\tfrac{\sqrt{1-\theta}}{(1-y)^{5/2}} (1+ \tfrac{3}{2} \ln(\tfrac{1-\theta}{1-y}))
- \tfrac{\sqrt{\theta}}{y^{5/2}}(1+ \tfrac{3}{2} \ln (\tfrac{\theta}{y}))
\]
The second term $- \tfrac{\sqrt{\theta}}{y^{5/2}}(1+ \tfrac{3}{2} \ln (\tfrac{\theta}{y}))$
is negative on $[1/q, \theta)$ because $\theta/y >1$ on this interval. The first term
$ \tfrac{\sqrt{1-\theta}}{(1-y)^{5/2}} (1+ \tfrac{3}{2} \ln(\tfrac{1-\theta}{1-y}))$ has the same sign as $y -(1 - q^{-1}e^{2/3})$ for $y \in [1/q, \theta)$. It follows that $G'''(y)< 0$ for $y \in [1/q, 1 - q^{-1}e^{2/3}]$. (We note that $1/q < 1 - q^{-1}e^{2/3}$ holds precisely when $q > 1 + e^{2/3}$, and hence for $q \geq 3$, which is the case here.)
For $y \in (1 - q^{-1}e^{2/3}, \theta)$, as above $\tfrac{\sqrt{1-\theta}}{(1-y)^{5/2}} (1+ \tfrac{3}{2} \ln(\tfrac{1-\theta}{1-y}))$ is positive. It is also an increasing function of $y$, because $(1-\theta)/(1-y)$ increases with $y$. For $y \in [1 - q^{-1}e^{2/3}, \theta]$, we note that $\theta/y$ decreases with $y$ and $\theta/y \geq1$. Therefore the term $- \tfrac{\sqrt{\theta}}{y^{5/2}}(1+ \tfrac{3}{2} \ln (\tfrac{\theta}{y}))$ increases with $y$.
Thus $G'''(y)$ is an increasing function of $y$ for $y \in [1 - q^{-1}e^{2/3}, \theta]$.
We note the boundary conditions on $G'''(y)$: we have $G'''(1/q) <0 < G'''(\theta)$. To see this we note that
\[ -4 (\theta(1-\theta))^3 G'''(1/q) = \tfrac{3}{2} \ln(q-1) ( \theta^3+(1-\theta)^3) + ( \theta^3- (1-\theta)^3) >0 \]
because $q > 2$ is equivalent to $\theta > 1- \theta$ as well as $\ln(q-1) >0$. Also
\[ G'''(\theta)= \tfrac{2 \theta-1}{4 (\theta(1- \theta))^{5/2}} > 0.\]
Since $G'''(1 - q^{-1}e^{2/3})<0 < G'''(\theta)$ and $G'''(y)$ is increasing on
$[1 - q^{-1}e^{2/3}, \theta]$, we conclude that
there is a unique $y_0$ in the interior of this interval such that $G'''(y)$ has the same sign as $y-y_0$ on this interval. Together with the fact that $G'''(y) <0$ on $[1/q, 1 - q^{-1}e^{2/3}]$, we obtain:
\[ \text{sign}(G'''(y)) = \text{sign}(y-y_0) \quad \text{ on } \; [1/q,\theta].\]
This is illustrated in Figure \ref{fig:Gfunc}, which shows the graphs of $G(y)$ (dashed plot) and $G'''(y)$ on $[1/q, \theta]$ for $q=8$. The point $(y,G'''(y))$ for $y=1 - q^{-1}e^{2/3}$ is marked. (In this plot, the values of $G'''(y)$ are indicated on the right-vertical axis, and the values of $G(y)$ are indicated on the left-vertical axis).
Thus $G''(y)$ is decreasing on $[1/q, y_0]$ and increasing on $[y_0,\theta]$.
Since $G''(\theta)=0$, it follows that $G''(y) <0$ on $[y_0,\theta)$. We note that
\[ G''(1/q) = \tfrac{\ln(q-1) (2 \theta-1)}{4 (\theta(1- \theta))^2} > 0.\]
Thus $G''(1/q) >0 > G''(y_0)$ together with the fact that $G''(y)$ is decreasing on $[1/q,y_0]$ implies that there is a unique $y_1$ in the interior of this interval such that $G''(y)$ has the same sign as $y_1-y$ on this interval. We have already shown that $G''(y) < 0$ on $[y_0,\theta]$. Thus we conclude
\[ \text{sign}(G''(y)) = \text{sign}(y_1-y) \quad \text{ on } \; [1/q,\theta).\]
This implies $G'(y)$ is increasing on $[1/q,y_1]$ and decreasing on $[y_1, \theta]$.
Since $G'(\theta)=0$, we conclude that $G'(y) >0$ on $[y_1, \theta)$. We note that
\[ G'(1/q)= \tfrac{-1}{\sqrt{\theta(1-\theta)}} \int^{\sqrt{q-1}}_{1} \ln(t) (1 - \tfrac{1}{t^2}) dt <0 .\]
Since $G'(y)$ is increasing on $[1/q,y_1]$ and $G'(1/q) <0 < G'(y_1)$, we conclude that there is a unique $y_2$ in the interior of the interval $[1/q,y_1]$ such that $G'(y)$ has the same sign as $y-y_2$ on this interval. Also $G'(y) >0$ on $[y_1, \theta]$. Thus we conclude:
\[ \text{sign}(G'(y)) = \text{sign}(y-y_2) \quad \text{ on } \; [1/q,\theta).\]
This implies that $G(y)$ is decreasing on $[1/q,y_2]$ and increasing on $[y_2, \theta]$.
Since $G(1/q) = G(\theta)=0$, we see that $G(y)$ is negative on $(1/q,y_2]$ as well as $[y_2, \theta)$. This finishes the proof of the assertion $G(y)<0$ on $(1/q,\theta)$,
and hence of \eqref{eq:MRRWmin1}.
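A numerical spot-check of the sign pattern just established (our own verification, not part of the proof; the exclusion window around $y=1/q$ and the endpoint margins are arbitrary numerical safeguards):

```python
import numpy as np

q = 8
theta = 1 - 1/q

def G(y):
    # G(y) from the proof; we must have sign(G(y)) = sign(1/q - y) on (0, theta)
    return (np.sqrt(y/(1 - theta))*np.log(y/theta)
            + np.sqrt((1 - y)/theta)*np.log((1 - y)/(1 - theta)))

for y in np.linspace(1e-4, theta - 1e-2, 3000):
    if abs(y - 1/q) > 1e-3:  # skip a small window around the zero at y = 1/q
        assert np.sign(G(y)) == np.sign(1/q - y)
```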
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Gfunc.pdf}
\caption{Graphs of $G(y)$ and $G'''(y)$ on $[1/q,\theta]$ for $q = 8$.}
\label{fig:Gfunc}
\end{figure}
Next, we prove the $\cup$-convexity of $\tilde \alpha_{MRRW}(x)$. We must show that the derivative $\tilde \alpha_{MRRW}'(x)$ is non-decreasing. Since the derivative is constant on $[0,(1 - 2/q)^2]$, the problem reduces to showing that $\alpha_{MRRW}(x)$ is $\cup$-convex for $x \in [(1 - 2/q)^2,\theta]$. This follows from the next lemma:
\begin{lemma} \label{MRRWconv} The first MRRW bound $\alpha_{MRRW}(\delta)$ is
$\cup$-convex if $q=2$. For $q>2$, it is $\cap$-convex on $[0,\delta_{MRRW}]$ and $\cup$-convex on $[\delta_{MRRW},\theta]$ where $\delta_{MRRW}$ satisfies:
\[ \tfrac{1}{2} - \tfrac{\sqrt{q-1}}{q} < \delta_{MRRW} < (1-\tfrac{2}{q})^2.\]
\end{lemma}
\bep Let $y = \xi(x)$. Let
\[ \chi(y) = 1-2y+(2 \theta-1) \sqrt{\tfrac{y(1-y)}{\theta(1-\theta)}} \]
We calculate:
\beq \label{eq:A1}
\tfrac{y'^2}{2y'' y(1-y)} = \sqrt{\tfrac{x(1-x)}{\theta(1-\theta)}} = \chi(y)
\eeq
Since $\sqrt{x(1-x)/(\theta(1-\theta))}$ is non-negative, we also observe that $\chi(y)>0$ for all $y \in [0,\theta)$.
Since $\alpha_{MRRW}(x) =H_q(y)$, we get:
\[ \ln(q) \alpha_{MRRW}''(x) = \tfrac{-y'^2}{y(1-y)} + y''\, \ln \tfrac{(q-1)(1-y)}{y}. \]
Since $\xi(\xi(x))=x$, we get $y' = \xi'(x) = 1/\xi'(y)$. Using this we get:
\[ \alpha_{MRRW}''(x) (\xi'(y))^2 y(1-y) \ln(q) = -1 + \tfrac{2 y'' y(1-y)}{y'^2} \, \ln \sqrt{\tfrac{(q-1)(1-y)}{y}}. \]
Using \eqref{eq:A1}, we obtain:
\beq \label{eq:A2}
\alpha_{MRRW}''(x) (\xi'(y))^2 y(1-y) \ln(q) =
-1+ \tfrac{\ln \sqrt{\tfrac{(1-y)(q-1)}{y}}} {\chi(y)}.
\eeq
Let $y \in (0,\theta)$. We recall that $\chi(y)> 0$ for $y \in (0,\theta)$. Thus $\alpha_{MRRW}''(x)$ has the same sign as
\[ G_2(y) := \ln (\sqrt{\tfrac{(1-y)(q-1)}{y}}) - \chi(y). \]
We calculate:
\[G_2'(y) y(1-y) = \chi(y)(y-\tfrac{1}{2}).\]
Therefore, sign$(G_2'(y)) = \text{sign}(y-1/2)$. In other words $G_2(y)$ is decreasing on $[0,1/2]$ and increasing on $[1/2,\theta)$.
We note $G_2(1/q) = \ln(q-1) - 2(1 - \tfrac{2}{q})$.
The function
\[t \mapsto \ln(t-1) - 2(1 - 2/t),\] evaluates to $0$ at $t=2$, and is an increasing function of $t$ for $t\geq 2$ (because its derivative $(1-2/t)^2/(t-1)$ is positive). Thus $G_2(1/q)>0$ for $q>2$ and $G_2(1/q)=0$ for $q=2$. Since $G_2(0) = + \infty$ and $G_2(y)$ is decreasing on $[0,1/q]$, we conclude that $G_2(y) >0$ on $[0,1/q]$ if $q>2$. If $q=2$, then $G_2(y) \geq 0$ on $[0,1/q]=[0, \theta]$. In particular, for $q=2$ the bound $\alpha_{MRRW}(x)$ is $\cup$-convex on $[0, \theta]$.\\
For $q>2$, we note that $G_2(1/2) = \tfrac{1}{2}( \ln(q-1) - \tfrac{q-2}{\sqrt{q-1}})$. The function $b(t) = \tfrac{1}{2}( \ln(t-1) - \tfrac{t-2}{\sqrt{t-1}})$ satisfies $b(3) = \tfrac{1}{2}( \ln(2) - \tfrac{2}{\sqrt{2}})<0$ and $b'(t) = \tfrac{2 \sqrt{t-1} - t}{4 (t-1)^{3/2}} <0$ for $t \geq 3$. Thus $G_2(1/2) < 0$ for all $q>2$.
Since $G_2(1/q) >0$ and $G_2(1/2) <0$ and $G_2(y)$ is decreasing on $[1/q,1/2]$, we conclude that there is a $y_{MRRW} \in (1/q,1/2)$ such that sign$(G_2(y)) = \text{sign}(y_{MRRW} - y)$ for $y \in [0,1/2]$. Also $G_2(\theta) = 0$, $G_2(1/2)<0$ and $G_2(y)$ is increasing on $[1/2,\theta]$, which shows that $G_2(y) <0$ on $[1/2,\theta)$.
Thus sign$(G_2(y)) =\text{sign} (y_{MRRW} - y)$ for $y \in (0,\theta)$. Since $\alpha_{MRRW}''(x)$ has the same sign as $G_2(y)$ (where $y=\xi(x)$), we finally obtain sign$(\alpha_{MRRW}''(x))= \text{sign}(x -\delta_{MRRW})$ for $ x \in (0,\theta)$, where $\delta_{MRRW} = \xi(y_{MRRW})$ satisfies $\xi(1/2) < \delta_{MRRW} < \xi(1/q)$, or in other words: $\tfrac{1}{2} - \tfrac{\sqrt{q-1}}{q} < \delta_{MRRW} < (1-\tfrac{2}{q})^2$. This completes the proof of the lemma.
\eep
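As with $G$ in the preceding proof, the sign pattern of $G_2$ can be spot-checked numerically (our own verification, not part of the proof; the grid size is an arbitrary choice):

```python
import numpy as np

def G2(y, q):
    # G2(y) from the proof of the lemma; for q > 2 it should be positive
    # at y = 1/q, negative at y = 1/2, with one sign change in between
    theta = 1 - 1/q
    chi = 1 - 2*y + (2*theta - 1)*np.sqrt(y*(1 - y)/(theta*(1 - theta)))
    return 0.5*np.log((1 - y)*(q - 1)/y) - chi

for q in [3, 5, 16]:
    assert G2(1/q, q) > 0
    assert G2(0.5, q) < 0
    ys = np.linspace(1/q, 0.5, 1000)
    assert np.count_nonzero(np.diff(np.sign(G2(ys, q)))) == 1
```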
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent times, a number of materials for which quasiparticle excitations behave like relativistic two-dimensional fermions have appeared in condensed matter. One of the most fascinating examples of such materials is single-layer graphene, which consists of a single layer of graphite. Graphene exhibits many interesting features, among which are the anomalous quantum Hall effect \cite{Zhang2005Exp}, a record-high Young's modulus \cite{Lee2008Ela}, ultrahigh electron mobility \cite{Bolotin2008}, as well as very high thermal conductivity \cite{Faugeras2010a}. The band structure of single-layer graphene has two inequivalent and degenerate valleys, $\vec{K}$ and $\vec{K}'$, at opposite corners of the Brillouin zone. The possibility to manipulate the valley to store and carry information defines the field of ``valleytronics'', in an analogous way to the role played by spin in spintronics.
\par
One of the most exciting aspects of the physics of single-layer graphene is that several phenomena unobservable in high-energy physics experiments may be observed, such as Klein tunneling \cite{katsnelson2006chiral} and ``Zitterbewegung'' \cite{KATSNELSON20073}. From the point of view of quantum field theory, graphene exhibits features similar to quantum electrodynamics in three dimensions (QED$_{2+1}$)\footnote{Since the electrostatic potential between two electrons on a plane is the usual $1/r$ Coulomb potential instead of a logarithmic potential, which is distinctive of quantum electrodynamics in $2+1$ dimensions (QED$_{2+1}$),
the theory that describes graphene at low energies is known as \textit{reduced quantum electrodynamics} (RQED$_{4,3}$) \cite{Marino1993a,Gorbar2001,Teber2012a}. In RQED$_{4,3}$, the fermions are confined to a plane; nevertheless, the electromagnetic interaction between them is three-dimensional.} in such a way that it is possible to explore sophisticated aspects of three-dimensional quantum field theory; for instance, magnetic catalysis, symmetry breaking, dynamical mass generation, and anomalies, among others. This is possible, in the first place, because the two valleys can be associated with two irreducible representations of the Clifford algebra in three dimensions. In the second place, because in the low-energy regime the quasiparticles behave like massless relativistic fermions, with the speed of light replaced by the Fermi velocity $v_F$, which is about 300 times smaller than the speed of light.
\par
The presence of a mass gap may turn single-layer graphene from a semimetal into a semiconductor. This can be accomplished, for example, when a single-layer graphene sheet is placed on a hexagonal boron nitride substrate \cite{Giovannetti2007,Hunt2013},
or deposited on a SiO$_2$ surface \cite{Shemella2009}. Additionally, it has been proposed that a bandgap can be induced by vacuum fluctuations \cite{Kibis2011}. Significantly, a mass gap suppresses the Klein tunneling so that this fact could be useful in the design of devices based on single-layer graphene \cite{Navarro2018}.
\par
The phenomenon known as \textit{magnetic catalysis} appears when a dynamical symmetry breaking occurs in the presence of an external magnetic field, independent of its intensity \cite{Gusynin1994,Shovkovy2013}. Since in QED$_{2+1}$ the mass term breaks the chiral symmetry in a reducible representation\footnote{Because the chiral symmetry cannot be defined for irreducible representations, it does not make sense to talk about chiral symmetry breaking.}, magnetic catalysis arises as $m\rightarrow 0$. Dynamical symmetry breaking is a consequence of the appearance of a nonvanishing chiral condensate $\langle 0|T[\Psi,\bar{\Psi}]|0\rangle$, which leads to the generation of a fermion dynamical mass \cite{Gusynin1994}. In particular, when single-layer graphene is subjected to an external magnetic field, a nonvanishing chiral condensate ensures that there will be a dynamical chiral symmetry breaking \cite{Gusynin1994}, as well as a dynamical mass that is equal for each valley \cite{Farakos1998b,Raya2010}.
\par
When a sample of single-layer graphene presents strains, ripples, or curvature, the dispersion relation is modified in such a way that an effective gauge vector field coupling is induced in the low-energy Dirac spectrum (the so-called \textit{pseudomagnetic field} \cite{Guinea2010a}). The mechanical control over the electronic structure of graphene has been explored as a potential approach to \textit{``strain engineering''} \cite{Levy2010,Zhu2015a}. Originally, it was observed that strain produces a strong gauge field that effectively acts as a uniform pseudomagnetic field whose intensity is greater than $10$ T \cite{Guinea2010}; later, a pseudomagnetic field greater than $300$ T was experimentally reported \cite{Levy2010}. This pseudomagnetic field opens the door to previously inaccessible high magnetic field regimes.
\par
In contrast to the case of a real external magnetic field, the pseudomagnetic fields experienced by the particles in the valleys $\vec{K}$ and $\vec{K}'$ have opposite signs. Hence, when a sample of strained single-layer graphene is placed in a perpendicular magnetic field, the energy levels suffer a different separation for each valley, which results in an induced valley polarization \cite{Tony2010}. The latter is precisely the key requirement for valleytronic devices. Beyond theoretical calculations, the presence of Landau levels in graphene has been experimentally observed in external magnetic fields \cite{Jiang2007a}, strain-induced pseudomagnetic fields \cite{Levy2010,Yan2012a} and in the coexistence of pseudomagnetic fields and external magnetic fields \cite{Li2015a}. Moreover, the effects of the combination of an external magnetic field and a strain-induced pseudomagnetic field in different configurations were studied in order to construct a valley filter \cite{Feng2010a,PChaves2010a,Fujita2010a,Feng2011a}.
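To make the valley asymmetry quantitative, we recall the relativistic Landau-level spectrum of graphene (a standard result; here $B_s$ denotes the strain-induced pseudomagnetic field, our notation, and the sign in front of $B_s$ distinguishes the two valleys):
\[ E_n^{\vec{K},\vec{K}'} = \text{sgn}(n)\, v_F \sqrt{2 e \hbar\, |B \pm B_s|\, |n|}, \]
so that the level spacing differs between the two valleys whenever both $B$ and $B_s$ are nonzero.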
\par
In the first part of this paper, we study how an interplay between real and pseudomagnetic fields affects the symmetry breaking and the dynamical mass generation. As we will see, the presence of these two fields produces not only a breaking of chiral symmetry but also parity and time-reversal symmetry breaking. Furthermore, we will show that there will be a dynamical mass generation of two types, the usual mass ($m\bar{\psi}\psi$) and another known as Haldane mass \cite{Haldane1988a}, unlike QED$_{2+1}$ where only the usual mass term is dynamically generated. As a result of this, the dynamical fermion masses will be different for each valley. In this paper, we will use a non-perturbative method based on the quantized solutions of the Dirac equation, the so-called \textit{Furry picture}.
The reason for using this method is to obtain nonperturbative results, since the effective coupling constant in graphene is of order unity, $\alpha \approx 2.5$, raising serious questions about the validity of a perturbative expansion in graphene \cite{Kolomeisk2015a}. The latter has, however, been a matter of controversy, since at low energy the effective fine-structure constant was experimentally observed to approach $1/7$ \cite{Reed2010}.
\par
In the second part, we investigate how the presence of real and pseudomagnetic fields affects the magnetization. In conventional metals, the magnetism receives contributions from the spin (Pauli paramagnetism) and from the orbitals (Landau diamagnetism). In particular, the orbital magnetization of graphene in a magnetic field has shown a non-linear behavior as a function of the applied field \cite{Slizovskiy2012}. In order to examine how the magnetization and the susceptibility behave for each valley in the presence of constant magnetic and pseudomagnetic fields, we will first obtain the one-loop effective action. The one-loop effective action without strain in $2+1$ dimensions has been previously calculated within Schwinger's proper-time formalism \cite{Redlich1984a,Andersen1995a,Dittrich1997} and using the fermion propagator expanded over Landau levels \cite{Ayala2010c}. Taking into account that in static background fields the one-loop effective action is proportional to the vacuum energy \cite{Weinberg1996}, which can be calculated directly in the Furry picture, we use this method to study the most general case. As we will show, the presence of magnetic and pseudomagnetic fields allows us to manipulate the magnetization and the susceptibility of each valley independently.
\par
Finally, we study the parity anomaly and the induced vacuum charge in strained single-layer graphene. In quantum field theory, if a classical symmetry is not conserved at the quantum level, the theory is said to suffer from an anomaly. For instance, in QED$_{2+1}$, if one maintains a gauge-invariant regularization in all the calculations, with an odd number of fermion species, the parity symmetry is not preserved by quantum corrections, \textit{i.e.} there is a parity anomaly \cite{Semenoff1983a,Semenoff1984a,Redlich1984a}. In this way, the quantum correction to the vacuum expectation value of the current can be computed to characterize the parity anomaly. As was pointed out by Semenoff \cite{Semenoff1984a}, external magnetic fields induce a current ($j_{\mu}$) of abnormal parity in the vacuum for each fermion species. Unfortunately, for an even number of fermion species, the total current cancels, $J_+^{\mu}=j_{1}^{\mu}+j_{2}^{\mu}=0$, and therefore the induced vacuum current is not directly observable. It is thus possible to maintain the gauge and parity symmetries even at the quantum level \cite{Redlich1984a}. In the literature, a number of scenarios have been proposed to realize the parity anomaly in $2+1$ dimensions. Haldane introduced a condensed-matter lattice model in which the parity anomaly takes place when the parameters reach critical values \cite{Haldane1988a}. Obispo and Hott \cite{Obispo2014, Obispo2015} showed that graphene coupled to an axial-vector gauge potential exhibits a parity anomaly and fermion charge fractionalization. Zhang and Qiu \cite{Zhang2017a} showed that in a graphene-like system with a finite bare mass, a parity-anomaly-related $\rho$-exciton can be generated by absorbing a specific photon.
Alternatively, as Semenoff remarked \cite{Semenoff1984a}, one could consider an ``unphysical'' field with abnormal parity coupled to the fermions, since for such a field the total induced vacuum current would be different from zero and hence observable. As we will show, this field is in fact physical: it is just a pseudomagnetic field with a simple uniform field profile.
\par
This paper is organized as follows: In section \ref{secDiracHamiltonian}, we introduce the Dirac Hamiltonian and the symmetries of single-layer graphene with a finite mass gap. In section \ref{secFurry}, we present the Furry picture for fermions in the presence of real and pseudomagnetic fields. In section \ref{secCondesateM}, we compute the magnetic condensate and discuss how it characterizes the symmetry breaking and its connection with the dynamical mass of each valley. In section \ref{oneloopeamagnetization}, the one-loop effective action and the magnetization are calculated for each valley. In section \ref{secIndChar}, we calculate the total induced vacuum charge density to show that graphene in the presence of a pseudomagnetic field exhibits a parity anomaly. In appendix \ref{ap:ConsRPMF}, we obtain the exact solution of the Dirac equation for uniform real and pseudomagnetic fields. In appendix \ref{ap:MagConApen}, we compare the calculation of the fermionic condensate in the Furry picture with the method via the fermion propagator, and prove that the trace of the fermion propagator evaluated at equal space-time points must be understood as the expectation value of the commutator of two field operators. Finally, section \ref{conclutionsfinal} contains our conclusions.
\section{Dirac Hamiltonian for graphene}\label{secDiracHamiltonian}
In a vicinity of the Fermi points, the Dirac Hamiltonian in the presence of real ($A$) and pseudo ($a$) magnetic potentials reads ($\hbar=v_F=1$) \cite{Herbu2008a,Kim2011a}\footnote{The electric charge $e$ is multiplying the term $a_i$ only for dimensional reasons; strictly speaking, one could write $eA_i=\tilde{A}_i$ to emphasize that it is independent of $e$.}
\begin{eqnarray}\label{DHFourC}
H_D[A,a]=\Gamma^0\Gamma^i(p_i+eA_i+iea_i)+\Gamma^0 m,
\end{eqnarray}
where $m$ is a mass gap, $a_i=a_i^{35}\Gamma^{35}$, and $\Gamma^{35}=i\Gamma^{3}\Gamma^{5}$\footnote{It turns out that one can identify $a_i^{35}$ as one component of a non-Abelian $SU(2)$ gauge field within the low-energy theory of graphene \cite{Roy2011,Gopalakrishnan2012a}. The other two components of this non-Abelian $SU(2)$ gauge field are proportional to $\Gamma^3$ and $\Gamma^5$, since they are off-diagonal in the valley index, mixing the two inequivalent valleys \cite{Roy2011,Gopalakrishnan2012a}. In this case, the pseudo-gauge potential is $a_i=a_i^3\Gamma^3 + a_i^5\Gamma^5 +a_i^{35}\Gamma^{35}$. Assuming a smooth enough deformation of the graphene sheet, one can keep only the component $a_i^{35}$, which does not mix the two inequivalent valleys \cite{Roy2011}. Hence, Eq. (\ref{DHFourC}) captures the physics of low-energy strained graphene.}. This $4\times 4$ Hamiltonian acts on the four-component ``spinor'' $\psi^T=(\psi^K_A,\psi^K_B,\psi^{K'}_A,\psi^{K'}_B)$, whose components take into account both the two \textit{valleys} ($\vec{K}$ and $\vec{K}'$) and the two \textit{sublattices} (A and B) \cite{Katsnelson2012book}; the quantum number associated with the two sublattices is usually referred to as \textit{pseudospin}. If one wants to include the real spin, the spinor will have eight components, and the Dirac Hamiltonian will be $H_{D(8\times 8)}=I_2\otimes H_D[A,a]$ \cite{Roy2011}. For the subsequent calculations, it is sufficient to consider $H_D[A,a]$, given that including the real spin only increases the degeneracy of the Landau levels by a factor of two ($g_s=2$). Since the difference between QED$_{2+1}$ and RQED$_{4,3}$ lies in the kinetic term of the gauge fields, and since the magnetic field is treated here as an external field while the pseudomagnetic field is non-dynamical, Eq. (\ref{DHFourC}) is an appropriate description of strained graphene in the presence of an external magnetic field.
\par
As a matter of convenience, we choose here the $\Gamma-$ matrices as \cite{Gomes2009a}
\begin{align}\nonumber
\Gamma^0&=\sigma^3\otimes \sigma^3=\left(\begin{array}{cc}
\sigma^3 & 0 \\
0 & -\sigma^3
\end{array}\right), \\\nonumber
\Gamma^1&=\sigma^3\otimes i\sigma^1=\left(\begin{array}{cc}
i\sigma^1 & 0 \\
0 & -i\sigma^1
\end{array}\right),
\\\nonumber
\Gamma^2&=\sigma^3\otimes i\sigma^2=\left(\begin{array}{cc}
i\sigma^2 & 0 \\
0 & -i\sigma^2
\end{array}\right),\\\nonumber
\Gamma^3&=i\sigma^1\otimes I_{2\times 2}=\left(\begin{array}{cc}
0 & \ iI \\
iI & \ 0
\end{array}\right),
\\\nonumber
\Gamma^5&=-\sigma^2\otimes I_{2\times 2}=\left(\begin{array}{cc}
0 & \ iI \\
-iI & \ 0
\end{array}\right),
\\
\Gamma^{35}&=i\sigma^3\otimes I_{2\times 2}=\left(\begin{array}{cc}
iI & \ 0 \\
0 & \ -iI
\end{array}\right),
\end{align}
so that $(\Gamma^{3})^2=-1$, $(\Gamma^{5})^2=1$, $\Gamma^{3}$ and $\Gamma^{5}$ anticommute with $\Gamma^{\mu}$, while $\Gamma^{35}$ commutes with $\Gamma^{\mu}$ and anticommutes with $\Gamma^{3}$ and $\Gamma^{5}$. Note that the $\Gamma^{\mu}$ ($\mu=0,1,2$) are block-diagonal, where each block is one of two inequivalent irreducible representations of the Clifford algebra in $2+1$ dimensions. In odd dimensions, there are two inequivalent irreducible representations of the Dirac matrices, which we denote $\mathcal{R}_1$ and $\mathcal{R}_2$; we choose them as $\gamma^{\mu}$ and $-\gamma^{\mu}$, respectively, where
\begin{equation}
\gamma^{0}=\sigma^3, \ \ \gamma^{1}=i\sigma^1,\ \
\gamma^{2}=
i\sigma^2.
\end{equation}
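The algebraic properties quoted above (the squares of $\Gamma^3$ and $\Gamma^5$, the (anti)commutation relations of $\Gamma^{35}$, and the block-diagonal, valley-preserving structure of $\Gamma^0\Gamma^\mu$) can be checked mechanically. The following numpy sketch, which is not part of the derivation and uses Python only as an independent consistency check, builds the representation chosen above and verifies them.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The reducible 4x4 representation used in the text
G0 = np.kron(s3, s3)
G1 = np.kron(s3, 1j * s1)
G2 = np.kron(s3, 1j * s2)
G3 = 1j * np.kron(s1, np.eye(2))
G5 = -np.kron(s2, np.eye(2))
G35 = 1j * G3 @ G5

com = lambda A, B: A @ B - B @ A
acom = lambda A, B: A @ B + B @ A

assert np.allclose(G3 @ G3, -np.eye(4))        # (Gamma^3)^2 = -1
assert np.allclose(G5 @ G5, np.eye(4))         # (Gamma^5)^2 = +1
for Gmu in (G0, G1, G2):
    assert np.allclose(acom(G3, Gmu), 0)       # Gamma^3 anticommutes with Gamma^mu
    assert np.allclose(acom(G5, Gmu), 0)       # Gamma^5 anticommutes with Gamma^mu
    assert np.allclose(com(G35, Gmu), 0)       # Gamma^35 commutes with Gamma^mu
assert np.allclose(acom(G35, G3), 0) and np.allclose(acom(G35, G5), 0)

# Gamma^0, Gamma^0 Gamma^i and Gamma^0 Gamma^i Gamma^35 are block diagonal:
# the Hamiltonian (and its pseudo-gauge coupling) does not mix the two valleys.
for M in (G0, G0 @ G1, G0 @ G2, G0 @ G1 @ G35, G0 @ G2 @ G35):
    assert np.allclose(M[:2, 2:], 0) and np.allclose(M[2:, :2], 0)
print("all Clifford-algebra checks passed")
```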
Given that there is no intervalley coupling, we can rewrite the Dirac Hamiltonian as $H_D=H_+[A,a]\oplus H_-[A,a]$, thus
\begin{eqnarray}\label{DHtwovalleys}
H_{\pm}=i\sigma^3\sigma^i(p_i+eA_i\mp ea_i^{35})\pm\sigma^3 m,
\end{eqnarray}
where $H_+$ and $H_-$ represent the Hamiltonian near the valley $\vec{K}$ (representation $\mathcal{R}_1$) and $\vec{K}'$ (representation $\mathcal{R}_2$), respectively. $H_+$ acts on a two-component spinor that describes a fermion with pseudospin up and an antifermion with pseudospin down, while $H_-$ acts on a two-component spinor that describes a fermion with pseudospin down and an antifermion with pseudospin up. Thus, we obtain two decoupled Dirac equations in $(2+1)$ dimensions
\begin{eqnarray}\label{DiracEpm}
i\frac{\partial \psi(x,y,t)}{\partial t}=H_{\pm}\psi(x,y,t).
\end{eqnarray}
Finally, we can write the Lagrangian density for this system as the sum of two Lagrangian densities for each valley
\begin{eqnarray}\nonumber
\mathcal{L}=\overbrace{i\bar{\psi}^K\gamma^{\mu}D^+_{\mu}\psi^K-m\bar{\psi}^{K}\psi^{K}}^{\mathcal{L}_{+}}+
\overbrace{i\bar{\psi}^{K'}\gamma^{\mu}D^-_{\mu}\psi^{K'}+m\bar{\psi}^{K'}\psi^{K'}}^{\mathcal{L}_{-}},\\\label{DiracLagratwovalleys+-}
\end{eqnarray}
where $D^{\pm}_{i}=\partial_{i}+ieA_i\mp iea_i^{35}$, which can be interpreted as a system describing two species of two-component spinors, one with mass $+m$ and coupled to $eA_i-ea_i^{35}$ and the other with mass $-m$ and coupled to $eA_i+ea_i^{35}$. Hence, in the vicinity of the Fermi points, graphene monolayers constitute an ideal scenario to simulate the matter sector of QED$_{2+1}$. In the following, we will neglect the corrections due to the effects of Coulomb interactions between the charge carriers. However, we point out that the model given by Eq. (\ref{DHFourC}) is in good agreement with the experiments carried out in graphene in the presence of external magnetic fields \cite{Jiang2007a,novoselov2007rise}, pseudomagnetic fields \cite{Levy2010, Yan2012a}, and in the combination of magnetic and pseudomagnetic fields \cite{Li2015a}.
\subsection{Symmetries in the irreducible and reducible representations}\label{SymIrreRe}
\textbf{Irreducible representations:} For irreducible representations, it is possible to define the parity ($\mathcal{P}$), charge conjugation ($\mathcal{C}$), and time-reversal ($\mathcal{T}$) transformations as follows:
\begin{align}
\mathcal{P}\psi(t,x,y)\mathcal{P}^{-1}&=-i\gamma^1\psi(t,-x,y),\\
\mathcal{T}\psi(t,x,y)\mathcal{T}^{-1}&=-i\gamma^2\psi(-t,x,y),\\\label{Ccharge2x2}
\mathcal{C}\psi(t,x,y)\mathcal{C}^{-1}&=-\gamma^2(\bar{\psi}(t,x,y))^T.
\end{align}
Here $\mathcal{P}$ and $\mathcal{C}$ are unitary operators and $\mathcal{T}$ is an anti-linear operator \cite{Peskin1995}, \textit{i.e.} $\mathcal{T}(c-\text{number})\mathcal{T}^{-1}=(c-\text{number})^*$. One can check that the mass term in the Dirac Lagrangian is not invariant under $\mathcal{P}$ or $\mathcal{T}$. However, the combined transformation $\mathcal{PT}$ leaves the mass term invariant, so $\mathcal{CPT}$ is a symmetry of the Dirac Lagrangian \cite{Deser1982a}. Since the $\gamma^{\mu}$ are three $2\times 2$ matrices and no other matrix anticommutes with all of them, chiral symmetry cannot be defined for the irreducible representations.
\newline
\newline
\textbf{Reducible representation:} For a reducible representation, let us take the four-component spinor $\psi^T=(\psi^K,\psi^{K'})^T$. As has been pointed out in Refs. \cite{Gomes1991a,Gomes2009a}, because the free Lagrangian involves only three Dirac matrices, the parity, charge-conjugation and time-reversal transformations can each be implemented by more than one operator:
\begin{align}
\mathcal{P}\psi(t,\textbf{r})\mathcal{P}^{-1}&=P_j\psi(t,\textbf{r}'),\\\label{Time-reversalWigner}
\mathcal{T}\psi(t,\textbf{r})\mathcal{T}^{-1}&=T_j\psi(-t,\textbf{r}),\\
\mathcal{C}\psi(t,\textbf{r})\mathcal{C}^{-1}&=C_j(\bar{\psi}(t,\textbf{r}))^T,
\end{align}
where
\begin{align}\label{P1P2parity}
P_1&=-i\Gamma^1\Gamma^3, \ \ \ \ \ \ \ P_2=-\Gamma^1\Gamma^5,\\\label{T1T2Time}
T_1&=-\Gamma^2\Gamma^3, \ \ \ \ \ \ \ \ T_2=-i\Gamma^2\Gamma^5,\\\label{C1C2charge}
C_1&=-i\Gamma^0\Gamma^1, \ \ \ \ \ \ \ C_2=-\Gamma^2,
\end{align}
with $\textbf{r}=(x,y)$ and $\textbf{r}'=(-x,y)$.
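As a cross-check of the parity entries of Tab. \ref{bilinears}, note that the matrix part of a bilinear $\bar{\psi}M\psi$ transforms under $P_j$ as $M \rightarrow (\Gamma^0 P_j^{\dagger}\Gamma^0)\,M\,P_j$. The numpy sketch below is an illustrative verification (not part of the paper's derivation) that the rows $\bar{\psi}\Gamma^{\mu}\psi \rightarrow \bar{\psi}\tilde{\Gamma}^{\mu}\psi$ and $\bar{\psi}\Gamma^{\mu}\Gamma^{35}\psi \rightarrow -\bar{\psi}\tilde{\Gamma}^{\mu}\Gamma^{35}\psi$ hold for both realizations $P_1$ and $P_2$.

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

G0 = np.kron(s3, s3); G1 = np.kron(s3, 1j * s1); G2 = np.kron(s3, 1j * s2)
G3 = 1j * np.kron(s1, I2); G5 = -np.kron(s2, I2); G35 = 1j * G3 @ G5

P1 = -1j * G1 @ G3          # first realization of parity
P2 = -G1 @ G5               # second realization of parity

def bilinear_map(P, M):
    """Matrix part of bar(psi) M psi after psi(t,x,y) -> P psi(t,-x,y)."""
    return G0 @ P.conj().T @ G0 @ M @ P

Gmu = (G0, G1, G2)
Gmu_tilde = (G0, -G1, G2)   # \tilde{Gamma}^mu = {Gamma^0, -Gamma^1, Gamma^2}

for P in (P1, P2):
    for M, Mt in zip(Gmu, Gmu_tilde):
        # vector row: even under parity (up to the x-reflection)
        assert np.allclose(bilinear_map(P, M), Mt)
        # Gamma^mu Gamma^35 row: picks up an extra minus sign for both P_1, P_2
        assert np.allclose(bilinear_map(P, M @ G35), -Mt @ G35)
print("parity rows of the bilinear table verified for P1 and P2")
```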
\par
We present the transformation properties of some bilinears under $\mathcal{P}$, $\mathcal{C}$ and $\mathcal{T}$ in Tab. \ref{bilinears}. Using these properties, one can easily prove that the massive (or massless) Dirac Lagrangian in $2+1$ dimensions in the reducible representation is invariant under $\mathcal{P}$, $\mathcal{C}$ and $\mathcal{T}$, regardless of which transformation is used, \textit{i.e.} the Dirac Lagrangian is invariant under $P_1$ and $P_2$, $C_1$ and $C_2$, $T_1$ and $T_2$; with some restrictions, even a linear combination of these operators could be used \cite{Gomes1991a}. In our Lagrangian, the bilinears are $\bar{\psi} \psi$, $\bar{\psi} \Gamma^{\mu} \psi$ and $\bar{\psi} \Gamma^{\mu} \Gamma^{35} \psi$, whose transformation properties are independent of $j$. Therefore, any of the operators $P_j$, $C_j$ and $T_j$ can be used to implement $\mathcal{P}$, $\mathcal{C}$ and $\mathcal{T}$, respectively. Remarkably, the transformation properties of $\bar{\psi} \Gamma^{3} \psi$, $\bar{\psi} \Gamma^{5} \psi$, $\bar{\psi} \Gamma^{\mu} \Gamma^{3} \psi$ and $\bar{\psi} \Gamma^{\mu} \Gamma^{5} \psi$ do depend on $j$. As a result, we find two non-equivalent realizations of parity, charge conjugation and time reversal, which is unusual\footnote{In fact, there is an infinite number of non-equivalent realizations of $\mathcal{P}$, $\mathcal{C}$ and $\mathcal{T}$, since any linear combination of the two realizations found is an inequivalent realization.}. For example, the terms $\bar{\psi} \Gamma^{\mu} \Gamma^{3} \psi$ and $\bar{\psi} \Gamma^{\mu} \Gamma^{5} \psi$ appear when a non-Abelian $SU(2)$ gauge field is introduced in graphene \cite{Roy2011,Gopalakrishnan2012a}. In what follows, we will be interested only in the Lagrangian density (\ref{DiracLagratwovalleys+-}).
\newline
\def1.2{1.2}
\begin{table*}[ht]
\centering
\begin{tabular*}{0.715\textwidth}{|c|c|c|c|}
\hline
& $P_j$ & $T_j$ & $C_j$ \\ \hline
$\bar{\psi} \psi(t,\textbf{r})$ & $\bar{\psi} \psi(t,\textbf{r}')$ & $\bar{\psi} \psi (-t,\textbf{r})$ & $\bar{\psi} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{\mu} \psi(t,\textbf{r})$ & $\bar{\psi} \tilde{\Gamma}^{\mu} \psi(t,\textbf{r}')$ & $\bar{\psi} \bar{\Gamma}^{\mu} \psi (-t,\textbf{r})$ & $-\bar{\psi} \Gamma^{\mu} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{\mu} \Gamma^{35} \psi(t,\textbf{r})$ & $-\bar{\psi} \tilde{\Gamma}^{\mu} \Gamma^{35} \psi(t,\textbf{r}')$ & $-\bar{\psi} \bar{\Gamma}^{\mu} \Gamma^{35} \psi (-t,\textbf{r})$ & $\bar{\psi} \Gamma^{\mu} \Gamma^{35} \psi (t,\textbf{r})$ \\
$\bar{\psi} i \Gamma^{35} \psi(t,\textbf{r})$ & $-\bar{\psi} i \Gamma^{35} \psi(t,\textbf{r}')$ & $-\bar{\psi} i \Gamma^{35} \psi (-t,\textbf{r})$ & $\bar{\psi} i \Gamma^{35} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{3} \psi(t,\textbf{r})$ & $(-1)^j\bar{\psi} \Gamma^{3} \psi(t,\textbf{r}')$ & $(-1)^{j+1}\bar{\psi} \Gamma^{3}\psi (-t,\textbf{r})$ & $(-1)^{j+1}\bar{\psi} \Gamma^{3} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{5} \psi(t,\textbf{r})$ & $(-1)^{j+1}\bar{\psi} \Gamma^{5} \psi(t,\textbf{r}')$ & $(-1)^{j}\bar{\psi} \Gamma^{5}\psi (-t,\textbf{r})$ & $(-1)^{j}\bar{\psi} \Gamma^{5} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{\mu} \Gamma^{3} \psi(t,\textbf{r})$ & $(-1)^{j}\bar{\psi} \tilde{\Gamma}^{\mu} \Gamma^{3} \psi(t,\textbf{r}')$ & $(-1)^{j+1}\bar{\psi}\bar{\Gamma}^{\mu} \Gamma^{3} \psi (-t,\textbf{r})$ & $(-1)^{j+1}\bar{\psi} \Gamma^{\mu}\Gamma^{3} \psi (t,\textbf{r})$ \\
$\bar{\psi} \Gamma^{\mu} \Gamma^{5} \psi(t,\textbf{r})$ & $(-1)^{j+1}\bar{\psi} \tilde{\Gamma}^{\mu} \Gamma^{5} \psi(t,\textbf{r}')$ & $(-1)^{j}\bar{\psi} \bar{\Gamma}^{\mu} \Gamma^{5} \psi (-t,\textbf{r})$ & $(-1)^{j}\bar{\psi} \Gamma^{\mu}\Gamma^{5} \psi (t,\textbf{r})$
\\ \hline
\end{tabular*}
\caption{$\mathcal{P}$, $\mathcal{C}$ and $\mathcal{T}$ transformation properties of some bilinears, here
$\tilde{\Gamma}^{\mu}=\{\Gamma^{0},-\Gamma^{1} ,\Gamma^{2}\}$ and $\bar{\Gamma}^{\mu}=\{\Gamma^{0},-\Gamma^{i}\}$.}\label{bilinears}
\end{table*}
\par
It should be noted that in the literature one can find two different transformations that are called time reversal. One, $\hat{\mathcal{T}}$, acts as $\hat{\mathcal{T}}\psi(t,x,y)\hat{\mathcal{T}}^{-1}=\hat{T}\psi^{*}(-t,\textbf{r})$. The other, $\mathcal{T}$, is the one considered here, Eq. (\ref{Time-reversalWigner}), and is referred to as Wigner time reversal. The latter transformation is defined consistently with what has been done in four dimensions by Weinberg (Ch. 5 in \cite{Weinberg1996}) and by Peskin and Schroeder (Ch. 3 in \cite{Peskin1995}). Moreover, the one that enters the $\mathcal{CPT}$ theorem is $\mathcal{T}$ (for a detailed discussion, see Sec. 11.6 in \cite{Schwartz2014}).
\par
The transformation properties of the electromagnetic potential are \cite{Deser1982a}
\begin{align}
\mathcal{P}A_0(t,x,y)\mathcal{P}^{-1}&=A_0(t,-x,y),\nonumber\\
\mathcal{P}A_1(t,x,y)\mathcal{P}^{-1}&=-A_1(t,-x,y),\nonumber\\
\mathcal{P}A_2(t,x,y)\mathcal{P}^{-1}&=A_2(t,-x,y),\nonumber\\
\mathcal{T}A_0(t,x,y)\mathcal{T}^{-1}&=A_0(-t,x,y),\nonumber\\
\mathcal{T}\vec{A}(t,x,y)\mathcal{T}^{-1}&=-\vec{A}(-t,x,y),\nonumber\\
\mathcal{C}A_{\mu}(t,x,y)\mathcal{C}^{-1}&=-A_{\mu}(t,x,y),
\end{align}
which leave the Lagrangian invariant. For the pseudomagnetic potential we should have
\begin{align}
\mathcal{P}a_1^{35}(t,x,y)\mathcal{P}^{-1}&=a_1^{35}(t,-x,y),\nonumber\\
\mathcal{P}a_2^{35}(t,x,y)\mathcal{P}^{-1}&=-a_2^{35}(t,-x,y),\nonumber\\
\mathcal{T}\vec{a}^{35}(t,x,y)\mathcal{T}^{-1}&=\vec{a}^{35}(-t,x,y),\nonumber\\
\mathcal{C}\vec{a}^{35}(t,x,y)\mathcal{C}^{-1}&=\vec{a}^{35}(t,x,y),
\end{align}
so that the interaction term $\bar{\psi}\Gamma^i\Gamma^{35}a_i^{35}\psi$ in the Lagrangian is invariant.
\par
For the reducible representation, the transformation $\psi\rightarrow e^{i\alpha_{\mu}\tilde{\sigma}^{\mu}}\psi$ leaves the kinetic term invariant, where $\tilde{\sigma}^{\mu}=\sigma^{\mu}\otimes I_{2\times2}=\{I_{4\times 4}, -i\Gamma^3,-\Gamma^5,-i\Gamma^{35}\}$ are the generators of a global $U(2)$ symmetry, with $\sigma^0\equiv I_{2\times 2}$ and the $\alpha_{\mu}$ taken as constants. The mass term $m\bar{\psi}\psi$ breaks this global symmetry down to a $U(1)\times U(1)$ symmetry, whose generators are $I_{4\times 4}$ and $-i\Gamma^{35}$ \cite{Das1997}. Even when the mass vanishes, however, quantum corrections generate a vacuum expectation value of $\bar{\psi}\psi$ (to be precise, of $[\bar{\psi},\psi]/2$, see below), so that the symmetry is again broken down to $U(1)\times U(1)$.
\par
It should be noted that besides the usual mass term $m\bar{\psi}\psi$, there is a mass term $m_{\tau}\bar{\psi}\frac{[\Gamma^3,\Gamma^5]}{2}\psi=m_{\tau}\bar{\psi}i\Gamma^{35}\psi$ known as Haldane mass term \cite{Haldane1988a}, which is invariant under the $U(2)$ symmetry. However, this term breaks parity and time-reversal symmetries (see Tab. \ref{bilinears}).
\section{Furry picture}\label{secFurry}
In this section, we present the Furry picture based on the quantized solutions of the Dirac equation, generalizing what has been done in Refs. \cite{Dunne1996,Das1996} for the Dirac equation in a real magnetic field to the case of both real and pseudomagnetic fields. In static background gauge fields, the Dirac equation (\ref{DiracEpm}) can be rewritten as
\begin{equation}
\left( \begin{array}{cc}
E\mp m & -(D_1^{\pm}-iD_2^{\pm})\\
(D_1^{\pm}+iD_2^{\pm}) & E\pm m \\
\end{array} \right)\psi=0.
\end{equation}
There are two possible solutions depending on the threshold states ($|E|=\pm m$). The positive-energy solutions ($\psi^{(+)}$) are
\begin{eqnarray}\nonumber
\psi^{(+)}_{\pm,1}&=&e^{-i|E|t}\sqrt{\frac{|E|\pm m}{2|E|}}\left( \begin{array}{c}
f\\
-\frac{D_1^{\pm}+iD_2^{\pm}}{|E|\pm m}f\\
\end{array} \right), \ \ \ \text{or} \\ \label{PositiveMGfg}
\psi^{(+)}_{\pm,2}&=&e^{-i|E|t}\sqrt{\frac{|E|\mp m}{2|E|}}\left( \begin{array}{c}
\frac{D_1^{\pm}-iD_2^{\pm}}{|E|\mp m}g\\
g\\
\end{array} \right),
\end{eqnarray}
where $\psi^{(+)}_{+,i}$ refers to the positive-energy solution in the representation $\mathcal{R}_1$ and $\psi^{(+)}_{-,i}$ refers to the positive-energy solution in the representation $\mathcal{R}_2$. The negative-energy solutions ($\psi^{(-)}$) are
\begin{eqnarray}\nonumber
\psi^{(-)}_{\pm,1}&=&e^{+i|E|t}\sqrt{\frac{|E|\mp m}{2|E|}}\left( \begin{array}{c}
f\\
-\frac{D_1^{\pm}+iD_2^{\pm}}{|E|\mp m}f\\
\end{array} \right), \ \ \ \text{or} \\ \label{NegativeMGfg}
\psi^{(-)}_{\pm,2}&=&e^{+i|E|t}\sqrt{\frac{|E|\pm m}{2|E|}}\left( \begin{array}{c}
\frac{D_1^{\pm}-iD_2^{\pm}}{|E|\pm m}g\\
g\\
\end{array} \right),
\end{eqnarray}
where $f$ and $g$ are two functions such that
\begin{align}
-(D_1^{\pm}-iD_2^{\pm})(D_1^{\pm}+iD_2^{\pm})f&=(E^2-m^2)f,\\
-(D_1^{\pm}+iD_2^{\pm})(D_1^{\pm}-iD_2^{\pm})g&=(E^2-m^2)g.
\end{align}
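For the uniform fields considered later (Landau gauge $\vec{A}=(0,Bx)$, $\vec{a}^{35}=(0,bx)$), writing $g=e^{iky}h(x)$ and $\omega\equiv e(B-b)$ (a shorthand introduced only for this check), the second of these equations reduces to the shifted oscillator $-h''+(k+\omega x)^2h-\omega h=(E^2-m^2)h$, whose exact eigenvalues for $\omega>0$ are $E^2-m^2=2\omega n$. The following finite-difference sketch (Python, used purely as an independent numerical check, not part of the derivation) confirms this Landau spectrum.

```python
import numpy as np

# Landau spectrum of the reduced operator -h'' + (k + w*x)^2 h - w*h.
# Exact eigenvalues for w > 0: E^2 - m^2 = 2*w*n, n = 0, 1, 2, ...
w, k = 1.0, 0.0              # effective field strength e(B-b) and momentum k
N, L = 801, 12.0             # grid points and box half-width
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Standard three-point finite-difference Hamiltonian (tridiagonal)
main = 2.0 / dx**2 + (k + w * x)**2 - w
off = -np.ones(N - 1) / dx**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)[:4]
for n in range(4):
    assert abs(evals[n] - 2.0 * w * n) < 1e-2   # 0, 2w, 4w, 6w
print("Landau spacing E^2 - m^2 = 2*w*n reproduced:", np.round(evals, 4))
```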
Note that the threshold states $E=m$ and $E=-m$ must be specified separately. When $|E|=m$ is a positive (negative) energy solution, the negative (positive) energy threshold is excluded because of the factor $1/\sqrt{|E|-m}$ \cite{Dunne1996}. For example, for the valley $\vec{K}$, or equivalently the representation $\mathcal{R}_1$, the positive-energy threshold solutions for $m>0$ and $m<0$ are, respectively,
\begin{equation}\label{PositiveMGfg0}
\psi^{(+0)}_{+,1}=e^{-i|m|t}\left( \begin{array}{c}
f^{(0)}\\
0\\
\end{array} \right), \ \ \ \ \ \psi^{(+0)}_{+,2}=e^{-i|m|t}\left( \begin{array}{c}
0\\
g^{(0)}\\
\end{array} \right),
\end{equation}
where $f^{(0)}(x,y)$ satisfies the first-order threshold equation
\begin{equation}\label{thresholdequf}
(D_1^{+}+iD_2^{+})f^{(0)}=0,
\end{equation}
and $g^{(0)}(x,y)$ satisfies
\begin{equation}\label{thresholdequg}
(D_1^{+}-iD_2^{+})g^{(0)}=0.
\end{equation}
It turns out that if the solutions of (\ref{thresholdequf}) are normalizable, then the solutions of (\ref{thresholdequg}) are not, and vice versa \cite{Dunne1996,Aharonov1979a}. Now, in the absence of a pseudomagnetic field ($a^{35}_i=0$), one has $D=D^+=D^-$. Thus, if $\psi^{(+0)}_{+,1}$ $\left(\psi^{(+0)}_{+,2}\right)$ is a positive-energy solution for the valley $\vec{K}$, then the valley $\vec{K}'$ only has the negative-energy solution $\psi^{(-0)}_{-,2}$ $\left(\psi^{(-0)}_{-,1}\right)$. This leads to the well-known asymmetry in the spectrum of the states. Remarkably, this does not necessarily happen when there is a pseudomagnetic field, since $D^+\neq D^-$: if the solutions of (\ref{thresholdequf}) are normalizable, this does not imply that the solutions of $(D_1^{-}-iD_2^{-})g^{(0)}=0$ are not. Therefore, both valleys may have positive (or negative) energy states simultaneously. Appendix \ref{ap:ConsRPMF} illustrates this point in the case of constant real and pseudomagnetic fields.
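As a concrete illustration (anticipating the uniform-field gauge $\vec{A}=(0,Bx)$, $\vec{a}^{35}=(0,bx)$ used in Sec. \ref{secCondesateM}), the threshold equation (\ref{thresholdequf}) can be solved explicitly. Writing $f^{(0)}=e^{iky}h(x)$, it reduces to $h'(x)=[k+e(B-b)x]\,h(x)$, so that

```latex
\begin{equation*}
f^{(0)}\propto e^{iky}\,
\exp\!\left[\frac{e(B-b)}{2}\left(x+\frac{k}{e(B-b)}\right)^{2}\right],
\end{equation*}
```

which is normalizable only for $e(B-b)<0$; conversely, the solutions of (\ref{thresholdequg}) carry the opposite Gaussian and are normalizable only for $e(B-b)>0$. For the valley $\vec{K}'$ the analogous construction involves $e(B+b)$, so for $|b|>|B|$ the two valleys experience effective fields of opposite sign and can simultaneously support normalizable threshold modes.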
\par
One can calculate the vacuum condensate $\langle \bar{\psi}\psi \rangle$ (pairing between fermions and antifermions in the vacuum)
in $2+1$ dimensions by expanding out the fermion field in a complete orthonormal set of the positive- and negative-energy solutions ($i=\mathcal{R}_1,\mathcal{R}_2$)
\begin{eqnarray}\label{psicampo}
\Psi_i(\vec{x},t)=\SumInt_{n}
\SumInt_p
[a_{i,n,p}\psi^{(+)}_{i,n,p}+b^{\dag}_{i,n,p}\psi^{(-)}_{i,n,p}].
\end{eqnarray}
The solutions are labeled by two quantum numbers $(n,p)$, in which the label $n$ refers to the eigenvalue $E_n$, whilst the label $p$ distinguishes between degenerate states. In general, both $n$ and $p$ may take discrete and/or continuous values \cite{Dunne1996}. The $a_{i,n,p}$ and $b^{\dag}_{i,n,p}$ are the fermion annihilation operator and antifermion creation operator, respectively, which obey the anticommutation relations
\begin{eqnarray}\label{conmutationrela}
\{a_{i,n,p},a^{\dag}_{j,n',p'}\}=\{
b_{i,n,p},b^{\dag}_{j,n',p'}\}=\delta_{ij}\bar{\delta}_{nn'}\bar{\delta}_{pp'},
\end{eqnarray}
where $\bar{\delta}_{\alpha,\alpha'}$ is the Kronecker delta if $\alpha$ takes discrete values, or the Dirac delta if it takes continuous values. Using the anticommutation relations (\ref{conmutationrela}), the vacuum expectation value $\langle \bar{\Psi}_i\Psi_i \rangle$ can be written as
\begin{eqnarray}\nonumber
\langle 0| \bar{\Psi}_i(x)\Psi_i(x)|0 \rangle \equiv \text{tr} \langle \bar{\Psi}_{i,\alpha}\Psi_{i,\beta} \rangle =\SumInt_{n}
\SumInt_p \bar{\psi}^{(-)}_{i,n,p}(x) \psi^{(-)}_{i,n,p}(x),\\\label{psiAPsiAG}
\end{eqnarray}
\textit{i.e.}, the fermion condensate is a sum over the occupied negative-energy states; the trace is over the spinorial indices $\{\alpha,\beta\}$. Let us also write the condensate $\langle \Psi_i \bar{\Psi}_i \rangle$, which will be relevant in what follows, as
\begin{eqnarray}\nonumber
\langle 0| \Psi_i(x)\bar{\Psi}_i(x)|0 \rangle \equiv \text{tr} \langle \Psi_{i,\beta} \bar{\Psi}_{i,\alpha} \rangle=\SumInt_{n}
\SumInt_p \bar{\psi}^{(+)}_{i,n,p}(x) \psi^{(+)}_{i,n,p}(x),\\ \label{PsiAGpsiA}
\end{eqnarray}
\textit{i.e.}, this condensate is a sum over the occupied positive-energy states.
\par
In Appendix \ref{ap:MagConApen}, we compare the calculation of the fermionic condensate in the Furry picture with the one via the fermion propagator. We prove that the trace of the fermion propagator evaluated at equal space-time points must be understood as the expectation value of the commutator of two field operators. We thus adopt Schwinger's choice \cite{Schwinger1951}, which is equivalent to performing the coincidence limit symmetrically in the time coordinate \cite{Dittrich1985}, \textit{i.e.}
\begin{eqnarray}\nonumber
\frac{\langle 0|[\bar{\Psi}_i,\Psi_i]|0 \rangle}{2}&=&-\text{tr} S_F(x,x)\\\label{OrderConmutator}
&\equiv &
-\left(\text{tr} \lim_{\substack{x_0\rightarrow y_0^{+}\\ \mathbf{x}\rightarrow
\mathbf{y}}} S_F(x,y)-\text{tr} \lim_{\substack{x_0\rightarrow y_0^{-}\\ \mathbf{x}\rightarrow \mathbf{y}}} S_F(x,y)\right).
\end{eqnarray}
Moreover, we argue that it is this quantity, and not the fermionic condensate as commonly assumed in the literature, that must be taken as the order parameter for chiral (or parity) symmetry breaking.
\section{Fermion condensate}\label{secCondesateM}
Chiral symmetry breaking in ($2+1$)- and
($3+1$)-dimensional theories has been a subject of intense
scrutiny over the past two decades
\cite{Gusynin1994,Gusynin1995a,Gusynin1995b,Gusynin1995c,
Gusynin1996a,Gusynin1999d,Dunne1996,Dittrich1997,Cea1997,Cea1998,
Farakos1998a,Farakos1998b,Lasinio1999,Cea2000,Anguiano2005,Raya2010,
Cea2012a,Boyda2014,Ayala2006,Ferrer2010,Ayala2010,Khalilov2015}.
In the presence of a uniform magnetic field, the appearance of a
nonvanishing chiral condensate $\langle \bar{\Psi}\Psi \rangle
\neq 0$ in the limit $m\rightarrow0$ produces spontaneous chiral
symmetry breaking \cite{Gusynin1994,Gusynin1995a,Gusynin1995b}.
For example \cite{Gusynin1994,Gusynin1995c}, in the Nambu--Jona-Lasinio (NJL)
model, the spontaneous symmetry breaking occurs when the coupling constant exceeds some critical value,
\textit{i.e.} when $\lambda>\lambda_c$. With an external uniform
magnetic field, $\lambda_c\rightarrow0$ independently of the
intensity of the magnetic field $B$; in this sense, the magnetic field is a
strong \textit{catalyst} of chiral symmetry breaking (see Ref.
\cite{Shovkovy2013} for a review).
\par
The exact expression for a fermion propagator in an external
magnetic field in $3+1$ dimensions was found for the first time by
Schwinger using the proper-time formalism \cite{Schwinger1951}. For $2+1$ dimensions, the fermion
propagator was presented in the momentum representation by Gusynin
\textit{et al.} in \cite{Gusynin1994}. In \cite{Gusynin1994,Gusynin1995a,Gusynin1996a}, the vacuum condensate
was computed in the reducible representation (four-component spinor) using the expression of the fermion propagator in the presence of a uniform magnetic field. On the other hand, Das and Hott introduced an alternative derivation of the magnetic condensate using the Furry picture. This method has been used to calculate the magnetic vacuum condensate in $3+1$ dimensions \cite{Anguiano2007}, at finite temperature \cite{Das1996,Das1997,Cea1998,Cea2000}, as well as in the presence of parity-violating mass terms \cite{Anguiano2007}. In appendix \ref{ap:MagConFurryP}, we compute the vacuum condensate in the presence of an external magnetic field for the two irreducible representations and show that these two methods are consistent if we take the equal-time definition of the propagator introduced by Schwinger in \cite{Schwinger1951}. Furthermore, we discuss why the vacuum expectation value of the commutator of two field operators is the appropriate order parameter to describe chiral (or parity) symmetry breaking.
\par
In order to study the effect of strains, in the following we consider a sample of graphene in the presence of constant real ($B$) and pseudo ($b$) magnetic fields\footnote{We note that the pseudomagnetic field studied here is mathematically equivalent to a Dirac oscillator potential in $(2+1)$ dimensions \cite{Quimbay2013}. Consequently, a constant pseudomagnetic field can be seen as a physical realization of the two-dimensional Dirac oscillator.}. In this case, we choose $\vec{A}=(0,Bx)$ and $\vec{a}^{35}=(0,bx)$. The explicit solution can be found in Appendix \ref{ap:ConsRPMF}. In a similar way to that shown in Appendix \ref{ap:MagConFurryP}, we compute the vacuum expectation value of the commutator in the two irreducible representations for arbitrary values of $m$, $eB$ and $eb$, i.e.,
\begin{align}\nonumber
&\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{B,b}
=-\text{sgn}(m)\frac{|eB\pm eb|}{4\pi}-\frac{|eB\pm eb|}{2\pi}\sum_{n=1}^{\infty} \frac{m}{|E^{\pm}_n|}\\\label{BPsiPsiB}
&=\text{sgn}(m)\frac{|eB\pm eb|}{4\pi}-m\frac{\sqrt{2|eB\pm eb|}}{4\pi}\zeta\left(\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right),
\end{align}
with $|E_n^{\pm}|=\sqrt{m^2+2|eB\pm eb|n}$; here the $(-)$ sign refers to the representation $\mathcal{R}_1$ (valley $\vec{K}$), whereas $(+)$ refers to the representation $\mathcal{R}_2$ (valley $\vec{K}'$), and $\zeta(s,q)$ is the Hurwitz zeta function defined by
\begin{eqnarray}
\zeta(s,q)=\sum_{n=0}^{\infty}\frac{1}{(n+q)^{s}}.
\end{eqnarray}
The commutator can be rewritten in an integral representation as
\begin{eqnarray}\label{CondensataZeroDiv1}\nonumber
&&\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{B,b}\\
&&=-\frac{m}{4 \pi^{\frac{3}{2}}}\int_0^{\infty}dt e^{-m^2 t}t^{-\frac{1}{2}}|eB\pm eb|\coth (|eB\pm eb|t),
\end{eqnarray}
where we have used that
\begin{eqnarray}\nonumber
&&\int_0^{\infty}dt e^{-m^2 t}t^{-\frac{1}{2}}|\omega|\coth (|\omega|t)\\\label{integralrepre}
&&=(2|\omega|\pi)^{\frac{1}{2}}\left(\zeta\left(\frac{1}{2},\frac{m^2}{2|\omega|}\right)-\frac{|\omega|^{\frac{1}{2}}}{2^{\frac{1}{2}}|m|}\right),
\end{eqnarray}
which can be obtained after regularization with the $\epsilon$-integration technique \cite{Dittrich2000}.
Although Eq. (\ref{CondensataZeroDiv1}) is divergent, the divergences are already present for zero external field
\begin{align}\label{CondensataZeroDiv}
\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{0,0}
=-\frac{m}{4 \pi^{\frac{3}{2}}}\int_0^{\infty}dt e^{-m^2 t}t^{-\frac{3}{2}}.
\end{align}
Therefore, by subtracting out the vacuum part, a finite result is obtained \cite{Dittrich2000,Ayala2010,Farakos1998b}
\begin{align}\nonumber
\mu^{\pm}&=\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{B,b}-\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{0,0}\\\label{integralcconde}
&
=-\frac{m}{4 \pi^{\frac{3}{2}}}\int_0^{\infty}dt e^{-m^2 t}t^{-\frac{1}{2}}\left(|eB\pm eb|\coth (|eB\pm eb|t)-\frac{1}{t}\right).
\end{align}
\begin{figure}
\begin{minipage}{\columnwidth}
\centering
\includegraphics[width=8.1cm]{ccondensates.pdf}
\end{minipage}
\caption{(color online). The $c$-condensates as a function of the external magnetic field, $\mu^{-}/(m|m|)$ (dashed line) and $\mu^{+}/(m|m|)$ (continuous line), for $eb/m^2=5$. Since the scaling relation $\mu^{\pm}(eB\pm eb,m)/(m|m|)=\mu^{\pm}\left(\frac{eB\pm eb}{m^2},1\right)$ holds, changing the mass only widens the ``parabolic'' shape of the curves.}
\label{ccondensates}
\end{figure}
To evaluate Eq. (\ref{CondensataZeroDiv}) in a simple form, first note that although this integral is divergent, Eq. (\ref{BPsiPsiB}) has a finite limit as $|eB\pm eb|\rightarrow 0$ if we use the analytic continuation of the Hurwitz zeta function, so $\frac{1}{2}\langle[\bar{\Psi}_{\pm},\Psi_{\pm}] \rangle_{0,0}=\frac{m|m|}{2 \pi}$, which coincides with the regularization using the $\epsilon$-integration technique \cite{Dittrich2000}. Henceforth, we will refer to $\mu^{\pm}$ as the $c$-condensates, which in terms of the analytically continued Hurwitz zeta function are given by\footnote{For numerical calculations,
instead of utilizing the integral (\ref{integralcconde}), the $c$-condensates can be evaluated in a much more efficient way using the Hurwitz zeta functions.}
\begin{eqnarray}\nonumber
\mu^{\pm}
&=&-\text{sgn}(m)\frac{|eB\pm eb|}{4\pi}\\
&&-m\frac{\sqrt{2|eB\pm eb|}}{4\pi}\zeta\left(\frac{1}{2},1+\frac{m^2}{2|eB\pm eb|}\right)-|m|\frac{m}{2\pi}.
\end{eqnarray}
One can show that there is no critical value of the fields at which $\mu^{\pm}/m$ changes sign. Notably, if $B\neq0$ or $b\neq 0$, the values of $\mu^{\pm}$ for the two valleys are different, and when $|B|=|b|$ one of the two vanishes, as shown in Fig. \ref{ccondensates}. Finally, let us take the limit $m\rightarrow 0$
\begin{eqnarray}\label{ccondenm02x2}
\mu^{\pm}=-\text{sgn}(m)\frac{|eB\pm eb|}{4\pi}.
\end{eqnarray}
As we will see below, this is related to a breaking of chiral, parity and time-reversal symmetries.
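The equivalence between the subtracted proper-time integral (\ref{integralcconde}) and the Hurwitz-zeta closed form, as well as the limit (\ref{ccondenm02x2}), can be checked numerically. The following Python sketch is our own consistency check (it is not part of the derivation; the Euler--Maclaurin truncation order and the sample values $m=1$, $|eB\pm eb|=2$ are arbitrary choices):

```python
import math

def hurwitz_zeta(s, a, N=60):
    """Hurwitz zeta zeta(s, a) for s != 1, a > 0, via Euler-Maclaurin summation."""
    total = sum((a + n) ** (-s) for n in range(N))
    x = a + N
    total += x ** (1.0 - s) / (s - 1.0) + 0.5 * x ** (-s)
    total += s * x ** (-s - 1.0) / 12.0
    total -= s * (s + 1.0) * (s + 2.0) * x ** (-s - 3.0) / 720.0
    return total

def mu_closed(m, w):
    """c-condensate mu^{+-}, with w = |eB +- eb|, from the continued Hurwitz zeta."""
    q = m * m / (2.0 * w)
    return (-math.copysign(w / (4.0 * math.pi), m)
            - m * math.sqrt(2.0 * w) / (4.0 * math.pi) * hurwitz_zeta(0.5, 1.0 + q)
            - abs(m) * m / (2.0 * math.pi))

def mu_integral(m, w, steps=4000):
    """Same quantity from the subtracted proper-time integral (substitution t = x^2)."""
    def g(x):
        y = w * x * x
        if y < 1e-4:  # series of w*coth(w t) - 1/t; avoids catastrophic cancellation
            diff = w * w * x * x / 3.0 - w ** 4 * x ** 6 / 45.0
        else:
            diff = w / math.tanh(y) - 1.0 / (x * x)
        return 2.0 * math.exp(-m * m * x * x) * diff
    xmax = 8.0 / abs(m)       # the weight exp(-m^2 x^2) is negligible beyond this
    h = xmax / steps
    acc = g(0.0) + g(xmax)    # composite Simpson rule
    for i in range(1, steps):
        acc += (4.0 if i % 2 else 2.0) * g(i * h)
    return -m / (4.0 * math.pi ** 1.5) * acc * h / 3.0
```

For the sample values the two evaluations should agree to several decimal places, and for small $m$ both approach $-\,\text{sgn}(m)|eB\pm eb|/(4\pi)$, in accordance with Eq. (\ref{ccondenm02x2}).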
\subsection{Dynamical mass}
\begin{figure}
\begin{minipage}{\columnwidth}
\centering
\includegraphics[width=8.1cm]{dinamicalmass.pdf}
\end{minipage}
\caption{(color online) Dynamical masses in the constant-mass approximation versus external magnetic field, $m^{+}_{dyn}/(m\alpha)$ (continuous line) and $m^{-}_{dyn}/(m\alpha)$ (dashed line), for $eb/m^2=10$.}
\label{dinamicalmasses}
\end{figure}
Dynamical mass generation in QED$_{2+1}$ has been a subject of study in the past three decades \cite{Hoshino1989a,Gusynin1994,Shpagin1996,Farakos1998a,Farakos1998b,Raya2010,khalilov2019a}. As shown in Ref. \cite{Raya2010}, the dynamical mass for a two-component fermion in a uniform magnetic field, in the so-called constant-mass approximation, is
\begin{align}
m_{dyn}^{2+1}=2\alpha W\left(\frac{e^{-\gamma_E/2}\sqrt{2|eB|}}{2\alpha}\right),
\end{align}
with $\gamma_E$ the Euler constant, $\alpha=e^2/(4\pi)$ and $W(x)$ the Lambert $W$ function. In the latter formula, it is necessary that $|eB|\gg m_{dyn}^2$ for consistency.
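Since the Lambert $W$ function satisfies $W(x)e^{W(x)}=x$, the expression above is equivalent to the transcendental gap condition $m_{dyn}\,e^{m_{dyn}/(2\alpha)}=e^{-\gamma_E/2}\sqrt{2|eB|}$. The Python sketch below (our own illustration; the sample values $|eB|=100$ and $\alpha=0.1$ are arbitrary) evaluates $m_{dyn}^{2+1}$ with a hand-rolled Newton iteration for $W$ and checks both the gap condition and the consistency requirement $|eB|\gg m_{dyn}^2$:

```python
import math

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

def lambert_w(x, tol=1e-14):
    """Principal branch W(x) for x >= 0, via Newton iteration on f(w) = w e^w - x."""
    w = math.log1p(x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def m_dyn_2p1(eB, alpha):
    """Constant-mass-approximation dynamical mass in a uniform magnetic field."""
    x = math.exp(-GAMMA_E / 2.0) * math.sqrt(2.0 * abs(eB)) / (2.0 * alpha)
    return 2.0 * alpha * lambert_w(x)

m = m_dyn_2p1(100.0, 0.1)
# residual of the gap condition m * exp(m / (2 alpha)) = exp(-gamma_E/2) sqrt(2|eB|)
residual = m * math.exp(m / 0.2) - math.exp(-GAMMA_E / 2.0) * math.sqrt(200.0)
```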
For weak magnetic fields, the dynamical mass has a quadratic behavior in the magnetic field, $m_{dyn}^{2+1}=m_0+m_2(|eB|)^2+\dots$ \cite{Farakos2000a}. Furthermore, the radiative corrections to the
mass of a charged fermion occupying the lowest Landau level in RQED$_{4,3}$ were recently computed in the one-loop approximation in Refs. \cite{khalilov2019a,Machet2018a}. The associated equation reads
\begin{eqnarray}\nonumber
m_{dyn}^{RQED}=\frac{e^{2}}{4 \pi} \sqrt{|e B|}\left[\sqrt{2} \pi^{3 / 2} \operatorname{erfc}\left(\frac{1}{\sqrt{l}}\right)-\frac{10}{3 \sqrt{l}} \Gamma\left(0, \frac{1}{l}\right)\right],\\
\end{eqnarray}
with $l=\frac{|eB|}{m^2}$, where erfc$(z)$ and $\Gamma(a,z)$ are the complementary error function and the upper incomplete gamma function, respectively. Significantly, the dynamical mass does not vanish even in the limit of zero bare mass ($m\rightarrow 0$) \cite{Machet2018a,khalilov2019a}
\begin{equation}
m_{dyn}^{RQED}=\frac{e^{2}}{2 \sqrt{2}} \sqrt{\pi|e B|}.
\end{equation}
In graphene, while the photon propagates in $3$ spatial dimensions, the fermions are localized in $2$ spatial dimensions; because of this, RQED$_{4,3}$ is an appropriate model to describe the low-energy physics of this system. Nevertheless, as mentioned above, the coupling constant is large; hence, this perturbative result is not necessarily accurate \cite{Kolomeisk2015a,khalilov2019a}. In the following, for lack of a better estimate, we use this result to determine the dynamical mass in graphene.
\par
In general, the dynamical mass should be given by $m_{dyn}=g(|eB|)$, where $g$ is a general function. It is straightforward to extend this result to include the pseudomagnetic field. Hence, the dynamical mass would read as $m_{dyn}=g(|eB\pm eb|)$ and we thus obtain
\begin{eqnarray}\nonumber
m_{dyn}^{\pm}&=&\alpha \sqrt{|e B\pm eb|} \left[\sqrt{2} \pi^{3 / 2} \operatorname{erfc}\left(\frac{m}{\sqrt{|e B\pm eb|}}\right)\right.\\
&&\left.-\frac{10m}{3 \sqrt{|e B\pm eb|}} \Gamma\left(0, \frac{m^2}{|e B\pm eb|}\right)\right],
\end{eqnarray}
with $\alpha=e^2/(4\pi)$.
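A quick numerical evaluation of $m_{dyn}^{\pm}$ only requires the complementary error function (available in the Python standard library) and $\Gamma(0,x)=E_1(x)$, which for the relevant range $x=m^2/|eB\pm eb|\lesssim 1$ is well approximated by its power series. The sketch below is our own illustration (sample field values are arbitrary; the valley sign convention follows the text), and it checks the $m\to 0$ limit against $e^2\sqrt{\pi|eB\pm eb|}/(2\sqrt{2})$:

```python
import math

GAMMA_E = 0.5772156649015329

def exp1(x):
    """E_1(x) = Gamma(0, x), from the convergent power series (good for 0 < x < ~8)."""
    term, series = 1.0, 0.0
    for k in range(1, 60):
        term *= x / k                               # term = x^k / k!
        series += (-term if k % 2 == 0 else term) / k
    return -GAMMA_E - math.log(x) + series

def m_dyn_rqed(m, eB, eb, valley, alpha):
    """Dynamical mass m_dyn^{+-}; valley = +1 (K') or -1 (K), w = |eB + valley*eb|."""
    w = abs(eB + valley * eb)
    return alpha * math.sqrt(w) * (
        math.sqrt(2.0) * math.pi ** 1.5 * math.erfc(m / math.sqrt(w))
        - (10.0 * m / (3.0 * math.sqrt(w))) * exp1(m * m / w))
```

With $b\neq 0$ the two valleys indeed acquire different dynamical masses, and for $m\to 0$ the result approaches the zero-bare-mass limit quoted above.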
According to these findings, the dynamical fermion mass is different for each valley, $m^{+}_{dyn}$ for $\vec{K}'$ and $m^{-}_{dyn}$ for $\vec{K}$ (see Fig. \ref{dinamicalmasses}). This is not surprising since the $c-$condensates, $\mu^{+}$ and $\mu^{-}$, are different if a pseudomagnetic field is included. Furthermore, it is possible to construct a Lagrangian that describes two species of fermions, each with different mass, introducing the usual mass term ($m$) and a Haldane mass term ($m_{\tau}$)
\begin{align}
\mathcal{L}_m=m\bar{\psi}\psi+m_{\tau}\bar{\psi}i\Gamma^{35}\psi.
\end{align}
In this case, the two masses will be $m\pm m_{\tau}$ and $\psi$ is taken as a four-component spinor. This result implies that the interplay between the real and pseudomagnetic fields allows us to dynamically generate these two terms. With the help of Eq. (\ref{ccondenm02x2}), one can realize that
\begin{eqnarray}\nonumber
&&\frac{1}{2}\langle[\bar{\psi},\psi] \rangle_{B,b}-\frac{1}{2}\langle[\bar{\psi},\psi] \rangle_{0,0}\\
&&=-\frac{\text{sgn}(m)}{4\pi}(|eB+eb|+|eB-eb|),
\end{eqnarray}
while in the limit $m\rightarrow 0$, we have
\begin{eqnarray}\nonumber
&&\frac{1}{2}\langle[\bar{\psi},i\Gamma^{35}\psi] \rangle_{B,b}-\frac{1}{2}\langle[\bar{\psi},i\Gamma^{35}\psi] \rangle_{0,0}\\
&&=-\frac{\text{sgn}(m)}{4\pi}(|eB+eb|-|eB-eb|).
\end{eqnarray}
Therefore, the usual mass term is always generated independently of $B$ and $b$, whereas the Haldane mass term is only generated if $B\neq 0$ and $b\neq 0$ simultaneously. Note that when $B=b$ (or $B=-b$), one of the $c-$condensates is zero and the dynamical mass of this valley will be independent of $B$ and $b$.
\par
Finally, it is important to realize that for zero pseudomagnetic field, the mass term in the irreducible representations breaks parity and time-reversal symmetries, while in a reducible representation chirality is broken. Thus, for irreducible representations, the $c$-condensate ($\mu=\mu^{+}=\mu^{-}$) is the order parameter of dynamical parity and time-reversal symmetry breaking, whereas in a reducible representation it signals dynamical chiral symmetry breaking. In reducible representations, however, non-zero magnetic and pseudomagnetic fields produce a dynamical symmetry breaking not only of the chiral symmetry but also of the parity and time-reversal symmetries\footnote{In Ref. \cite{Herbu2008a}, it had been suggested that the flux of the non-Abelian pseudomagnetic field catalyzes the time-reversal symmetry breaking.}. The reason for this is that, by including a pseudomagnetic field, the dynamical mass is different for each valley
since a Haldane mass term (which breaks parity and time reversal) is generated\footnote{The Haldane mass term, for example, can also be dynamically generated in graphene at sufficiently large strength of the long-range Coulomb interaction \cite{Gonzalez2013}.}.
\section{One-loop effective action and magnetization}\label{oneloopeamagnetization}
In this section we compute the effective action and the magnetization
in the presence of uniform real and pseudomagnetic fields. We consider the fermionic
part of the generating functional for each valley
\begin{eqnarray}
&&Z_{\pm}=e^{iW_{\pm}}\\\nonumber
&&=\int \mathcal{D} \bar{\Psi}_{\pm} \mathcal{D}\Psi_{\pm}
\exp \left( i\int d^3x [ \bar{\Psi}_{\pm} (i \slashed{\partial}
+e\slashed{A}\mp e\slashed{a}^{35}-m ) \Psi_{\pm}] \right).
\end{eqnarray}
Then, we introduce the one-loop effective Lagrangian $\mathcal{L}^{(1)}_{\pm}$ via
$\text{ln} Z_{\pm} = i \int d^3x \mathcal{L}^{(1)}_{\pm} (x)$. In the presence of a static background field, the one-loop effective action is proportional to the vacuum energy (Ch. 16 in \cite{Weinberg1996}). The vacuum energy of the Dirac field can be computed using the formula \cite{Gunter1986}
\begin{equation}\label{EnerVaccumForm}
E_{\text{vac}}=\frac{1}{2}\left(-\sum_{E_n>0}E_n+\sum_{E_n<0}E_n\right),
\end{equation}
which depends upon the zero-point energies of both positive- and negative-energy states\footnote{Provided that for each eigenvalue $E_n$ there is an eigenvalue $-E_n$, the two sums in Eq. (\ref{EnerVaccumForm}) reduce to the sum over the Dirac sea
$$E_{\text{vac}}=\sum_{E_n<0}E_n.$$
This equation is always satisfied by a charge conjugation invariant background; however, this is not our case. The use of this equation in a magnetic field background has led to erroneous conclusions in Refs. \cite{Cea1985,Cea2000}.}.
In our case, it is straightforward to obtain the vacuum energy density for each valley
\begin{eqnarray}\nonumber
&&\mathcal{E}^{\pm}_{\text{vac}}(B,b)=\frac{E^{\pm}_{\text{vac}}}{A}\\
&&=-\frac{|eB\pm eb|}{4 \pi}|m|-\frac{|eB\pm eb|}{2 \pi}\sum_{n=1}^{\infty}\sqrt{m^2+2|eB\pm eb|n}\\\nonumber
&&=\frac{|eB\pm eb|}{4 \pi}|m|-\frac{|eB\pm eb|^{\frac{3}{2}}}{2^{\frac{1}{2}}\pi}\zeta\left(-\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right),
\end{eqnarray}
where we have used that the Landau degeneracy per unit area is $|eB\pm eb|/(2 \pi)$. In order to isolate the purely magnetic field effect, we need to subtract the zero-field part. Thus, the one-loop effective Lagrangian density is\footnote{This approach is equivalent to computing an infinite series of one-loop diagrams with the insertion of one, two, \dots external lines (see for instance Ref. \cite{Dittrich1985}).}
\begin{align}\nonumber
\mathcal{L}^{(1)}_{\pm}&=-(\mathcal{E}^{\pm}_{\text{vac}}(B,b)-\mathcal{E}^{\pm}_{\text{vac}}(0,0))\\
&=-\frac{|eB\pm eb|}{4 \pi}|m|+\frac{|eB\pm eb|^{\frac{3}{2}}}{2^{\frac{1}{2}}\pi}\zeta\left(-\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right)+\frac{|m|^3}{6 \pi}.
\end{align}
\begin{figure*}
\center
\includegraphics[width=16.4cm]{susceptivilidad.pdf}
\caption{(color online) (a) Magnetization vs the external magnetic field: $M_{+}$ (continuous line), $M_{-}$ (dashed line) and the total magnetization $M=M_{+}+M_{-}$ (dotted line). The magnetization $M_{+}$ of valley $\vec{K}'$ vanishes at $eB/m^2=-10$, whereas $M_{-}$ of valley $\vec{K}$ vanishes at $eB/m^2=10$. (b) Magnetic susceptibility vs the external magnetic field: $\chi_{+}$ (continuous line), $\chi_{-}$ (dashed line) and the total susceptibility $\chi=\chi_{+}+\chi_{-}$ (dotted line). Here we have taken $eb/m^2=10$.}
\label{magnetizationfig}
\end{figure*}
Using Eq. (\ref{integralrepre}), we can rewrite the one-loop effective Lagrangian in an integral representation as
\begin{eqnarray}\nonumber
\mathcal{L}^{(1)}_{\pm}&=&-\int_0^{\infty}\frac{dt e^{-m^2 t}t^{-\frac{5}{2}}}{8\pi^{3/2}} \left(|eB\pm eb|t\coth (|eB\pm eb|t)-1\right).\\
\end{eqnarray}
For $b=0$, the result is in agreement with what was found in Refs. \cite{Redlich1984a,Andersen1995a,Dittrich1997,Ayala2010c}.
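The agreement between this integral representation and the Hurwitz-zeta expression for $\mathcal{L}^{(1)}_{\pm}$ can also be verified numerically. The Python sketch below is our own check (the sample values $m=1$, $|eB\pm eb|=2$ are arbitrary):

```python
import math

def hurwitz_zeta(s, a, N=60):
    """Hurwitz zeta via Euler-Maclaurin; valid for s != 1, a > 0 (including s < 0)."""
    total = sum((a + n) ** (-s) for n in range(N))
    x = a + N
    total += x ** (1.0 - s) / (s - 1.0) + 0.5 * x ** (-s)
    total += s * x ** (-s - 1.0) / 12.0
    total -= s * (s + 1.0) * (s + 2.0) * x ** (-s - 3.0) / 720.0
    return total

def L1_closed(m, w):
    """One-loop effective Lagrangian density, with w = |eB +- eb|."""
    q = m * m / (2.0 * w)
    return (-w * abs(m) / (4.0 * math.pi)
            + w ** 1.5 / (math.sqrt(2.0) * math.pi) * hurwitz_zeta(-0.5, q)
            + abs(m) ** 3 / (6.0 * math.pi))

def L1_integral(m, w, steps=4000):
    """Proper-time integral form, with t = x^2 to tame the endpoint t -> 0."""
    def g(x):
        y = w * x * x
        if y < 1e-3:  # series of (y coth y - 1)/x^4 near the origin
            val = w * w / 3.0 - w ** 4 * x ** 4 / 45.0
        else:
            val = (y / math.tanh(y) - 1.0) / x ** 4
        return 2.0 * math.exp(-m * m * x * x) * val
    xmax = 8.0 / abs(m)
    h = xmax / steps
    acc = g(0.0) + g(xmax)   # composite Simpson rule
    for i in range(1, steps):
        acc += (4.0 if i % 2 else 2.0) * g(i * h)
    return -(acc * h / 3.0) / (8.0 * math.pi ** 1.5)
```

For small $m$ both forms approach the massless result $|eB\pm eb|^{3/2}\zeta(-1/2)/(2^{1/2}\pi)$ quoted below.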
In particular, when $m\rightarrow 0$, we arrive at
\begin{align}
\mathcal{L}^{(1)}_{\pm,m=0}=\frac{|eB\pm eb|^{\frac{3}{2}}}{2^{\frac{1}{2}}\pi}\zeta\left(-\frac{1}{2}\right).
\end{align}
Here $\zeta(x)$ is the Riemann zeta function. One can compute the orbital magnetization for each valley ($M_{\pm}$) employing the one-loop effective Lagrangian, namely $M_{\pm}=\frac{\partial \mathcal{L}^{(1)}_{\pm}}{\partial B}$. A straightforward calculation gives
\begin{eqnarray}\nonumber
M_{\pm}&=&-\frac{e}{8\pi}\int_0^{\infty}\frac{dt}{\pi^{1/2}} e^{-m^2 t}t^{-\frac{3}{2}}\left(\coth (|eB\pm eb|t)\right. \\
&&\left. -\frac{|eB\pm eb|t}{\sinh ^2(|eB\pm eb|t)}\right)\text{sgn}(eB\pm eb),
\end{eqnarray}
which can also be written as
\begin{align}\nonumber
M_{\pm}=&\left[-\frac{e|m|}{4 \pi}-\frac{em^2}{\sqrt{32}\pi|eB\pm eb|^{1/2}}\zeta\left(\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right)\right.\\
&\left.+3e\frac{|eB\pm eb|^{\frac{1}{2}}}{\sqrt{8}\pi}\zeta\left(-\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right)\right]\text{sgn}(eB\pm eb).
\end{align}
Therefore, the orbital magnetization displays nonlinear behavior in the magnetic and pseudomagnetic fields (see Fig. \ref{magnetizationfig}a). For $b=0$, the result is in agreement with Refs. \cite{Andersen1995a,Ayala2010c,Slizovskiy2012} and in the limit $m\rightarrow 0$ the magnetization is
\begin{align}\label{magnezerostronglimit}
M_{\pm}=3e\frac{|eB\pm eb|^{\frac{1}{2}}}{\sqrt{8}\pi}\zeta\left(-\frac{1}{2}\right)\text{sgn}(eB\pm eb),
\end{align}
for each valley\footnote{It should be noted that Eq. (\ref{magnezerostronglimit}) also corresponds to the dominant term in the strong field expansion.}. Notably, the value and sign of the magnetization are different for each valley. Thus, they can be modified by strains or by varying the applied magnetic field. In particular, the magnetization of one valley can vanish while that of the other does not, as can be seen in Fig. \ref{magnetizationfig}a.
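Since $M_{\pm}$ is obtained by differentiating $\mathcal{L}^{(1)}_{\pm}$, the closed form above can be checked against a numerical derivative. In the sketch below (our own consistency check; we set the charge to $e=1$, so $B$ and $eB$ coincide, and use the arbitrary sample values $m=1$, $b=1/2$) a central difference of $\mathcal{L}^{(1)}$ reproduces the closed-form $M_{\pm}$:

```python
import math

def hurwitz_zeta(s, a, N=60):
    """Hurwitz zeta via Euler-Maclaurin summation (s != 1, a > 0)."""
    total = sum((a + n) ** (-s) for n in range(N))
    x = a + N
    total += x ** (1.0 - s) / (s - 1.0) + 0.5 * x ** (-s)
    total += s * x ** (-s - 1.0) / 12.0
    total -= s * (s + 1.0) * (s + 2.0) * x ** (-s - 3.0) / 720.0
    return total

def L1(m, B, b, valley):
    """One-loop effective Lagrangian density; e = 1, so w = |B + valley*b|."""
    w = abs(B + valley * b)
    q = m * m / (2.0 * w)
    return (-w * abs(m) / (4.0 * math.pi)
            + w ** 1.5 / (math.sqrt(2.0) * math.pi) * hurwitz_zeta(-0.5, q)
            + abs(m) ** 3 / (6.0 * math.pi))

def M_closed(m, B, b, valley):
    """Closed-form valley magnetization (charge set to e = 1)."""
    w = abs(B + valley * b)
    q = m * m / (2.0 * w)
    sgn = math.copysign(1.0, B + valley * b)
    return sgn * (-abs(m) / (4.0 * math.pi)
                  - m * m / (math.sqrt(32.0) * math.pi * math.sqrt(w)) * hurwitz_zeta(0.5, q)
                  + 3.0 * math.sqrt(w) / (math.sqrt(8.0) * math.pi) * hurwitz_zeta(-0.5, q))

h = 1e-5  # central-difference step for dL1/dB
M_num = (L1(1.0, 2.0 + h, 0.5, +1) - L1(1.0, 2.0 - h, 0.5, +1)) / (2.0 * h)
```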
\par
Having calculated $M_{\pm}$, one can compute the magnetic susceptibility for each valley in the presence of magnetic and pseudomagnetic fields, which is simply given by $\chi_{\pm}=\frac{\partial M_{\pm}}{\partial B}$, thus
\begin{eqnarray}\nonumber
\chi_{\pm}&=&\frac{e}{16\sqrt{2} \pi |eB\pm eb|^{\frac{5}{2}}}\left[ 12 |eB\pm eb|^2\zeta\left(-\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right)\right.\\\nonumber
&&\left. -4m^2|eB\pm eb|\zeta\left(\frac{1}{2},\frac{m^2}{2|eB\pm eb|}\right)-m^4\zeta\left(\frac{3}{2},\frac{m^2}{2|eB\pm eb|}\right)\right].\\
\end{eqnarray}
In the presence of magnetic and pseudomagnetic fields, the total susceptibility has two minima, see Fig. \ref{magnetizationfig}b, which is a distinctive feature compared with the case $eb=0$ \cite{Slizovskiy2012}. Finally, in the limit $m\rightarrow 0$ the susceptibility is
\begin{align}
\chi_{\pm}&=\frac{3 e \zeta(-1/2)}{\sqrt{32}\pi|eB\pm eb|^{1/2}},
\end{align}
which diverges when $B=\mp b$. Since $\chi_{\pm}(B=\mp b,m\neq 0)$ is finite, it can be concluded that the mass acts as a regulator.
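The susceptibility expression can be verified in the same way, as the derivative of the closed-form $M_{\pm}$ (again our own check, with the charge set to $e=1$ so that $B$ and $eB$ coincide, and arbitrary sample values):

```python
import math

def hurwitz_zeta(s, a, N=60):
    """Hurwitz zeta via Euler-Maclaurin summation (s != 1, a > 0)."""
    total = sum((a + n) ** (-s) for n in range(N))
    x = a + N
    total += x ** (1.0 - s) / (s - 1.0) + 0.5 * x ** (-s)
    total += s * x ** (-s - 1.0) / 12.0
    total -= s * (s + 1.0) * (s + 2.0) * x ** (-s - 3.0) / 720.0
    return total

def M_closed(m, B, b, valley):
    """Closed-form valley magnetization (e = 1)."""
    w = abs(B + valley * b)
    q = m * m / (2.0 * w)
    sgn = math.copysign(1.0, B + valley * b)
    return sgn * (-abs(m) / (4.0 * math.pi)
                  - m * m / (math.sqrt(32.0) * math.pi * math.sqrt(w)) * hurwitz_zeta(0.5, q)
                  + 3.0 * math.sqrt(w) / (math.sqrt(8.0) * math.pi) * hurwitz_zeta(-0.5, q))

def chi_closed(m, B, b, valley):
    """Closed-form valley susceptibility (e = 1)."""
    w = abs(B + valley * b)
    q = m * m / (2.0 * w)
    return (12.0 * w * w * hurwitz_zeta(-0.5, q)
            - 4.0 * m * m * w * hurwitz_zeta(0.5, q)
            - m ** 4 * hurwitz_zeta(1.5, q)) / (16.0 * math.sqrt(2.0) * math.pi * w ** 2.5)
```

In the nearly massless regime the result is negative (diamagnetic), in accordance with the $m\to 0$ formula above, since $\zeta(-1/2)<0$.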
\section{Induced charge density}\label{secIndChar}
In this section, we derive an expression for the induced charge density in the presence of uniform real and pseudomagnetic fields. We first notice that, as in the case of the fermionic condensate, the formula $-\text{tr}\, \gamma^{\mu} S(x,x)=\langle 0| j^{\mu}|0\rangle$ also deserves to be revised. The current must be understood as
\begin{eqnarray}\nonumber
j^{\mu}\rightarrow :j^{\mu}(x)&:=&:e\bar{\Psi}(x)\gamma^{\mu}\Psi(x):\\\nonumber
&=&\frac{e}{2}[\bar{\Psi}(x),\gamma^{\mu}\Psi(x)]\\
&=&\frac{e}{2}\gamma^{\mu}_{\dot{\alpha}\alpha}[\bar{\Psi}_{\dot{\alpha}}(x),\Psi_{\alpha}(x)],
\end{eqnarray}
which is the correct definition of the current operator, where the infinite charge of the vacuum state has been subtracted \cite{Dittrich1985}. In an analogous way to what was done before, we find that in the Furry picture the vacuum expectation value of the current operator is
\begin{eqnarray}\nonumber
&&\langle 0|:j_i^{\mu}(x):|0\rangle =\frac{e}{2}\langle 0|[\bar{\Psi}_i(x),\gamma^{\mu}\Psi_i(x)]|0\rangle\\\nonumber
&&=\frac{e}{2}\SumInt_{n}
\SumInt_p \left( \bar{\psi}^{(-)}_{i,n,p}(x)\gamma^{\mu}\psi^{(-)}_{i,n,p}(x)-\bar{\psi}^{(+)}_{i,n,p}(x)\gamma^{\mu}\psi^{(+)}_{i,n,p}(x)\right).\\
\end{eqnarray}
For constant real and pseudomagnetic fields, using the orthogonality of Hermite polynomials, one can show that
\begin{align}\label{InduCurrendenpm}
\langle 0|:j_{\pm}^{i}(x):|0\rangle &=0, \ \ \ (i=1,2),
\end{align}
\textit{i.e.}, the induced current vanishes. A nonvanishing vacuum current would arise in the presence of an external electric field \cite{Andersen1995a}. On the other hand,
the induced charge density is given by
\begin{align}\label{InduChargedenpm}
\langle 0|\rho(x)_{\pm}|0\rangle=\langle 0|:j_{\pm}^{0}(x):|0\rangle &=\pm\frac{\text{sgn}(m)}{4\pi}e^2(B\pm b).
\end{align}
Curiously, in contrast to $\mu^{\pm}$, the induced charge density only receives contributions from the lowest Landau level (LLL), even if $m\neq 0$. An alternative technique to compute the charge density is through the spectral $\eta$ function. It can be shown that the induced charge is \cite{Reuter1986a,Dittrich1986L}
\begin{align}
Q_{\pm}=\int d^2x\langle 0|\rho(x)_{\pm}|0\rangle=-\frac{e}{2}\lim_{\substack{s\rightarrow 0^{+}}}\eta(\mathcal{H}_{\pm},s),
\end{align}
where
\begin{align}
\eta(\mathcal{H}_{\pm},s)=\sum_n|E_{n}^{\pm}|^{-s}\text{sgn}(E_{n}^{\pm})
\end{align}
is the $\eta$ invariant of Atiyah, Patodi, and Singer. Using the Landau degeneracy per unit area and noting that except for the LLL, for each eigenvalue $E_n^{\pm}$ there is an eigenvalue $-E_n^{\pm}$, then we obtain Eq. (\ref{InduChargedenpm}) as we should. Therefore, the total induced charge density is
\begin{eqnarray}\nonumber
\langle 0|\rho(x)|0\rangle&=&\langle 0|\rho(x)_{+}|0\rangle+\langle 0|\rho(x)_{-}|0\rangle\\\nonumber
&=&\frac{\text{sgn}(m)}{4\pi}e^2(B+b)-\frac{\text{sgn}(m)}{4\pi}e^2(B-b)\\\label{cchargepseudo}
&=&\frac{\text{sgn}(m)}{2\pi}e^2b,
\end{eqnarray}
which is observable (and measurable) for a nonzero pseudomagnetic field. This is remarkable, since this is not possible in the case of a pure magnetic field \cite{Semenoff1984a}. Eq. (\ref{cchargepseudo}) shows that in the presence of pseudomagnetic fields the system has a parity anomaly, even with two fermionic species. Eqs. (\ref{InduCurrendenpm}) and (\ref{InduChargedenpm}) are related to the Chern-Simons relation, which in the case of zero pseudomagnetic field reads \cite{Semenoff1984a}
\begin{align}
\langle 0|:j_{\pm}^{\mu}(x):|0\rangle &=\pm\frac{e^2}{8\pi}\text{sgn}(m)\epsilon^{\mu\nu \lambda}F_{\nu\lambda}.
\end{align}
It is clear from this equation that when the two representations are present, the vacuum expectation value of the total current is always zero.
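For concreteness, the cancellation pattern behind Eq. (\ref{cchargepseudo}) can be made explicit with a few lines of code (our own illustration; the sample field values are arbitrary and the charge is set to $e^2=1$):

```python
import math

def rho_valley(m, B, b, valley, e2=1.0):
    """Induced charge density of one valley: +- sgn(m) e^2 (B +- b) / (4 pi)."""
    return valley * math.copysign(1.0, m) * e2 * (B + valley * b) / (4.0 * math.pi)

def rho_total(m, B, b, e2=1.0):
    """Total induced charge density; the B-dependent pieces cancel between valleys."""
    return rho_valley(m, B, b, +1, e2) + rho_valley(m, B, b, -1, e2)
```

As in Eq. (\ref{cchargepseudo}), the total is independent of $B$ and equals $\text{sgn}(m)\,e^2 b/(2\pi)$.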
\section{Conclusions}\label{conclutionsfinal}
In summary, we have examined the Dirac Hamiltonian in $2+1$ dimensions and found an infinite number of non-equivalent realizations of parity, charge conjugation, and time-reversal transformations for the reducible representation. We have then explored how the interplay between real and pseudomagnetic fields affects some aspects of the three-dimensional quantum field theory. For the case of uniform magnetic and pseudomagnetic fields, by employing a non-perturbative approach we have found that: (i) The $c$-condensate is the appropriate order parameter for studying the breaking of chiral, parity and time-reversal symmetries. (ii) One can control the magnetization, susceptibility, and the dynamical mass independently for each valley by straining and varying the applied magnetic field. (iii) The dynamical mass generated is due to two terms, the usual mass term ($m\bar{\psi}\psi$) and a Haldane mass term ($m_{\tau}\bar{\psi}i\Gamma^{35}\psi$); the latter, which appears only when the two fields are simultaneously different from zero, is the one that breaks parity and time-reversal symmetries. (iv) For a non-zero pseudomagnetic field, the total induced ``vacuum'' charge density is nonzero. This last result implies that strained single-layer graphene exhibits a parity anomaly. Therefore, strained graphene in the presence of an external magnetic field has distinctive features compared with QED$_{2+1}$, which lacks the aforementioned consequences (i)-(iv). Finally, it would be interesting to extend our calculations to include the effect of Coulomb interactions on the magnetic catalysis, symmetry breaking, dynamical mass generation, etc. See, for instance, the study of the magnetic catalysis in unstrained graphene in the weak-coupling limit \cite{Semenoff2011a}.
\begin{acknowledgements}
J. A. S\'{a}nchez is grateful to F. T. Brandt, C. M. Acosta and J. S. Cort\'{e}s for useful comments.
\end{acknowledgements}
The ``local models" of this paper are projective schemes
over the integers of a $p$-adic local field
that are expected to model the singularities of
integral models of Shimura varieties at places of (tame, parahoric)
bad reduction. This is meant in the sense that each point on the
integral model of the Shimura variety should have an \'etale
neighborhood which
is isomorphic to an \'etale neighborhood of a corresponding point on the local model.
The simplest example is for the classical modular curve $X_0(p)$ with $\Gamma_0 (p)$-level structure.
In this case, the local model is obtained by blowing up the
projective line $\ensuremath{\mathbb{P}}\xspace_{\ensuremath{\mathbb{Z}}\xspace_p}^1$ over ${\rm Spec } (\ensuremath{\mathbb{Z}}\xspace_p)$ at the origin
$0$ of the special fiber $\ensuremath{\mathbb{P}}\xspace_{\ensuremath{\mathbb{F}}\xspace_p}^1$. More generally, local
models for Shimura varieties of PEL type with parahoric level
structure were given by Rapoport and Zink in \cite{RapZinkBook}.
Their construction was tied to the description of the Shimura
variety as a moduli space of abelian schemes with additional
structures; the fact that they capture the singularities of this
moduli space is a consequence of the Grothendieck-Messing
deformation theory of abelian schemes. However, it was soon realized
that the Rapoport-Zink construction is not adequate when the group
of the Shimura variety is ramified at $p$ and in many cases of
orthogonal groups. Indeed, then the corresponding integral models
are often not flat (\cite{PaJAG}). In the ramified PEL case,
corrected integral models were considered and studied in
\cite{PaJAG}, \cite{PappasRaI}, \cite{PappasRaII},
\cite{PappasRaIII}. These are flat by definition but they do not
always have a neat moduli space interpretation. Unfortunately, the
constructions in these works are somewhat ad-hoc and are mostly done
case-by-case: Ultimately, they are based on representing the
corresponding reductive group explicitly as the neutral component of
the group of automorphisms of a suitable bilinear (symmetric,
alternating, hermitian etc.) form on a vector space over a division
ring. Then, its parahoric subgroups are the connected stabilizers of
a self-dual chain of lattices as explained in \cite{BTclassI},
\cite{BTclassII} (see also \cite[Appendix]{RapZinkBook}) and the
local models are given as flat closures in certain corresponding
linked Grassmannians. We refer the reader to the survey \cite{PRS}
for precise definitions and more references.
In this paper, we provide a general group theoretic definition of local
models that is not tied to a particular representation of the group.
This approach allows us to make significant progress and resolve several
open questions. Our local models are constructed starting
from the ``local Shimura data", i.e triples $(G, K, \{\mu\})$, where
$G$ is a (connected) reductive group over ${\mathbb Q}_p$, $ K\subset
G({\mathbb Q}_p)$ a parahoric ``level" subgroup, and $\{\mu\}$ a geometric
conjugacy class of one parameter subgroups $\mu: {{\mathbb G}_{\rm m}}_{/\overline
{\mathbb Q}_p}\to G_{\overline{\mathbb Q}_p}$ over an algebraic closure
$\overline{\mathbb Q}_p$. Here, we assume that $\mu$ is minuscule. Denote by
$E$ the field of definition of the conjugacy class $\{\mu\}$. This
is the {\sl local reflex field}; it is a finite extension of ${\mathbb Q}_p$
and is contained in a splitting field for $G$. We will
assume\footnote{This is an important assumption that we keep
throughout the paper.} that $G$ splits over a tamely ramified
extension of ${\mathbb Q}_p$. Let $\O_E$ be the ring of integers of $E$ and
$k_{\scriptscriptstyle E}$ its residue field. By definition, the local model ${\rm
M}^{\rm loc}$ is a projective scheme over ${\rm Spec } (\O_E)$ with generic
fiber the homogeneous space for $G_E$ that corresponds to $\mu$. Our
first main result is the following (a weaker version of a combination of Theorems \ref{CMfiber} and \ref{special fiber}):
\begin{thm}\label{thm01}
Suppose that the prime $p$ does not divide the order of the
fundamental group $\pi_1(G({\overline {\mathbb Q}_p})_{\rm der})$.
Then ${\rm M}^{\rm loc} $ is normal with reduced special fiber;
all the irreducible components of the geometric special fiber ${\rm M}^{\rm loc}\otimes_{\O_E}\bar
{\mathbb F}_p$ are normal and
Cohen-Macaulay. In fact, ${\rm M}^{\rm loc}\otimes_{\O_E}\bar
{\mathbb F}_p$ can be identified with the reduced union of a finite
set of affine Schubert varieties in the affine flag variety ${\rm
Gr}_{{\mathcal G}, {\mathbb F}_p}\otimes_{{\mathbb F}_p}\bar {\mathbb F}_p$;
the set parametrizing this union is the ``$\mu$-admissible set"
defined by Kottwitz and Rapoport.
\end{thm}
The definition of ${\rm M}^{\rm loc}$ and of the affine flag variety
${\rm Gr}_{{\mathcal G}, {\mathbb F}_p}\otimes_{{\mathbb F}_p}\bar {\mathbb
F}_p$ will be explained below; the rest is given in \S \ref{ss8d}. The main ingredients in the proof of
this theorem are results of \cite{FaltingsLoops, PappasRaTwisted} on
the structure of Schubert varieties in affine flag varieties and
the coherence conjecture of \cite{PappasRaTwisted} which was shown
by the second-named author in \cite{ZhuCoherence}.
In the case of Shimura data of PEL type, we show that the local
models ${\rm M}^{\rm loc}$ agree with the ``corrected local models"
which are obtained (in most cases at least) by taking the flat
closures of the ``naive local models" of Rapoport-Zink; these
corrected local models do describe the \'etale local structure of
corresponding integral models of Shimura varieties as we discussed
in the beginning of the introduction. For PEL types the result in
the above theorem was conjectured and verified for a few special
cases in several papers (\cite{GortzFlatGLn},
\cite{GortzSymplectic}, \cite{PappasRaI}, \cite{PappasRaII},
\cite{PappasRaIII}, \cite{PappasRaTwisted}, see also \cite{PRS}). In
\cite{ZhuCoherence}, the theorem is proven for the ramified unitary
similitude groups. Our approach here allows a unified treatment
in almost all cases.
For example, let us explain a result that follows by combining the above with the work of Rapoport and Zink.
Suppose that ${\mathfrak D}=({\bold B}, \O_{\bold B}, $*$, {\bold
V}, (\ ,\ ), \mu, \{\L \}, K^p)$ give PEL data as in \S
\ref{PELremark}, \S \ref{sss8c4}, with corresponding group $\bold G$
over ${\mathbb Q}$ and reflex field ${\bold E}$. Assume that $p$ is odd, that
$K^p\subset {\bold G}({\mathbb A}^{p}_f)$ is sufficiently small and
that the subgroup $ K=K_p$ of ${\bold G}({\mathbb Q}_p)$ that stabilizes the
lattice chain $\{\L\}$
is parahoric. Suppose that $G={\bold G}_{{\mathbb Q}_p}$ splits over a tamely ramified extension of ${\mathbb Q}_p$
and, in addition, that ${\bold G} $ is connected. Let $\mathfrak P$ be a prime of ${\bold E}$ over $p$. Under these assumptions, we obtain:
\begin{thm}\label{thmPEL}
The Shimura variety
$Sh_{\bold K}$ defined by the PEL data $\mathfrak D$
affords a flat integral model ${\mathcal S}_{\bold K}$ over
$\O_{{{\bold E}}_{\mathfrak P}}$ which is, locally for the \'etale topology,
isomorphic to the local model ${\rm M}^{\rm loc}$ for $(G, K,
\{\mu\})$. The scheme ${\mathcal S}_{\bold K} $ is normal with reduced special fiber;
the geometric special fiber ${\mathcal S}_{\bold K}\otimes_{\O_E}\bar
{\mathbb F}_p$ admits a stratification with locally closed
strata parametrized by the $\mu$-admissible set;
the closure of each stratum is normal and Cohen-Macaulay.
\end{thm}
In fact, the result is more precise: We show the existence of a
``local model diagram" (see (\ref{locmoddiagram}));
also, the model
${\mathcal S}_{\bold K}$ is the flat closure of the generic fiber of the
corresponding Rapoport-Zink integral model and thus supports a
natural morphism to a Siegel moduli scheme. See \S \ref{rem8c4}
for the proof and for more similar results, in
particular for a discussion of cases in which $\bold G$ is not
connected. (Note that, in general, $Sh_{\bold K}$ above is equal to a disjoint union of
Shimura varieties in the sense of Deligne; this is related to the failure of the Hasse principle,
see \cite[\S 8]{KottJAMS} and Remark \ref{sss8d5}.)
In general, we
conjecture that the general Shimura variety with local data $(G, K,
\{\mu\})$ has an integral model which affords a local model diagram
and hence is locally for the \'etale
topology isomorphic to ${\rm M}^{\rm loc}$. Showing this, in cases
of Shimura varieties of Hodge type, is the subject of joint work in
preparation of the first author with M. Kisin \cite{K-P}. This
combined with Theorem \ref{thm01} will then imply that the conclusion
of
Theorem \ref{thmPEL} also holds for such Shimura varieties.
Before considering Kottwitz's conjecture, we will explain our definition of local
models. This uses the construction of certain
group schemes over a two-dimensional base and of their corresponding
(affine) flag varieties. We believe that these objects are of
independent interest and we begin by discussing them
in some detail.
We start by discussing the group schemes. Let $F$ be a $p$-adic
field with ring of integers $\O$ and residue field $k$. Suppose $G$
is a reductive group over $F$, $K$ a parahoric subgroup of $G(F)$.
By definition, $K$ is the (connected) stabilizer of a point $x$ in
the Bruhat-Tits building ${{\mathcal B}}(G, F)$ of $G(F)$. In \cite{BTII},
Bruhat-Tits construct a smooth group scheme $\P_x$ over the discrete
valuation ring $\O$ such that $K=\P_x(\O)$. Assume that $G$ splits
over a tamely ramified extension of $F$. Choose a uniformizer
$\varpi$ of $\O$. In the first part of the paper, we construct a
smooth affine group scheme ${\mathcal G}$ over the affine line ${\mathbb
A}^1_{\O }={\rm Spec } (\O [u])$ which has connected fibers, is reductive
over the complement of $u=0$ and specializes to $\P_x$ along the
section ${\rm Spec } (\O )\to {\mathbb A}^1_{\O} $ given by $u\mapsto
\varpi$. In addition, the base changes of ${\mathcal G}$ by $\O [u]\to
F[[u]]$ and $\O[u]\to k[[u]]$ give corresponding Bruhat-Tits group
schemes over these two discrete valuation rings.
Having given ${\mathcal G}$, we can now define various mixed characteristic
versions of
the familiar (from the theory of geometric Langlands correspondence)
global and local versions of the affine Grassmannian and the affine flag variety.
For simplicity, we set $X={\mathbb A}^1_\O={\rm Spec } (\O[u])$.
The main actor is the global affine Grassmannian ${\rm Gr}_{{\mathcal G}, X}$;
this is the moduli functor on schemes over $X$ which to $y: S\to X$
associates the set of isomorphism classes of
${\mathcal G}$-bundles over $X\times_\O S$ together with a trivialization on the complement of
the graph of $y$. When $\O$ is replaced by a field and ${\mathcal G}$
is a constant split reductive group, this is the global affine Grassmannian (over the affine line)
of Beilinson and Drinfeld. We show that
${\rm Gr}_{{\mathcal G}, X}$ is represented by an ind-scheme
which is ind-proper over $X$. Denote by ${\rm Gr}_{{\mathcal G}, \O}\to {\rm Spec } (\O)$ the base change
of ${\rm Gr}_{{\mathcal G}, X}\to X$ by ${\rm Spec } (\O)\to X$
given by $u\mapsto\varpi$. We can easily see, using
the descent lemma of Beauville and Laszlo, that the generic fiber
${\rm Gr}_{{\mathcal G}, \O}\otimes_\O F$ is
isomorphic to the affine Grassmannian ${\rm Gr}_{G, F}$
for the loop group $G\otimes_F F((t))$, $t=u-\varpi$. Recall that ${\rm Gr}_{G, F}$ represents
the fpqc sheaf given by $R\mapsto G(R((t)))/G(R[[t]])$. Similarly, the special fiber ${\rm Gr}_{{\mathcal G}, \O}\otimes_\O k$ is
isomorphic to an affine flag variety ${\rm Gr}_{{\mathcal G}, k}$
for the group ${\mathcal G}(k((u)))$ over the local field $k((u))$ and its
parahoric subgroup ${\mathcal G}(k[[u]])$. Here ${\rm Gr}_{{\mathcal G}, k}$
represents $R\mapsto {\mathcal G}(R((u)))/{\mathcal G}(R[[u]])$.\footnote{In
\cite{PappasRaTwisted}, affine flag/Grassmannian varieties for
groups that are not necessarily constant are referred to as
``twisted''. Here we omit this adjective.}
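For orientation, let us recall the familiar lattice description in
the simplest case (this standard example is included only for
illustration): for $G={\rm GL}_n$ over $F$, the functor above has the
equivalent description
$$
{\rm Gr}_{{\rm GL}_n, F}(R)=\{L\subset R((t))^n\ |\ \hbox{\rm $L$ a finitely
generated projective $R[[t]]$-submodule with }
L\otimes_{R[[t]]}R((t))=R((t))^n\},
$$
i.e ${\rm Gr}_{{\rm GL}_n, F}$ parametrizes $R[[t]]$-lattices in $R((t))^n$.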
We are now ready to give our definition of the local model ${\rm M}^{\rm loc}$.
For this we take $F={\mathbb Q}_p$ in the above and use the ``local Shimura
data'' $(G, K, \{\mu\})$ with local reflex field $E$. As above,
we need to assume that $G$ splits over a tamely ramified extension of ${\mathbb Q}_p$.
Since ${{\mathbb G}_{\rm m}}={\rm Spec } ({\mathbb Q}_p[t, t^{-1}])$, the coweight $\mu$ provides
a $\overline{\mathbb Q}_p((t))$-valued
point $s_\mu$ of $G$ and hence a $\overline{\mathbb Q}_p$-valued
point $[s_{\mu}]$ of ${\rm Gr}_{G, {\mathbb Q}_p}$. Because $\mu$ is minuscule, the (left) $G(\overline{\mathbb Q}_p[[t]])$-orbit
of $[s_\mu]$ is a smooth projective variety $X_\mu$ (actually a homogeneous space for $G_E$)
defined over the local reflex field $E$; $X_\mu$ is a closed subvariety of
the affine Grassmannian ${\rm Gr}_{G, {\mathbb Q}_p}\otimes_{{\mathbb Q}_p} E=
{\rm Gr}_{{\mathcal G},\O}\otimes_\O E$. By definition, the local model ${\rm M}^{\rm loc}:=M_{{\mathcal G}, \mu}$ is the reduced projective scheme over ${\rm Spec } (\O_E)$
given by the Zariski closure of $X_\mu\subset {\rm Gr}_{{\mathcal G},\O}\otimes_\O E$ in the ind-scheme
${\rm Gr}_{{\mathcal G}, \O}\otimes_\O\O_E$.
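To fix ideas, consider the standard unramified example (included only
as an illustration): $G={\rm GL}_n$, $K={\rm GL}_n({\mathbb Z}_p)$ and
$\mu=(1,\dots,1,0,\dots,0)$ with $d$ entries equal to $1$. Then
$E={\mathbb Q}_p$ and
$$
X_\mu\simeq \{W\subset {\mathbb Q}_p^n\ |\ \dim W=d\}={\rm Gr}(d,n),
$$
the classical Grassmannian of $d$-planes; in this case ${\rm M}^{\rm loc}$
is the smooth projective scheme ${\rm Gr}(d,n)$ over ${\mathbb Z}_p$.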
By construction, the special fiber ${\rm M}^{\rm
loc}\otimes_{\O_E}k_E$ is a closed subscheme of the affine flag
variety ${\rm Gr}_{{\mathcal G}, {\mathbb F}_p}\otimes_{{\mathbb F}_p} k_E$.
Let us remark here that the local models ${\rm M}^{\rm loc}$ are
given by taking a Zariski closure and, as a result, they do not
always have a neat moduli space interpretation. This issue does not
concern us in this paper. Indeed, the close relation of ${\rm M}^{\rm loc}$ to the affine
Grassmannians allows us to show directly many favorable properties
as in Theorem \ref{thm01}. These then imply nice properties of
corresponding integral models for Shimura varieties as in Theorem
\ref{thmPEL}.
The same connection with the theory of affine Grassmannians also
allows us to obtain results about the sheaf of nearby cycles ${\rm
R}\Psi(\overline{\mathbb Q}_\ell)$ of
the scheme ${\rm M}^{\rm loc} \to {\rm Spec } (\O_E)$. (Here $\ell$ is a prime different from $p$.)
We will describe these below. Recall that we conjecture that ${\rm M}^{\rm loc}$ describes the \'etale local
structure of an integral model of the Shimura variety. Therefore,
the nearby cycles of the local models should also be determining
the nearby cycles for integral models of Shimura varieties with
parahoric level structure. (As follows from the above, this is
indeed the case for most PEL types.) Our results will
be useful in expressing the local factor of the
Hasse-Weil zeta function of the Shimura variety at places of (tame)
parahoric reduction as a product of local factors of automorphic
L-functions. The strategy of using the local model to determine the
(semi-simple) Hasse-Weil zeta function was first suggested by
Rapoport \cite{RapoportGuide}. It has since been advanced by
Kottwitz, Haines-Ng\^o (\cite{HainesNgoNearby}), Haines and
others. A centerpiece of this approach is a conjecture of Kottwitz
that states that, in the case that $G$ is split, the semi-simple
trace of Frobenius on the sheaf of nearby cycles gives a central
function in the parahoric Hecke algebra. This was proven by Haines
and Ng\^o (\cite{HainesNgoNearby}) in (split) types A and C by
following an argument of Gaitsgory \cite{GaitsgoryInv}. Gaitsgory
proved a stronger statement for general split groups in the function
field case; he showed that the perverse sheaf of nearby cycles
satisfies a commutativity constraint with respect to the convolution
product. This implies Kottwitz's conjecture in the function field
case. The main tools in Gaitsgory's approach are various versions of
the global affine Grassmannian of Beilinson-Drinfeld. In this paper,
we are able to generalize and simplify the approaches of both
Gaitsgory and Haines-Ng\^o by using the mixed characteristic affine
Grassmannians ${\rm Gr}_{{\mathcal G}, X}$ and various other related
ind-schemes. In particular, we obtain a general result even for
non-split groups as follows:
Our construction of the group scheme ${\mathcal G}$ over $X={\rm Spec } ({\mathbb Z}_p[u])$ also provides us with a
reductive group $G'={\mathcal G}\times_X{\rm Spec } ({\mathbb F}_p((u)))$ over
${\mathbb F}_p((u))$ and a parahoric subgroup $K'$ which correspond
to $G$ and $K$ respectively. By definition, if $ {\mathbb
F}_q\supset k_E$, we have an equivariant embedding ${\rm M}^{\rm
loc}\otimes_{\O_E}{\mathbb F}_q\subset {\rm Gr}_{{\mathcal G}, {\mathbb
F}_p}\otimes_{{\mathbb F}_p} {\mathbb F}_q={\rm Gr}_{G', {\mathbb
F}_p}\otimes_{{\mathbb F}_p} {\mathbb F}_q$. This allows us to view
the Frobenius trace function $\tau^{\on{ss}}_{{\rm
R}\Psi}(x)=\on{tr}^{\rm ss}({\rm Frob}_x, {\rm
R}\Psi(\overline{{\mathbb Q}}_\ell)_{\bar x})$, $x\in {\rm M}^{\rm
loc}({\mathbb F}_q)$, as an element of the parahoric Hecke algebra
${\mathcal H}_q(G', K')$
of bi-$K'({\mathbb F}_q[[u]])$-invariant, compactly supported, locally constant
$\overline{{\mathbb Q}}_{\ell}$-valued functions on $G'({\mathbb F}_q((u)))$.
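For the reader's convenience, recall that the product on
${\mathcal H}_q(G', K')$ is the usual convolution, normalized so that
$K'({\mathbb F}_q[[u]])$ has volume $1$:
$$
(f_1\ast f_2)(g)=\sum_{x\in G'({\mathbb F}_q((u)))/K'({\mathbb F}_q[[u]])}
f_1(x)\, f_2(x^{-1}g),
$$
the sum being finite since $f_1$ is compactly supported. Centrality in
the theorem below refers to this product.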
\begin{thm}(Kottwitz's conjecture)\label{thm02}
The semi-simple trace of Frobenius on the sheaf of nearby cycles
${\rm R}\Psi(\overline{\mathbb Q}_\ell)$ of ${\rm M}^{\rm loc}\to{\rm Spec } (\O_E)$
gives a central function $\tau^{\on{ss}}_{{\rm R}\Psi}$ in the
parahoric Hecke algebra ${\mathcal H}_q(G', K')$.
\end{thm}
See \S \ref{sstraceSect} for more details, and in particular
Theorem \ref{thm9.13} for the precise statement, which is more general. In the split Iwahori case, as
a corollary of this theorem, one can give an explicit formula for
the semi-simple Frobenius trace function using Bernstein's
presentation of the Iwahori-Hecke algebra, as was explained in
\cite{HainesNgoNearby}. In fact, more generally,
when $G$ is unramified and the level subgroup is an arbitrary parahoric,
we show that the semi-simple trace can be
expressed as a Bernstein function in the parahoric Hecke algebra as
was also conjectured by Kottwitz and by Haines. Let us mention here
that Kottwitz's conjecture for quasi-split (but not split)
unramified unitary groups was also shown independently by S.~Rostami
in his thesis \cite{RostamiThesis}.
We also obtain results for quasi-split ramified groups when the
level subgroup is {\sl special}. In this case, we give a
characterization of the function $\tau^{\on{ss}}_{{\rm R}\Psi}$ by
identifying its trace on unramified representations; this also
agrees with the prediction of a conjecture of Haines and Kottwitz.
Corresponding to $\mu$ we have a minuscule coweight for $G'$ which
for simplicity we still denote by $\mu$. The conjugacy class
$\{\mu\}$ defines an algebraic $\overline{\mathbb Q}_\ell$-representation
$V_{\mu}$ of the Langlands dual group ${}^LG'=H^\vee\rtimes {\rm
Gal}_{{\mathbb F}_q((u))}$ of $G'$ over ${\mathbb F}_q((u))$. (Here $H$ is
the Chevalley split form of $G'$.) The inertia invariants $V_{\mu
}^{I}$ give a representation of $(H^\vee)^I\rtimes {\rm
Gal}(\overline {\mathbb F}_q/{\mathbb F}_q)$. If $\pi$ is an
irreducible smooth representation of $G'({\mathbb F}_q((u)))$ with a
$K'({\mathbb F}_q[[u]])$-fixed vector, one can define its
Langlands-Satake parameter $\on{Sat}(\pi): W\to {}^LG'$: Among other
properties, one can show that if $\Phi_q\in W $ is a (geometric)
Frobenius element in the Weil group, then $\on{Sat}(\pi)(\Phi_q)$ is
a semi-simple element in $(H^\vee)^{I }\times {\rm Frob}_q^{-1} $,
which is well-defined up to $(H^\vee)^{I }$-conjugacy and completely
determines $\on{Sat}(\pi)$. Our characterization of the Frobenius
trace function is the identity
\begin{equation}\label{eq0.1}
\on{tr}(\pi(\tau^{\on{ss}}_{{\rm
R}\Psi}))=\on{tr}(\on{Sat}(\pi)(\Phi_q), V_{\mu} ^{I })
\end{equation}
for all $\pi$ as above. This is shown by combining our constructions
with the results in \cite{ZhuSatake}, \cite{RiZh}. See the last
section of the paper for more details.
As we mentioned above, when
the Shimura data are of PEL type, the local model does indeed
describe the \'etale local structure of an integral model of the Shimura
variety. Therefore, in this case, our
results also give the semi-simple trace of Frobenius on the nearby cycles of an
integral model of the Shimura variety. One can then apply them to
the calculation of the local factor of the Hasse-Weil zeta function
of the Shimura variety following the arguments of Kottwitz,
Rapoport and Haines (e.g \cite{RapoportGuide}, \cite{HainesSurvey}), but we will not go
into this here. (We also expect that this approach will
be extended to many Shimura varieties
with parahoric level which are not of PEL type using \cite{K-P}.)
Here we should also mention very recent results of Scholze \cite{ScholzeLK} and Scholze and Shin
\cite{ScholzeShin} that make progress towards this calculation without having to
explicitly identify the
semi-simple trace.
Finally, we give several results about the action of monodromy (i.e
of inertia) on the sheaves of nearby cycles. These also imply corresponding results for
Shimura varieties. Here is an example:
\begin{thm}\label{thm03}
Assume that $G$ splits over the (tamely ramified) extension $F/{\mathbb Q}_p$
and that
$K$ is a special
subgroup of $G({\mathbb Q}_p)$. Then the
inertia subgroup $I_F={\rm Gal}(\overline{\mathbb Q}_p/ {\mathbb Q}_p^{\rm
unr}F)$ acts trivially
on the sheaf of nearby cycles ${\rm R}\Psi(\overline{\mathbb Q}_{ \ell})$ of
${\rm M}^{\rm loc}\to{\rm Spec } (\O_E)$.
\end{thm}
Here and elsewhere, we say that $K$ is special, if the corresponding
parahoric subgroup of $G({\mathbb Q}_p^{\rm unr})$ is special in the sense of
Bruhat-Tits. More generally, without assuming that $K$ is special,
we show that the action of $I_F$ on the sheaf of nearby cycles ${\rm
R}\Psi(\overline{\mathbb Q}_{ \ell})$ is unipotent. In the special case, we
can describe the action of the full inertia group $I_E$ in terms of
the geometric Satake equivalence for ramified groups of
\cite{ZhuSatake}. This is obtained by comparing nearby cycles along
two directions over the two-dimensional base $\O[u]$ and is
used in proving the identity
(\ref{eq0.1}) above. In the case that
$G={\rm Res}_{F/{\mathbb Q}_p}{\rm GL}_n$ with $F/{\mathbb Q}_p$ tame, this description of the inertia action
confirms a conjecture in \cite{PappasRaI}.
Recall that, throughout the paper, we have restricted to the case that the
group $G$ splits over a tamely ramified extension of the base field.
This assumption is important for the construction of the group
scheme ${\mathcal G}$ over $\O[u]$; it is always satisfied when $p\geq 5$ and $G$ is absolutely simple
and is either adjoint or simply connected. A combination of our methods
with the idea of splitting models from \cite{PappasRaII} can be used to also deal with groups
that are Weil restrictions of scalars of tame groups down
from a (possibly) wildly ramified extension. In particular, we can define local models
and show Theorem \ref{thm01} in such cases. However,
since this paper is already too long we leave this for another
occasion. Of course, a general reductive group is
isogenous to a product of Weil restrictions of absolutely simple adjoint groups
and so this allows us to handle most cases.
Let us now give an overview of the various sections of the
paper and explain some aspects of our constructions.
In \S 1 we discuss preliminaries on reductive groups over local fields; the main emphasis
is on obtaining forms of a split group by descent
as inner twists of a quasi-split form.
In \S 2 we generalize this approach and give a certain reductive group
scheme $\underline G$ over $\O[u, u^{-1}]$ which
specializes to $G$ under the base change $\O[u, u^{-1}]\to F$ given by
$u\mapsto \varpi$. Here, a crucial observation is
that $\O[u, u^{-1}]\to F$ identifies the \'etale fundamental group of $\O[u,u^{-1}]$
with the tame quotient of the Galois group of $F$; our tameness hypothesis enters this way. Our construction of the ``parahoric type'' smooth affine
group schemes ${\mathcal G}$ over $\O[u]$ is given in \S 3 (Theorem \ref{grpschemeThm}).
Here is a brief outline of the construction of ${\mathcal G}$: When $G$ is
split over $F$ the existence of such a smooth group
scheme ${\mathcal G}$ follows from \cite{BTII}. However, showing that this is affine is not so straightforward.
We do this in two stages. In the case that $K$ is contained in a hyperspecial
subgroup, we realize ${\mathcal G}$ as a dilation of the
corresponding Chevalley group scheme over $\O[u]$ along a subgroup supported along $u=0$.
In general, still for $G$ split, we reduce to the above case by using the fact that there is always a finite ramified extension $L/F$
such that the stabilizer of $x$ in $G(L)$ is contained in a hyperspecial subgroup. Next, we construct ${\mathcal G}$
when $G$ is quasi-split (and splits over a tamely ramified extension) by taking fixed points of
a corresponding group scheme for the split form. Finally, the general case is obtained by descent
along an \'etale extension of $\O[u]$; this descent resembles a corresponding argument in \cite{BTII}.
In \S 4, we give several examples and eventually explain how,
when $G$ is a classical group and $K$ is the connected stabilizer of a self-dual lattice chain (cf.
\cite{BTclassI}, \cite{BTclassII}), we can realize concretely
the group schemes ${\mathcal G}$ as (the neutral components of) the automorphism group schemes of certain self-dual $\O[u]$-lattice
chains.
In \S 5, we prove that the global affine Grassmannian
${\rm Gr}_{{\mathcal G}, X}$ is represented by an ind-scheme
which is eventually shown to be ind-proper over $X$. The strategy here is to first demonstrate that we can
find a faithful linear representation ${\mathcal G}\rightarrow {\rm GL}_n$ such that the quotient ${\rm GL}_n/{\mathcal G}$
is represented by a quasi-affine scheme. Given this we can reduce
the proof of representability of ${\rm Gr}_{{\mathcal G}, X}$ to
the standard case of ${\mathcal G}={\rm GL}_n$.
Here we are dealing with objects over the two-dimensional base
$X={\mathbb A}^1_\O$ and we need to work harder than in the usual
situation in which the base is a smooth curve over a field. For example, it is not trivial to show that a smooth group scheme ${\mathcal G}$ with connected fibers over a two-dimensional regular base can be embedded as a closed subgroup scheme of ${\rm GL}_n$; this was proven by Thomason \cite{ThomasonEqRes}.
We can upgrade this to also show that there is such an embedding
with ${\rm GL}_n/{\mathcal G}$ a quasi-affine scheme; this is done in the appendix (\S 10).
Also, general
homogeneous spaces over a (regular) two dimensional base are not
always represented by schemes; this complicates our proof.
Raynaud
asked whether the quotients of the form ${\mathcal H}/{\mathcal G}$, where
${\mathcal G}$ is a closed subgroup scheme of ${\mathcal H}$ and both ${\mathcal G}$ and ${\mathcal H}$
are smooth affine with connected fibers over a normal base, are represented by schemes.
In the appendix, we also give an affirmative answer in the case
when the base is excellent regular Noetherian and of Krull dimension two.
Our
definition of the local models and certain generalizations is then given in
\S 6. In \S 7, we describe the relation of local models to integral
models of Shimura varieties and explain why in the PEL case our
local models coincide with the (corrected) local models of
Rapoport-Zink, Pappas-Rapoport and others. This uses our description
of the group schemes ${\mathcal G}$ for classical groups given in \S 4 and the work of Rapoport and Zink.
In
\S 8 we show Theorem \ref{thm01} and other related results.
Finally, \S 9 is devoted to the study of nearby cycles and of the
semi-simple trace of Frobenius; we show Theorem \ref{thm02}, Theorem
\ref{thm03} and
the identity (\ref{eq0.1}).
\smallskip
\noindent{\bf Notations:} In this paper, $\O$ denotes a discrete
valuation ring with fraction field $F$ and perfect residue field $k$
of characteristic $p>0$. Most of the time $F$ will be a finite
extension of the field of $p$-adic numbers. Usually, $G$ will denote
a (connected) reductive group over $F$ and ${\mathcal B}(G, F)$ will be the
Bruhat-Tits building of $G(F)$ (here by this we mean the ``enlarged''
building).
As usual, we will denote by $G_{\rm der}$, resp. $G_{\rm ad}$, the
derived subgroup, resp. the adjoint quotient of $G$, and by $G_{\rm
sc}$ the simply-connected cover of $G_{\rm der}$. If $A$ is an
affine algebraic group over a field $k$, we will use $\mathbb{X}^\bullet(A)$
(resp. $\mathbb{X}_\bullet(A)$) to denote the character group (resp. cocharacter
group) over the separable closure $\bar k^s$. Then the Galois group
${\rm Gal}(\bar k^s/k)$ acts on $\mathbb{X}^\bullet(A)$ and $\mathbb{X}_\bullet(A)$. We will
often write $X={\rm Spec } (\O[u])$ for the affine line over ${\rm Spec } (\O)$.
We will denote by $R[[u]]$ the ring of formal power series in the
variable $u$ with coefficients in $R$; then $R((u))=R[[u]][u^{-1}]$
is the corresponding ring of formal Laurent series. If $Z$ is
a set with an action of a group $\Gamma$, we will write $Z^\Gamma$
for the subset of elements $z$ of $Z$ which are fixed by the action:
$\gamma\cdot z=z$ for all $\gamma\in \Gamma$.
\smallskip
\noindent{\bf Acknowledgements:} The authors would like to warmly
thank M.~Rapoport, B.~Conrad, T.~Haines and B. Levin for useful
discussions and comments.
\bigskip
\section{Preliminaries}\label{Preliminaries}
\setcounter{equation}{0}
\subsection{} Throughout this chapter, unless we state
the contrary, we suppose that $F$ is either a $p$-adic field (i.e
a finite extension of ${\mathbb Q}_p$), or the field of Laurent power series
$k((t))$ with $k$ finite or algebraically closed of characteristic
$p$. Recall $\O$ is the valuation ring of $F$. We fix a separable
closure $\bar F^s$ of $F$ and denote by $\breve{F}$ the maximal
unramified extension of $F$ in $\bar F^s$.
\subsection{Pinnings and quasi-split forms}
\subsubsection{}\label{sss1a1} We refer to \cite{SGA3} or \cite{ConradNotes} for background on
reductive group schemes over a general base. Recall that a pinned Chevalley group over $\O$ is the data $(H,
T_H, B_H, e)$ where $H$ is a Chevalley (reductive, connected, split)
group scheme over $\O$, $B_H$ is a Borel subgroup scheme of $H$,
$T_H$ a maximal split torus contained in $B_H$ and $e=\sum_{ a\in
\Delta} e_{ a}\in {\rm Lie}(B_H)$, where $\Delta$ is the set of
simple roots, $e_{ a}$ is a generator of the rank $1$ $\O$-module $
{\rm Lie}({ U}_{ a})$. Here, $ U_{ a}$ is the root group
corresponding to $ a$. The group of automorphisms $\Xi_H={\rm
Aut}(\Sigma_H)$ of the based root datum $\Sigma_H=(\mathbb{X}^\bullet(T_H), \Delta,
\mathbb{X}_\bullet(T_H), \Delta^\vee)$ is canonically isomorphic to the group of
automorphisms of $(H, T_H, B_H, e)$. We will call an element
$\gamma$ of $\Xi_H$ a diagram automorphism of $(H, T_H, B_H, e)$.
\subsubsection{}\label{sssPQS} Let $S$ be a (finite type, separated) scheme over ${\rm Spec } (\O)$.
\begin{Definition}\label{defPQS}
A pinned quasi-split (isotrivial) form of the group $H$ over $S$ is
a quadruple $(
\underline G, \underline T, \underline B, \underline e)$, where $\underline G$ is a reductive group scheme over
$S$ (see \cite{SGA3}), $ \underline T\subset \underline B$ are closed subgroup
schemes of $\underline G$ and $\underline e\in{\rm Lie} (\underline B)$ a section such
that locally for the finite \'etale topology of $S$, $(\underline G,\underline
T,\underline B,\underline e) \simeq (H,T_H,B_H, e)\times_\O S$. We will denote
the groupoid of pinned quasi-split (isotrivial) forms of the group
$H$ over $S$ by ${\rm PQ}(S)$.
\end{Definition}
\begin{Remark}
{\rm In this paper we only need isotrivial forms, i.e forms that
split after a finite \'etale extension. We could also consider forms
that split over a general \'etale extension but we choose not to do
this here.
}
\end{Remark}
We will give a more combinatorial description of ${\rm PQ}(S)$. We
say a scheme $\eta$ locally of finite presentation over $S$ is an
isotrivial $\Xi_H$-torsor if it admits a right action of $\Xi_H$ and
can be trivialized after a finite \'etale cover $S'\to S$.
\begin{prop}\label{propPQS} The category ${\rm PQ}(S)$ is equivalent to the
category given by quintuples $(H,T_H,B_H, e, \eta)$, where
$(T_H,B_H, e)$ is a pinning of $H$ and $\eta\to S$ is an isotrivial
$\Xi_H$-torsor.
\end{prop}
\begin{proof}
Indeed, given such a $\Xi_H$-torsor $\eta$ we can construct a
reductive group scheme $\underline G$ over $S$ together with subgroup
schemes $\underline T, \underline B$ and $\underline e\in {\rm Lie}(\underline B)$ via
twisting. Conversely, let $(\underline G,\underline T, \underline B, \underline e)$ be a
quadruple corresponding to an object of ${\rm PQ}(S)$. Let
\[\eta(S')={\rm Isom}((\underline G_{S'},\underline T_{S'},\underline B_{S'},\underline e_{S'}), (H ,T_{H },B_{H },e )\times_\O S').\]
This is represented by a closed subscheme of ${\rm Isom}(\underline G,
H_S)$; this isomorphism scheme is a $\underline{\rm Aut}(H)$-torsor, and
the latter group scheme is separated and smooth over $S$. Therefore,
the morphism $\eta\to S$ is also separated and locally of finite
presentation; the rest follows from the above.
\end{proof}
\subsubsection{} \label{groupPQS}
Now let $s$ be a geometric point of $S$. Recall that there is an
equivalence between the category of isotrivial $\Xi_H$-torsors on
$S$ and the following groupoid: objects are continuous group
homomorphisms $\rho:\pi_1(S,s)\to \Xi_H$, and the morphisms between
$\rho_1$ and $\rho_2$ are elements $h\in \Xi_H$ such that
$\rho_2=h\rho_1h^{-1}$. Therefore, we can describe ${\rm PQ}(S)$ as
the category of quintuples $(H,T,B, e, \rho: \pi_1(S,s)\to \Xi_H)$.
This description of ${\rm PQ}(S)$ admits an immediate generalization
as follows. Let $\Gamma$ be a profinite group. We denote by ${\rm
PQ}(\Gamma)$ the category of quintuples $(H,T,B, e,\rho:\Gamma
\to \Xi_H)$. If $\pi:\pi_1(S,s)\to \Gamma$ is a continuous group
homomorphism, there is a functor ${\rm PQ}(\Gamma)\to {\rm PQ}(S)$
which is a full embedding if $\pi$ is surjective. We denote the
image of this functor by ${\rm PQ}^\Gamma(S)$.
Now, suppose $S_1\to S_2$ is a morphism of schemes, and $s_1$, $s_2$
corresponding geometric points of $S_1$ and $S_2$ so that we have
$\pi_1(S_1,s_1)\to \pi_1(S_2,s_2)$. Suppose that there is a
surjective map $\pi_1(S_2,s_2)\to \Gamma$ such that the composition
$\pi_1(S_1,s_1)\to \Gamma$ is also surjective. Then pullback along
$S_1\to S_2$ induces an equivalence of categories
\begin{equation}\label{ssstame}
{\rm PQ}^\Gamma(S_2)\xrightarrow{\sim} {\rm PQ}^{\Gamma}(S_1).
\end{equation}
\subsection{Fixed points}
Here, we give some useful statements about the fixed points of an
automorphism of a Chevalley group scheme.
\subsubsection{}
If $\gamma$ is an automorphism of a scheme $Z$, we will denote by
$Z^\gamma$ the closed subscheme of fixed points for the action of
$\gamma $ on $Z$ so that, by definition, we have
$Z^\gamma(R)=Z(R)^\gamma:=\{z\in Z(R)\ |\ \gamma\cdot z=z\}$. We
start with the useful:
\begin{prop}\label{locconstant}
Let $H$ be a Chevalley group scheme (connected, reductive and split)
over $\O$ with a pair $(T, B)$ of a maximal split torus $T$ and a
Borel subgroup scheme $B$ that contains $T$. Suppose that $\gamma$
is an automorphism of $H$ of order $e$ prime to $p$ that preserves both
$T$ and $B$. Denote by $\Gamma=\langle \gamma\rangle$ the group
generated by $\gamma$. Suppose that $E$ is a separably closed field
which is an $\O$-algebra. Then the group of connected components
$\pi_0(H_E^\gamma)$ of $H_E^\gamma$ is commutative with order that
divides $e$ and is independent of $E$.
\end{prop}
\begin{proof} We start with the case of tori.
When $E={\mathbb C}$ the isomorphisms in the Lemma below can be found in
\cite[Lemma 2.2]{KottCuspidal}:
\begin{lemma}\label{torus}
Suppose that $T$ is a split torus over $\O$ which supports an action
of the cyclic group $\Gamma=\langle \gamma\rangle$ of order $e$
prime to $p$. Let $E$ be a separably closed field which is an
$\O$-algebra. We have
$$
\pi_0((T_E)^\gamma)\simeq {\rm H}^1(\Gamma, \mathbb{X}_\bullet(T)), \quad {\rm
H}^1(\Gamma, T(E))\simeq {\rm H}^2(\Gamma, \mathbb{X}_\bullet(T)),
$$
and so in particular these groups are finite of order annihilated by
$e$ and are independent of $E$. The fppf sheaf over ${\rm Spec } (\O)$
associated to $R\mapsto {\mathrm H}^1(\Gamma, T(R))$ is represented by a
finite \'etale commutative group scheme of rank equal to the order
of ${\rm H}^2(\Gamma, \mathbb{X}_\bullet(T))$.
\end{lemma}
\begin{proof}
Consider the norm homomorphism $N=\prod_{i=0}^{e-1}\gamma^i: T \to T
$. By comparing Lie algebras, we can see that the image $N(T_E)$ is
the connected component $(T^\gamma_E)^0$ and so
$\pi_0(T^\gamma_E)=T^\gamma(E)/N(T(E))\simeq {\mathrm H}^2(\Gamma,
T(E))={\mathrm H}^2(\Gamma, \mathbb{X}_\bullet(T)\otimes_{\mathbb Z} E^*)$. Now consider the
group $\mu_{e^\infty}(E)$ of roots of unity of order a power of $e$
in $E$. The quotient $E^*/\mu_{e^\infty}$ is uniquely divisible by
powers of $e$ and we can conclude that ${\mathrm H}^2(\Gamma,
\mathbb{X}_\bullet(T)\otimes_{\mathbb Z} E^*)\simeq {\mathrm H}^2(\Gamma, \mathbb{X}_\bullet(T)\otimes_{\mathbb Z}
\mu_{e^\infty})$. Since $\mu_{e^\infty}(E)\simeq {\mathbb Q}_e/{\mathbb Z}_e$ and
${\mathbb Q}_e$ is similarly uniquely divisible, we obtain ${\mathrm H}^2(\Gamma,
\mathbb{X}_\bullet(T)\otimes_{\mathbb Z} \mu_{e^\infty})\simeq {\mathrm H}^1(\Gamma,
\mathbb{X}_\bullet(T)\otimes_{\mathbb Z} {\mathbb Z}_e)={\mathrm H}^1(\Gamma, \mathbb{X}_\bullet(T))$. The proof of
the second isomorphism is similar. Now, we can see that the fppf
sheaf associated to the presheaf $R\mapsto {\mathrm H}^1(\Gamma, T(R))$ is
given by the quotient group scheme ${\rm ker}(N)/T^{\gamma-1}$ and
so the last statement also follows from the above.
\end{proof}
\begin{Remark}\label{flasque}
{\rm Following \cite{ColliotFlasques}, we call a split torus $T$
over $\O$ with $\Gamma$-action $\Gamma$-{\sl quasi-trivial}, if
$\mathbb{X}_\bullet(T)$ is a permutation $\Gamma$-module, i.e if $\mathbb{X}_\bullet(T)$ has
a ${\mathbb Z}$-basis which is stable under the action of $\Gamma$. We will
call the split torus $T$ over $\O$ with $\Gamma$-action
$\Gamma$-{\sl flasque} if for all subgroups $\Gamma'\subset\Gamma$,
we have ${\rm H}^1(\Gamma', \mathbb{X}_\bullet(T))=(0)$. Notice that by
Shapiro's Lemma, if $T$ is $\Gamma$-quasi-trivial, then $T$ is also
$\Gamma$-flasque. By the above lemma, if $T$ is $\Gamma$-flasque,
then $T^\gamma$ is connected. }
\end{Remark}
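As a simple illustration of Lemma \ref{torus} (this example is not
used later): take $T={\mathbb G}_m$ over $\O$ with $\gamma$ acting by
inversion, so $e=2$ and $\gamma$ acts on $\mathbb{X}_\bullet(T)\simeq {\mathbb Z}$
by $-1$. For cyclic $\Gamma$ we have ${\rm H}^1(\Gamma,
\mathbb{X}_\bullet(T))={\rm ker}(N)/(\gamma-1)\mathbb{X}_\bullet(T)$ with
$N=1+\gamma=0$, so
$$
\pi_0((T_E)^\gamma)\simeq {\rm H}^1(\Gamma, \mathbb{X}_\bullet(T))={\mathbb Z}/2{\mathbb Z},
$$
in agreement with the direct computation $T^\gamma_E=\mu_{2, E}=\{\pm 1\}$,
which has two connected components since $e=2$ is prime to $p$.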
We now continue with the proof of Proposition \ref{locconstant}.
We will first discuss the cohomology set ${\mathrm H}^1(\Gamma, H(E))$. We
will show that ${\mathrm H}^1(\Gamma, H(E))$ is finite of cardinality
independent of $E$. For simplicity, we will omit $E$ from the
notation and write $H$ instead of $H(E)$ etc. By \cite[Lemma
7.3]{SteinbergMemoirs}, every element of $H$ is $\gamma$-conjugate
to an element of $B$. This implies that the natural map
${\mathrm H}^1(\Gamma, B)\to {\mathrm H}^1(\Gamma, H)$ is surjective. Now notice
that ${\mathrm H}^1(\Gamma, T)={\mathrm H}^1(\Gamma, B)$. (Indeed, ${\mathrm H}^1(\Gamma,
U)=(0)$ because $\Gamma$ has order prime to the characteristic of
$E$. In fact, for the same reason, ${\mathrm H}^1(\Gamma, {}_cU)=(0)$ where
${}_cU$ is $U$ with $\gamma$-action twisted by a cocycle $c$ in
$Z^1(\Gamma, B)$. Since $T=B/U$, the long exact sequence of
cohomology and \cite[Cor. 2, I. \S 5]{SerreGaloisCoh} implies that
${\mathrm H}^1(\Gamma, B)\to {\mathrm H}^1(\Gamma, T)$ is injective. The splitting
$T\to B$ shows that this map is also surjective.) Now suppose that
$t$, $t'\in T$
give cohomologous $1$-cocycles in $H$, i.e
$$
t'=x t \gamma(x)^{-1}, \ \ \hbox{\rm for some $x\in H$}.
$$
Using the Bruhat decomposition $H = \bigcup_{n\in N} UnU$, where $N$ is the normalizer of $T$,
we write $x=u_1nu_2$. Then we get
$$
t'=u_1nu_2 t\gamma(u_2)^{-1}\gamma(n)^{-1}\gamma(u_1)^{-1}, \quad
\hbox{\rm or}\quad t'\gamma(u_1)\gamma(n)\gamma(u_2)=u_1nu_2t.
$$
Since $T$ normalizes $U$, this implies $t'\gamma(n)=nt$, so
$t'=nt\gamma(n)^{-1}$. This shows that two classes $[t']$, $[t]$ in
${\mathrm H}^1(\Gamma, T)$ are identified in ${\mathrm H}^1(\Gamma, H)$ if and only
if they are identified in ${\mathrm H}^1(\Gamma, N)$. Now use the exact
sequence
$$
1\to T^\gamma\to N^\gamma\to W^\gamma\to {\mathrm H}^1(\Gamma, T)\to
{\mathrm H}^1(\Gamma, N),
$$
\cite[Prop. 39 (ii), I. \S 5]{SerreGaloisCoh}, and the above to
conclude that ${\mathrm H}^1(\Gamma, H)$ can be identified with the set of
orbits of a right action of $W^\gamma$ on ${\mathrm H}^1(\Gamma, T)$. This
action is given as follows (\cite[I. \S 5, 5.5]{SerreGaloisCoh}).
Suppose $w$ is in $W^\gamma$; lift $w$ to $n\in N$ and consider
$t_w:=n^{-1}\gamma(n)\in T$. Then we set $[t]\cdot w=[n^{-1}tn\cdot
t_w]=[n^{-1}t\gamma(n)]$. By Lemma \ref{torus}, ${\mathrm H}^1(\Gamma,
T)\simeq {\mathrm H}^2(\Gamma, \mathbb{X}_\bullet(T))$ is independent of $E$. We can now
easily see, by picking the lift $n$ of $w\in W^\gamma$ in $N(\O)$,
that the set of orbits ${\mathrm H}^1(\Gamma, T)/W^\gamma$ is also
independent of the field $E$.
Let us now consider the group of connected components
$\pi_0(H^\gamma_E)$.
Write $T_{\rm der}$, $T_{\rm sc}$, resp. $B_{\rm der}$, $B_{\rm
sc}$, for the preimages of $T$, $B$, in the derived group $H_{\rm
der}$, simply-connected cover $H_{\rm sc}$ of the derived group of
$H$. The automorphism $\gamma$ gives corresponding automorphisms of
$H_{\rm der}$, $H_{\rm sc}$ that preserve the pairs $(T_{\rm der},
B_{\rm der})$, $(T_{\rm sc}, B_{\rm sc})$. By \cite[Theorem
8.2]{SteinbergMemoirs}, $H^\gamma_{\rm sc}$ is connected. Following
the arguments in the proof of \cite[Prop.-Def. 3.1]{ColliotFlasques}
mutatis-mutandis, we see that we can find a central extension of
reductive split groups with $\Gamma$-action over $\O$
\begin{equation}\label{flasRes1}
1\to S\to H'\to H\to 1
\end{equation}
where $S$ is a $\Gamma$-flasque torus, $H'_{\rm der}$ is simply
connected and $H'/H'_{\rm der}=D$ is a $\Gamma$-quasi-trivial torus.
In fact, we can make sure that $\gamma$ preserves a corresponding
pair $(T', B')$ of $H'$ with $1\to S\to T'\to T\to 1$. As before, we
will now use the same letters to denote the base changes of these
groups to $E$. We see that the exact sequence
$$
1\to H'_{\rm der}\to H'\to D\to 1
$$
gives
$$
1\to H'^\gamma_{\rm der}\to H'^\gamma\to D^\gamma\to {\mathrm H}^1(\Gamma,
H'_{\rm der}).
$$
By \cite[Theorem 8.2]{SteinbergMemoirs}, $H'^\gamma_{\rm der}$ is
connected; similarly $D^\gamma$ is connected by Lemma \ref{torus},
cf. Remark \ref{flasque}. We can conclude that $H'^\gamma$ is
connected. Now (\ref{flasRes1}) gives
$$
1\to S^\gamma\to H'^\gamma\to H^\gamma\to {\mathrm H}^1(\Gamma, S)\to
{\mathrm H}^1(\Gamma, H').
$$
Since both $S^\gamma$ and $H'^\gamma$ are connected (by Remark
\ref{flasque} and the above respectively), we obtain an exact sequence
$$
1\to \pi_0(H^\gamma)\to {\mathrm H}^1(\Gamma, S)\to {\mathrm H}^1(\Gamma, H').
$$
Since $S$ is central in $H'$, the connecting map $\pi_0(H^\gamma)\to
{\mathrm H}^1(\Gamma, S)$ is a group homomorphism and $\pi_0(H^\gamma)$ is
identified with the subgroup of elements of ${\mathrm H}^1(\Gamma, S)$ that
map to the trivial class in ${\mathrm H}^1(\Gamma, H')$. We can now conclude
using the above results on ${\mathrm H}^1$ applied to $H=S$, $H=H'$.
\end{proof}
\subsection{Reductive groups over local fields.}\label{ss1a}
In this section, we give some preliminaries on reductive groups over
the local field $F$. In particular, we explain how we can present,
using descent, such a group $G$ as a form of a split group $H$.
\subsubsection{}\label{sss1a2a} Let $G$ be a connected reductive group over $F$.
Suppose that $G$ splits over the finite Galois extension $\tilde
F/F$ with Galois group $\Gamma={\rm Gal}(\tilde F/F)$. Denote by $H$
the Chevalley group scheme over ${\mathbb Z}$ which is the split form of $G$,
i.e.\ we have $G\otimes_F \tilde F\simeq H\otimes_{\mathbb Z} \tilde F$. In what
follows, we fix a pinning $(T_H, B_H, e)$ of $H$ over $\O$.
\subsubsection{}\label{sss1a3} Let $A$ be a maximal $F$-split torus of $G$.
Also, let $S$ be a maximal $\breve F$-split torus in $G$ which contains
$A$ and is defined over $F$. Such a torus exists by
\cite[5.1.12]{BTII}. Let $T=Z_G(S)$, $M=Z_G(A)$. Since $G_{\breve F}$ is
quasi-split, $T$ is a maximal torus of $G$ which is defined over $F$
and splits over $\tilde F$.
As in \cite[16.4.7]{SpringerBook}, by adjusting the isomorphism
$G\otimes_F \tilde F\simeq H\otimes_{\mathbb Z} \tilde F$, we may identify $T$ with
the maximal split torus $T_H\otimes_\O\tilde F$ of the split form $H$
of $G$ given by the pinning. We now represent the indexed root datum
(\cite[16.2]{SpringerBook}) of $G$ over $F$ as a collection
$(\mathbb{X}^\bullet(T), \Delta, \mathbb{X}_\bullet(T), \Delta^\vee, \Delta_0, \tau)$ where
$\tau: \Gamma\to \Xi_H$ is a group homomorphism and $\Delta_0$ is a
subset of $\Delta$ which is stable under the action of the group
$\Gamma$ via $\tau$. Then using Proposition \ref{propPQS} we see
that $\tau$ together with the pinning $(T_H, B_H, e)$ of $H$ gives a
pinned quasi-split form $(G^*, T^*, B^*, e^*)$ of $H$ over $F$.
Concretely,
$$
G^*=({\rm Res}_{\tilde F/F}(H\otimes_\O\tilde F))^\Gamma,
$$
where $\gamma\in \Gamma$ acts on $H\otimes_\O\tilde F$ as
$\tau(\gamma)\otimes \gamma$. This is the quasi-split form of $G$
over $F$.
Let $P_0\supset B_H$ be the standard parabolic subgroup of $H$ that
corresponds to $\Delta_0$ and let $M_0$ be the Levi subgroup of
$P_0$. Since $\tau$ leaves $\Delta_0$ stable, we can also similarly
consider $M^*=M^*_0=({\rm Res}_{\tilde F/F}(M_0\otimes_\O\tilde
F))^\Gamma$ which is also quasi-split over $F$. Then $T^*=({\rm
Res}_{\tilde F/F}(T_H\otimes_\O\tilde F))^\Gamma\subset M^*$. Denote by
$A^*$ the maximal $F$-split subtorus of $T^*$ and by $S^*$ the
maximal $\breve F$-split subtorus of $T^*$ which is clearly defined over
$F$ and which is a maximal $\breve F$-split torus of $G^*$.
By \cite[16.4.8]{SpringerBook}, $G$ is an inner twist of $G^*$ which
is given by a class ${\mathrm H}^1(\Gamma, G^*_{\rm ad}(\bar F^s))$. Then
there is a ${\rm Gal}(\bar F^s/F)$-stable $G^*_{\rm ad}(\bar
F^s)$-conjugacy class of isomorphisms $\psi: G_{\bar
F^s}\xrightarrow{\sim } G^*_{\bar F^s}$ such that
\begin{equation}
\gamma\mapsto \on{Int}(g_\gamma)=\psi\cdot
\gamma(\psi)^{-1}=\psi\gamma\psi^{-1}\gamma^{-1}\in G^*_{\rm
ad}(\bar F^s)
\end{equation}
gives a $1$-cocycle that represents the corresponding class in
${\mathrm H}^1(\Gamma, G^*_{\rm ad}(\bar F^s))$.
Notice that $G_{\breve F}\simeq G^*_{\breve F}$ since they are both
quasi-split and they are inner forms of each other. This implies
that we can realize the inner twist $G$ over $\breve{F}$. (In fact,
over a finite extension of $F$ contained in $\breve{F}$.) More
precisely, any inner twist $G_{\breve F}\otimes_{\breve F}\bar
F^s\xrightarrow{\sim} G^*_{\breve{F}} \otimes_{\breve F}\bar F^s$ is
conjugate over $\bar F^s$ to an isomorphism $\psi:
G_{\breve F}\xrightarrow{\sim} G^*_{\breve F}$ (cf.
\cite[11.2]{HainesRostami}). Then
\begin{equation}
\on{Int}(g)=\psi\cdot
\sigma(\psi)^{-1}=\psi\sigma\psi^{-1}\sigma^{-1}\in G^*_{\rm
ad}(\breve F)
\end{equation}
is an inner automorphism of $G^*_{\breve F}$, where $\sigma$ is the
topological generator of ${\rm Gal}(\breve F/F)\simeq\hat{{\mathbb Z}}$ given by the
lift of the (arithmetic) Frobenius. In what follows, we will choose
this isomorphism $\psi$ more carefully. For this purpose, it is
convenient to formulate the following:
\begin{Definition}\label{rigid}
A {\rm rigidification} of $G$ is a triple $(A,S,P)$, where $A$ is a
maximal $F$-split torus of $G$, $S\supset A$ a maximal $\breve F$-split
torus of $G$ defined over $F$, and $P\supset M=Z_G(A)$ a minimal
parabolic subgroup defined over $F$. A rigidified group over $F$ is
a reductive group over $F$ together with a rigidification. The
groupoid of rigidified groups over $F$ will be denoted by ${\rm RG}(F)$.
\end{Definition}
\begin{lemma}\label{transitive}
The group $G_{\rm ad}(F)$ acts transitively on the set of
rigidifications of $G$.
\end{lemma}
\begin{proof}Let $(A_1,S_1,P_1)$ and $(A_2,S_2,P_2)$ be two
rigidifications. After conjugation, we can assume that $A_1=A_2=:A$ and
$P_1=P_2$. Let $M=Z_G(A)$. Then we can replace $G$ by $M$. In
addition, we can assume that it is of adjoint type; hence we reduce
to the case that $G$ is adjoint anisotropic. We need to show that
$S_1$ and $S_2$ are conjugate in this case.
As $G$ is anisotropic, there is a unique parahoric group scheme of
$G$ (the Iwahori group scheme, see
\cite[5.2.7]{BTII}); let us denote it by ${\mathcal P}$. By the construction of ${\mathcal P}$
in \cite{BTII}, we see that the N\'{e}ron models ${\mathcal S}_1$
and ${\mathcal S}_2$ of $S_1$ and $S_2$ map naturally into ${\mathcal P}$ as
closed subgroup schemes. Therefore, ${\mathcal S}_1(\breve\O)$,
${\mathcal S}_2(\breve\O)\subset {\mathcal P}(\breve\O)$. We can therefore choose
$g$ in ${\mathcal P}(\breve\O)$ such that $gS_1g^{-1}=S_2$. Since ${\mathcal P}$
is smooth and has connected fibers,
${\mathrm H}^1(\hat{\mathbb Z},{\mathcal P}(\breve\O))=(1)$ (see \cite[3.4 Lemme 2]{BTIII}), and a standard argument shows
that we can choose $g\in {\mathcal P}(\O)\subset G(F)$.
\end{proof}
\begin{prop}\label{propopsi}
Denote by ${M'}^*:= N_{G^*_{\on{ad}}}(M^*)\cap P^*_{\on{ad}}$ the
Levi subgroup of $G^*_{\on{ad}}$ that corresponds to $M^*\subset
G^*$. The inner twist $ G_{\bar F^s}\xrightarrow{\sim } G^*_{\bar
F^s}$ is conjugate over $\bar F^s$ to an isomorphism $\psi:
G_{\breve F}\xrightarrow{\sim} G^*_{\breve F}$ which is such that ${\rm
Int}(g)=\psi\cdot \sigma(\psi)^{-1}$ lies in the normalizer
$N_{{M'}^*}(S^*)$ of the torus $S^*$ in ${M'}^*$.
\end{prop}
\begin{proof} We pick a rigidification $(A,S,P)$ of $G$. Recall $M= Z_G(A)$. Then
$M^*$ is the unique standard Levi of $G^*$ (i.e. $T^*\subset M^*$)
corresponding to $M$. The group $M^*$ is the quasi-split inner form
of $M$ (\cite[16.4.7]{SpringerBook}). Indeed, let $P^*=B^*M^*=({\rm
Res}_{\tilde F/F}(P_0\otimes_\O\tilde F))^\Gamma$ (a standard parabolic
subgroup in $G^*$) correspond to $P$. Observe that there is an inner
twist $\psi:G_{\breve F}\xrightarrow{\sim} G^*_{\breve F}$ sending
$P_{\breve F}$ to $P^*_{\breve F}$ and $M_{\breve F}$ to $M^*_{\breve F}$. Now
$\on{Int}(g)= \psi \sigma \psi^{-1}\sigma^{-1}$ preserves
$M^*_{\breve F}\subset P^*_{\breve F}$. In particular, $g\in {M'}^*(\breve F)$,
where ${M'}^*:= N_{G^*_{\on{ad}}}(M^*)\cap P^*_{\on{ad}}$ is the
corresponding Levi in $G^*_{\on{ad}}$. Therefore,
$\on{Int}(g):M^*_{\breve F}\to M^*_{\breve F}$ is inner. In particular, when
restricted to the (connected) centers of $M$ and $M^*$, $\psi$ is an
isomorphism of $F$-groups. Then it automatically sends $A= Z(M)^0_s$
(the maximal split torus in $Z(M)^0$) to $ Z(M^*)^0_s\subset A^*$.
By composing by an inner automorphism of $G^*(\breve F)$ induced by an
element in $M^*(\breve F)$, we can further assume that $\psi:M_{\breve F}\to
M^*_{\breve F}$ sends $S_{\breve F}\to S^*_{\breve F}$.
\end{proof}
Let us denote
\begin{equation}\label{N'}
{N'}^*=N_{{M'}^*}(T^*_{\rm ad})=N_{{M'}^*}(S^*_{\rm ad}), \quad
N^*_{\rm ad}:={\rm Im}({N'}^*\subset {M'}^*\to M^*_{\rm ad}).
\end{equation}
\begin{cor}\label{coropsi}
With the above notations, there is a unique class $[c^{\rm rig}]\in
{\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))$ whose image $[c]$ under
\[{\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))\to {\mathrm H}^1(\hat{\mathbb Z}, {M'}^*(\breve F))\to {\mathrm H}^1(\hat{\mathbb Z}, G^*_{\on{ad}}(\breve F))\subset {\mathrm H}^1(F,G^*_{\on{ad}}),\]
gives $G$ as an inner twist of its quasi-split form $G^*$, where we
identify $\hat{\mathbb Z}={\rm Gal}(\breve F/F)$ by sending $1$ to the Frobenius
$\sigma$.
\end{cor}
\begin{proof}The existence is given by Proposition
\ref{propopsi} after choosing a rigidification $(A,S,P)$ of $G$. Let
$[c']$ be another class in ${\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))$ that also
maps to $[c]$. Then via twisting $(G^*,S^*,M^*,P^*)$ by $c'$, we
obtain $(G',S',M',P')$, where $S'$ is an $F$-torus of $G'$,
$M'\supset S'$ is an $F$-Levi subgroup of $G'$, and $P'\supset M'$ is
an $F$-parabolic subgroup of $G'$. Let $A'=Z(M')^0_s$ be an
$F$-split torus of $G'$. Since $G$ and $G'$ are isomorphic as
$F$-groups, $P'$ is a minimal $F$-parabolic subgroup of $G'$,
$M'\subset P'$ is a minimal $F$-Levi subgroup of $G'$, and therefore
$A'$ is a maximal $F$-split torus. In addition, $S'$ is a maximal
$\breve F$-split torus of $G'$ defined over $F$. Therefore, there is an
$F$-isomorphism $(G,A,S,P)\simeq (G',A',S',P')$ by Lemma
\ref{transitive}. We can now conclude that $[c']$ coincides with
$[c]$ given by Proposition \ref{propopsi}.
\end{proof}
\begin{Remark}{\rm Note that we do not claim that the map
${\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))\to {\mathrm H}^1(F,G^*_{\on{ad}})$ is injective.
In fact, the group ${N'}^*$ depends on $[c]$.}
\end{Remark}
\begin{Remark}{\rm Let $(G,A,S,P)$ be a rigidified group over $F$; then
$M=Z_G(A)$ is a minimal $F$-Levi of $G$ and $T=Z_G(S)$ a maximal
torus of $G$. Let $M'$, $T_{\rm ad}$ be the images of $M$, $T$ in
$G_{\rm ad}$. We can see that the elements in $G_{\rm ad}$ that fix
the triple $(A,S,P)$ are those elements in $M'$ that fix $T$ (or
equivalently fix $T_{\rm ad}$). Therefore, we obtain the following
exact sequence of $F$-groups
\[1\to N'\to {\rm Aut}(G,A,S,P)\to {\rm Out}(G)\to 1\]
where $N'=N_{M'}(T_{\rm ad})$.}
\end{Remark}
\subsubsection{}\label{1d3}
Now observe that the center of ${M'}^*=N_{G^*_{\on{ad}}}(M^*)\cap
P^*_{\on{ad}}$ is connected. To see this, recall that
$\mathbb{X}^\bullet(T^*_{\on{ad}})=Q(G^*_{\on{ad}})$, and $Q(M^*_{\on{ad}})$ is
just a direct factor of $Q(G^*_{\on{ad}})$, where for a connected
reductive group $L$, $Q(L)$ denotes its absolute root lattice.
Indeed, to specify $M^*_{\on{ad}}$ is the same as to choose a
${\rm Gal}(\bar{F}/F)$-stable subset $\Delta_{M^*}\subset \Delta_{G^*}$
of simple roots for $G^*_{\on{ad}}$, and $Q(M^*_{\on{ad}})$ is the
lattice generated by the simple roots in $\Delta_{M^*}$. Then the
center of ${M'}^*$, which is the subgroup of $T^*_{\on{ad}}$ defined
as the intersection of the kernels of all $a\in\Delta_{M^*}$,
is indeed an induced torus\footnote{The torus $Z^*$ is induced
because the Galois group permutes
$\Delta_{G^*}\setminus\Delta_{M^*}$.}, denoted by $Z^*$. We have
\begin{equation}\label{barM}
1\to Z^*\to {N'}^*\to N^*_{\rm ad}\to 1.
\end{equation}
Since $Z^*$ is induced, Hilbert's theorem 90 and Shapiro's lemma
imply that ${N'}^*(\breve F)\to N^*_{\on{ad}}(\breve F)$ is surjective.
Then, by \cite[I. \S 5.7]{SerreGaloisCoh} we obtain an exact
sequence of pointed sets
\begin{equation}\label{barMexact}
{\mathrm H}^1(\hat{\mathbb Z}, {N'}^*(\breve F))\hookrightarrow {\mathrm H}^1(\hat{\mathbb Z}, {N}_{\rm ad}^*(\breve F))\to {\mathrm H}^2(\hat{\mathbb Z}, Z^*(\breve F))
\end{equation}
with the first map injective. Write $Z^*=\prod_j{\rm
Res}_{F_j/F}{{\mathbb G}_{\rm m}}$. Then using Shapiro's Lemma we see
\begin{equation}\label{BrF}
{\mathrm H}^2(\hat{\mathbb Z}, Z^*(\breve F))=\prod_j {\rm Br}(F_j)\simeq \prod_j {\mathbb Q}/{\mathbb Z}.
\end{equation}
In the next subsection, we recall an explicit cocycle representing
the image of $[c^{\rm rig}]$ under ${\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))\to
{\mathrm H}^1(\hat{\mathbb Z},N^*_{\rm ad}(\breve F))$.
\subsubsection{}\label{1b4} Here we assume in addition that $F$ is a $p$-adic field, i.e.\ a finite extension of
${\mathbb Q}_p$. Then we can choose the inner twist $\psi$ of Proposition
\ref{propopsi} even more carefully: Indeed, recall that since
$M=Z_G(A)$, the group $M_{\rm ad}$ is anisotropic. Therefore, by
\cite[Satz 3]{KneserII}, $M_{\on{ad}}$ is isomorphic to $\prod_i
{\rm Res}_{E_i/F}(B_i^\times/E_i^\times)$, where $B_i$ are central
division algebras of degree $m_i$ over $E_i$, and $E_i$ are finite
extensions of $F$. Therefore, we have an isomorphism
\begin{equation}\label{isoMad}
\rho: M^*_{\on{ad}}\xrightarrow{\sim}\prod
{\rm Res}_{E_i/F}\on{PGL}(m_i),
\end{equation}
sending $S^*_{\on{ad}}$ to $\prod{\rm Res}_{E_i/F}\on{D}_{m_i}$, where
$\on{D}_n$ is the standard (diagonal) torus in $\on{PGL}(n)$.
\begin{lemma}\label{coxeter}
Suppose that $B$ is a central division algebra over the local field
$E$ of dimension $n^2$ (i.e.\ of degree $n$) with Brauer group invariant $r/n$, where $0<
r<n$ and ${\rm gcd}(r, n)=1$. Denote by $\tau$ the permutation
$(12\cdots n)$. If $\varpi$ is a uniformizer of $E$ we let $\underline
n:=\underline {n}_r(\varpi)$ be the element of $ \on{GL}(n,\breve{E})$
given by $\underline {n}(e_i)=e_{\tau^r(i)}$, if $i\neq n$, and $\underline
{n}(e_n)=\varpi\cdot e_{\tau^r(n)}$. Then there is an isomorphism
\[
\psi:B\otimes_E\breve{E}\xrightarrow{\sim} M_{n\times n}(\breve{E})
\]
so that $\psi\sigma\psi^{-1}\sigma^{-1}=\on{Int}(\underline n)$.
\end{lemma}
\begin{proof}
Suppose that $E_n/E$ is an unramified extension of degree $n$ and
$\sigma$ a generator of ${\rm Gal}(E_n/E)$.
Then we can represent $B$ as the associative $E_n$-algebra with generator $\Pi$
and relations $\Pi^n=\varpi$, $a\cdot \Pi= \Pi\cdot \sigma^r(a)$ for $a\in E_n$, i.e.
$$
B=\bigoplus_{i=0}^{n-1} E_n\cdot \Pi^i, \qquad \Pi^n=\varpi, \quad a\cdot
\Pi=\Pi\cdot \sigma^r(a).
$$
Sending $\Pi$ to $\underline n=\underline {n}_r(\varpi)$ as above and
$a\in E_n$ to the diagonal matrix $(a, \sigma(a),\ldots,
\sigma^{n-1}(a))$, gives an isomorphism $\psi:
B\otimes_EE_n\xrightarrow{\sim} M_{n\times n}(E_n)$. The result now
follows from an explicit calculation.
\end{proof}
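For instance, in the smallest case $n=2$, $r=1$ (so that $\sigma^2=1$ on $E_2$), the explicit calculation at the end of the proof is immediate:
$$
\underline n=\begin{pmatrix}0&\varpi\\ 1&0\end{pmatrix},\qquad
\psi(a)=\begin{pmatrix}a&0\\ 0&\sigma(a)\end{pmatrix}\ \ (a\in E_2),\qquad
\psi(\Pi)=\underline n,
$$
so that $\psi(\Pi)^2=\varpi\cdot{\rm id}$ and
$$
\psi(a)\,\underline n=\begin{pmatrix}0&\varpi a\\ \sigma(a)&0\end{pmatrix}
=\underline n\,\psi(\sigma(a)),
$$
as required by the relation $a\cdot\Pi=\Pi\cdot\sigma(a)$. Moreover,
$$
\on{Int}(\underline n)\big(\sigma(\psi(a))\big)
=\underline n\begin{pmatrix}\sigma(a)&0\\ 0&a\end{pmatrix}\underline n^{-1}
=\begin{pmatrix}a&0\\ 0&\sigma(a)\end{pmatrix}=\psi(a),
$$
which verifies $\psi\sigma\psi^{-1}\sigma^{-1}=\on{Int}(\underline n)$ on the elements $\psi(a)$; the check on $\psi(\Pi)=\underline n$ and on the scalars $E_2\cdot{\rm id}$ is similar.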
Suppose that the Brauer group invariant of $B_i$ is $r_i/m_i$ with
$1\leq r_i< m_i$, ${\rm gcd}(r_i, m_i)=1$. By the above lemma we can
always choose an inner twist
\begin{equation}\label{psi'}
\psi': (M_{\on{ad}})_{\breve F}\xrightarrow{\sim}
(\prod{\rm Res}_{E_i/F}\on{PGL}(m_i))_{\breve F},
\end{equation} such that the element
$g\in \prod\on{PGL}(m_i,E_i\otimes_F\breve F)$ given by
$\on{Int}(g)=\psi'\sigma\psi'^{-1}\sigma^{-1}$ satisfies $g=\prod
g_i$, and $g_i=\underline {n}_{r_i}(\varpi_i)$ where $\varpi_i$ is a
uniformizer of $E_i$. Combining this with the discussion in \S \ref{1d3}, we
obtain:
\begin{prop}\label{propopsi2}
Assume that $F$ is a $p$-adic field and choose an isomorphism
(\ref{isoMad}). Then we can find $\psi : G_{\breve F}\xrightarrow{\sim}
G^*_{\breve F}$ as in Proposition \ref{propopsi}, such that in addition
the image of ${\rm Int}(g)$ under ${M'}^*(\breve F)\to M^*_{\rm
ad}(\breve F)$ is given by the element $ {g}=\prod_i g_i $ with $g_i$ as
above.\qed
\end{prop}
\bigskip
\section{Reductive groups over $\O[u, u^{-1}]$}\label{reductive
group}
\setcounter{equation}{0}
Recall $G$ is a connected reductive group over $F$.
We will assume that:
\medskip
({\sl Tameness hypothesis}) \ {\sl $G$ splits over a finite tamely ramified extension $\tilde F/F$.}
\medskip
Let $\varpi$ be a uniformizer of $\O$. Our goal in this section is
to construct a reductive group scheme $\underline G$ over ${\rm Spec } (\O[u^{\pm
1}])$ which extends $G$ in the sense that its base change
\begin{equation}
\underline G\otimes_{\O[u^{\pm 1}]}F,\quad u\mapsto \varpi ,
\end{equation}
is isomorphic to $G$. This is done by following the construction of
$G$ from its split form $H$ which was described in \S \ref{ss1a}; we
first extend the quasi-split form $G^*$ to $\underline G^*$ and then give
$\underline G$ by an appropriate descent. (We write $\O[u^{\pm 1}]$
instead of $\O[u, u^{-1}]$ for brevity.)
\subsection{The tame splitting field}\label{sss1a2} Recall $F$ is either a $p$-adic field, or $F=k((t))$. Assume that in either case the
residue field $k$ has cardinality $q=p^m$.
Denote by $\tilde F_0$ the maximal unramified extension of $F$ that is contained in $\tilde F$
and by $\tilde \O_0$, $\tilde \O$ the valuation rings of $\tilde F_0$, $\tilde F$ respectively.
Set $e=[\tilde F:\tilde F_0]$ (which is then prime to $p$) and let $\gamma_0$ be a generator of ${\rm Gal}(\tilde F/\tilde F_0)$.
Recall that by Steinberg's theorem, the group $G_{\breve F}:=G\otimes_F\breve F$
is quasi-split. By possibly enlarging the splitting field $\tilde F$,
we can now assume that
\begin{itemize}
\item{} $G_{\tilde F_0}$ is quasi-split,
\item{} $\tilde F/F$ is Galois with group $\Gamma={\rm Gal}(\tilde F/F)=\langle \sigma\rangle\rtimes \langle\gamma_0\rangle$
which is the semi-direct product of $\langle \sigma\rangle \simeq
{\mathbb Z}/(r)$, where $\sigma$ is a lift of the (arithmetic) Frobenius
${\rm Frob}_q\in {\rm Gal}(\tilde F_0/F)$, with the normal inertia
subgroup $I:={\rm Gal}(\tilde F/\tilde F_0)=\langle\gamma_0\rangle\simeq
{\mathbb Z}/(e)$, with relation $\sigma\gamma_0\sigma^{-1}=\gamma_0^q$,
\item{} there is a uniformizer $\tilde\varpi$ of $\tilde \O$ such that $\tilde\varpi^e=\varpi$.
\end{itemize}
Without further mention, we will assume that the extension $\tilde F/F$
is as above. Then we also have $\tilde\O=\tilde \O_0[\tilde\varpi]\simeq
\tilde\O_0[x]/(x^e-\varpi)$ and $\tilde\O_0$ contains a primitive
$e$-th root of unity $\zeta=\gamma_0(\tilde\varpi)\tilde\varpi^{-1}$.
\subsection{Covers of $\O[u]$ and $\O[u^{\pm 1}]$.}
Suppose that $F$ is a $p$-adic field with ring of integers $\O$ and
residue field $k=\mathbb F_q$. Let $\tilde F/F$ be a finite tamely
ramified Galois extension of $F$ as in \S \ref{sss1a2} with Galois
group $\Gamma=\langle\gamma_0, \sigma |\ \sigma^r=\gamma_0^e=1,
\sigma\gamma_0\sigma^{-1}=\gamma_0^q\rangle$. The maximal tamely
ramified extension $F^t$ of $F$ in $\bar F$ is the union of fields
$\tilde F$ as above and its Galois group
$$
\Gamma^t:={\rm Gal}(F^t/F)\simeq \prod_{l\neq p}{\mathbb Z}_l(1)\rtimes\hat{\mathbb Z}
$$
is the projective limit of the corresponding groups $\Gamma$.
\subsubsection{}\label{sss2a1} Consider the affine line $\AA^1_\O={\rm Spec } (\O[u])$ and its cover
$$
\pi: \AA^1_{\tilde\O_0}={\rm Spec } (\tilde\O_0[v])\to \AA^1_\O={\rm Spec } (\O[u])
$$
given by $u\mapsto v^e$. The (abstract) group $\Gamma$ described as
above acts on $\tilde\O_0[v]$ by
$$
\sigma(\sum_i a_i v^i)=\sum_i \sigma(a_i)v^i, \quad \gamma_0(\sum_i
a_i v^i)=\sum_i a_i \zeta^i v^i,
$$
where $\zeta$ is the primitive $e$-th root of unity
$\gamma_0(\tilde\varpi)\tilde\varpi^{-1}$ in $\tilde\O_0$. We have
$\tilde\O_0[v]^\Gamma=\O[u]$ and $\pi$ is a $\Gamma$-cover ramified
over $u=0$. The restriction of $\pi$ over the open subscheme $u\neq
0$ gives a $\Gamma$-torsor
$$
\pi_0: {\rm Spec } (\tilde\O_0[v^{\pm 1}])\to {\rm Spec } (\O[u^{\pm 1}])={\mathbb G}_{m\O}.
$$
Notice that base changing the $\Gamma$-cover $\pi$ via the map
$\O[u]\to F$ given by $u\mapsto \varpi $ gives ${\rm Spec } (\tilde F)\to
{\rm Spec } (F)$ with its Galois action. In this way, we realize $\Gamma^t$
as a quotient of the fundamental group of ${\mathbb G}_{m\O}$ such that the
composed map ${\rm Gal}(\bar{F}/F)\to \Gamma^t$ coincides with the tame
quotient of ${\rm Gal}(\bar{F}/F)$. In fact, we can easily see that
$\pi_1({{\mathbb G}_{\rm m}}_\O, {\rm Spec } (\bar F))\to \Gamma^t$ is an isomorphism.
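For instance, in the simplest ramified case $e=2$ (so $p$ odd), we have $\zeta=-1$ and the cover is
$$
\pi: {\rm Spec } (\tilde\O_0[v])\to {\rm Spec } (\O[u]),\qquad u\mapsto v^2,
$$
with $\sigma$ acting through the coefficients and $\gamma_0(v)=-v$. Then
$$
\tilde\O_0[v]^{\langle\gamma_0\rangle}=\tilde\O_0[v^2]=\tilde\O_0[u],\qquad
\tilde\O_0[v]^{\Gamma}=\O[u],
$$
and base changing along $u\mapsto\varpi$ gives
$$
\tilde\O_0[v]\otimes_{\O[u]}F\simeq \tilde F_0[v]/(v^2-\varpi)\simeq \tilde F,
$$
with $\gamma_0(v)=-v$ matching $\gamma_0(\tilde\varpi)=\zeta\tilde\varpi=-\tilde\varpi$.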
\subsubsection{} Suppose that we take $F={\mathbb Q}_p$ and that $L$ is a tamely
ramified finite extension of ${\mathbb Q}_p$. Let $\tilde F$ be a Galois
extension of $F={\mathbb Q}_p$ that contains $L$ and satisfies the
assumptions of \S \ref{sss1a2} with $\Gamma={\rm Gal}(\tilde F/{\mathbb Q}_p)$.
In particular, $\tilde F_0$ is the maximal unramified extension of
${\mathbb Q}_p$ contained in $\tilde F$. Denote by $L_0$ the maximal unramified
extension of ${\mathbb Q}_p$ contained in $L$ and suppose $L\simeq
L_0[x]/(x^{e_L}-p\cdot c)$, $c\in \O_{L_0}^*$. Let $\Gamma_L$ be the
subgroup of $\Gamma$ that fixes $L$. We can then see that there is a
${\mathbb Z}_p[u]$-algebra isomorphism
\begin{equation}\label{invarRing}
(\tilde\O_0[v])^{\Gamma_L}\simeq \O_{L_0}[w]
\end{equation}
in which the right hand side is a ${\mathbb Z}_p[u]$-algebra via $u\mapsto
w^{e_L}\cdot c^{-1}$.
\medskip
\subsection{Groups over $\O[u^{\pm 1}]$}\label{ss2b}
We will now give the construction of the group schemes $\underline G$ over
$\O[u^{\pm 1}]$.
\subsubsection{}\label{sss2b1}
Recall that $G$ is a connected reductive group over the $p$-adic
field $F$ which splits over the tame extension $\tilde F/F$ as in \S
\ref{sss1a2}. We will use the notations of \S \ref{Preliminaries}.
In particular, $H$ over $\O$ is the Chevalley (split) form of $G$;
{\sl we fix a pinning on $H$ as in \S \ref{sss1a1}}. Then the
indexed root datum for $G$ gives a group homomorphism $\tau:
\Gamma={\rm Gal}(\tilde F/F)\to {\rm Aut}_\O(H)$ and using Proposition
\ref{propPQS} we obtain the pinned quasi-split group $(G^*, T^*,
B^*, e)$ over $F$.
We apply the equivalence of categories \eqref{ssstame} to the
following case: Let $S_2={\mathbb G}_{m\O}={\rm Spec } (\O[u, u^{-1}])$ and
$s=S_1={\rm Spec } (F)\to S_2$ be the point given by $u=\varpi$. Let
$\bar{s}={\rm Spec } (\bar F)$ be a geometric point over $s$. Then we
obtain an
equivalence
\begin{equation}\label{ssstameeq}
{\rm PQ}({\mathbb G}_{m\O})\simeq {\rm
PQ}^{\Gamma^t}({\mathbb G}_{m\O})\xrightarrow{\sim}{\rm PQ}^{\Gamma^t}(F).
\end{equation}
We choose a quasi-inverse of this functor and therefore, for any
pinned quasi-split group $(G^*,T^*,B^*,e^*)$ over $F$, we have
$(\underline G^*,\underline T^*,\underline B^*,\underline e^*)$ over $\O[u, u^{-1}]$
together with an isomorphism $(\underline G^*,\underline T^*,\underline B^*,\underline
e^*)\otimes_{\O[u, u^{-1}]}F\simeq (G^*,T^*,B^*,e^*)$.
In particular, $\underline G^*$ is the reductive group scheme over
$\O[u^{\pm 1}]$ which is the twisted form of $\underline
H=H\otimes_\O\O[u^{\pm 1}]$ obtained from the $\Gamma$-torsor
$\pi_0: {\rm Spec } (\tilde\O_0[v^{\pm 1}])\to {\rm Spec } (\O[u^{\pm 1}])$ using
$\tau$. More concretely,
\begin{equation}\label{defG*}
\underline G^*=({\rm Res}_{\tilde\O_0[v^{\pm 1}]/\O[u^{\pm
1}]}(H\otimes_\O\tilde\O_0[v^{\pm 1}]))^\Gamma
\end{equation}
where $\gamma\in \Gamma$ acts diagonally via $\tau(\gamma)\otimes
\gamma$. The same construction applies to other groups that inherit
a pinning from $G^*$; for example, it applies to the Levi subgroup
$M_0$ that corresponds to the $\Gamma$-stable subset
$\Delta_0\subset \Delta$. We obtain
$$
\underline{M}^*=({\rm Res}_{\tilde\O_0[v^{\pm 1}]/\O[u^{\pm
1}]}(M_0\otimes_\O\tilde\O_0[v^{\pm 1}]))^\Gamma.
$$
Again base changing along $u\mapsto \varpi$ gives a group
canonically isomorphic to $M^*\subset G^*$.
\subsubsection{}
The goal of the rest of this chapter is to explain how to construct
a quadruple of group schemes $(\underline G,\underline A,\underline S,\underline P)$, whose
specialization along $u=\varpi$ gives rise to $G$ together with a
rigidification $(A, S, P)$.
First as above, we obtain adjoint quasi-split forms $\underline M^*_{\rm
ad}$, $\underline G^*_{\rm ad}$ over $\O[u^{\pm 1}]$ that specialize to
$M^*_{\rm ad}$, $G^*_{\rm ad}$ over $F$ after the base change
$u\mapsto \varpi$. Then let $\underline {N'}^*=N_{\underline {M'}^*}(\underline
T^*_{\rm ad})$ and $\underline N_{\rm ad}^*={\rm Im}(\underline {N'}^*\to \underline
M^*_{\rm ad})$.
We also obtain a short exact (central) sequence
\begin{equation}\label{barMu}
1\to \underline Z^*\to \underline {N'}^*\to \underline N^*_{\rm ad}\to 1,
\end{equation}
which specializes to (\ref{barM}). We will consider the base change
of these groups after the \'etale extension $\O[u^{\pm 1}]\to
\breve{\O}[u^{\pm 1}]$. For simplicity, we will sometimes omit this
base change from the notation. Here $\underline Z^*$ is an induced torus
and by (\ref{invarRing}) we have
\begin{equation}\label{ZZ}
\underline Z^*\otimes_\O\breve{\O}\simeq\prod_j {\rm
Res}_{\breve{\O}[w^{\pm 1}]/\breve{\O}[u^{\pm 1}]}{{\mathbb G}_{\rm m}}
\end{equation}
where $w^{d_j}=u\,c_j^{-1}$ with $p\nmid d_j$. By applying ${\rm
Pic}(\breve{\O}[w^{\pm 1}])=(1)$ and Hilbert's theorem 90 we see
that ${\mathrm H}^1_{\rm et}({\rm Spec } (\breve{\O}[u^{\pm 1}]), \underline Z^*)=(1)$.
This gives that
\begin{equation}
\underline {N'}^*(\breve{\O}[u^{\pm 1}])\to \underline N^*_{\rm
ad}(\breve{\O}[u^{\pm 1}])
\end{equation}
is surjective. The exact cohomology sequence for a central quotient
now gives that
\begin{equation}\label{barMuexact}
{\mathrm H}^1(\hat{\mathbb Z}, \underline {N'}^*(\breve{\O}[u^{\pm 1}]))\hookrightarrow
{\mathrm H}^1(\hat{\mathbb Z}, \underline N^*_{\rm ad}(\breve{\O}[u^{\pm 1}]))\to
{\mathrm H}^2(\hat{\mathbb Z}, \underline Z^*(\breve{\O}[u^{\pm 1}]))
\end{equation}
is an exact sequence of pointed sets. There are natural maps between
the exact sequences (\ref{barMuexact}) and (\ref{barMexact})
obtained by sending $\breve{\O}[u^{\pm 1}]\to \breve F$ via
$u\mapsto \varpi$.
Recall the description of $M_{\rm ad}$ and the isomorphism
(\ref{isoMad}) that follow from Kneser's theorem. Using
(\ref{ssstameeq}) we obtain that there is an
isomorphism
\begin{equation}\label{218}
\underline \rho: \underline M^*_{\rm ad} \xrightarrow{ \sim\ } \prod_i {\rm
Res}_{\tilde\O_0[v^{\pm 1}]^{\Gamma_i}/\O[u^{\pm 1}]}{\rm PGL}(m_i)
\end{equation}
where $\Gamma_i\subset \Gamma$ are subgroups of finite index in the
Galois group $\Gamma$ of $\tilde\O_0[v^{\pm 1}]/\O[u^{\pm 1}]$; the
isomorphism $\underline \rho$ specializes, under $u\mapsto \varpi$, to the
isomorphism (\ref{isoMad}).
\subsubsection{}\label{sss2c3}
Now recall that the anisotropic kernel $M$ (an inner form of $M^*$)
gives a well-defined class $[c(M)]$ in ${\mathrm H}^1(\hat{\mathbb Z}, M^*_{\rm
ad}(\breve F))$. Applying Corollary \ref{coropsi} to $M$ and the exact
sequence \eqref{barMexact}, we see that $[c(M)]$ has the following
properties:
\begin{itemize}
\item[1)] It is the image of a unique class $[c^{\rm rig}]$ in ${\mathrm H}^1(\hat{\mathbb Z}, {N'}^*(\breve F))$, and,
\item[2)] The image of $[c^{\rm rig}]$ under ${\mathrm H}^1(\hat{\mathbb Z}, {N'}^*(\breve F))\to {\mathrm H}^1(\hat{\mathbb Z}, G^*_{\rm ad}(\breve F))$
gives the inner twist $G$ of $G^*$.
\end{itemize}
We have ${\mathrm H}^2(\hat{\mathbb Z}, \breve\O^*)=(1)$, hence ${\mathrm H}^2(\hat{\mathbb Z},
\breve{\O}[u^{\pm 1}]^*)= {\mathrm H}^2(\hat{\mathbb Z}, \breve{\O}^*\times
u^{{\mathbb Z}})\simeq {\mathrm H}^2(\hat{\mathbb Z}, {\mathbb Z})\simeq {\mathbb Q}/{\mathbb Z}$, and similarly ${\rm
Br}(F)={\mathrm H}^2(\hat{\mathbb Z}, \breve{F}^*)={\mathrm H}^2(\hat{\mathbb Z},
\breve{\O}^*\times\varpi^{\mathbb Z})\simeq {\mathbb Q}/{\mathbb Z}$. Therefore, $u\mapsto
\varpi$ gives an isomorphism ${\mathrm H}^2(\hat{\mathbb Z}, \breve{\O}[u^{\pm
1}]^*)\xrightarrow{\sim} {\mathrm H}^2(\hat{\mathbb Z}, \breve{F}^*)$. Similarly,
using (\ref{ZZ}), and Shapiro's Lemma we obtain that
$u\mapsto\varpi$ gives an isomorphism
\begin{equation}\label{219}
{\mathrm H}^2(\hat{\mathbb Z}, \underline Z^*(\breve{\O}[u^{\pm 1}]))\xrightarrow{\sim} {\mathrm H}^2(\hat{\mathbb Z}, Z^*(\breve F)).
\end{equation}
Note that using (\ref{218}), (\ref{invarRing}) and Shapiro's lemma
we obtain an isomorphism
\begin{equation}\label{210}
{\mathrm H}^1(\hat{\mathbb Z}, \underline M^*_{\rm ad}(\breve{\O}[u^{\pm 1}]))
\xrightarrow{\sim} \prod_i {\mathrm H}^1(\hat{\mathbb Z}, {\rm
PGL}_{m_i}(\breve{\O}[w^{\pm 1}_i])).
\end{equation}
The constructions explained in \ref{exDivision} and \ref{sss4b3}
produce Azumaya algebras ${\mathcal B}_i$ over $\O[w^{\pm 1}_i]$
that under $w_i\mapsto \varpi_i$ specialize to the central division
algebras $B_i$ that appear in the decomposition of the anisotropic
kernel $M_{\rm ad}$. These Azumaya algebras split over
$\breve{\O}[w^{\pm 1}_i]$; hence, by using the isomorphism
(\ref{210}), we obtain a well-defined class $[\underline c]\in
{\mathrm H}^1(\hat{\mathbb Z}, \underline M^*_{\rm ad}(\breve{\O}[u^{\pm 1}]))$ that
specializes to $[c(M)]\in {\mathrm H}^1(\hat{\mathbb Z}, {M}^*_{\rm ad}(\breve F))$.
Now, using Lemma \ref{coxeter}, the explicit constructions of \S
\ref{exDivision}, \ref{sss4b3} and our discussion in \S \ref{ss2b},
we see that the class $[\underline c]$ corresponding to the product of Azumaya
algebras in fact comes from a canonical class $[\underline c^{\rm rig}]$
in ${\mathrm H}^1(\hat{\mathbb Z},\underline N^*_{\rm ad}(\breve{\O}[u^{\pm 1}]))$, whose
specialization to ${\mathrm H}^1(\hat{\mathbb Z},N^*_{\rm ad}(\breve{F}))$ is the
image of $[c^{\rm rig}]$ in Corollary \ref{coropsi} under the map
${\mathrm H}^1(\hat{\mathbb Z},{N'}^*(\breve F))\to {\mathrm H}^1(\hat{\mathbb Z},N^*_{\rm ad}(\breve F))$.
By (\ref{barMexact}), the image of $[c^{\rm rig}]$ in ${\mathrm H}^1(\hat{\mathbb Z}, N^*_{\rm ad}(\breve F))$ maps to the trivial class in
${\mathrm H}^2(\hat{\mathbb Z}, Z^*(\breve F))$. Using (\ref{barMuexact}) and the
isomorphism (\ref{219}) we see that $[\underline c^{\rm rig}]$ belongs to
${\mathrm H}^1(\hat{\mathbb Z}, \underline {N'}^*(\breve{\O}[u^{\pm 1}]))$ and maps to the
class $[c^{\rm rig}]$ in ${\mathrm H}^1(\hat{\mathbb Z}, {N'}^*(\breve{F}))$ under
the specialization $u\mapsto\varpi$.
\begin{Remark}\label{unique lift}
{\rm In fact, one can show that the specialization $u\mapsto \varpi$
induces isomorphisms
\[{\mathrm H}^1(\hat{\mathbb Z}, \underline {N'}^*(\breve{\O}[u^{\pm 1}]))\xrightarrow{\sim} {\mathrm H}^1(\hat{\mathbb Z}, {N'}^*(\breve F)), \quad {\mathrm H}^1(\hat{\mathbb Z}, \underline N_{\rm ad}^*(\breve{\O}[u^{\pm 1}]))\xrightarrow{\sim}{\mathrm H}^1(\hat{\mathbb Z}, N_{\rm ad}^*(\breve F)).\]
We do not use this fact here. }
\end{Remark}
\subsubsection{}\label{sss2d3}
The class $[\underline c^{\rm rig}]$ allows us to define an inner twist
${\underline{G}}$ of ${\underline{G}}^*$: Let $\underline c^{\rm rig}$ be a 1-cocycle
representing this class and denote by ${\rm Int}(\bold{g})$ the
element in $\underline {M'}^*(\breve{\O}[u^{\pm 1}])\subset \underline G^*_{\rm
ad}(\breve{\O}[u^{\pm 1}])$ which is the value at $1$ of this
cocycle. We define
\begin{equation}\label{inner}
{\underline{G}}(R)={\underline{G}}^*(\breve{\O}[u^{\pm 1}]\otimes_{\O[u^{\pm
1}]}R)^{\hat{\mathbb Z}}
\end{equation}
where the action of the topological generator $1$ of $\hat{\mathbb Z}$ is
given by ${\rm Int}(\bold{g})\cdot \sigma$. (In other words, ${\underline{G}}$
is given by the Weil descent datum obtained from the automorphism
${\rm Int}(\bold{g})\cdot\sigma$.)
By descent, ${\underline{G}}$ is a reductive group over $\O[u^{\pm 1}]$ with
base change to $\breve{\O}[u^{\pm 1}]$ isomorphic to
${\underline{G}}^*\otimes_\O\breve{\O}$.
In fact, we obtain more. Namely, we
have
\[(\underline G,\underline A,\underline S,\underline T,\underline M, \underline P),
\]
where $(\underline G,\underline T,\underline M,\underline P)$ is obtained from $(\underline
G^*,\underline T^*,\underline M^*,\underline P^*)$ via twisting by the cocycle $\underline
c^{\rm rig}$, $\underline S$ is the maximal $\breve\O[u^{\pm 1}]$-split
subtorus of $\underline T$, and $\underline A$ is the maximal $\O[u^{\pm
1}]$-split subtorus of $\underline T$. In addition, the specialization of
$(\underline G,\underline A,\underline S,\underline P)$ by $u\mapsto \varpi$ is a
rigidified reductive group
\begin{equation}
(G_0,A_0,S_0,P_0):=(\underline G,\underline A,\underline S,\underline P)\otimes_{\O[u^{\pm
1}]}F.
\end{equation}
By our construction, we have an isomorphism $G\simeq G_0$.
\subsubsection{}\label{sss2c5}
Similarly to Definition \ref{rigid}, let us define the groupoid ${\rm
RG}(\O[u^{\pm 1}])$ of rigidified groups over $\O[u^{\pm 1}]$. An
object is a quadruple $(\underline G,\underline A,\underline S,\underline P)$, where $\underline
G$ is a connected reductive group over $\O[u^{\pm 1}]$, $\underline A$ is
a maximal $\O[u^{\pm 1}]$-split torus of $\underline G$, $\underline S\supset\underline A$ is a torus, which
is maximal $\breve\O[u^{\pm 1}]$-split, and $\underline P$ is a parabolic
subgroup of $\underline G$ containing $\underline M=Z_{\underline G}(\underline A)$ as a
Levi factor, such that: (i) $(\underline G,\underline A,\underline S,\underline
P)\otimes_{\O[u^{\pm 1}]}F$ is a rigidified group as in Definition
\ref{rigid}; (ii) there exists a pinned quasi-split form $(\underline
G^*,\underline T^*,\underline B^*,\underline e^*)$ over $\O[u^{\pm 1}]$ and an
\emph{inner} twist
\[\underline \psi:(\underline G,\underline S,\underline P)\otimes_{\O[u^{\pm 1}]}\breve{\O}[u^{\pm 1}]\xrightarrow{\sim} (\underline G^*,\underline S^*,\underline P^*)\otimes_{\O[u^{\pm 1}]}\breve{\O}[u^{\pm 1}],\]
where $\underline S^*$ is the maximal $\breve\O[u^{\pm 1}]$-split subtorus
of $\underline T^*$ and $\underline P^*$ is a parabolic subgroup of $\underline G^*$
containing $\underline B^*$.
Observe that an inner twist $\underline \psi$ defines a cocycle $\underline
c=\underline \psi\cdot\sigma(\underline \psi)^{-1}$ with values in $\underline
{N'}^*$, where as before $\underline {N'}^*$ is the normalizer of $\underline
T^*$ in $\underline {M'}^*$, and $\underline {M'}^*$ is the standard Levi in
$\underline G^*_{\rm ad}$ given by $\underline P^*_{\rm ad}$.
By definition, we have the specialization functor ${\rm
RG}(\O[u^{\pm 1}])\to {\rm RG}(F)$. Our construction shows that the
essential image of this functor consists of all rigidified groups
over $F$ that are split over $F^t$ (we denote this latter
subgroupoid by ${\rm RG}(F^t/F)$). In fact, assuming Remark \ref{unique
lift}, it is easy to see that the isomorphism classes in ${\rm
RG}(\O[u^{\pm 1}])$ are in bijection with the isomorphism classes in ${\rm
RG}(F^t/F)$.
\begin{Remark}\label{Piano}{\rm
Let $H$ be a Chevalley group over ${\mathbb Z}$. Recall that given a base
scheme $S$, the set of isomorphism classes of forms $\mathcal H$ of $H$ over $S$
is classified by the (\'etale) cohomology set ${\mathrm H}^1(S, \underline{\rm Aut}(H))$. As
every reductive group over $F$ admits a rigidification, a corollary
of the above discussion is that the natural specialization map
\begin{equation}\label{sp}
{\mathrm H}^1(\O[u^{\pm 1}], \underline{\rm Aut}(H))\to {\mathrm H}^1(F^t/F, \underline{\rm Aut}(H))
\end{equation}
is surjective, where ${\mathrm H}^1(F^t/F, \underline{\rm Aut}(H))$ denotes the set of
isomorphism classes of forms of $H$ over $F$ that split over
$F^t$. It is natural to ask whether this map is also injective.
Some results can be obtained using ideas of
Pianzola and Gille; they have developed a theory of ``loop reductive
groups" over Laurent polynomial rings $K[u_1^{\pm 1}, \ldots, u^{\pm
1}_n]$ with $K$ a field (see for example \cite{PianzolaGille},
\cite{ChernPianzolaGille}). Essentially, these are reductive groups
that afford a maximal torus over this base. The fibers $\underline
G\otimes_{\O[u^{\pm 1}]}F[u^{\pm 1}]$, $\underline G\otimes_{\O[u^{\pm
1}]}k[u^{\pm 1}]$, of the groups $\underline G$ defined above are loop
reductive groups in their sense. Using some of the constructions in \cite{PianzolaGille} and \cite{ChernPianzolaGille}, we can show the
following: If a form $\mathcal H$ over
$\O[u^{\pm 1}]$ contains a maximal torus and splits over a finite
Galois extension (as in \ref{sss2a1}) with group $\Gamma$ of order
prime to $p$, then ${\mathcal H}$ is isomorphic to $\underline G$ where
$G={\mathcal H}\otimes_{\O[u^{\pm 1}]}F$ is the specialization of ${\mathcal H}$ along
$u\mapsto\varpi$. This implies that (\ref{sp})
is injective when restricted to the subset which corresponds to such
forms. We intend to undertake a more comprehensive study
of reductive groups over $\O[u^{\pm 1}]$ and report on this
in another paper.}
\end{Remark}
\bigskip
\section{Parahoric group schemes over $\O[u]$}\label{groupscheme}
\setcounter{equation}{0}
In this chapter, we give our construction of the ``parahoric" group
schemes ${\mathcal G}$ over $\O[u]$ which are among the main tools of the
paper. The main result is Theorem \ref{grpschemeThm} which describes
the properties of these group schemes.
\subsection{Preliminaries}\label{3a}
We start with some preliminaries on Bruhat-Tits buildings. Suppose
that $K$ is a discretely valued quasi-local field
(\cite[1.1.1]{LandvogtCrelle}) with valuation ring $\O_K$ and with
perfect residue field. If $G$ is a connected reductive group over
$K$, we can consider the Bruhat-Tits building ${\mathcal B}(G, K)$.
If $S$ is a maximal split torus of $G$, we denote by ${\mathcal A}(G, S, K)$
the corresponding apartment in ${\mathcal B}(G, K)$.
Note that if $\hat K$ is the completion of $K$, the natural maps
give identifications ${\mathcal A}(G, S, K)={\mathcal A}(G_{\hat K}, S_{\hat K}, \hat
K)$, ${\mathcal B}(G, K)={\mathcal B}(G_{\hat K}, \hat K)$
(\cite[Prop. 2.1.3]{LandvogtCrelle}).
\subsubsection{} Suppose now that $(H, T_H, B_H, e)$ is a pinned Chevalley
group scheme $H$ over ${\mathbb Z}$ and consider $H_K =H\otimes_{\mathbb Z} K$ and
$T_K=T_H\otimes_{\mathbb Z} K$ over the field $K$. We will sometimes omit the
subscripts when the choice of the field $K$ is clear. After fixing
an identification of the value groups $v(K^*)={\mathbb Z}$, we can identify
the apartments ${\mathcal A}(H, T_H, K)\subset {\mathcal B}(H, K)$ for all
such fields $K$. In particular, we can identify ${\mathcal A}(H, T_H,
\kappa((u)))$ for $\kappa=F$, $k$ and also ${\mathcal A}(H, T_H, F)$. Under
this identification, the hyperspecial points of ${\mathcal A}(H, T_H, K)$ with
stabilizer $H(\O_K)$ correspond to each other. Observe that we can
identify the Iwahori-Weyl group
$\widetilde{W}=N_H(T_H)(K)/T_H(\O_K)$ for different $K$ canonically
and the identification of the apartment is compatible with the
action of $\widetilde{W}$.
\subsubsection{}The above generalizes to
the quasi-split case as follows. Let $(\underline G^*,\underline T^*,\underline B^*,
\underline e^*)$ be a pinned quasi-split group over $\O[u^{\pm 1}]$. Let
$\underline S^*$ be the maximal $\breve\O[u^{\pm 1}]$-split subtorus of
$\underline T^*$ and $\underline A^*$ be the maximal $\O[u^{\pm 1}]$-split torus
of $\underline T^*$. Then as the centralizer of $\underline S^*$ is $\underline T^*$,
we see that $\underline S^*_{ \kappa'((u))}:=\underline S^*\otimes_{\O[u^{\pm
1}]} \kappa'((u))$ is a maximal $\kappa'((u))$-split torus of $\underline
G^*_{\kappa'((u))}:=\underline G^*\otimes_{\O[u^{\pm 1}]} \kappa'((u))$
for $\kappa'=\bar{k}$, $\breve F$. Similarly, $\underline S^*_{\breve F}:=\underline
S^*\otimes_{\O[u^{\pm 1}]} \breve F$ is a maximal $\breve F$-split torus of
$\underline G^*_{\breve F}:=\underline G^*\otimes_{\O[u^{\pm 1}]} \breve F$. We have
canonical identifications
\begin{equation}\label{aptident1}
{\mathcal A}(\underline G^*_{\kappa'((u))},\underline
S^*_{\kappa'((u))},\kappa'((u)))={\mathcal A}(\underline G^*_{\breve F},\underline
S^*_{\breve F},\breve F).
\end{equation}
Indeed, these apartments are identified with
${\mathcal A}(H,T_H,K)^{\gamma}$ (using the tameness assumption, as in
\cite{BTII}, or \cite[Ch. IV, \S 10]{LandvogtLNM}, see also
\cite{PrasadYuInv}), where $K=\kappa'((u))$ or $\breve F$, and $\gamma$
is a generator of the inertia subgroup. Similarly, we can identify
the Iwahori-Weyl group for $(\underline G^*_K,\underline S^*_K)$ canonically and
compatibly with the identification of the apartment.
\subsubsection{}\label{sss3a3} We also have the following further generalization. Let
$(\underline G,\underline A,\underline S,\underline P)$ be a rigidified group over
$\O[u^{\pm 1}]$ as defined in \ref{sss2c5}. Then $\underline A_F$ is a
maximal $F$-split torus of $\underline G_F$. Let $x\in {\mathcal A}(\underline G_F,\underline
A_F,F)\subset {\mathcal B}(\underline G_F, F)$ be a point in the
apartment. The identification \eqref{aptident1} induces
\begin{multline}\label{aptident2}
{\mathcal A}(\underline G_F,\underline A_F, F)={\mathcal A}(\underline G_{\breve F},\underline
S_{\breve F},\breve F)^\sigma={\mathcal A}(\underline G^*_{\breve F},\underline S^*_{\breve F},\breve F)^{{\rm
Int}(\mathbf g)\sigma}\\ ={\mathcal A}(\underline G^*_{\kappa'((u))}, \underline
S^*_{\kappa'((u))},\kappa'((u)))^{{\rm Int}(\mathbf
g)\sigma}={\mathcal A}(\underline G_{\kappa'((u))}, \underline
S_{\kappa'((u))},\kappa'((u)))^{\sigma} ,
\end{multline}
and therefore we obtain $x_{\kappa((u))}\in {\mathcal A}(\underline
G_{\kappa'((u))},\underline S_{\kappa'((u))},\kappa'((u)))^\sigma$.
Therefore, using again ${\mathcal B}(\underline
G_{\kappa'((u))},\kappa'((u)))^\sigma={\mathcal B}(\underline
G_{\kappa((u))},\kappa((u)))$, we see that $x_{\kappa((u))}$ can be
regarded as a point in the building ${\mathcal B}(\underline
G_{\kappa((u))},\kappa((u)))$.
\subsubsection{}\label{sss3a4} Let us now begin with a connected reductive group $G$ over the
$p$-adic field $F$ that splits over the tamely ramified $\tilde F/F$.
Let $(\underline G,\underline A,\underline S,\underline P)$ be the rigidified group over
$\O[u^{\pm 1}]$ as constructed before. Let $\underline \psi:(\underline G,\underline
S,\underline P)\otimes\breve\O[u^{\pm 1}]\simeq (\underline G^*,\underline S^*,\underline
P^*)\otimes\breve\O[u^{\pm 1}]$ be an inner twist and $\underline c^{\rm
rig}$ be the corresponding cocycle. Let $(G_0,A_0,S_0,P_0): =(\underline
G_F, \underline A_F, \underline S_F, \underline P_F)$ be the base change via
$\O[u^{\pm 1}]\to F$ given by $u\mapsto \varpi$. The specialization
of the inner twist is denoted by $\psi_0$.
Let $x\in {\mathcal B}(G,F)$ be a point in the building of $G$. Let
$(G,A,S,P)$ be a rigidification of $G$ such that $x\in{\mathcal A}(G,A,F)$.
Then the set of inner twists
\[\psi: (G,S,P)\otimes_F \breve F\xrightarrow{\sim} (G^*,S^*,P^*)\otimes_F \breve F\]
such that $\psi\cdot\sigma(\psi)^{-1}=\underline c^{\rm
rig}(1)|_{u=\varpi}$ forms an $N'(F)$-torsor, where $N'=N_{M'}(T)$.
As the cocycles corresponding to $\psi$ and $\psi_0$ are the same,
the morphism $\psi_0^{-1}\psi$ is defined over $F$ (which is
independent of the choice of $\underline c^{\rm rig}$). Therefore, by
choosing a rigidification $(G,A,S,P)$ of $G$, we obtain an
isomorphism $\alpha: G\xrightarrow{\sim} G_0$, which is well-defined
up to the action of $N'(F)$. Since $N'$ centralizes $A$, $N'(F)$
acts trivially on ${\mathcal A}(G,A,F)$; hence, $x$ corresponds to a
well-defined point $x_0$ in ${\mathcal A}(G_0,A_0,F)$. Note, however, that
$x_0$ depends on the choice of $(A,S,P)$.
Now, let $(A',S',P')$ be another choice of rigidification of $G$ and
let $x'_0\in {\mathcal A}(G_0,A_0,F)$ be the corresponding point. By Lemma
\ref{transitive}, there is $g\in G_{\rm ad}(F)$ sending $(A,S,P)$ to
$(A',S',P')$. Therefore, as points in the building of $G_0$, $x_0$
and $x'_0$ are in the same $G_{0,{\rm ad}}(F)$-orbit. Hence,
there is an element $n$ in $N_{G_{0,\rm ad}}(A_0)(F)$ that sends $x_0$
to $x'_0$.
\subsection{The main construction}\label{3b}
We now state the main result of this section. Let $(\underline G,\underline
A,\underline S, \underline P)$ be a rigidified group over $\O[u^{\pm 1}]$ as
defined in \ref{sss2c5}. Let $x\in {\mathcal A}(\underline G_F,\underline A_F,F)$. By
\ref{sss3a3}, we also obtain from $x$ points $x_{\kappa((u))}$ in
${\mathcal B}(\underline G_{\kappa((u))}, \kappa((u)))$ for $\kappa=F$ or
$k$.
\smallskip
\begin{thm}\label{grpschemeThm}
There is a unique smooth, affine group scheme ${\mathcal G}={\mathcal G}_x\to
{\mathbb A}^1_{\O}={\rm Spec } (\O[u])$ (called a Bruhat-Tits group
scheme for $\underline G$) with connected fibers and with the following
properties:
1) The group scheme ${\mathcal G}_{|\O[u, u^{-1}]}$ is the group scheme
$\underline{G}$ constructed in \S \ref{sss2d3}.
2) The base change of ${\mathcal G}$ under ${\rm Spec } (\O)\to {\mathbb A}^1_{\O}$
given by $u\mapsto \varpi$ is the parahoric group scheme ${\mathcal P}_{x}$ for
$\underline G_F$ (as defined in \cite{BTII}).
3) The base change of ${\mathcal G}$ under ${\mathbb A}^1_{\kappa}\to
{\mathbb A}^1_{\O}$ given by $\O\to \kappa$ followed by completion
along the ideal $(u)$ of $\kappa[u]$ is the parahoric group scheme
${\mathcal P}_{x_{\kappa((u))}}$ for $\underline G_{\kappa((u))}$ over
${\rm Spec } (\kappa[[u]])$.
\end{thm}
\smallskip
The main application of this theorem in this paper is as
follows. Again, let $G$ be a connected reductive group over $F$, split over
a tamely ramified extension of $F$. Let $x$ be a point in ${\mathcal
B}(G, F)$, and ${\mathcal P}_x$ be the corresponding parahoric group
scheme. Choose a maximal $F$-split torus $A$ such that $x$ belongs
to the apartment ${\mathcal A}(G, A, F)$ and complete the pair $(G, A)$ to a
rigidification of $G$. Choose also a pinned split form $H$ of $G$
over $\O$. Then we can construct $\underline G$ and identify $G$ with the
base change $G_0=\underline G\otimes_{\O[u^{\pm 1}]}F$ using $\alpha$, as explained in the
previous paragraph. In particular, $x$ gives rise to a point $x_0$
in ${\mathcal A}(G_0,A_0,F)$ as well as $x_{\kappa((u))}\in{\mathcal B}(\underline
G_{\kappa((u))},\kappa((u)))$. Therefore, we obtain a group scheme
${\mathcal G}_{x_0}$ as in the theorem, whose specialization along
$u=\varpi$ gives back ${\mathcal P}_x$. Observe that $x_0$ is not
uniquely determined by $x$, but also depends on the rigidification
of $G$. However, as explained before, different $x_0$'s are in the
same $N_{G_{0,{\rm ad}}}(A_0)(F)$-orbit. It is easy to see that an
$N_{G_{0,{\rm ad}}}(A_0)(F)$-orbit on ${\mathcal A}(G_0,A_0,F)$ is the same as
an ${\rm Im}(N_{\underline G_{\rm ad}}(\underline A)(\O[u^{\pm
1}])\stackrel{u=\varpi}{\to} N_{G_{0,{\rm ad}}}(A_0)(F))$-orbit.
Therefore, different ${\mathcal G}_{x_0}$'s are isomorphic to each other via
conjugation by an element in $N_{\underline G_{\rm ad}}(\underline A)(\O[u^{\pm
1}])$ and so the isomorphism class of this group scheme
is independent of choices. In particular, using the above identifications, we obtain:
\begin{cor}\label{application}
There exists a smooth, affine group scheme ${\mathcal G}={\mathcal G}_x\to {\mathbb
A}^1_{\O}={\rm Spec } (\O[u])$ with connected fibers such that
1) The group scheme ${\mathcal G}_{|\O[u, u^{-1}]}$ is $\underline G$;
2) The base change of ${\mathcal G}$ under ${\rm Spec } (\O)\to {\mathbb A}^1_{\O}$
given by $u\mapsto \varpi$ is ${\mathcal P}_x$ and the base change of ${\mathcal G}$
under ${\mathbb A}^1_{\kappa}\to {\mathbb A}^1_{\O}$ given by $\O\to
\kappa$ followed by completion along the ideal $(u)$ of $\kappa[u]$
is ${\mathcal P}_{x_{\kappa((u))}}$.
\end{cor}
\subsubsection{}\label{unique} We first prove the uniqueness statement in
Theorem \ref{grpschemeThm}. Recall that the parahoric group scheme ${\mathcal P}_x$
is a group scheme (smooth, affine, connected) over ${\rm Spec } (\O)$ with
the generic fiber $\underline G_F$ such that ${\mathcal P}_x(\O)\subset \underline G(F)$
is the connected stabilizer of $x$. Similarly for
${\mathcal P}_{x_{\kappa((u))}}$.
We show that if ${\mathcal G}'$ is a smooth affine connected group scheme over
$\O[u]$ with ${\mathcal G}'[u^{-1}]=\underline G$ and such that
${\mathcal G}'(F[[u]])\subset \underline G(F((u)))$ is the connected stabilizer of
$x_{F((u))}$ in the building of $\underline G_{F((u))}$, then
${\mathcal G}'={\mathcal G}_x$. In particular, this shows the uniqueness. To see this,
set ${\mathcal G}_x={\rm Spec } (B)$, ${\mathcal G}'={\rm Spec } (B')$. Our assumptions imply
$B[u^{-1}]=B'[u^{-1}]$ and that ${\mathcal G}'(F[[u]])={\mathcal G}_x(F[[u]])$. Since
$F$ is infinite and perfect, $F[[u]]$ is henselian, and both the group
schemes are smooth, condition (ET 2) of \cite[1.7.2]{BTII} is
satisfied; hence the second identity implies that
$B'\otimes_{\O[u]}F[[u]]=B\otimes_{\O[u]}F[[u]]$. Observe now that
since $B$ is smooth over $\O[u]$,
$$
B=B[u^{-1}]\cap (B\otimes_{\O[u]}F[[u]])
$$
and similarly for $B'$. Hence $B=B'$, which gives ${\mathcal G}'={\mathcal G}_x$. This
actually shows that ${\mathcal G}_x$ only depends on $\underline G$ and
$x_{F((u))}$.
\subsubsection{} Here we show the existence part of Theorem \ref{grpschemeThm}.
a) First suppose that $\underline G=H\otimes_{\mathbb Z} \O[u^{\pm 1}]$
is split; we will then consider more general convex subsets
of the apartment (and not just points). Denote by $\Phi=\Phi(H, T_H)$ the corresponding root
system. The pinning $(H, T_H, B_H, e)$ gives a hyperspecial vertex
$x_0$ of the apartment ${\mathcal A}(H, T_H, K)$ of $T_H$. This determines a
filtration $\{U_a(K)_{x_0, r}\}_{r\in {\mathbb R}}$ of the corresponding root
subgroups for any local field $K$. Let $f: \Phi\to {\mathbb R}$ be a concave
function; there is an associated convex subset $\Omega=\Omega_f$ of
the apartment $A$ given by
$$
\Omega=\{x\in A\ |\ a\cdot (x-x_0)+f(a)\geq 0,\ \forall a\in \Phi\}.
$$
Conversely, to a convex bounded subset $\Omega\subset A$, we can
associate the concave function $f_\Omega: \Phi\to {\mathbb R}$ given by
$$
f_\Omega(a)={\rm inf}\{\lambda\in {\mathbb R} \ |\ a\cdot (x-x_0)+\lambda\geq
0,\ \forall x\in \Omega\}.
$$
Notice that $x_0\in \Omega_f$ if and only if $f\geq 0$. Now denote
by $H(K)_{x_0, f}$ the subgroup of $H(K)$ generated by $U_a(K)_{x_0,
f(a)}$ and $T_H(\O_K)$. By \cite{BTII}, there is an associated
connected affine smooth group scheme ${\mathcal P}_{x_0, f, K}$ over
${\rm Spec } (\O_K)$. This only depends on $\Omega$ and we can denote it by
${\mathcal P}_\Omega$.
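To illustrate the correspondence between concave functions and
bounded convex subsets, consider the following rank one sketch (our
illustration, not needed for the argument): take $H={\rm SL}_2$, so
that $\Phi=\{a, -a\}$ and the apartment is a line through the
hyperspecial vertex $x_0$. The point $\Omega=\{x_0\}$ corresponds to
$f_{\{x_0\}}(a)=f_{\{x_0\}}(-a)=0$, and $H(K)_{x_0, f}=H(\O_K)$ is
the hyperspecial maximal compact subgroup. For the alcove
$\Omega=[x_0, x_0+\frac{1}{2}a^\vee]$, writing $x=x_0+s\, a^\vee$
with $s\in [0,\frac{1}{2}]$, so that $a\cdot (x-x_0)=2s$, we obtain
$$
f_\Omega(a)=0, \qquad f_\Omega(-a)=1,
$$
and $H(K)_{x_0, f_\Omega}$ is the standard Iwahori subgroup, with
${\mathcal P}_\Omega$ the corresponding Iwahori group scheme over $\O_K$.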
We will start by explaining how to construct a smooth affine group
scheme ${\mathcal G}_{x_0, f}$ over $\O[u]$ that lifts ${\mathcal P}_{x_0, f, K}$ for
$K=F$ and $K=k((u))$, as required in the statement of the
theorem. We first consider the additive group schemes ${\mathcal
U}_{a, x_0, f}=u^{\langle f(a)\rangle }{U}_{a}\otimes_{\mathbb Z} \O[u]\simeq
{\mathbb G}_{a}\otimes_{\mathbb Z} \O[u]$ over $\O[u]$ and the torus
${\mathcal T}=T_H\otimes_{{\mathbb Z}}\O[u]$. (Here $\langle r \rangle$ is
the smallest integer that is greater than or equal to $r$.) Notice here
that by definition $u{U}_{a}\otimes_{\mathbb Z} \O[u]={\rm Spec } (\O[u, x/u])$ is
the dilatation of $U_a\otimes_{\mathbb Z} \O[u]\simeq {\mathbb
G}_{a}\otimes_{\mathbb Z} \O[u]$ along the zero section ${\rm Spec } (\O)\to
{\mathbb G}_a\otimes_{\mathbb Z}\O\hookrightarrow {\mathbb G}_{a}\otimes_{\mathbb Z}
\O[u]$ over $u=0$. Similarly, $u^{-1}{U}_{a}\otimes_{\mathbb Z}
\O[u]={\rm Spec } (\O[u, ux])$ etc. Since $f$ is concave, these group
schemes give as in \cite[3.1.1]{BTII}, or \cite{LandvogtLNM},
schematic root data over $\O[u]$. (In particular, see
\cite[3.2]{BTII} for this split case.) By using \cite{BTII} (3.9.4
together with 2.2.10) we obtain a (smooth) group scheme ${\mathcal G}_{x_0,
f}$ over $\O[u]$ which has a fiberwise dense open subscheme given by
$$
{\mathcal V}_{x_0,f}=\prod_{a\in \Phi^-}{\mathcal U}_{a,
x_0,f}\times {\mathcal T}\times \prod_{a\in \Phi^+}{\mathcal U}_{a,
x_0, f}.
$$
By \cite[1.2.13, 1.2.14]{BTII} this group scheme is uniquely
determined from the schematic root data. By \cite{BTII} or
\cite{LandvogtLNM} the group schemes ${\mathcal P}_{x_0, f, K}$ are also
given by the schematic root data obtained by base changing the
schematic root data on ${\mathcal V}_{x_0, f}$ via $\O[u]\to \O_K$.
This implies that ${\mathcal G}_{x_0, f}$ specializes to ${\mathcal P}_{x_0, f}$ as in
(2) and (3). Property (1) also follows easily. It remains to show
that ${\mathcal G}_{x_0,f}$ is affine.
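Before turning to affineness, let us record (as a sketch, using the
coordinate $U_a\simeq {\mathbb G}_a$ supplied by the pinning and
writing $n=\langle f(a)\rangle$) how these dilatations look on
points: for an $\O[u]$-algebra $R$ in which $u$ is not a zero
divisor,
$$
{\mathcal U}_{a, x_0, f}(R)=u^{n} R\subset R=(U_a\otimes_{\mathbb Z} \O[u])(R).
$$
Specializing along $u\mapsto \varpi$ recovers the filtration step
$U_a(F)_{x_0, f(a)}=\varpi^{n}\O\subset F=U_a(F)$, and specializing
along $\O[u]\to k[[u]]$ recovers $u^{n}k[[u]]\subset
k((u))=U_a(k((u)))$.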
\medskip
(I) We first assume that $ \Omega$ contains the hyperspecial
vertex $x_0$, i.e.\ we can take $f\geq 0$.
\medskip
When $f=0$, ${\mathcal G}_{x_0, f}\simeq H\otimes_{\mathbb Z} \O[u]$ which is affine.
We will build on this, showing that, in general, ${\mathcal G}_{x_0, f}$ can
be obtained from ${\mathcal G}_{x_0, 0}$ by a series of dilatations. This
follows an argument of Yu (\cite{YuModels}) and provides an
alternative construction of the group schemes. Since such
dilatations are affine, we will obtain the desired conclusion.
(i) Assume first that $0\leq f\leq 1$. Consider the parabolic
subgroup $P_f\subset H$ over ${\mathbb Z}$ containing $T=T_H$ which
corresponds to the set of roots $a\in \Phi$ with $f(a)>0$: For any
field $k$, $P_f(k)$ is generated by $T(k)$ and $U_a(k)$ with
$f(a)=0$. In this case, we
can consider the dilatation of $H\otimes_{\mathbb Z}\O[u]$ along
the closed subgroup scheme given by $P_f\otimes \O \hookrightarrow
H\otimes \O[u]$ over $u=0$: Suppose that $H={\rm Spec } (A)$ and
$P_f={\rm Spec } (A/I)$. Set $J=\O\cdot I +(u)$ for the ideal generated by
$I$ and $u$ in $A\otimes \O[u]$. Then we consider
$$
\displaystyle{B=(A\otimes \O[u])[ ju^{-1} \ |\ j\in J]\subset
A\otimes\O[u, u^{-1}]}
$$
and set
$$
{\mathcal H}_{x_0,f}={\rm Spec } (B).
$$
(We refer the reader to \cite{Waterhouse} for basic properties of
dilations of group schemes.) If $R$ is an $\O[u]$-algebra we set
$\bar R=R/(u)=R\otimes_{\O[u]}\O$ and we denote by $\bar h \in
{\mathcal H}_{x_0,f}(\bar R)$ the reduction of $h\in {\mathcal H}_{x_0, f}(R)$. Then
${\mathcal H}_{x_0, f}$ is the unique, up to isomorphism, group scheme that
supports a homomorphism $\Psi: {\mathcal H}_{x_0, f}\to H\otimes_{{\mathbb Z}}\O[u]$
with the following properties:
\begin{itemize}
\item $\Psi$ is an isomorphism away from $u=0$,
\item for any $\O[u]$-algebra $R$ with $u$ not a zero divisor in $R$,
$\Psi$ identifies ${\mathcal H}_{x_0, f}(R)$ with the subset $\{ h\in H(R)\
|\ \bar h\in P_f(\bar R)\}$ of $H(R)$.
\end{itemize}
We can see that ${\mathcal H}_{x_0,f}$ has connected fibers and is given by
the same group germ ${\mathcal V}_{x_0,f}$ as ${\mathcal G}_{x_0,f}$.
Therefore, ${\mathcal G}_{x_0,f}\simeq {\mathcal H}_{x_0, f}={\rm Spec } (B)$ and is also
affine.
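As a concrete sketch of this construction (our illustration; a
well-known example), take $H={\rm SL}_2$ and let $P_f=B$ be the
upper triangular Borel, corresponding to $f(a)=0$ and $0<f(-a)\leq
1$ for the positive root $a$. Then, for an $\O[u]$-algebra $R$ in
which $u$ is not a zero divisor,
$$
{\mathcal H}_{x_0, f}(R)=\Big\{ \begin{pmatrix} x & y \\ z & w \end{pmatrix} \in {\rm SL}_2(R)\ \Big|\ z\in uR \Big\},
$$
and the base change $u\mapsto \varpi$ is the Iwahori group scheme of
${\rm SL}_2$ over $\O$, whose $\O$-points are the matrices in ${\rm
SL}_2(\O)$ that are upper triangular modulo $\varpi$.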
(ii) In general, we can find a finite sequence $0=t_0<t_1<\cdots <
t_n=1$ such that if $f_i=t_i\cdot f$, we have $f_n=f$, and
$f_{i+1}\leq f_i+1$. We will use induction on $n$. The case $n=1$ is
given by (i) above. Note that by the construction of
\cite[3.9.4]{BTII} we have
a) There are closed group scheme embeddings ${\mathcal U}_{a,
i}\hookrightarrow {\mathcal G}_{i}$ and ${\mathcal T}\hookrightarrow {\mathcal G}_{i}$
that extend the standard embeddings of the root subgroups and the
maximal torus over $\O[u, u^{-1}]$. (Here and in the rest of the
proof, for simplicity, we omit some subscripts and write ${\mathcal U}_{a,
i}$ instead of ${\mathcal U}_{a, x_0, f_i}$ etc.)
b) The embeddings in (a) combined with the multiplication morphism
induce an open immersion
$$
j: {\mathcal V}_{i}=\prod_{a\in \Phi^-}{\mathcal U}_{a, i}\times
{\mathcal T}\times \prod_{a\in \Phi^+}{\mathcal U}_{a,
i}\hookrightarrow {\mathcal G}_{i}
$$
onto a fiberwise dense open subscheme of ${\mathcal G}_i$ which makes the
schematic root data ${\mathcal D}_{i}=({\mathcal T}, ({\mathcal U}_{a,
i})_{a\in \Phi})$ compatible with
${\mathcal G}_{i}$ (in the sense of \cite{BTII}, 3.1.3).
c) The group scheme multiplication morphism
$$
{\mathcal V}_{i}\times {\mathcal V}_{i}\to {\mathcal G}_{i}
$$
is surjective on each fiber.
Our induction hypothesis is: The smooth group scheme
${\mathcal G}_n:={\mathcal G}_{x_0,f_n}$ is affine and supports a group scheme
extension
\begin{equation}\label{extension}
1\to R_n\to \overline {{\mathcal G}}_{n}\to \overline {{\mathcal G}}_{ n}^{\rm red}\to
1
\end{equation}
of a (connected) reductive group scheme by a smooth affine unipotent
group scheme (both over $\O$). Here the bar indicates base change
via $\O[u]\to \O$ given by $u\mapsto 0$. The case $n=1$ follows from
(i) above; indeed, $\overline {{\mathcal G}}_{1}^{\rm red}$ is then the Levi
component of the parabolic $P_f$.
Let us consider ${\mathcal G}_{n+1}$. There is a natural morphism ${{\mathcal U}}_{a, x_0, f_{n+1}}\to {\mathcal U}_{a,x_0, f_n}$.
This is the identity when $f_{n+1}(a)=f_n(a)$ and is given by
dilation of the zero section over $u=0$ when $f_{n+1}(a)>f_n(a)$.
These morphisms combine to give $f_n: {\mathcal V}_{n+1}\xrightarrow{\ }
{\mathcal V}_n\subset {\mathcal G}_n$. Since ${\mathcal G}_{n+1}[u^{-1}]={\mathcal G}_n[u^{-1}]$, by
\cite[1.2.13]{BTII} there is a unique group scheme homomorphism $\tilde
f_n: {\mathcal G}_{n+1}\to {\mathcal G}_n$ that extends $f_n$.
We will now show that $\tilde f_n$ identifies ${\mathcal G}_{n+1}$ with a
dilation of ${\mathcal G}_n$. Consider the (set-theoretic image)
$Q_n$ of $\bar{\mathcal G}_{n+1}\to \bar{\mathcal G}_n$, which is a constructible set. We will show that $Q_n$ is closed in $\bar{\mathcal G}_n$ and underlies a smooth
group scheme, which we will also denote by $Q_n$. (This will imply that ${\mathcal G}_{n+1}\to{\mathcal G}_n$ factors through
${\mathcal G}_{n+1}\to {\mathcal G}_n'\to{\mathcal G}_n$ where ${\mathcal G}'_n\to {\mathcal G}_n$ is the dilation of ${\mathcal G}_n$ along $Q_n\subset \bar{\mathcal G}_n$.)
Using property (c) of ${\mathcal G}_{n+1}$, we see that the image $Q_n$ can
be identified with the image of
$\bar{{\mathcal V}}_{n+1}\times\bar{\mathcal V}_{n+1}$
under the product map in $\bar{\mathcal G}_n$. By construction, we have morphisms
$\overline{{\mathcal U}}_{a, {n+1} }\to Q_{n}$.
Suppose $\kappa$ is either $F$ or $k$. Denote by $V_{a,n}(\kappa)$ the image of the corresponding
$\overline{{\mathcal U}}_{a, n+1 }(\kappa)\to Q_{n}(\kappa)$. Then by the
above, we see that the group of $\kappa$-valued points $Q_{n}(\kappa)$ is the
subgroup of $\overline {{\mathcal G}_n}(\kappa)$ generated by the groups
$V_{a, n}(\kappa)$ and $\overline {\mathcal T}(\kappa)$. By the argument in \cite[8.3.2]{YuModels},
we see that the fibers of $Q_n$ over $\kappa=k$ and $F$, are closed
in the corresponding fibers of ${\mathcal G}_n$. Now consider the extension
(\ref{extension}). In this, $\overline {{\mathcal G}}_{ n}^{\rm red}$
contains the maximal torus $\overline{\mathcal T}$ and it
corresponds to the root datum given by $(\mathbb{X}^\bullet(\overline{\mathcal
T}), \Phi_{f_n}, \mathbb{X}_\bullet(\overline{\mathcal T}), \Phi^\vee_{f_n})$
(with $\Phi_{f}=\{a\in \Phi \ |\ f(a)+f(-a)=0\}$). On the other
hand, as a scheme, $R_n$ is an affine space:
$$
R_n=\prod_{\{a\in \Phi\, |\, f_n(a)+f_n(-a)>0\}} \overline{\mathcal
U}_{a, n}.
$$
We now consider the Zariski closure $\tilde Q_n$ of the generic
fiber $Q_n(F)$ in $\overline{\mathcal G}_n$. This agrees with the Zariski
closure of $\overline{{\mathcal V}}_{n+1}\hookrightarrow \overline{{\mathcal G}}_n$.
We have of course $Q_n\subset \tilde Q_n$. We can see that $\tilde
Q_n\subset \overline{{\mathcal G}}_n$ maps in $ \overline {{\mathcal G}}_{ n}^{\rm
red}$ onto the closed parabolic subgroup scheme $P_n$ of $\overline
{{\mathcal G}}_{ n}^{\rm red}$ generated by $\overline{\mathcal T}$ and
$\overline{\mathcal U}_{a, n}$ with $f_{n+1}(a)=f_{n}(a)=0$. On the
other hand, the intersection of $\tilde Q_n$ with $R_n$ can be
identified as the closed subscheme of $R_n= \prod_{a,
f_n(a)+f_n(-a)>0} \overline{\mathcal U}_{a, n}$ given by the product
of those $\overline{\mathcal U}_{a, n}$ for which
$f_{n+1}(a)=f_n(a)$ (with $f_n(a)+f_n(-a)>0$). This and the above
allow us to conclude that $\tilde Q_n$ is an affine fibration over
$P_n$ and so all fibers of $\tilde Q_n$ are geometrically connected.
It follows that $Q_n=\tilde Q_n$ and so $Q_n$ is a smooth closed
subgroup scheme of $\overline{\mathcal G}_n$. We can see following the
argument in \cite[8.3.3]{YuModels}, that we have
\begin{equation}\label{bigcell}
Q_{n}\cap \overline {{\mathcal V}}_{n} ={\rm Im}(\overline
{{\mathcal V}}_{n+1}\rightarrow \overline{{\mathcal V}}_n).
\end{equation}
As above, ${\mathcal G}_{n+1}\to {\mathcal G}_n$ factors as ${\mathcal G}_{n+1}\to {\mathcal G}'_n\to
{\mathcal G}_n$, where ${\mathcal G}'_n$ is the dilation of ${\mathcal G}_n$ along $Q_n\subset
\overline{\mathcal G}_n$. Then ${\mathcal G}'_n$ is a smooth affine group scheme over
${\rm Spec } (\O[u])$ with connected fibers. Observe that ${\mathcal U}_{a,
f_{n+1}}$ is by definition isomorphic to the dilatation of ${\mathcal U}_{a,
f_n}$ along the image of the morphism $\overline{{\mathcal U}}_{a,
f_{n+1}}\to \overline{{\mathcal U}}_{a, f_n}$. As a result, the dilatation of
${\mathcal V}_{n}$ along the image of $\overline {{\mathcal
V}}_{n+1}\to \overline {{\mathcal V}}_{n}$ is isomorphic to
${\mathcal V}_{n+1}$. It now follows from the
functoriality of the dilatation construction and (\ref{bigcell})
that the dilatation ${\mathcal G}'_{n}$ of ${\mathcal G}_n$ along $Q_{n}$ has an
open subscheme isomorphic to ${\mathcal V}_{n+1}\subset {\mathcal G}_{n+1}$.
Since ${\mathcal V}_{n+1}$ is fiberwise dense in ${\mathcal G}_{n+1}$ it
follows that ${\mathcal G}_{n+1}={\mathcal G}_n'$ and hence ${\mathcal G}_{n+1}$ is also
affine. The rest of the induction hypothesis for ${\mathcal G}_{n+1}$ also
follows. Again, $\overline{{\mathcal G}}^{\rm red}_{n+1}$ is the Levi
component of the parabolic of $\overline {{\mathcal G}}_{ n}^{\rm red}$ that
corresponds to $a\in \Phi_{f_n}$ for which $f_{n+1}(a)>f_n(a)$.
\smallskip
(II) We continue to assume that $\underline G=H\otimes_{\mathbb Z} \O[u^{\pm 1}]$ is
split but now we consider the general case in which
$\Omega=\Omega_f$ does not contain $x_0$. The argument in the proof
of \cite[Lemma 2.2]{GilleTorsors} (see also \cite{LarsenDuke})
implies that there is an integer $\delta\geq 1$ such that the subset
$\Omega_{\delta\cdot f}$ of the apartment contains a hyperspecial
vertex $x'_0$ which is the translate $x'_0=x_0+t$ of $x_0$ by an
element $t\in \mathbb{X}_\bullet(T)$. Consider the homomorphism $\O[u]\to \O[v]$
given by $u\mapsto v^\delta$. Our previous arguments allow us to
construct, using successive dilations of
$t(H\otimes_{\mathbb Z}\O[v])t^{-1}\simeq H\otimes_{\mathbb Z}\O[v]$, a smooth affine
group scheme ${\mathcal G}'_{\Omega}={\mathcal G}_{x'_0, f'}$ over $\O[v]$. (Here
$f'=\delta\cdot f+t$, which is positive.) We can however see that
base changing the schematic root data for ${\mathcal G}_{\Omega}$ by
$\O[u]\to \O[v]$ gives schematic root data for ${\mathcal G}'_{\Omega}$. As
above, using \cite[1.2.13, 1.2.14]{BTII}, this implies that
${\mathcal G}_{\Omega}\otimes_{\O[u]}\O[v]\simeq {\mathcal G}'_{\Omega}$. By faithful
flat descent, ${\mathcal G}_{\Omega}$ is then affine.
\smallskip
b) We now consider the more general case of a quasi-split group
$\underline G$ that splits over an extension $\tilde\O_0[v^{\pm
1}]/\O[u^{\pm 1}]$ as in \ref{sss2a1}. In particular, $\O$ has
residue field $k={\mathbb F}_q$, the extension is given by $u\mapsto v^e$
with Galois group $\Gamma=\Gamma^t$, and $\Gamma^t$ acts on
$\tilde\O_0[v]$ by $\gamma_0(v)=\zeta\cdot v$, $\sigma(\sum_i a'_i
v^i)=\sum_i \sigma(a'_i)v^i$, with invariants
$\tilde\O_0[v]^\Gamma=\O[u]$. Let $\tilde F/F$ be the base change of
$\tilde\O_0[v^{\pm 1}]/\O[u^{\pm 1}]$ along $u=\varpi$, with maximal
unramified subextension $\tilde F_0/F$ of degree $r$ and $e=[\tilde F:\tilde
F_0]$.
As $\underline G$ is quasi-split, $\underline G=\underline G^*$. We consider
$\Omega\subset {\mathcal A}(\underline G_F, \underline A_F, F)={\mathcal A}(H, T_H, \tilde
F)^\Gamma\subset {\mathcal A}(H, T_H, \tilde F)$ and let $ {\mathcal H}_\Omega$ be the
smooth affine group scheme over $ {\mathbb
A}^1_{\tilde\O_0}={\rm Spec } (\tilde\O_0[v])$ constructed for the split group
$H$ as in (a). The group $\Gamma$ acts on the apartment ${\mathcal A}(H, T_H,
\tilde F)$ via its action on $H$ and $\tilde F$. Since $\Omega$ is fixed
by $\Gamma$, we can see that ${\mathcal H}_\Omega$ supports a $\Gamma$-action
that lifts the action of $\Gamma$ on ${\mathbb A}^1_{\tilde\O_0}$.
Notice that the Weil restriction of scalars $ {\rm
Res}_{\tilde\O_0[v]/\O[u]}{\mathcal H}_\Omega $ is also a smooth affine group
scheme over $\O[u]$ (\cite{BTII}, \cite[2.2]{EdixhTame}). By the
above and \cite[2.4]{EdixhTame}, this supports a $\Gamma$-action
over ${\rm Spec } (\O[u])$; the inertia groups for this action are always
subgroups of $\langle\gamma_0\rangle$. Since $\gamma_0$ has order
prime to $p$, by \cite[3.4]{EdixhTame}, the fixed point scheme
${\mathcal G}'_\Omega=({\rm Res}_{\tilde\O_0[v]/\O[u]}{\mathcal H}_\Omega)^{\Gamma}$
is a smooth closed subscheme of the smooth
affine ${\rm Res}_{\tilde\O_0[v]/\O[u]}{\mathcal H}_\Omega$. Hence, it is also
flat over $\O[u]$. Consider the base change
${\mathcal G}'_\Omega\otimes_{\O[u]}\O$ by $u\mapsto 0$. Since this is also
smooth over $\O$, it is the disjoint union
$$
{\mathcal G}'_\Omega\otimes_{\O[u]}\O=Z^0\sqcup(\sqcup_i Z_i)
$$
of its smooth irreducible components where $Z^0$ contains the
identity section. By flatness of ${\mathcal G}'_\Omega\to \AA^1_\O$, all the
components $Z^0$, $Z_i$ are divisors in ${\mathcal G}'_\Omega$. We will set
$$
{\mathcal G}''_\Omega={\mathcal G}'_\Omega-\sqcup_i Z_i
$$
(i.e the complement of the union of those components that do not
contain the identity.) Observe that ${\mathcal G}''_\Omega$ is affine since
it can also be obtained as the dilatation of the affine
${\mathcal G}'_\Omega$ along the affine and smooth closed subscheme $Z^0$ of
its fiber over $u=0$. We will now show that ${\mathcal G}''_\Omega$ is the
connected component $ {\mathcal G}'^0_\Omega$ of ${\mathcal G}'_\Omega$. By
\cite[1.2.12]{BTII} we have to show that each fiber of
${\mathcal G}''_\Omega\to \AA^1_\O$ is connected. First observe that the
geometric fibers at points where $u\neq 0$ are isomorphic to the
split form $H$ and so they are connected. By construction, $
{\mathcal G}''_\Omega \otimes_{\O[u]}\O=Z^0$; therefore the fiber
${\mathcal G}''_\Omega \otimes_{\O[u]}F$ is connected and is the connected
component of ${\mathcal G}'_\Omega\otimes_{\O[u]}F$. In general for
$\kappa=F$ or $k$, let $\kappa'$ be $\tilde F_0$, resp. the residue
field of $\tilde \O_0$. We can consider the fiber over $\O[u]\to
\O\to\kappa'$
$$
{\mathcal G}'_\Omega\otimes_{\O[u]}\kappa'=({\rm
Res}_{\O'[v]/\O[u]}({\mathcal H}_\Omega)\otimes_{\O[u]}\kappa')^\Gamma.
$$
Since $\Gamma$ is an extension of ${\rm Gal}(\kappa'/\kappa)$ by
$\langle\gamma_0\rangle$ this is
$$
({\rm
Res}_{\kappa'[v]/(v^e)/\kappa'}({\mathcal H}_\Omega\otimes_{\tilde\O_0[v]}\kappa'[v]/(v^e)))^{\gamma_0}.
$$
We have
$$
1\to U\to {\rm
Res}_{\kappa'[v]/(v^e)/\kappa'}({\mathcal H}_\Omega\otimes_{\tilde\O_0[v]}\kappa'[v]/(v^e))
\to
\overline{\mathcal H}^{ \rm red}_{\Omega,\kappa'}\to 1
$$
with $U$ unipotent and $\overline{\mathcal H}^{\rm red}_{\Omega,\kappa'}$
(split) reductive over $\kappa'$. Now the maximal reductive quotient
$M:=\overline{\mathcal H}^{\rm red}_{\Omega}$ of
$\overline{\mathcal H}_{\Omega}={\mathcal H}_{\Omega}\otimes_{\tilde\O_0[v]}\tilde\O_0$ is a
Chevalley (reductive) group scheme over $\tilde\O_0$. Since
$\gamma_0(\Omega)=\Omega$ we have an action of $\gamma_0$ on $M$.
Since $\gamma_0$ has order prime to $p$, we can see that
${\mathrm H}^1(\langle\gamma_0\rangle, U)=(0)$. By filtering $U$ by vector
groups we can see that $U^{\gamma_0}$ is connected (see
\cite[4.7.2]{YuModels}). It follows that the group of connected
components of ${\mathcal G}'_\Omega\otimes_{\O[u]}\kappa'$ is identified with
that of $(\overline{\mathcal H}^{\rm red}_{
\Omega,\kappa'})^{\gamma_0}=(\overline{\mathcal H}^{ \rm
red}_{\Omega})^{\gamma_0}\otimes_{\O'}\kappa'=M^{\gamma_0}\otimes_{\tilde\O_0}\kappa'$.
We can now see that the $\gamma_0$-action on $M$ satisfies the
assumptions of Proposition \ref{locconstant}, i.e it preserves a
pair consisting of a maximal split torus and a Borel subgroup that contains it:
Indeed, by construction, $\gamma_0$ preserves the maximal torus
given by $T_H$. Now, as before, consider the subset $\Phi_\Omega$ of
the set of roots $\Phi$ such that there is an affine root with
vector part $a$ and defining a hyperplane containing $\Omega$. The
set $\Phi_\Omega$ can be identified with the roots of $M$ with
respect to the maximal torus given by $T_H$. The group
$\langle\gamma_0\rangle$ acts on $\Phi_\Omega$. The intersection
$\Phi^+_\Omega:=\Phi^+\cap \Phi_\Omega$ is a system of positive
roots in $\Phi_\Omega$ which is stable under $\gamma_0$. Let $C$ be
the affine chamber containing $\Omega$ in its closure which is given
by $\Phi^+_\Omega$; this provides us with a $\gamma_0$-stable Borel
subgroup in $M$ which contains $T_H$. We can now apply Proposition
\ref{locconstant} to the Chevalley group $M=\overline{\mathcal H}^{\rm
red}_{\Omega}$ over $\tilde\O_0$ and the automorphism induced by
$\gamma_0$ as above. We obtain that the group scheme of connected
components of ${\mathcal G}'_\Omega\otimes_{\O[u]}\kappa'$ is given by the
fibers of a finite \'etale commutative group scheme of order
annihilated by $e$ and this order is the same for $\kappa=F$ or
$\kappa=k$. Since ${\mathcal G}_\Omega''\otimes_{\O[u]}F$ is the neutral
component of ${\mathcal G}_\Omega'\otimes_{\O[u]}F$, the base change
${\mathcal G}_\Omega''\otimes_{\O[u]}\tilde F_0$ is also connected (\cite[Exp.
${\rm VI}_{\rm A}$, 2.1.1]{SGA3}). The above now implies that
${\mathcal G}''_\Omega\otimes_{\O[u]}\tilde k$ and hence
${\mathcal G}''_\Omega\otimes_{\O[u]}k$ is also connected. Therefore, ${\mathcal G}''_\Omega=({\mathcal G}'_\Omega)^0$.
We set ${\mathcal G}_\Omega:={\mathcal G}''_\Omega=({\mathcal G}'_\Omega)^0$. It remains to show
that the base changes of ${\mathcal G}_\Omega$ by $\O[u]\to \O$, $u\mapsto \varpi$, resp.
$\O[u]\to \kappa[[u]]$, are isomorphic to the parahoric group schemes ${\mathcal P}_x$,
resp. ${\mathcal P}_{x_{\kappa((u))}}$.
For simplicity, set $L=\bar\kappa((u))$, $L'=\bar\kappa((v))$,
$R=\bar\kappa[[u]]$, $R'=\bar\kappa[[v]]$ and denote by $H(L')_\Omega\subset H(L')$ the
stabilizer of $\Omega\subset {\mathcal B}(H, L)={\mathcal B}(H, L')^{\gamma_0}$.
We also set ${\mathcal G}(L)=(H(L'))^{\gamma_0}$ (the points of a connected reductive group over $L$).
Notice that by construction, ${\mathcal G}'_\Omega(R)={\mathcal H}_\Omega(R')^{\gamma_0}$ while the
result in the split case together with \cite[4.6]{BTII}
gives ${\mathcal H}_\Omega(R')\subset H(L')_\Omega$ with the corresponding quotient
having finite index. Hence, we have
$$
{\mathcal G}'_\Omega(R)={\mathcal H}_\Omega(R')^{\gamma_0}\subset (H(L')_\Omega)^{\gamma_0}=(H(L')^{\gamma_0})_\Omega=
{\mathcal G}(L)_\Omega
$$
and we see that ${\mathcal G}'_\Omega(R)$ is of finite index in the
stabilizer ${\mathcal G}(L)_\Omega$ of $\Omega\subset {\mathcal B}({\mathcal G}, L)$
in ${\mathcal G}(L)$. Since ${\mathcal G}_\Omega$ is the neutral component of ${\mathcal G}'_\Omega$
we conclude that ${\mathcal G}_\Omega\otimes_{\O[u]}R$ is the smooth connected stabilizer of $\Omega$,
i.e we have ${\mathcal G}_\Omega\otimes_{\O[u]}R={\mathcal P}_{x_{\kappa((u))}}\otimes_{\kappa[[u]]}\bar\kappa[[u]]$.
By \cite[1.7]{BTII}, this shows the desired result
for the base change $\O[u]\to \kappa[[u]]$. The case $\O[u]\to \O$, $u\mapsto \varpi$
is similar. This
concludes the proof of Theorem \ref{grpschemeThm} in the quasi-split
case.
\smallskip
(c) Finally, we consider the general case. Recall our notations and
in particular the choice of ${\rm Int}(\bold{g})$ in $\underline
{N'}^*(\breve{\O}[u^{\pm 1}])$. This gives the semilinear
$$
{}^*\sigma:={\rm Int}(\bold g)\cdot \sigma: \underline
G^*\otimes_{\O}\breve{\O}\to \underline G^*\otimes_{\O}\breve{\O}
$$
which covers the Frobenius $\breve{\O}[u^{\pm 1}]\to
\breve{\O}[u^{\pm 1}]$. We also have the inner twist $\underline G$ of
$\underline G^*$ over $\O[u^{\pm 1}]$ defined by taking ${}^*\sigma$ fixed
points of $\underline G\otimes_{\O}\breve{\O}$ as in (\ref{inner}). Our
construction applied to the quasi-split $G^*_{\breve F}$ and the point
$x^*$ given as $\psi_*(x)$ provides us with a group scheme
${\mathcal G}^*_{x^*}$ over $\breve{\O}[u]$ which satisfies the conclusions
of the Theorem. In particular, we have
$$
{\mathcal G}^*_{x^*}{|_{\breve{\O}[u^{\pm 1}]}}=\underline
G^*\otimes_{\O}\breve{\O}.
$$
We will now show that ${}^*\sigma: \underline
G^*\otimes_{\O}\breve{\O}\to \underline G^*\otimes_{\O}\breve{\O}$ extends
to a $\sigma$-semilinear
$$
{}^*\sigma: {\mathcal G}^*_{x^*}\to {\mathcal G}^*_{x^*}.
$$
We first verify that it is enough to check that the base change of
${}^*\sigma$ over $\breve F((u))$ extends to $\breve F[[u]]$: Indeed,
since ${\mathcal G}^*_{x^*}={\rm Spec } (A)$ is affine and smooth over
$\breve{\O}[u]$ we can write $A=A[u^{-1}]\cap
(A\otimes_{\breve{\O}[u]}\breve F[[u]])$. Since ${}^*\sigma$ is defined
over $\O[u, u^{-1}]$, it remains to check that ${}^*\sigma$
preserves $A\otimes_{\breve{\O}[u]}\breve F[[u]]\subset
A\otimes_{\breve{\O}[u]}\breve F((u))$. Now let us check that the base
change of ${}^*\sigma$ over $\breve F((u))$ extends to $\breve F[[u]]$:
Consider $x^*_{\breve{F}((u))}$ which by our construction is fixed
by ${\rm Int}(\bold{g})\cdot \sigma$. This implies that
\begin{equation}\label{eq3.34}
{}^*\sigma ({\mathcal P}_{x^*_{\breve F((u))}}(\breve F[[u]]))\subset
{\mathcal P}_{x^*_{\breve F((u))}}(\breve F[[u]]).
\end{equation}
Since $F[[u]]$ is henselian and $F$ infinite and perfect,
condition (ET 2) of \cite[1.7.2]{BTII} is satisfied. Therefore,
(\ref{eq3.34}) implies that ${}^*\sigma$ extends to
${\mathcal P}_{x^*_{\breve F((u))}}$ which, by our construction, is the base
change of the group scheme ${\mathcal G}^*_{x^*}$ to $\breve F[[u]]$.
We now define ${\mathcal G}_x$ to be the group scheme over $\O[u]$ given by
the Weil descent datum provided by the action of ${}^*\sigma={\rm
Int}(\bold{g})\cdot \sigma$ on ${\mathcal G}^*_{x^*}$ over $\breve{\O}[u]$.
Since ${\mathcal G}^*_{x^*}$ is affine, we can indeed see that ${\mathcal G}_x$ is
represented by an affine group scheme over $\O[u]$, which then
satisfies all the requirements in the Theorem.
\bigskip
\section{Classical groups}
\setcounter{equation}{0}
Recall that when $G$ is a classical group over the local field $F$,
Bruhat and Tits have given a description of the building ${\mathcal
B}(G, F)$ as a set of certain norms on the space of the ``natural
representation'' (\cite{BTclassI}, \cite{BTclassII}). At least when
$p$ is odd, this produces a description of the facets of the
building in terms of self-dual lattice chains in this space. The
corresponding parahoric group scheme can also be explicitly
described as the neutral component of the polarized automorphisms of
the lattice chain. In this chapter, we extend some of this picture
to the group schemes over $\O[u]$ constructed in Theorem
\ref{grpschemeThm}.
\subsection{Lattice chains}\label{ss4a}
First we recall the set-up of lattice chains over $\O$ (cf.
\cite{BTclassI}, \cite{BTclassII}, \cite{RapZinkBook}).
\subsubsection{}\label{sss4a1}
Suppose first that $D$ is a central division $F$-algebra
of degree $d$ and Brauer invariant $s/d$ with $0< s<d$
and ${\rm gcd}(s,d)=1$. Recall $\O$ has residue field ${\mathbb
F}_q$, $q=p^m$. Let $F_d=F{\mathbb Q}_{p^{md}}$, which is then an unramified
extension of $F$ of degree $d$ with integers $\O_d$. Set
$\sigma={\rm Frob}_{p^m}$ for the generator of the Galois group of
$F_d/F$. We can write
$$
D=F_d\oplus F_d\cdot \Pi\oplus \cdots \oplus F_d\cdot \Pi^{d-1}
$$
with relations $ \Pi^d=\varpi$, $a\cdot \Pi = \Pi\cdot \sigma^s(a)$
for all $a\in F_d$. This contains the maximal order
$\O_D=\O_{F_d}\oplus \O_{F_d}\cdot \Pi\oplus \cdots
\oplus\O_{F_d}\cdot \Pi^{d-1}$.
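\smallskip
For example, to fix ideas, for $(s, d)=(1, 2)$ the above specializes to the quaternion division algebra over $F$,
$$
D=F_2\oplus F_2\cdot \Pi,\qquad \Pi^2=\varpi,\quad a\cdot \Pi=\Pi\cdot\sigma(a)\ \hbox{\rm for $a\in F_2$},
$$
of Brauer invariant $1/2$, with maximal order $\O_D=\O_{F_2}\oplus\O_{F_2}\cdot \Pi$.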
Consider $V=D^n$ as a left $D$-module and let $G={\rm GL}_n(D)={\rm
Aut}_D(V)$ identified by sending the matrix $A\in {\rm GL}_n(D)$ to the
automorphism $x\mapsto x\cdot A^{-1}$. A lattice $\L$ in $V=D^n$ is
a finitely generated $\O_D$-submodule of $V$ that contains a
$D$-basis of $V$; then $\L$ is $\O_D$-free of rank $n$. Recall that
a lattice chain in $V$ is a totally ordered (non-empty) set
$\L_\bullet$ of $\O_D$-lattices in $V$ which is stable under
homotheties. It can be represented as:
\begin{equation}\label{eq435}
\cdots\subset\Pi\L_0\subset\L_{r-1}\subset\cdots \subset \L_1\subset
\L_0\subset\cdots .
\end{equation}
By \cite{BTclassI}, the facets $\Omega$ in ${\mathcal B}(G, F)$ correspond
bijectively to $\O_D$-lattice chains (cf. \cite{RapZinkBook}) in
$V=D^n$ (Bruhat and Tits consider right modules but this is
equivalent). Then the parahoric group scheme ${\mathcal P}_x$ ($x\in\Omega$)
is the group scheme over $\O$ given by the $\O_D$-linear
automorphisms of the corresponding chain, i.e
$$
{\mathcal P}_x(R)={\rm Aut}_{\O_D\otimes_\O R}(\{\L_\bullet\otimes_\O R\}).
$$
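For example, when the chain consists only of the multiples $\{\Pi^k\L_0\}_{k\in {\mathbb Z}}$ of the standard lattice $\L_0=\O_D^n$, this reads
$$
{\mathcal P}_x(R)={\rm Aut}_{\O_D\otimes_\O R}(\O_D^n\otimes_\O R),\qquad {\mathcal P}_x(\O)={\rm GL}_n(\O_D),
$$
and ${\mathcal P}_x(\O)={\rm GL}_n(\O_D)$ is a maximal parahoric subgroup of ${\rm GL}_n(D)$.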
\subsubsection{}\label{sss4a2} More generally suppose that $D$ is a central
$L$-algebra with an involution $\tau$;
assume that $F$ is the fixed field of the involution on $L$. Let
$\epsilon=\pm 1$ and let $h$ be an $\epsilon$-hermitian $D$-valued form
on $V=D^n$ with respect to $\tau$. If $\L$ is an $\O_D$-lattice in
$V$, we can consider its dual $\L^\vee=\{x\in \L\ |\ h(x,
\lambda)\in \O_D, \forall \lambda\in\L\}$. A lattice chain
$\L_\bullet$ is called self dual if $\L\in \L_\bullet$ if and only
if $\L^\vee\in \L_\bullet$. The form $h$ defines an involution $*$
on ${\rm GL}_n(D)$ by $ h(xA, y)=h(x, yA^*)$. Consider the unitary group
${\rm U}(V, h)=\{A\in {\rm GL}_n(D)\ |\ (A^*)^{-1}=A\}$ given by elements
of ${\rm GL}_n(D)$ that respect $h$ and let $G$ be its neutral component.
Recall that we assume $p$ is odd. By \cite{PrasadYuInv}, the
building ${\mathcal B}(G, F)$ can be identified with the fixed points of
the action of $\tau$ on ${\mathcal B}({\rm GL}_n(D), L)$. Using the above, we
now see that the facets $\Omega$ in ${\mathcal B}(G, F)$ correspond to
self-dual $\O_D$-lattice chains $\L_\bullet$ in $V$. (This also
follows from the explicit description in \cite{BTclassII}, noting
that when $p\neq 2$, the maximinorante norms of loc. cit. can be
described via self-dual graded lattice chains). It then also follows
that the parahoric Bruhat-Tits group scheme ${\mathcal P}_x$ ($x$ a generic point in
$\Omega$)
is the neutral component of the group scheme over $\O$ given by
$\O_D$-linear automorphisms of the chain $\L_\bullet$ that respect
the perfect forms $\L_i\times\L_j\to \O_D$ obtained from $h$.
\smallskip
We now extend most of this picture to the group schemes over
$\O[u]$. We first start by describing some cases of split groups.
\subsection{Some split classical groups}
\subsubsection{}\label{exGL} {\sl The case of ${\rm GL}_{N}$.} Suppose $W=\O[u]^N$ and set $\overline
W=W\otimes_{\O[u]}\O$. Write $\overline W=\oplus_{i=0}^{r-1}V_i$ and
consider the parabolic subgroup $Q\subset {\rm GL}(\bar W)$ which is the
stabilizer of the flag $F_i=\oplus_{j\geq i}V_j$ given by the $V_i$.
Denote by $d_i$ the $\O$-rank of $V_i$. For $0\leq i\leq r-1$, set
$W_i$ for the preimage of $F_i$ under $W\to \bar W$ so that
$$
uW \subset W_{r-1}\subset\cdots\subset W_1\subset W_0=W.
$$
Extend the index set by requiring $W_{i+k\cdot r}=u^kW_i$ for
$k\in {\mathbb Z}$. Denote by $\iota_i: W_{i+1}\to W_{i}$ the inclusion. We
have a natural identification ${\rm GL}(W_i)={\rm GL}(W_{i+r})$ given by
conjugating by $u$.
The dilation ${\mathcal G}={\rm GL}(W)_Q$ of ${\rm GL}(W)$ along $Q$ is isomorphic to
the closed subgroup scheme in
$H=\prod_{i=0}^{r-1}{\rm GL}(W_i)=\prod_{i\in {\mathbb Z}/r{\mathbb Z}}{\rm GL}(W_i)$ of tuples
that commute with the maps $\iota_i: W_{i+1}\to W_{i}$. It is
isomorphic to a group scheme over $\O[u]$ obtained by Theorem
\ref{grpschemeThm} applied to $G={\rm GL}_N$ and the vertices $x$,
$x_{\kappa((u))}$, obtained from the lattice chains
$\{W_i\otimes_{\O[u]}\O\}_i$, $\{W_i\otimes_{\O[u]}\kappa((u))\}_i$.
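\smallskip
For example, take $N=2$, $r=2$, $d_0=d_1=1$. Then, in a suitable basis, the chain is $uW\subset W_1\subset W_0=W$ with $W_1=u\O[u]\cdot e_1\oplus \O[u]\cdot e_2$, and, for $\O[u]$-algebras $R$ in which $u$ is a nonzerodivisor, ${\mathcal G}(R)$ is identified with the subgroup of ${\rm GL}_2(R)$ of invertible matrices
$$
\left(\begin{matrix} a& ub\\ c& d\end{matrix}\right),\qquad a, b, c, d\in R,
$$
i.e those whose reduction modulo $u$ is lower triangular. The base changes under $u\mapsto \varpi$, resp. $\O[u]\to \kappa[[u]]$, then recover the standard Iwahori group schemes of ${\rm GL}_2$.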
\subsubsection{}\label{exGSp}
{\sl The case of ${\rm GSp}_{2n}$.} Consider
$W=\oplus_{i=1}^{2n}\O[u]\cdot e_i$
with the perfect $\O[u]$-bilinear alternating form $h: W\times W\to \O[u]$ determined by $h(e_i, e_{2n+1-j})=\delta_{ij}$,
$h(e_i, e_j)=h(e_{2n+1-i}, e_{2n+1-j})=0$
for $1\leq i, j\leq n$. Let us fix a chain of $\O[u]$-submodules
\begin{equation}
uW\subset W_{r-1}\subset \cdots \subset W_1\subset W_0\subset W
\end{equation}
such that
(i) $W_{i}^\vee=u^{-1}W_{r-i-1}$, for $0\leq i\leq r-1$,
(ii) $W_i/W_{i+1}\simeq \O^{d_i}$.
Again, extend the index set by periodicity by setting $W_{i+k\cdot r}=u^kW_i$
so that the form $h$ gives $W_i^\vee=W_{-i-a}$, with $a=0$ or $1$, and set
$\iota_i: W_{i+1}\to W_{i}$ as before. Consider the group
scheme ${\mathcal G}$ over $\O[u]$ of similitude automorphisms of the ``polarized''
system $(W_i, h_i)_{i\in {\mathbb Z}}$;
more precisely this is the subgroup scheme of ${{\mathbb G}_{\rm m}}\times\prod_{i=0}^{r-1}{\rm GL}(W_i)={{\mathbb G}_{\rm m}}\times\prod_{i\in {\mathbb Z}/r{\mathbb Z}}{\rm GL}(W_i)$
consisting of $(c, (g_i))$ such that
$$
h (g_i(x), g_{-i-a}(y))=c\cdot h(x,y), \quad \hbox{\rm for all $i\in {\mathbb Z}$}.
$$
As in \cite[Appendix]{RapZinkBook}, we can see that ${\mathcal G}$ is smooth
over $\O[u]$; it is isomorphic to the group scheme obtained by
Theorem \ref{grpschemeThm} applied to $G={\rm GSp}_{2n}$ and the
vertices $x$, $x_{\kappa((u))}$, obtained from the self-dual lattice
chains $\{W_i\otimes_{\O[u]}\O\}_i$,
$\{W_i\otimes_{\O[u]}\kappa((u))\}_i$.
\subsection{Non-split classical groups}\label{exClassical}
We now extend this to (essentially) the general classical case. When
in the sections below we consider symmetric or hermitian forms, we
will assume that the prime $p$ is odd. We first mostly concentrate
on describing explicitly the group schemes $\underline G$.
\subsubsection{Division algebras}\label{exDivision}
With the notations of \ref{sss4a1}, consider the associative
(central) $\O[u]$-algebra given by
\begin{equation}\label{order1}
\O({\mathcal D})=\O_d[u]\oplus \O_d[u]\cdot X\oplus\cdots \oplus \O_d[u]\cdot X^{d-1},
\end{equation}
with relations $X^d=u$, $f\cdot X=X\cdot \sigma(f)^s$ for $f\in
\O_d[u]$ with $\sigma(\sum a_iv^i)=\sum\sigma(a_i)v^i$. Notice that
${\mathcal D}:=\O({\mathcal D})[u^{-1}]$ is an Azumaya algebra over $\O[u,
u^{-1}]$ which splits after the unramified extension $ \O[u^{\pm
1}]\to \O_d[u^{\pm 1}]$; then $\O({\mathcal D})$ is a maximal order in this
Azumaya algebra. We have isomorphisms
\begin{equation}
{\mathcal D}\otimes_{\O[u^{\pm 1}]} F\simeq D,\qquad
\O({\mathcal D})\otimes_{\O[u]}\O_F\simeq \O_D
\end{equation}
where the ring homomorphisms are given by $u\mapsto\varpi$. In
addition, reducing $\O({\mathcal D})$ modulo $\varpi$ followed by
completing at $(u)$ also produces a maximal order in a central
division algebra of degree $d$ and invariant $s/d$ over the local
field ${\mathbb F}_{p^m}((u))$.
For $n\geq 1$, we can consider the affine group scheme over
$\O[u^{\pm1}]$ given by
$$
R\mapsto {\rm Aut}_{{\mathcal D}\otimes_{\O[u^{\pm 1}]}R} (
{\mathcal D}^n\otimes_{\O[u^{\pm 1}]}R).
$$
We can see directly from the construction of \S \ref{ss2b} that this
group scheme is isomorphic to $\underline G$ for $G={\rm GL}_n(D)$.
We can also consider the affine group scheme ${\mathcal G}$ over $\O[u]$
given by
$$
{\mathcal G}(R):={\rm Aut}_{\O({\mathcal D})\otimes_{\O[u]}R} (
\O({\mathcal D})^n\otimes_{\O[u]}R).
$$
The group ${\mathcal G}$ is smooth over $\O[u]$ and is isomorphic to the
group scheme of Theorem \ref{grpschemeThm} for the lattice chain
given by the multiples of $\O_D$.
\subsubsection{} In what follows, the base is ${\mathbb Q}_p$ and we will be discussing group schemes
over ${\mathbb Z}_p[u^{\pm 1}]$. We assume $F$ is a tame finite extension of
${\mathbb Q}_p$ and let ${\mathbb Q}_{p^r}={\rm Frac}(W({\ensuremath{\mathbb{F}}\xspace}_{p^r}))$ be the maximal
unramified extension of ${\mathbb Q}_p$ contained in $F$. Denote by
${\mathbb Z}_{p^r}$ the ring of integers $W({\ensuremath{\mathbb{F}}\xspace}_{p^r})$ of ${\mathbb Q}_{p^r}$. When
$r$ is clear, we will simply write $W$ for
${\mathbb Z}_{p^r}=W({\ensuremath{\mathbb{F}}\xspace}_{p^r})$. We will then denote by $W_d$ the integers
of the unique unramified extension $W({\ensuremath{\mathbb{F}}\xspace}_{p^{rd}})$ of $W$ of
degree $d\geq 2$.
\subsubsection{}\label{sss4b3} We can now explicitly construct the group scheme $\underline G$
over ${\mathbb Z}_p[u^{\pm 1}]$ associated to the restriction of scalars
$G={\rm Res}_{F/{\mathbb Q}_p}{\rm GL}_m(D)$ with $D$ a division algebra over $F$ as
above.
There is a $W$-algebra isomorphism $j:
W[x]/(x^e-pc)\xrightarrow{\sim} \O_F$ where
$p\nmid e$ and $c\in W^*$. We choose such an isomorphism $j$,
i.e a uniformizer $\varpi$ of $F$ such that $\varpi^e$ is in $W$.
As above, we construct an associative (central) $W[v]$-algebra given
by
\begin{equation}\label{order}
\O({\mathcal D})=W_d[v]\oplus W_d[v]\cdot X\oplus\cdots \oplus W_d[v]\cdot X^{d-1},
\end{equation}
with relations $X^d=v$, $f\cdot X=X\cdot \sigma(f)^s$ for $f\in
W_d[v]$ with $\sigma(\sum a_iv^i)=\sum\sigma(a_i)v^i$. (After the
base change $W[v]\to \O[u]$, $v=u$, this produces the algebra
denoted by the same symbol in the previous paragraph.) Again,
${\mathcal D}=\O({\mathcal D})[v^{-1}]$ is an Azumaya algebra over $W[v,v^{-1}]$.
We have isomorphisms
\begin{equation}
{\mathcal D}\otimes_{W[v, v^{-1}]} F\simeq D,\qquad
\O({\mathcal D})\otimes_{W[v]}\O_F\simeq \O_D
\end{equation}
where $W[v, v^{-1}]\to F$, $W[v]\to \O_F$ are given by
$v\mapsto\varpi$. In addition, reducing $\O({\mathcal D})$ modulo $p$
followed by completing at $(v)$ also produces a maximal order in a
central division algebra of degree $d$ and invariant $s/d$ over
the local field ${\mathbb F}_{p^r}((v))$.
Define $\phi: {\mathbb Z}_p[u]\to {\mathbb Z}_{p^r}[v]$ by $u\mapsto v^e\cdot c^{-1}$
with $c=\varpi^e \cdot p^{-1}$. For $m\geq 1$, we set
$M=\O({\mathcal D})^m$. We consider
$$
{\mathcal G}'(R)={\rm Aut}_{\O({\mathcal D})\otimes_{{\mathbb Z}_p[u]}R} ( M\otimes_{{\mathbb Z}_p[u]}R).
$$
This defines a smooth affine group scheme over ${\mathbb Z}_p[u]$ such that
$$
{\mathcal G}'\otimes_{{\mathbb Z}_p[u], u\mapsto p}{\mathbb Q}_p\simeq {\rm Res}_{F/{\mathbb Q}_p}(
{\rm GL}_m(D)).
$$
Suppose we choose another uniformizer $\varpi_1$ with
$\varpi_1^e=pc_1$ and denote by $\phi_1$ the corresponding map as
above. Let $y=\varpi_1/\varpi\in \O^*_F$. Since $y^e\in W^*$ and
$p\nmid e$, the extension ${\mathbb Q}_{p^r}(y)/{\mathbb Q}_{p^r}$ is unramified and
therefore $y$ is in $W^*={\mathbb Z}_{p^r}^*$. Sending $v$ to $y\cdot v$ then
gives an isomorphism $\alpha: W[v]\xrightarrow{\sim}W[v]$ that maps
$(v^e-pc)$ to $(v^e-pc_1)$ and commutes with $\phi$, $\phi_1$.
\begin{comment}
(A different choice $y'=\zeta_e\cdot y$ produces $\alpha':
{\mathbb Z}_{p^r}[v]\xrightarrow{\sim}W[v]$ that differs by an (inertial)
automorphism of ${\mathbb Z}_{p^r}[v]$ over ${\mathbb Z}_p[u]$.)
\end{comment}
Find $z\in W_d^*$ such that $N_{{\mathbb Q}_{p^{rd}}/{\mathbb Q}_{p^r}}(z)=y$; sending
$X\mapsto X\cdot z$ gives $\O({\mathcal D}) \xrightarrow{\sim} \O({\mathcal D})
\otimes_{W[v],\alpha }W[v]$. This implies that ${\mathcal G}'$ is independent
from the choice of $\varpi$ with $\varpi^e\in W$. The group scheme
${\mathcal G}'_{|{\mathbb Z}_p[u^{\pm 1}]}$ is isomorphic to the group scheme $\underline G$
obtained from $G={\rm Res}_{F/{\mathbb Q}_p}({\rm GL}_m(D))$ as above; this
follows directly from the construction of \S \ref{ss2b} using
(\ref{invarRing}). The restriction ${\mathcal G}'_{|{\mathbb Z}_p[u^{\pm 1}]}\to
{\rm Spec } ({\mathbb Z}_p[u^{\pm 1}])$ is the Weil restriction of scalars from
$W[v^{\pm 1}]$ of a twisted form of ${\rm GL}_{md}$ over $W[v^{\pm
1}]$; this twisted form is the group of automorphisms of the module
${\mathcal D}^m=\O({\mathcal D}) [v^{-1}]^m$ for the Azumaya algebra ${\mathcal D}$ over
$W[v^{\pm 1}]$.
\subsubsection{}\label{sss4c4} Here again $W$ is ${\mathbb Z}_{p^r}=W({\ensuremath{\mathbb{F}}\xspace}_{p^r})$
and $W_d=W({\ensuremath{\mathbb{F}}\xspace}_{p^{rd}})$ as above. Write
\begin{equation}
W[v^{\pm 1}]^*/(W[v^{\pm 1}]^*)^2=\{1, \alpha, v, \alpha v\}
\end{equation}
where $\alpha$ is an element of $W^*$ which is not a square.
We consider a $W[v]$-algebra ${\mathfrak R}$ given as
$$
{\mathfrak R}=W_2[v],\quad {\rm or}\quad {\mathfrak R}=W[v'], \ v\mapsto
v'^2,\quad {\rm or}\quad {\mathfrak R}=W[v'], \ v\mapsto \alpha^{-1}v'^2.
$$
We will refer to the first possibility as the {\sl unramified}
case. The other two possibilities are the {\sl ramified} case. We
have a $W[v]$-symmetric bilinear form
$$
h_{\mathfrak R}: {\mathfrak R}\times{\mathfrak R}\to W[v]; \quad h_{\mathfrak R}(x,
y)=\frac{1}{2}{\rm Tr}_{{\mathfrak R}/W[v]}(x\bar y)
$$
where $r\mapsto \bar r$ is the order two automorphism of ${\mathfrak R}$
over $W[v]$. The form $h_{{\mathfrak R}}$ is perfect in the unramified
case; $h_{\mathfrak R}[v^{-1}]$ on ${\mathfrak R}[v^{-1}]$ is always perfect.
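\smallskip
For example, in the ramified case ${\mathfrak R}=W[v']$ with $v\mapsto v'^2$, we have $\bar v'=-v'$, and the Gram matrix of $h_{\mathfrak R}$ in the $W[v]$-basis $1, v'$ of ${\mathfrak R}$ is
$$
\left(\begin{matrix} h_{\mathfrak R}(1,1)& h_{\mathfrak R}(1,v')\\ h_{\mathfrak R}(v',1)& h_{\mathfrak R}(v',v')\end{matrix}\right)
=\left(\begin{matrix} 1& 0\\ 0& -v\end{matrix}\right),
$$
of determinant $-v$; this makes explicit that, in this case, $h_{\mathfrak R}$ becomes perfect only after inverting $v$.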
We also consider the central $W[v]$-algebra
$$
\O({\mathcal Q})=W_2[v]\oplus W_2[v]\cdot X
$$
with relations $X^2=v$, $f\cdot X=X\cdot \sigma(f)$ for $f\in
W_2[v]$; this corresponds to the quaternion case $(s,d)=(1,2)$ as
above. Denote by $x\mapsto \bar x$ the main involution of
$\O({\mathcal Q})$ which is $\sigma$ on $W_2[v]$ and maps $X$ to $-X$. Let
$\zeta$ be a root of unity that generates $W_2$ over $W$,
$W_2=W(\zeta)$. Then $\bar\zeta=-\zeta$. The reduced norm ${\rm
Norm}(r)=r\cdot \bar r$ defines a $W[v]$-linear quadratic form on
$\O({\mathcal Q})$. Denote by $h_{\O({\mathcal Q})}: \O({\mathcal Q})\times\O({\mathcal Q})\to
W[v]$ the corresponding $W[v]$-bilinear symmetric form.
We will use the symbol ${\mathfrak O}$ to denote one of $W[v]$, ${\mathfrak R}$,
or $\O({\mathcal Q})$; each of these $W[v]$-algebras supports an involution
$x\mapsto \bar x$ as above (this is trivial in the case of $W[v]$).
In the following paragraph we will give each time a free (left)
module $M$ over ${\mathfrak O}$ which is equipped with a certain form $h$
(alternating, symmetric, hermitian, etc.). All the forms below are
``perfect'' after we invert $v$, i.e over $W[v^{\pm 1}]$. We will
consider the group scheme ${\mathcal G}'$ over $ W[v]$
given by the ${\mathfrak O}$-module automorphisms of $M$ that respect the corresponding form $h$.
Suppose that $F$ is a totally tamely ramified extension of ${\mathbb Q}_{p^r}$ of degree $e$.
Choose a uniformizer $\varpi$ of $F$ with
$\varpi^e\in W$ as above and consider the base change $W[v^{\pm
1}]\to F$ given by $v\to \varpi$. In the list below, we mention the
type of the isogeny class of the group ${\mathcal G}'|_{W[v^{\pm
1}]}\otimes_{W[v^{\pm 1}]}F$ over $F$ according to the tables 4.2
and 4.3 of \cite[p. 60-65]{TitsCorvallis}. The determination of
these types follows \cite[4.4, 4.5]{TitsCorvallis}. The
corresponding symbol is read from the first column of these tables.
\subsubsection{Alternating forms}\label{exAlt}
\begin{itemize}
\item{} $M=W[v]^{2n}=\oplus_{i=1}^{2n}W[v]\cdot e_i$ with the alternating $W[v]$-bilinear form $h$
determined by $h(e_i, e_{2n+1-j})=\delta_{ij}$, $1\leq i\leq n$.
(cf. \S \ref{exGSp}). (For $n\geq 2$, the type is $C_n$.)
\end{itemize}
\subsubsection{Symmetric forms}\label{exSymm}
(Set $n=2m+1$, or $n=2m$.)
\begin{itemize}
\item{\sl Split:} $M=W[v]^{n}=\oplus_{i=1}^{n}W[v]\cdot e_i$ with the symmetric $W[v]$-bilinear form $h=h(n)$
determined by $h(n)(e_i, e_{n+1-j})=\delta_{ij}$. (For $n\geq 6$,
the type is $B_m$, or $D_m$ respectively.)
\smallskip
\item{\sl Quasi-split, even case:} Here $n$ is even and
$M=W[v]^{n-2}\oplus {\mathfrak R}$ with the symmetric $W[v]$-bilinear form
$h$ given as the direct sum $h(n-2)+h_{{\mathfrak R}}$. (The types are
${}^2D_m$ if ${\mathfrak R}=W_2[v]$ (unramified) and $n\geq 8$ and
$C-B_{m-1}$ if ${\mathfrak R}$ is ramified and $n\geq 6$.)
\smallskip
\item{\sl Non quasi-split, even case:} Here $n$ is even and
$M=W[v]^{n-4}\oplus \O({\mathcal Q}) $
with the symmetric $W[v]$-bilinear form $h$ given as the direct
sum $h(n-4)+h_{\O({\mathcal Q})}$. (For $n\geq 6$, the type is ${}^2D'_m$.)
\smallskip
\item{\sl Non quasi-split, odd case:} Here $n$ is odd and $M=W[v]^{n-3}\oplus \O({\mathcal Q})^0 $
with the symmetric $W[v]$-bilinear form $h$ given as the direct sum
$h(n-3)+h_{\O({\mathcal Q})^0}$. We denote by $\O({\mathcal Q})^0$ the submodule
of elements $r$ for which $r+\bar r=0$ and by $h_{\O({\mathcal Q})^0}$ the
restriction of $h_{\O({\mathcal Q})}$ to this submodule. (For $n\geq 6$,
the type is ${}^2B'_m$.)
\end{itemize}
\subsubsection{Hermitian forms}\label{exHerm}
(Set $n=2m+1$, or $n=2m$.)
\begin{itemize}
\item{\sl quasi-split:} $M={\mathfrak R}^{n}$ with hermitian form $h=H(n)$ given by
$$
H(n)(x, y)=x^t\cdot K_n\cdot\bar y
$$
where $K_n$ is the antidiagonal $n\times n$ unit matrix. There are
subcases here according to the choice of ${\mathfrak R}$. (Suppose $n\geq
3$. In the unramified case, the type is ${}^2A'_{n-1}$. In the
ramified case, the type is $C-BC_m$ if $n=2m+1$, or $B-C_m$ if
$n=2m$.)
\item{\sl non quasi-split, ramified, even case:} Here $n=2m$ is even, $M={\mathfrak R}^{n-2}\oplus {\mathfrak R}^2$, ${\mathfrak R} $
ramified with hermitian form $H$ given as the direct sum
$h=H(n-2)\oplus H_\alpha$ with
$$
H_\alpha((x_1, x_2), (y_1, y_2) )= x_1\bar y_1- \alpha\cdot
x_{2}\bar y_{2}
$$
and $\alpha\in W^*$ which is not in $(W^*)^2$. (If $n\geq 3$, the
type is ${}^2B-C_m$.)
\item{\sl non quasi-split, unramified, even case:} Here again $n=2m$ is even, $M={\mathfrak R}^{n-2}\oplus {\mathfrak R}^2$, ${\mathfrak R}=W_2[v]$,
with hermitian form $h$ given as the direct sum $h=H(n-2)\oplus H_u$
with
$$
H_u((x_1, x_2), (y_1, y_2) )= x_1\bar y_1- v\cdot x_{2}\bar y_{2}.
$$
(If $n\geq 3$, the type is ${}^2A''_{n-1}$.)
\end{itemize}
\subsubsection{Quaternionic $\epsilon$-hermitian forms}\label{exQHermitian}
Let $\epsilon=\pm 1$. If $M$ is a left $\O({\mathcal Q})$-module, then a
$W[v]$-bilinear $H: M\times M\to \O({\mathcal Q})$ is called an
$\epsilon$-hermitian (i.e hermitian if $\epsilon=1$, anti-hermitian
if $\epsilon=-1$) form, for the main involution $d\mapsto \bar d$,
if it satisfies: $H(dx, y)=dH(x, y)$, $\overline{ H(x,y)}=\epsilon
H(y,x)$ for $d\in\O({\mathcal Q})$, $x$, $y\in M$. Choose a unit $\xi\in
W^*_2$ such that ${\rm Norm}(\xi)=-{\rm Norm}(\zeta)=\zeta^2$.
\begin{itemize}
\item{\sl Quaternionic hermitian:} $M=\O({\mathcal Q})^n$, with hermitian form $h=H(n): M\times M\to \O({\mathcal Q})$
given by
$$
H(n)(x, y)= x^t\cdot K_n\cdot \bar y.
$$
(If $n\geq 2$, the type is ${}^2C_n$.)
\item{\sl Quaternionic anti-hermitian:} $M=\O({\mathcal Q})^{m}\oplus \O({\mathcal Q})^m\oplus M_0$ where $M_0=\O({\mathcal Q})^r$,
$n=2m+r$. The anti-hermitian form $h='H: M\times M\to \O({\mathcal Q})$ is
the direct sum $'H(2m)\oplus 'H_0$ where
$$
'H(2m)(x, y)= x^t\cdot \left(\begin{matrix}0& I_m\\
-I_m&0\end{matrix}\right)\cdot \bar y
$$
is the standard anti-hermitian hyperbolic form and $M_0$ with its form $'H_0$ is given as in one of the following four cases:
\begin{itemize}
\item{(a)} $M_0=(0)$. (Here $n$ is even. If $n\geq 6$, the type is ${}^2D''_n$.)
\item{(b)} $M_0=\O({\mathcal Q})$ with form $xc\bar y$ with $c=X$, $c=\zeta$, or $c=X\xi$.
(Here $n$ is odd. The type is either ${}^2D''_n$ if $c=\zeta$ and
$n\geq 5$, or for $n\geq 3$, ${}^2C-B_{n-1}$ otherwise.)
\item{(c)} $M_0=\O({\mathcal Q})^2$ with form $x_1a_1\bar y_1+x_2a_2\bar y_2$ with $a_1$, $a_2$
two distinct elements of the set $\{X, \zeta, X\xi\}$. (Here $n$ is
even. If $n\geq 4$, the type is ${}^4D_n$.)
\item{(d)} $M_0=\O({\mathcal Q})^3$ with form $x_1X\bar y_1+x_2\zeta\bar y_2+x_3X\xi\bar y_3$.
(Here $n$ is odd. If $n\geq 5$, the type is ${}^4D_n$.)
\end{itemize}
\end{itemize}
\smallskip
\subsubsection{}\label{exhaustive}
Our list of cases above is exhaustive in the following sense: Choose
once and for all the uniformizer $\varpi$ of $F$. The connected
components of the specializations ${\mathcal G}'|_{W[v^{\pm
1}]}\otimes_{W[v^{\pm 1}]}F$, together with the groups ${\rm
SL}_m(D)$ for $F$-central division algebras $D$ (these groups are of
type ${}^dA_{md-1}$ with $d$ the degree of $D$), give exactly all
the isogeny classes of absolutely almost simple groups over $F$
which are of classical type. (More precisely, if we avoid
exceptional isomorphisms by obeying the listed restrictions on the
dimension $n$, we obtain each isogeny class exactly once.) This
follows from the above, the discussion in \cite[4.5]{TitsCorvallis},
and classical results on the classification of quadratic and
(quaternionic) hermitian forms over local fields (e.g.
\cite{Jacobson}, \cite{Tsukamoto}). For example, to deal with the
quasi-split case for symmetric forms, we notice that, since the
residue characteristic is odd, $v\mapsto\varpi$ gives an isomorphism
\begin{equation}
W[v^{\pm 1}]^*/(W[v^{\pm 1}]^*)^2\xrightarrow{\sim} F^*/(F^*)^2.
\end{equation}
This allows us to realize any quadratic extension $L/F$ as a
specialization of a uniquely specified ${\mathfrak R}/W[v]$ at $v\mapsto
\varpi$. As a result, the trace form $\frac{1}{2}{\rm Tr}_{L/F}( \
)$ can be obtained by specializing $\frac{1}{2}{\rm
Tr}_{{\mathfrak R}[v^{\pm 1}]/W[v^{\pm 1}]}( \ )$ by $v\mapsto \varpi$.
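For instance, when the residue field of $F$ is finite, $W^*/(W^*)^2$ has
order two since the residue characteristic is odd; choosing a non-square
unit $\epsilon\in W^*$ and using $W[v^{\pm 1}]^*=W^*\times v^{{\mathbb Z}}$, the
isomorphism above matches the four square classes
$$
\{1,\ \epsilon,\ v,\ \epsilon v\}\ \longmapsto\ \{1,\ \epsilon,\ \varpi,\ \epsilon\varpi\}=F^*/(F^*)^2.
$$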
\subsubsection{}
In what follows, the symbol ${\mathfrak O}$ will denote either $\O({\mathcal D})$
as in (\ref{order}), or $W[v]$, ${\mathfrak R}$, $\O({\mathcal Q})$ as in the
previous sections. Recall that we denote by ${\mathcal G}'$ the group scheme
over $W[v]$ of ${\mathfrak O}$-automorphisms of $M$ that also preserve the
form $h$ if applicable. All the above forms are ``perfect'' after we
invert $v$, i.e.\ over $W[v^{\pm 1}]$, and we can see that ${\mathcal G}'|_{
W[v^{\pm 1}]}$ is reductive. Denote by $G'$ the specialization of
the neutral component $({\mathcal G}'|_{W[v^{\pm 1}]})^\circ\otimes_{W[v^{\pm
1}]}F$ and consider the Weil restriction of scalars $G={\rm
Res}_{F/{\mathbb Q}_p}G'$. Regard $W[v^{\pm 1}]$ as a ${\mathbb Z}_p[u^{\pm
1}]$-algebra via $u\mapsto v^e\cdot p\varpi^{-e}$ as before.
\begin{prop}\label{coincide}
The group scheme $\underline G$, as constructed in \S \ref{ss2b} from $G$
above, is isomorphic to the neutral component of the group scheme
over ${\mathbb Z}_p[u^{\pm 1}]$ with $R$-valued points the
${\mathfrak O}\otimes_{{\mathbb Z}_p[u^{\pm 1}]}R$-linear automorphisms of
$M\otimes_{{\mathbb Z}_p[u^{\pm 1}]}R$ that also respect the form
$h\otimes_{{\mathbb Z}_p[u^{\pm 1}]}R$ if applicable.
\end{prop}
\begin{proof}
As above, consider
the neutral component $\underline J:=({\mathcal G}'|_{W[v^{\pm 1}]})^\circ$ of the group scheme over
$W[v^{\pm 1}]$
of ${\mathfrak O}$-automorphisms of $M$ that also preserve the form $h$ if
applicable. Then the group scheme in the statement of the
Proposition is isomorphic to ${\rm Res}_{W[v^{\pm 1}]/{\mathbb Z}_p[u^{\pm
1}]}\underline J$ and it is enough to check that $\underline J$ is isomorphic to
the group scheme $\underline G'$ which is obtained from $G'$ using our
constructions in the previous chapters. This can be shown by a
case-by-case verification and we will leave some of the work to the
reader:
First, in the case of inner forms of type $A_n$ where ${\mathfrak O}=\O({\mathcal D})$,
the result is in \S \ref{sss4b3} and follows directly from the
construction of \S \ref{ss2b}. Second, suppose we consider the rest
of the cases of the previous section; then we assume that $p$ is
odd. The group
$G'$ contains a standard split torus and we can compute the
quasi-split forms and the corresponding anisotropic kernel in the
explicit descriptions of the cases above. The adjoint groups of
these anisotropic kernels are inner forms of products of
${\rm PGL}_2$ or ${\rm PGL}_4$ (the latter occurring only in case
\ref{exQHermitian} (d)). We first check that $\underline
J\otimes_{W[u^{\pm 1}]}\breve{\mathbb Z}_p[v^{\pm 1}]$ is quasi-split over
$\breve{\mathbb Z}_p[v^{\pm 1}]$. Then the rigidity of quasi-split forms (\S
\ref{ssstameeq}) shows the base change $\underline J\otimes_{W[u^{\pm
1}]}\breve{\mathbb Z}_p[v^{\pm 1}]$ is isomorphic to
$\underline {G'}^*\otimes_{W[u^{\pm 1}]}\breve{\mathbb Z}_p[v^{\pm 1}]$ and
it remains to verify that the inner twists of $\underline {G'}^*$ that
define $\underline J$ and $\underline G'$ agree. This can be done case-by-case;
we leave the details to the reader. (Alternatively, since the
reductive groups $\underline J$ always split over a degree $4$ Galois
cover of $W[u^{\pm 1}]$ and $p$ is odd, the isomorphism between
$\underline J$ and $\underline G'$ can also be shown directly using Remark
\ref{Piano}.)
\begin{comment} In all but case \ref{exQHermitian} (d) we can also argue as follows:
\cite[Theorem 2.4]{ParimalaLaurent} together with \cite{KnusQuadratic}, imply that
the only, up to
isomorphism, non-split inner form of ${\rm PGL}_2$ over $W[v^{\pm
1}]$ which splits over $\breve{\mathbb Z}_p[v^{\pm 1}]$ is given by the
adjoint quotient of ${\rm Aut}_{{\mathcal Q}}( {\mathcal Q})$. This, in turn,
implies that the class $\underline c$ of \ref{sss2c3} defining the inner
twist is then uniquely determined by its restriction along $v\mapsto
\varpi$; the result then follows.)
\end{comment}
\end{proof}
\subsubsection{}\label{sss4b11} We can now extend the explicit descriptions
of the group schemes ${\mathcal G}$ from the split cases above to the general
classical case. Recall ${\mathfrak O}$ will denote either $\O({\mathcal D})$ as in
(\ref{order}), or $W[v]$, ${\mathfrak R}$, $\O({\mathcal Q})$ as in the previous
sections. Then, we also denote by $Y$ the ``standard uniformizer'' of
each of these algebras, i.e.\ $X$ when ${\mathfrak O}$ is $\O({\mathcal D})$ or
$\O({\mathcal Q})$, $v$ when ${\mathfrak O}=W[v]$ or ${\mathfrak R}=W_2[v]$ in the
unramified case, and $v'$ when ${\mathfrak O}={\mathfrak R}$ in the ramified case.
An ${\mathfrak O}$-lattice chain in ${\mathfrak O}[v^{\pm 1}]^n$ is a totally
ordered non-empty set $M_\bullet$ of left ${\mathfrak O}$-submodules of
${\mathfrak O}[v^{\pm 1}]^n$ which are free of rank $n$, which is stable
under multiplication by $Y$ and $Y^{-1}$, and of the form
\begin{equation}
\cdots\subset Y M_0 \subset M_{r-1}\subset\cdots \subset M_1\subset
M_0\subset\cdots
\end{equation}
with $M_i/M_{i+1}$, for all $i\in{\mathbb Z}$, free over $W$.
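For example (with respect to the standard basis $e_1,\dots,e_n$ of
${\mathfrak O}[v^{\pm 1}]^n$), the complete standard chain is given by
$$
M_i=\sum_{j\leq i}{\mathfrak O}\,Y e_j+\sum_{j>i}{\mathfrak O}\,e_j,\qquad 0\leq i\leq n-1,
$$
extended periodically by $M_{i+n}=YM_i$; here each quotient
$M_i/M_{i+1}\simeq {\mathfrak O}/Y{\mathfrak O}$ is indeed free over $W$.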
If ${\mathfrak O}$ is one of $W[v]$, ${\mathfrak R}$, $\O({\mathcal Q})$ with form $h$ as
in the previous section, and $N$ an ${\mathfrak O}$-lattice in
${\mathfrak O}[v^{\pm 1}]^n$ we can consider the dual $N^\vee=\{x\in
{\mathfrak O}[v^{\pm 1}]^n\ |\ h(x, m)\in {\mathfrak O}, \ \forall m\in N\}$ which
is also an ${\mathfrak O}$-lattice. The ${\mathfrak O}$-lattice chain $M_\bullet$
is called self-dual if, for every lattice $N$, $N$ belongs to
$M_\bullet$ if and only if $N^\vee$ belongs to $M_\bullet$. Then, for
each index $i$, there is $j$ such that $h$
induces a perfect pairing
\begin{equation}\label{forms}
h: M_i\times M_j\to {\mathfrak O}.
\end{equation}
If $M_\bullet$ is a (self-dual) ${\mathfrak O}$-lattice chain, the base
change $M_\bullet\otimes_{W[v]}\O$, by $v\mapsto \varpi$, gives a
(self-dual) lattice chain as in \S \ref{ss4a}.
\begin{Remark}\label{genericLattice}
{\rm In fact, we can give such self-dual ${\mathfrak O}$-lattice chains by
the following construction: Let $M$ be $\O({\mathcal D})^n$ or in general
${\mathfrak O}^n$ with the form as in \S \ref{exAlt}, \ref{exSymm},
\ref{exHerm}, etc. Consider the $F((v))$-vector space
$V=M\otimes_{W[v^{\pm 1}]}F((v))$; it is a free (left)
${\mathfrak O}\otimes_{W[v]}F((v))$-module and supports the perfect form
$h\otimes_{W[v]}F((v))$. Choose a self-dual lattice chain
$L^\bullet=(L_i)_i$ of ${\mathfrak O}\otimes_{W[v]}F[[v]]$-lattices in $V$
in the sense of \S \ref{ss4a} (cf. \cite{RapZinkBook}) (for the
equal characteristic dvr $F[[v]]$). Since $M\otimes_{W[v]}F[[v]]$ is
also a lattice in $V$, for each $i\in{\mathbb Z}$, there is $r_i\geq 0$ so
that $v^{r_i}M\otimes_{W[v]}F[[v]]\subset L_i\subset
v^{-r_i}M\otimes_{W[v]}F[[v]]$. The intersection $M_i:=L_i\cap
M[v^{-1}]$ is a finitely generated and reflexive (i.e.\ equal to its
double dual) $W[v]$-module.
Since $W[v]$ is regular Noetherian
of Krull dimension $2$ and $(v, p)$ has codimension $2$, it follows
that
$M_i$ is a finitely generated projective
module which then is actually $W[v]$-free (\cite{SeshadriPNAS}). In
fact, locally free coherent sheaves on ${\rm Spec } (W[v])-\{(v, p)\}$
uniquely extend to locally free coherent sheaves over ${\rm Spec } (W[v])$.
Similarly, module homomorphisms between two such sheaves
uniquely extend.
Using this extension
property, we now see that ${\mathfrak O}$-multiplication and the perfect
forms (considered as maps $L_i\to L_j^{\rm dual}$) extend to
$\{M_i\}$. Suppose that $L_i\subset L_j$ are two $F[[v]]$-lattices
in the chain and consider the corresponding $W[v]$-lattices
$M_i\subset M_j$. Notice that $M_i/M_j$ has projective dimension $1$
as a $W[v]$-module and so it follows from the Auslander-Buchsbaum
theorem that $M_i/M_j$ has depth $1$. On the other hand, since
$M_i[v^{-1}]=M_j[v^{-1}]$, $M_i/M_j$ is supported along $v=0$. By
the above, all the associated primes of $M_i/M_j$ have height $1$,
therefore $M_i/M_j$ has no section supported over $(v, p)$. It now
follows that if the quotient $L_i/L_j$ is annihilated by $v$ and has
$F$-rank $d$, then
$M_i/M_j$ is also annihilated by $v$ and is actually $W$-free of rank $d$.
Similarly, we see that there are $m_j\geq 0$ such that
$$
v^{m_j}{\mathfrak O}^n\subset M_i\subset v^{-m_j}{\mathfrak O}^n
$$
with $M_i/v^{m_j}{\mathfrak O}^n$, $v^{-m_j}{\mathfrak O}^n/M_i$ both $W$-free. We
can now show that all such $M_i$ are free ${\mathfrak O}$-modules. Hence,
it also follows that $M_\bullet=\{M_i\}_i$ is a (self-dual)
${\mathfrak O}$-lattice chain in $M[v^{\pm 1}]$ in the sense above. }
\end{Remark}
Now consider the group scheme ${\mathcal G}'$ over ${\mathbb Z}_p[u]$ with $R$-valued
points the ${\mathfrak O}\otimes_{{\mathbb Z}_p[u]}R$-linear automorphisms of the
chain $M_\bullet\otimes_{{\mathbb Z}_p[u]}R$ that respect the forms
$h\otimes_{{\mathbb Z}_p[u]}R$ of (\ref{forms}), if applicable. The arguments in
\cite[Appendix]{RapZinkBook} show that ${\mathcal G}'$ is smooth. Base
changing by ${\mathbb Z}_p[u]\to {\mathbb Z}_p$ via $u\mapsto p$ or by ${\mathbb Z}_p[u]\to
{\mathbb Q}_p((u))$ produces smooth group schemes whose neutral component is
a Bruhat-Tits parahoric group scheme (see \S \ref{sss4a2}). Using \S
\ref{unique}, we can now see that the neutral component ${\mathcal G}$ of
${\mathcal G}'$ is a group scheme obtained by Theorem \ref{grpschemeThm}.
\subsubsection{Variants}\label{exVariants}
Similarly, we can consider:
\begin{itemize}
\item{} anti-hermitian forms $\tilde h$ on ${\mathfrak R}^n$ obtained by multiplying
the hermitian forms $h$ of \ref{exHerm} by either $\zeta$ (in the
unramified case), or by $v'$ (in the ramified cases).
\item{} $\epsilon$-hermitian forms $\tilde h$ on $\O({\mathcal D})$ for the ``new involution" $d\mapsto d'$ on $\O({\mathcal D})$
given by $d':=X^{-1}\cdot \bar d\cdot X$. Such forms can be obtained
by multiplying the $\epsilon$-hermitian forms $h$ for the main
involution of \S \ref{exQHermitian} by $X$. Indeed, if $h$ is
$\epsilon$-hermitian for $d\mapsto \bar d$, then $ h\cdot X$ is
$(-\epsilon)$-hermitian for $d\mapsto d'$.
\end{itemize}
The automorphism groups of these forms (after specialization to
$F$) do not produce additional isogeny classes of reductive groups.
However, as we will see later, considering these forms is useful in
constructing certain symplectic embeddings and so these will appear
in our discussion of local models for Shimura varieties of PEL type.
\bigskip
\section{Loop groups and affine Grassmannians}\label{chapterLoop}
\setcounter{equation}{0}
In this chapter we define and show the (ind-)representability of the
various versions (``local'' and ``global'')
of the affine Grassmannian that we will use. We start
by showing a version of the descent lemma of Beauville-Laszlo
for ${\mathcal G}$-torsors.
\subsection{A descent lemma}\label{affGrass}
We continue with the same notations, so that $\O$ is a discrete
valuation ring with fraction field $F$ and perfect residue field
$k$. Let ${\mathcal G} \to X=\AA^1_\O={\rm Spec } (\O[u])$ be a smooth affine group
scheme over $\AA^1_\O$ with connected fibers. Suppose that $R$ is an
$\O$-algebra and denote by $r: {\rm Spec } (R)\to {\rm Spec } (\O[u])$ the
$R$-valued point of $\AA^1_\O$ given by $u\mapsto r\in R$. We can
identify the completion of $\AA^1_R={\rm Spec } (R[u])$ along the graph of
$r$ with ${\rm Spec } (R[[T]])$ using the local parameter $T=u-r$.
The following extends the descent lemma of Beauville-Laszlo \cite{BLdescente}.
\begin{lemma}\label{descentBL} There is a $1$-$1$ correspondence
between elements of ${\mathcal G}(R((T)))$ and triples $({\mathcal T},
\alpha, \beta)$, where ${\mathcal T}$ is a ${\mathcal G}$-torsor over $R[u]$
and $\alpha$, $\beta$ are trivializations of the torsors ${\mathcal
T}\otimes_{R[u]}R[T, T^{-1}]$ and ${\mathcal T}\otimes_{R[u]}R[[T]]$
respectively. The inverse of the correspondence associates to the
triple $({\mathcal T}, \alpha, \beta)$ the element
$(\alpha^{-1}\cdot \beta)(1)$.
\end{lemma}
\begin{proof}
In \cite {BLdescente}, this is proven when ${\mathcal G}={\rm GL}_n$. More
generally, \cite {BLdescente} shows how one can construct a
$T$-regular $R[u]$-module $M$ from a triple $(F, G, \phi)$ of an
$R[T,T^{-1}]$-module $F$, a $T$-regular $R[[T]]$-module $G$ and an
$R((T))$-isomorphism $\phi: R[[T]]\otimes_{R[u]} F\xrightarrow{\sim}
G[T^{-1}]$. Starting from $g\in {\mathcal G}(R((T)))$, we can apply this to
$F=B\otimes_{R[u]}R[T, T^{-1}]$, $G=B\otimes_{R[u]}R[[T]]$ and $\phi$
given by the (co)action of $g$, i.e.\ $\phi$ is the composition
$$
B\otimes_{\O[u]}R((T))\xrightarrow{ } (B\otimes_{\O[u]}R((T)))\otimes_{R((T))} (B\otimes_{\O[u]}R((T)))\xrightarrow{id\otimes g^*} B\otimes_{\O[u]}R((T)) .
$$
In this, $g^*$ is the $R[u]$-algebra homomorphism $B\to R((T))$
corresponding to $g\in {\mathcal G}(R((T)))$. Denote by $C$ the corresponding
$R[u]$-module obtained using \cite {BLdescente}. Notice that $\phi$
is an $R((T))$-algebra homomorphism; this allows us to deduce that
$C$ is an $R[u]$-algebra. \begin{comment}(Using Prop.
\ref{linearrep} (b), we can see that $C$ has a finite presentation
over $R[u]$.) Byt follows from descent anyway.
\end{comment}
Since $B$ is flat over $\O[u]$, by \cite {BLdescente} we see that
$C$ is also flat over $R[u]$. In fact, since by construction,
$C\otimes_{R[u]} R[[T]]\simeq B\otimes_{\O[u]}R[[T]]$, the map
${\rm Spec } (C)\to {\rm Spec } (R[u])$ is surjective and so $C$ is faithfully
flat over $R[u]$. For our choice of $\phi$, the algebra $C$ affords
a $B$-comodule structure $C\to B\otimes_{R[u]}C$ which base-changes
to the standard $B$-comodule structures on
$B\otimes_{\O[u]}R[T, T^{-1}]$ and $B\otimes_{\O[u]}R[[T]]$. Now set
${\mathcal T}={\rm Spec } (C)\to {\rm Spec } (R[u])$. By the above, $\mathcal T$
is a faithfully flat scheme with ${\mathcal G}$-action which is
${\mathcal G}$-equivariantly isomorphic to ${\mathcal G}$ over $R[[T]]$ and $R[T,
T^{-1}]$. We would like to conclude that
${\mathcal T}:={\rm Spec } (C)$ is the corresponding torsor.
Consider the map $m: {\mathcal G}\times_{{\rm Spec } (\O[u])} {\mathcal T}\to
{\mathcal T}\times_{{\rm Spec } (R[u])}{\mathcal T}$ given by $(g,
t)\mapsto (g\cdot t, t)$. It is enough to show that the
corresponding ring homomorphism
$$
m^*: C\otimes_{R[u]}C\xrightarrow{\ } B\otimes_{\O[u]}C
$$
is an isomorphism. Observe that $m^*$ is injective since
$m^*[T^{-1}]$ is an isomorphism. Also $m^*$ is surjective by
\cite[Lemme 2] {BLdescente} since the base-change $R[u]\to R[T,
T^{-1}]\times R[[T]]$ is faithful.
\end{proof}
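As a simple illustration of the lemma, take ${\mathcal G}={\mathbb G}_m$ and
$g=T^d\in R((T))^*$ with $d\in{\mathbb Z}$. At the level of modules, the
Beauville-Laszlo glueing of $F=R[T,T^{-1}]$ and $G=R[[T]]$ along
multiplication by $T^d$ produces the invertible $R[u]$-module
$$
C=\{f\in R[T,T^{-1}]\ |\ T^df\in R[[T]]\}=T^{-d}R[T]\subset R[T,T^{-1}],
$$
which, for $d\geq 0$, is the module of sections of the line bundle
${\mathcal O}(d\cdot\{T=0\})$ on $\AA^1_R$; its evident trivializations over
$R[T,T^{-1}]$ and $R[[T]]$ recover the triple of the lemma.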
\subsection{Affine Grassmannians}
\subsubsection{The local affine Grassmannian.}\label{LocalaffGrass}
If $S$ is an $\O$-scheme we will denote by $D_S$ the scheme given by
the completion of $X\times_\O S$ along the zero section $0: S\to
X\times_\O S$. We also set $D^0_S=D_S-\{0\}$. If $R$ is an
$\O$-algebra and $S={\rm Spec } (R)$ is affine, $D_S={\rm Spec } (R[[u]])$,
$D^0_S={\rm Spec } (R((u)))$.
Let ${\mathcal G} \to X=\AA^1_\O={\rm Spec } (\O[u])$ be a smooth affine group scheme with connected
fibers. If $R$ is an $\O$-algebra, we set $L{\mathcal G}(R)={\mathcal G}(R((u)))$ and $L^+{\mathcal G}(R)={\mathcal G}(R[[u]])$.
Since ${\mathcal G}$ is affine, we can see that $L{\mathcal G}$, resp. $L^+{\mathcal G}$, is
represented by an ind-affine scheme, resp. affine scheme, over $\O$.
We also consider the quotient fpqc sheaf ${\rm
Gr}_{{\mathcal G}}:=L{\mathcal G}/L^+{\mathcal G}$ on $({\rm Sch}/\O)$, i.e.\ the fpqc sheaf
associated to the presheaf $R\mapsto L{\mathcal G}(R)/L^+{\mathcal G}(R)$.
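For ${\mathcal G}={\rm GL}_n$ and a field $R=k$ over $\O$, this quotient recovers
the classical lattice description
$$
{\rm Gr}_{{\rm GL}_n}(k)=\{\,k[[u]]\text{-lattices }\Lambda\subset k((u))^n\,\},
$$
a lattice being a finitely generated $k[[u]]$-submodule with
$\Lambda\otimes_{k[[u]]}k((u))=k((u))^n$: the coset $g\,{\rm GL}_n(k[[u]])$
corresponds to $\Lambda=g\cdot k[[u]]^n$.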
\begin{prop} \label{identify} Let $S$ be a scheme over $\O$.
There are natural identifications
\begin{equation}\label{a}
{\rm Gr}_{{\mathcal G}}(S)= \biggl\{\,\text{iso-classes of pairs } ( \mathcal T, \alpha) \biggm|
\twolinestight{$\mathcal T$ a ${\mathcal G}$-torsor on $D_S$,}
{$\alpha$ a trivialization of $\mathcal T |_{ D^0_S}$}\,\biggr\}\, .
\end{equation}
\begin{equation}\label{fun1loc}
{\rm Gr}_{{\mathcal G}}(S)= \biggl\{\,\text{iso-classes of pairs } ( \mathcal E, \beta) \biggm|
\twolinestight{$\mathcal E$ a ${\mathcal G}$-torsor on $\AA^1_S$,}
{$\beta$ a trivialization of $\mathcal E |_{ \AA^1_S\setminus\{u=0\}}$}\,\biggr\}\, .
\end{equation}
\end{prop}
\begin{proof} The argument showing this can be found in
\cite{LaszloSorger}, see especially \cite[Prop. 3.10]{LaszloSorger}.
The crucial point is to observe that every ${\mathcal G}$-torsor ${\mathcal
T}$ over $R[[u]]$ can be trivialized over $R'[[u]]$ where $R\to R'$
is a faithfully flat extension. Indeed, ${\mathcal
T}\otimes_{R[[u]]}R$ has a section after such an extension $R\to
R'$; since ${\mathcal T}\to {\rm Spec } (R[[u]])$ is smooth this section
can be extended to a section over $R'[[u]]$. This shows the first
identification. The second identification now follows using the
descent lemma \ref{descentBL}.
\end{proof}
\subsubsection{}
Assume now in addition that ${\mathcal G}={\mathcal G}_x \to X=\AA^1_\O={\rm Spec } (\O[u])$
is a Bruhat-Tits group scheme in the sense of Theorem
\ref{grpschemeThm}.
\begin{prop}\label{indproper}
The sheaf ${\rm Gr}_{{\mathcal G}}$ is represented by an ind-projective
ind-scheme over $\O$.
\end{prop}
\begin{proof} For this we can appeal to the (somewhat sketchy) arguments of \cite{FaltingsLoops} (for the split case)
and to \cite{PappasRaTwisted} when the residue field $k$ is
algebraically closed. We give here a general proof by a different
argument.
We first show that ${\rm Gr}_{{\mathcal G}}$ is representable by an
ind-scheme of ind-finite type and separated over $X$. By Proposition
\ref{qaffine}, there is a closed group scheme immersion
${\mathcal G}\hookrightarrow {\rm GL}_n$ such that the quotient ${\rm GL}_n/{\mathcal G}$ is
representable by a quasi-affine scheme over $\O[u]$. The argument in
\cite[4.5.1]{BeilinsonDrinfeld} (or \cite[Appendix]{GaitsgoryInv},
cf. \cite{PappasRaTwisted}) now shows that the natural functor $
{\rm Gr}_{{\mathcal G}}\to {\rm Gr}_{{\rm GL}_n} $ is representable and is a
locally closed immersion. In fact, if the quotient ${\rm GL}_n/{\mathcal G}$ is
affine, this functor is a closed immersion. Now note that, as is
well-known (loc. cit.), the affine Grassmannian ${\rm Gr}_{{\rm GL}_n}$
is representable by an ind-scheme which is ind-projective over $\O$.
It remains to show that ${\rm Gr}_{{\mathcal G}}$ is ind-proper.
Assume first that $\underline G=H\otimes_{\mathbb Z} \O[u^{\pm 1}]$ is split.
Consider an alcove $C$ whose closure contains $x$. If $y$ is in the
interior of the alcove $C$, then an argument as in \ref{unique}
shows that there is a group scheme homomorphism ${\mathcal G}_y\to {\mathcal G}_x$
which induces an isomorphism ${\mathcal G}_y[u^{-1}]\xrightarrow{\sim}{\mathcal G}_x[u^{-1}]$. (${\mathcal G}_y$ is a group
scheme corresponding to an Iwahori subgroup.) Hence, the morphism
${\rm Gr}_{{\mathcal G}_y}=L{\mathcal G}_y/L^+{\mathcal G}_y\to {\rm
Gr}_{{\mathcal G}_x}=L{\mathcal G}_x/L^+{\mathcal G}_x$ is surjective and it is enough to show
that ${\rm Gr}_{{\mathcal G}_y}$ is ind-proper. Now observe that the closure
of the alcove $C$ always contains a hyperspecial point $x_0$; then
${\mathcal G}_{x_0}$ is reductive, ${\mathcal G}_{x_0}\simeq H\otimes_{\mathbb Z}\O[u]$. As in
the proof of Theorem \ref{grpschemeThm} the group scheme
homomorphism ${\mathcal G}_{y}\to {\mathcal G}_{x_0}$ identifies ${\mathcal G}_{y}$ with the
dilation of $H\otimes_{\mathbb Z}\O[u]$ along a Borel subgroup $B$ of the
fiber $H$ over $u=0$. This implies that the fpqc sheaf associated to
the presheaf $R\mapsto {\mathcal G}_{x_0}(R[[u]])/{\mathcal G}_y(R[[u]])$ is
representable by the smooth projective homogeneous space $Y:=H/B$
over $\O$. Hence, the morphism ${\rm Gr}_{{\mathcal G}_y}=L{\mathcal G}_y/L^+{\mathcal G}_y\to
{\rm Gr}_{{\mathcal G}_{x_0}}=L{\mathcal G}_{x_0}/L^+{\mathcal G}_{x_0}$ is an fppf fibration
with fibers locally isomorphic to $Y$; in particular it is a
projective surjective morphism. (In fact, we note here that, as in
\cite{FaltingsLoops}, we can see that the quotient morphism
$L{\mathcal G}_y\to {\rm Gr}_{{\mathcal G}_y}=L{\mathcal G}_y/L^+{\mathcal G}_y$ is an $L^+{\mathcal G}_y$-torsor
which splits locally for the Zariski topology.) Now recall that
since ${\mathcal G}_{x_0}$ is reductive, ${\mathcal G}_{x_0}\simeq H\otimes_{\mathbb Z}\O$,
there is a representation $H\hookrightarrow{\rm GL}_n$ with ${\rm GL}_n/H$ affine.
As above, we see then that ${\rm Gr}_{{\mathcal G}_{x_0}}\hookrightarrow {\rm
Gr}_{{\rm GL}_n}$ is a closed immersion, and that ${\rm Gr}_{{\mathcal G}_{x_0}}$
is ind-projective and also ind-proper.
Next we consider the general case. It is enough to prove that ${\rm
Gr}_{{\mathcal G}}$ is ind-proper over ${\rm Spec } (\O)$ after base changing by a
finite unramified extension $\O'/\O$. Therefore, by replacing $\O$
by $\O'$ we may assume that $\underline G$ is quasi-split and splits over
$\O[v^{\pm 1}]/\O[u^{\pm 1}]$. We now return to the notations of the
proof of Theorem \ref{grpschemeThm}. In particular, if $x$ is in
${\mathcal A}(\underline G_F, \underline S_F, F)={\mathcal A}(H, T_H, \tilde F)^\Gamma$, then
${\mathcal G}_x$ is the connected component of $({\rm
Res}_{\O[v]/\O[u]}{\mathcal H}_x)^{\gamma_0}$. By the argument in the last
part of that proof, we can find a $\gamma_0$-stable affine alcove
$C$ in the apartment ${\mathcal A}(H_{\tilde F}, T_H, \tilde F)$ such that $x$
belongs to the closure $\bar C$.
Denote by $y$ the barycenter of $C$ which is then fixed by $\gamma_0$.
Then ${\mathcal H}_y$ is an Iwahori group scheme and $\overline{\mathcal H}_y^{\rm
red}={\mathcal T}$ is the split torus over $\O$. An argument as in the
split case above, shows that it is enough to show that ${\rm
Gr}_{{\mathcal G}_y}$ is ind-proper. For simplicity, set ${\mathcal G}={\mathcal G}_y$,
${\mathcal H}={\mathcal H}_y$. There is an exact sequence of pointed sets
$$
{\mathcal H}(R((v)))^{\gamma_0}/{\mathcal H}(R[[v]])^{\gamma_0}\hookrightarrow
({\mathcal H}(R((v)))/{\mathcal H}(R[[v]]))^{\gamma_0} \xrightarrow{\delta}
{{\mathrm H}}^1(\Gamma, {\mathcal H}(R[[v]])).
$$
Now observe $({\mathcal H}(R((v)))/{\mathcal H}(R[[v]]))^{\gamma_0}={\rm
Gr}_{{\mathcal H}}(R)^{\gamma_0}={\rm Gr}_{{\mathcal H}}^{\gamma_0}(R)$ where ${\rm
Gr}_{{\mathcal H}}^{\gamma_0}$ is the closed ind-subscheme of ${\rm
Gr}_{{\mathcal H}}$ given by taking ${\gamma_0}$-fixed points. The kernel of
${\mathcal H}(R[[v]])\to \overline{{\mathcal H}}^{\rm red}(R)={\mathcal T}(R)$ is affine
pro-unipotent and we see that ${\mathrm H}^1(\Gamma,
{\mathcal H}(R[[v]]))={\mathrm H}^1(\Gamma, {\mathcal T}(R))$. Now, consider the closed
subgroup scheme $Q_{\mathcal T}$ of ${\mathcal T}$ of elements $x$ that satisfy
the equation $N(x)=\prod_{i=0}^{e-1}\gamma_0^i(x)=1$. We can see
that the sheaf $R\mapsto {\mathrm H}^1(\Gamma, {\mathcal T}(R))$ is given by the
quotient $Q_{\mathcal T}/{\mathcal T}^{\gamma_0-1}$. The map $\delta$ is given as
follows: starting with $x\in ({\mathcal H}(R((v)))/{\mathcal H}(R[[v]]))^{\gamma_0}$
we can find $h\in {\mathcal H}(R((v)))$ such that $h\gamma_0(h)^{-1}$ is in
${\mathcal H}(R[[v]])$. We set $\delta(x)=\overline{h\gamma_0(h)^{-1}}$
which is well-defined in $Q_{\mathcal T}/{\mathcal T}^{\gamma_0-1}$.
Using Proposition \ref{locconstant}
we see that $Q_{\mathcal T}/{\mathcal T}^{\gamma_0-1}$ is a finite \'etale
commutative group scheme $Q$ over $\O$ (of order that divides $e$).
The above exact sequence now gives that the sheaf associated to the
presheaf $R\mapsto {\mathcal H}(R((v)))^{\gamma_0}/{\mathcal H}(R[[v]])^{\gamma_0}$ is
represented by the fiber of the ind-scheme morphism $\delta: {\rm
Gr}_{{\mathcal H}}^{\gamma_0}\to Q$ over the identity section ${\rm Spec } (\O)\to
Q$. We conclude that the fpqc quotient
$L{\mathcal H}^{\gamma_0}/L^+{\mathcal H}^{\gamma_0}$ is represented by an ind-proper
ind-scheme over $\O$. To finish the proof recall that by
construction ${\mathcal G}$ is the neutral component of $({\rm
Res}_{\O[v]/\O[u]}{\mathcal H})^{\gamma_0}$. Using this, Corollary
\ref{locconstant} and the fact that $\gamma_0$-fixed points of
affine pro-unipotent groups are connected, we see that the sheaf
associated to
$$
R\mapsto ({\rm Res}_{\O[v]/\O[u]}{\mathcal H})^{\gamma_0}(R[[u]])/{\mathcal G}(R[[u]])=
{\mathcal H}(R[[v]])^{\gamma_0}/{\mathcal G}(R[[u]])
$$
is represented by the finite \'etale commutative group scheme of
connected components of ${\mathcal T} =\overline{{\mathcal H}}^{\rm red}$.
Therefore, ${\rm Gr}_{{\mathcal G}}$ given by $R\mapsto
{\mathcal H}(R((v)))^{\gamma_0}/{\mathcal G}(R[[u]])= {\mathcal G}(R((u)))/{\mathcal G}(R[[u]])$ is
represented by a finite \'etale cover of
$L{\mathcal H}^{\gamma_0}/L^+{\mathcal H}^{\gamma_0}$. As such it is also an
ind-proper ind-scheme over $\O$.
\end{proof}
\subsubsection{The global affine Grassmannian}\label{6b3}
We continue with the same assumptions, but for a little while we allow ${\mathcal G}$ to
be any smooth affine group
scheme over $X=\AA^1_\O$ with connected fibers.
Let $S\in ({\rm Sch}/X)$, with structure morphism $y: S\to X$. We
will denote by $\Gamma_y\subset X\times S$ the closed subscheme
given by the graph of $y$ and by $\hat\Gamma_y$ the formal
completion of $X\times S$ along $\Gamma_y$.
Suppose that $S={\rm Spec } (R)$ is affine. Then $\hat\Gamma_y$ is an
affine formal scheme and following
\cite[2.12]{BeilinsonDrinfeld} we can also consider the affine scheme $\hat\Gamma_y'$
given by the relative spectrum of the ring of regular functions on $\hat\Gamma_y$.
There is a natural closed immersion $\Gamma_y\to \hat\Gamma'_y$ and we will
denote by $\hat\Gamma_y^o:=\hat\Gamma'_y-\Gamma_y$ the complement of
the image. If $y: {\rm Spec } (R)\to X=\AA^1_\O$ is given by $u\mapsto y$,
we have $\Gamma_y\simeq {\rm Spec } (R[u]/(u-y))$, $\hat\Gamma'_y\simeq
{\rm Spec } (R[[w]])$. When $y=0$, $\hat\Gamma_y'=D_S$,
$\hat\Gamma_y^o=D^0_S$. We can see directly that there is a morphism
$\hat\Gamma'_y\to X\times S$ given by $R[u]\to R[[w]]$; $u\mapsto
w+y$. We will often abuse notation and write
$\hat\Gamma'_y=\hat\Gamma_y={\rm Spec } (R[[u-y]])$. Then
$\hat\Gamma_y^o={\rm Spec } (R[[u-y]][(u-y)^{-1}])$.
\subsubsection{}\label{6b4} We will now consider various functors on $({\rm Sch}/X)$.
These will be fpqc sheaves on $X$ that can be described by giving
their values on affine schemes over $X$.
First consider the functor that associates to an $\O[u]$-algebra $R$
(given by $u\mapsto y$) the group
\begin{equation}\label{globloop1}
\L{\mathcal G}(R)= {\mathcal G}(\hat\Gamma^o_y)={\mathcal G}(R[[u-y]][(u-y)^{-1}]).
\end{equation}
Since ${\mathcal G}\to {\rm Spec } (\O[u])$ is smooth and affine, $\L{\mathcal G}$ is
represented by a formally smooth ind-scheme over $X$.
Next consider the functor that associates to an $\O[u]$-algebra $R$
the group
\begin{equation}\label{globloop2}
\L^+{\mathcal G}(R)= {\mathcal G}(\hat\Gamma_y)={\mathcal G}(R[[u-y]]).
\end{equation}
We can see that $\L^+{\mathcal G}$ is represented by a scheme over $X$ (not
of finite type) which is formally smooth.
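For instance, for ${\mathcal G}={\mathbb G}_m$ this is explicit: a series
$\sum_{i\geq 0}a_i(u-y)^i$ lies in ${\mathbb G}_m(R[[u-y]])=R[[u-y]]^*$ exactly
when $a_0\in R^*$, so that
$$
\L^+{\mathbb G}_m\simeq {\rm Spec } (\O[u][a_0^{\pm 1}, a_1, a_2,\dots]),
$$
a scheme over $X$ which is not of finite type but is formally smooth.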
Finally define the global affine Grassmannian of ${\mathcal G}$ over $X$ to
be the functor on $({\rm Sch}/X)$ given by
\begin{equation}\label{fun1}
{\rm Gr}_{{\mathcal G}, X}(S)= \biggl\{\,\text{iso-classes of pairs } ( \mathcal E, \beta) \biggm|
\twolinestight{$\mathcal E$ a ${\mathcal G}$-torsor on $X\times S$,}
{$\beta$ a trivialization of $\mathcal E|_{ (X\times S)\setminus\Gamma_y}$}\,\biggr\}\, .
\end{equation}
Here and everywhere else the fiber products are over ${\rm Spec } (\O)$.
Similarly to Proposition \ref{identify}, the descent lemma
\ref{descentBL} implies that the natural map given by restriction
along $\hat\Gamma_y\to X\times S$
\begin{equation}\label{equivfun}
{\rm Gr}_{{\mathcal G}, X}(R)\xrightarrow{ \ } \biggl\{\,\text{iso-classes of pairs } ( \mathcal E, \beta) \biggm|
\twolinestight{$\mathcal E$ a ${\mathcal G}$-torsor on $\hat\Gamma_y$,}
{$\beta$ a trivialization of $\mathcal E |_{ \hat\Gamma_y^o}$}\,\biggr\}\,
\end{equation}
is a bijection for each $\O[u]$-algebra $R$. This provides an
alternative description of ${\rm Gr}_{{\mathcal G}, X}$. Using this
description, we can see that $\L{\mathcal G}$, $\L^+{\mathcal G}$ act on ${\rm Gr}_{{\mathcal G},
X}$ by changing the trivialization $\beta$.
\subsubsection{}\label{kappafibers} Suppose now that ${\mathcal G}$ is as in Theorem \ref{grpschemeThm}. Let $\kappa$ be either the fraction field $F$ or the residue field
$k$ of $\O$. Let $x: {\rm Spec } (\kappa)\to X$, where $\kappa$ is as
above and identify the completed local ring $\widehat{\mathcal O}_x$
of $X\times {\rm Spec } (\kappa)$ with $\kappa[[T]]$, using the local
parameter $T=u-x$. Let
\begin{equation}\label{basechangeG}
{\mathcal G}_{\kappa,x}:={\mathcal G}\times_{{\rm Spec } (\O[u])}{\rm Spec } (\kappa[[T]]).
\end{equation}
(i) Suppose that $x$ factors through $0: {\rm Spec } (\O)\to X$. Recall
that by Theorem \ref{grpschemeThm} the base change ${\mathcal G}_{\kappa,
0}$ can be identified with a Bruhat-Tits group scheme $P_\kappa
:=\P_{x_{\kappa((u))}}$ over the dvr $\kappa[[T]]$.
Let $L^+ P_\kappa$ be the affine group scheme over ${\rm Spec } (\kappa)$
representing the functor on $\kappa$-algebras
\begin{equation*}
R\mapsto L^+ P_\kappa (R) = P_\kappa ( R[[T]])\, ,
\end{equation*}
and $L P_\kappa$ the ind-group scheme over ${\rm Spec } (\kappa)$
representing the functor
\begin{equation*}
R\mapsto L P_\kappa (R) = P_\kappa \big(R((T))\big)=G_\kappa
\big(R((T))\big)\, .
\end{equation*}
Here $G_\kappa=P_\kappa[T^{-1}]$ (which is denoted by $\underline
G_{\kappa((u))}$ in Chapter \ref{groupscheme}) is the connected
reductive group over $\kappa((T))$ which is obtained by base
changing ${\mathcal G}\to {\rm Spec } (\O[u])$ along $\O[u]\to \kappa((T))$,
$u\mapsto T$. As in Proposition \ref{indproper} we see that there is
an ind-proper ind-scheme ${\rm Gr}_{P_\kappa}$ over $\kappa$ which
represents the quotient $L P_\kappa/L^+ P_\kappa$ of fpqc-sheaves on
$\kappa$-schemes. By Proposition \ref{identify}, ${\rm Gr}_{P_\kappa}$ is
the ind-scheme representing the functor
\[
R\mapsto {\rm Gr}_{P_\kappa} (R) = \biggl\{\,\text{iso-classes of pairs } (\mathcal E, \beta) \biggm|
\twolinestight{$\mathcal E$ a $P_\kappa$-torsor on ${\rm Spec } R[[T]]$,}
{$\beta$ a trivialization of $\mathcal E|_{{\rm Spec } R((T))}$}\,\biggr\} .
\]
The base change
${\rm Gr}_{P_\kappa}\times_{{\rm Spec } (\kappa)}{\rm Spec } (\bar\kappa)$
is an affine flag variety as in \cite{PappasRaTwisted}.
(ii) Suppose $x: {\rm Spec } (\kappa)\to X$ does not factor through $0:
{\rm Spec } (\O)\to X$. Then by Theorem \ref{grpschemeThm} (i), the base
change ${\mathcal G}_{\kappa, x}$ is a reductive group scheme which is a form
of $H$. We can see that
\begin{equation}
{\mathcal G}_{\kappa, x}\times_{{\rm Spec } (\kappa)}{\rm Spec } (\bar\kappa)\simeq
H\times_{{\rm Spec } (\O)}{\rm Spec } ( \bar \kappa[[T]]).
\end{equation}
As above, we also have the affine Grassmannian ${\rm Gr}_{{\mathcal G}_{\kappa,
x}}$ over $\kappa$; by the above, we can see that ${\rm Gr}_{{\mathcal G}_{\kappa,
x}}\times_{{\rm Spec } (\kappa)}{\rm Spec } (\bar\kappa)$ can be identified with
the usual affine Grassmannian ${\rm Gr}_H$ over $\bar\kappa$ for
the split reductive group $H$. The following observation now follows
from Proposition \ref{identify}:
\begin{prop} \label{globalGr}
Let $x: {\rm Spec } (\kappa)\to X$, where $\kappa$ is either the residue
field $k$ of $\mathcal O$, or the fraction field $F$ of $\mathcal
O$, and identify the completed local ring $\widehat{\mathcal O}_x$
of $X\times {\rm Spec } (\kappa)$ with $\kappa[[T]]$, using the local
parameter $T=u-x$. Then restricting ${\mathcal G}$-bundles from
$\O[u]\otimes_\O R$ to $R[[T]]$, $u\mapsto T+x$, induces an
isomorphism over ${\rm Spec } (\kappa)$,
\begin{equation*}
i_x^\ast \colon {\rm Gr}_{{\mathcal G}, X}\times_{X, x}
{\rm Spec } (\kappa)\xrightarrow{\sim} {\rm Gr}_{{\mathcal G}_{\kappa, x}}\, .
\end{equation*}
Here ${\rm Gr}_{{\mathcal G}_{\kappa, x}}=L{\mathcal G}_{\kappa, x}/L^+{\mathcal G}_{\kappa, x}$
denotes the affine Grassmannian over $\kappa$ as above; this is
isomorphic to either ${\rm Gr}_{P_\kappa}$ if $x$ maps to $0$, or to
${\rm Gr}_H$ over $\bar \kappa$ otherwise.
\end{prop}
Notice here that at this point we only consider ${\rm Gr}_{{\mathcal G}, X}$ as an fpqc sheaf over $X$.
However, using the next proposition we will soon see that these are
actually isomorphisms of
ind-schemes.
Remark here that the above proposition combined with Proposition \ref{indproper}
already shows that the
fiber of ${\rm Gr}_{{\mathcal G}, X}$ over $x: {\rm Spec } (\kappa)\to X$ is represented by an
ind-scheme which is ind-projective over ${\rm Spec } (\kappa)$.
\begin{prop}\label{indscheme} Suppose that ${\mathcal G}$ is as in Theorem \ref{grpschemeThm}.
The functor ${\rm Gr}_{{\mathcal G}, X}$ on $({\rm Sch}/X)$ is representable
by an ind-projective ind-scheme over $X$.
\end{prop}
\begin{proof} We first show that ${\rm Gr}_{{\mathcal G}, X}$ is representable by an ind-scheme
of ind-finite type and separated over $X$. This follows the corresponding argument in the proof of Proposition
\ref{indproper}. By Proposition \ref{qaffine}, there is a closed
group scheme immersion ${\mathcal G}\hookrightarrow {\rm GL}_n$ such that the
quotient ${\rm GL}_n/{\mathcal G}$ is representable by a quasi-affine scheme over
$\O[u]$. The argument in \cite{BeilinsonDrinfeld} (or
\cite[Appendix]{GaitsgoryInv}) now shows that the natural functor $
{\rm Gr}_{{\mathcal G}, X}\to {\rm Gr}_{{\rm GL}_n, X} $ is representable and is a
locally closed immersion. In fact, if the quotient ${\rm GL}_n/{\mathcal G}$ is
affine, this functor is a closed immersion. Now note that ${\rm
Gr}_{{\rm GL}_n, X}$ is representable by an ind-scheme separated of
ind-finite type over $X$. This is well-known (see for example
\cite{BeilinsonDrinfeld}). In fact, ${\rm Gr}_{{\rm GL}_n, X}$ is
ind-projective over $X$.
It remains to show that ${\rm Gr}_{{\mathcal G}, X}\to X$ is ind-proper. By
Propositions \ref{indproper} and \ref{globalGr} each fiber of
${\rm Gr}_{{\mathcal G}, X}\to X$ is ind-proper. It is enough to show that the
base change by $\tilde X={\rm Spec } (\tilde\O_0[v])\to X$ is ind-proper.
Notice that, since $\tilde X-\{0\}\to X-\{0\}$ is finite \'etale,
there is an isomorphism ${\rm Gr}_{{\mathcal G}, X}\times_X (\tilde X-\{0\})\simeq
{\rm Gr}_{H, X}\times_X (\tilde X-\{0\})={\rm Gr}_H\times_{\O}(\tilde
X-\{0\})$ (cf. \cite[Lemma 3.3]{ZhuCoherence}, here again $H$ is the
split Chevalley form). Therefore, by Proposition \ref{indproper}
applied to ${\rm Gr}_H$, we see that
the restriction of ${\rm Gr}_{{\mathcal G}, X}\to X$ over $U=(\tilde X-\{0\})\otimes_\O F$ is ind-proper.
We can write this restriction as a limit $S_i$ of proper schemes
over $U$. In fact, using standard results on the structure of the
affine Grassmannians ${\rm Gr}_H$ over the field $F$
(\cite{GaitsgoryInv}, \cite{FaltingsLoops}, \cite{PappasRaTwisted})
we can assume that $S_i=\sqcup_j S_{ij}$ with $S_{ij}$ proper
schemes over $U$ with geometrically connected fibers. Denote by
$Y_{ij}$, resp. $Z_{ij}$, the Zariski closures of $S_{ij}$ in ${\rm
Gr}_{{\mathcal G}, X}\times_X \tilde X$, resp. ${\rm Gr}_{{\rm GL}_n, X}\times_X
\tilde X$. Since ${\rm Gr}_{{\rm GL}_n, X}\to X$ is ind-proper,
$Z_{ij}\to \tilde X$ is proper. Since ${\rm Gr}_{{\mathcal G}, X}\to {\rm
Gr}_{{\rm GL}_n, X}$ is a locally closed immersion, $Y_{ij}$ is open and
dense in $Z_{ij}$. Denote by a bar the fibers at a closed point of $\tilde
X$. It is enough to show that we always have $\bar Y_{ij}=\bar Z_{ij}$.
Since all the fibers of ${\rm Gr}_{{\mathcal G}, X}\to X$ are ind-proper, $\bar
Y_{ij}$ is proper and so $\bar Y_{ij}$ is closed in $\bar Z_{ij}$.
By Zariski's main theorem applied to $Z_{ij}\to \tilde X$, we see
that $\bar Z_{ij}$ is connected and so $\bar Y_{ij}=\bar Z_{ij}$.
Hence, $Y_{ij}=Z_{ij}$ and $Y_{ij}\to \tilde X$ is proper. It
remains to see that each point of each fiber of $ {\rm Gr}_{{\mathcal G},
X}\times_X \tilde X\to \tilde X$ belongs to some $Y_{ij}$. This
lifting property
can be seen by the argument in the proof of Proposition \ref{Prop8.8}.
\end{proof}
\subsubsection{Specialization along $u=\varpi$.}
Now let us fix a uniformizer $\varpi$ of $\O$. We denote by $\varpi$
the section of $X=\AA^1_\O$ over $\O$ defined by $u\mapsto\varpi$.
Let $G$ be connected reductive over $F$, split over a tamely
ramified extension $\tilde F/F$ as in \ref{sss1a2}; let $\underline G$ be
constructed from $G$ as in \S \ref{reductive group}. In addition, we
fix an isomorphism $\underline G_F\simeq G$ from a rigidification of $G$
as explained in \ref{sss3a4}. This produces a group scheme
${\mathcal G}:={\mathcal G}_x$ as in Corollary \ref{application}, which is
independent of the choice of the rigidification of $G$ up to
isomorphism.
Notice that there is an isomorphism
\begin{equation}\label{poweriso}
\tilde\O_0[v^{\pm 1}]\otimes_{\O[u^{\pm
1}]}F[[u-\varpi]]\xrightarrow{\sim} \tilde F[[z]]=\tilde
F[[u-\varpi]],
\end{equation}
given by $v\mapsto \tilde\varpi \cdot (1+z)$ where $\tilde\varpi^e=\varpi
$. Here $z$ maps to the power series
$(1+\frac{(u-\varpi)}{\varpi})^{1/e}-1$, where the $e$-th root is
expressed by using the standard binomial formula. This isomorphism
matches the action of $\Gamma$ on the left hand
side (coming from the cover $\O[u]\to \tilde\O_0[v]$ by base
change), with the action on $\tilde F[[z]]$ given by the Galois
action on the coefficients $\tilde F$. Using this and the
construction of the group scheme $\underline G$ in \S \ref{reductive
group} we obtain an isomorphism
\begin{equation}\label{iso6.9}
{\mathcal G}_{F, \varpi}\xrightarrow{\sim} G\times_{{\rm Spec } (F)}{\rm Spec } (
F[[u-\varpi]])
\end{equation}
well defined up to $G(F)$-conjugation.
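For completeness, let us verify the power series expression for $z$
in (\ref{poweriso}): recalling that $u=v^e$ on the cover and that
$\tilde\varpi^e=\varpi$, we obtain
\begin{equation*}
u=v^e=\tilde\varpi^e(1+z)^e=\varpi(1+z)^e, \qquad\text{hence}\qquad
z=\Big(1+\frac{u-\varpi}{\varpi}\Big)^{1/e}-1
=\sum_{k\geq 1}\binom{1/e}{k}\Big(\frac{u-\varpi}{\varpi}\Big)^k,
\end{equation*}
where $\binom{1/e}{k}=\frac{(1/e)(1/e-1)\cdots(1/e-k+1)}{k!}$ is the
generalized binomial coefficient appearing in the standard binomial
formula.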
Denote by ${\rm Gr}_{{\mathcal G}, \O}$ the fiber product
\begin{equation}
{\rm Gr}_{{\mathcal G}, \O}:={\rm Gr}_{{\mathcal G}, X}\times_{X, \varpi} {\rm Spec } (\O)\to
{\rm Spec } (\O).
\end{equation}
Using Proposition \ref{indscheme} we see that this is an
ind-projective ind-scheme
over ${\rm Spec } (\O)$.
Proposition \ref{globalGr} and the discussion in
the beginning of \S \ref{kappafibers} imply:
\begin{cor}\label{fibers}
1) The generic fiber ${\rm Gr}_{{\mathcal G},
\O}\times_{{\rm Spec } (\O)}{\rm Spec } (F)$ is equivariantly isomorphic to the
affine Grassmannian ${\rm Gr}_{G, F}$ of $G$ over ${\rm Spec } (F)$.
2) The special fiber ${\rm Gr}_{{\mathcal G}, \O}\times_{{\rm Spec } (\O)}{\rm Spec } (k)$
is equivariantly isomorphic to the affine Grassmannian ${\rm Gr}_{P_k}$
over ${\rm Spec } (k)$.\endproof
\end{cor}
\subsubsection{Notation}
From now on, ${\mathcal G}$ will denote the group scheme of Corollary
\ref{application}. In general, if $f: S\to X$ is a scheme morphism,
we set
\begin{equation*}
{\rm Gr}_{{\mathcal G}, S, f}:={\rm Gr}_{{\mathcal G}, X}\times_{X, f} S
\end{equation*}
for the fiber product. If $S={\rm Spec } (R)$ with $f$ given by $u\mapsto
r\in R$, we will simply write ${\rm Gr}_{{\mathcal G}, R, r}$ instead of
${\rm Gr}_{{\mathcal G}, S, f}$. We will omit $f$ from the notation, when its
choice is clear. In addition, if $R$ is an $\O$-algebra and
$\O[u]\to R$ is given by $u\mapsto \varpi$, we will simply write
${\rm Gr}_{{\mathcal G}, R}$ instead. This agrees with our use of notation
${\rm Gr}_{{\mathcal G}, \O}$ above.
\bigskip
\section{Local models}\label{ChLocal}
\setcounter{equation}{0}
Here we give our group-theoretic definition of local models. We also
explain how, in the examples of ${\rm GL}_n$, ${\rm GSp}_{2n}$ and a
minuscule coweight, it follows from \cite{GortzFlatGLn} and
\cite{GortzSymplectic} that these agree with the local models of
\cite{RapZinkBook}. This last result will be generalized in the next
chapter.
\subsection{Generalized local models}
\subsubsection{Cocharacters.}\label{coch} We continue with the above assumptions and notations.
Suppose now that $\{\mu\}$ is a geometric conjugacy class of one
parameter subgroups of $G$, defined over an algebraic closure
$\overline F$ of $F$ that contains the field $\tilde F$. Let $E$ be
the field of definition of $\{\mu\}$, a finite extension of $F$
contained in $\overline F$ (the reflex field of the pair $(G, \{
\mu\})$).
First observe that since $G$ is quasi-split over the maximal
unramified extension $\tilde F_0$ of $F$ in $\tilde F$ we can find
(\cite[Lemma (1.1.3)]{KottTwisted}) a representative of $\{\mu\}$
defined over $E'=E\tilde F_0$, which factors as $\mu: {{\mathbb G}_{\rm m}}_{E'}\to
T_{E'}\to G_{E'}$, where $T$ is the maximal torus of $G$ given as in
\ref{sss1a3}.
Notice that $\mu$ gives an
$E'[z,z^{-1}]$-valued point of $G_{E'}$, therefore an
$E'((z))$-valued point of $G_{E'}$, therefore an $E'$-valued point
of the loop group $LG $. By (\ref{iso6.9}) we have an isomorphism
$$
G(F((z)))\xrightarrow{\sim}
{\mathcal G}_{F,\varpi}(F((u-\varpi)))={\mathcal G}_{F,\varpi}(F((t))).
$$
We denote by $s_\mu$ the corresponding $E'$-valued point in
$L{\mathcal G}_{F, \varpi}$.
\subsubsection{Schubert varieties in mixed characteristic.}\label{7a2}
We would like to define
a projective scheme $M_{{\mathcal G}, \mu }$ over $\O_E$ which we might view
as a generalized local model. Recall the definition of $s_\mu\in
L{\mathcal G}_{F, \varpi}(E')$ and consider the $L^+{\mathcal G}_{F, \varpi}$-orbit
$(L^+{\mathcal G}_{F, \varpi})_{E'}\cdot [s_\mu]$ of the corresponding point
$[s_\mu]$ in the affine Grassmannian $(L{\mathcal G}_{F, \varpi}/L^+{\mathcal G}_{F,
\varpi})\times_F E'$. This orbit is contained in a projective
subvariety of $(L{\mathcal G}_{F, \varpi}/L^+{\mathcal G}_{F, \varpi})\times_F E'$ which by
Corollary \ref{fibers} (1) above can be identified with the generic
fiber of ${\rm Gr}_{{\mathcal G}, \O}\otimes_\O\O_{E'}\to {\rm Spec } (\O_{E'})$. Since
the conjugacy class of $\mu: {{\mathbb G}_{\rm m}}_{E'}\to G_{E'}$ is defined over
$E$, the same is true for the orbit $(L^+{\mathcal G}_{F, \varpi})_{E'}\cdot
[s_\mu]$: There is an $E$-subvariety $X_\mu$ of $(L{\mathcal G}_{F,
\varpi}/L^+{\mathcal G}_{F, \varpi})\times_F E$ such that
$X_\mu\times_EE'=(L^+{\mathcal G}_{F, \varpi})_{E'}\cdot [s_\mu]$.
\begin{Definition}
The generalized local model (or mixed characteristic Schubert
variety) $M_{{\mathcal G}, \mu}$ is the reduced scheme over ${\rm Spec } (\O_E)$
which underlies the Zariski closure of the orbit $X_\mu$ in the
ind-scheme ${\rm Gr}_{{\mathcal G}, \O_E}={\rm Gr}_{{\mathcal G},
\O}\times_{{\rm Spec } (\O)}{\rm Spec } (\O_E)$.
\end{Definition}
\smallskip
\subsection{Some examples}\label{ShimuraLocal}
\subsubsection{} \label{lattice}
{\sl The case of ${\rm GL}_{N}$.} Recall the notations of \S \ref{exGL}.
In particular, ${\mathcal G}$ is the group scheme over $\O[u]$ associated to
the lattice chain $\{W_i\}_i$.
Consider the functor ${\mathfrak L}$ on $({\rm Sch}/X)$ which to an
$X$-scheme $y: S\to X$, associates the set of isomorphism classes of
collections $({\mathcal E}_i, \psi_i, \alpha_i)_{i\in {\mathbb Z}}$ where, for each
$i\in {\mathbb Z}$, ${\mathcal E}_i$ are locally free coherent $\O_{X\times
S}$-sheaves on $X\times S$ of rank $N$, $\psi_i : {\mathcal E}_{i+1}\to {\mathcal E}_{i
}$ are $\O_{X\times S}$-module homomorphisms, and $\alpha_i$ are
$\O_{{X\times S}-\Gamma_y}$-module isomorphisms $\alpha_i:
W_i\otimes_{\O[u]}\O_{{X\times S}-\Gamma_y}\xrightarrow{\sim}
{\mathcal E}_{i}\otimes_{\O_{X\times S}}\O_{{X\times S}-\Gamma_y} $
that satisfy the
following conditions:
(a) the data are periodic of period $r$, $({\mathcal E}_i, \psi_i,
\alpha_i)=( {\mathcal E}_{i+r}, \psi_{i+r}, \alpha_{i+r})$, for all $i\in
{\mathbb Z}$,
(b) we have $\alpha_{i+1 }\cdot \psi_{i} =\alpha_i\cdot \iota_i$,
for all $i\in {\mathbb Z}$,
(c) each composition of $r$ successive $\psi_i$ is given by
multiplication by $u$, i.e.,
$$
\prod_{k=0}^{r-1}\psi_{i-k}=u :{\mathcal E}_{i+r}={\mathcal E}_{i}\to {\mathcal E}_{i},
$$
for all $i\in {\mathbb Z}$, and,
(d) for each $i\in {\mathbb Z}$, the cokernel ${\mathcal E}_{i}/\psi_{i}({\mathcal E}_{i+1})$ is
a locally free $\O_S$-module of rank $r_i$.
\smallskip
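For orientation, consider the simplest case $r=1$ with $W_0=\O[u]^N$
(a standard special case, not needed in the sequel): then
periodicity forces ${\mathcal E}_i={\mathcal E}_0$ for all $i$, condition (c) says that
$\psi_0$ is multiplication by $u$, and the data reduce to the pair
$({\mathcal E}_0, \alpha_0)$. Thus ${\mathfrak L}$ becomes the functor
\[
S\mapsto \biggl\{\,\text{iso-classes of pairs } ({\mathcal E}, \alpha) \biggm|
\twolinestight{${\mathcal E}$ a rank $N$ vector bundle on $X\times S$,}
{$\alpha$ a trivialization of ${\mathcal E}|_{X\times S-\Gamma_y}$}\,\biggr\},
\]
i.e., the Beilinson-Drinfeld affine Grassmannian of ${\rm GL}_N$ over
$X$.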
We can see that ${\mathfrak L}$ is an fpqc sheaf on $({\rm
Sch}/X)$.
When $S={\rm Spec } (R)$ is affine, and $y: {\rm Spec } (R)\to X={\rm Spec } (\O[u])$ is
given by $u\mapsto r$, we have $X\times
S-\Gamma_y={\rm Spec } (R[u][(u-r)^{-1}])$. Since $u-r$ is not a zero
divisor in $R[u]$ we can use $\alpha_i$ to identify ${\mathcal E}_i$ with the
sheaf corresponding to an $R[u]$-locally free rank $N$ submodule
$E_i$ of $R[u][(u-r)^{-1}]^N$.
\smallskip
We can now show:
\begin{prop}\label{latGL}
There is a natural equivalence of functors ${\rm Gr}_{{\mathcal G},
X}\xrightarrow{\sim} {\mathfrak L}$ where ${\mathcal G}$ is the group scheme
as above.
\end{prop}
\begin{proof}
Observe that a ${\mathcal G}$-torsor ${\mathcal T}$ over $X\times S$ induces via
${\mathcal G}\hookrightarrow \prod_{i=0}^r{\rm GL}(W_i)\to {\rm GL}(W_i)$ a
${\rm GL}(W_i)$-torsor over $X\times S$. This amounts to giving a locally
free coherent $\O_{X\times S}$-sheaf ${\mathcal E}_i$ of rank $N$; since ${\mathcal G}$
respects the maps $W_{i+1}\to W_{i}$, we obtain $\psi_i:
{\mathcal E}_{i+1}\to {\mathcal E}_{i}$. A ${\mathcal G}$-trivialization of ${\mathcal T}$ over $X\times
S-\Gamma_y$ produces isomorphisms $\alpha_i$ as above. We extend
this data by periodicity; then (a), (b), (c), (d) are satisfied.
This gives the arrow ${\rm Gr}_{{\mathcal G}, X}\to {\mathfrak L}$. To show
that this is an equivalence, start with data $({\mathcal E}_i, \psi_i,
\alpha_i)_{i\in {\mathbb Z}}$ giving an element of ${\mathfrak L}(S)$. We
would like to show that these are produced by a ${\mathcal G}$-torsor ${\mathcal T}$
with a trivialization over $X\times S-\Gamma_y$. It is enough to
assume that $S$ is affine, $S={\rm Spec } (R)$. Since ${\mathcal G}$ is the
subgroup of $\prod_{i\in {\mathbb Z}/r{\mathbb Z}}{\rm GL}(W_i)$ that respects $\iota_i$ we
can now see that it is enough to show the following: Locally for
the Zariski topology on $R$, there are isomorphisms
$$
\lambda_i: W_{i}\otimes R[u]\xrightarrow {\sim} {\mathcal E}_i
$$
such that $\lambda_{i}\cdot \iota_{i}=\psi_i\cdot \lambda_{i+1}$.
This follows by an argument similar to the proof of \cite[Appendix,
Prop. A.4]{RapZinkBook}.
\end{proof}
Now if $\mu:{{\mathbb G}_{\rm m}} \to {\rm GL}_N$ is the minuscule coweight given by
$a\mapsto (a^{(d)}, 1^{(N-d)})$ and $\O=W({\mathbb F}_p)$, we can see, using
Proposition \ref{latGL}, that in this situation, the local models
$M_{{\mathcal G}, \mu}$ agree with the Rapoport-Zink local models for ${\rm GL}_N$
and $\mu$ considered in \cite{RapZinkBook}. Indeed, in this case, by
\cite{GortzFlatGLn}, the local models of \cite{RapZinkBook} are
flat over $\O$ and so they agree with the $M_{{\mathcal G}, \mu}$ above.
\smallskip
\subsubsection{} \label{latticeGSp}
{\sl The case of ${\rm GSp}_{2n}$.} Recall the notations of \S \ref{exGSp}.
In particular, ${\mathcal G}$ is the group
scheme over $\O[u]$ associated to the self-dual lattice chain
$\{W_i\}_i$.
Consider the functor ${\mathfrak {LSP}}$ on $({\rm Sch}/X)$ which to
an $X$-scheme $y: S\to X$, associates the set of isomorphism classes
of collections $({\mathcal E}_i, \psi_i, \alpha_i, h_i)_{i\in {\mathbb Z}}$ where
$({\mathcal E}_i, \psi_i, \alpha_i)_{i\in {\mathbb Z}}$ give an object of
$\mathfrak L$ and in addition
$$
h_i : {\mathcal E}_i\times {\mathcal E}_{-i-a}\to \O_{X\times S}
$$
are perfect $\O_{X\times S}$-bilinear forms that satisfy
\begin{itemize}
\item[a)] $h_{i}(\psi_i(x), y)=h_{i+1}(x, \psi_{-i-1-a}(y))$, for $x\in {\mathcal E}_{i+1}$, $y\in {\mathcal E}_{-i-a}$.
\item[b)] There is $c\in \O_{{X\times S}-\Gamma_y}^*$, such that $h_i\cdot (\alpha_i, \alpha_{-i-a})=c\cdot h$
for all $i\in {\mathbb Z}$ (as forms $W_i\otimes\O_{{X\times
S}-\Gamma_y}\times W_{-i-a}\otimes\O_{{X\times S}-\Gamma_y}\to
\O_{{X\times S}-\Gamma_y}$).
\end{itemize}
An argument similar to the one above (cf. \cite[Appendix, Prop.
A.21]{RapZinkBook}) now gives
\begin{prop}
There is a natural equivalence of functors ${\rm Gr}_{{\mathcal G},
X}\xrightarrow{\sim} {\mathfrak {LSP}}$ where ${\mathcal G}$ is the
(symplectic) group scheme as above.\endproof
\end{prop}
Again as a result of the above, combined with the flatness result of
\cite{GortzSymplectic}, we can see that if $\mu:{{\mathbb G}_{\rm m}} \to {\rm
GSp}_{2n}$ is the standard minuscule coweight given by $a\mapsto
(a^{(n)}, 1^{(n)})$ and $\O=W({\mathbb F}_p)$, then the local models
$M_{{\mathcal G}, \mu}$ in this situation agree with the local models for
${\rm GSp}_{2n}$ considered in \cite{RapZinkBook}.
\smallskip
\subsubsection{} One can find a similar interpretation of ${\rm Gr}_{{\mathcal G}, X}$ as
moduli spaces of chains of bundles with additional structure given
by suitable forms in more cases as in \S \ref{exClassical}, for
example when $G$ is an orthogonal group or a (ramified) unitary
group. We will leave the details to the reader. A corresponding
statement comparing the local models $M_{{\mathcal G}, \mu}$ with the local
models in the theory of PEL Shimura varieties (\cite{RapZinkBook},
\cite{GortzSymplectic}, \cite{GortzFlatGLn}, \cite{PappasRaI},
\cite{PappasRaII}, \cite{PappasRaIII}, \cite{PRS}) will be explained
in the next paragraph.
\bigskip
\section{Shimura varieties and local models} \label{Shimura}
\setcounter{equation}{0}
Here we discuss Shimura varieties and their integral models over
primes where the level subgroup is parahoric. We conjecture that
there exist integral models that fit in a ``local model diagram'' in
which the local model is given by our construction in the previous
chapter. We show this in most cases of Shimura varieties of PEL
type. We also explain how Theorem \ref{thmPEL}
of the introduction follows from our main structural results
on local models (which will be shown in the next section).
\subsection{The local model diagram}\label{8a}
Let $Sh_{\bold K} = Sh ({\bold G}, \{h\}, {\bold K})$ denote a
Shimura variety \cite{DeligneTravauxShimura} attached to the triple
consisting of a {\sl connected} reductive group $\bold G$ over
$\ensuremath{\mathbb{Q}}\xspace$, a family of Hodge structures $h$ and a compact open subgroup
$\bold K\subset \bold G(\ensuremath{\mathbb{A}}\xspace_f)$. We fix a prime number $p$ and
assume that $\bold K$ factorizes as $\bold K = K^p\cdot K_p\subset
{\bold G}(\ensuremath{\mathbb{A}}\xspace_f^p)\times {\bold G} (\ensuremath{\mathbb{Q}}\xspace_p)$. We assume in addition
that $K=K_p$ is a parahoric subgroup of $\bold G (\ensuremath{\mathbb{Q}}\xspace_p)$, i.e.\ it
corresponds to the connected stabilizer of a vertex of the
Bruhat-Tits building of $\bold G\otimes_{{\mathbb Q}}\ensuremath{\mathbb{Q}}\xspace_p$. We denote by
${\mathcal P}$ the corresponding Bruhat-Tits group scheme over ${\mathbb Z}_p$.
Let ${\bold E}\subset\ensuremath{\mathbb{C}}\xspace$ denote the reflex field of $({\bold G},
\{h\})$, i.e. the field of definition of the geometric conjugacy
class of one-parameter subgroups $\{\mu\} = \{\mu_h\}$ attached to
$\{h\}$, cf.~\cite{DeligneTravauxShimura}. Then ${\bold E}$ is a finite
extension of $\ensuremath{\mathbb{Q}}\xspace$. Fixing an embedding
$\overline{\ensuremath{\mathbb{Q}}\xspace}\to\overline{\ensuremath{\mathbb{Q}}\xspace}_p$ determines a place $\wp$ of
${\bold E}$ above $p$. We denote by the same symbol the canonical model
of $Sh_{\bold K}$ over ${\bold E}$ and its base change to ${\bold E}_{\wp}$. For
simplicity, set $E={\bold E}_{\wp}$ and denote by $\O_E$ the ring of
integers of $E$ and by $k_E$ its residue field. It is then an
interesting problem to define a suitable model $\mathcal S_{\bold
K}$ of $Sh_{\bold K}$ over ${\rm Spec } (\O_E)$. Such a model should be
projective if $Sh_{\bold K}$ is (which is the case when $\bold
G_{\rm ad}$ is $\ensuremath{\mathbb{Q}}\xspace$-anisotropic), and should always have manageable
singularities. In particular, it should be flat over ${\rm Spec } (\O_E)$,
and its local structure should only depend on the localized group $G
= \bold G\otimes_{\ensuremath{\mathbb{Q}}\xspace}\ensuremath{\mathbb{Q}}\xspace_p$, the geometric conjugacy class
$\{\mu\}$ over $\overline{\ensuremath{\mathbb{Q}}\xspace}_p$, and the parahoric subgroup $K =
K_p$ of $G (\ensuremath{\mathbb{Q}}\xspace_p)$. Note that, due to the definition of a Shimura
variety, the conjugacy class $\{\mu\}$ is minuscule.
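Recall that {\sl minuscule} means that
\begin{equation*}
\langle \alpha, \mu \rangle \in \{-1, 0, 1\} \qquad \text{for every
root $\alpha$ of $G$ over $\overline{\ensuremath{\mathbb{Q}}\xspace}_p$;}
\end{equation*}
this condition depends only on the geometric conjugacy class
$\{\mu\}$.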
Suppose now in addition that the group $G$ splits over a tamely
ramified extension of ${\mathbb Q}_p$.
We can then apply the constructions of the previous paragraphs to $G
$, $\O={\mathbb Z}_p$, and the vertex of the building ${\mathcal B}(G ,
\ensuremath{\mathbb{Q}}\xspace_p)$ that corresponds to $K \subset G(\ensuremath{\mathbb{Q}}\xspace_p)$. By Theorem
\ref{grpschemeThm}, we obtain a smooth affine group scheme ${\mathcal G} \to
{\rm Spec } ({\mathbb Z}_p[u])$; the choice of $\{\mu \}$ allows us to give a
projective scheme $M_{{\mathcal G} , \mu }\to {\rm Spec } (\O_E)$. Let us set
$$
{\rm M}(G, \{\mu\})_K=M_{{\mathcal G} , \mu }.
$$
By its construction,
${\rm M}(G, \{\mu\})_K$ affords an action of the group scheme
${\mathcal G}\otimes_{{\mathbb Z}_p[u],u\mapsto p}\O_E=\P\otimes_{{\mathbb Z}_p}\O_E$. The
conjecture is that there exists a model $\mathcal S_{\bold K}$
of the Shimura variety over $\O_{E}$ whose singularities are
``described by the local model ${\rm M}(G, \{\mu\})_K$''. More
precisely:
We conjecture that there exists such a $\mathcal S_{\bold K}$ that affords
a {\it local model diagram}
\begin{equation}\label{locmoddiagram}
\xymatrix{
& {\widetilde{\mathcal S}}_{\bold K}\ar[ld]_{\pi}\ar[rd]^{\widetilde{\varphi}} & \\
{\quad\quad \mathcal S_{\bold K} \quad\quad} & & {{\rm M}(G,
\{\mu\})_K}\, ,
}
\end{equation}
of $\mathcal O_{E}$-schemes, in which:
\begin{itemize}
\item $\pi$ is a torsor under the group $\mathcal P_{\O_E}=\mathcal P\otimes_{\ensuremath{\mathbb{Z}}\xspace_p}\mathcal O_{E}$,
\item $\widetilde{\varphi}$ is smooth of relative dimension $\dim G$.
\end{itemize}
(Equivalently, using the language of algebraic stacks, there should
be a smooth morphism of algebraic stacks
\begin{equation*}
\varphi: {\mathcal S}_{\bold K} \to [{\rm M}(G, \{\mu\})_K/\mathcal
P_{\O_E}]
\end{equation*}
of relative dimension $\dim G$, where in the brackets we have the
stack quotient. See also \cite{PRS}.)
The existence of the diagram implies the following: Suppose $x$ is
a point of
$\mathcal S_{\bold K} $ with values in the finite field $\ensuremath{\mathbb{F}}\xspace_q$. By Lang's theorem
the $\P_{\O_E}$-torsor $\pi$ splits over $x$, and so there is $\tilde x\in \widetilde{\mathcal S}_{\bold K}(\ensuremath{\mathbb{F}}\xspace_q)$
with image $y=\widetilde{\varphi}(\tilde x)\in {\rm M} (G, \{\mu\})_K
({\ensuremath{\mathbb{F}}\xspace}_q)$ such that the henselizations of $ \mathcal S_{\bold K} $
at $x$ and of ${\rm M}(G,\{\mu\})_K$ at $y$ are isomorphic. The
$\mathcal P_{\O_E}$-orbit of $y$ in ${\rm M}(G,\{\mu\})_K$ is
well-defined.
\subsubsection{} Now suppose that there is a closed group
scheme immersion $\rho: {\bold G}\hookrightarrow {\rm GSp}_{2n}$
such that the composition of $\rho$ with $\mu$ is in the conjugacy
class of the standard minuscule cocharacter of ${\rm GSp}_{2n}$. For
typesetting simplicity, set $k=\overline{{\mathbb F}}_p=\bar k_E$. We will
also assume that there is a self-dual ``lattice'' chain
$W_\bullet=\{W_i\}_{i\in {\mathbb Z}}$ in ${\mathbb Z}_p[u]^{2n}$ as in \S
\ref{latticeGSp}
such that
\begin{itemize}
\item the homomorphism ${\bold G}\rightarrow {\rm GSp}_{2n}$
extends to a homomorphism ${\mathcal G}\rightarrow {\rm GSp}(W_\bullet)$,
\item the homomorphism ${\mathcal G}\otimes_{{\mathbb Z}_p[u]}k[[u]]\rightarrow {\rm GSp}(W_\bullet)\otimes_{{\mathbb Z}_p[u]}k[[u]]$
is a locally closed immersion, the Zariski closure of
${\mathcal G}\otimes_{{\mathbb Z}_p[u]}k((u))$ in ${\rm
GSp}(W_\bullet)\otimes_{{\mathbb Z}_p[u]}k[[u]]$ is a smooth group scheme
$P'_k$ over $k[[u]]$ and $P'_k(k[[u]])$ stabilizes $x_{k((u))}$ in
the building of ${\mathcal G}(k((u)))$; then
$P_k:={\mathcal G}\otimes_{{\mathbb Z}_p[u]}k[[u]]$ is the neutral component of
$P'_k$.
\end{itemize}
Under these assumptions, extending torsors via the homomorphism
${\mathcal G}\to {\rm GSp}(W_\bullet)$ gives ${\rm Gr}_{{\mathcal G}, {\mathbb Z}_p}\to
{\rm Gr}_{{\rm GSp}(W_\bullet), {\mathbb Z}_p}$. Restricting to ${\rm M}(G,
\{\mu\})_K\hookrightarrow {\rm Gr}_{{\mathcal G}, {\mathbb Z}_p}\otimes_{\O}\O_E$
gives a morphism of schemes
$$
\iota: {\rm M}(G, \{\mu\})_K\to {\rm M}({\rm
GSp}_{2n})_{W_\bullet}\otimes_{{\mathbb Z}_p}\O_E
$$
where ${\rm M}({\rm GSp}_{2n})_{W_\bullet}$ is the symplectic local
model as in \cite{GortzSymplectic} (cf. \ref{latticeGSp}).
\begin{prop}\label{embeddLoc}
Under the above assumptions, $\iota: {\rm M}(G, \{\mu\})_K\to {\rm
M}({\rm GSp}_{2n})_{W_\bullet}\otimes_{{\mathbb Z}_p}\O_E $ is a closed
immersion.
\end{prop}
\begin{proof}
Recall that the generic fiber ${\rm M}(G, \{\mu\})_K\otimes_{\O_E}E$
of ${\rm M}(G, \{\mu\})_K$ is the flag variety of parabolics
corresponding to $\{\mu\}$; the generic fiber of
${\rm M}({\rm GSp}_{2n})_{W_\bullet}$ is the Lagrangian Grassmannian
${\rm LGr}(n, 2n)$ of $n$-dimensional isotropic subspaces; our
assumption on $\rho\cdot\mu$ implies that $\iota\otimes_{\O_E}E$ is
a closed immersion. We will now explain why, in this set-up, the
morphism on the special fibers $\iota\otimes_{\O_E}k_E$ is also a
closed immersion.
As above, $P_k={\mathcal G}\otimes_{{\mathbb Z}_p[u]}k[[u]]$ and $P'_k$ is the closure of
$P_k[u^{-1}]$ in ${\rm GSp}(W_\bullet)\otimes_{{\mathbb Z}_p[u]}k[[u]]$;
$P_k$ is a parahoric group scheme over $k[[u]]$. By our assumption,
$P'_k$ is a smooth affine group scheme over $k[[u]]$ and $P_k$ is the
neutral component of $P'_k$.
Both ${\rm Gr}_{P_k}$ and $ {\rm Gr}_{P'_k}$
are ind-proper ind-schemes over $k$ and the natural morphism
$$
{\rm Gr}_{{\mathcal G}, k}={\rm Gr}_{P_k}\to {\rm Gr}_{P'_k}
$$
is finite \'etale. In what follows, for simplicity, set $P=P_k$,
$P'=P'_k$. Consider the
Kottwitz homomorphism $\kappa: P'(k((u)))=P(k((u)))\to \pi_1(P[u^{-1}])_I$ for the reductive group $P[u^{-1}]=P'[u^{-1}]$
over $k((u))$. By \cite{HainesRapoportAppendix}, since $P'(k[[u]])$
stabilizes $x_{k((u))}$, the intersection of the kernel ${\rm
ker}(\kappa)$ with $P'(k[[u]])$ is equal to $P(k[[u]])$. By
\cite{PappasRaTwisted}, the homomorphism $\kappa$ induces a
bijection
$$
\pi_0(LP)\simeq \pi_0({\rm Gr}_{P})\xrightarrow{\sim}
\pi_1(P[u^{-1}])_I
$$
between the set of connected components of ${\rm Gr}_P$ and the
group $\pi_1(P[u^{-1}])_I$. The above now imply that ${\rm
Gr}_{{\mathcal G}, k}={\rm Gr}_{P}\to {\rm Gr}_{P'}$ identifies each
connected component of ${\rm Gr}_{P}={\rm Gr}_{{\mathcal G}, k}$ with a
connected component of ${\rm Gr}_{P'}$.
Now $P'$ is a closed subgroup scheme of $Q:={\rm
GSp}(W_\bullet)\otimes_{\O[u]}k[[u]]$. By \cite{Ana} and \cite[VI.
2.5]{RaynaudLNM119}, the quotient $Q/P'$ is quasi-projective over
$k[[u]]$.
Suppose that $A$ is an Artin local
$k$-algebra. Then $A((u))$ is a local ring and so each morphism
${\rm Spec } (A((u)))\to Q/P'$ factors through an open affine subscheme of
$Q/P'$. Using this together with the argument of
\cite[Appendix]{GaitsgoryInv}, we can see that the fibered product
${\rm Spec } (A)\times_{{\rm Gr}_Q}{\rm Gr}_{P'}$ is represented by the
closed subscheme of ${\rm Spec } (A)$ over which (i.e., for quotients
$A\to A'$) the morphism ${\rm Spec } (A'((u)))\to
Q/P'$ obtained from the corresponding ${\rm Spec } (A((u)))\to Q/P'$ extends
to ${\rm Spec } (A'[[u]])\to Q/P'$. In particular, for any such $A$, ${\rm
Gr}_{P'}(A)\to {\rm Gr}_{Q}(A)$ is injective. Now let $ {\rm
Gr}_{P'}=\varinjlim_iY_i$, ${\rm Gr}_{Q}=\varinjlim_j Z_j$, with
$Y_i$, $Z_j$ proper closed subschemes and suppose $Y_i$ maps to
$Z_{j(i)}$. Applying the above, we see that $f_i: Y_i\to Z_{j(i)}$
is quasi-finite; since $Y_i$ is proper, $f_i$ is also proper and
hence finite by Zariski's main theorem. Since $f_i(A)$ is injective
for all $A$ as above, we see that $f_i$ is a closed immersion.
We conclude that
$$
{\rm Gr}_{P'}\to {\rm Gr}_{Q}
$$
is a closed immersion. Now notice that Zariski's main theorem
implies that the special fiber ${\rm M}(G,
\{\mu\})_K\otimes_{\O_E}k$ of ${\rm M}(G, \{\mu\})_K$ is connected:
indeed, the generic fiber ${\rm M}(G, \{\mu\})_K\otimes_{\O_E}E $
over $E$ is geometrically connected and ${\rm M}(G, \{\mu\})_K\to
{\rm Spec } (\O_E)$ is proper by construction. Since each connected
component of ${\rm Gr}_{{\mathcal G}, k}={\rm Gr}_P$ identifies with a
connected component of ${\rm Gr}_{P'}$ and ${\rm M}(G,
\{\mu\})_K\otimes_{\O_E}k$ is connected, we conclude from the above
that the morphism ${\rm M}(G, \{\mu\})_K\otimes_{\O_E}k\to {\rm
Gr}_{Q}$ is a closed immersion. Therefore the morphism
$\iota\otimes_{\O_E}k$ is also a closed immersion.
We will now show that $\iota$ is a closed immersion. For simplicity,
set ${\rm M}={\rm M}(G, \{\mu\})_K$. Denote by $\iota({\rm M})$ the
closed scheme theoretic image of $\iota: {\rm M}\to {\rm M}({\rm
GSp}_{2n})_{W_\bullet}\otimes_{\O}\O_E$. (Since
$\iota\otimes_{\O_E}E$ is a closed immersion and ${\rm M}$ is
integral, $\iota({\rm M})$ coincides with the Zariski closure of ${\rm
M}\otimes_{\O_E}E$ in ${\rm M}({\rm
GSp}_{2n})_{W_\bullet}\otimes_{\O}\O_E$.) We would like to show that
${\rm M}=\iota({\rm M})$. Once again, by Zariski's main theorem
$\iota({\rm M})_k=\iota({\rm M})\otimes_{\O_E}k$ is connected. Using
the valuative criterion of properness and the fact that both ${\rm M}$
and $\iota({\rm M})$ are proper and flat over $\O_E$, we see that
${\rm M}_k\to \iota({\rm M})_k$ is surjective. Consider
$\iota\otimes_{\O_E}k: {\rm M}_k\to \iota({\rm M})_k\hookrightarrow {\rm M}({\rm GSp}_{2n})_{W_\bullet}\otimes_{\O}k$;
this is a closed immersion, and therefore so is ${\rm M}_k\to
\iota({\rm M})_k$. The map ${\rm M}\to \iota({\rm M})$ is proper and
quasi-finite, hence finite. Let $A$ be the local ring of $\iota({\rm
M})$ at a closed point $\iota(x)$ of $\iota({\rm M})_k$ which is the
image of a closed point $x$ of ${\rm M}_k$. Denote by $B$ the local
ring of ${\rm M}$ at $x$; then $A\subset B$. Also, since $x$ is the
unique point of $\rm M$ that maps to $\iota(x)$, $B$ is finitely
generated over $A$. Since $\iota\otimes_{\O_E}k$ is a closed
immersion, $A/\varpi_E A$ surjects onto $B/\varpi_E B$; hence
$B=A+\varpi_E B$, that is, $B/A=\varpi_E\cdot (B/A)$. Applying
Nakayama's lemma to the finitely generated $A$-module $B/A$, we
conclude $A=B$. From this and the above we deduce ${\rm M}=\iota({\rm M})$.
\end{proof}
\smallskip
\subsection{The PEL case}\label{PELremark}
In this paragraph, we elaborate on the local models for Shimura
varieties of PEL type. We will assume throughout that the prime
$p$ is odd.
We follow \cite[Chapter 6]{RapZinkBook} (see also \cite{KottJAMS}):
Let ${{\bold B}}$ be a finite dimensional semisimple algebra over ${\mathbb Q}$
with a positive involution $*$. Then the center ${\bold F}$ of ${\bold B}$ is
a product of CM fields and totally real fields.
Let ${\bold V}$ be a finite dimensional ${\mathbb Q}$-vector space
of dimension $2n$ with a perfect alternating ${\mathbb Q}$-bilinear form $(\
,\ ): {\bold V}\times {\bold V}\to {\mathbb Q}$. Assume that ${\bold V}$ is equipped with a
${\bold B}$-module structure, such that
\begin{equation}\label{*form}
(bv, w)=(v, b^*w), \quad \forall v, w\in {\bold V}, \quad b\in {\bold B}.
\end{equation}
Set ${{\bold G}}\subset {\rm Aut}_{{\bold B}}({\bold V})$ to be the closed
algebraic subgroup over ${\mathbb Q}$ such that
$$
{{\bold G}}({\mathbb Q})=\{g\in {\rm Aut}_{{\bold B}}({\bold V})\ |\ (gv, gw)=c(g)(v, w),
\forall v, w\in {\bold V}, c(g)\in {\mathbb Q}\}.
$$
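For orientation, here is the most basic example (not spelled out in the text): when ${\bold B}={\mathbb Q}$ with the trivial involution, condition (\ref{*form}) is vacuous, and one recovers the Siegel case:

```latex
% Siegel case: B = Q with * = id, so Aut_B(V) = GL(V), and the similitude
% condition alone cuts out the full symplectic similitude group.
\[
{{\bold G}} \;=\; {\rm GSp}({\bold V}, (\ ,\ )) \;=\; {\rm GSp}_{2n}.
\]
```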
Let $h:{{\bold S}}:={\rm Res}_{{\mathbb C}/{\mathbb R}}{{\mathbb G}_{\rm m}}_{{\mathbb C}}\to {\bold G}_{{\mathbb R}}$ be the
morphism that defines on ${\bold V}_{{\mathbb R}}$ a Hodge structure of type
$(1,0)$, $(0,1)$, such that $(v, h(\sqrt{-1})w)$ is a symmetric
positive definite bilinear form on ${\bold V}_{{\mathbb R}}$. This gives a ${\bold B}$-invariant
decomposition ${\bold V}_{\mathbb C}={\bold V}_{0,{\mathbb C}}\oplus {\bold V}_{1, {\mathbb C}}$ where $z\in
{{\bold S}}$ acts on ${\bold V}_{0,{\mathbb C}}$ by multiplication by $\bar z$ and on
${\bold V}_{1,{\mathbb C}}$ by multiplication by $z$. Then $({{\bold G}}, \{h\})$
defines a Shimura variety of PEL type (cf.
\cite{DeligneTravauxShimura}, \cite{KottJAMS}).\footnote{Note that
${\bold G}$ is not always connected and so the set-up differs slightly
from the previous paragraph where it was assumed that ${\bold G}$ is
connected.} The reflex field ${\bold E}$ is the subfield of ${\mathbb C}$ that is
generated by the traces ${\rm Tr}_{{\mathbb C}}(b|{\bold V}_{0,{\mathbb C}})$ for $b\in
{\bold B}$. Using the isomorphism ${\bold S}_{\mathbb C}\simeq {\mathbb C}^*\times {\mathbb C}^*$,
$z\to (z, \bar z)$, we define $\mu: {{\mathbb G}_{\rm m}}_{\mathbb C}\to {{\bold G}}_{\mathbb C}$ as
$\mu(z)=h_{\mathbb C}(z, 1)$; the field ${\bold E}$ is the field of definition of
the ${{\bold G}}$-conjugacy class of $\mu$. By definition, we have an
embedding $\rho: {{\bold G}}\hookrightarrow {\rm GSp}({\bold V}, (\ ,\ ))={\rm
GSp}_{2n}$ and $\rho\cdot \mu$ is the standard minuscule coweight of
${\rm GSp}_{2n}$.
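For orientation, the standard minuscule coweight of ${\rm GSp}_{2n}$ can be written down explicitly; as a sketch, in a suitable symplectic basis adapted to the decomposition ${\bold V}_{\mathbb C}={\bold V}_{0,{\mathbb C}}\oplus {\bold V}_{1,{\mathbb C}}$:

```latex
% z acts by z on an n-dimensional isotropic summand and by 1 on a
% complementary isotropic summand; the similitude factor is z.
\[
(\rho\cdot\mu)(z) \;=\; {\rm diag}(\underbrace{z,\dots,z}_{n},\ \underbrace{1,\dots,1}_{n}),
\qquad c\big((\rho\cdot\mu)(z)\big)\;=\;z.
\]
```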
\subsubsection{} Now set $G^\flat={{\bold G}}_{{\mathbb Q}_p}$ and denote by
$G={{\bold G}}_{{\mathbb Q}_p}^\circ$ the neutral component. Let ${\mathfrak
P}|(p)$ be a prime of ${\bold E}$ and set $E^\flat={{\bold E}}_{\mathfrak
P}$; we see that $\mu$ gives a unique conjugacy class of coweights
${{\mathbb G}_{\rm m}}_{\bar{\mathbb Q}_p}\to G^\flat_{\bar {\mathbb Q}_p}$ which is defined over
$E^\flat$. Observe that each cocharacter $\mu: {{\mathbb G}_{\rm m}}_{\bar{\mathbb Q}_p}\to
G^\flat_{\bar{\mathbb Q}_p}$ lands in the neutral component $G_{\bar{\mathbb Q}_p}$
and we can choose a representative $\mu: {{\mathbb G}_{\rm m}}_{\bar{\mathbb Q}_p}\to G_{\bar
{\mathbb Q}_p}$. Then the corresponding geometric $G_{\bar{\mathbb Q}_p}$-conjugacy
class is defined over $E$ which is a finite extension of $E^\flat$
(and can depend on our choice). We will also denote this conjugacy
class by $\{\mu\}$. Consider an order $\O_{\bold B}$ of ${\bold B}$ such
that $\O_B:=\O_{\bold B}\otimes{\mathbb Z}_p$ is a maximal order which is
stable under the involution $*$. Also, let $\{{\mathcal L}\}$ be a
self-dual multi-chain of $\O_B$-lattices in $V={\bold V}_{{\mathbb Q}_p}$
with respect to $*$ and the alternating form $(\ ,\ )$ (in the sense
of \cite[Chapter 3]{RapZinkBook}). Consider the group scheme
$\P^\flat$ over ${\mathbb Z}_p$ whose $S$-valued points form the group of
$\O_B\otimes\O_S$-isomorphisms of the multi-chain
$\{\L\otimes\O_S\}$ that respect the forms up to (common)
similitude in $\O^*_S$. The generic fiber of $\P^\flat$ is
$G^\flat$. It follows from \cite[Appendix]{RapZinkBook} that
$\P^\flat$ is smooth over ${\mathbb Z}_p$.
As we discussed in \ref{ss4a}, each such self-dual multichain
$\{{\mathcal L}\}$ defines a point $x(\L)$ in the Bruhat-Tits
building ${\mathcal B}(G, \breve{\mathbb Q}_p)$ and we can check that the
group $\P^\flat(\breve{\mathbb Z}_p)\cap G(\breve{\mathbb Q} _p)$ is the
stabilizer subgroup of $x(\L)$ in $G(\breve{\mathbb Q}_p) $. By
\cite{BTII}, there is a unique affine smooth group scheme ${\mathcal P}'$
over ${\rm Spec } ({\mathbb Z}_p)$ with generic fiber $G$ such that
${\mathcal P}'(\breve{\mathbb Z}_p )$ is the stabilizer of $x(\L)$ in $G(\breve{\mathbb Q}_p)$. Then there is a group scheme embedding $\P'\hookrightarrow
\P^\flat$ which extends $G=(G^\flat)^\circ \hookrightarrow G^\flat$
and in fact, ${\mathcal P}'$ is the Zariski closure of $G$ in
${\mathcal P}^\flat$. Finally, the neutral component $\P:=(\P')^\circ$ is
the parahoric group scheme of $G$ associated to $x(\L)$.
\subsubsection{} In this paragraph we describe some constructions from \cite{RapZinkBook}.
The reader is referred to this work for more details.
We will consider $B:={\bold B}\otimes_{{\mathbb Q}}{\mathbb Q}_p$ as a central
$F:={\bold F}\otimes_{{\mathbb Q}}{\mathbb Q}_p$-algebra. For simplicity, we will
assume that the invariants of the involution $*$ on the center $F$
of $B$ are a field $F_0$. (The general case of $(B, *, V, (\ ,\ ))$
can be decomposed into a direct sum of cases with this property.)
There are two cases:
(A) The center of $B$ is a product $Z=F\times F$, then $B\simeq
M_n(D)\times M_n(D^{\rm opp})$ and $(x, y)^*=(y, x)$. (Here $D$ is a
central division algebra over $F$. Note that $D=D^{\rm opp}$ as
sets.)
(B) The center of $B\simeq M_n(D)$ is a field $F$.
\smallskip
(Case A) We set $\O_B=M_n(\O_D)\times M_n({\O_D}^{\rm opp})$; denote
by $*$ the exchange involution on $\O_B$. Set
$U=\O_D^n\otimes_{\O_D}T$ (a left $M_n(\O_D)$-module) where $T\simeq
\O_D^m$ and $\tilde U={\rm Hom}_{{\mathbb Z}_p}(U,{\mathbb Z}_p)$ which is then also
naturally a left $M_n({\O_D}^{\rm opp})$-module. Set $W=U\oplus
\widetilde U $ which is a left $\O_B$-module; then $W$ also
supports a unique perfect alternating ${\mathbb Z}_p$-bilinear form $(\ ,\ )$
for which $U$, $\widetilde U$ are isotropic and $((u, 0), (0, \tilde
u))=\tilde u(u)$ for all $u\in U$, $\tilde u\in \widetilde U$. In
this case, the lattice chain $\L_\bullet$ comes about as follows:
Choose an $\O_D$-lattice chain $\Gamma_\bullet$ in
$\O_D^m=\oplus_{i=1}^m\O_De_i$
$$
\Gamma_r=\Pi\cdot \Gamma_0\subset \Gamma_{r-1}\subset \cdots \subset
\Gamma_0=\O_D^m
$$
such that $\Gamma_{j}=\oplus_{i=1}^m \Pi^{a_i} \O_De_i $ for some
$1\geq a_i\geq 0$. Consider
$$
U_\bullet=\O_D^n\otimes_{\O_D}\Gamma_\bullet.
$$
Then $\L_\bullet=U_\bullet\oplus \widetilde U_\bullet$
for a unique choice of $\Gamma_\bullet$ as above.
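In this case the alternating form on $W=U\oplus \widetilde U$ is the standard hyperbolic one; as a sketch (with respect to a ${\mathbb Z}_p$-basis of $U$ and the dual basis of $\widetilde U$, up to sign conventions), its Gram matrix is:

```latex
% ((u,0),(0,\tilde u)) = \tilde u(u), and the alternating property forces
% ((0,\tilde u),(u,0)) = -\tilde u(u); both U and \widetilde U are isotropic.
\[
J \;=\; \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}.
\]
```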
(Case B) This can be split into three cases (B1), (B2), (B3) which
correspond to (II), (III), (IV) of \cite{RapZinkBook}, p. 135.
(B1) $B=M_n(F)$, $F=F_0$.
(B2) $B=M_n(F)$ and $F/F_0$ is a quadratic extension.
(B3) $B=M_n(D)$ where $D$ is a quaternion algebra over $F$ and
$F=F_0$.
Recall that, as in \cite{RapZinkBook}, we always assume that we have
a maximal order $\O_D$ such that $\O_B=M_n(\O_D)$ is stable under
the involution $*$. Here, we use $D$ to denote either the quaternion
algebra $D$ or $F$ depending on the case we are considering. As in
\cite[App.]{RapZinkBook}, we see that there is a certain perfect
form $H:\O_D^n\times\O_D^n\to \O_D$ (the possible types are:
symmetric, alternating, hermitian, anti-hermitian, quaternionic
hermitian or antihermitian for the main or the new involution on
$D$) on the (right) $\O_D$-module $U=\O_D^n$ such that the
involution $*$ satisfies
\begin{equation}
H(Au, v)=H(u, A^*v), \quad u, v\in U , \ A\in M_n(\O_D).
\end{equation}
Here we identify $M_n(\O_D)$ with the right $\O_D$-module
endomorphisms ${\rm End}_{-\O_D}(U)$. We will denote the involution
of $\O_D$ that we are using by $d\mapsto \breve d$. Then $d\mapsto
\breve{d}$ can be trivial, conjugation, the main involution or the
new involution in the case of quaternion algebras. For simplicity,
we will also refer to all the possible types of forms as
$\epsilon$-hermitian for the involution $d\mapsto \breve d$. If this
involution is trivial, ``$1$-hermitian'' means symmetric and
``$(-1)$-hermitian'' means alternating.
Let $\vartheta_F$ be a generator of the different of $F/{\mathbb Q}_p$ such
that $\bar\vartheta_F=-\vartheta_F$ if $F\neq F_0$. Define $h:
\O_D^n\times\O_D^n\to {\mathbb Z}_p$ by $h={\rm Tr}_{F/{\mathbb Q}_p}(\vartheta^{-1}_FH)$
if $D=F$, and by $h={\rm Tr}_{F/{\mathbb Q}_p}(\vartheta^{-1}_F{\rm
Tr}^0(\Pi^{-1}H))$ if $D$ is quaternion, where ${\rm Tr}^0: D\to F$
is the reduced trace.
As explained in \cite[Appendix]{RapZinkBook}, we can now employ
Morita equivalence and write $V=M_n(D)\otimes_DW$ with $W\simeq D^m$
a free left $D$-module. The alternating form $(\ ,\ )={\mathcal E}(\ ,\ )$
on $V$ can be written
\begin{equation}
{\mathcal E}(u_1\otimes w_1, u_2\otimes w_2)=h(u_1, u_2\Psi(w_1, w_2))
\end{equation}
where $\Psi: W\times W\to D$ is an $\epsilon$-hermitian form on $W$.
The sign $\epsilon$ of $\Psi$ is the opposite of that of $H$. We can
also write
\begin{equation}
\L_i=M_n(\O_D)\otimes_{\O_D} N_i
\end{equation}
where $N_i$ are $\O_D$-lattices in $W$. The perfect forms ${\mathcal E}:
\L_{i}\times\L_j\to {\mathbb Z}_p$ induce $\Psi: N_i\times N_j\to \O_D$ such
that
\begin{equation}
{\mathcal E}(u_1\otimes n_1, u_2\otimes n_2)=h(u_1, u_2\Psi(n_1, n_2)).
\end{equation}
Then $\{N_i\}$ give a polarized chain of $\O_D$-lattices in $W$ for
the form $\Psi$ and the point here is that there is a uniquely
determined polarized lattice chain $N_\bullet$ that produces the
polarized chain $\L_\bullet$ as above.
\subsubsection{} To proceed we assume that the group $G$ splits over a
tamely ramified extension of ${\mathbb Q}_p$. In particular, the prime $p$ is
at most tamely ramified in the center $F$ of $B$. Now let us explain
how we can extend the above construction over the base ${\mathbb Z}_p[u]$.
(Case A) We set $\O({\mathcal B})=M_n(\O({\mathcal D}))\times M_n(\O({\mathcal D})^{\rm
opp})$ with $\O({\mathcal D})$ over $W[v]$ as given in (\ref{order});
denote again by $*$ the exchange involution on $\O({\mathcal B})$. Recall
we view $W[v]$ as a ${\mathbb Z}_p[u]$-algebra via $u\mapsto v^e\cdot (p\cdot
\varpi^{-e})$. Set $\underline U = \O({\mathcal D})^n\otimes_{\O({\mathcal D})}\underline T$
(a left $M_n(\O({\mathcal D}))$-module) where $\underline T\simeq \O({\mathcal D})^m$
and $ \widetilde {\underline U} ={\rm Hom}_{{\mathbb Z}_p[u]}(\underline U,{\mathbb Z}_p[u])$
which is then also naturally a left $M_n(\O({\mathcal D})^{\rm
opp})$-module. Set $\underline W=\underline U\oplus \widetilde {\underline U} $ which
is a left $\O({\mathcal B})$-module; then $\underline W$ also supports a unique
perfect alternating ${\mathbb Z}_p[u]$-bilinear form $(\ ,\ )$ for which
$\underline U$, $\widetilde {\underline U}$ are isotropic and $((u, 0), (0,
\tilde u))=\tilde u(u)$ for all $u\in \underline U$, $\tilde u\in
\widetilde {\underline U}$. Now choose an $\O({\mathcal D})$-lattice chain
$\underline\Gamma_\bullet$ in $\O({\mathcal D})^m=\oplus_{i=1}^m\O(\mathcal D)e_i$
$$
\underline\Gamma_r=X\cdot \underline \Gamma_0\subset \underline\Gamma_{r-1}\subset
\cdots \subset \underline \Gamma_0=\O({\mathcal D})^m
$$
such that $\underline\Gamma_{j}=\oplus_{i=1}^m X^{a_i} \O({\mathcal D})e_i $ for
some $1\geq a_i\geq 0$ which lifts the corresponding lattice chain
$\Gamma_j$. Consider
$$
\underline U_\bullet=\O({\mathcal D})^n\otimes_{\O({\mathcal D})}\underline\Gamma_\bullet.
$$
Then $\underline W_\bullet=\underline {U}_\bullet\oplus \widetilde {\underline
{U}}_\bullet$ is a self-dual (polarized) chain
(as in \S \ref{lattice} (case 2)) that lifts $\{\L_i\}_i$. Consider
$$
{\mathcal G}(R)={\rm Aut}_{\O({\mathcal B})\otimes_{{\mathbb Z}_p[u]}R}(\{ \underline
W_\bullet\otimes_{{\mathbb Z}_p[u]}R\}, (\ ,\ ))
$$
where the automorphisms of the chain are supposed to preserve the
form $(\ ,\ )$ up to common similitude in $R^*$. As in \S
\ref{sss4b3}, \S \ref{sss4b11}, we can see that ${\mathcal G}$ is one of the
group schemes constructed in Theorem \ref{grpschemeThm}. By its
construction above, the group scheme ${\mathcal G}$ is a closed subgroup
scheme of ${\rm GSp}(\underline W_\bullet)$.
\smallskip
(Case B) Recall from \ref{sss4c4} our notation of ${\mathfrak O}$ which
could be $W[v]$, ${\mathfrak R}$ or $\O({\mathcal Q})$. The algebra ${\mathfrak O}$
specializes to $\O_D$ above under $v\mapsto \varpi_0$ (note here
that in general the ``base field'' is $F_0$). Notice that the
involution $d\mapsto \breve d$ of $\O_D$ has a canonical extension
to an involution of ${\mathfrak O}$; we will denote this involution by the
same symbol.
Now to start our construction, first extend
the form $H$ to a perfect form $\underline H: {\mathfrak O}^n\times{\mathfrak O}^n\to
{\mathfrak O}$ of the type described in \S \ref{exClassical} such that
under $W[v]\to \O_{F_0}$, $v\mapsto \varpi_0$, we obtain a form
isomorphic to $H$. For example, suppose we are in case (B1). Then
depending on the type of the involution $*$, the form $H$ is either
symmetric or alternating. In the alternating case, all perfect forms
on $\O_F^n$ are isomorphic to the standard form and we can extend
this over $W[v]$ as in \S \ref{exAlt}. In the symmetric case, the
discriminant of the perfect form is in
$W^*/(W^*)^2\simeq\O_F^*/(\O_F^*)^2$ and the Hasse invariant is
trivial; this implies that there are only two cases to consider: the
split case in which $H$ is isomorphic to the standard form and the
quasi-split unramified case; these can be lifted as in the first
cases of \S \ref{exSymm}. The other cases can be dealt by similar
arguments using \S \ref{exhaustive} and following the pattern
explained in \cite[Appendix]{RapZinkBook}. We will leave the details
to the reader. Notice that since we are assuming that the form $H$
is perfect on $\O_D^n$ there are fewer cases to consider. On the
other hand, we now also have to allow the anti-hermitian forms and
also the quaternionic $\epsilon$-hermitian for the new involution as
in
\ref{exVariants}.
Define now a perfect form $\underline h: {\mathfrak O}^n\times{\mathfrak O}^n\to
{\mathbb Z}_p[u]$ as follows: We set $\underline h={\rm
Tr}_{{\mathfrak O}/{\mathbb Z}_p[u]}(v^{1-e}\cdot \underline H)$ if ${\mathfrak O}=W[v]$ or
${\mathfrak O}={\mathfrak R}=W_2[v]$ (unramified case), $\underline h={\rm
Tr}_{{\mathfrak O}/{\mathbb Z}_p[u]}(v'^{1-2e}\cdot \underline H)$ if
${\mathfrak O}={\mathfrak R}=W[v']$ (ramified case), and finally $\underline h={\rm
Tr}_{W[v]/{\mathbb Z}_p[u]}(v^{1-e}\cdot {\rm Tr}^0(X^{-1}\underline H))$ if
${\mathfrak O}=\O({\mathcal Q})$.
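The powers of $v$ appearing here are chosen so that $\underline h$ specializes to a form of the same kind as $h$; as a sketch, assuming $F/{\mathbb Q}_p$ is totally tamely ramified of degree $e$, under $v\mapsto \varpi$ (hence $u\mapsto p$) we get:

```latex
% v^{1-e} specializes to \varpi^{1-e}, which generates the inverse different
% of a totally tamely ramified degree-e extension F/Q_p, so it can play the
% role of \vartheta_F^{-1} in the definition of h above.
\[
v^{1-e} \;\longmapsto\; \varpi^{1-e}, \qquad
{\mathfrak d}^{-1}_{F/{\mathbb Q}_p} \;=\; (\varpi^{1-e}).
\]
```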
This construction allows us to extend the involution $*$ on
$\O_B=M_n(\O_D)$ to an involution on ${\O({\mathcal B})}=M_n({\mathfrak O})$ which
we will also denote by $*$. Indeed, we can set
\[
A^*=\underline C\cdot {}^t\breve A\cdot \underline C^{-1}
\]
where $\underline C$ is the matrix in ${\rm GL}_n({\mathfrak O})$ that gives the
perfect form $\underline H$. This satisfies $\underline C=\epsilon\cdot {}^t\breve{\underline C}$.
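As a sanity check (not carried out in the text), write $\sigma(X)={}^t\breve X$, which is an anti-automorphism of $M_n({\mathfrak O})$ with $\sigma^2={\rm id}$; then $A^*=\underline C\,\sigma(A)\,\underline C^{-1}$, and one verifies that $*$ is indeed an involution:

```latex
% (AB)^* = C.sigma(AB).C^{-1} = C.sigma(B)sigma(A).C^{-1} = B^* A^*, and
% A^{**} = C.sigma(C)^{-1}.A.sigma(C).C^{-1} = A, since sigma(C) = eps.C.
\[
(AB)^* = \underline C\,\sigma(B)\sigma(A)\,\underline C^{-1}=B^*A^*,
\qquad
A^{**} = \underline C\,\sigma(\underline C)^{-1}\,A\,\sigma(\underline C)\,\underline C^{-1}=A,
\]
```

the last equality using $\sigma(\underline C)={}^t\breve{\underline C}=\epsilon\,\underline C$.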
Using \S \ref{exhaustive} together with \S \ref{exVariants}, we
extend $\Psi$ above to an $\epsilon$-hermitian form $\underline\Psi$ on
${\mathcal U}={\mathfrak O}[v^{\pm 1}]^m$ for the involution $d\mapsto \breve d $
of the type described in \S \ref{exClassical}. As before, the form
$\underline\Psi$ has the opposite parity of that of $\underline H$ (i.e if one
is $\epsilon$-hermitian the other is $(-\epsilon)$-hermitian).
The self-dual $\O_D$-chain $N_\bullet$ gives a point $x$ in the
building of the corresponding group over ${\mathbb Q}_p$. We can assume that
$x$
belongs to the apartment of the standard maximal split torus; then we can give a
self-dual ${\mathfrak O}$-lattice chain $\underline N_\bullet$ in ${\mathfrak O}[v^{\pm 1}]^m$
that extends the self-dual $\O_D$-chain $N_\bullet$. This can be
done explicitly on a case-by-case basis by appealing to the list of
cases in \cite[Appendix]{RapZinkBook}. Alternatively, we can argue
as follows: As before the statement of Theorem \ref{grpschemeThm},
our identification of apartments induced by $u\mapsto p$, gives a
corresponding point $x_{{\mathbb Q}_p((u))}$ in the building of $\underline
G_{{\mathbb Q}_p((u))}$ over ${\mathbb Q}_p((u))$; this corresponds to a self-dual
${\mathbb Q}_p[[u]]$-lattice chain. Now we can use the construction in Remark
\ref{genericLattice} to obtain the desired ${\mathfrak O}$-lattice chain
over ${\mathbb Z}_p[u]$.
Now consider the tensor products
$
\underline M_i={\mathfrak O}^n\otimes_{{\mathfrak O}}\underline N_i.
$
These are free (left) $M_n({\mathfrak O})$-modules; they are all contained
in $\underline M_i[v^{-1}]=({\mathfrak O}^n\otimes_{{\mathfrak O}}{\mathfrak O}^m)[v^{-1}]$.
Define a ${\mathbb Z}_p[u]$-valued form $\underline {\mathcal E}$ on
$({\mathfrak O}^n\otimes_{{\mathfrak O}}{\mathfrak O}^m)[v^{-1}]$ by
\begin{equation}
\underline {\mathcal E}(u_1\otimes n_1, u_2\otimes n_2)=\underline h(u_1, u_2\underline
\Psi(n_1, n_2)).
\end{equation}
We can check that the parity condition above implies that the form
is alternating, i.e $\underline{\mathcal E}(x, y)=- \underline{\mathcal E}(y, x)$. Also, the form
$\underline{\mathcal E}$ satisfies
\begin{equation}
\underline{\mathcal E}(b^*m_1, m_2)=\underline{\mathcal E}(m_1, bm_2)
\end{equation}
for $b\in M_n({\mathfrak O})$, $m_1$, $m_2\in \underline M_i[v^{-1}]$. (Notice
that the form $\underline\Psi$ is uniquely determined by $\underline{\mathcal E}$.) As
a result of this construction, we have given in particular, a
self-dual chain $\{\underline M_i\}_i$ of $M_n({\mathfrak O})$-lattices for the
involution $*$ and the alternating form $\underline{\mathcal E}$ that specializes
to the self-dual chain of $M_n(\O_D)$-lattices $\{{\mathcal L}\}$ of our
initial data.
Now consider ${\mathcal G}'$ the group scheme of automorphisms of the
self-dual ${\mathfrak O}$-lattice chain $(\underline N_i)_i$ such that the
corresponding automorphism of $(\underline M_i)_i$ respects the form
$\underline{\mathcal E}$ up to a common similitude as above. By \S \ref{sss4b11}
the connected component ${\mathcal G}$ of ${\mathcal G}'$ is an example of the group
schemes of Theorem \ref{grpschemeThm}. Our construction provides
a group scheme homomorphism
\begin{equation}
\underline\rho: {\mathcal G}'\xrightarrow {\ }{\rm GSp}(\underline M_\bullet).
\end{equation}
As in \cite[Appendix]{RapZinkBook}, using Morita equivalence we can
see that this is a closed immersion. We can also see that $\underline\rho$
extends the natural symplectic representation $\rho: {\bold G}\to {\rm
GSp}$ obtained from the PEL data. It follows that the homomorphism
$\underline\rho: {\mathcal G}\to {\rm GSp}(\underline M_\bullet)$ satisfies the
assumptions of the previous section. (Here, we indeed have to allow
that ${\mathcal G}$ may not be closed in ${\rm GSp}(\underline M_\bullet)$. An
example is when $G$ is a ramified unitary similitude group on an
even number of variables and $x$ a vertex corresponding to a single
self-dual lattice for the corresponding hermitian form. Then the
fiber over $u=0$ of the Zariski closure of $\underline G$ in ${\rm
GSp}(\underline M_\bullet)$ has two connected components, see
\cite[1.3]{PappasRaIII}.)
Using the above together with Proposition \ref{embeddLoc}, we now
see that in the case of PEL Shimura varieties of \cite{RapZinkBook}
we can identify ${\rm M}(G, \{\mu\})_K=M_{{\mathcal G}, \mu}$ with the
Zariski closure of $G/P_{\mu}$ in the symplectic local model ${\rm
M}({\rm GSp}_{2n})_{M_\bullet}\otimes_{\O}\O_E$ of
\cite{GortzSymplectic} under the standard symplectic representation
$\rho: {\bold G}\to {\mathrm {GSp}}_{2n}$.\footnote{We emphasize
here that, in general, ${\bold G}$ is not connected and that
$G={{\bold G}}_{{\mathbb Q}_p}^\circ$.}
\subsubsection{} \label{sss8c4}
We will now explain how the work of Rapoport and Zink
(\cite{RapZinkBook}) combined with the above can be used to produce
integral models of PEL Shimura varieties that afford a diagram as in
(\ref{locmoddiagram}). Then as a result of Theorem \ref{thm01},
these models satisfy favorable properties, cf. Theorem
\ref{thmPEL}. Our explanation becomes more complicated when the
group ${\bold G}_{{\mathbb Q}_p}$ is not connected, one reason being that
our theory of local models has been set up only for connected
groups. At first glance, the reader can assume that ${\bold
G}_{{\mathbb Q}_p}$ is connected; then everything simplifies considerably.
We will continue to use some of the notations and constructions of
\cite{RapZinkBook}. Starting from the PEL data ${\mathfrak
D}=({{\bold B}}, \O_{{\bold B}}, *, {{\bold V}}, (\ ,\ ), h, \{\L \}, K^p)$ with
corresponding group ${\bold G}$ and the choice of a prime ${\mathfrak P}|(p)$ of
the reflex field ${\bold E}$, Rapoport and Zink define a moduli functor
${\mathcal A}_{K^p}$ over $\O_{E^\flat}=\O_{{{\bold E}}_{{\mathfrak P}}}$ (see
\cite[Definition 6.9]{RapZinkBook}, here $E^\flat={{\bold E}}_{{\mathfrak P}}$
is the local reflex field). Here $K^p$ is a compact open subgroup of
${\bold G}({\mathbb A}^p_f)$. When $K^p$ is small enough, this
functor is representable by a quasi-projective scheme ${\mathcal A}_{K^p}$
over ${\rm Spec } (\O_{E^\flat})$.
Recall $G^\flat={{\bold G}}_{{\mathbb Q}_p}$, $G={{\bold G}}^\circ_{{\mathbb Q}_p}$ (a connected
reductive group); as usual, we assume these split over a tamely
ramified extension of ${\mathbb Q}_p$. The Shimura data give a conjugacy
class of cocharacters $\mu: {{\mathbb G}_{\rm m}}_{\bar{\mathbb Q}_p}\to G^\flat_{\bar{\mathbb Q}_p}$;
then $E^\flat$ is the field of definition of this conjugacy class.
Denote by $K^\flat_p$ the stabilizer of the lattice chain $\{\L\}$
in $G^\flat({\mathbb Q}_p)={\bold G}({\mathbb Q}_p)$. Set ${\bold K}^\flat=K^p\cdot
K^\flat_p$.
Then the generic fiber ${\mathcal A}_{K^p}\otimes_{\O_{E^\flat}}E^\flat$ contains
the Shimura variety $Sh_{\bold K^\flat}\otimes_{{\bold E}}E^\flat$ for
${\bold G}$, as a union of some of its connected components;
${\mathcal A}_{K^p}\otimes_{\O_{E^\flat}}E^\flat$ could also contain more
Shimura varieties, which correspond to other forms of the group
${\bold G}$ (for example, when the Hasse principle fails, cf.
\cite{KottJAMS}). Set $K'_p=K^\flat_p\cap G({\mathbb Q}_p)$ and denote by
$K=K_p$ the parahoric subgroup of $G({\mathbb Q}_p)$ that corresponds to $x(
\L )$. Recall we denote by
${\mathcal P}$ the corresponding (connected) smooth group scheme over ${\mathbb Z}_p$ and by ${\mathcal P}'$ the smooth group
scheme over ${\mathbb Z}_p$ determined by the stabilizer of $x(\L)$ so that $K'_p={\mathcal P}'({\mathbb Z}_p)$;
${\mathcal P}$ is the neutral component of ${\mathcal P}'$. Then $K=K_p$ is a
normal subgroup of finite index in $K'_p$. (In most cases, we have
$K'_p=K_p$, ${\mathcal P}'={\mathcal P}$.) Also recall that we denote by
${\mathcal P}^\flat$ the smooth group scheme of $\O_B$-isomorphisms of the
polarized multichain $ \{\L \}$ up to common similitude; we have
${\mathcal P}^\flat\otimes_{{\mathbb Z}_p}{\mathbb Q}_p=G^\flat$.
By \cite{RapZinkBook}, we have a smooth morphism of algebraic stacks
over ${\rm Spec } (\O_{E^\flat})$
\begin{equation}\label{phiRZ}
\varphi: {\mathcal A}_{K^p}\to [{\rm M}^{\rm naive}/\mathcal
P^\flat_{\O_{E^\flat}}]\,
\end{equation}
where ${\rm M}^{\rm naive}$ is the ``naive'' local model that
corresponds to our data (see loc. cit. Def. 3.27, where this is
denoted by ${\rm M}^{\rm loc}$). By its definition, ${\rm M}^{\rm
naive}$ is a closed subscheme of the symplectic local model ${\rm
M}({\rm GSp}_{2n})_{M_\bullet}\otimes_{{\mathbb Z}_p}\O_{E^\flat}$ as above.
The generic fiber of ${\rm M}^{\rm naive}$ is a projective
homogeneous space over $E^\flat$ for the group $G^\flat$ (which is
not always connected). Observe that each cocharacter $\mu:
{{\mathbb G}_{\rm m}}_{\bar{\mathbb Q}_p}\to G^\flat_{\bar{\mathbb Q}_p}$ lands in the neutral
component $G_{\bar{\mathbb Q}_p}$. Let $\{\mu_i\}_{i}$ be a set of
representatives (up to $G(\bar{\mathbb Q}_p)$-conjugation) of the
cocharacters of $G_{\bar{\mathbb Q}_p}$ in the $G^\flat(\bar{\mathbb Q}_p)$-conjugacy
class of $\mu$; we can write $\mu_i=\tau_i\mu\tau_i^{-1}$ with
$\tau_i\in G^\flat(\bar{\mathbb Q}_p)$. Denote by $E_i$ the field of
definition of the $G(\bar{\mathbb Q}_p)$-conjugacy class of $\mu_i$. Then
$E^\flat\subset E_i$. The generic fiber ${\rm M}(G,
\{\mu_i\})_K\otimes_{\O_{E_i}}E_i=G_{E_i}/P_{\mu_i}$ is a
homogeneous space for $G_{E_i}$. By the above and Proposition
\ref{embeddLoc}, we obtain closed immersions
$$
\iota_i: {\rm M}(G, \{\mu_i\})_K\to {\rm M}^{\rm
naive}\otimes_{\O_{E^\flat}}\O_{E_i}
$$
of schemes over $\O_{E_i}$ which are equivariant for the action of
the group scheme ${\mathcal P}_{\O_{E_i}}$. In fact, since
${\mathcal P}'_{\O_{E_i}}$ is a closed subgroup scheme of ${\rm
GSp}(M_\bullet)$, the action of ${\mathcal P}_{\O_{E_i}}$ on ${\rm M}(G,
\{\mu_i\})_K$ extends to an action of ${\mathcal P}'_{\O_{E_i}}$ such that
$\iota_i$ remains equivariant. Now let $\tilde E$ be a Galois extension
of ${\mathbb Q}_p$ that splits $G$ and contains all the fields $E_i$. Then
the base change ${\rm M}^{\rm naive}\otimes_{\O_{E^\flat}}\tilde E$ is
a disjoint union
$$
{\rm M}^{\rm naive}\otimes_{\O_{E^\flat}}\tilde E=\bigsqcup_i ({\rm
M}(G, \{\mu_i\})_{K}\otimes_{\O_{E_i}}\tilde E)=\bigsqcup_i
G_{E_i}/P_{\mu_i}\otimes_{E_i}\tilde E.
$$
We obtain a morphism
$$
\iota: N:= \bigsqcup_i {\rm M}(G, \{\mu_i\})_K\otimes_{\O_{E_i}}\O_{\tilde E}\to {\rm M}^{\rm naive}\otimes_{\O_{E^\flat}}\O_{\tilde E}
$$
of schemes over $\O_{\tilde E}$. The scheme $N$ supports an action of
$\P'_{\O_{\tilde E}}$ while its generic fiber supports a compatible action of
$G^\flat_{\tilde E}$; we can see that the action of $\P'_{\O_{\tilde E}}$ on $N$ extends to an action of $\P^\flat_{\O_{\tilde E}}$
such that $\iota$ is $\P^\flat_{\O_{\tilde E}}$-equivariant. Both the source and target of $\iota$ (considered as schemes over $\O_{E^\flat}$) support an action of
the Galois group $\Gamma={\rm Gal}(\tilde E/E^\flat)$ and $\iota$ is $\Gamma$-equivariant (we can check these statements by looking at the generic fibers).
In this more general situation, we define the local model to be the quotient
\[
{\rm M}^{\rm loc}=N/\Gamma.
\]
This is a flat $\O_{E^\flat}$-scheme. The morphism $\iota$ gives
$$
\xi: {\rm M}^{\rm loc}=N/\Gamma\xrightarrow{ \ } ({\rm M}^{\rm naive}\otimes_{\O_{E^\flat}}\O_{\tilde E})/\Gamma=
{\rm M}^{\rm naive}
$$
which we can check is $\P^\flat_{\O_{E^{\flat}}}$-equivariant and
gives an isomorphism between generic fibers.
\begin{Remark}
{\rm If ${\bold G}$ is connected, then $G=G^\flat$, there is only
one $i$ and $E=E_i=E^\flat$ is the local reflex field as above. In
that case, ${\rm M}^{\rm loc}={\rm M}(G, \{\mu\})_K$.}
\end{Remark}
Pulling back the smooth morphism $\varphi$ of (\ref{phiRZ}) along $\xi$ gives a
cartesian diagram
\begin{equation}\label{pullback}
\xymatrix{
{\mathcal A}'_{K^p} \ar[r] \ar[d] & [{\rm M}^{\rm loc} /{\mathcal P}^\flat_{\mathcal O_{E^\flat}}] \ar[d] \\
{\mathcal A}_{K^p} \ar[r] &
[{\rm M}^{\rm naive} /{\mathcal P}^\flat_{\mathcal O_{E^\flat}}]
}
\end{equation}
with both horizontal arrows smooth.
The schemes $ {\mathcal A}'_{K^p}$ and ${\mathcal A}_{K^p}$ have isomorphic
generic fibers. Using the arguments in \cite[\S 8]{KottJAMS}
applied to ${\mathcal A}'_{K^p}\otimes_{\O_{E^\flat}}E^\flat=
{\mathcal A}_{K^p}\otimes_{\O_{E^\flat}}E^\flat$ we can now express this
scheme as a union of Shimura varieties $Sh^{(j)}_{\bold K^\flat}$
given by various forms ${\bold G}^{(j)}$ of the group $\bold
G$.\footnote{Here, we also allow in our formalism Shimura varieties
for non-connected reductive groups.} (These forms satisfy ${\bold
G}^{(j)}({{\mathbb Q}_v})\simeq {\bold G}({{\mathbb Q}_v})$, for all places $v\neq
p$). Therefore, ${\mathcal A}'_{K^p}$ is a flat model over $\O_{E^\flat}$
for such a union of Shimura varieties.
Since we assume that $p$ is odd, we have
$p\nmid|\pi_1(G_{\rm der})|$. Theorem \ref{CMfiber} and Proposition \ref{normalgen} below
imply that the base changes ${\rm M}(G, \{\mu_i\})_K\otimes_{\O_{E_i}}\O_{\tilde E}$
are normal. Since normality is preserved by taking invariants by a finite group, ${\rm M}^{\rm loc}=N/\Gamma$
is also normal. It follows that the strict henselizations of ${\rm M}^{\rm loc}$ at all closed points
are normal and therefore also integral. We can now conclude, using the smoothness of
$\phi: {\mathcal A}'_{K^p} \rightarrow [{\rm M}^{\rm loc}
/{\mathcal P}^\flat_{\mathcal O_{E^\flat}}] $, that the same is true for the strict henselizations of
${\mathcal A}'_{K^p}$. It follows that the
Zariski
closures of $Sh^{(j)}_{\bold K^\flat}$ in
${\mathcal A}'_{K^p}$ do not intersect; denote these (reduced) Zariski closures by ${\mathcal S}^{(j)}_{\bold
K^\flat}$. They are integral models
of the Shimura varieties $Sh^{(j)}_{\bold K^\flat}$. Then, for each $j$, the morphism
\begin{equation}\label{calSj}
\phi: {\mathcal S}^{(j)}_{\bold
K^\flat} \rightarrow [{\rm M}^{\rm loc} /{\mathcal P}^\flat_{\mathcal
O_{E^\flat}}],
\end{equation}
obtained by restricting $\phi$, is also smooth.
Hence, ${\rm M}^{\rm loc}$ is also a ``local model'' for the integral models
${\mathcal S}^{(j)}_{\bold K^\flat}$ of the Shimura varieties $Sh^{(j)}_{\bold K^\flat}$.
In most cases, the structure of ${\rm M}^{\rm loc}$ and therefore also the local structure
of ${\mathcal S}^{(j)}_{\bold K^\flat}$ can be understood using Theorem
\ref{thm01}. We will explain this below.
\begin{Remark}\label{sss8d5}
{\rm We can see as in \cite{KottJAMS} that, in the situation above, all the forms ${\bold G}^{(j)}$ that appear in the list
satisfy ${\bold G}^{(j)}(\breve{\mathbb Q}_p)\simeq {\bold G}({\breve{\mathbb Q}_p})$.
(a) Suppose now that, in addition, at least one
of the following two conditions is satisfied:
\begin{itemize}
\item[(p1)] the special fiber of $\P^\flat$ is connected, i.e $K^\flat_p$ is parahoric,
or
\item[(p2)] the sheaf of connected components $\P^\flat/(\P^\flat)^\circ$
is \'etale locally constant and its generic fiber is isomorphic
(via the natural map) to $G^\flat/(G^\flat)^\circ$.
\end{itemize}
Then, the arguments of \cite[ Lemma 7.2 and \S 8]{KottJAMS} extend
to show that all the groups ${\bold G}^{(j)}$ also satisfy ${\bold G}^{(j)}({\mathbb Q}_p)\simeq {\bold G}({\mathbb Q}_p)$.
(Indeed, then one can apply Lang's theorem as in the proof of Lemma
7.2 loc. cit. since the special fiber of $(\P^\flat)^\circ$ is
connected.)

(b) Assume that, in addition to (p1) or (p2), $\bold G$ satisfies the Hasse principle, i.e.
${\mathrm H}^1({\mathbb Q}, {\bold G})\to \prod_v{\mathrm H}^1({\mathbb Q}_v, {\bold G})$ is
injective. Then the argument of \cite[\S 8]{KottJAMS} shows that
there is only one Shimura variety for the group $\bold G^{(j)}=\bold
G$ that appears in the generic fiber of ${\mathcal A}_{K_p}$. In this case, we just
take ${\mathcal S}_{\bold K^\flat}={\mathcal A}'_{K^p}$.
}
\end{Remark}
\subsubsection{}\label{rem8c4} Here we explain how we can combine
the local model diagram (\ref{calSj}) with Theorem \ref{thm01} to give results
on integral models of PEL Shimura varieties. In particular, we show Theorem
\ref{thmPEL}.

a) The set-up simplifies drastically if $G^\flat$ is
connected, i.e. if $G=G^\flat$. This is the case when ${\bold G}$
does not have any orthogonal factors. Then ${\rm M}^{\rm loc}={\rm
M}(G, \{\mu\})_K$ ($\Gamma=\{1\}$) and $\xi=\iota$ is a closed immersion. Also, in
this case the derived group $G_{\rm der}$ is simply-connected. We
can then use Theorem \ref{thm01} and the above directly to obtain
information on the integral models
${\mathcal A}'_{K^p}$ and ${\mathcal S}^{(j)}_{\bold K^\flat}$. Assume in addition that $K^\flat_p$ is
parahoric, i.e. that the special fiber of $\P^\flat=\P'$ is
connected. Then the existence of the local model diagram morphism $\phi$ of (\ref{calSj})
together with the results in \S \ref{ss8d}, and in particular Theorem \ref{thm01}, implies, in a standard way, that the integral models ${\mathcal A}'_{K^p}$ and ${\mathcal S}^{(j)}_{\bold K^\flat}$ satisfy the conclusions of the statement of Theorem \ref{thmPEL}.
(Notice here that in the statement of Theorem \ref{thmPEL}
we refer somewhat ambiguously to the ``Shimura variety $Sh_{\bold K}$
defined by the PEL data $\mathfrak D$''. If the Hasse principle is not satisfied,
$Sh_{\bold K}$ is by definition the generic fiber of ${\mathcal A}'_{K^p}$, which is
a union of several Shimura varieties; by the above, ${\mathcal S}^{(j)}_{\bold K^\flat}$
give integral models as in
Theorem \ref{thmPEL} for each one of them.)

b) Continue to assume that $G=G^\flat$. In general, if ${\mathcal P}'={\mathcal P}^\flat$ is not connected, $K'_p/K_p$ is a non-trivial abelian group.
The above allows us to understand the
structure of integral models of Shimura varieties with $K'_p$-level structure.
With some extra work, one should be able to extend this and also produce
local model diagrams for Shimura
varieties with $K_p$-level structure. We will not
discuss this in this paper. See \cite[1.3]{PappasRaIII} for an example where a case of
$K_p$-level structure for the ramified unitary group is discussed.

c) Let us now consider the general case, i.e. allow $G\neq G^\flat$.
First of all, let us remark that the scheme ${\rm M}^{\rm loc}$ defined above coincides in
various special cases with the ``corrected" local model as
considered in \cite{PappasRaI}, \cite{PappasRaII},
\cite{PappasRaIII}, \cite{SmithOrth1}, \cite{PRS}. Indeed, first
assume that $G=G^\flat$, $E=E^\flat$ as above. Then, for both our
definition above and the definition in these references, ${\rm
M}^{\rm loc}$ is simply the Zariski closure of $G_E/P_\mu$ in ${\rm
M}^{\rm naive}$. Now let us allow $G\neq G^\flat$; such cases have not been
studied extensively before and one needs to work a bit harder. Some (even) orthogonal similitude group cases where
$G\neq G^\flat$ have been considered in \cite{PappasRaIII},
\cite{SmithOrth1} (see \cite{PRS}). In the split orthogonal case,
$E=E^\flat={\mathbb Q}_p$, and it makes sense to define the corrected local
model as the Zariski closure of $G^\flat/P_\mu=(G/P_{\mu_1})\sqcup
(G/P_{\mu_2})$ in ${\rm M}^{\rm naive}$; it follows from \cite[\S
8.2]{PappasRaIII} that this Zariski closure is the disjoint union of
the closures of $G/P_{\mu_1}$ and $G/P_{\mu_2}$. This coincides with
the description of ${\rm M}^{\rm loc}$ above and we have ${\rm
M}^{\rm loc}={\rm M}(G, \{\mu_1\})_K\sqcup {\rm M}(G, \{\mu_2\})_K$.
This local model
was studied in the Iwahori case by Smithling in \cite{SmithOrth1}.
In general, the suggested correction of the naive local model in this orthogonal case
\cite{PappasRaIII} involves the so-called ``spin condition'' whose
main virtue is that it also attempts to give a moduli-theoretic
interpretation. (The spin condition also makes sense in the
quasi-split even orthogonal case (\cite[\S 8.2]{PappasRaIII},
\cite[\S 2.7]{PRS}). We conjecture that the corresponding local
model always agrees with our definition above.)
As an example of a result we can obtain when $G\neq G^\flat$,
let us suppose that $G^\flat $ is a split even orthogonal similitude
group as above and that $\L$ gives a maximal self-dual lattice chain
(the Iwahori case considered in \cite{SmithOrth1}). Then the special
fiber of $\P^\flat$ is connected.
By (\ref{calSj}) and the above, the local model ${\rm
M}^{\rm loc}={\rm M}(G, \{\mu_1\})_K\sqcup {\rm M}(G, \{\mu_2\})_K$ describes the singularities of
corresponding integral models of ``orthogonal" PEL Shimura varieties.
Theorem \ref{thm01} then also implies that these integral models
are normal and have reduced special
fibers with geometric components which are normal and
Cohen-Macaulay. In the general case that $G\neq G^\flat$, one would need to study the quotient $N/\Gamma={\rm M}^{\rm loc}$
but we will leave this for another occasion.

d) For general Shimura varieties of abelian type, the existence of a
suitable integral model and a local model diagram as above is the
subject of joint work in progress \cite{K-P} by the first named
author and M. Kisin.
\medskip
\section{The special fibers of the local models}\label{ss8d}
Here we show our main results on the structure of the local models,
including Theorem \ref{thm01} of the introduction.
\subsection{Affine Schubert varieties and the $\mu$-admissible set}\label{8d1}
Let us review some aspects of the theory of Schubert varieties in the (generalized)
affine flag varieties, following
\cite{FaltingsLoops,PappasRaTwisted}.
In this section, we assume
that $F'=k((u))$, $\O'=k[[u]]$, with $k$ algebraically closed, and let $G'$ be a
connected reductive group over $F'$, split over a tamely ramified
extension $\tilde{F}'$ of $F'$.
\subsubsection{} Let $S'$
be a maximal $F'$-split torus of $G'$ and $T'=Z_{G'}(S')$ a maximal
torus. Let $I={\rm Gal}(\tilde{F}'/F')$. Let $x\in{\mathcal B}(G',F')$ be a
point in the building, which we assume lies in the apartment
${\mathcal A}(G',S', F')$ and let ${\mathcal P}'_{x}$ be the corresponding parahoric
group scheme over $k[[u]]$. Let ${\rm Gr}_{{\mathcal P}'_{x}}$ be the affine
Grassmannian. Then $L^+{\mathcal P}'_{x}$ acts on ${\rm Gr}_{{\mathcal P}'_{x}}$ via
left multiplication. The orbits then are parameterized by
$W^{{\mathcal P}'_{x}}\setminus\widetilde{W}'/W^{{\mathcal P}'_{x}}$, where
$\widetilde W'$ is the Iwahori-Weyl group of $G'$ and
$W^{{\mathcal P}'_{x}}$ is the Weyl group of ${\mathcal P}'_{x}\otimes k$. For
$w\in W^{{\mathcal P}'_{x}}\setminus\widetilde{W}'/W^{{\mathcal P}'_{x}}$, let
$S^{{\mathcal P}'_x}_w\subset{\rm Gr}_{{\mathcal P}'_x}$ denote the corresponding
Schubert variety, i.e. the closure of the $L^+{\mathcal P}'_x$-orbit
through $w$. Then, according to \cite{FaltingsLoops} and \cite[Thm.
8.4]{PappasRaTwisted}, if $\on{char} k \nmid
|\pi_1(G'_{\on{der}})|$, where $G'_{\on{der}}$ is the derived group
of $G'$, then $S^{{\mathcal P}'_x}_w$ is normal, has rational singularities
and is Frobenius-split if $\on{char} k>0$.
Let us also recall the structure of the Iwahori-Weyl group
$\widetilde W'$ of $G'$. It is defined via the exact sequence
\begin{equation}\label{IwahoriWeyl}
1\to T'(\O')\to N'(F')\to \widetilde W'\to 1
\end{equation}
(where $T'(\O')$ is the group of $\O'$-valued points
of the unique Iwahori group scheme for the torus $T'$) and acts on
${\mathcal A}(G',S', F')$ via affine transformations. In addition, there is a
short exact sequence
\begin{equation}\label{Iwahori-Weyl}
1\to \mathbb{X}_\bullet(T')_{I}\to\widetilde{W}'\to {W'_0}\to 1,
\end{equation}
where ${W'_0}=N'(F')/T'(F')$ is the relative Weyl group of $G'$ over
$F'$. In what follows, we use $t_{\lambda}$ to denote the translation
element in $\widetilde{W}'$ given by ${\lambda}\in\mathbb{X}_\bullet(T')_I$ from the
above map \eqref{Iwahori-Weyl}\footnote{\label{fn}Note that under
the sign convention of the Kottwitz homomorphism in
\cite{KottIsocrystalsII}, $t_{\lambda}$ acts on ${\mathcal A}(G',S', F')$ by
$v\mapsto v-{\lambda}$.}. But occasionally, if no confusion is likely to arise,
we will also use ${\lambda}$ itself to
denote this translation element.
A choice of a special vertex $v$ of ${\mathcal A}(G',S',F')$ gives a splitting
of the above exact sequence and then we can write $w=t_{\lambda} w_f$ for
${\lambda}\in \mathbb{X}_\bullet(T')_I$ and $w_f\in {W'_0}$.
Let us choose a rational Borel subgroup $B'$ of $G'$ containing $T'$. This
determines a set of positive roots $\Phi^+=\Phi(G',S')^+$ for $G'$.
There is a natural map $\mathbb{X}_\bullet(T')_I\to\mathbb{X}_\bullet(S')_{\mathbb R}$. We define
\begin{equation}\label{plus}
\mathbb{X}_\bullet(T')_I^+=\{{\lambda}\mid ({\lambda},a)\geq 0 \mbox{ for all } a\in\Phi^+\}.
\end{equation}
Observe that the chosen special vertex $v$ and the rational Borel
$B'$ determine a unique alcove $C$ in ${\mathcal A}(G',S',F')$. Namely, we
identify ${\mathcal A}(G',S',F')$ with $\mathbb{X}_\bullet(S')_{\mathbb R}$ by $v$ and then $C$
is the unique alcove whose closure contains $v$, and is contained in
the finite Weyl chamber determined by $B'$.
Let $W'_{\on{aff}}$ be the affine Weyl group of $G'$, i.e. the
Iwahori-Weyl group of $G'_{\on{sc}}$, the simply-connected cover of
$G'_{\on{der}}$. This is a Coxeter group. One has
\begin{equation}\label{affine Weyl}
1 \to \mathbb{X}_\bullet(T'_{\on{sc}})_{I}\to\ W'_{\on{aff}}\to {W'_0}\to 1,
\end{equation}
where $T'_{\on{sc}}$ is the inverse image of $T'$ in $G'_{\on{sc}}$.
One can write $\widetilde{W}'=W'_{\on{aff}}\rtimes \Omega'$, where
$\Omega'$ is the subgroup of $\widetilde{W}'$ that fixes the chosen
alcove $C$. This gives $\widetilde{W}'$ a quasi-Coxeter group
structure; it makes sense to talk about the length of an element
$w\in\widetilde{W}'$ and there is a Bruhat order on
$\widetilde{W}'$. Namely, if we write $w_1=w'_1\tau_1,
w_2=w'_2\tau_2$ with $w'_i\in W'_{\on{aff}}, \tau_i\in\Omega'$, then
$\ell(w_i)=\ell(w'_i)$ and $w_1\leq w_2$ if and only if
$\tau_1=\tau_2$ and $w'_1\leq w'_2$.
\subsubsection{} Now let us recall the definition of the \emph{$\mu$-admissible set}
in the Iwahori-Weyl group (cf. \cite{KoRaMinuscule}, see also
\cite{PRS}). We continue with the above notations. Let $\bar{W}'$ be the absolute Weyl group of $G'$, i.e.
the Weyl group for $(G'_{\tilde{F'}},T'_{\tilde{F'}})$. Let
$\mu:({\mathbb G}_m)_{\tilde{F'}}\to G'\otimes_{F'}\tilde{F'}$ be a
geometric conjugacy class of 1-parameter subgroups. It determines
a $\bar{W}'$-orbit in $\mathbb{X}_\bullet(T')$. One can associate to $\mu$ a
${W'_0}$-orbit $\Lambda$ in $\mathbb{X}_\bullet(T')_{I}$ as follows. Choose a
Borel subgroup of $G'$ that contains $T'$ and is defined over $F'$. This
gives a unique element in this $\bar{W}'$-orbit, still denoted by
$\mu$, which is dominant with respect to this Borel subgroup. Let
$\bar{\mu}$ be its image in $\mathbb{X}_\bullet(T')_I$, and let
$\Lambda={W'_0}\bar{\mu}$. It turns out that $\Lambda$ does not
depend on the choice of the rational Borel subgroup of $G'$, since
any two such $F'$-rational Borel subgroups that contain $T'$ are
conjugate to each other by an element in ${W'_0}$. For
$\mu\in\mathbb{X}_\bullet(T')$, define the admissible set
\begin{equation}\label{Adm}
{\rm Adm}(\mu)=\{w\in\widetilde{W}'\mid w\leq t_{\lambda}, \mbox{ for some }
{\lambda}\in\Lambda\},
\end{equation}
and more generally,
\begin{equation}\label{Adm-parahoric}
{\rm Adm}^{{\mathcal P}'_x}(\mu)=W^{{\mathcal P}'_x}{\rm Adm}(\mu)W^{{\mathcal P}'_x}.
\end{equation}
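As a simple orienting example (not taken from the surrounding discussion and not used in the sequel): for $G'={\rm GL}_2$ and the minuscule $\mu=(1,0)$, we have $\Lambda=\{(1,0),(0,1)\}$ and
\begin{equation*}
{\rm Adm}(\mu)=\{t_{(1,0)},\ t_{(0,1)},\ \tau\},
\end{equation*}
where $\tau$ is the unique length-zero element of $\widetilde{W}'$ with $t_{(1,0)}\in W'_{\on{aff}}\,\tau$. The two translations are the extreme elements of ${\rm Adm}(\mu)$ in the Bruhat order; this is the classical Drinfeld case, in which the special fiber of the corresponding Iwahori local model has two irreducible components.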
If $\on{char} k\nmid |\pi_1(G'_{\on{der}})|$, let us define a
reduced closed subvariety of ${\rm Gr}_{{\mathcal P}'_x}$ whose underlying set
is given by
\[{\mathcal A}^{{\mathcal P}'_x}(\mu)=\bigcup_{w\in {\rm Adm}(\mu)} L^+{\mathcal P}'_x w L^+{\mathcal P}'_x/L^+{\mathcal P}'_x=\bigcup_{w\in W^{{\mathcal P}'_x}\setminus {\rm Adm}^{{\mathcal P}'_x}(\mu)/W^{{\mathcal P}'_x}}S^{{\mathcal P}'_x}_w.\]
In general, we can define a slightly different variety
${\mathcal A}^{{\mathcal P}'_x}(\mu)^{\circ}$ as in \cite[Sect.
10]{PappasRaTwisted} (see also \cite[Sect. 2.2]{ZhuCoherence}),
which is isomorphic to ${\mathcal A}^{{\mathcal P}'_x}(\mu)$ when $\on{char}
k\nmid |\pi_1(G'_{\on{der}})|$. (Here, we write
$|\pi_1(G'_{\on{der}})|$ for the order of the algebraic fundamental
group of $G'_{\on{der}}(\overline {k((u))}^s)$). In fact, when
$\on{char} k\nmid |\pi_1(G'_{\on{der}})|$,
${\mathcal A}^{{\mathcal P}'_x}(\mu)^{\circ}$ is the translation of
${\mathcal A}^{{\mathcal P}'_x}(\mu)$ back to the ``neutral connected component''
of ${\rm Gr}_{{\mathcal P}_x'}$ by a certain element of $G'(k((u)))$. The
variety ${\mathcal A}^{{\mathcal P}'_x}(\mu)^\circ$ depends only on $G'_{\on{ad}}$
and the image of $\mu$ under $G'\to G'_{\on{ad}}$. From its
definition (\emph{loc. cit.}), if we have the decomposition
$G'=G'_1\times G'_2$, $\mu=\mu_1+\mu_2$ and
${\mathcal P}'_x=({\mathcal P}'_x)_1\times({\mathcal P}'_x)_2$, then
\[{\mathcal A}^{{\mathcal P}'_x}(\mu)^\circ\simeq {\mathcal A}^{({\mathcal P}'_x)_1}(\mu_1)^\circ\times {\mathcal A}^{({\mathcal P}'_x)_2}(\mu_2)^\circ.\]
\subsection{Special fibers}\label{8d2}
Let us return to the local models $M_{{\mathcal G}, \mu}$ and show Theorem
\ref{thm01} of the introduction. We will assume throughout that
$p\nmid|\pi_1(G_{\on{der}})|$.
\subsubsection{} We denote by $\overline M_{{\mathcal G}, \mu}=M_{{\mathcal G}, \mu}\otimes_{\O_E}k_E$
the special fiber of $M_{{\mathcal G},\mu}\to {\rm Spec } (\O_E)$ over the residue
field $k_E$ of $\O_E$.
\begin{thm}\label{CMfiber}
Suppose that $p\nmid|\pi_1(G_{\on{der}})|$. Then the scheme $M_{{\mathcal G},\mu}$ is normal.
In addition, the special fiber
$\overline M_{{\mathcal G}, \mu}$ is reduced, and each geometric irreducible
component of $\overline M_{{\mathcal G}, \mu}$ is normal, Cohen-Macaulay
and Frobenius split.
\end{thm}
Notice that the set-up now is more general than in Theorem \ref{thm01},
since here $\mu$ is not necessarily minuscule.
Recall that by its construction, $\overline M_{{\mathcal G}, \mu}$ is a
closed subscheme of ${\rm Gr}_{P_{k_E}}$. Clearly, it is enough to prove
the theorem after base changing to $\breve{\O}_E$ with residue field
$\bar k$. Then the second part of Theorem \ref{CMfiber} that refers to the special fiber
$\overline M_{{\mathcal G}, \mu}$ is a corollary of the
aforementioned results of \cite{FaltingsLoops, PappasRaTwisted}
combined with Theorem \ref{special fiber} below which
gives a precise description of the geometric special fiber
$\overline M_{{\mathcal G}, \mu}\otimes_{k_E}\bar k$
as a union of affine Schubert varieties. We will first explain
how this second part also implies the first part, i.e. the normality
of $M_{{\mathcal G},\mu}$. This follows from Proposition \ref{normalgen} below
together with the fact that the generic fiber $M_{{\mathcal G},\mu}\otimes_\O F$ is normal
(since it is given by the single affine Schubert variety $X_\mu$ associated to $\mu$).
\begin{prop}\label{normalgen}
Suppose that $Y\to {\rm Spec } (\O)$ is a flat scheme of finite type
with normal generic fiber and reduced special fiber. Then
$Y$ is normal.
\end{prop}
\begin{proof} By Serre's criterion, it is enough to check
that $Y$ satisfies properties (R1) and (S2). By assumption, these properties are satisfied
by the generic fiber $Y\otimes_\O F$. Since the special fiber $\overline Y=Y\otimes_\O k$
is a reduced scheme of finite type and $k$ is perfect, $\overline Y$ is generically smooth over $k$.
It follows that $Y$ is regular in codimension $1$, i.e. it satisfies (R1). It remains to show that
$Y$ has depth $\geq 2$ at points of codimension $\geq 2$ which are supported on the special fiber. Since $Y$ is flat over ${\rm Spec } (\O)$,
the uniformizer $\varpi$ provides the start of a regular sequence; we can always obtain one additional element in this regular sequence
since $\overline Y$ is reduced and hence it satisfies (S1).
\end{proof}
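As a simple orienting example (not needed in what follows): for $Y={\rm Spec } (\O[x,y]/(xy-\varpi))$, the scheme $Y$ is flat over ${\rm Spec } (\O)$, its generic fiber $xy=\varpi$ is smooth, hence normal, and its special fiber $xy=0$ is reduced; the proposition then gives that $Y$ is normal, consistent with the fact that this particular $Y$ is even regular.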
\subsubsection{} Here we show the second part of Theorem \ref{CMfiber}.
In what follows, the notations are as in \S
\ref{kappafibers}. In particular, we have $P_{\bar k}={\mathcal P}_{x_{\bar
k((u))}}$; this is a parahoric group scheme which is obtained by
base-changing ${\mathcal G}$ to $\bar k[[u]]$. (The corresponding reductive
group is $G'=G_{\bar k}={\mathcal G}\times_X {\rm Spec } (\bar k((u)))$.) Recall
that our construction of $\underline G$ over $\breve {\O}[u,
u^{-1}]$ in \S \ref{reductive group} produces an isomorphism between
the Iwahori-Weyl groups $\widetilde W$ of $G\otimes_F\breve{F}$ and
$\widetilde W'$ of $G'$. The constructions in \S \ref{8d1} above can
also be applied to $G\otimes_F\breve F$ and $\{\mu\}$ to produce a
$W_0$-orbit $\Lambda$ in $\mathbb{X}_\bullet(T)_I\subset \widetilde W$ and a
subset ${\rm Adm}(\mu)\subset \widetilde W$. Using this
identification, we will view $\Lambda$ and ${\rm Adm}(\mu)$ also as,
respectively, a $W'_0$-orbit in $\mathbb{X}_\bullet(T')_I\subset \widetilde W'$, and a
subset ${\rm Adm}(\mu)\subset \widetilde W'$.
\begin{thm}\label{special fiber}
Suppose that $p\nmid|\pi_1(G_{\on{der}})|$. Then we have
\begin{equation*}
{\mathcal A}^{P_{\bar k}}(\mu)=\overline M_{{\mathcal G}, \mu}\otimes_{k_E}\bar k
\end{equation*}
as closed subschemes of ${\rm Gr}_{P_{\bar k}}$.
\end{thm}
\begin{cor}\label{specialCM}
Suppose that $p\nmid|\pi_1(G_{\on{der}})|$ and that
$x$ is special in ${\mathcal B}(G_{\breve F}, \breve F)$. Then the
scheme $M_{{\mathcal G}, \mu}$ is Cohen-Macaulay and normal
and the special fiber $\overline M_{{\mathcal G}, \mu}$ is geometrically
irreducible and normal.
\end{cor}
\begin{proof}
It is enough to show that the geometric special fiber $\overline
M_{{\mathcal G}, \mu}\otimes_{k_E}\bar k$ is irreducible. Indeed, then by
Theorem \ref{CMfiber}, the special fiber $\overline M_{{\mathcal G}, \mu}$
is Cohen-Macaulay and we can conclude that
$M_{{\mathcal G}, \mu}$ is also Cohen-Macaulay. Normality follows as above.
The irreducibility of $\overline M_{{\mathcal G},
\mu}\otimes_{k_E}\bar k$ follows from Theorem \ref{special fiber}:
Indeed,
when $x$ is special, $W^{{\mathcal P}'_x}=W_0'$ and
$ W^{{\mathcal P}'_x}\setminus {\rm Adm}^{{\mathcal P}'_x}(\mu)/W^{{\mathcal P}'_x}$ has only
one extreme element in the Bruhat order, namely the image of $t_\mu$
(cf. \cite[Prop. 6.15]{ZhuCoherence}).
\end{proof}
\begin{Remark}\label{normalCM}
{\rm a) Some special cases of Theorems \ref{CMfiber} and
\ref{special fiber} were known before (e.g. \cite{GortzFlatGLn},
\cite{GortzSymplectic} for ${\rm GL}_n$ and ${\rm GSp}_{2n}$).
See \cite{PRS} for a survey of these previous results.
A description of the components of the special fiber of local models
in terms of the $\mu$-admissible set as in Theorem \ref{special
fiber} was first suggested by Kottwitz and Rapoport
in \cite{KoRaMinuscule}.

b) We conjecture that when $p\nmid|\pi_1(G_{\on{der}})|$, $M_{{\mathcal G}, \mu}$
is always Cohen-Macaulay, even if $x$ is not special.
A proof of this conjecture in the split case, which uses our results
above, has recently been announced by He. }
\end{Remark}
\smallskip
Now let us discuss the proof of Theorem \ref{special fiber}.
\begin{proof}
We can assume that $F=\breve{F}$ and the residue field $k=\bar{k}$
is algebraically closed; then we are trying to show
${\mathcal A}^{P_k}(\mu)=\overline M_{{\mathcal G}, \mu}$. The proof is divided into two
parts. The first part is to exhibit ${\mathcal A}^{P_k}(\mu)$ as a closed
subscheme of $\overline M_{{\mathcal G}, \mu}$. Then we apply the coherence
conjecture of Rapoport and the first author (\cite{PappasRaTwisted})
shown in \cite{ZhuCoherence} to deduce the theorem.
\begin{prop}\label{Prop8.8}
The scheme ${\mathcal A}^{P_k}(\mu)$ is naturally a closed subscheme of $\overline M_{{\mathcal G}, \mu}$.
\end{prop}
\begin{proof}
Let ${\lambda}\in\Lambda$, where $\Lambda\subset\mathbb{X}_\bullet(T)_I$ is the
$W_0$-orbit associated to $\mu$ as before. Let
$t_{\lambda}\in\widetilde{W}$ be the corresponding element in the
Iwahori-Weyl group. These elements then are extreme elements in
${\rm Adm}(\mu)$ under the Bruhat order of $\widetilde{W}$. To prove the
proposition, it is then enough to show that $S^{{\mathcal P}_k}_{t_{\lambda}}\subset
\overline M_{{\mathcal G}, \mu}$ for any ${\lambda}\in\Lambda$. This turns out to
be a direct consequence of the following two lemmas.
\begin{lemma}\label{groupaction} The scheme $\overline M_{{\mathcal G}, \mu}$ is
invariant under the action of $L^+P_k$ on ${\rm Gr}_{P_k}$.
\end{lemma}
\begin{proof} Recall from \S \ref{6b4}
that $\L^+{\mathcal G}$ acts on ${\rm Gr}_{{\mathcal G},X}$ naturally. Now let $x:{\rm Spec }
(\kappa)\to X$ be a point where $\kappa$ is either the residue field
of $\O$ or the fraction field of $\O$. Then by definition
$\L^+{\mathcal G}\times_X{\rm Spec } (\kappa)\simeq L^+{\mathcal G}_{\kappa,x}$, and the
induced action of $\L^+{\mathcal G}\times_X{\rm Spec } (\kappa)$ on
${\rm Gr}_{{\mathcal G},X}\times_X{\rm Spec } ( \kappa)$ is just the local action of
$L^+{\mathcal G}_{\kappa,x}$ on ${\rm Gr}_{{\mathcal G}_{\kappa,x}}$. Let
$(\L^+{\mathcal G})_\O:=\L^+{\mathcal G}\times_X{\rm Spec } ( \O)$ where the section ${\rm Spec }
(\O)\to X$ is given by $u\mapsto \varpi$ as before. Then
$(\L^+{\mathcal G})_\O$ acts on ${\rm Gr}_{{\mathcal G},\O}$. Over $E$, this is just the
action of $(L^+{\mathcal G}_{F,\varpi})_E$ on
$({\rm Gr}_{{\mathcal G}_{F,\varpi}})_E={\rm Gr}_{{\mathcal G}_{F,\varpi}}\times_{{\rm Spec } (F)}{\rm Spec } (E)$.
Since $\overline M_{{\mathcal G}, \mu}$ is defined to be the Zariski closure
of $(L^+{\mathcal G}_{F,\varpi})_E\cdot [s_\mu]$, the special fiber carries
the natural action of $L^+P_k=L^+{\mathcal G}_{k,0}$. \end{proof}
\smallskip
To state the second Lemma, recall that ${\rm Gr}_{{\mathcal G},\O}$ is ind-proper
(Proposition \ref{indproper}). For ${\lambda}\in\Lambda$, let
$\tilde{{\lambda}}\in\bar{W}\mu$ be a lift of ${\lambda}$. The $E$-point
$[s_{\tilde{{\lambda}}}]$ of ${\rm Gr}_{{\mathcal G},\O}$ (cf. \ref{coch}) gives rise to
a unique $\O_E$-section of ${\rm Gr}_{{\mathcal G},\O}$, still denoted by
$[s_{\tilde{{\lambda}}}]$. Let $0$ be the closed point of ${\rm Spec } (\O_E)$.
\begin{lemma}\label{sla}
We have $[s_{\tilde{{\lambda}}}](0)=t_{\lambda}$.
\end{lemma}
\begin{proof}
Let $\O[u]\to\O[v]$ be the map given by
$u\mapsto v^e$. Let
\[{\mathcal T}={\rm Res}_{\O[v]/\O[u]}(T_H\otimes\O[v])^\gamma,\]
and ${\mathcal T}^0$ be the neutral connected component of ${\mathcal T}$. Then by
construction of the group scheme ${\mathcal G}$ in Theorem
\ref{grpschemeThm}, ${\mathcal T}^0$ is a subgroup of ${\mathcal G}$. Let $\L{\mathcal T}$,
$\L{\mathcal T}^0$ be the global loop groups over $X=\AA^1_\O$, whose
definitions are similar to \eqref{globloop1}. Exactly as in the proof of \cite[Prop. 3.5]{ZhuCoherence}
we see that
a cocharacter $\nu$ of $G$ defined over $E$ gives rise
to a map
\[s_{\nu,\O_E}:\O_E\to \L{\mathcal T}^0\to \L{\mathcal G},\]
such that: (i) the restriction of this map to
$E\to\L{\mathcal T}^0\to\L{\mathcal G}$ is the point $s_\nu$ as in \S \ref{coch};
and (ii) $s_{\nu,\O_E}(0)\in \L{\mathcal T}^0(k)=T(k((u)))$ maps to
$t_\nu$ under the Kottwitz homomorphism $T(k((u)))\to \mathbb{X}_\bullet(T)_I$.
Clearly, these two statements together imply the lemma.
\end{proof}
This now concludes the proof of Proposition \ref{Prop8.8}.
\end{proof}
\smallskip
Now we proceed with the second part of the proof of the theorem. To
apply the coherence conjecture, we need to construct a natural line
bundle on ${\rm Gr}_{{\mathcal G},X}$. The construction is parallel to \cite[Sect.
4]{ZhuCoherence} where we also refer the reader for more
information. Let ${\mathcal V}_0=\on{Lie}{\mathcal G}$ be the Lie algebra of ${\mathcal G}$.
By \cite{SeshadriPNAS}, this is a free $\O[u]$-module of rank
$\dim_F G$. Then the adjoint representation
$\on{Ad}:{\mathcal G}\to\on{GL}({\mathcal V}_0)$ gives rise to a morphism
\[\on{ad}: {\rm Gr}_{{\mathcal G},X}\to{\rm Gr}_{\on{GL}({\mathcal V}_0),X}.\]
Over ${\rm Gr}_{\on{GL}({\mathcal V}_0),X}$, we have the determinant line bundle
${\mathcal L}_{\det}$, defined as usual (for example, see \cite[Sect.
4]{ZhuCoherence}). Let ${\mathcal L}={\mathcal L}_{2c}=\on{ad}^*({\mathcal L}_{\det})$ be
the corresponding line bundle on ${\rm Gr}_{{\mathcal G},X}$, which is ample. Let
us denote by ${\mathcal L}_{ k}$ the restriction of ${\mathcal L} $ to the
special fiber ${\rm Gr}_{{\mathcal G},\O}\otimes_\O k\simeq {\rm Gr}_{P_k}$, and let
${\mathcal L}_{ \bar{F}}$ be the restriction of ${\mathcal L} $ to the geometric
generic fiber
${\rm Gr}_{{\mathcal G},\O}\otimes_\O\bar{F}\simeq{\rm Gr}_{{\mathcal G}_{F,\varpi}}\otimes_F\bar{F}\simeq{\rm Gr}_H\otimes_F\bar{F}$.
Let $M_{{\mathcal G}, \mu, \bar F}$ be the geometric generic fiber of
$M_{{\mathcal G}, \mu}$. Then for $n\gg 0$,
\[\dim_{\bar{F}}\Ga(M_{{\mathcal G}, \mu, \bar F}, {\mathcal L}_{ \bar{F}}^{\otimes n})=\dim_{k}\Ga(\overline M_{{\mathcal G}, \mu }, {\mathcal L}_{ k}^{\otimes n})\geq \dim_k\Ga({\mathcal A}^{P_k}(\mu), {\mathcal L}_{ k}^{\otimes n}),
\]
and the equality holds for $n\gg 0$ if and only if
${\mathcal A}^{P_k}(\mu)=\overline M_{{\mathcal G}, \mu}$. Now the coherence
conjecture of \cite{PappasRaTwisted} as proved in
\cite{ZhuCoherence} implies that
\begin{equation}\label{cohconj}
\dim_{\bar{F}}\Ga(M_{{\mathcal G}, \mu ,\bar{F}}, {\mathcal L}_{ \bar{F}}^{\otimes
n})= \dim_k\Ga({\mathcal A}^{P_k}(\mu), {\mathcal L}_{ k}^{\otimes n})
\end{equation}
for any $n$ and this is enough to conclude the proof. For clarity,
we explain here how to deduce the equality (\ref{cohconj}) from the
results of \cite{ZhuCoherence}. First we assume that $G_{\on{der}}$
is simply-connected. Then we can write
$G_{\on{der}}=\on{Res}_{F_1/F}G_1\times\cdots \times
\on{Res}_{F_m/F}G_m$ as the decomposition into simple factors, where
$G_i$ are almost simple, absolutely simple and simply-connected
groups defined over $F_i$. Let $\mu_i$ be the corresponding
conjugacy classes of cocharacters for ${\rm Res}_{F_i/F}(G_i)_{\on{ad}}$,
where $(G_i)_{\on{ad}}$ is the adjoint group of $G_i$, and let
$(P_k)_i$ be the corresponding parahoric group scheme over $F_i$.
Then
\[{\mathcal A}^{P_k}(\mu)\simeq {\mathcal A}^{P_k}(\mu)^{\circ} \simeq \prod_i {\mathcal A}^{(P_k)_i}(\mu_i)^{\circ}\]
and under this isomorphism, ${\mathcal L}_{ k}\simeq {\mathcal L}_1\boxtimes
\cdots \boxtimes {\mathcal L}_m$, where ${\mathcal L}_i$ is ample on
${\mathcal A}^{(P_k)_i}(\mu_i)^{\circ}$ with central charge $2h^\vee_i$.
Here $h^\vee_i$ is the dual Coxeter number for
$G_i\otimes_{F_i}\bar{F}$, and the central charge is defined in
\cite[Sect. 10]{PappasRaTwisted} (also see \cite[Sect.
2.2]{ZhuCoherence}). On the other hand, $H_{\on{der}}=\prod
H_i^{m_i}$, where $H_i$ is the split form for $G_i$ and
$m_i=[F_i:F]$. Then $\mu_i=\mu_{i,1}+\cdots+\mu_{i,m_i}$, where the
$\mu_{i,j}$ are dominant coweights of $(H_i)_{\on{ad}}$. As usual, if
$\mu$ is a coweight of a split group $M$, we denote by
$\overline{{\rm Gr}}_{M,\mu}$ the closure of the corresponding
$L^+M$-orbit in ${\rm Gr}_{M}$, which is indeed the same as ${\mathcal A}^{M\otimes
k[[u]]}(\mu)$. We similarly denote ${\mathcal A}^{M\otimes
k[[u]]}(\mu)^\circ$ by $\overline{{\rm Gr}}_{M,\mu}^\circ$. We have
\[
M_{{\mathcal G}, \mu ,\bar{F}}\simeq \overline{{\rm Gr}}_{H,\mu}\simeq
\overline{{\rm Gr}}^\circ_{H_{\on{ad}},\mu}\simeq \prod
\overline{{\rm Gr}}^\circ_{(H_i)_{\on{ad}},\mu_{i,j}},
\]
and under this isomorphism, ${\mathcal L}_{\bar{F}}\simeq
\boxtimes_i({\mathcal L}_{b,i}^{\otimes 2h^\vee_i})^{\boxtimes m_i}$, where
${\mathcal L}_{b,i}$ is the ample generator of the Picard group of each
connected component of ${\rm Gr}_{(H_i)_{\on{ad}}}$ (which is isomorphic
to ${\rm Gr}_{H_i}$). Now by \cite[Theorem 2, Proposition
6.2]{ZhuCoherence}, we have
\[
\dim_k\Ga({\mathcal A}^{(P_k)_i}(\mu_i)^{\circ}, {\mathcal L}_i^{\otimes
n})=\prod_{j=1}^{m_i}\dim_{\bar{F}}\Ga(\overline{{\rm Gr}}_{(H_i)_{\on{ad}},\mu_{i,j}}^\circ,
{\mathcal L}_{b,i}^{\otimes 2nh_i^\vee}).
\]
Combining this equality with the above gives (\ref{cohconj}) in this
case.
Next, we consider the general case when ${\rm
char}(k)\nmid|\pi_1(G_{\on{der}})|$. Let
\[1\to S\to \tilde{G}\to G\to 1\]
be a $z$-extension of $G$, i.e. $S$ is an induced central torus, and
$\tilde{G}_{\on{der}}$ is simply-connected. We can further assume
that $\tilde{G}$ is also tamely ramified. Then $\mu$ can be lifted
to a geometric conjugacy class $\tilde{\mu}$ of $\tilde{G}$. Let
us apply our construction to $\tilde{G}$ and the corresponding parahoric group
scheme to obtain the global
affine Grassmannian ${\rm Gr}_{\tilde{{\mathcal G}},\O}$ over $\O$. We can also
define $M_{\tilde{{\mathcal G}},\tilde{\mu}}$. Observe that over $\bar{F}$,
we have the natural map
${\rm Gr}_{\tilde{{\mathcal G}},\bar{F}}\to{\rm Gr}_{{\mathcal G},\bar{F}}$. Under this map, the
line bundle ${\mathcal L}$ on ${\rm Gr}_{{\mathcal G},\bar{F}}$ pulls back to the
corresponding line bundle on ${\rm Gr}_{\tilde{{\mathcal G}},\bar{F}}$ since the
adjoint representation of $\tilde{G}$ factors through $G$. In
addition, $M_{\tilde{{\mathcal G}},\tilde{\mu},\bar{F}}$ maps isomorphically
to $M_{{\mathcal G},\mu,\bar{F}}$. Therefore,
\[\dim_{\bar{F}}\Ga(M_{{\mathcal G}, \mu, \bar F}, {\mathcal L}_{ \bar{F}}^{\otimes n})=\dim_{\bar{F}}\Ga(M_{\tilde{{\mathcal G}}, \tilde{\mu}, \bar F}, {\mathcal L}_{ \bar{F}}^{\otimes n}).\]
Likewise, we have ${\rm Gr}_{\tilde{P}_k}\to{\rm Gr}_{P_k}$ which maps
${\mathcal A}^{\tilde{P}_k}(\tilde{\mu})$ isomorphically to ${\mathcal A}^{P_k}(\mu)$
(cf. \cite[Sect. 6]{PappasRaTwisted}); the corresponding line
bundles ${\mathcal L}_{ k}$ are compatible with pull-back under this map. Then
\[\dim_k\Ga({\mathcal A}^{P_k}(\mu),
{\mathcal L}_{ k}^{\otimes n})=\dim_k\Ga({\mathcal A}^{\tilde{P}_k}(\tilde{\mu}),
{\mathcal L}_{ k}^{\otimes n}).\] This allows us to reduce to the previous
case.
\end{proof}
\bigskip
\section{Nearby cycles and the conjecture of Kottwitz}
\setcounter{equation}{0}
In this chapter, we study the sheaves of nearby cycles of the local
models. We extend work of Gaitsgory and Haines-Ng\^o (see Theorem
\ref{comm constraints}) and among other results, we show Theorems
\ref{thm02} (Kottwitz's conjecture) and \ref{thm03} of the
introduction.
\subsection{The nearby cycles}
\subsubsection{}\label{review of nearby cycles}
We begin by briefly recalling some general facts. (For more
details, see for example \cite{IllusieMonodromy},
\cite{GortzHainesCrelle}.) Let $(S,s,\eta)$ be a Henselian trait,
i.e. $S$ is the spectrum of a Henselian discrete valuation ring, $s$
is the closed point of $S$ with residue field $k(s)$, and $\eta$ is
the generic point of $S$. For our purposes, we will always assume
that $k(s)$ is either finite or algebraically closed. Let
$\bar{\eta}$ be a geometric point over $\eta$ and $\bar{S}$ be the
normalization of $S$ in $\bar{\eta}$. Let $\bar{s}$ be the closed
point of $\bar{S}$. Let $I={\rm
ker}({\rm Gal}(k(\bar{\eta})/k(\eta))\to{\rm Gal}(k(\bar{s})/k(s)))$ denote
the inertia group.
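For orientation, let us recall the standard structure of $I$ in the case of interest, namely residue characteristic $p>0$ (this is classical and is quoted only for the reader's convenience): there is an exact sequence
\[1\to I_w\to I\to I_t\to 1,\]
where $I_w$ is the wild inertia subgroup, a pro-$p$-group, and the tame quotient satisfies $I_t\simeq \prod_{\ell'\neq p}{\mathbb Z}_{\ell'}(1)$; in particular, $I$ acts on tamely ramified covers through the quotient $I_t$.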
Let $\ell$ be a prime invertible in $\O_S$. For a scheme $X$,
(separated) and of finite type over $s$, there is the natural
category $\on{Sh}_c(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ whose
objects are constructible $\overline{{\mathbb Q}}_\ell$-sheaves on
$X_{\bar{s}}$, together with continuous actions of
${\rm Gal}(k(\bar{\eta})/k(\eta))$, compatible with the action of
${\rm Gal}(k(\bar{\eta})/k(\eta))$ on $X_{\bar{s}}$ via
${\rm Gal}(k(\bar{\eta})/k(\eta))\to{\rm Gal}(k(\bar{s})/k(s))$ (see \cite[XIII,
1.2.4]{SGA7I-II}). The natural functor
$\on{Sh}_c(X)\to\on{Sh}_c(X\times_s\eta)$ is a full embedding with
essential image consisting of objects on which the inertia $I$ acts
trivially (\cite[XIII, 1.1.3]{SGA7I-II}). The ``bounded derived'' category of
$\on{Sh}_c(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ is denoted by
$\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$\footnote{As usual,
this category is not the real derived category of
$\on{Sh}_c(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$, but is defined via
a limit process. See \cite[Footnote 2]{HainesNgoNearby}.}. The usual
perverse $t$-structure on
$\on{D}_c^b(X_{\bar{s}},\overline{{\mathbb Q}}_\ell)$ is naturally lifted
to $\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$, and we have the
corresponding category of perverse sheaves
$\on{Perv}(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$. The natural functor
$\on{D}_c^b(X,\overline{{\mathbb Q}}_\ell)\to
\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ is a full embedding,
and its essential image consists of objects in
$\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ on which $I$ acts
trivially.
Recall that if $p:{\mathfrak X}\to S$ is a morphism, where ${\mathfrak X}$ is a
scheme, (separated) and of finite type over $S$, there is the
so-called nearby cycle functor
\[{\rm R}\Psi^{\mathfrak X}: \on{D}_c^b({\mathfrak X}_\eta,\overline{{\mathbb Q}}_\ell)\to \on{D}_c^b({\mathfrak X}_s\times_s\eta,\overline{{\mathbb Q}}_\ell),\]
which restricts to an exact functor (\cite[Sect.
4]{IllusieMonodromy})
\[{\rm R}\Psi^{\mathfrak X}: \on{Perv}({\mathfrak X}_\eta,\overline{{\mathbb Q}}_\ell)\to \on{Perv}({\mathfrak X}_s\times_s\eta,\overline{{\mathbb Q}}_\ell).\]
Let $f:{\mathfrak X}\to{\mathfrak Y}$ be a morphism over $S$. There is a
canonical natural transformation $f_!R\Psi^{{\mathfrak X}}\to R\Psi^{{\mathfrak Y}}
f_!$, which is an isomorphism if $f$ is proper. In addition, there is
a canonical natural transformation $f^*R\Psi^{{\mathfrak Y}}\to
R\Psi^{{\mathfrak X}} f^*$, which is an isomorphism if $f$ is smooth.
We will also occasionally use the vanishing cycle functor
(\cite[XII]{SGA7I-II})
\[{\rm R}\Phi^{\mathfrak X}: \on{D}_c^b({\mathfrak X},\overline{{\mathbb Q}}_\ell)\to \on{D}_c^b({\mathfrak X}_s\times_s\eta,\overline{{\mathbb Q}}_\ell),\]
which roughly speaking, is defined via the distinguished triangle
\[{\mathcal F}_{\bar{s}}\to {\rm R}\Psi^{{\mathfrak X}}({\mathcal F}_\eta)\to {\rm R}\Phi^{{\mathfrak X}}({\mathcal F})\to. \]
A theorem of Gabber (cf. \cite[Sect. 4]{IllusieMonodromy}) says
that ${\rm R}\Phi^{{\mathfrak X}}[-1]$ is also perverse exact with the
$t$-structure on $\on{D}_c^b({\mathfrak X},\overline{{\mathbb Q}}_\ell)$ defined
as in loc. cit.
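As a basic sanity check (a standard consequence of the smooth base change theorem, recorded only for orientation and not needed in what follows): if $p:{\mathfrak X}\to S$ is smooth, then ${\rm R}\Phi^{{\mathfrak X}}(\overline{{\mathbb Q}}_\ell)=0$, and hence the distinguished triangle above gives
\[{\rm R}\Psi^{{\mathfrak X}}(\overline{{\mathbb Q}}_{\ell})\simeq \overline{{\mathbb Q}}_{\ell},\]
with trivial action of the inertia $I$. Thus ${\rm R}\Phi^{{\mathfrak X}}$ measures, at the level of cohomology, the failure of smoothness of ${\mathfrak X}$ over $S$.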
\begin{Remark}\label{indsheaf}
{\rm As explained in \cite[A. 2]{GaitsgoryInv}, if $X$ is an
ind-scheme, of ind-finite type over a field, then
$\on{D}_c^b(X,\overline{{\mathbb Q}}_\ell)$ is defined as the direct limit
of the corresponding category on finite dimensional closed
subschemes. As the push-forward along closed immersions is perverse
exact, this also allows us to define a corresponding category
$\on{Perv}(X,\overline{{\mathbb Q}}_{\ell})$. Similarly, we then have
$\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ and
$\on{Perv}(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$. If ${\mathfrak X}$ is
of ind-finite type over $\O$, we can also define the nearby cycles
${\rm R}\Psi^{\mathfrak X}$ and the results in the above discussion
appropriately extend to this case.
In what follows, without mentioning it explicitly, we will
understand that any category of sheaves on an ind-scheme is defined
as a direct limit as above. }
\end{Remark}
\subsubsection{} Let us now return to our set up, so
$S={\rm Spec } (\O)$, where $\O$ is the ring of integers of a $p$-adic
field $F$ with residue field $k$. Let us fix a prime $\ell$, which
is invertible in $\O$.
Let $X_0={\rm Spec } (\O)\to X=\AA^1_\O$ be the morphism given by
$u\mapsto 0$ and let $P=P_\O$ be the group scheme over $\O[[T]]$
given by
\[P ={\mathcal G}\times_X{\rm Spec } (\O[[T]]), \quad u\mapsto T.\]
Then for $\kappa$ either the fraction field $F$ of $\O$, or the
residue field $k$ of $\O$, $P \times_\O\kappa=P_\kappa$ is the
parahoric group scheme over $\kappa[[T]]$ associated to the point
$x_{\kappa((u))}$ in the building ${\mathcal B}(G_\kappa,\kappa((u)))$,
as in \S \ref{3a}; see also \S \ref{kappafibers}. Let us also
consider
\[{\rm Gr}_{P}:={\rm Gr}_{{\mathcal G}, \O, 0}={\rm Gr}_{{\mathcal G},X}\times_X X_0\]
given by this specialization along $u=0$. This is identified with
the local affine Grassmannian ${\rm Gr}_{{\mathcal G}}$ over ${\rm Spec } (\O)$
considered in \S \ref{LocalaffGrass}. The jet group $L^+P $ over
$\O$ is defined as follows: for every $\O$-algebra $R$,
\[L^+P (R)=P (R[[T]]).\]
Then $L^+P $ acts on ${\rm Gr}_{P }$.
Let ${\rm Spec } (\kappa)\to{\rm Spec } (\O)$ be a perfect field-valued point.
Then ${\rm Gr}_{P
}\times_{{\rm Spec } (\O)}{\rm Spec } (\kappa)={\rm Gr}_{P_\kappa}={\rm Gr}_{P,\kappa}$ is
the
affine Grassmannian associated to $P_\kappa$, and when we
base change the action of $L^+P $ on ${\rm Gr}_{P }$ under $\O\to
\kappa$, we obtain the usual action of $L^+P_\kappa$ on
${\rm Gr}_{P_\kappa}$.
For simplicity, we set $\breve{P}=P\otimes_\O\breve{\O}$, similarly
for the other (ind)-schemes. The $L^+\breve{P} $-orbits of
${\rm Gr}_{\breve{P}}$ are parametrized by certain double cosets in the
extended Weyl group $\widetilde{W}$. For $w\in\widetilde{W}$, let
$\mathring{S}_w$ be the corresponding orbit, which is smooth over
$\breve{\O}$, and let $S_w$ be the corresponding Schubert scheme
over $\breve\O$. Recall that the group splits after an extension of
$\breve{\O}$ of degree prime to $p$. We can see that if
$\breve{\O}\to \kappa$ is as above, then there is a nilpotent
immersion $S_{w,\kappa}\to S_w\otimes_{\breve{\O}} \kappa$, where
$S_{w,\kappa}$ is the Schubert variety in
${\rm Gr}_{P_\kappa}$ corresponding to $w$. (This immersion is an
isomorphism if $p\nmid |\pi_1(G_{\on{der}})|$. Indeed, then the
Schubert varieties $S_{w,\kappa}$ are normal and the result
follows using \cite[Prop. 9.11]{PappasRaTwisted}.) We will often
identify $\overline{{\mathbb Q}}_\ell$-sheaves on $S_w\otimes_{\breve{\O}}
\kappa$ with the corresponding $\overline{{\mathbb Q}}_\ell$-sheaves on
$S_{w,\kappa}$.
We denote by $\on{IC}_w$ the intersection cohomology sheaf on
$S_w$, i.e., the intermediate extension of
$\overline{{\mathbb Q}}_\ell[\dim S_{w}{+1}](\dim S_w/2)$ on
$\mathring{S}_w$. (Here $\dim S_w$ is the relative dimension over
$\breve\O$. For the definition of perverse sheaves on schemes over
$\O$, we refer to \cite[Sect. 4]{IllusieMonodromy}.) If $S_w$ is
defined over the discrete valuation ring $\O'$ with $\O\subset
\O'\subset\breve\O$, we will keep track of the action of
${\rm Gal}(\O'/\O)$ on $\on{IC}_w$, or equivalently, regard $\on{IC}_w$
also defined over $\O'$. For a $\kappa$-valued point of ${\rm Spec } (
\breve\O)$, the intersection cohomology sheaf on $S_{w,\kappa}$ is
denoted by $\on{IC}_{w,\kappa}$.
When $G=H\otimes_\O F$ with $H$ a split Chevalley group over $\O$
and ${\mathcal G}=H\times_{{\rm Spec } (\O)} X$, then ${\rm Gr}_P$ is the affine
Grassmannian
${\rm Gr}_H$ over $\O$ and $L^+P$ is
$L^+H$. The $L^+H$-orbits of ${\rm Gr}_H$ are parameterized by conjugacy
classes of one-parameter subgroups of $H$ and for
$\mu\in\mathbb{X}_\bullet(T_H)\subset\widetilde{W}$, we denote $S_\mu$ by
$\overline{{\rm Gr}}_\mu$. Similarly, we denote by $\on{IC}_\mu$ the
intersection cohomology sheaf on $\overline{{\rm Gr}}_\mu$.
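For the reader's convenience, we recall the familiar lattice description in this split situation for $H={\rm GL}_n$ (a well-known fact, used here only for orientation): for an $\O$-algebra $R$,
\[{\rm Gr}_{{\rm GL}_n}(R)=\{\, L\subset R((u))^n \mid L \text{ an } R[[u]]\text{-lattice}\,\},\]
and, for a field-valued point $R=\kappa$, the $L^+{\rm GL}_n$-orbit of $L$ is determined by the elementary divisors $(u^{\mu_1},\dots,u^{\mu_n})$, $\mu_1\geq\cdots\geq\mu_n$, of $L$ relative to the standard lattice $\kappa[[u]]^n$, so that the orbits are indeed indexed by the dominant cocharacters $\mu$ of the diagonal torus.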
\subsubsection{}
Let $M_{{\mathcal G},\mu, E}$ denote the generic fiber of $M_{{\mathcal G}, \mu}$. If
$H$ is the split form of $G$, then $ M_{{\mathcal G},\mu, \tilde F}=M_{{\mathcal G},
\mu, E}\otimes_E\tilde{F}$ is a projective subvariety of
${\rm Gr}_{{\mathcal G},\O}\otimes_{\O} \tilde{F}\simeq{\rm Gr}_H\otimes_{\O}
\tilde{F}$ by Corollary \ref{fibers}. In general $M_{{\mathcal G}, \mu, E}$
is not smooth unless $\mu$ is minuscule. We denote by ${\mathcal F}_\mu$
the intersection cohomology sheaf on the generic fiber $M_{{\mathcal G},\mu,
E}$. Then the pull-back of ${\mathcal F}_\mu$ to $M_{{\mathcal G},\mu, \tilde F}$ is
isomorphic to $\on{IC}_{\mu,\tilde{F}}$. The goal of this subsection
is to establish a commutativity constraint for the nearby cycle
\[{\rm R}\Psi_\mu:={\rm R}\Psi^{ M_{{\mathcal G}, \mu}}({\mathcal F}_\mu).\]
Recall that we denote by $k_{\scriptscriptstyle E}$ the residue field of $\O_E$. We
first need
\begin{lemma}\label{equiv str}
The perverse sheaf\, ${\rm R}\Psi_\mu$ on the special fiber $
\overline M_{{\mathcal G}, \mu}=M_{{\mathcal G}, \mu}\otimes_{\O_E}k_{\scriptscriptstyle E}\subset{\rm Gr}_{P,
k_{\scriptscriptstyle E} }$ admits a natural $L^+P_{k_{\scriptscriptstyle E}}$-equivariant structure, i.e.,
${\rm R}\Psi_\mu$ admits a $L^+P_{k_{\scriptscriptstyle E}}\otimes_{k_{\scriptscriptstyle E}}
\bar{k}$-equivariant structure as perverse sheaves on ${\rm Gr}_{P,
k_{\scriptscriptstyle E}}\otimes_{k_{\scriptscriptstyle E}}\bar{k}$, which is compatible with the action of
${\rm Gal}(\bar{F}/E)$ in an obvious sense (which will be clear from the
proof).
\end{lemma}
\begin{proof}
Let $\L^+_n{\mathcal G}$ be the $n$-th jet group of ${\mathcal G}$, i.e. the group
scheme over $X={\mathbb A}^1_\O$ whose $R$-points classify pairs
$(y,\beta)$ with $y:{\rm Spec } (R)\to X$ and $\beta\in{\mathcal G}(\Ga_{y,n})$,
where $\Ga_{y,n}$ is the $n$-th nilpotent thickening of $\Ga_y$. In
other words,
\begin{equation}\label{nthjet}\L^+_n{\mathcal G}(R)={\mathcal G}(R[u-y]/(u-y)^{n+1})\end{equation}
(cf. \eqref{globloop1}). It is clear that $\L^+_n{\mathcal G}$ is smooth over
$X$ and that the action of
$(\L^+{\mathcal G})_{\O_E}:=\L^+{\mathcal G}\times_X{\rm Spec } (\O_E)$ on $M_{{\mathcal G}, \mu}$
factors through the action of
$(\L^+_n{\mathcal G})_{\O_E}:=\L^+_n{\mathcal G}\times_{X}{\rm Spec } (\O_E)$ for some
sufficiently large $n$.
Let $m:(\L^+_n{\mathcal G})_{\O_E}\times_{\O_E} M_{{\mathcal G}, \mu}\to M_{{\mathcal G}, \mu}$
be the above action. Let $p:(\L^+_n{\mathcal G})_{\O_E}\times_{\O_E} M_{{\mathcal G},
\mu}\to M_{{\mathcal G}, \mu}$ be the natural projection. Then there is a
canonical isomorphism $m^*{\mathcal F}_\mu\xrightarrow{\sim} p^*{\mathcal F}_\mu$ as
sheaves on $(\L^+_n{\mathcal G})_{E}\times_{E} M_{{\mathcal G}, \mu}$ since the
intersection cohomology sheaf ${\mathcal F}_\mu$ is naturally
$(\L^+{\mathcal G})_{E}$-equivariant. By taking nearby cycles, we have a
canonical isomorphism
\[
{\rm R}\Psi^{(\L^+_n{\mathcal G})_{\O_E}\times_{\O_E} M_{{\mathcal G},
\mu}}(m^*{\mathcal F}_\mu)\xrightarrow{\sim} {\rm
R}\Psi^{(\L^+_n{\mathcal G})_{\O_E}\times_{\O_E} M_{{\mathcal G}, \mu}}(p^*{\mathcal F}_\mu),
\]
which is equivariant with respect to the action of
${\rm Gal}(k(\bar{\eta})/k(\eta))$. Since both $m$ and $p$ are smooth
morphisms and taking nearby cycles commutes with smooth base change,
we have
\[
m^*{\rm R}\Psi^{ M_{{\mathcal G}, \mu}}({\mathcal F}_\mu)\xrightarrow{\sim} p^*{\rm
R}\Psi^{ M_{{\mathcal G}, \mu}}({\mathcal F}_\mu),
\]
compatible with the action of ${\rm Gal}(k(\bar{\eta})/k(\eta))$. The
cocycle condition of this isomorphism follows from the corresponding
cocycle condition for $m^*{\mathcal F}_\mu\xrightarrow{\sim} p^*{\mathcal F}_\mu$. This
then establishes the desired action and the lemma follows.
\end{proof}
\begin{Definition}We let
$\on{Perv}_{L^+P_k}({\rm Gr}_{P_k}\times_kF,\overline{{\mathbb Q}}_\ell)$ be
the category whose objects are $({\mathcal F},\theta)$, where ${\mathcal F}\in
\on{Perv}({\rm Gr}_{P_k}\times_kF,\overline{{\mathbb Q}}_\ell)$, and
$\theta:m^*{\mathcal F}\xrightarrow{\sim} p^*{\mathcal F}$ is an
$L^+P_{\bar{k}}$-equivariant structure on ${\mathcal F}$, which is
compatible with the action of ${\rm Gal}(\bar{F}/F)$.
\end{Definition}
By the above Lemma, ${\rm R}\Psi_\mu={\rm R}\Psi^{ M_{{\mathcal G},
\mu}}({\mathcal F}_\mu)$ is an object of
$\on{Perv}_{L^+P_{\varkappa}}({\rm Gr}_{P_{\varkappa}}\times_{\varkappa}E,\overline{{\mathbb Q}}_\ell)$
for $\varkappa=k_E$.
\subsubsection{}
Let $w\in\widetilde{W}$. Recall that for a chosen reduced expression
$\tilde{w}$ of $w$, there is the Demazure resolution
$D_{\tilde{w}}\to S_w$, where $D_{\tilde{w}}$ is smooth proper over
$\breve\O$, containing $\mathring{S}_w$ as a Zariski open subset,
and $D_{\tilde{w}}\setminus\mathring{S}_w$ is a divisor with normal
crossings relative to $\breve\O$.
\begin{lemma}\label{no monodromy}
Let $F\subset F'\subset \breve{F}$, $\O'$ be the normalization of
$\O$ in $F'$, and $k'$ be the residue field of $\O'$. Write for
simplicity $P'=P\otimes_{\O}\O'$. Assume that $S_w$ is defined over
$\O'$. Let $\on{IC}_{w,F'}$ be the intersection cohomology sheaf on
$S_{w,F'}$. Then ${\rm R}\Psi^{{\rm Gr}_{P'}}(\on{IC}_{w,F'})$ is
isomorphic as an object of
$\on{Perv}(S_{w,k'}\times_{k'}F',\overline{{\mathbb Q}}_\ell)$ to the
intersection cohomology sheaf $\on{IC}_{w,k'}$ of $S_{w,k'}$ (recall
that we regard $\on{Perv}(S_{w,k'},\overline{{\mathbb Q}}_\ell)$ as a full
subcategory of
$\on{Perv}(S_{w,k'}\times_{k'}F',\overline{{\mathbb Q}}_\ell)$).
\end{lemma}
\begin{proof}The existence of $D_{\tilde{w}}$, together with the argument as in \cite[\S
5.2, \S 6.3]{HainesNgoNearby}, implies that the lemma holds for
$F'=\breve{F}$. Observe that we cannot apply the same argument
directly to the case $F'\subsetneqq \breve{F}$ because $D_{\tilde{w}}$ is
not necessarily defined over $\O'$. Instead, we argue as follows.
Using the case $F'=\breve{F}$, we know that ${\rm
R}\Psi^{{\rm Gr}_{P'}}(\on{IC}_{w,F'})=\on{IC}_{w,k'}\otimes{\mathcal L}$ for
some rank one local system ${\mathcal L}$ on $S_{w,k'}$ coming from ${\rm Spec } (k')$. On
the other hand, $\mathring{S}_{w}$ is defined and is smooth over
$\O'$. Therefore, ${\rm
R}\Psi^{{\rm Gr}_{P'}}(\on{IC}_{w,F'})|_{\mathring{S}_{w,k'}}\simeq
\on{IC}_{w,k'}|_{\mathring{S}_{w,k'}}=\overline{{\mathbb Q}}_\ell[\dim
S_{w,k'}](\frac{\dim S_{w,k'}}{2})$. Hence, ${\mathcal L}$ is trivial.
\end{proof}
\subsection{A commutativity constraint}\label{sectionConstraint}
\subsubsection{}
Let $\on{D}_{L^+P}({\rm Gr}_{P },\overline{{\mathbb Q}}_\ell)$ be the bounded
$L^+P $-equivariant derived category of (constructible)
$\overline{{\mathbb Q}}_\ell$-sheaves on ${\rm Gr}_{P }$ in the sense of
Bernstein-Lunts \cite{BernsteinLunts}. Let us recall that
$\on{D}_{L^+P }({\rm Gr}_{P },\overline{{\mathbb Q}}_\ell)$ is a monoidal
category with structure given by the ``convolution product" defined
by Lusztig (see \cite{GaitsgoryInv}, \cite{LusztigAst}). Namely, we
have the convolution diagram
\begin{equation}\label{Conv-diag}
{\rm Gr}_{P }\times{\rm Gr}_{P }\xleftarrow{\ q\ }LP \times{\rm Gr}_{P
}\xrightarrow{\ p\ }LP \times^{L^+P }{\rm Gr}_{P }=:{\rm Gr}_{P
}\tilde{\times}{\rm Gr}_{P }\xrightarrow{\ m\ } {\rm Gr}_{P },
\end{equation}
where $p,q$ are natural projections and $m$ is given by the left
multiplication of $LP $ on ${\rm Gr}_{P }$. Let ${\mathcal F}_i\in \on{D}_{L^+P
}({\rm Gr}_{P }, \overline{{\mathbb Q}}_\ell)$, $i=1,2$, and let
${\mathcal F}_1\tilde{\times}{\mathcal F}_2$ be the unique sheaf (up to a
canonical isomorphism) on $LP\times^{L^+P }{\rm Gr}_{P }$ such that
\begin{equation}\label{twp1}
p^*({\mathcal F}_1\tilde{\times}{\mathcal F}_2)\simeq
q^*({\mathcal F}_1\boxtimes{\mathcal F}_2).
\end{equation}
Then, by definition
\begin{equation}\label{conv prod}
{\mathcal F}_1\star{\mathcal F}_2=m_!({\mathcal F}_1\tilde{\times}{\mathcal F}_2).
\end{equation}
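As an elementary example of \eqref{conv prod} (a standard fact, recorded only as a consistency check): for the unit element $e\in\widetilde{W}$, the Schubert scheme $S_e={\rm Spec } (\O)$ is the base point of ${\rm Gr}_{P }$, and the (suitably normalized) sheaf $\on{IC}_e$ is a unit for the convolution,
\[\on{IC}_e\star{\mathcal F}\simeq {\mathcal F}\simeq {\mathcal F}\star\on{IC}_e,\]
since in this case the twisted product in \eqref{Conv-diag} collapses and $m$ restricts to an isomorphism onto ${\rm Gr}_{P }$.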
If ${\rm Spec } (\kappa)\to{\rm Spec } (\O)$ is a field-valued point, we have the
corresponding monoidal category
$\on{D}_{L^+P_\kappa}({\rm Gr}_{P_\kappa},\overline{{\mathbb Q}}_\ell)$ defined
in the same manner. Let
$\on{Perv}_{L^+P_\kappa}({\rm Gr}_{P_\kappa},\overline{{\mathbb Q}}_\ell)$ be
the heart of the perverse $t$-structure.
Observe that if ${\mathcal F}_1$,
${\mathcal F}_2\in\on{Perv}_{L^+P_k}({\rm Gr}_{P_k}\times_kF,\overline{{\mathbb Q}}_\ell)$,
then there is a natural action of ${\rm Gal}(\bar{F}/F)$ on
${\mathcal F}_1\star{\mathcal F}_2$. (In fact, one can define the ``derived
category"
$\on{D}_{L^+P_k}({\rm Gr}_{P_k}\times_k\eta,\overline{{\mathbb Q}}_\ell)$ so
that ${\mathcal F}_1\star{\mathcal F}_2$ will be an object in
$D_{L^+P_k}({\rm Gr}_{P_k}\times_k\eta,\overline{{\mathbb Q}}_\ell)$. We will
not use this concept in the paper.)
\subsubsection{}
In general, if ${\mathcal F}_1,{\mathcal F}_2$ are perverse sheaves, it is not
always the case that ${\mathcal F}_1\star{\mathcal F}_2$ is perverse. However, the
main result of this subsection is
\begin{thm}\label{comm constraints}
There is a canonical isomorphism
\[
c_{{\mathcal F}}:\on{IC}_{w,\bar{k}}\, \star\ {\rm
R}\Psi_\mu\xrightarrow{\sim} {\rm R}\Psi_\mu\star\on{IC}_{w,\bar{k}}
\]
of perverse sheaves on ${\rm Gr}_{P_{\bar{k}}}$. In addition, if
$S_{w,\bar{k}}$ is defined over $k'\supset k_E$, this isomorphism
respects the action of ${\rm Gal}(\tilde{F}/E')$ on both sides, where
$E'=EF'\subset \bar{F}$, and $F'$ is the unique subfield in
$\breve{F}$ with residue field $k'$.
\end{thm}
\begin{Remark} {\rm In the case $G={\rm GL}_n$ or $\on{GSp}_{2n}$,
and $x$ is in an alcove, i.e. the parahoric group is an Iwahori,
this is one of the main results of \cite{HainesNgoNearby} (loc. cit.
Proposition 22). }
\end{Remark}
The proof is a mixed characteristic analogue of the arguments in
\cite{GaitsgoryInv,ZhuCoherence}. We need a version of the
Beilinson-Drinfeld Grassmannian defined over $X=\AA^1_\O$. For a
scheme $y:S\to X$ we set,
\begin{equation*}\label{BD Grass}
{\rm Gr}^{\on{BD}}_{{\mathcal G},X}(S)=\biggl\{\,\text{iso-classes of pairs } (
\mathcal E, \beta) \biggm|
\twolinestight{${\mathcal E}$ a ${\mathcal G}$-torsor on $X\times_\O S$,}
{$\beta$ a trivialization of $\mathcal E |_{ (X\times S)\setminus(\Gamma_y\cup (0\times S)) }$}\,\biggr\}\, .
\end{equation*}
To prove that ${\rm Gr}^{\on{BD}}_{{\mathcal G},X}$ is indeed represented by an
ind-scheme, one proceeds as in the proof of Proposition
\ref{indscheme}. Namely, it is standard to see that
${\rm Gr}^{\on{BD}}_{{\rm GL}_n,X}$ is represented by an ind-scheme. The
general case follows from the fact that if ${\mathcal G}\to{\rm GL}_n$ is a closed
embedding such that ${\rm GL}_n/{\mathcal G}$ is quasi-affine, then
${\rm Gr}^{\on{BD}}_{{\mathcal G},X}\to{\rm Gr}^{\on{BD}}_{{\rm GL}_n,X}$ is a locally
closed embedding.
Let us describe ${\rm Gr}^{\on{BD}}_{{\mathcal G},X}$ more explicitly. For this
purpose, set $\mathring{X}={\rm Spec } (\O[u,u^{-1}])\hookrightarrow X$ and
let $X_0={\rm Spec } (\O)\hookrightarrow X$ be given by $u\mapsto 0$ as
before.
The following isomorphisms are clear
\[{\rm Gr}^{\on{BD}}_{{\mathcal G},X}|_{\mathring{X}}\simeq {\rm Gr}_{P }\times_{{\rm Spec } (\O)}{\rm Gr}_{{\mathcal G},X}|_{\mathring{X}}\ ,\quad\quad {\rm Gr}^{\on{BD}}_{{\mathcal G},X}|_{X_0}\simeq {\rm Gr}_{P }.\]
Let us denote
${\rm Gr}^{\on{BD}}_{{\mathcal G},\O}:={\rm Gr}^{\on{BD}}_{{\mathcal G},X}\times_X{\rm Spec } (\O)$,
where $\varpi:{\rm Spec } (\O)\to X$ is given by $u\mapsto\varpi$. Observe
that on
$$
{\rm Gr}^{\on{BD}}_{{\mathcal G},E'}
={\rm Gr}^{\on{BD}}_{{\mathcal G},\O}\times_{{\rm Spec } (\O)}{{\rm Spec } (E')}\simeq
{\rm Gr}_{P_{E'}}\times_{{\rm Spec } (E')}{\rm Gr}_{{\mathcal G}, E'},
$$
we can form
$\on{IC}_{w,E'}\boxtimes\, {\mathcal F}_\mu$ over
$S_{w,E'}\times_{{\rm Spec } (E')}M_{{\mathcal G}, \mu, E'}$. (Here and in what
follows, for simplicity, we write again ${\mathcal F}_\mu$ for the pull-back
of ${\mathcal F}_\mu$ to the base change over $E'$.)
Clearly, Theorem \ref{comm constraints} is a consequence of the
following.
\begin{prop}\label{aux}
We have canonical isomorphisms in
$\on{Perv}_{L^+P_{k'}}({\rm Gr}_{P_{k'}}\times_{k'}E',\overline{{\mathbb Q}}_\ell)$:
(a)
\ ${\rm R}\Psi^{{\rm Gr}_{{\mathcal G},\O_{E'}}^{\on{BD}}} (\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu)\xrightarrow{\sim} {\rm R}\Psi_\mu\star\on{IC}_{w,k'}.$
(b) \ ${\rm R}\Psi^{{\rm Gr}_{{\mathcal G},\O_{E'}}^{\on{BD}}
}(\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu)\xrightarrow{\sim}\on{IC}_{w,k'}\star\,
{\rm R}\Psi_\mu$.
\end{prop}
\begin{proof} In order to simplify the notation, in this proof we will set
$\O'=\O_{E'}$.
We first prove (a). We define an ind-scheme over $X$ by attaching to
every morphism $y:S\to X$,
\begin{equation*}\label{Conv Grass}
{\rm Gr}^{\rm Conv}_{{\mathcal G},X}(S)=\Biggl\{\,\text{iso-classes of } (
\mathcal E, \mathcal E',\beta,\beta') \Biggm|
\threelinestight{$\mathcal E$, $\mathcal E'$ are two ${\mathcal G}$-torsors on $X\times_\O S$,}
{$\beta$ a trivialization of $\mathcal E |_{ (X\times S)\setminus\Gamma_y}$,} {$\beta'$ an isomorphism $\mathcal E'|_{\mathring{X}\times S}\simeq \mathcal E|_{\mathring{X}\times S}$ }\Biggr\}\, .
\end{equation*}
Observe that there is a natural projection $p:{\rm Gr}^{\rm
Conv}_{{\mathcal G},X}\to{\rm Gr}_{{\mathcal G},X}$ by forgetting $({\mathcal E}',\beta')$, and a
natural map $m:{\rm Gr}^{\rm Conv}_{{\mathcal G},X}\to{\rm Gr}^{\on{BD}}_{{\mathcal G},X}$
sending $({\mathcal E},{\mathcal E}',\beta,\beta')$ to $({\mathcal E}',\beta\beta')$.
The map $p$ makes ${\rm Gr}^{\rm Conv}_{{\mathcal G},X}$ a fibration over
${\rm Gr}_{{\mathcal G},X}$ with fibers isomorphic to ${\rm Gr}_{P}$. To see this, we
denote
\[
\widetilde{{\rm Gr}}_{{\mathcal G},X}(S)=\biggl\{\,\text{iso-classes of } (
\mathcal E,\beta,\beta') \biggm|
\twolinestight{$\mathcal E$ a ${\mathcal G}$-torsor on $X\times S$,
$\beta$ a trivialization of} {$\mathcal E |_{ (X\times S)\setminus\Gamma_y}$, $\beta'$ a trivialization of
$\mathcal E|_{\widehat{ 0\times S}}$ }\,\biggr\}\, .
\]
Then $L^+P $ acts on $\widetilde{{\rm Gr}}_{{\mathcal G},X} $ by changing the
trivialization $\beta'$, and this makes $\widetilde{{\rm Gr}}_{{\mathcal G},X}$ an
$L^+P $-bundle over ${\rm Gr}_{{\mathcal G},X}$. In addition,
\[
{\rm Gr}^{\rm Conv}_{{\mathcal G},X}=\widetilde{{\rm Gr}}_{{\mathcal G},X}\times^{L^+P}{\rm Gr}_{P}.
\]
The map $m$ can be described as follows. We have
\[
m|_{\mathring{X}}:{\rm Gr}^{\rm
Conv}_{{\mathcal G},X}|_{\mathring{X}}\simeq{\rm Gr}^{\on{BD}}_{{\mathcal G},X}|_{\mathring{X}},\quad\quad
m|_{X_0}:{\rm Gr}_{P }\,\widetilde{\times}\, {\rm Gr}_{P }\to{\rm Gr}_{P },
\]
where ${\rm Gr}_{P }\, \widetilde{\times}\, {\rm Gr}_{P }\to{\rm Gr}_{P }$ is
the convolution map of \eqref{Conv-diag}.
By specialization along $u\mapsto \varpi$, we obtain corresponding
ind-schemes over ${\rm Spec } (\O)$. Let us base change further all the
ind-schemes along $\O\to\O'=\O_{E'}$. In particular, we denote
${\rm Gr}^{\rm Conv}_{{\mathcal G},X}\times_X{\rm Spec } (\O')$ by ${\rm Gr}^{\rm
Conv}_{{\mathcal G},\O'}$. Similarly, we write ${\rm Gr}^{\rm Conv}_{{\mathcal G}, E'}$,
$\widetilde{{\rm Gr}}_{{\mathcal G}, E'}$, etc. for the base change to ${\rm Spec } (E')$
also given by $u\mapsto \varpi$.
Regard $\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu$ as a sheaf on
${\rm Gr}_{{\mathcal G},E'}^{\rm Conv} \simeq {\rm Gr}_{P_{E'}}\times_{{\rm Spec } (E')}
{\rm Gr}_{{\mathcal G}, E'}$. Since taking nearby cycles commutes with proper
push-forward, to show (a) it will be enough to show that there is a
canonical isomorphism
\begin{equation}\label{ser3}
{\rm R}\Psi^{{\rm Gr}^{\rm Conv}_{{\mathcal G},\O'}}(\on{IC}_{w,E'}\boxtimes\,
{\mathcal F}_\mu)\xrightarrow{\sim}{\rm
R}\Psi_\mu\,\widetilde{\times}\,\on{IC}_{w,k'}
\end{equation}
of sheaves on ${\rm Gr}_{P_{k'}}\,\widetilde{\times}\,{\rm Gr}_{P_{k'}}$;
here ${\rm R}\Psi_\mu\,\widetilde{\times}\,\on{IC}_{w,k'}$ is the twisted
product defined as in \eqref{twp1}.
Let $L^+_nP $ be the $n$-th jet group of $P $ whose definition is
similar to $\L^+_n{\mathcal G}$ in \eqref{nthjet}. (In fact, $L^+_nP
=\L^+_n{\mathcal G}\times_XX_0$.) Choose $n$ sufficiently large so that the
action of $L^+P $ on $S_w$ factors through $L^+_nP $. Let ${\rm Gr}_{{\mathcal G},
n, X}$ be the $L^+_nP $-torsor over ${\rm Gr}_{{\mathcal G},X}$ that classifies
$({\mathcal E}, \beta, \beta')$ where $({\mathcal E}, \beta)$ are as in the
definition of ${\rm Gr}_{{\mathcal G},X}$ and $\beta'$ is a trivialization of the
restriction of ${\mathcal E}$ over the $n$-th infinitesimal neighborhood
$X_n$ of $X_0\subset X$. Set
$$
{\rm Gr}_{{\mathcal G}, n, \O_{E'}}={\rm Gr}_{{\mathcal G}, n, X}\times_X{\rm Spec } (\O'),\quad
{\rm Gr}_{{\mathcal G}, n, E'}={\rm Gr}_{{\mathcal G}, n, X}\times_X{\rm Spec } ( E' ).
$$
Then ${\mathcal F}_\mu\,\widetilde{\times}\,\on{IC}_{w,E'}$ is supported on
\[
\widetilde{{\rm Gr}}_{{\mathcal G},E'} \times^{L^+P }S_w\simeq {\rm Gr}_{{\mathcal G},n, E'} \times^{L^+_nP }S_w\subset{\rm Gr}^{\rm Conv}_{{\mathcal G},E'} .
\]
Observe that over $E'$, it makes sense to talk about
${\mathcal F}_\mu\,\widetilde{\times}\, \on{IC}_{w,E'}$ (as defined via \eqref{twp1}), which
is canonically isomorphic to $\on{IC}_{w,E'}\boxtimes\, {\mathcal F}_\mu$.
Therefore, \eqref{ser3} is equivalent to
\begin{equation}\label{ser4}
{\rm R}\Psi^{{\rm Gr}^{\rm
Conv}_{{\mathcal G},\O'}}({\mathcal F}_\mu\,\widetilde{\times}\, \on{IC}_{w,E'})\simeq {\rm
R}\Psi_\mu\,\widetilde{\times}\, \on{IC}_{w,k'}.
\end{equation}
Let us denote the pullback of ${\mathcal F}_\mu$ to ${\rm Gr}_{{\mathcal G}, n, E'} $ by
$\widetilde{{\mathcal F}}_\mu$. Since ${\rm Gr}_{{\mathcal G},n, X }\to {\rm Gr}_{{\mathcal G},X}$ is
smooth, ${\rm R}\Psi^{{\rm Gr}_{{\mathcal G}, n,\O'}} (\widetilde{{\mathcal F}}_\mu)$ is
canonically isomorphic to the pullback of ${\rm R}\Psi_\mu$, and by
Lemma \ref{no monodromy},
\[
{\rm R}\Psi^{{\rm Gr}_{{\mathcal G}, n, \O'} \times_{ \O' } S_{w,
\O'}}(\widetilde{{\mathcal F}}_\mu\boxtimes\on{IC}_{w,E'})\simeq {\rm
R}\Psi^{{\rm Gr}_{{\mathcal G}, n,
\O'}}(\widetilde{{\mathcal F}}_\mu)\boxtimes\on{IC}_{w,k'}.
\]
Observe that both sides are in fact $L^+_nP_{k'}$-equivariant
perverse sheaves, and the isomorphism respects the equivariant
structures (by an argument similar to the one in the proof of Lemma
\ref{equiv str}). We thus have \eqref{ser4} and therefore have
finished the proof of (a).
Next we prove (b), which is similar. There is another convolution
affine Grassmannian ${\rm Gr}_{{\mathcal G},X}^{\rm Conv'}$, which represents the
functor that associates to every $X$-scheme $y:S\to X$,
\begin{equation*}
{\rm Gr}^{\rm Conv'}_{{\mathcal G},X}(S)=\Biggl\{\,\text{iso-classes of } (
\mathcal E, \mathcal E',\beta,\beta') \Biggm|
\threelinestight{$\mathcal E, \mathcal E'$ are two ${\mathcal G}$-torsors on $X\times S$,}
{ $\beta$ a trivialization of $\mathcal E |_{ \mathring{X}\times S }$,} {$\beta'$ an isomorphism $\mathcal E'|_{X\times S\setminus\Ga_y}\simeq \mathcal E|_{X\times S\setminus\Ga_y}$}\,\Biggr\}\, .
\end{equation*}
Clearly, we have $m':{\rm Gr}^{\rm Conv'}_{{\mathcal G},
X}\to{\rm Gr}_{{\mathcal G},X}^{\on{BD}}$ by sending
$(y,{\mathcal E},{\mathcal E}',\beta,\beta')$ to $(y,{\mathcal E}',\beta\beta')$. This is
an isomorphism over $\mathring{X}$, and $m'|_{X_0}$ is again
the local convolution diagram
\[
m:{\rm Gr}_{P}\,\widetilde{\times}\, {\rm Gr}_{P}\to{\rm Gr}_{P}.
\]
Again, regard $\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu$ as a sheaf on
${\rm Gr}^{\rm Conv'}_{{\mathcal G},E'} \simeq {\rm Gr}_{P_{E'}}\times_{E'}
{\rm Gr}_{{\mathcal G},E'} $. Again, as nearby cycles commute with proper
push-forward, it is enough to prove that as sheaves on
${\rm Gr}_{P_{k'}}\,\widetilde{\times}\, {\rm Gr}_{P_{k'}}$,
\begin{equation}\label{ser1}
{\rm R}\Psi^{{\rm Gr}_{{\mathcal G},\O'}^{\rm Conv'} }(\on{IC}_{w,E'}\boxtimes\,
{\mathcal F}_\mu)\xrightarrow{\sim} \on{IC}_{w,k'}\,\widetilde{\times}\, R\Psi_\mu.
\end{equation}
Recall that ${\mathcal L}^+_n{\mathcal G}$ is the $n$-th jet group of ${\mathcal G}$. This is
smooth over $X$ and the action of
$(\L^+{\mathcal G})_{\O_E}=\L^+{\mathcal G}\times_X{\rm Spec } (\O_E)$ on $M_{{\mathcal G}, \mu}$
factors through $({\mathcal L}^+_n{\mathcal G})_{\O_E}$ for some sufficiently
large $n$.
Let us define the ${\mathcal L}^+_n{\mathcal G}$-torsor ${\mathcal Q}_n$ over ${\rm Gr}_{P
}\times_{{\rm Spec } (\O)} X$ as follows. Its $S$-points are quadruples
$(y,{\mathcal E},\beta,\beta')$, where $y:S\to X$, $({\mathcal E},\beta)$ are as
in the definition of ${\rm Gr}_{P}$ (and therefore $\beta$ is a
trivialization of ${\mathcal E}$ on $\mathring{X}\times S$), and $\beta'$
is a trivialization of ${\mathcal E}$ over $\Ga_{y,n}$, the $n$-th
nilpotent thickening of the graph $\Ga_y$ of $y$. Then we have the
twisted product
\[
{\mathcal T}_n:=({\mathcal Q}_n\times_X{\rm Spec } (\O'))\times^{({\mathcal L}^+_n{\mathcal G})_{\O'} }M_{{\mathcal G}, \mu, \O'}\subset{\rm Gr}_{{\mathcal G},\O'}^{\rm Conv'} .\]
Over $E'$, we can form the twisted product
$\on{IC}_{w,E'}\,\widetilde{\times}\, {\mathcal F}_\mu$ on ${\mathcal T}_n$ as in \eqref{twp1}, which
is canonically isomorphic to $\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu$. By
the same argument as in the proof of (a) (i.e. by pulling back
everything to $({\mathcal Q}_n\times_X{\rm Spec } (\O'))\times M_{{\mathcal G}, \mu, \O'}$)
we have
\[{\rm R}\Psi^{ {\mathcal T}_n}
(\on{IC}_{w,E'}\,\widetilde{\times}\, {\mathcal F}_\mu)\xrightarrow{\sim}
\on{IC}_{w,k'}\,\widetilde{\times}\, {\rm R}\Psi_\mu.\] Therefore, \eqref{ser1}
holds and this completes the proof of the proposition.
\end{proof}
\begin{Remark}
{\rm Observe that in \cite{GaitsgoryInv}, the proof of the second
statement of the proposition is considerably more difficult than the
proof of the first one. Indeed, to prove the second statement,
Gaitsgory used the fact that every $H$-torsor over
${\mathbb A}^1_{\bar{k}}$ admits a reduction to a Borel subgroup, whose
counterpart for the general parahoric group schemes ${\mathcal G}$ over
${\mathbb A}^1_\O$ has not been documented. Instead, our approach exploits
the extra flexibility provided by the use of the group schemes
${\mathcal G}$, and treats both cases in a parallel way.}
\end{Remark}
\subsection{The monodromy of the nearby cycles}
\subsubsection{}
Let $M_{{\mathcal G}, \mu}$ be the generalized local model defined as
before. This is a flat $\O_E$-scheme. To study the
action of the inertia group $ I_E={\rm ker}({\rm Gal}(\bar F/E)\to {\rm Gal}(\bar k/k_E))$
(``monodromy'') on the nearby cycles, it is convenient to first consider the restriction of this action to
the inertia $I_{\tilde F}$, where, as we recall (see \ref{sss1a2}), $\tilde F$ splits the group $G$.
\begin{thm}\label{monodromy}
The action of the inertia $I_{\tilde F}$ on the nearby cycles ${{\rm
R}\Psi}_\mu$ is unipotent.
\end{thm}
\begin{proof} Consider
the base change
\[
\widetilde M_{{\mathcal G}, \mu}:= M_{{\mathcal G}, \mu}\otimes_{\O_E}\O_{\tilde{F}}.
\]
Let us set $\widetilde{{\rm R}\Psi}_\mu:={\rm R}\Psi^{\widetilde
M_{{\mathcal G},\mu}}({\mathcal F}_\mu)$. Recall that $\widetilde{{\rm R}\Psi}_\mu$ is
canonically isomorphic to the sheaf ${\rm R}{\Psi}_\mu$ on
$\overline M_{{\mathcal G}, \mu}\otimes_{k_E}\bar k$ but with its ${\rm
Gal}(\bar F/F)$-action restricted to ${\rm Gal}(\bar F/\tilde F)$.
To study the inertia action we can base change to $\breve\O$. Let
$(S,s,\eta)$ be a strictly Henselian trait, i.e. $k(s)$ is separably
closed. Suppose that ${\mathfrak X}\to S$ is a separated scheme of finite
type. Let us recall that the nearby cycle ${\rm R}\Psi^{{\mathfrak X}}$ can
be canonically decomposed as
\[
{\rm R}\Psi^{{\mathfrak X}}=({\rm R}\Psi^{{\mathfrak X}})^{\rm un}\oplus ({\rm
R}\Psi^{{\mathfrak X}})^{\rm non-un},
\]
where $({\rm R}\Psi^{{\mathfrak X}})^{\rm un}$ is the unipotent part
and $({\rm R}\Psi^{{\mathfrak X}})^{\rm non-un}$ is the non-unipotent part
(see \cite[\S 5]{GortzHainesCrelle}).
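Concretely (our paraphrase; see loc. cit. for the precise construction): the inertia action on ${\rm R}\Psi^{{\mathfrak X}}$ is quasi-unipotent, and if $t$ denotes a topological generator of the tame quotient of the inertia, then $({\rm R}\Psi^{{\mathfrak X}})^{\rm un}$ is the summand on which $t-1$ acts nilpotently, i.e.
\[
({\rm R}\Psi^{{\mathfrak X}})^{\rm un}=\ker\bigl((t-1)^N\bigr)\quad\text{for } N\gg 0,
\]
and $({\rm R}\Psi^{{\mathfrak X}})^{\rm non-un}$ is the complementary summand.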
If $f:{\mathfrak X}\to{\mathfrak Y}$ is a proper morphism over $S$, then the
isomorphism $f_*{\rm R}\Psi^{{\mathfrak X}}\simeq {\rm R}\Psi^{{\mathfrak Y}}f_*$
respects this decomposition. We refer to \cite{GortzHainesCrelle}
for the details. From this discussion, by the standard technique of
passing to an Iwahori subgroup contained in our given parahoric, we
see that Theorem \ref{monodromy} follows from
\begin{prop}\label{Iwahori case}
Theorem \ref{monodromy} holds in the case that $x\in A(G,A,F)$ lies in
the alcove, i.e. when $P_k$ is an Iwahori group scheme.
\end{prop}
This Proposition will be a consequence of the general theory of
central sheaves on affine flag varieties together with the
following lemma.
\begin{lemma}\label{triv}
Under the above assumptions and notations, the action of
${\rm Gal}(\bar{F}/\tilde{F}\breve{F})$ on the cohomology groups ${\mathrm H}^*(
M_{{\mathcal G}, \mu,\tilde{F}}, ({\mathcal F}_\mu)_{\tilde{F}})$ is trivial.
\end{lemma}
\begin{proof}
Recall that if $H$ is the split form of $G$, then $M_{{\mathcal G}, \mu,
\tilde F}\simeq \overline{{\rm Gr}}_{H, \mu,\tilde{F}}$ and
$({\mathcal F}_\mu)_{\tilde{F}}\simeq\on{IC}_{\mu,\tilde{F}}$. As in Lemma
\ref{no monodromy}, the nearby cycle ${\rm
R}\Psi^{\overline{{\rm Gr}}_{H,
\mu}\otimes_\O\O_{\tilde{F}}}(\on{IC}_{\mu,\tilde{F}})$
has trivial inertia action. This and proper base change imply the lemma.
\end{proof}
\smallskip
Now, we prove Proposition \ref{Iwahori case}. It is enough to show
the statement for $ \widetilde{{\rm R}\Psi}_\mu$. Decompose
\[
\widetilde{{\rm R}\Psi}_\mu=(\widetilde{{\rm R}\Psi}_\mu)^{\rm
un}\oplus (\widetilde{{\rm R}\Psi}_\mu)^{\rm non-un}.
\]
By taking cohomology and using the above lemma, we obtain
${\mathrm H}^*({\rm Gr}_{P_k}, (\widetilde{{\rm R}\Psi}_\mu)^{\rm non-un})=(0). $
We claim that this already implies that $(\widetilde{{\rm
R}\Psi}_\mu)^{\rm non-un}=(0)$. Indeed, recall that by Theorem
\ref{comm constraints}, we have isomorphisms
\[
\widetilde{{\rm R}\Psi}_\mu\star\on{IC}_{w,\bar{k}} \simeq {\rm
R}\Psi^{{\rm Gr}^{\on{BD}}_{{\mathcal G},\O_{\tilde F}}
}(\on{IC}_{w,E'}\boxtimes\,{\mathcal F}_\mu)\simeq \on{IC}_{w,E'}\star\,
\widetilde{{\rm R}\Psi}_\mu
\]
compatible with the action of the inertia group
${\rm Gal}(\bar{F}/\tilde{F}\breve{F})$. Since the inertia action on
$\on{IC}_{w,\bar{k}}={\rm
R}\Psi^{{\rm Gr}_{\breve{P}}}(\on{IC}_{w,\breve{F}})$ is trivial, we
obtain isomorphisms of perverse sheaves
\begin{eqnarray*}
(\widetilde{{\rm R}\Psi}_\mu)^{\rm non-un}\star\on{IC}_{w,\bar{k}}\simeq (\widetilde{{\rm R}\Psi}_\mu\star\on{IC}_{w,\bar{k}})^{\rm non-un} \phantom{\ \ \ \ \ } \\
\phantom{\ \ \ \ } \simeq (\on{IC}_{w,\bar{k}}\star\,
\widetilde{{\rm R}\Psi}_\mu)^{\rm non-un}
\simeq\on{IC}_{w,\bar{k}}\star\, (\widetilde{{\rm R}\Psi}_\mu)^{\rm
non-un}.
\end{eqnarray*}
(see \cite[\S 5.3]{GortzHainesCrelle}). In other words,
$(\widetilde{{\rm R}\Psi}_\mu)^{\rm non-un}$ is a \emph{central
sheaf} in $ \on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}}, \overline
{\mathbb Q}_l)$ (cf. \cite{ArBezru, ZhuCoherence}). In loc. cit., it is
proven that central sheaves admit filtrations by the so-called
Wakimoto sheaves on ${\rm Gr}_{P_{\bar{k}}}$. Hence, by \cite[Corollary
7.9]{ZhuCoherence}, ${\mathrm H}^*({\rm Gr}_{P_{\bar{k}}}, (\widetilde{{\rm
R}\Psi}_\mu)^{\rm non-un})=(0)$ implies that $(\widetilde{{\rm
R}\Psi}_\mu)^{\rm non-un}=(0)$. This concludes the proof of the
Proposition and hence also of Theorem \ref{monodromy}.
\end{proof}
\subsubsection{}
Recall that we say that $x\in {\mathcal B}(G, F)$ is a special vertex if
it is special in the sense of Bruhat-Tits in both ${\mathcal B}(G, F)$ and
${\mathcal B}(G_{\breve{F}},\breve{F})$. In this case, we have
better results.
\begin{prop}\label{mono triv}
Assume that $x$ is a special vertex. Then the action of the inertia
$I_{\tilde F}$ on $ {{\rm R}\Psi_\mu}$ is trivial. \end{prop}
\begin{proof}
Observe that when $x$ is a special vertex, the hypercohomology
functor
\[
{\mathrm H}^*:\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell)\to\on{Vect}_{\overline{{\mathbb Q}}_\ell}
\]
is a faithful functor. This follows from the semisimplicity of
$\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell)$,
as is shown in \cite{MirkVilonen, ZhuSatake}. Then the proposition
is a direct corollary of Lemma \ref{triv}.
\end{proof}
\begin{Remark}{\rm
When $G={\rm Res}_{K/F}{\rm GL}_n$, where $K$ is a finite extension of $F$
with Galois hull $\tilde{F}$ in $\bar{F}$, and $\mu$ is a Shimura
(minuscule) cocharacter, Proposition \ref{mono triv} is shown in \cite{PappasRaI}. When $G$ is a ramified unitary similitude group and $\mu$ is a
Shimura cocharacter, it is shown in \cite{ZhuSatake}.
In these two cases, one can also obtain an explicit description of the monodromy
action of $ I_E$ on ${\rm R}\Psi_\mu$. See \cite[Sect. 7]{PappasRaI} (also Example \ref{exPRJAG} below)
and \cite[Theorem 6.2]{ZhuSatake} respectively.
We will return to
the subject of the monodromy action on ${\rm R}\Psi_\mu$
in \S \ref{quasisplit}. There we will give a uniform but less
explicit description, in all cases that $G$ is quasi-split and $x$ is special.}
\end{Remark}
\subsection{The semi-simple trace}\label{sstraceSect}
\subsubsection{} As is explained in \cite[\S 3.1]{HainesNgoNearby}, for any scheme $X$ over $s$,
and ${\mathbb F}_q\supset k(s)$, there is a map
\[\tau^{\on{ss}}: \on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)\to \on{Func}(X({\mathbb F}_{q}),\overline{{\mathbb Q}}_\ell),\]
called the semi-simple trace. This notion is due to Rapoport. Let us
briefly recall its definition and refer to \cite{HainesNgoNearby}
for details.
Let $V$ be an $\ell$-adic representation of $\Ga$. An admissible
filtration of $V$ is an increasing filtration $F_\bullet V$, stable
under the action of $\Ga$ and such that, for all $i$, the action of
$I$ on $V_i/V_{i-1}$ factors through a finite quotient. For an
admissible filtration of $V$, one defines
\[{\rm Tr}^{\on{ss}}(\sigma, V)=\sum_i{\rm Tr}(\sigma, (\on{gr}^F_iV)^I),\]
where $\sigma$ is the geometric Frobenius element. It is well-known
that admissible filtrations always exist and that the semi-simple
trace ${\rm Tr}^{\on{ss}}(\sigma, V)$ does not depend on the choice of
admissible filtration.
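For orientation, note the following special case, which is immediate from the definition: if the action of $I$ on $V$ itself factors through a finite quotient, then the trivial filtration $0\subset V$ is admissible and
\[
{\rm Tr}^{\on{ss}}(\sigma, V)={\rm Tr}(\sigma, V^I);
\]
in particular, if $I$ acts trivially on $V$, the semi-simple trace reduces to the usual trace ${\rm Tr}(\sigma, V)$.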
Next, if ${\mathcal F}\in\on{Sh}_c(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$,
then for every $x\in X({\mathbb F}_q)$, we define
\[\tau_{\mathcal F}^{\on{ss}}(x)={\rm Tr}^{\on{ss}}(\sigma_x,{\mathcal F}_{\bar{x}}),\]
where $\bar{x}$ is a geometric point over $x$ and $\sigma_x$ is the
geometric Frobenius of ${\rm Gal}(\bar{x}/x)$. In general, if ${\mathcal F}\in
\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$, then we set
\[
\tau_{\mathcal F}^{\on{ss}}(x)=\sum_i(-1)^i\tau_{{\mathcal H}^i{\mathcal F}}^{\on{ss}}(x).
\]
It is known that taking the semi-simple trace gives an analogue of
Grothendieck's usual sheaf-function dictionary. Namely, we can see that
$\tau^{\on{ss}}$ factors through the Grothendieck group of
$\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$. Also, if $f:X\to
Y$ is a morphism over $s$, then
\begin{equation}\label{sstrF}
f^*\tau^{\on{ss}}_{\mathcal F}=\tau^{\on{ss}}_{f^*{\mathcal F}},\quad
f_!\tau^{\on{ss}}_{\mathcal F}=\tau^{\on{ss}}_{f_!{\mathcal F}}.
\end{equation}
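In particular, since $\tau^{\on{ss}}$ factors through the Grothendieck group, for any distinguished triangle ${\mathcal F}'\to{\mathcal F}\to{\mathcal F}''\to$ in $\on{D}_c^b(X\times_s\eta,\overline{{\mathbb Q}}_\ell)$ we have the additivity
\[
\tau^{\on{ss}}_{{\mathcal F}}=\tau^{\on{ss}}_{{\mathcal F}'}+\tau^{\on{ss}}_{{\mathcal F}''}.
\]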
\subsubsection{}
Now, consider $x\in {\mathcal B}(G, F)$ and let ${\mathcal P}_x$ be the
parahoric group scheme as before. Let $G'={\mathcal G}\times_X
{\rm Spec } (k((u)))$ be the corresponding reductive group over $k((u))$
and suppose ${\mathcal P}_{x_{k((u))}}={\mathcal G}\times_X{\rm Spec } (k[[u]])$ is a
corresponding parahoric group scheme over $k[[u]]$. Let
${\mathbb F}_q\supset k$ and set $P'_q:={\mathcal P}_{x_{k((u))}}({\mathbb F}_q[[u]])$.
Let ${\mathcal H}_q(G',P'):={\mathcal H}(G'({\mathbb F}_q((u))),P'_q)$ be the Hecke
algebra of bi-$P'_q$-invariant, compactly supported, locally constant
$\overline{{\mathbb Q}}_\ell$-valued functions on $G'({\mathbb F}_q((u)))$ under
convolution. This is an associative algebra (usually
non-commutative). Let ${\mathcal Z}({\mathcal H}_q(G',P'))$ denote its center.
Since ${\rm R}\Psi_\mu$
is an object in
$\on{Perv}_{L^+P_{k_E}}({\rm Gr}_{P_k}\times_{k_E}E,\overline{{\mathbb Q}}_\ell)$,
for every ${\mathbb F}_q\supset k_E$, we can consider $\tau_{{\rm
R}\Psi_\mu}^{\on{ss}}\in {\mathcal H}_q(G',P')$. As in
\cite{HainesNgoNearby}, we can see that Theorem \ref{comm
constraints} implies
\begin{thm} \label{thm9.13} With the above assumptions and notations,
$\tau^{\on{ss}}_{{\rm
R}\Psi_\mu}\in{\mathcal Z}({\mathcal H}_q(G',P'))$.\ \endproof
\end{thm}
In what follows, we explain how, in some cases, we can determine (or
characterize) the central function $\tau^{\rm ss}_{{\rm
R}\Psi_\mu}$.
\subsubsection{Unramified groups}
Here, we assume that $G$ is unramified, i.e. that $G$ is quasi-split
and splits over $\breve{F}$.
\smallskip
a) First assume that $x$ is hyperspecial. Then ${\rm Gr}_{P_{\bar{k}}}$
is isomorphic to the usual affine Grassmannian ${\rm Gr}_H\otimes
\bar{k}$ of $H$. In this case, the geometric fiber
$\overline{M}_{{\mathcal G}, \mu }\otimes_{k_E}\bar{k}$ is isomorphic (up to
nilpotents) to the Schubert variety
$\overline{\on{Gr}}_{\mu,\bar{k}}\subset {\rm Gr}_H\otimes\bar k$
corresponding to $\mu$. (If $p\nmid|\pi_1(G_{\on{der}})|$, these are
isomorphic by Theorem \ref{special fiber}.) Let us denote the
intersection cohomology sheaf on $\overline{M}_{{\mathcal G}, \mu }$ by
$\bar{{\mathcal F}}_\mu$ so that $(\bar{{\mathcal F}}_\mu
)_{\bar{k}}\simeq\on{IC}_{\mu,\bar{k}}$. By Proposition \ref{mono
triv}, we know that ${\rm R}\Psi_\mu$ is a
Weil sheaf on $\overline{{\rm Gr}}_{\mu,\bar{k}}$.
By Lemma \ref{groupaction} and Lemma \ref{sla},
$(\bar{{\mathcal F}}_\mu)_{\bar{k}}$ is a subquotient of $({\rm
R}\Psi_\mu)_{\bar{k}}$ as Weil sheaves; forgetting the Weil
structure over $k_E$, it is even a direct summand, since
$\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},
\overline{{\mathbb Q}}_{\ell})$ is a semi-simple category. Then it follows
from
\[\dim_{\overline{{\mathbb Q}}_{\ell}} {\mathrm H}^*(\on{IC}_{\mu})=\dim_{\overline{{\mathbb Q}}_{\ell}} {\mathrm H}^*({\mathcal F}_\mu)=\dim_{\overline{{\mathbb Q}}_{\ell}} {\mathrm H}^*({\rm R}\Psi_\mu),\]
that we have
\begin{prop}\label{propWeil}
${\rm R}\Psi_\mu\simeq \bar{{\mathcal F}}_{\mu}$ as Weil sheaves.\endproof
\end{prop}
In particular, if $\mu$ is a minuscule coweight and $x$ is
hyperspecial, then $\overline{ M }_{{\mathcal G}, \mu }$ is smooth; setting
$d=\dim \overline{ M }_{{\mathcal G}, \mu }=2(\rho, \mu)$, we have
\begin{equation}
{\rm R}\Psi_\mu\simeq \overline{{\mathbb Q}}_{\ell}[d](d/2)
\end{equation}
(the constant sheaf on
$\overline{ M }_{{\mathcal G}, \mu }$ up to cohomological shift and Tate
twist).
For simplicity, write $K=P'_q$ and denote ${\mathcal H}_q(G',P')$ by
$\on{Sph}_q$ (the notation stands for the spherical Hecke algebra),
which in this case is already a commutative algebra. It is well-known that
the function
\[
A_\mu:=\tau^{\on{ss}}_{\bar{{\mathcal F}}_{\mu}}\in\on{Sph}_q
\]
is given by the Lusztig-Kato polynomial (at least when $G$ is
split). Of course, in the case that $\mu$ is minuscule,
$A_\mu=(-1)^{2(\rho,\mu)}q^{(\rho,\mu)}1_{Ks_\mu K}$, where
$1_{Ks_\mu K}$ is the characteristic function on the double coset
$Ks_\mu K$.
\smallskip
b) Next we let $x$ be a general point in ${\mathcal B}(G, F)$. We
can then obtain the following statement, which was conjectured by
Kottwitz and previously proven in the cases $G= {\rm GL}_n$ and
$\on{GSp}_{2n}$ by Haines and Ng\^o (\cite{HainesNgoNearby}, see
also \cite{RostamiThesis}).
\begin{thm} \label{ThmKott}(Kottwitz's conjecture)
Assume that $G$ is unramified. Then with the notations above,
$\tau^{\on{ss}}_{{\rm R}\Psi_\mu}$ is the unique element in the
center ${\mathcal Z}({\mathcal H}_q(G',P'))$, whose image under the Bernstein
isomorphism ${\mathcal Z}({\mathcal H}_q(G',P'))\xrightarrow{\sim} \on{Sph}_q$ is
$A_\mu$.
\end{thm}
\begin{proof} See \cite[Theorem 3.1.1]{HainesBC} for the
Bernstein isomorphism in this case. We first show the result when
$x$ is in an alcove, i.e. the parahoric subgroup is an Iwahori. Then
the proof follows, exactly as is explained in
\cite{HainesNgoNearby},
from Theorem \ref{thm9.13} and Proposition \ref{propWeil},
by using (\ref{sstrF}) and the fact that taking nearby cycles
commutes with proper push-forward. We refer the reader to loc. cit.
for more details.
The general parahoric case now follows similarly by first finding an Iwahori subgroup
contained in the parahoric and then using the compatibility of the
Bernstein isomorphism with change in parahoric subgroup
(\cite[3.3.1]{HainesBC}).
\end{proof}
\begin{Remark}
{\rm Assume that $x$ lies in an alcove, i.e. the parahoric subgroup
is an Iwahori, and that $\mu$ is minuscule. Using Bernstein's
presentation of the Iwahori-Hecke algebras (again at least for $G$
split), it is possible to give
explicit formulas for the functions $\tau^{\on{ss}}_{R\Psi_\mu}$. See
\cite{HainesNgoNearby} and \cite{HainesTest} for such formulas and various other
related results.}
\end{Remark}
\subsubsection{Quasi-split groups; the special vertex case}\label{quasisplit}
Now, we assume that $G$ is quasi-split (but can be split only after
a ramified extension). We will restrict here to the case that $x$ is
a special vertex in ${\mathcal B}(G, F)$, i.e. it is special and remains
special in ${\mathcal B}(G_{\breve F}, \breve F)$ in the sense of
Bruhat-Tits. As in this case the special fiber
$\overline{M}_{{\mathcal G},\mu}$ is not necessarily smooth, the nearby cycle
${\rm R}\Psi_\mu$ can be complicated. Our goal is to determine the
semi-simple trace $\tau^{\on{ss}}_{{\rm R}\Psi_\mu}$ but before
that we need to understand the monodromy action on ${\rm
R}\Psi_\mu$.
We will start by describing $\widetilde{{\rm R}\Psi}_\mu$
explicitly.
By Proposition \ref{mono triv}, $\widetilde{{\rm R}\Psi}_\mu$ is a Weil sheaf
on ${\rm Gr}_{P_{\bar{k}}}$.
Let us briefly recall the (ramified) geometric Satake correspondence
as established in \cite{MirkVilonen, ZhuSatake}. Let $H^\vee$ be the
dual group of the split form $H$ of $G$ over $\overline{{\mathbb Q}}_\ell$
in the sense of Langlands (i.e. the root datum of $H^\vee$ is dual
to the root datum of $H$). Then the Galois group $\Ga={\rm Gal}(\tilde
F/F)$ (and therefore $I=I_F$) acts on $H^\vee$ via pinned
automorphisms (after choosing some pinning). Recall the notation
$G'={\mathcal G}\times_X{\rm Spec } (k((u)))$ and $P_k={\mathcal G}\times_X{\rm Spec } (k[[u]])$;
our constructions allow us to identify the actions of $I_F$ and
$I_{k((u))}$ on $H^\vee$ obtained from $G$ and $G'$ respectively.
The geometric Satake correspondence of \cite{MirkVilonen, ZhuSatake}
is an equivalence of tensor categories
\[{\mathcal S}_{\bar{k}}:\on{Rep}((H^\vee)^I)\xrightarrow{ \sim }\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell),\]
such that the composition ${\mathrm H}^*\circ{\mathcal S}_{\bar{k}}$ is isomorphic
to the natural forgetful fiber functor $\on{Rep}((H^\vee)^I)\to {\rm
Vect}(\overline{{\mathbb Q}}_\ell)$. On the other hand,
${\rm Gr}_{{\mathcal G},\O}\otimes_{\O}\tilde{F}$ is just ${\rm Gr}_{H, \tilde{F}}$ for
the split group $H$ and by \cite{MirkVilonen} there is a full
embedding
\[
{\mathcal S}_{\tilde{F}}:\on{Rep}(H^\vee)\to\on{Perv}_{({\mathcal L}^+{\mathcal G})_{\tilde{F}}}({\rm Gr}_{{\mathcal G},\O}\otimes_{\O}\tilde{F},\overline{{\mathbb Q}}_\ell).
\]
This maps $V_\mu$, the highest weight representation of $H^\vee$
corresponding to $\mu\in\mathbb{X}_\bullet(T)$, to $\on{IC}_{\mu,\tilde{F}}$, so
that ${\mathcal S}_{\tilde F}(V_\mu)=\on{IC}_{\mu,\tilde F}$. Again, the
composition ${\mathrm H}^*\circ {\mathcal S}_{\tilde F}$ is isomorphic to the
forgetful functor.
\begin{thm}\label{thm9.19}
The composition
\begin{multline*} ({\mathcal S}_{\bar{k}})^{-1}\circ{\rm R}\Psi^{{\rm Gr}_{{\mathcal G},\O}\otimes\O_{\tilde{F}}}\circ{\mathcal S}_{\tilde{F}}:\\
\on{Rep}(H^\vee)\to\on{Perv}_{({\mathcal L}^+{\mathcal G})_{\tilde{F}}}({\rm Gr}_{{\mathcal G},\O}\otimes_\O\tilde{F},\overline{{\mathbb Q}}_\ell)
\to
\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell)\to\on{Rep}((H^\vee)^I)
\end{multline*}
is isomorphic to the natural restriction functor
\[\on{Res}:\on{Rep}(H^\vee)\to\on{Rep}((H^\vee)^I).\]
In particular, $\widetilde{{\rm R}\Psi}_\mu\simeq
{\mathcal S}_{\bar{k}}(\on{Res}(V_\mu))$.
\end{thm}
\begin{proof}Since this is a statement about geometric sheaves, we
can assume that the residue field of $\tilde{F}$ is algebraically
closed (i.e. we replace $\tilde{F}$ by $\tilde{F}\breve{F}$ in
$\bar{F}$). In addition, we can ignore the Tate twist in what
follows.
Consider the ramified cover ${\mathbb A}^1_{\O_{\tilde{F}}}\to{\mathbb A}^1_\O$
given by $\O[u]\to\O_{\tilde{F}}[v], u\mapsto v^e$. Let us denote
the base change
\[\widetilde{{\rm Gr}}_{{\mathcal G},X}:={\rm Gr}_{{\mathcal G},X}\times_{{\mathbb A}^1_\O}{\mathbb A}^1_{\O_{\tilde{F}}}.\]
We will consider the sub ind-schemes of $\widetilde{{\rm Gr}}_{{\mathcal G},X}$
given by specializing $\O_{\tilde{F}}[v]$ along different
directions.
\begin{enumerate}
\item The map $\O_{\tilde{F}}[v]\to\tilde{F}$, $v\mapsto \tilde{\varpi}$
gives rise to a closed embedding
$i_{v=\tilde{\varpi}}:{\rm Gr}_{{\mathcal G},\O}\otimes\O_{\tilde{F}}\to\widetilde{{\rm Gr}}_{{\mathcal G},X}$.
We denote the open embedding
${\rm Gr}_{{\mathcal G},\O}\otimes\tilde{F}\to{\rm Gr}_{{\mathcal G},\O}\otimes\O_{\tilde{F}}$
by $j_{v=\tilde{\varpi}}$.
\item The map $\O_{\tilde{F}}[v]\to\tilde{F}[v]$ gives rise to
$j_{\tilde{\varpi}\neq
0}:\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\tilde{F}}}\to
\widetilde{{\rm Gr}}_{{\mathcal G},X}$. We denote
$\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_{m\tilde{F}}}\to
\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\tilde{F}}}$ by $j_{\tilde{F}}$.
\item The map $\O_{\tilde{F}}[v]\to\bar{k}[v]$,
$\tilde{\varpi}\mapsto 0$ gives rise to
$i_{\tilde{\varpi}=0}:\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\bar{k}}}\to
\widetilde{{\rm Gr}}_{{\mathcal G},X}$. By its very definition, this is the
ind-scheme over ${\mathbb A}^1_{\bar{k}}$ considered in \cite{ZhuSatake}
(in \emph{loc. cit.}, it is denoted by $\widetilde{{\rm Gr}}_{\mathcal G}$). We
denote $\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_{m\bar{k}}}\to
\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\bar{k}}}$ by $j_{\bar{k}}$.
\item The map $\O_{\tilde{F}}[v]\to\O_{\tilde{F}}$, $v\mapsto 0$, gives rise
to $i_{v=0}:{\rm Gr}_{P_{\O_{\tilde{F}}}}\to\widetilde{{\rm Gr}}_{{\mathcal G},X}$, and
its open complement is denoted by $j_{v\neq 0}:
\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_m}\to \widetilde{{\rm Gr}}_{{\mathcal G},X}$.
\item The map $\O_{\tilde{F}}[v]\to\bar{k}$, $v\mapsto 0$, $\varpi\mapsto 0$ gives rise
to $i:{\rm Gr}_{P_{\bar{k}}}\to \widetilde{{\rm Gr}}_{{\mathcal G},X}$.
\end{enumerate}
Since the reductive group scheme $\underline{G}$ constructed in
Sect. \ref{reductive group} splits after the base change $\O[u^{\pm
1}]\to \O_{\tilde F}[v^{\pm 1}]$, we have
\[
\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_m}\simeq ({\rm Gr}_{H}\times
{\mathbb G}_{m})\otimes_\O\O_{\tilde{F}}.
\]
Then for $\mu\in\mathbb{X}_\bullet(T)$, we can regard
$\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell[1]$, which is a
perverse sheaf on $({\rm Gr}_{H}\times {\mathbb G}_{m})$, as a perverse sheaf on
$\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_{m}}$.
Similarly, we have
$\on{IC}_{\mu,\kappa}\boxtimes\,\overline{{\mathbb Q}}_\ell[1]$ over
$\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_{m\kappa}}$
for $\kappa=\bar{k}$ or $\tilde{F}$.
By the construction of ${\mathcal S}_{\bar k}$ in \cite{ZhuSatake}, there
is a canonical isomorphism
\[
{\rm
R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\bar{k}}}}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell[1])\simeq
{\mathcal S}_{\bar{k}}(\on{Res}V_\mu).
\]
Therefore, the theorem will follow if we can construct a canonical
isomorphism
\begin{equation}\label{step1}
{\rm
R}\Psi^{{\rm Gr}_{{\mathcal G},\O}\otimes\O_{\tilde{F}}}({\mathcal S}_{\tilde{F}}(V_\mu))\simeq
{\rm
R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb A}^1_{\bar{k}}}}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell[1]).
\end{equation}
In what follows, we will make use of the following standard lemma.
(See also \cite[Corollary 2.3]{ZhuSatake}).
\begin{lemma}\label{another}
Let ${\mathfrak X}\to S$ be as before with $S$ a strictly Henselian trait.
Let ${\mathcal F}$ be a perverse sheaf on ${\mathfrak X}_\eta$. If the inertia
action on ${\rm R}\Psi^{\mathfrak X}({\mathcal F})$ is trivial, then
\[
{\rm R}\Psi^{\mathfrak X}({\mathcal F})\simeq {^p}{\mathrm H}^{0}i^*j_{*}{\mathcal F}\simeq
{^p}{\mathrm H}^{1}i^*j_*{\mathcal F}\simeq i^*j_{!*}{\mathcal F},
\]
where ${^p}{\mathrm H}^*$ stands for perverse cohomology.
\end{lemma}
\begin{proof}Since the inertia action is trivial, from the distinguished triangle
\begin{equation}\label{distinguish}
i^*j_*{\mathcal F}\to {\rm R}\Psi^{\mathfrak X}({\mathcal F})\stackrel{0}{\to} {\rm
R}\Psi^{\mathfrak X}({\mathcal F})\to \ ,
\end{equation}
(see for example \cite[(3.6.2)]{IllusieMonodromy}), we obtain that
$i^*j_*{\mathcal F}$ is supported in perverse cohomological degree $0$ and
$1$, and both cohomology sheaves are isomorphic to ${\rm
R}\Psi^{\mathfrak X}({\mathcal F})$. But $i^*j_{!*}{\mathcal F}={^p{\mathrm H}}^{1}i^*j_*{\mathcal F}$.
The lemma follows.
\end{proof}
Using this, we see that we can reduce \eqref{step1} to proving an
isomorphism
\begin{equation}\label{step2}
{^p}{\mathrm H}^{0}i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell)\xrightarrow{\sim}{^p}{\mathrm H}^{0}i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu).
\end{equation}
Observe that there is a natural map $i^*_{v=\tilde{\varpi}}j_{v\neq
0*}(\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\to
j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu)$, which is the
adjunction of $j_{v=\tilde{\varpi}}^*i^*_{v=\tilde{\varpi}}j_{v\neq
0*}(\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\xrightarrow{\sim
} {\mathcal S}_{\tilde{F}}(V_\mu)$. Similarly, we have a map
\begin{equation}
\label{stronger} i^*_{\tilde{\varpi}=0}j_{v\neq
0*}(\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\to
j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell).
\end{equation}
The key observation, which we will prove later, is that
\begin{lemma}\label{stronger1}
The map \eqref{stronger} is an isomorphism.
\end{lemma}
From this, we obtain the following correspondence
\begin{equation}\label{adjunction}
{^p}{\mathrm H}^0i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell)\leftarrow
{^p}{\mathrm H}^0i^*j_{v\neq
0*}(\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\to
{^p}{\mathrm H}^0i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu),
\end{equation}
with the first arrow an isomorphism. Therefore, we can invert the
first arrow of \eqref{adjunction} and obtain an arrow as in
\eqref{step2}. To show that this gives an isomorphism, it is enough
to show that it induces an isomorphism on cohomology since as we
mentioned before,
${\mathrm H}^*:\on{Perv}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell)\to
{\rm Vect}(\overline {\mathbb Q}_\ell)$ is faithful.
Let $f:\widetilde{{\rm Gr}}_{{\mathcal G},X}\to{\mathbb A}^1_{\O_{\tilde{F}}}$ be the
structure map, which is ind-proper. Therefore, the (derived)
push-forward $f_*$ commutes with any pullback and push-forward. Since
over $({\mathbb G}_m)_{\O_{\tilde F}}\subset {\mathbb A}^1_{\O_{\tilde{F}}}$,
$f_*(\on{IC}_{\mu}\boxtimes\, \overline{{\mathbb Q}}_\ell)$ is just
constant, with stalks isomorphic to $V_\mu[1]$, it is immediately
seen that
\[
f_*i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell)\simeq
f_*i^*j_{v\neq
0*}(\on{IC}_{\mu}\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\simeq f_*
i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu)\simeq V_\mu.
\]
Observe that the triangle \eqref{distinguish} implies that
$i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell)={^p}{\mathrm H}^0(i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell))\oplus{^p}{\mathrm H}^1(i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell))[-1]$
as objects in
$\on{D}_{L^+P_{\bar{k}}}({\rm Gr}_{P_{\bar{k}}},\overline{{\mathbb Q}}_\ell)$
(and similarly for
$i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu))$. Therefore the
isomorphism
${\mathrm H}^*(i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\overline{{\mathbb Q}}_\ell))\simeq
{\mathrm H}^*(i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu))$ implies
that the natural map \eqref{adjunction} induces the isomorphism
\[{\mathrm H}^*({^p}{\mathrm H}^0i^*j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\overline{{\mathbb Q}}_\ell))\simeq
{\mathrm H}^*({^p}{\mathrm H}^0i^*j_{v=\tilde{\varpi}*}{\mathcal S}_{\tilde{F}}(V_\mu)),\]
and the theorem follows.
It remains to prove Lemma \ref{stronger1}. This will follow if we
can show
\begin{lemma}\label{step3}
Consider the natural structure map
$\widetilde{{\rm Gr}}_{{\mathcal G},X}\to\O_{\tilde{F}}$. Then:
(i) The vanishing cycle ${\rm
R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{v\neq
0*}(\on{IC}_\mu\boxtimes\overline{{\mathbb Q}}_\ell))=(0)$.
(ii) The natural map
\[{\rm R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{\tilde{F}*}(\on{IC}_{\mu,\tilde{F}}\boxtimes\,\overline{{\mathbb Q}}_\ell))\to j_{\bar{k}*}{\rm R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_{m}}}(\on{IC}_{\mu,\tilde{F}}\boxtimes\,\overline{{\mathbb Q}}_\ell)\simeq j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell),\]
is an isomorphism.
\end{lemma}
Indeed, the natural map \eqref{stronger} factors as
\[i_{\tilde{\varpi}=0}^*j_{v\neq 0*}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell)[-1]\to{\rm R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{\tilde{F}*}(\on{IC}_{\mu,\tilde{F}}\boxtimes\,\overline{{\mathbb Q}}_\ell))\to j_{\bar{k}*}(\on{IC}_{\mu,\bar{k}}\boxtimes\,\overline{{\mathbb Q}}_\ell).\]
(To see these two maps coincide, apply the adjunction between
$i_{\tilde{\varpi}=0}^*$ and $i_{\tilde{\varpi}=0*}$.) Now by Part
(i) of the lemma, the first arrow is an isomorphism, and by Part
(ii) of the lemma, the second arrow is an isomorphism. This shows
that Lemma \ref{step3} implies Lemma \ref{stronger1}.
Finally, we prove Lemma \ref{step3}. This statement can be regarded
as a global analogue of Lemma \ref{no monodromy}. Since there is no
resolution of $\widetilde{{\rm Gr}}_{{\mathcal G},X}$ satisfying the conditions in
\cite[\S 5.2]{HainesNgoNearby}, we need a different argument. First,
we prove (i). According to Gabber's theorem, since $j_{v\neq
0*}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell)[1]$ is perverse,
${\rm R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{v\neq
0*}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell))$ is a perverse
sheaf. On the other hand, by \cite[\S 5.2]{HainesNgoNearby},
\[
j_{\bar{k}}^*{\rm R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{v\neq
0*}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell))={\rm
R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},{\mathbb G}_m}}(j_{v\neq 0}^*j_{v\neq
0*}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell))=(0).
\]
Therefore, ${\rm R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{v\neq
0*}(\on{IC}_\mu\boxtimes\overline{{\mathbb Q}}_\ell))$ is a perverse sheaf
on ${\rm Gr}_{P_{\bar{k}}}$, which is $L^+P_{\bar{k}}$-equivariant by the
same argument as in Lemma \ref{equiv str}. To show it is trivial, it
is enough to prove that its cohomology vanishes. To do this we can
apply $f_*$ with
$f:\widetilde{{\rm Gr}}_{{\mathcal G},X}\to{\mathbb A}^1_{\O_{\tilde{F}}}$ as above, and
we are done.
To prove (ii), as the nearby cycle functor commutes with Verdier
duality (cf. \cite[Theorem 4.2]{IllusieMonodromy}), it is enough to
prove the dual statement that the natural map
\[
j_{\bar{k}!}(\on{IC}_{\mu,\bar{k}}\boxtimes\overline{{\mathbb Q}}_\ell)\to
{\rm
R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}j_{\tilde{F}!}(\on{IC}_{\mu,\tilde{F}}\boxtimes\,\overline{{\mathbb Q}}_\ell)
\]
is an isomorphism. But the vanishing cycle ${\rm
R}\Phi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}(j_{v\neq
0!}(\on{IC}_\mu\boxtimes\,\overline{{\mathbb Q}}_\ell))=(0)$ by a similar
argument
as above, and therefore
\begin{multline*}
\ \ \ \ \ \ \ \ \ \ {\rm
R}\Psi^{\widetilde{{\rm Gr}}_{{\mathcal G},X}}j_{\tilde{F}!}(\on{IC}_{\mu,\tilde{F}}\boxtimes\,\overline{{\mathbb Q}}_\ell)\simeq
i_{\tilde{\varpi}=0}^*j_{v\neq
0!}(\on{IC}_\mu\boxtimes\overline{{\mathbb Q}}_\ell)[-1]\simeq \\
\simeq
j_{\bar{k}!}i_{\tilde{\varpi}=0}^*(\on{IC}_\mu\boxtimes\overline{{\mathbb Q}}_\ell)[-1]\simeq
j_{\bar{k}!}(\on{IC}_{\mu,\bar{k}}\boxtimes\overline{{\mathbb Q}}_\ell).\ \
\ \ \ \ \ \ \
\end{multline*}
This completes the proof of Lemma \ref{step3} and the theorem.
\end{proof}
\smallskip
Let us use Theorem \ref{thm9.19} to determine the inertia action on
${\rm R}\Psi_\mu$. Let $H^\vee\rtimes{\rm Gal}(\bar F/E)$ be the
Langlands dual group of $G_E$. Then according to \cite{RiZh},
${\mathrm H}^*({\mathcal F}_\mu)$ carries a canonical action of $H^\vee\rtimes
{\rm Gal}(\bar F/E)$, that factors through $H^\vee\rtimes {\rm Gal}(\tilde
F/E)$, whose underlying representation of $H^\vee$ is just $V_\mu$
and whose underlying representation of ${\rm Gal}(\tilde{F}/E)$ is (a
certain twist of) the natural action of ${\rm Gal}(\tilde{F}/E)$ on
${\mathrm H}^*({\mathcal F}_\mu)$. In addition, when we restrict this action to $I_E$,
the twist disappears. Therefore, $V_\mu$ is naturally an
$(H^\vee)\rtimes I_E$-module. Since nearby cycles commute with
proper push-forward, we have ${\mathrm H}^*({\rm R}\Psi_\mu)\simeq
\on{Res}(V_\mu)$ as a representation of $(H^\vee)^{I_F}\times I_E$.
Therefore, we obtain that
\begin{thm}\label{thm9.23}
Under the isomorphism
${\mathcal S}_{\bar{k}}(\on{Res}(V_\mu))\xrightarrow{\sim} {\rm R}\Psi_\mu$
obtained from Theorem \ref{thm9.19}, the action of the inertia $I_E$
on ${\rm R}\Psi_\mu$ corresponds to the action of $I_E\subset
(H^\vee)^{I_F}\times I_E$ on $\on{Res}(V_\mu)$.
\end{thm}
\begin{example}\label{exPRJAG}
{\rm Let us consider the following example, which was discussed in
\cite{PappasRaI} (note that here we use different notation). Let
$G=\on{Res}_{K/F}{\rm GL}_n$ and ${\mathcal P}_x=\on{Res}_{\O_{K}/\O_F}{\rm GL}_n$,
where $K/F$ is a totally {\sl tamely} ramified extension of degree
$d$.
In this case, we have ${\rm Gr}_{P_k}\simeq{\rm Gr}_{{\rm GL}_n}$, and
${\rm Gr}_{{\mathcal G},\O}\otimes_{\O}\tilde{F}\simeq({\rm Gr}_{{\rm GL}_n})^d$. In
addition, $H^\vee={\rm GL}_n^d$, and $(H^\vee)^I={\rm GL}_n$, embedded in
$H^\vee$ via the diagonal embedding.
Let $\tilde{F}$ be the Galois closure of $K/F$. Choose an ordering
of the set of embeddings $\phi: \tilde F\to \bar F$ which allows us
to identify ${\rm Gal}(\tilde F/F)$ with a subgroup of the symmetric
group $S_d$. Let $\mu$ be a minuscule coweight of $G$. We can write
$\mu=(\mu_1,\mu_2,\ldots,\mu_d)$ with $\mu_i :{{\mathbb G}_{\rm m}}_{\tilde F}\to
({\rm GL}_n)_{\tilde F}$; if $E$ is the reflex field of $\mu$, the Galois
group ${\rm Gal}(\tilde F/E)$ is the subgroup of those $\sigma\in
S_d$ with $\mu_{\sigma(i)}=\mu_i$. In this case,
\[\on{Res}(V_\mu)=V_{\mu_1}\otimes\cdots\otimes V_{\mu_d}\]
as a representation of ${\rm GL}_n=(H^\vee)^I$. Therefore, Theorem
\ref{thm9.19} gives
\[\widetilde{{\rm R}\Psi}_\mu\simeq \on{IC}_{\mu_1}\star\on{IC}_{\mu_2}\star\cdots\star\on{IC}_{\mu_d}.\]
Indeed, this has been proved in \cite[Theorem 7.1]{PappasRaI}.
As the action of
${\rm Gal}(\tilde{F}/E)$ on $V_{\mu_1}\otimes\cdots \otimes V_{\mu_d}$
commutes with the action of $(H^\vee)^{I_F}$, we can write
\[V_{\mu_1}\otimes\cdots \otimes V_{\mu_d}=\bigoplus_{\lambda\leq\mu_1+\cdots+\mu_d} M_\lambda\otimes V_\lambda,\]
where each $M_{\lambda}$ is a representation of ${\rm Gal}(\tilde{F}/E)$.
Therefore, Theorem \ref{thm9.23} now implies that
\[{\rm R}\Psi_\mu\simeq \bigoplus_{\lambda\leq\mu_1+\cdots+\mu_d} M_\lambda\otimes\on{IC}_\lambda,\]
and the action of ${\rm Gal}(\tilde{F}/E)$ on ${\rm R}\Psi_\mu$ is
through the action on each $M_\lambda$. This decomposition was
conjectured in \cite[Remark 7.4]{PappasRaI}. Actually, it seems that
our techniques can be extended to obtain this even when $K/F$ is
wildly ramified,
but we prefer to leave this for another time.}
\end{example}
\subsubsection{}
Finally, let us determine the (semi-simple) trace
$\tau^{\on{ss}}_{{\rm R}\Psi_\mu}$ in the case that $G$ is
quasi-split (but not unramified) and $x$ is special.
We first recall the following extension of the
geometric Satake Langlands isomorphism (cf. \cite[Theorem
0.2]{ZhuSatake}). Recall $P_k={\mathcal P}_{x_{k((u))}}$, a parahoric group
scheme over $k[[u]]$. The affine flag variety ${\rm Gr}_{P_k}$ is defined
over $k$. Let ${\mathfrak P}_x^0$ be the category of $L^+P_k$-equivariant,
semi-simple perverse sheaves on ${\rm Gr}_{P_k}$, pure of weight zero.
Then there is an equivalence
\[
{\mathcal S}_k:
\on{Rep}((H^\vee)^I\rtimes{\rm Gal}(\bar{k}/k))\xrightarrow{\sim}
{\mathfrak P}_x^0,
\]
where we consider $(H^\vee)^I\rtimes{\rm Gal}(\bar{k}/k)$ as a
pro-algebraic group over $\overline{{\mathbb Q}}_\ell$ and
$\on{Rep}((H^\vee)^I\rtimes{\rm Gal}(\bar{k}/k))$ is its category of
algebraic representations. Let $\bar{\mu}\in(\mathbb{X}_\bullet(T)_I)^\sigma$ so
that the corresponding Schubert variety in ${\rm Gr}_{P_k}$ is defined
over $k$. Let $\on{IC}_{\bar{\mu}}$ be its intersection cohomology
sheaf and $A_{\bar{\mu}}$ be the function obtained by taking the
Frobenius trace on the stalks of $\on{IC}_{\bar{\mu}}$. Let
$W_{\bar{\mu}}={\mathrm H}^*(\on{IC}_{\bar\mu})$. Via the above geometric
Satake isomorphism, $W_{\bar{\mu}}$ is naturally a representation
of $(H^\vee)^I\rtimes{\rm Gal}(\bar{k}/k)$.
Recall that in \cite{ZhuSatake}, we call a smooth irreducible
representation of $G'={\mathcal G}\times_X{\rm Spec } (k((u)))$ over $F'=k((u))$
``unramified'' if it has a vector fixed by
$P'_k={\mathcal P}_{x_{k((u))}}(k[[u]])$. To each such representation,
we attach a Langlands parameter
\[\on{Sat}(\pi): W_{F'}\to {^L}G= H^\vee\rtimes {\rm Gal}(\bar{F}'^s/F'),\]
where $W_{F'}$ is the Weil group of $F'$, such that
$\on{Sat}(\pi)(\gamma)= (1,\gamma)$ for $\gamma\in I_{F'}$. Let
$\Phi$ be a lift of the Frobenius to $W_{F'}$. Then as is shown in
\emph{loc. cit.}, up to conjugation, we can assume that
$\on{Sat}(\pi)(\Phi)\in (H^\vee)^{I_{F'}}\rtimes
{\rm Gal}(\bar{F}'^s/F')$ and is uniquely determined by this image up to
$(H^{\vee})^{I_{F'}}$-conjugacy. Now the Langlands parameter
$\on{Sat}(\pi)$ is characterized by the following identity.
\[
\on{tr}(\pi(A_{\bar{\mu}}))=\on{tr}(\on{Sat}(\pi)(\Phi),
W_{\bar{\mu}}).
\]
Now, since by Proposition \ref{mono triv} the inertia action on
${\rm R}\Psi_\mu$ factors through a finite quotient, we have
\[
\tau^{\on{ss}}_{{\rm R}\Psi_\mu}(x)=\on{tr}^{\on{ss}}(\sigma_x,
({\rm R}\Psi_\mu)_{\bar x})=\on{tr}(\sigma_x, ({\rm
R}\Psi_\mu)_{\bar x}^I) .
\]
We determine ${\rm R}\Psi_\mu^I$ as a Weil sheaf on
${\rm Gr}_{P_{\bar{k}}}$. As nearby cycles commute with proper
pushforward, the cohomology ${\mathrm H}^*({\rm
R}\Psi_\mu^I)={\mathrm H}^*({\mathcal F}_\mu)^I$ is pure of weight zero.
Therefore, ${\rm R}\Psi_\mu^I$ is just the direct sum of
intersection cohomology sheaves on ${\rm Gr}_{P_{\bar{k}}}$, equipped
with the natural Weil structure. By the above discussion and the
results of the previous section, we obtain that
$\tau^{\on{ss}}_{R\Psi_\mu}\in {\mathcal H}_q(G',K')$ is the unique
function such that
\[
\on{tr}(\tau^{\on{ss}}_{R\Psi_\mu}(\pi))=\on{tr}(\on{Sat}(\pi)(\sigma),V_\mu^I).
\]
This agrees with the prediction of Haines and Kottwitz.
\bigskip
\bigskip
\section{Appendix: Homogeneous spaces}
\setcounter{equation}{0}
In this section, we study the representability of certain quotients
of group schemes over a two-dimensional base. The results are used
in Chapter \ref{chapterLoop} to show the ind-representability of the global
affine Grassmannian ${\rm Gr}_{{\mathcal G}, X}$.
\subsection{}We assume that $A$ is an excellent Noetherian regular ring
of Krull dimension $2$. If $G$, $H$ are smooth group schemes over
$A$ and $H\hookrightarrow G$ is a closed group scheme immersion, then
by Artin's theorem (\cite{ArtinVersal}), the fppf quotient $G/H$ is
represented by an algebraic space which is separated of finite
presentation and in fact smooth over $A$.
\begin{lemma}\label{triple}
Suppose that $G_1\hookrightarrow G_2\hookrightarrow G_3$ are closed
group scheme immersions and $G_1$, $G_2$, $G_3$ are smooth over
$A$. The natural morphism
$$
G_3/G_1\to G_3/G_2
$$
is an fppf fibration with fibers isomorphic to $G_2/G_1$. Suppose
that $G_2/G_1$ is quasi-affine (resp. affine). If $G_3/G_2$ is a scheme,
then so is $G_3/G_1$. If in addition $G_3/G_2$ is quasi-affine
(resp. affine), then so is $G_3/G_1$.
\end{lemma}
\begin{proof}
The first statement follows from the fact that fppf descent is
effective for quasi-affine schemes. In fact, by our assumption, $
G_3/G_1\to G_3/G_2 $ is a quasi-affine morphism. If $G_3/G_2$ is
quasi-affine, then $G_3/G_1$ is also quasi-affine (resp. affine) since its
structure morphism to $A$ is a composition of quasi-affine (resp.
affine) morphisms and as such is also quasi-affine (resp. affine).
\end{proof}
\begin{prop}\label{linearrep}
Let ${\mathcal G}, {\mathcal H} \to S={\rm Spec } (A)$ be two smooth affine group schemes
with connected fibers. Assume that ${\mathcal H}$ is a closed subgroup scheme
of ${\mathcal G}$.
Set ${\mathcal G}={\rm Spec } (B)$, so that $B$ is an $A$-Hopf algebra.
Then there is a free finitely generated $A$-module $M=A^n$ with
${\mathcal G}$-action (i.e.\ a $B$-comodule) and a projective $A$-submodule
$W\subset M$ which is locally a direct summand,
such that:
i) There is a ${\mathcal G}$-equivariant surjection ${\rm Sym}^\bullet_A(M)
\to B $ and the ${\mathcal G}$-action on $M$ gives a group scheme
homomorphism $\rho: {\mathcal G}\hookrightarrow {\rm GL}(M)$ which is a
closed immersion.
ii) The representation $\rho$ identifies ${\mathcal H}$ with the subgroup
scheme of ${\mathcal G}$ that stabilizes $W$.
\end{prop}
\begin{proof} Write ${\mathcal G}={\rm Spec } (B)$, ${\mathcal H}={\rm Spec } (B')$ and
let $p: B \to B'$ be the ring homomorphism that corresponds to
${\mathcal H}\subset {\mathcal G}$. Observe that $B$, $B'$ are Hopf algebras over $A$.
We will often refer to the $B$-comodules for the Hopf algebra $B$ as
``modules with ${\mathcal G}$-action". Since ${\mathcal G}$, ${\mathcal H}$ are smooth with
connected geometric fibers, both $B$ and $B'$ are projective
$A$-modules by Raynaud-Gruson \cite[Proposition
3.3.1]{RaynaudGruson}. We start with two lemmas.
\begin{lemma}\label{proj1} Let $P$ be a projective $A$-module and $N\subset P$ be
a finitely generated $A$-submodule. Then the $A$-torsion submodule
of $P/N$ is finitely generated.
\end{lemma}
\begin{proof}As $P$ is a direct summand of a free $A$-module, we can
assume that $P=A^I$ itself is free, with a basis $\{e_{i}; i\in
I\}$. Let $n_1,\ldots,n_t$ be a set of generators of $N$, and write
$n_i=\sum a_{ij}e_j$. Then $J=\{j\in I\mid \exists\ i \mbox{ such
that } a_{ij}\neq 0\}$ is a finite set and $A^I/N=A^J/N\oplus
A^{I-J}$. The conclusion follows.
\end{proof}
\begin{lemma}\label{proj2}Let $P$ be a projective $A$-module and suppose that $N\subset P$ is a
finitely generated $A$-submodule. If $P/N$ is torsion free, then $N$
is a projective $A$-module.
\end{lemma}
\begin{proof}
Observe that $N$ is $A$-torsion free. Consider the double dual
$N^{\vee\vee}$. We have $N\subset N^{\vee\vee}\subset P$, and
$N^{\vee\vee}/N\subset P/N$ is torsion; as $P/N$ is torsion free, $N=N^{\vee\vee}$,
which is projective by our assumption that $A$ is regular and
two-dimensional.
\end{proof}
\smallskip
Now let us prove the proposition. Observe first that \cite[Cor.
3.2]{ThomasonEqRes} and its proof imply (i) of the proposition. (In
other words, \cite{ThomasonEqRes} implies that such a ${\mathcal G}$ is
linear, i.e.\ a closed subgroup scheme of ${\rm GL}_n$.) To obtain the
proof of the whole proposition we have to refine this construction
from \cite{ThomasonEqRes} to account for the subgroup scheme ${\mathcal H}$.
Let $V\subset B$ be a finitely generated $A$-submodule with ${\mathcal G}$
action that contains both a set of generators of $I=\ker (B\to B')$
and a set of generators of $B$ as an $A$-algebra. Let $p(V)$ be the
image of $V$ under $p:B\to B'$. The following diagram is a
commutative diagram of modules with ${\mathcal G}$-action
\[\begin{CD}
V@>{\rm coact}>> V\otimes_A B@>p\otimes 1>> p(V)\otimes_A B\\
@VVV@VVV@VVV\\
B@>{\rm comult}>>B\otimes_A B@>p\otimes 1>>B'\otimes_A B
\end{CD}\]
where ${\mathcal G}$ acts on $V\otimes_A B$, $p(V)\otimes_A B$, $B\otimes_A
B$, $B'\otimes_A B$ via the actions on the second factors.
The image of $N:=(p\otimes 1)\cdot {\rm coact}(V)$ in $p(V)\otimes_A
B$ is a finite $A$-submodule with ${\mathcal G}$-action. Let $\epsilon:B\to
A$ be the counit map, which splits the natural inclusion $A\subset B$. Let $M$ be
the image of $N$ under $B'\otimes B\stackrel{1\otimes\epsilon}{\to}
B'$. Observe that $M$ is a finite $A$-module, but is not necessarily
${\mathcal G}$-stable. By \cite[Prop. 2]{SerreGroIHES}, we can choose a
finite ${\mathcal G}$-stable $A$-module $\tilde{M}$ in $B'$ containing $M$.
By Lemma \ref{proj1}, we can enlarge $\tilde{M}$ if necessary to
assume that $B'/\tilde{M}$ is torsion free (so $\tilde{M}$ is
projective over $A$
by Lemma \ref{proj2}). We regard $\tilde{M}$ as a ${\mathcal G}$-stable submodule of
$B'\otimes_AB$ via $\tilde{M}\subset B'=B'\otimes_AA\subset
B'\otimes_AB$ (this is indeed a ${\mathcal G}$-stable submodule since the
inclusion $A\subset B$ is ${\mathcal G}$-equivariant). Let
$\tilde{N}=\tilde{M}+N$. Then $\tilde{N}$ is a finite ${\mathcal G}$-stable
submodule of $B'\otimes_AB$, and under the map $1\otimes\epsilon:
B'\otimes_A B\to B'$, $(1\otimes \epsilon)(\tilde{N})=\tilde{M}$.
Observe that the torsion submodule
$t((B'\otimes_AB)/\tilde{N})\subset (B'\otimes_AB)/\tilde{N}$ is a module with ${\mathcal G}$-action
and maps to zero under $(B'\otimes_AB)/\tilde{N}\stackrel{1\otimes\epsilon}{\to} B'/\tilde{M}$. Let
$\tilde{N}'$ be the preimage of $t((B'\otimes_AB)/\tilde{N})$ under
$B'\otimes_AB\to (B'\otimes_AB)/\tilde{N}$. From Lemma \ref{proj1},
$\tilde{N}'$ is a finite ${\mathcal G}$-stable $A$-module, and $(1\otimes
\epsilon)(\tilde{N}')=\tilde{M}$. In addition, $\tilde{N}'$ is
locally free since $(B'\otimes_AB)/\tilde{N}'$ is torsion free.
Let $\tilde{V}$ be the ${\mathcal G}$-stable $A$-submodule of $B$ given by
the fiber product
\[\begin{CD}\tilde{V}@>>> \tilde{N}'\\
@VVV@VVV\\
B@>(p\otimes 1)\cdot{\rm comult}>> B'\otimes_AB.
\end{CD}\]
Observe that $(p\otimes 1)\cdot {\rm comult}: B' \to B'\otimes_AB$
is injective. Therefore, $\tilde{V}$ is an $A$-submodule of
$\tilde{N}'$ and hence finitely generated over $A$. In
addition, since $\tilde{N}'\supset N$, we have $V\subset \tilde{V}$. Since
$B/\tilde{V}\hookrightarrow (B'\otimes_AB)/\tilde{N}'$ is torsion
free, $\tilde{V}$ is projective. Observe that
$$
B \xrightarrow {(p\otimes 1)\cdot {\rm comult} }
B'\otimes_AB\stackrel{1\otimes\epsilon}{\to} B'
$$ is just the projection $p$.
Therefore, $p(\tilde{V})=\tilde{M}$.
Thus, we obtain the following commutative diagram
\[\begin{CD}
0@>>>W@>>>\tilde{V}@>>>\tilde{M}@>>>0\\
@.@VVV@VVV@VVV@.\\
0@>>> I@>>>B@>>>B'@>>>0
\end{CD}\]
with the first row consisting of finitely generated projective $A$-modules. Notice
that $\tilde{V}\supset V$ contains a set of generators of the
$B$-ideal $I$ and a set of $A$-algebra generators of $B$. Hence, we
obtain a closed immersion ${\mathcal G}\xrightarrow {}{\rm GL}(\tilde V)$ of
group schemes and we can see that ${\mathcal H}$ can be identified with the
closed subgroup scheme of ${\mathcal G}$ that preserves the direct summand
$W\subset M:=\tilde V$. By replacing $M$ by $M\oplus M'$ and $W$ by
$W\oplus M'$ where $M'$ is a finitely generated projective
$A$-module with trivial ${\mathcal G}$-action, we can assume that $M$ is
$A$-free as desired.
\end{proof}
\begin{cor} \label{qproj} Suppose that ${\mathcal H}\subset {\mathcal G}$ are as in Proposition \ref{linearrep}.
Then the fppf quotient ${\mathcal G}/{\mathcal H}$ is representable by a
quasi-projective scheme over $A$.
\end{cor}
\begin{proof}
By Artin's theorem the fppf quotient ${\mathcal G}/{\mathcal H}$ is represented by an
algebraic space over $A$. The algebraic space ${\mathcal G}/{\mathcal H}$ is
separated of finite type and even smooth over $A$; the quotient
${\mathcal G}\to {\mathcal G}/{\mathcal H}$ is also smooth. Take $M$ and $W$ as in Proposition
\ref{linearrep} and set $P:=\wedge^{\on{rk}W}M$ and
$L=\wedge^{\on{rk}W}W\subset \wedge^{\on{rk}W}M=A^r$, where $r={{\rm
rank}(M)\choose {\rm rank}(W)}$. Then, ${\mathcal H}$ is the stabilizer of
$[L]$ in ${\rm Proj}(\wedge^{\on{rk}W}M)={\mathbb P}^{r-1}_A$. We
obtain a morphism $f: {\mathcal G}\to {\mathbb P}^{r-1}_A$. This gives a
monomorphism $\bar f: {\mathcal G}/{\mathcal H}\to {\mathbb P}^{r-1}_A$ which is a
separated quasi-finite morphism of algebraic spaces. By
\cite[6.15]{Knutson}, ${\mathcal G}/{\mathcal H}$ is a scheme and we can now apply
Zariski's main theorem to
$\bar f$. We obtain that $\bar f$ is a composition of an open immersion
with a finite morphism and we can conclude that ${\mathcal G}/{\mathcal H}$ is
quasi-projective. (See \cite[proof of Thm. 2.3.1]{ConradNotes} for a
similar argument).
\end{proof}
\smallskip
\begin{Remark}
{\rm General homogeneous spaces over Dedekind rings are schemes
\cite{Ana}, but this is not always the case when the base is a
Noetherian regular ring of dimension $2$; see
\cite[X]{RaynaudLNM119}. In loc. cit. Raynaud asks if ${\mathcal G}/{\mathcal H}$ is a
scheme when both ${\mathcal G}$ and ${\mathcal H}$ are smooth and affine over a normal
base and ${\mathcal H}$ has connected fibers. The above Corollary gives a partial
answer to this question.}
\end{Remark}
\begin{cor}\label{qaffine}
Suppose that ${\mathcal G}$ is a smooth affine group scheme with connected
fibers over $A$. Then there is an $n\geq 1$ and a closed subgroup scheme embedding
${\mathcal G}\hookrightarrow {\rm GL}_n$, such that the
fppf quotient ${\rm GL}_n/{\mathcal G}$ is represented by a smooth
quasi-affine scheme over $A$.
\end{cor}
\begin{proof}
By Proposition \ref{linearrep} applied to ${\mathcal G}$ and ${\mathcal H}=\{e\}$, we
see that there is a closed subgroup scheme embedding $\rho:
{\mathcal G}\hookrightarrow {\rm GL}_m$ (this follows also directly from
\cite{ThomasonEqRes}). Now apply Corollary \ref{qproj} and its proof
to the pair of the group ${\rm GL}_m$ with its closed subgroup
scheme ${\mathcal G}$. We obtain a ${\rm GL}_m$-representation $\rho':
{\rm GL}_m\to {\rm GL}_r={\rm GL}(M)$ that induces a locally closed embedding
${\rm GL}_m/{\mathcal G}\hookrightarrow {\mathbb P}^{r-1}$. Denote by $\chi:
{\mathcal G}\to {{\mathbb G}_{\rm m}}={\rm Aut}_A(L)$ the character giving the action of
${\mathcal G}$ on the $A$-line $L$ (as in the proof of Corollary \ref{qproj})
and consider ${\mathcal G}\to {\rm GL}_m\times {{\mathbb G}_{\rm m}}$ given by $g\mapsto (\rho'(g),
\chi^{-1}(g))$. Consider the quotient $({\rm GL}_m\times {{\mathbb G}_{\rm m}})/{\mathcal G}$; to
prove it is quasi-affine it is enough to reduce to the case that $A$
is local. Then $L$ is free, $L=A\cdot v$, and ${\mathcal G}$ is the subgroup
scheme of ${\rm GL}_m\times {{\mathbb G}_{\rm m}}$ (acting by $(g, a)\cdot m=a\rho'(g)(m)$)
that fixes $v$. This gives a quasi-finite separated monomorphism
$({\rm GL}_m\times {{\mathbb G}_{\rm m}})/{\mathcal G}\rightarrow {\mathbb A}^r$ and so by arguing
as in the proof of Corollary \ref{qproj} we see that $({\rm GL}_m\times
{{\mathbb G}_{\rm m}})/{\mathcal G}$ is quasi-affine. Consider now the standard diagonal block
embedding ${\rm GL}_m\times {{\mathbb G}_{\rm m}}\hookrightarrow {\rm GL}_{m+1}$. The quotient
${\rm GL}_{m+1}/({\rm GL}_m\times {{\mathbb G}_{\rm m}})$ is affine and we can conclude using
Lemma \ref{triple}.
\end{proof}
\bibliographystyle{hamsplain}
\section{Introduction}
The study of the variations in the meridional flows
is important for understanding the underlying mechanism of
the solar cycle
(Babcock 1961; Ulrich \& Boyden 2005).
Surface Doppler measurements
and helioseismology measurements of the surface and the subsurface flows
(e.g., Ulrich \& Boyden 2005;
Gonz\'alez Hern\'andez et al. 2008)
suggest poleward flows of about 10\,--\,20 m s$^{-1}$
and a considerable north-south asymmetry in the meridional flow.
Surface Doppler measurements are
available for about 3 cycles (Ulrich \& Boyden 2005).
Doppler measurements suffer from errors caused by the scattered light and
the B-angle influences (Beckers 2007).
The motions of many magnetic tracers, particularly sunspots,
have been used for a long time as a proxy of the fluid motions to
study the solar rotational and the meridional flows (Schr\"oter 1985; Javaraiah \& Gokhale 2002).
The sunspot
data have been available for more than
100 years.
However, the derived rates of rotation and meridional flows
depend on the method of the selection of the spots or the spot groups
(e.g., Howard et al. 1984; Balthasar et al. 1986;
Zappal\'a \& Zuccarello 1991; Zuccarello 1993;
Javaraiah \& Gokhale 1997a; Javaraiah 1999; Hiremath 2002;
Sivaraman et al. 2003). Proper motions and evolutionary factors
of the spot groups may also influence
the derived rates of the flows to some extent. The proper motions
are random in nature; hence, their effect can be reduced with
the use of a large data
set.
Recently, Ru\v{z}djak et al. (2005) have found
that the effect of the evolutionary factors of the sunspot groups
on the determination of the positions of the spot groups, and hence on
the estimated mean meridional motion
of the spot groups, is small.
A number of scientists studied the solar cycle variations of
the mean meridional motion of the spot groups (see Javaraiah \& Ulrich 2006).
Since yearly data are inadequate for this purpose,
particularly around a solar cycle
minimum,
some authors have used the
superposed epoch analysis of the data during a few or all cycles
for which the data were available and studied
the mean solar cycle variation
(Balthasar et al. 1986; Howard \& Gilman 1986).
Recently, by using the same method, Javaraiah \& Ulrich (2006)
analyzed the combined Greenwich and
Solar Optical Observation Network sunspot group
data during the period 1879\,--\,2004 and
determined the mean solar cycle variation in the
meridional motions of the sunspot groups.
In that earlier paper, the spot group data during the cycles
12\,--\,20 were superposed according to the
minima of these cycles.
This yielded an average solar cycle
variation of the meridional motion over about 5\,--\,9 cycles, which
suggests that only around the end of a solar cycle
is the meridional motion significantly different
from zero, and it is poleward in both the northern and the
southern hemispheres.
However,
during some individual cycles the variations may
be considerably different
from this average solar cycle variation.
Javaraiah \& Ulrich (2006) also
determined
the cycle-to-cycle
variation of the mean (over the duration of a cycle) meridional
motion of the spot groups during cycles 12\,--\,23 and found the
existence of a weak long-period cycle (Gleissberg cycle)
in the cycle-to-cycle variation of the mean motion.
Since the meridional speed is
negligibly small or zero in some phases of a cycle, or even has
opposite signs during different phases of some cycles,
the motions are washed out in the average over the
cycle.
In order to get rid of this problem in the study of long-term variations
in the mean meridional motion, it is necessary to
analyze
the spot or the spot group data in intervals considerably
shorter than the length of a solar cycle.
In
the present paper we have analyzed the annual
spot group data during 1879\,--\,2008 and determined the
variations in the mean meridional motions of the spot
groups in the northern and the southern hemispheres.
As expected, the statistics are
poor in the case of the results derived from the annual data, particularly
at the cycle minima.
We have taken some additional precautions, so
it is possible to see the patterns of
the variations when they are close to a solar cycle length,
even in the annual time
series.
However,
we have also determined the variation in the mean
meridional motion of the spot groups by binning the spot group data into
the moving-time intervals (MTIs) of lengths (3\,--\,4 years) considerably
greater
than a year, but
reasonably smaller
than the length of a cycle; $i.e.$, less than the half of the
length of a cycle. In such a time-series which comprised the
longer time intervals, it is relatively easy to detect
the long-term variations near the length of a solar cycle
(Javaraiah \& Gokhale 1995, 1997b). In addition,
the sizes of these series are
reasonably large. Hence, this enabled us to find
the periodicities that approach
the length of a solar cycle
from the power spectral
analysis.
In the next section we describe the data and analysis.
In Sect.~3 we present the variations in the
mean meridional motion of the
spot groups in the whole northern and
the southern hemispheres,
as well as in different $10^\circ$ latitude intervals--and
the corresponding differences between the whole northern and the southern
hemispheres--during the period 1879\,--\,2008 and point out their
important features.
In the same section,
we show the periodicities
in the mean meridional motion of the spot groups
from the traditional FFT analyses. From the
MEM analyses, we determine the values of the
periodicities, and from the Morlet-wavelet analyses
we determine the temporal dependencies of
the detected periodicities.
In Sect.~4 we summarize the results and the conclusions,
and briefly discuss
the implications of these results for understanding the solar
long-term variability.
\vspace{0.3cm}
\section{Data and analysis}
We have used the combined Greenwich
(1879\,--\,1976) and Solar Optical Observation
Network (SOON) (1977\,--\,2008) sunspot group
data, which are
taken from David Hathaway's website
{\tt http://solarscience.msfc.nasa.gov/greenwch.shtml}.
These data
include the observation time (the Greenwich data contain the date with the
fraction of the day, the SOON data do not contain
the fraction of
the day),
heliographic latitude
and longitude, and central meridian distance (CMD), etc., of the spot
groups for each day of observation.
The
positions of the groups are the geometrical positions of the
centers of the groups.
The Greenwich data were compiled mostly from the white
light photographs which were secured at the Royal Greenwich Observatory
and at the Royal Observatory, Cape of Good Hope. The gaps in their
observations were filled with photographs from other observatories,
including the Kodaikanal Observatory, India.
The SOON data included measurements made by the
United States Air Force (USAF) from
the sunspot drawings of a network of the observatories
that includes telescopes
in Boulder, Colorado, Hawaii, etc.
David Hathaway scrutinized
the Greenwich and SOON data and produced a reliable
continuous data series
from 1874 up to the present.
However, in this corrected data there may be some minor
differences mainly in the values of the areas of the spot groups of the
two datasets
(Hathaway \& Choudhary 2008).
We did not use the data before 1879 because of the
large uncertainties in the 1878 Greenwich data (Balthasar et. al. 1986;
Javaraiah \& Gokhale 1995).
Here the data reduction and the determination
of the meridional motions
of the spot groups are the same as described in
Javaraiah \& Ulrich (2006).
The data on the recurrent and the non-recurrent spot groups are combined.
The meridional velocity (daily rate of the latitudinal drift) of a spot
group is computed using the difference between the epochs of its observation
in
consecutive days and the heliographic
latitudes of the spot group at these epochs.
For the sake of convenience in determining the north-south difference in the
mean meridional motion of the spot groups, we
used the sign convention,
{\it a positive value represents
the northbound
motion in both the northern and the southern hemispheres}.
As in the paper referred to above,
we have taken the following precautions to
substantially reduce the
uncertainties in the derived results (Ward 1966;
Javaraiah \& Gokhale 1995):
We excluded the
data corresponding to the $|CMD| > 75^\circ$ on any day of the spot group's
life span.
Furthermore, we excluded the data corresponding to
the `abnormal' motions, e.g., displacements
exceeding $3^\circ$ day$^{-1}$
in the longitude or $2^\circ$ day$^{-1}$ in the latitude.
In addition, we did not use the data corresponding to the time-difference
$>$ 2 days of the spot group's life span.
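The velocity computation and the selection criteria above can be summarized in a short sketch. This is our own illustrative reconstruction, not the authors' reduction code; the `Obs` record layout is hypothetical, and we interpret the foreshortening cut as applying to each observation pair.

```python
from dataclasses import dataclass

# Hypothetical per-day record; the actual Greenwich/SOON files carry
# more columns (areas, group number, etc.).
@dataclass
class Obs:
    t: float     # epoch in days (date plus fraction of the day)
    lat: float   # heliographic latitude in degrees, north positive
    lon: float   # heliographic longitude in degrees
    cmd: float   # central meridian distance in degrees

def meridional_velocities(group):
    """Daily latitudinal drift rates (deg/day) of one spot group from
    consecutive-day position pairs. Positive values are northbound in
    both hemispheres."""
    vels = []
    for a, b in zip(group, group[1:]):
        if abs(a.cmd) > 75.0 or abs(b.cmd) > 75.0:  # foreshortening cut
            continue
        dt = b.t - a.t
        if dt <= 0.0 or dt > 2.0:          # skip gaps longer than 2 days
            continue
        # Reject 'abnormal' displacements: > 3 deg/day in longitude or
        # > 2 deg/day in latitude.
        if abs(b.lon - a.lon) / dt > 3.0 or abs(b.lat - a.lat) / dt > 2.0:
            continue
        vels.append((b.lat - a.lat) / dt)
    return vels
```

The mean motion for a year (or an interval) is then simply the average of these rates over all retained pairs.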
We determined the variations in the mean
meridional motion
of the sunspot groups
in the whole northern hemisphere and in the whole
southern hemisphere,
and also in the separate
$10^\circ$ latitude intervals.
The variations in the mean motion of the spot groups in a whole hemisphere
are determined from the yearly data and from the data in 3- and 4-year
MTIs during the period 1879\,--\,2008.
For the separate $10^\circ$ latitude intervals, we used only the
4-year MTIs because, in a shorter than 4-year interval,
the data are found to be inadequate and the error bars are too large
to plot the results, particularly during the cycle minima.
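For concreteness, the construction of the overlapping MTIs (e.g., the 4-year intervals 1879\,--\,1882, 1880\,--\,1883, \ldots, 2005\,--\,2008 used in Fig.~2) can be sketched with a trivial helper; this helper is ours and is not taken from the paper.

```python
def moving_time_intervals(first_year, last_year, length):
    """Overlapping moving-time intervals (MTIs) of the given length in
    years, stepped by one year: e.g. 1879--1882, 1880--1883, ..."""
    return [(y, y + length - 1)
            for y in range(first_year, last_year - length + 2)]

# The 4-year MTIs spanning 1879--2008:
mtis = moving_time_intervals(1879, 2008, 4)
print(mtis[0], mtis[-1], len(mtis))
```

A 3-year series is obtained the same way with `length=3`.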
The accuracy in determining the heliographic coordinates of
the mass center
of a sunspot group can be estimated as close to $0.5^\circ$, and therefore
the error in
the calculations of daily velocity values can be estimated at
$\approx 1^\circ$ day$^{-1}$
($\approx 1.4 \times 10^4$ cm s$^{-1}$) (Balthasar \& W\"ohl 1980;
Zappal\'a \& Zuccarello 1991). However, the error in determining the mean
velocity is inversely proportional to the square root of the number of
observations
used, and for example, it is on the order of $0.07^\circ$ day$^{-1}$
($9.9 \times 10^2$ cm s$^{-1}$) for 200 observations.
The number of data points is considerably larger in several years of the
annual series and in most of the
intervals of the aforementioned MTIs series.
As a result, the determined mean velocity is highly accurate with
respect to the mentioned error.
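The quoted numbers can be verified with a back-of-the-envelope calculation. The script below is our own consistency check; it assumes the angular drift is converted to a linear speed along the solar surface with $R_\odot \approx 6.96\times 10^{10}$ cm.

```python
import math

R_SUN_CM = 6.96e10        # solar radius in cm (assumed value)
SEC_PER_DAY = 86400.0

def deg_per_day_to_cm_s(v_deg_day):
    """Convert a latitudinal drift in deg/day to a linear speed in cm/s
    (arc length on the solar surface)."""
    return v_deg_day * math.radians(1.0) * R_SUN_CM / SEC_PER_DAY

single_obs_err = 1.0                              # deg/day, per estimate
mean_err_200 = single_obs_err / math.sqrt(200)    # standard error of mean

print(deg_per_day_to_cm_s(1.0))              # ~1.4e4 cm/s
print(mean_err_200)                          # ~0.07 deg/day
print(deg_per_day_to_cm_s(mean_err_200))     # ~9.9e2 cm/s
```

Both the $1.4\times10^4$ cm s$^{-1}$ conversion and the $0.07^\circ$ day$^{-1}$ ($9.9\times10^2$ cm s$^{-1}$) figure for 200 observations are recovered.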
To determine the periodicities in the mean meridional motion
of the spot groups using the power spectral techniques, we
applied some corrections to the original data, described in the
next section, where we describe the time series.
Since the lengths of the time series are inadequate for precisely measuring
the values of $\ge$ 11-year periodicities from the FFT analysis,
the uncertainties are large in such
periodicities determined here from the FFT analysis.
A
different approach to determining the values of the periodicities
in a short time series with higher accuracy is to compute
the power spectrum using MEM.
MEM analysis is a parametric modeling approach to estimating the power
spectrum of a time series. The method is data adaptive, since it is
based upon an
autoregressive modeling process.
An important step in this method is the optimum selection
of the order ($M$) of the autoregressive process,
which is the number of immediately previous points that are used
in calculating a new point.
If $M$ is chosen too low, the spectrum is over-smoothed and the high resolution
potential is lost. If $M$ is chosen too high, frequency shifting and
spontaneous splitting
of the spectral peaks occur.
The MEM code that we used here takes the values for $M$
in the range ($n/3$, $n/2$) (Ulrych \& Bishop 1975) or $2n/\ln(2n)$
(Berryman 1978),
where $n$ is the total number of intervals in the
analyzed time series.
To find the correct values of the periodicities
found in the FFT power spectrum,
we computed
MEM power spectra by choosing various
values for $M$ in the range ($n/3$, $n/2$) and $2n/\ln(2n)$.
We find that $M = n/3$ is suitable in the present MEM
analysis; $i.e.$, in the derived spectra the peaks are quite sharp
and well separated.
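The candidate orders for a given series length can be tabulated directly. This small helper is ours (the paper's MEM code is not reproduced here); the 130-point example assumes one value per year over 1879\,--\,2008.

```python
import math

def mem_order_candidates(n):
    """Candidate autoregressive orders M for an n-point series:
    the Ulrych & Bishop (1975) range (n/3, n/2) and the
    Berryman (1978) value 2n/ln(2n)."""
    return {
        "ulrych_bishop_low": n // 3,
        "ulrych_bishop_high": n // 2,
        "berryman": round(2 * n / math.log(2 * n)),
    }

# e.g. a 130-point annual series (1879--2008):
print(mem_order_candidates(130))
```

For the annual series this gives orders of roughly 43\,--\,65 for the Ulrych \& Bishop range, the low end of which ($M=n/3$) is the choice adopted in the text.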
The wavelet analysis is a powerful method for
analyzing localized variations
in the power within a time series at many different
frequencies (Torrence \& Compo 1998).
We performed Morlet wavelet analysis to determine the temporal
dependencies
in the periodicities found in the mean meridional motion
of the spot groups from the FFT and MEM analysis.
\section{Results}
\begin{figure*}
\centering
\includegraphics[angle=90,width=\textwidth]{12968fg1.eps}
\caption{Annual values of the mean meridional motion
of the sunspot groups during the period 1879\,--\,2008, (a) in the northern
hemisphere (filled circle-solid curve) and
the southern hemisphere (open circle-dotted curve) and
(b) the corresponding north-south difference ($v_N - v_S$).
The unconnected points represent the values that
are more than 15 m s$^{-1}$ or that
have a large uncertainty, i.e., standard error (standard deviation,
for the north-south difference) $> 2.6$.
In (a) a positive value represents the northbound motion.
In both (a) and (b) the dashed curve represents the
annual variation
in sunspot
activity during 1879\,--\,2008.
The Waldmeier cycle number is specified near the maximum
epoch of each cycle.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=90,width=\textwidth]{12968fg2.eps}
\caption{Same as Fig.~1, but
determined from 4-year MTIs, 1879\,--\,1882, 1880\,--\,1883, ...,
2005\,--\,2008. Here
the unconnected points represent the values that
are more than 10 m s$^{-1}$ or that
have a large uncertainty, i.e., standard error
(standard deviation, in the case of the north-south difference)
$> 2.6$, and
the
dashed curve represents the
variation in the sunspot number
smoothed by taking a 4-year running average.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=90,width=\textwidth]{12968fg3.eps}
\caption{Same as Fig.~2(a), but
determined
separately for different 10$^\circ$ latitude intervals.}
\end{figure*}
\subsection{Temporal variations}
Figure~1 shows the variations in the mean meridional motion
of the sunspot groups in the northern and the southern hemispheres and
the corresponding north-south difference, determined from the
yearly spot group data during 1879\,--\,2008.
Figure~2 is the same as Fig.~1, but determined from
the data in 4-year MTIs (the 3-year MTI series is not shown because it is
found to be almost the same as the 4-year MTI series).
Figure~3 shows the variations in the mean meridional motion of the spot groups
in different 10$^\circ$ latitude intervals determined from the
data in 4-year MTIs. To study the solar cycle
variations in the mean meridional motion, we also show the variations in the
sunspot activity in all these figures.
As can be seen in
Fig.~1(a), in the case of
the original annual series, the error bars are large.
(The error bars
are at the
1$\epsilon$ level, where $\epsilon = \sigma/\sqrt{k}$ is the standard
error, and $\sigma$ and $k$ are the standard deviation and the number of data
points in a given interval, respectively.) In addition,
there are some large spikes in the time series during
the minima of some cycles. We
therefore applied
corrections to those
values whose $\epsilon$ values exceeded 2.6 times
(corresponding to the 99\% confidence level)
the respective median values, or to those values exceeding 15 m s$^{-1}$;
$i.e.$, these values were replaced with
the average of the corresponding value and
its two neighbors. (At the beginning of the
time series
this is the average of
the values in intervals 1 and 2, and at the end
it is the average of the values in intervals $n-1$ and $n$.)
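A minimal Python sketch of this replacement rule is given below. It is an illustrative reconstruction: the names, interface, and the use of NumPy are our own assumptions, not the code actually used for the published analysis.

```python
import numpy as np

def correct_outliers(v, eps, limit=15.0, factor=2.6):
    """Replace flagged values with the mean of the value and its neighbors.

    v     : mean-motion time series (m/s)
    eps   : standard error of each value
    limit : absolute-value threshold (15 m/s for the annual series)
    factor: multiple of the median standard error (2.6 ~ 99% confidence)
    """
    v = np.asarray(v, float)
    eps = np.asarray(eps, float)
    flagged = (np.abs(v) > limit) | (eps > factor * np.median(eps))
    out = v.copy()
    n = len(v)
    for i in np.flatnonzero(flagged):
        if i == 0:                      # start of series: intervals 1 and 2
            out[i] = 0.5 * (v[0] + v[1])
        elif i == n - 1:                # end of series: intervals n-1 and n
            out[i] = 0.5 * (v[-2] + v[-1])
        else:                           # interior: three-point mean
            out[i] = (v[i - 1] + v[i] + v[i + 1]) / 3.0
    return out
```

A spike of 30 m s$^{-1}$ between neighbors of 2 m s$^{-1}$, for example, would be replaced by their three-point mean of about 11.3 m s$^{-1}$.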
In the north-south differences ($cf.$,
lower panels of Figs.~1 and 2),
the error bar represents the 1$\sigma$ level
($\sigma = \sqrt{\epsilon^2_1 + \epsilon^2_2}$, where
$\epsilon_1$ and $\epsilon_2$ represent the standard errors of the
mean values
of the northern
and southern hemispheres, respectively).
As can be seen in Fig.~2, in the case of the variations determined from
4-year MTIs (and, in the case of Fig.~3, in the lower latitude intervals),
the size of an error bar
is small. However, we also applied the aforementioned
corrections to the MTI time series. In these cases, the corrected
time series include only
the values that are $\le$ 10 m s$^{-1}$.
We used the 15 m s$^{-1}$ limit in the case of the annual time series
because
the amplitude of the variation is much
larger than in the MTI series, and
we wished to exclude only extreme outliers in the time series.
In Figs.~1\,--\,3 we can see the following:
\begin{itemize}
\item The pattern of the solar cycle
variation in the mean meridional motion of the spot groups is
different during different solar cycles.
At the maximum epoch
(year 2000, see Fig.~1)
of the current cycle~23, the mean motion
is stronger than in the last 100 years and
northbound in all the latitude intervals
in both the northern and the southern hemispheres.
(The mean meridional motion is strong and
poleward in the northern hemisphere
and strong and equatorward in the
southern hemisphere.)
In this cycle the overall pattern of the variation in the mean motion
closely resembles the shape of this cycle; $i.e.$,
there is
a high correlation between the activity
and the mean meridional motion.
\item Overall there is a suggestion that the mean meridional motion of the
spot groups
varies considerably on time scales of 5\,--\,20 years.
The maximum amplitudes of the mean variations in the northern and the
southern hemispheres
determined from the annual data are about 10\,--\,15 m s$^{-1}$.
The amplitude of the mean variations
determined from the 4-year MTIs is on average about 5 m s$^{-1}$ and
seems to be
slightly larger in the individual 10$^\circ$ latitude intervals (see Fig.~3).
\item The difference
between the mean motions in the northern and the southern hemispheres also
varies on 5\,--\,20 year time scales with
a maximum amplitude of 5\,--\,10 m s$^{-1}$. However,
the north-south difference is
statistically
significant mainly during
the early cycles, 12\,--\,16, which are relatively weak ones,
and it arises mainly from the differences in the motion at high
latitudes. The difference is negligibly small during the recent cycles.
That is, during the recent cycles, particularly in the
current cycle~23, the motion is highly symmetric about
the equator.
\end{itemize}
Besides the above main features, in Figs.~1\,--\,3
one can also see
the following features:
\begin{itemize}
\item Around the maxima of the large-amplitude cycles (e.g.,
17, 18, 19, and 21), the average motion
in the whole Sun seems to be
slightly southward, whereas for weak cycles
the motion is either not significantly different from zero
(cf., cycles 16 and 20), or very
different from zero and
directed northward (cf., cycles 12, 14, and 23).
During the minima of several
cycles, the motion
seems to be strongly southward, and
in some cycles (cf., 16 and 20) it seems to go in opposite directions
in the northern and the southern
hemispheres.
\item The behavior of the meridional motions (changes in phase seem to
have taken place)
is very different between the northern and the southern hemispheres
during
the start of cycles~16 and 20 (see Fig. 2), which are weak cycles that
are four cycles apart.
A 44-year cyclic behavior is seen in the sunspot activity
(Javaraiah 2008).
\item Near the maxima of the much longer cycles (e.g., 13 and 23)
in the high latitudes
of both the northern and southern hemispheres, the mean motion is largely
northbound.
Javaraiah \& Ulrich (2006) found a positive correlation between the
mean meridional motion of the spot groups of a cycle,
mainly in the $20^\circ - 30^\circ$
latitude interval of the northern hemisphere,
and the length of
the same cycle. In Fig.~3 the pattern of
the variation in
the mean meridional motion of the spot groups in the $20^\circ - 30^\circ$
latitude interval of the northern hemisphere
is largely consistent with this result.
\end{itemize}
\subsection{FFT power spectra}
Figure~4 shows the FFT power spectra of the corrected annual time series
of the mean meridional motions of
the spot groups in the northern and the southern hemispheres
and that of the corresponding north-south difference, during 1879\,--\,2008.
Figure~5 shows the
FFT power spectra (low-frequency sides) of the mean meridional
motions of the spot groups in 4-year MTIs.
It should be noted here that, in the FFT power spectrum of the data
binned in the longer intervals (e.g. 4-year MTIs),
the peaks corresponding to the
high-frequency side are washed out and the low-frequency peaks
become broader.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg4.eps}
\caption{FFT power spectra
of the corrected yearly data
of the mean
meridional motion of the sunspot groups in the northern
hemisphere (solid curve)
and the southern hemisphere (dotted curve), and
of the corresponding north-south differences (dashed curve).
The solid, dotted and dashed
horizontal lines represent the 99\% confidence levels ($> 2.6 \sigma$ levels)
of the
power in the corresponding spectra represented by the solid, dotted and
dashed curves, respectively.}
\end{figure}
As can be seen in Fig.~4, there are several peaks in
each of the FFT spectra of the yearly data
of the
mean meridional motion of the spot groups.
However, only the following peaks
are significant at more than the 99\% confidence level ($> 2.6\sigma$ levels):
In the spectrum of
the northern
hemisphere,
the peak at frequency
$f \approx 0.0625$ year$^{-1}$ (period $T \approx 16$ years);
in the spectrum
of the southern hemisphere,
the two peaks of relatively high frequencies which are closer to
$f = 0.232$ year$^{-1}$ ($T = 4.3$ years) and
$f = 0.312$ year$^{-1}$ ($T= 3.2$ years);
and
in the spectrum of the north-south difference,
the peak at frequency
$f \approx 0.0625$ year$^{-1}$ (period $T \approx 16$ years).
In the spectrum of the northern hemisphere, the peak at
$f \approx 0.286$ year$^{-1}$ ($T \approx 3.5$ years) is significant
at a 99\% confidence level.
As can be seen in Fig.~5, which shows the low-frequency sides of the
power spectra of the data in the 4-year MTIs, in the spectrum of
the northern
hemisphere the broad peak of the $\approx 16$ year period is significant at
a 99\% confidence level. In the corresponding spectrum of the southern
hemisphere, the peak at $f \approx 0.0158$ year$^{-1}$
($T \approx 63.5$ years)
is significant
at a 99\% confidence level,
and the peak at
$f \approx 0.079$ year$^{-1}$ ($T \approx 12.7$ years)
is significant at a 95\%
confidence level.
(A similar peak is also present
in the yearly data.)
In the spectrum of
the north-south difference, a peak at $f = 0.394$ year$^{-1}$
($T = 25.4$ years) is significant at more than a 99\% confidence level, and
the peak at $f \approx$ 1/18 year$^{-1}$ is
significant at a 95\%
confidence level.
Overall, all these results suggest
that there are periodicities of around 5\,--\,20 years in
the mean meridional motion of the spot groups and that there are
considerable differences in their levels of significance
between
the northern and the southern hemispheres.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg5.eps}
\caption{The same as Fig.~4, but
for the variations in the mean meridional motions of the spot
groups in the 4-year MTIs (shown in Fig.~2).
Only the values of the power at the frequencies shown here are used
for determining the confidence levels of the peaks
in the respective spectra. The solid, dotted and dashed horizontal lines
represent the 99\% confidence levels.}
\end{figure}
Figure~6 shows the FFT power
spectra of the mean meridional motion of the spot groups in different
$10^\circ$ latitude
intervals of the northern and the southern hemispheres during 4-year MTIs.
As can be seen in these spectra, there is considerable latitude dependence
in the periodicity of the mean motion of the spot groups in a hemisphere.
For example, the statistically significant $\approx 16$ year periodicity
found above in the mean motions of the spot groups in
the northern hemisphere seems to contain a larger contribution from the
motions of the
groups in the $20^\circ$\,--\,30$^\circ$ latitude interval. Besides this,
the FFT spectrum of the mean motion in this latitude interval has a
strong peak around $f \approx 0.076$ year$^{-1}$ ($T \approx 13$ years).
In addition, there is a suggestion that this
(11\,--\,13 year) periodicity was strong in the mean meridional
motion of the spot
groups in the
$20^\circ$\,--\,30$^\circ$ latitude interval of the northern hemisphere,
whereas it was strong in the mean meridional motion of the spot groups in
$0^\circ$\,--$10^\circ$ latitude interval of
the southern hemisphere.
We would like to mention that
most of the relatively high peaks in the FFT spectra
seem to appear at
frequencies that
correspond to integral multiples of one of the frequencies.
Hence, they may look
suspiciously like mathematical artifacts. However, they may not be
mathematical artifacts since the length of the data used here is longer
than double the longest
period that we find here. In addition,
before computing the FFT, the mean value
was subtracted and a cosine
bell function was applied to both the first and the last 10\% of the time
series. These processes detrend the time series and minimize the
aliasing and leakage effects (Brault \& White 1971).
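The detrending and tapering steps described above can be sketched as follows (an illustrative reconstruction; the function name and the exact taper normalization are our assumptions, not the code actually used):

```python
import numpy as np

def detrend_and_taper(x, frac=0.10):
    """Subtract the mean and apply a cosine bell to the first and
    last `frac` of the series (a split-cosine taper)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    m = int(frac * n)
    w = np.ones(n)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(m) / m))  # rises 0 -> ~1
    w[:m] = ramp
    w[n - m:] = ramp[::-1]
    return x * w

# The FFT power spectrum would then follow, e.g.:
# power = np.abs(np.fft.rfft(detrend_and_taper(series)))**2
```

Both ends of the tapered series go smoothly to zero, while the central 80\% of the points are left unweighted.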
The existence of
the `harmonics' and
`subharmonics'
may be a consequence of the Sun's behavior as a forced nonlinear
oscillator (e.g., Bracewell 1988; Gokhale et al. 1992; Gokhale \& Javaraiah 1995).
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg6a.eps}
\includegraphics[width=8.5cm]{12968fg6b.eps}
\caption{FFT power spectra of the corrected variations
of the mean meridional motions of the spot
groups in the different $10^\circ$ latitude intervals,
$0^\circ$\,--\,$10^\circ$ (solid curve),
$10^\circ$\,--\,$20^\circ$ (dotted curve), and $20^\circ$\,--\,$30^\circ$
(dashed curve),
of
(a) the northern hemisphere (upper panel) and
(b) the southern hemisphere (lower panel),
during 4-year MTIs.
The solid, dotted, and dashed horizontal lines
represent the 99\% confidence levels.
Only the values of the power at the frequencies shown here are used
for determining the confidence levels of the peaks
in the respective spectra. The thick solid curve represents the mean spectrum
of the spectra corresponding to the three latitude intervals.}
\end{figure}
\subsection{MEM power spectra}
Figures~7 and 8 show the MEM power spectra of the
mean meridional motion of the spot groups determined from
the corrected data of the annual and the 4-year MTIs time-series.
As can be seen in these figures, each MEM spectrum
shows a number of well-defined peaks.
The MEM spectrum
of the annual data of the southern
hemisphere shows
the values 4.3-year and 3.2-year, and that of the north-south difference
shows the value 11.9-year, for the corresponding significant
periodicities found from
the FFT analyses.
The MEM spectrum of the 4-year MTIs of the southern hemisphere shows
the value 50-year for the
$\approx 63.5$ year peak found in the corresponding FFT spectrum.
The highly significant broad peak of the $\approx$ 16-year
periodicity found in the FFT spectra of the northern hemisphere data
is broken into 13.1-year and $\approx$ 18-year peaks
in the corresponding MEM spectra.
The 13.1-year periodicity peaks are well-defined in the
MEM spectra of both the annual and the 4-year MTI time series
of the northern hemisphere data.
In addition, the MEM spectrum of the
4-year MTIs also shows a well-defined
peak with 29.8-year periodicity.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg7.eps}
\caption{MEM power spectra of the corrected yearly data of the mean
meridional motions
of the spot groups in the northern hemisphere (solid curve) and
the southern hemisphere (dotted curve),
and of the
corresponding north-south differences (dashed curve). The
values of the corresponding periods are marked near the considerably high
peaks.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg8.eps}
\caption{The same as Fig.~7, but
for the variations of the mean meridional motions of the spot
groups in 4-year MTIs (shown in Fig.~2).}
\end{figure}
The 3.5-year periodicity found
in the FFT spectrum of the annual data of the northern hemisphere
is also present in the corresponding
MEM spectrum, but the peak
is not clearly defined.
The peaks of $\approx$ 2.3-year and 4.3-year periodicities are present
in the MEM spectra of all the three annual time series.
The MEM spectrum of the annual data of the north-south
asymmetry also shows
the peaks of 5.6-year and 2.7-year periodicities and the spectrum of
corresponding 4-year MTIs shows a peak
of 22.2-year periodicity.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{12968fg9a.eps}
\includegraphics[width=8.5cm]{12968fg9b.eps}
\caption{MEM power spectra of the corrected variations
in the mean meridional motions of the spot
groups in the different $10^\circ$ latitude intervals,
$0^\circ$\,--\,$10^\circ$ (solid curve),
$10^\circ$\,--\,$20^\circ$ (dotted curve), and $20^\circ$\,--\,$30^\circ$
(dashed curve),
of (a) the
northern hemisphere (upper panel) and
(b) southern hemisphere (lower panel),
during 4-year MTIs.}
\end{figure}
Figure~9 shows the MEM spectra of the corrected
data in the different $10^\circ$
latitude intervals of the northern and the southern hemispheres
during the 4-year MTIs. As can be seen in this figure,
the results from the MEM analysis are consistent with the results
found from the FFT analyses and
suggest that there is a strong
hemispheric and latitude dependency in the periodicities of the mean motion of
the spot groups.
\subsection{Wavelet power spectra}
Figures~10 and 11 show the Morlet wavelet power spectra, normalized by
$1/\sigma^2$ (here $\sigma$ is the standard deviation of the
concerned data sample), and the corresponding
global spectra
of the mean meridional motion of the
spot groups determined from the
corrected annual and 4-year MTI time series,
respectively.
As can be seen in
Fig.~10, during the period 1880\,--\,2007, the 3.2-year and 4.3-year
periodicities occurred relatively consistently in the
mean meridional motion of the spot groups in the southern hemisphere.
This result is highly
consistent with the finding that the aforementioned periodicities are
statistically significant in the FFT analysis.
The 3.2-year periodicity and also
a 2.3-year periodicity were prominent around 1990.
In the wavelet spectra, Figs.~10(b) and 11(b),
of the mean motion of
the spot groups in
the southern hemisphere, there
is a suggestion that the
dominant periodicity evolved slowly
with time, from $\approx$ 15~years to $\approx$ 30~years, over the
period 1880\,--\,2007. In Figs.~10(a) and 11(a) there is a
suggestion that
the 13.1-year periodicity and a $\approx$ 20-year periodicity
of the mean motion of the spot groups in
the northern hemisphere are strong
after 1980 and before 1920.
The $\approx$ 30-year
periodicity seems to have occurred throughout
the period 1880\,--\,2007, more powerfully before 1920 and after 1960.
In Figs.~10(c) and 11(c) there is evidence that the
11.9-year periodicity occurred consistently in the north-south
difference in the mean motion.
This is consistent with the fact that the same periodicity
is found to be highly significant in the
FFT analysis of the north-south difference.
The 29-year periodicity
in the north-south
difference (found in the MEM analysis) was weak or absent after around 1970.
In fact, the north-south
difference is itself very small and statistically
insignificant during
the recent cycles (see Figs.~1 and 2).
\begin{figure*}
\centering
\subfigure{
\includegraphics[width=6.0cm]{12968fg10a.eps}}
\subfigure{
\includegraphics[width=6.0cm]{12968fg10b.eps}}
\subfigure{
\includegraphics[width=6.0cm]{12968fg10c.eps}}
\caption{Wavelet power spectra and the global spectra
of the corrected yearly data
of the mean meridional
motion of the sunspot groups
(a) in the northern hemisphere,
(b) in the southern hemisphere, and (c) the corresponding north-south
difference. The wavelet spectra are normalized by the variances of
the corresponding time series.
The shadings are at the normalized variances of 2.0, 3.0,
4.5, and 6.0.
The dashed curves represent the 95\% confidence levels,
deduced by assuming a white noise process.
The cross-hatched regions indicate the ``cone of
influence", where edge effects become significant (Torrence \& Compo 1998).}
\end{figure*}
\begin{figure*}
\centering
\subfigure{
\includegraphics[width=6.0cm]{12968fg11a.eps}}
\subfigure{
\includegraphics[width=6.0cm]{12968fg11b.eps}}
\subfigure{
\includegraphics[width=6.0cm]{12968fg11c.eps}}
\caption{The same as Fig.~10, but for the variations
in the mean meridional motion of the spot groups in 4-year MTIs.}
\end{figure*}
Figure~12 shows the wavelet power spectra of the corrected data in the
different $10^\circ$
latitude intervals of the northern and the southern hemispheres, in
the 4-year MTIs.
As can be seen in this figure,
there is
a considerable latitude-time dependence in the
periodicities of the mean meridional motion of the spot groups.
The $\approx$ 20-year and
the $\approx$ 30-year periodicities
probably exist
in the mean meridional motion of the
spot groups
of the lower latitudes of
the northern hemisphere, throughout the period 1880\,--\,2007.
The evolution of a $\approx$ 15-year periodicity to a $\approx$ 30-year
periodicity, seen above in the
spectrum of the mean motion of the whole southern hemisphere (see Fig.~10(b)),
is not clearly visible in the spectrum of the mean motion in
any latitude interval of this hemisphere. On the other hand,
there is a suggestion that, in $10^\circ - 20^\circ$ latitude
interval of the northern hemisphere, a periodicity in the mean motion
evolved from $\approx$ 16~years to $\approx$ 10~years
over the
period 1880\,--\,2007. It was the
opposite in the $20^\circ$\,--\,$30^\circ$ latitude-interval.
In this latitude interval,
the $\approx$ 40-year periodicity might have
evolved to a $\approx$ 25-year periodicity.
Overall the wavelet analyses suggest
that there is a considerable latitude-time dependency in the periodicities
in the mean meridional motion of the spot groups.
\begin{figure*}
\centering
\subfigure{
\includegraphics[width=6.0cm]{12968fg12ua.eps}
\includegraphics[width=6.0cm]{12968fg12ub.eps}
\includegraphics[width=6.0cm]{12968fg12uc.eps}}
\subfigure{
\includegraphics[width=6.0cm]{12968fg12la.eps}
\includegraphics[width=6.0cm]{12968fg12lb.eps}
\includegraphics[width=6.0cm]{12968fg12lc.eps}}
\caption{The same as Fig.~11, but for the variations
in the mean meridional motion of spot groups in the different
$10^\circ$ latitude intervals,
(a) $0^\circ$\,--\,$10^\circ$,
(b) $10^\circ$\,--\,$20^\circ$, and (c) $20^\circ$\,--\,$30^\circ$,
of the northern (upper panel)
and the southern (lower panel) hemispheres.}
\end{figure*}
\section{Conclusions and discussion}
From the analysis of the largest available reliable sunspot group data,
$i.e.$, the combined Greenwich and SOON sunspot group
data during the period 1879\,--\,2008, we find the following.
\begin{enumerate}
\item The mean meridional motion of the sunspot groups varies
considerably
on 5\,--\,20 year timescales during the period 1879\,--\,2008.
The maximum amplitude of the variation is 10\,--\,15 m s$^{-1}$.
\item The pattern and amplitude of the solar cycle
variation in the mean motion are
significantly different during the different cycles.
During the maximum epoch (year 2000) of the current cycle, the mean motion
is relatively stronger than in the past $\approx$100 years, and it is
northbound in both the
northern and
the southern hemispheres.
\item The north-south difference (north-south asymmetry) in the
mean meridional motion of the spot groups also varies with a maximum
amplitude of about 10 m s$^{-1}$. The north-south difference was
considerably larger during the early cycles
(with a strong contribution from the
high latitudes). It is
negligible
during the recent cycles.
\item Power spectral analyses suggest that $\approx$
3.2- and $\approx$ 4.3-year
periodicities exist in the mean meridional motion of the spot groups of
the southern hemisphere, whereas
a 13\,--\,16 year periodicity is found
in the mean motion of
the spot groups of the northern hemisphere.
\item The $\approx$ 12- and $\approx$ 22-year periodicities
are found to exist
in the north-south difference of the mean motion.
\item There is a considerable latitude-time dependence in the
periodicities of the mean meridional motion of the spot groups.
There is a strong
suggestion that, in the $10^\circ$\,--\,$20^\circ$ latitude-interval
of the northern hemisphere,
a periodicity slowly evolved from $\approx$ 16 year to $\approx$ 10 year,
over the period 1880\,--\,2007,
and it
evolved in the opposite way, $\approx$ 10 year to $\approx$ 16 year, in
$20^\circ$\,--\,$30^\circ$ latitude interval.
\end{enumerate}
The behavior of the mean motion of the spot groups in cycle~23 is similar to
that of cycle~14, which is a low-amplitude (lowest in
the last century) and considerably long-duration cycle.
Cycle~23 is also a relatively low-amplitude and long-duration cycle, so that
the result above (conclusion (2)) may be a part of a real
long-term behavior in the mean
meridional motion of the spot groups; $i.e.$, most probably it is
not an
artifact of the differences (if any)
between the Greenwich and the SOON datasets, within the continuous
time series of the combined dataset used here.
Most of the helioseismic measurements of
the meridional flows during the current sunspot cycle~23
suggest an increase in the
amplitudes of
the surface and the subsurface poleward
meridional flows with a decrease in magnetic activity
(Gonz\'alez Hern\'andez et al. 2008), whereas we find a strong northbound
mean meridional motion of the spot groups during
the maximum of cycle~23.
A reason for this discrepancy
may be that
sunspot motions may not represent
the Sun's plasma motions (D'Silva \& Howard 1994), or
the motions of the
magnetic structures may
represent the motions of the deeper
layers of the
Sun's convection zone where these structures are anchored
(Javaraiah \& Gokhale 1997a; Hiremath 2002;
Sivaraman et al. 2003; Meunier 2005). In addition,
the mean meridional motion of the sunspot groups may only represent
the mean solar meridional plasma motion
at low and middle latitudes, because sunspot data are confined
to only these latitudes.
The magnetic structures of only the large spot groups
during their initial days
might be anchored near the base of the convection zone
(Javaraiah \& Gokhale 1997a; Hiremath 2002;
Sivaraman et al. 2003),
hence might have
largely equatorward motions (Javaraiah 1999).
While rising through
the convection zone, the magnetic structures of the large spot groups
may be fragmented into the smaller structures
(Javaraiah 2003; Sch\"ussler \& Rempel 2005).
The small structures may move mainly toward the poles
(\v{S}vanda et al. 2007). However, as can be seen in Figs.~1\,--\,3
there are also
equatorward motions
(possibly due to an effect of the
reverse meridional flows), mainly near the minima of the cycles, where
spot groups are relatively small.
Meridional flows can transport magnetic flux and
cause cancellation/enhancement of magnetic flux, and
it is believed that
poleward meridional
flows play a major role in the polarity reversals of the polar magnetic
fields ($e.g.$, Wang 2004).
The $\approx$ 12-year and the $\approx$ 22-year periodicities of
the north-south difference in
the mean meridional motion of the spot groups
may have a close relationship with the 11-year
solar activity (the
emerging magnetic flux) cycle and the 22-year solar magnetic cycle,
respectively.
Many of the other periodicities found here also
exist in several activity phenomena (Knaack et al. 2005;
Song et al. 2009, and references therein)
and solar differential rotation determined from sunspot data
(Javaraiah \& Gokhale 1995, 1997b; Javaraiah \& Komm 1999;
Braj\v{s}a et al. 2006).
They may be closely
related to the Rossby type waves that were discussed by Ward (1965)
and others
(e.g., Knaack et al. 2005; Chowdhury et al. 2009).
According to the well known Gnevyshev and Ohl rule (G-O rule),
an odd cycle is stronger than its immediately preceding even cycle
(Gnevyshev \& Ohl 1948). Cycle pair~22,23 violated the G-O rule.
The duration of the current cycle~23 is very long, and
during the declining phase of this cycle, the activity in the southern
hemisphere is considerably stronger than in the northern hemisphere.
All these properties of the cycle~23
could be strongly related to the large and northbound mean meridional
motion of the spot groups during this cycle. As already mentioned above,
motions of magnetic structures such as sunspots mimic
the motions of deeper layers of the Sun (see also Javaraiah \& Gokhale 2002).
Therefore,
the magnetic flux cancellation/enhancement
due to the mean meridional motion of sunspot groups may take place
in the subsurface layers of the Sun.
By considering the poleward meridional
plasma flows detected by surface Doppler measurements and
by helioseismology,
the deeper
counter-motion (suggested by the mean meridional motion of spot groups) might
amplify the action of the near-surface dynamo in the
southern hemisphere during cycle~23, causing
stronger magnetic activity in
this hemisphere.
\begin{acknowledgements}
I am thankful to the anonymous referee for the critical review and very
useful comments and suggestions. I also thank
Dr. L. Bertello
for useful comments and suggestions.
Wavelet software was provided by
C. Torrence and G. Compo, and is available at
{\tt URL: http://paos.colorado.edu/research/wavelets/}. The MEM FORTRAN code
was provided to us by Dr. A. V. Raveendran.
\end{acknowledgements}
\section{Introduction}
Principal component analysis (PCA) is a simple and widely used unsupervised machine learning tool for dimensionality reduction.~\cite{br_and_ml_book,stat_learning_book,pca_guide} Perhaps the most common application of PCA is for the lossy compression of images. One popular demonstration is the analysis of facial images, leading to the aptly named ``eigenfaces'' that capture collective attributes of facial structure.~\cite{br_and_ml_book,stat_learning_book} Only a subset of the eigenfaces--much fewer than the na\"{i}ve dimensionality of the problem--are required to recover the salient aspects of facial images by simple linear combination. Another routine use is in natural language processing, where PCA is employed to shrink the data dimensionality down from the large number of words appearing in a data set or in a dictionary.~\cite{br_and_ml_book,stat_learning_book} Use of the resultant lower dimensional representation greatly improves the development of predictive models to classify text documents.
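As a generic sketch of the underlying linear algebra (not code from the present work), the principal components and the variance captured by each can be obtained from an eigendecomposition of the feature covariance matrix:

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal-component scores and eigenvalues of X.

    X : (n_samples, n_features) data matrix; rows are observations.
    """
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]      # keep the k largest
    return Xc @ evecs[:, order], evals[order]
```

For data that lie along a single direction in feature space, the leading eigenvalue captures essentially all of the variance, and a single component suffices for the reduced representation.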
The combined power and simplicity of PCA has made it a popular tool in the biological and physical sciences as well. For example, DNA microarray data is routinely treated with PCA to reduce the high dimensionality of the problem in order to identify unique gene expression states across various experimental conditions.~\cite{stat_learning_book, dna_microarray_chapter} Furthermore, PCA is commonly leveraged to extract dominant collective modes in simulations of proteins, referred to as ``Essential Dynamics'' in that field.~\cite{protein_pca_1,protein_pca_2} More recently, various spin models from statistical physics have been investigated via PCA and other machine learning methods.~\cite{spin_neural_classification,spin_confusion_ml,spin_ml_1, spin_ml_2,spin_ml_3,spin_ml_4,spin_ml_5,spin_ml_6} These studies have demonstrated the ability of machine learning tools to detect and quantify phase transitions by the autonomous construction of an order parameter (OP).
The aforementioned work on phase transitions in spin models served as motivation for this two-part series of papers. In the first manuscript (henceforth referred to as Paper I), we developed guidelines for the utilization of PCA~\cite{br_and_ml_book,stat_learning_book,pca_guide} to detect phase transitions in off-lattice, particle-based systems. We also demonstrated that PCA can readily identify the freezing transitions in hard disks and hard spheres, as well as liquid-gas phase separation in a binary mixture of patchy particles with complementary attractions. In developing and evaluating this approach, we initially focused on phase transitions that were equilibrium in nature and could be identified on the basis of features reflecting the positional degrees of freedom of the particles.
Here, we seek to generalize the formalism developed in Paper I to assess its utility for detecting phase transitions in a broader class of systems. Examples include equilibrium systems with 1) anisotropic particles leading to orientational as well as positional ordering~\cite{liquid_crystals_1,liquid_crystals_2} and 2) compositional degrees of freedom that can induce demixing, even in the absence of appreciable density fluctuations.~\cite{general_phase_behavior} We also address active or driven matter, which exhibits phase transitions whose detection and characterization cannot generally be facilitated based on arguments from equilibrium statistical mechanics.~\cite{noneq_pts_review,noneq_pts_book_v1,noneq_pts_book_v2,active_matter_pts,oscill_shear_pts_1,oscill_shear_pts_2,active_particles_review,oscill_mag_assembly,driven_pts_ex,active_pts_ex}
We propose several numerical encoding schemes (i.e., feature vector representations) for data describing particle configurations in these systems to detect their phase transitions with PCA. We find that prior knowledge of the phase transition is not required to construct a useful feature vector; consideration of the properties of the model system at hand is sufficient. However, we also show that by performing PCA on several choices for the feature vector, one can gain physical insights into the nature of the phase transition.
The balance of the manuscript is organized as follows. In Sect.~\ref{sec:methods}, considerations for constructing features for the detection of phase transitions in off-lattice systems using PCA are presented. The model systems analyzed in this work and the corresponding simulation details for each model are also provided. Sect.~\ref{sec:results} is divided into three subsections, each dedicated to a different model system. The first, Sect.~\ref{subsec:randorgR}, describes a study of the Random Organization Model, which exhibits a nonequilibrium phase transition between a quiescent state and a dynamically evolving steady state as a function of increasing density.~\cite{hyperuniformRO,OriginalRO,BerthierRO,SchmiedebergRO} Sect.~\ref{subsec:ellipsesR} addresses the fluid-nematic (orientationally driven) and the nematic-solid (positionally driven) phase transitions that occur upon densification of hard ellipses.~\cite{Baron_ellipses_k_6,ellipses_phase_diagram_Cuesta,ellipses_phase_diagram_Xu,ellipses_phase_diagram_Bautista} Finally, in Sect.~\ref{subsec:WRResult}, compositional demixing in the Widom-Rowlinson Model--a binary mixture where unlike particles interact via excluded volume effects but like particles are noninteracting\cite{WR,Ruelle,Chayes1995}--is explored. Concluding remarks are presented in Sect.~\ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
\subsection{Feature Construction}
\label{subsec:features}
Features ($f_i$) are scalar quantities that inform a machine learning algorithm about some aspect of the system being studied.~\cite{br_and_ml_book,stat_learning_book} Here, we denote a general vector of $m$ features as
\begin{equation} \label{eqn:general_features}
\begin{split}
&\boldsymbol{f} \equiv \begin{bmatrix}
f_{1}, & f_{2}, & \dots, & f_{m}
\end{bmatrix}^{T},
\end{split}
\end{equation}
where $T$ indicates a transpose. Feature vectors provide a numerical encoding for the separate realizations (or measurements) contained in the data set ($\mathcal{D}$).
When possible, features should reflect any known constraints; for physics problems, these include invariance to translation and rotation.~\cite{md_1,md_2,schnet,multibody_expansion} Such constraints can be easily encoded via the use of internal coordinates (e.g., interparticle distances or relative angular orientations) as features. Here, we compute pairwise quantities $g_{\beta}^{(\alpha)}$ that are in reference to a probe particle ($\alpha$) and a corresponding particle in its environment ($\beta$). A feature vector built from information considering $n_{\text{P}}$ probe particles (each with $n_{\text{NN}}$ corresponding environmental particles) can be represented as
\begin{equation} \label{eqn:multi_probe_features}
\begin{split}
&\boldsymbol{f} = \begin{bmatrix}
\boldsymbol{g}_{1}^{T}, & \boldsymbol{g}_{2}^{T}, & \dots, & \boldsymbol{g}_{n_{\text{P}}}^{T}
\end{bmatrix}^{T}, \\
&\boldsymbol{g}_{\alpha}^{T} \equiv \begin{bmatrix}
g_{1}^{(\alpha)}, & g_{2}^{(\alpha)}, & \dots, & g_{n_{\text{NN}}}^{(\alpha)}
\end{bmatrix},
\end{split}
\end{equation}
where the per-probe vectors $\boldsymbol{g}_{\alpha}^{T}$, one for each probe particle $\alpha$, are implicitly flattened to form one contiguous feature vector (block-matrix notation).
Within the above mathematical framework, there is no unique choice for the selection of either the probe particles or the neighboring particles that define their environment. Once a collection of probe and corresponding environment particles are chosen, we also must specify how the resultant pairwise quantities (the $g_{\beta}^{(\alpha)}$) are assigned to the $\alpha$ and $\beta$ indices in Eq.~\ref{eqn:multi_probe_features}. So that we do not have to compute properties with respect to every particle in the simulation box, we select $n_{\text{P}}$ probe particles at random. For the corresponding environmental particles, we use physical intuition as a guide by assuming that the distance between the probe particle and a given environmental particle $r_{\beta}^{(\alpha)}$ will influence the manner in which the associated feature $g_{\beta}^{(\alpha)}$ reports on a given phase transition. As a result, we use a distance-based criterion to determine which particles comprise the environment for a given probe (e.g., the first twenty nearest neighbors or every tenth nearest neighbor), hence our use of $n_{\text{NN}}$ to denote the number of environmental particles. Similarly, we assign the index $\beta$ on the basis of interparticle distance so that
\begin{equation} \label{eqn:nearest_neighbor_sorting}
r_{1}^{(\alpha)} \leq r_{2}^{(\alpha)} \leq \dots \leq r_{n_{\text{NN}}}^{(\alpha)}.
\end{equation}
The assignment of a probe particle to a given $\alpha$ is less intuitive and could be model-dependent; however, random assignment is always a possibility, and, as we discuss below, the results obtained from that initial assignment can in some cases help to identify a superior assignment scheme for the probe particles.
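As a concrete illustration of Eqs.~\ref{eqn:multi_probe_features}-\ref{eqn:nearest_neighbor_sorting}, a minimal NumPy sketch of the distance-sorted, multi-probe construction is given below. The toy configuration, box size, and all function and variable names are illustrative assumptions, not the production code used in this work; a 2D square periodic box is assumed.

```python
import numpy as np

def nn_distance_features(pos, probes, n_nn, box_l):
    """Feature vector of Eq. (2): for each probe particle, the sorted
    distances (Eq. (3)) to its n_nn nearest neighbors, flattened into
    one contiguous vector of length n_P * n_nn."""
    blocks = []
    for a in probes:
        d = pos - pos[a]
        d -= box_l * np.round(d / box_l)          # minimum-image convention
        r = np.sort(np.hypot(d[:, 0], d[:, 1]))   # r[0] = 0 is the probe itself
        blocks.append(r[1:n_nn + 1])              # drop the self-distance
    return np.concatenate(blocks)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(50, 2))        # toy 2D configuration
probes = rng.choice(50, size=4, replace=False)    # random probe particles
f = nn_distance_features(pos, probes, n_nn=5, box_l=10.0)  # length 4 * 5 = 20
```

Each probe contributes one block of monotonically nondecreasing distances, and the blocks are concatenated in probe order.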
In principle, $n_{\text{P}}$ could be as large as the number of particles in the simulation, $N$, and $n_{\text{NN}}$ could have a maximum value of $N-1$. Since the total feature vector size (in relation to Eq.~\ref{eqn:general_features}) is $m=n_{\text{P}}\times n_{\text{NN}}$, the preceding choices would yield a feature vector of length $N(N-1)$. For most systems of interest, PCA for feature vectors of this size would be computationally infeasible. Therefore, practical implementation of PCA using particle-based coordinate data requires sensible choices for $n_{\text{P}}$ and $n_{\text{NN}}$ that we describe in the following subsections.
Finally, we refer to features where the $g_{\beta}^{(\alpha)}$ are physically motivated quantities as ``intuited'' features ($\boldsymbol{f}_{\text{I}}$). In Paper I, we showed that $\boldsymbol{f}_{\text{I}}$ do not necessarily approximate white noise in the disordered reference state (here, the ideal gas) limit and therefore may possess correlations that could obscure the detection of a phase transition via PCA. Arriving at corrected features ($\boldsymbol{f}_{\text{C}}$) that are linearly decorrelated when applied to an ideal gas reference data set ($\mathcal{D}_{0}$) is accomplished by deriving a PCA whitening transformation~\cite{whitening} ($\boldsymbol{f}_{\text{I}}\rightarrow\boldsymbol{f}_{\text{C}}$) that satisfies $\langle \boldsymbol{f}_{\text{C}}\boldsymbol{f}_{\text{C}}^{T} \rangle_{\mathcal{D}_{0}} = \boldsymbol{I}$ where $\boldsymbol{I}$ is the unit matrix and $\langle \dots\rangle_{\mathcal{D}_{0}}$ is an average over the reference data.
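As an illustration of this whitening step, the following is a minimal NumPy sketch (an eigendecomposition-based, ZCA-style transformation; the names and the synthetic reference data are ours, not the implementation used in this work):

```python
import numpy as np

def fit_whitening(F0, eps=1e-12):
    """Derive a PCA whitening map f_I -> f_C from reference data F0
    (rows = realizations of the intuited features over the reference
    data set D_0), so that <f_C f_C^T> over D_0 equals the identity."""
    mu = F0.mean(axis=0)
    cov = np.cov(F0 - mu, rowvar=False)
    lam, U = np.linalg.eigh(cov)                  # eigendecompose covariance
    W = (U / np.sqrt(lam + eps)) @ U.T            # ZCA-style whitening matrix
    return mu, W

rng = np.random.default_rng(1)
F0 = rng.normal(size=(5000, 4)) @ rng.normal(size=(4, 4))  # correlated "D_0"
mu, W = fit_whitening(F0)
FC = (F0 - mu) @ W.T                              # corrected features
cov_C = np.cov(FC, rowvar=False)                  # ~ identity matrix
```

By construction, the covariance of the corrected features over the reference data reduces to the unit matrix, up to the small regularizer.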
\subsection{Models}
\label{subsec:models}
We provide a brief description of each model examined in this work as well as the relevant phase transition(s) below. We then specify the form of the associated feature vectors within the framework provided by Eq.~\ref{eqn:multi_probe_features}. Finally, we describe the simulation protocols used to generate the configurations on which the PCA is performed. Throughout, $N$ denotes the number of particles in a two-dimensional (2D) periodically replicated simulation cell of area $A$, $\rho=N/A$ is the number density, and $\eta=\rho \pi \sigma^{2}/4$ is the packing fraction.
\subsubsection{Random Organization Model}
\label{subsec:randorgM}
In one variant of the Random Organization (RandOrg) model, a circular particle of diameter $\sigma$ is defined as active if it overlaps with any other particle.~\cite{SchmiedebergRO,BerthierRO} For a given configuration, all active particles are simultaneously given random displacements; all other particle positions are unaltered. Particle positions are initialized at random, from which the above procedure is repeated until either 1) a so-called absorbing state is reached where no particle overlaps are present (lower densities) or 2) a steady-state is reached where the fraction of active particles fluctuates about some non-zero value (higher densities).
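The update rule of this shear-free variant is simple enough to sketch directly. The following illustrative NumPy implementation (parameter values and names are ours, with the density chosen well below the critical point so that an absorbing state is reached) iterates the overlap-detect-and-displace dynamics described above:

```python
import numpy as np

def randorg_step(pos, sigma, box_l, dmax, rng):
    """One RandOrg iteration: flag particles overlapping any other particle
    as active, then displace all active particles simultaneously."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box_l * np.round(d / box_l)                 # minimum image
    r2 = (d ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)                     # ignore self-distances
    active = (r2 < sigma ** 2).any(axis=1)
    kick = rng.uniform(-dmax, dmax, size=pos.shape)  # random displacements
    pos = np.where(active[:, None], pos + kick, pos) % box_l
    return pos, active.mean()

rng = np.random.default_rng(2)
box_l, sigma = 20.0, 1.0
pos = rng.uniform(0.0, box_l, size=(40, 2))          # rho = 0.1: dilute
f_active = 1.0
for _ in range(10_000):
    pos, f_active = randorg_step(pos, sigma, box_l, 0.25 * sigma, rng)
    if f_active == 0.0:                              # absorbing state reached
        break
```

At this low density the fraction of active particles decays to zero within a modest number of sweeps; near and above the critical density the same loop instead settles into a fluctuating steady state.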
Given that the RandOrg model comprises identical, radially symmetric particles, features that explicitly encode positional packing correlations around tagged particles are an obvious first choice to try. Specifically, we utilize mean subtracted interparticle distances as our features
\begin{equation} \label{eqn:distance_features}
g_{\beta}^{(\alpha)} = r_{\beta}^{(\alpha)} - \langle r_{\beta}^{(\alpha)} \rangle_{\mathcal{D}}.
\end{equation}
Furthermore, while the model is technically single-component to the extent that there are no immutable labels associated with the particles, multiple particle types (active and inactive) are created on-the-fly due to the dynamics prescribed by the model. To capture emergent inhomogeneity with respect to particle environment, it is critical to utilize multiple probe particles as prescribed by Eq.~\ref{eqn:multi_probe_features}. In the present work, we use a fixed feature length of $m=n_{\text{P}} \times n_{\text{NN}}=400$, and examine the effects of co-varying $n_{\text{NN}}$ and $n_{\text{P}}$.
Within the above approach, there is still the question of how to assign the probe particles to specific values of $\alpha$ in Eq.~\ref{eqn:multi_probe_features}.
One valid, though perhaps not particularly informative, choice is to randomly order the probe particles, such that $\alpha$ assignment does not encode any information. In Sect.~\ref{subsec:randorgR}, we demonstrate how performing PCA with this choice produces results that suggest a more informative sorting scheme, where probe particles are assigned to the index $\alpha$ on the basis of their first NN distance, i.e., $r_{1}^{(1)} \leq r_{1}^{(2)} \leq \dots \leq r_{1}^{(n_{\text{P}})}$.
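This first-NN reordering of the probe blocks can be sketched as follows (illustrative names; the sort is applied to the raw distances, before any mean subtraction):

```python
import numpy as np

def sort_probes_by_first_nn(f, n_p, n_nn):
    """Reorder the probe blocks of a flattened feature vector so that the
    first-NN distances satisfy r_1^(1) <= r_1^(2) <= ... <= r_1^(n_P)."""
    g = f.reshape(n_p, n_nn)                 # one row per probe particle
    return g[np.argsort(g[:, 0])].ravel()    # sort rows by first column

f = np.array([1.4, 2.0,                      # probe 1: first NN at 1.4
              0.6, 1.1,                      # probe 2: first NN at 0.6
              0.9, 1.7])                     # probe 3: first NN at 0.9
f_sorted = sort_probes_by_first_nn(f, n_p=3, n_nn=2)
# f_sorted -> [0.6, 1.1, 0.9, 1.7, 1.4, 2.0]
```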
We note two additional technical points regarding PCA for the RandOrg model. First, as we increase $n_{\text{P}}$, the magnitude of the OP grows in a nonlinear fashion. We can collapse OPs onto the same scale by dividing by the square root of the explained variance of their dimension, a procedure equivalent to data ``whitening'' discussed in Paper I in the context of correcting the physics-motivated features. We also find that OPs obtained from both dimensional (preserving units of distance) and nondimensionalized features (dividing raw distances by $\rho^{-1/D}$, where $D$ is the dimension) accurately detect the phase transition of the RandOrg Model. However, as we demonstrate, the PCA-derived OP using the former convention shows behavior that is more strikingly reminiscent of the classical OP for this system.
To generate the configuration data required to construct the above features, we performed 2D simulations in a square box with $N=1000$ particles and employed a maximum displacement of 0.25$\sigma$ in both the $x$ and $y$ directions for the active particles. The length of the simulations varied with proximity to the critical point characteristic of the transition between an absorbing and a steady state. At densities below the critical point, an individual simulation ended when the absorbing state was reached; however, critical slowing down impacts the simulation length required to achieve that state.~\cite{BerthierRO,OriginalRO} We used a maximum of $10^5$ simulation steps for densities below the critical point. For the higher densities, the fraction of active particles decreased from the initial random state before fluctuating about a steady-state. The number of simulation steps was chosen to be at least twice as long as the initial relaxation time scale, ranging from $10^3$ steps (at the highest densities) to $10^5$ steps (just past the critical point). We performed $10^3$ separate simulations, using only the last frame from the simulation in the PCA. Values for $\rho$ ranging from $0.38$ to $0.63$ were simulated in increments of $0.005$. From the simulation data, we computed $25$ feature vectors from each simulation snapshot, where the probe particles were selected at random. Within a single feature vector, probe particles are selected without replacement; however, a particular probe particle can appear in multiple feature vectors.
\subsubsection{Hard Ellipses}
\label{subsec:ellipsesM}
Densification of hard ellipses bears similarity to the freezing of hard disks studied in Paper I but with added complexity derived from particle-shape anisotropy. In addition to ordering on the center-of-mass positional level, quasi-long ranged orientational ordering is possible, yielding the so-called nematic phase.~\cite{liquid_crystals_1,liquid_crystals_2} Two obvious pairwise properties to compute from the configurational data of hard ellipses are center-of-mass distances and the relative orientations of the ellipses. With respect to the former, we use the positional features with a single probe particle as employed in Paper I for hard disks and spheres. This form is equivalent to the feature vector defined by
Eqs.~\ref{eqn:multi_probe_features}-\ref{eqn:distance_features} for the case where $n_{\text{P}}=1$ and $n_{\text{NN}}=N-1$, where $N$ is the number of particles. Subsequently, the size of the feature vector is reduced by only including every $10^{\text{th}}$ NN distance after the first feature in the final feature vector. The pairwise distances are normalized with respect to the mean interparticle spacing $l\equiv\rho^{-1/D}$, where $\rho$ is the number density and $D$ is the spatial dimensionality, to yield non-dimensionalized features.
For the latter case of relative orientations, we still employ one probe and index its environmental particles on the basis of NN sorting (Eqs.~\ref{eqn:multi_probe_features}-\ref{eqn:nearest_neighbor_sorting}); however, we use a measure of relative pair orientations in place of pair distances in the feature vector. Defining $\delta\theta_{\beta}^{(\alpha)}$ as the angular difference between the probe and environmental particles assigned to indices $\alpha$ and $\beta$, respectively, we employ features of the form
\begin{equation} \label{eqn:angular_features}
g_{\beta}^{(\alpha)}=\big|\cos{\big(\delta\theta_{\beta}^{(\alpha)}\big)}\big|-\big\langle\big|\cos{\big(\delta\theta_{\beta}^{(\alpha)}\big)}\big|\big\rangle_{\mathcal{D}}.
\end{equation}
That is, $g_{1}^{(1)}$ quantifies the relative orientation of the single probe particle with its closest NN ellipse, etc. From the sorted list defined by the combination of Eq.~\ref{eqn:multi_probe_features}, Eq.~\ref{eqn:nearest_neighbor_sorting}, and Eq.~\ref{eqn:angular_features}, only every $10^{\text{th}}$ NN is included in the feature vector, as was done for the positional features above.
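As a minimal sketch of Eq.~\ref{eqn:angular_features} (illustrative names and toy angles, not the production code), the mean-subtracted orientational features can be computed as:

```python
import numpy as np

def angular_features(dtheta, mean_abs_cos):
    """Orientational features of Eq. (5): |cos(dtheta)| minus its
    data-set average (one average per NN rank); the absolute value
    encodes the head-tail symmetry of the ellipses."""
    return np.abs(np.cos(dtheta)) - mean_abs_cos

# toy data set: rows = realizations, columns = NN ranks (angles in radians)
dtheta = np.array([[0.0, np.pi / 2],
                   [np.pi, np.pi / 3]])
mean_abs_cos = np.abs(np.cos(dtheta)).mean(axis=0)   # <|cos|>_D per rank
g = angular_features(dtheta, mean_abs_cos)           # mean-free by construction
```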
The feature vectors used as input to PCA were collected from Monte Carlo simulations of hard ellipses carried out at constant particle number and volume using the HOOMD-blue software package.~\cite{hoomd_1,hoomd_2,hoomd_3} The box shape was chosen to approximate a square by an appropriate distribution of triangular lattice cells with an aspect ratio of $\sqrt{3}\kappa$, where $\kappa=b/a$ is the ratio of the semi-major ($b$) and semi-minor ($a$) elliptical axes, respectively. (Here, we set the lengthscale as $2a=1$.) Specifically, given the number of cells in the $y$ direction, $n_y$, the number of cells in the $x$ direction is chosen as $n_x=\text{round}(\sqrt{3}\kappa n_y)$. For ellipses with $\kappa=\{3, 4, 6, 9\}$, we chose $n_y$=\{17, 15, 12, 10\}, which yielded total particle numbers of $N$=\{2992, 3120, 3000, 3120\}, respectively. For each step, the move type (rotation or translation) was selected at random with equal probability. The maximum translation and rotation per move were independently scaled to yield a $\sim25\%$ acceptance rate for efficient phase sampling. Density ranges were chosen to span the isotropic, nematic, and solid phases. For ellipses with $\kappa=\{3, 4, 6, 9\}$, we chose $\eta=\{0.6-0.9, 0.55-0.9, 0.4-0.9, 0.3-0.9\}$, respectively. A typical run proceeded as follows. A system of $N$ hard ellipses was started from an ideal triangular lattice at maximum packing fraction and expanded to a target $\eta$ value. Next, the ranges of the translational and rotational move sizes were optimized using 50 iterations of 100 steps to achieve the targeted acceptance ratio, where a step is equal to a HOOMD-blue ``timestep'', or approximately four sweeps over all particles. Then, the system was equilibrated for $6\times10^6$ steps, and data was collected every $6000$ steps from an additional $6\times10^6$ step production run.
From each frame, 30 feature vectors were constructed, where the probe particles were selected without replacement within a given frame.
\subsubsection{Widom-Rowlinson Model}
\label{subsec:WRModel}
The Widom-Rowlinson (WR) model~\cite{WR} is composed of a binary mixture of A and B particles where like pairs (A-A or B-B) are non-interacting and unlike pairs (A-B) interact isotropically via a hard-core repulsion with diameter~$\sigma$. Upon densification, the WR model compositionally demixes to form separate A- and B-rich phases. The resulting phase transition can straightforwardly be used to model compositional demixing; however, by integrating out the coordinates of one of the species, a model for liquid-gas coexistence can be obtained. In this work, we study the symmetric WR model where the number of A and B particles are equal.
Full specification of an individual particle in the WR model requires knowledge of both its type (A or B) and its position, yielding two obvious quantities to include in the feature construction. Instead of directly encoding the particle type as a categorical variable, we use particle type information to modify the assignments of the $\alpha$ and $\beta$ indices. We construct NN positional features as prescribed by Eqs.~\ref{eqn:multi_probe_features}-\ref{eqn:distance_features}, but we only include distances between pairs of A particles in the feature vector. We use a single probe particle ($n_{\text{P}}=1$) with $n_{\text{NN}}=1200$ nearest neighbors for the environmental descriptors. By neglecting one of the WR components, we construct features that explicitly leverage both compositional and positional information. Finally, we non-dimensionalize the distances in the same fashion as the ellipse positional features described in Sect.~\ref{subsec:ellipsesM}.
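A minimal sketch of these A-A-only distance features follows; the toy configuration, the type-encoding convention, and the use of the total number density in the non-dimensionalization are illustrative assumptions.

```python
import numpy as np

def aa_distance_features(pos, types, probe, n_nn, box_l):
    """Sorted distances from an A-type probe to its n_nn nearest A-type
    neighbors (B particles are ignored), non-dimensionalized by the mean
    interparticle spacing l = rho**(-1/2) in 2D."""
    mask = (types == 0)                       # convention here: type 0 = A
    mask[probe] = False                       # exclude the probe itself
    d = pos[mask] - pos[probe]
    d -= box_l * np.round(d / box_l)          # minimum image
    r = np.sort(np.hypot(d[:, 0], d[:, 1]))[:n_nn]
    rho = len(pos) / box_l ** 2
    return r * rho ** 0.5                     # divide by l = rho**(-1/2)

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 12.0, size=(80, 2))    # toy configuration
types = np.repeat([0, 1], 40)                 # symmetric mixture: 40 A, 40 B
g = aa_distance_features(pos, types, probe=0, n_nn=10, box_l=12.0)
```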
For the production of the configuration data required to construct feature vectors for PCA, the HOOMD-blue hard-particle Monte Carlo integrator~\cite{hoomd_1,hoomd_2,hoomd_3} was used to perform the simulations of the WR model in a square box for $N=4096$ particles in 2D. Equilibrium samples were generated at number densities spanning both the mixed and ordered phases as follows. After compressing the final configuration from simulation at the previous density, the system was equilibrated for $10^7$ steps, followed by a production run of $10^7$ steps, from which data was collected every $10^3$ steps, for a total of $10^4$ snapshots per density. A step is equivalent to a HOOMD-blue ``timestep'' as defined in Sect.~\ref{subsec:ellipsesM}. Simulations were run from $\rho = 0.064$ to $3.82$ in increments of $0.064$. From each frame, a single feature vector was constructed.
\section{Results and Discussion}
\label{sec:results}
Prior to examining the PCA results for the above models, we explain the general interpretation of the quantities that result from PCA below. For the features constructed according to the protocols described in Sect.~\ref{sec:methods}, PCA discovers a set of orthogonal axes--the principal components (PCs)--that are constructed in succession so as to maximize the data variance projected along each new axis. In this work, we monitor the relative explained variance of the PCs, denoted as $\lambda_{i}$ for the $i^{\text{th}}$ PC; by convention, the PCs are sorted so that $\lambda_{i}\ge \lambda_{i+1}$. A comparatively large value for $\lambda_1$ indicates that the information content of the features has been effectively concentrated into a single dimension: the first PC.
Of particular relevance to interpreting the PCA results is the projection of the feature vectors along the PCs: the PC score, denoted $p_{i}$ for the $i^{\text{th}}$ PC. Given that the first PC contains the largest explained variance, we evaluate the use of $p_{1}$ as an OP-like quantity to report on the phase transition of interest.~\footnote{We do not exclude the possibility that the other PC scores may be useful or that a better order parameter could involve multiple PC scores.~\cite{spin_ml_1,spin_ml_2,spin_ml_3,spin_ml_4,spin_ml_5,spin_ml_6} For simplicity, we focus on $p_{1}$.} This strategy amounts to coalescing as much ``information'' (i.e., variance) as possible from the high-dimensional feature vector $\boldsymbol{f}_{\text{C}}$ into the scalar $p_1$. Since each $p_1$ is associated with a single feature vector, we define two quantities that are averaged over a given state-point $\mathcal{S}$ (here, the state points are densities): $P_{1}\equiv \langle p_{1} \rangle_{\mathcal{S}}$ and the associated standard deviation, $\sigma_1\equiv\sqrt{\langle p_{1}^{2} \rangle_{\mathcal{S}}-\langle p_{1} \rangle_{\mathcal{S}}^{2}}$. When $P_1$ and $\sigma_1$ are plotted as a function of density, phase transitions will generally be indicated by a sigmoid in the former and a peak in the latter.
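In code, these state-point statistics amount to a simple grouped mean and standard deviation; the following NumPy sketch (toy data, illustrative names) computes $P_{1}$ and $\sigma_{1}$:

```python
import numpy as np

def op_statistics(p1, state):
    """Per-state-point mean P_1 = <p_1>_S and standard deviation
    sigma_1 = sqrt(<p_1^2>_S - <p_1>_S^2) of the first PC score;
    'state' labels the state point of each score."""
    states = np.unique(state)
    P1 = np.array([p1[state == s].mean() for s in states])
    s1 = np.array([p1[state == s].std() for s in states])   # population std
    return states, P1, s1

# toy scores at two state points (densities)
p1 = np.array([1.0, 3.0, -2.0, -2.0, 0.0, 4.0])
state = np.array([0.40, 0.40, 0.50, 0.50, 0.40, 0.50])
states, P1, s1 = op_statistics(p1, state)
```

Note that the population standard deviation (rather than the sample estimator with Bessel's correction) matches the definition of $\sigma_1$ above.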
The final relevant quantities from the PCA calculation are the PCs themselves--the weights that relate the features and the PC scores. As described in Sect.~\ref{subsec:features}, the intuited features ($\boldsymbol{f}_{\text{I}}$) are transformed into a corrected representation ($\boldsymbol{f}_{\text{C}}$), the latter of which is input into the PCA calculation. Since the values comprising $\boldsymbol{f}_{\text{I}}$ are straightforward to interpret physically, we explore the relationship between the PC scores and $\boldsymbol{f}_{\text{I}}$ (instead of $\boldsymbol{f}_{\text{C}}$). As described in Paper I, it is possible to write down a linear relationship between the scalar $p_{i}$ and the vector $\boldsymbol{f}_{\text{I}}$ via the following dot product
\begin{equation} \label{eqn:full_transformation}
p_{i} = \boldsymbol{q}_{i}^{T}\boldsymbol{f}_{\text{I}},
\end{equation}
where $\boldsymbol{q}_{i}$ is the desired vector of weights that map $\boldsymbol{f}_{\text{I}}$ to $p_i$.
Examination of these weights reveals which physically meaningful quantities are particularly relevant to the phase transition.
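Writing the PCA in the corrected space as $p_{i} = \boldsymbol{v}_{i}^{T}\boldsymbol{f}_{\text{C}}$ with $\boldsymbol{f}_{\text{C}} = \boldsymbol{W}\boldsymbol{f}_{\text{I}}$ (features taken as already mean-subtracted), the composition gives $\boldsymbol{q}_{i} = \boldsymbol{W}^{T}\boldsymbol{v}_{i}$; the full derivation is in Paper I. A minimal NumPy sketch with stand-in matrices (illustrative only):

```python
import numpy as np

def weights_on_intuited_features(W, V):
    """Compose the whitening map f_C = W f_I with the PCs found in the
    corrected space (columns of V): column i of the result is q_i, so
    that p_i = q_i^T f_I (features taken as already mean-subtracted)."""
    return W.T @ V

rng = np.random.default_rng(4)
W = rng.normal(size=(3, 3))                    # stand-in whitening matrix
V = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # orthonormal stand-in PCs
Q = weights_on_intuited_features(W, V)
fI = rng.normal(size=3)
p = V.T @ (W @ fI)                             # scores, computed directly
```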
In summary, we consider 1) the effectiveness of the dimensionality reduction via $\lambda_1$, 2) the low-dimensional (OP-like) representation of the data via quantities that depend on $p_1$ ($P_1$ and $\sigma_1$), and 3) the relative importance of the physical quantities that comprise $\boldsymbol{f}_{\text{I}}$ via the weights $\boldsymbol{q}_{i}$. For convenience, we summarize the above notation in Table~\ref{tab:sometab}.
\setlength{\arrayrulewidth}{1pt}
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}{>{\centering\arraybackslash} m{0.8cm} }
\begin{table}
\caption{Common PCA variable definitions.}
\begin{center}
\label{tab:sometab}
\begin{tabular}{| C | m{7.2cm}|}
\hline
$\lambda_{i}$ & Relative (fractional) explained variance captured by the $i^{\text{th}}$ PC, ranging between 0 and 1, where larger values are indicative of greater importance. \\
\hline
$p_{i}$ & The $i^{\text{th}}$ PC score. Mathematically, this is the projection of a feature vector along the $i^{\text{th}}$ PC. PCs offer a new coordinate system with information concentrated along the earlier (smaller index) PCs. \\
\hline
$P_{i}$ & Average of the $i^{\text{th}}$ PC score, $p_{i}$, over data from a state point ($\mathcal{S}$). $P_{1}$ serves as the OP-like quantity to report on phase transitions.\\
\hline
$\sigma_{i}$ & Standard deviation of the $i^{\text{th}}$ PC score, $p_{i}$, at a state point ($\mathcal{S}$). This is used as an effective ``susceptibility'' to locate the phase transition by identifying the maximum value. \\
\hline
$\boldsymbol{q}_{i}$ & Vector of weights that quantify the relevance of each feature to the $i^{\text{th}}$ PC.\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Random Organization Model}
\label{subsec:randorgR}
The RandOrg model was developed to understand the transition from reversible to irreversible dynamics that occurs upon increasing either the applied periodic shear or the density of a material.~\cite{OriginalRO} In the first incarnation of the RandOrg model, an initial configuration was sheared and any particles overlapping with others as a result of deforming the simulation box were defined as active particles. Only the active particles were given a random displacement, after which the simulation box was restored to its original geometry. At sufficiently low combinations of density and applied shear, a quiescent ``absorbing'' state eventually results, where shear does not generate further particle overlaps and there are no longer active particles. However, at greater densities and/or shear rates, shearing the system will always generate some overlaps. The reversible-to-irreversible transition reflects a state where the onset of particle collisions upon shearing prevents the system from returning to its original state when the shear is reversed.~\cite{OriginalRO,hyperuniformRO}
A modified version of the RandOrg model, where shear is not included, has also been studied.~\cite{BerthierRO,SchmiedebergRO} Instead, as described in Sect.~\ref{subsec:randorgM}, initial particles are placed at random; active particles correspond to overlapping particles. This model possesses the same type of transition from an absorbing state at sufficiently low densities to an evolving steady-state containing a non-zero number of active particles at higher densities, while being technically simpler to implement. Fig.~\ref{fgr:RO_transition}a shows the fraction of active particles $f_{\text{A}}$ in this version of the RandOrg model as a function of number density $\rho$. Two simulation configurations, below ($\rho=0.5$) and above ($\rho=0.51$), the critical point are shown in Fig.~\ref{fgr:RO_transition}b and c, respectively.
\begin{figure}
\includegraphics{transition-RO-01.png}
\caption{(a) Fraction of active particles, $f_{\text{A}}$, in the RandOrg model as a function of number density, $\rho$. (b,c) Simulation snapshots (b) below ($\rho=0.5$) and (c) above ($\rho=0.51$) the critical density. Active particles are shown in lighter green in panel c.}
\label{fgr:RO_transition}
\end{figure}
Like hard disks, the RandOrg model comprises identical, radially symmetric particles, for which distance-based features are a sensible choice. Therefore, we first employed the feature vector developed in Paper I for hard disks--sorted nearest-neighbor (NN) distances associated with a single probe particle (Eqs.~\ref{eqn:multi_probe_features}-\ref{eqn:distance_features} where $n_{\text{P}}=1$). However, this construction of the feature vector did not produce a satisfactory OP. Use of a feature vector for which $n_{\text{P}}=1$ likely fails because, above the critical point, the system always has two effective particle types--active and inactive (see Fig.~\ref{fgr:RO_transition}c). Therefore, the environment of a single particle is not an accurate representation of the simulation box as a whole at higher densities.
As described in Sect.~\ref{subsec:features}, distance-based feature vectors can naturally incorporate multiple probe particles (and their corresponding neighbors). As with the NN distances for a given probe particle, one must decide how to order the probe particles inside the feature vector. In the absence of any information about the nature of a given phase transition, a first choice might be to randomly order the probe particles. In Fig.~\ref{fgr:RO_random}, we show results for the first PC using a feature vector constructed from 40 probe particles ($n_{\text{P}}=40$), each of which is encoded via its first 10 NN distances ($n_{\text{NN}}=10$). In principle, the weights associated with each probe particle should be identical since there is no physics-based interpretation for their ordering in the feature vector. Indeed in Fig.~\ref{fgr:RO_random}c we find a repeating pattern for every 10 weights: the first NN distance ($r_{1}^{(\alpha)}$) component weight for each probe is large in magnitude. The relative uniformity of the first NN distance weights, compared to the noisiness in the larger NN distances, indicates that the $r_{1}^{(\alpha)}$ values are informative to the PCA.
The corresponding OP is shown in Fig.~\ref{fgr:RO_random}a and bears striking resemblance to the standard OP shown in Fig.~\ref{fgr:RO_transition}a. Indeed, by arbitrarily shifting and scaling the PC score, we find that $f_{\text{A}}$ essentially overlaps with the PCA-deduced OP. It seems that the repeating unit in the component weights is able to distinguish between overlapping and non-overlapping particles and therefore can report on the relative amounts of active and inactive particles at a given value of $\rho$.
\begin{figure}
\includegraphics{fa_vs_random-01.png}
\caption{(a) The PCA-deduced OP $P_{1}$ (with probe particles sorted randomly in the feature vector) as a function of number density $\rho$ compared to the conventional OP (the fraction of active particles, $f_{A}$) for the RandOrg model. (b) Comparison of $P_{1}$ with probe particles sorted randomly (black) versus according to their first NN distance so that $r_{1}^{(\alpha)} \leq r_{1}^{(\alpha+1)}$ (blue). (c) Component weights, $[\boldsymbol{q}_{1}]_{k}$, as a function of feature dimension $k$ for the two PC scores shown in panel (b).}
\label{fgr:RO_random}
\end{figure}
We can use the above component weights to intelligently devise a better sorting scheme for the probe particles. From Fig.~\ref{fgr:RO_random}c, it is clear that $r_{1}^{(\alpha)}$ is a highly weighted contribution to the feature vector; therefore, we performed a separate PCA calculation with the same values for $n_{\text{P}}$ and $n_{\text{NN}}$ while sorting the probe particles so that $r_{1}^{(\alpha)} \leq r_{1}^{(\alpha+1)}$. Because we have sorted the probe particles on the basis of a physically meaningful descriptor, the symmetry among probe particles is broken and the probe particles with the closest NNs (i.e., those assigned to lower $\alpha$) are weighted more heavily than other probe particles (Fig.~\ref{fgr:RO_random}c). Moreover, the associated OP is significantly sharper at the phase transition, essentially giving a binary classification into absorbing states and dynamic steady states on the basis of the OP (Fig.~\ref{fgr:RO_random}b).
With the above sorting scheme in hand, we vary both $n_{\text{P}}$ and $n_{\text{NN}}$ while keeping the length of the feature vector fixed at $m=n_{\text{P}} \times n_{\text{NN}}=400$. As $n_{\text{P}}$ increases and therefore $n_{\text{NN}}$ decreases, the quality of the first PC score as an OP improves significantly, with the metric sharpening into a sigmoidal curve that separates quiescent absorbing states from diffusive steady-states; see Fig.~\ref{fgr:RO_PCA}a. Conversely, when $n_{\text{P}}=1$ (as was the case in Paper I), $P_{1}$ cannot detect the transition. Correspondingly, when features constructed with more probe particles are used, the explained variance associated with the first PC increases dramatically (Fig.~\ref{fgr:RO_PCA}b). The preceding trend is monotonic--there is no value to including more than the nearest interparticle distance per probe particle at constant $m$, an indication of the local character of the phase transition in the RandOrg model.
For the above series of PCA calculations, the first 80 component weights are plotted in Fig.~\ref{fgr:RO_PCA}c. When $n_{\text{P}}$ is small, the components appear to be largely random, but as $n_{\text{P}}$ is increased, the components develop more structure. For each probe particle, the component weights associated with $r_{1}^{(\alpha)}$ are much larger than those of the rest of the features, reinforcing the importance of the first NN distance in the dimensionality reduction. Moreover, the probe particles that have closer first NNs have greater weights; we can interpret the role of using multiple probe particles as capturing an accurate measure of $r_{1}^{(1)}$ in a statistical sense, i.e., not all probe particles are required, but sampling is needed to make sure that sufficiently representative interparticle separations are included in each feature vector.
\begin{figure}
\includegraphics{vary_nP_RO-01.png}
\caption{(a) PCA-deduced OP $P_{1}$ of the RandOrg model as a function of number density $\rho$ for different numbers of probe particles $n_{\text{P}}$ and corresponding nearest neighbors $n_{\text{NN}}$, respectively. (b) Corresponding explained variance for the first three PCs and (c) the first 80 PC weights $[\boldsymbol{q}_{1}]_{k}$.}
\label{fgr:RO_PCA}
\end{figure}
While the importance of the first NN distance is intuitive given that the RandOrg phase transition is defined by the presence or absence of particle overlaps, we did not incorporate knowledge of the transition in constructing the features. In other words, our results suggest that modifying the feature vector can be used to infer characteristics of a transition, even if its nature is unknown at the outset. Specifically, for the RandOrg model, the importance of the first NN distance revealed by the PCA implies a transition that is local in character in real-space, and the necessity of multiple probe particles indicates that multiple distinct particle types or environments are an important characteristic.
\subsection{Hard Ellipses}
\label{subsec:ellipsesR}
\begin{figure}[ht]
\includegraphics{transition-ellipse-01.png}
\caption{Density-driven isotropic fluid to nematic phase transition in a system of hard ellipses with aspect ratio $\kappa=4$. (a) The packing fraction $\eta$ dependence of the conventional order parameter for this transition, $P^{\text{max}}_{2}=[\langle 1/N \Sigma_{i}^{N} \text{cos}(2\theta_{i})\rangle ^{2}+\langle 1/N \Sigma_{i}^{N} \text{sin}(2\theta_{i})\rangle ^{2}]^{1/2}$, where $\theta_{i}$ is the angle between the semi-minor axis of the $i^{\text{th}}$ ellipse and the x-axis and $N$ is the number of ellipses, as per Ref.~\citenum{ellipses_phase_diagram_Bautista}. Simulation configurations of (b) the isotropic fluid at $\eta=0.65$ and (c) the nematic phase at $\eta=0.75$. Ellipses are color coded according to $\theta_{i}$ as defined above with the angular range limited to $[-\pi/2,\pi/2]$ due to orientation symmetry of the ellipse.}
\label{fig:system_illustration}
\end{figure}
The freezing transition for hard ellipses differs from that of hard disks because the former features an intervening nematic phase between the disordered fluid and the positionally ordered solid. The nematic phase manifests when the ellipses display disordered center-of-mass positions but quasi-long-range orientational order~\cite{Baron_ellipses_k_6,ellipses_phase_diagram_Cuesta,ellipses_phase_diagram_Bautista,ellipses_phase_diagram_Xu}. A conventional OP that reports on the fluid-nematic transition, $P_2^{\text{max}}$, as well as simulation configurations at densities below and above the phase transition, are shown in Fig.~\ref{fig:system_illustration}. The continuous, second-order nature of the fluid-nematic transition is apparent from the behavior of $P_2^{\text{max}}$, but the precise density of the underlying phase transition is not readily extracted from it. Therefore, one typically monitors the long-range power-law decay of a pairwise angular correlation function versus interparticle separation to identify the transition; the nematic phase transition point is identified when the power-law decay exponent falls below an approximate value of $\frac{1}{4}$~\cite{ellipses_phase_diagram_Cuesta,ellipses_phase_diagram_Xu}.
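For reference, the conventional OP $P_2^{\text{max}}$ defined in the caption of Fig.~\ref{fig:system_illustration} can be evaluated as below. This is a hedged sketch: the caption's outer ensemble averages are replaced here by a single-configuration estimate, and the function name is our own.

```python
import numpy as np

def p2_max(theta):
    """Single-configuration estimate of the nematic order parameter.

    theta holds the angles (radians) between each ellipse's semi-minor axis
    and the x-axis; doubling the angle enforces the theta -> theta + pi
    head-tail symmetry of an ellipse.
    """
    theta = np.asarray(theta, dtype=float)
    return np.hypot(np.mean(np.cos(2.0 * theta)), np.mean(np.sin(2.0 * theta)))
```

For a perfectly aligned configuration this returns 1, while for an isotropic fluid it decays toward zero as the number of ellipses grows.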
\begin{figure}[ht]
\includegraphics{angles-ellipse-01.png}
\caption{Based on PCA of the 2D system of hard ellipses, we show (a) component weights $[\mathbf{q}_{1}]_{k}$, (b) the explained variance $\lambda_{i}$, and (c) the OP ($P_1$) and standard deviation, $\sigma_1=\sqrt{\left \langle p_1^2 \right \rangle_{\mathcal{S}} -\left \langle p_1 \right \rangle_{\mathcal{S}}^2}$, where $p_1$ is the first PC score associated with an individual feature vector. Averages are taken over all feature vectors at a given state point ($\mathcal{S}$)--here, a single density.}
\label{fig:OPprops}
\end{figure}
To detect the fluid-nematic phase transition via PCA, we use a feature vector constructed from the relative orientation of pairs of ellipses that are sorted in ascending order by the distance between the probe particle and its neighbor as described in Sect.~\ref{subsec:ellipsesM}. In Fig.~\ref{fig:OPprops}, we present the results of PCA using this orientational feature vector for ellipses with an aspect ratio $\kappa=4$. As seen by the explained variance $\lambda_i$ in Fig.~\ref{fig:OPprops}b, the first PC captures approximately $40\%$ of the data variance, indicating effective dimensionality reduction. From the component weights $[\mathbf{q}_{1}]_{k}$ in Fig.~\ref{fig:OPprops}a, it is clear that long-range orientations (larger values of $k$) are much more important than the closer-neighbor orientations, whose weights tend towards zero.
The above weights reflect the underlying structural motifs present in hard ellipses at various values of $\eta$. Orientationally aligned clusters of ellipses are present in both fluid and nematic phases (compare, for instance, the snapshots in Fig.~\ref{fig:system_illustration}b,c). Therefore, orientations between nearby ellipses (smaller values of $k$ in Fig.~\ref{fig:OPprops}a) are not useful indicators of nascent orientational long-range order, and their contributions to the OP are suppressed by the PCA. On the other hand, long-distance components are approximately equally weighted, as they correlate with the presence of an emerging nematic director but average out for random configurations in the fluid state.
\begin{figure}[ht]
\includegraphics{etavk-01.png}
\caption{Phase boundary of the isotropic fluid to nematic transition for a system of hard ellipses as a function of packing fraction $\eta$ and aspect ratio $\kappa$. Solid black dots indicate the phase transition point identified from the position of maximum susceptibility, $\text{max}(\sigma_1)$. The dashed gray line indicates the phase boundary fit reported by Xu et al.\cite{ellipses_phase_diagram_Xu}}
\label{fig:nematic_boundary}
\end{figure}
Regarding the OP itself, we show $P_1$ and its standard deviation $\sigma_1$ in Fig.~\ref{fig:OPprops}c; the latter quantity can be interpreted as a type of susceptibility of the OP. Note that $P_1$ resembles the traditional OP $P_2^{\text{max}}$ of Fig.~\ref{fig:system_illustration} and, while responsive to the nematic phase change, does not provide a unique transition point. As such, we use $\sigma_1$ to correlate the PCA results to the fluid-nematic phase transition: the maximum of $\sigma_1$ indicates the region most consistent with large-scale configuration fluctuations near the critical point of a continuous phase transition. Indeed, for all values of $\kappa$ investigated here, we find that the density $\eta$ associated with the maximum value of $\sigma_{1}$ is in excellent agreement with the fitted fluid-nematic boundary reported by Xu et al.\cite{ellipses_phase_diagram_Xu} (see Fig.~\ref{fig:nematic_boundary}), without requiring a tedious analysis of the long-range scaling behavior of the angular correlations employed by the latter study.
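The susceptibility-based identification of the transition density can be sketched as follows; the function and variable names are hypothetical illustrations of the procedure described above, not code taken from our analysis pipeline.

```python
import numpy as np

def susceptibility_transition(densities, pc_scores_by_state):
    """Locate a continuous transition from fluctuations of the first PC score.

    densities: sequence of state-point densities; pc_scores_by_state: one 1D
    array of per-feature-vector scores p_1 for each density.  sigma_1 is the
    standard deviation sqrt(<p_1^2> - <p_1>^2) at each state point, and the
    transition density is taken where sigma_1 peaks.
    """
    sigma = np.array([np.std(p) for p in pc_scores_by_state])
    return sigma, densities[int(np.argmax(sigma))]
```

The peak position plays the role that the maximum of a susceptibility plays in conventional finite-size analyses of continuous transitions.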
\begin{figure}[ht]
\includegraphics{orientvspos-01.png}
\caption{For the first PC of hard ellipses, comparison of the (a) shifted and normalized PC scores $\tilde{P}_1 \equiv \frac{P_1-\text{min}(P_1)}{\text{max}(P_1-\text{min}(P_1))}$ and (b) normalized standard deviations $\tilde{\sigma}_1 \equiv \frac{\sigma_{1}}{\text{max}(\sigma_{1})}$, as derived from either orientational or positional feature vectors. The dashed black vertical line indicates fluid-nematic boundary, and the shaded gray region indicates the nematic-solid phase coexistence region reported in Ref.~\citenum{ellipses_phase_diagram_Bautista}.}
\label{fig:pos_vs_orientation}
\end{figure}
While use of orientational features as input to PCA provides a means to detect the fluid-nematic phase transition, the relationship between the above PCA results and the nematic-solid transition is less obvious. In Fig.~\ref{fig:pos_vs_orientation}a,b, we plot a normalized version of $P_1$ and its standard deviation $\sigma_{1}$ derived from the orientational features against the known nematic-solid coexistence region (the gray shaded area). There is a weak response to the nematic-solid region as $\tilde{\sigma}_{1}$ drops abruptly--perhaps an indicator of reduced orientational freedom of the ellipses upon solidification. However, the relationship between the angular degrees of freedom and the nematic-solid transition is relatively indirect and therefore performing PCA on the orientational features yields comparatively poor OP-like quantities for this phase change.
In order to detect the center-of-mass level ordering that occurs at the nematic-solid transition, we employ the positional NN features used for hard disks in Paper I. That is, instead of including the relative angle between two ellipses in the feature vector, we employ the interparticle distances. In Fig.~\ref{fig:pos_vs_orientation}a,b, we compare the normalized PC scores $\tilde{P}_1$ and the associated $\tilde{\sigma}_{1}$, respectively, when these positional features are used as input to the PCA. The resulting OP is insensitive to the fluid-nematic boundary but grows sharply across the known nematic-solid phase-coexistence region. Similar to the orientational features, the maximum in the position-based susceptibility $\sigma_1$ in Fig.~\ref{fig:pos_vs_orientation}b is an appropriate identifier for the underlying phase transition. The form of the OP as a function of $\eta$--flat over the fluid phase, rapid growth upon solidification, and more muted growth in the solid phase--is qualitatively similar to the OP reported in Paper I for the densification of hard disks. Together, the above results attest to the ability of PCA to provide insights into the character of a given phase transition by varying the form of the feature vector.
\subsection{Widom-Rowlinson Model}
\label{subsec:WRResult}
As mentioned in Sect.~\ref{subsec:WRModel}, the WR Model contains two particle types--A and B--where like particles are non-interacting and unlike particles interact via a hard-core repulsion of diameter $\sigma$. At low densities, the two species are mixed. However, upon densification, a phase transition occurs~\cite{Ruelle,Chayes1995,Georgii} where the WR mixture phase separates into A-rich and B-rich regions (Fig.~\ref{fig:TraditionalOP}a,b) as the excluded volume effects experienced by the unlike particles overcome the mixing entropy. The density at which the demixing transition occurs varies with composition; we denote $x$ as the fraction of A particles. In the present work, we study the mixture for which $x=0.5$.
When $x=0.5$, the density at which clusters of like particles become percolated can be used to determine the demixing transition~\cite{Chayes1995,Klein,ChayesPRE}. For the WR model, a cluster is defined as a group of particles that are all either directly overlapping or connected via a contiguous pathway of overlapping particles when periodic boundary conditions are properly taken into account. For a finite-sized, periodically replicated simulation box, a percolated cluster is one that grows in size upon replication of the simulation cell. Therefore, for each species at $x=0.5$, we computed the fraction of configurations possessing at least one percolated cluster of that particle type and averaged the results for the A and B particles to yield $f_{\text{perc}}$. Fig.~\ref{fig:TraditionalOP}c shows $f_{\text{perc}}$ as a function of density; percolated clusters were identified as described in Ref.~\citenum{linkergel}. One choice for the percolation threshold--the point when at least 50\% of the configurations are percolated--yields a demixing transition density of $\rho_{t} = 1.68$.
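One standard way to test the "grows upon replication" criterion is union-find with displacement tracking. The sketch below is a minimal 2D illustration of that general technique and is not the implementation of Ref.~\citenum{linkergel}; the function name and interface are our own assumptions.

```python
import numpy as np
from itertools import combinations

def percolates(pos, box, diam):
    """Detect a percolating cluster of overlapping disks under periodic
    boundary conditions.

    Union-find stores, for every particle, its displacement relative to its
    cluster root in unwrapped (replicated) coordinates.  Two overlapping
    particles that already share a root but whose recorded displacements
    disagree close a loop that winds around the periodic box: the cluster
    connects to its own periodic image and therefore percolates.
    """
    pos = np.asarray(pos, dtype=float)
    box = np.asarray(box, dtype=float)
    n = len(pos)
    parent = list(range(n))
    disp = np.zeros((n, 2))              # displacement from root to node

    def find(i):
        if parent[i] == i:
            return i, np.zeros(2)
        root, d = find(parent[i])
        parent[i] = root                 # path compression
        disp[i] = disp[i] + d
        return root, disp[i].copy()

    for i, j in combinations(range(n), 2):
        dv = pos[j] - pos[i]
        dv -= box * np.round(dv / box)   # minimum-image separation
        if dv @ dv >= diam * diam:
            continue                     # particles do not overlap
        ri, di = find(i)
        rj, dj = find(j)
        if ri != rj:
            parent[rj] = ri
            disp[rj] = di + dv - dj      # keep displacements consistent
        elif not np.allclose(di + dv, dj):
            return True                  # inconsistent loop => winding
    return False
```

A cluster that wraps around the box closes a loop whose net minimum-image displacement is a nonzero multiple of the box length, which is exactly the inconsistency the final check detects.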
\begin{figure}
\includegraphics[width=3.37in,keepaspectratio]{TraditionalOP.pdf}
\caption{For the WR model at $x=0.5$, simulated configuration snapshots of (a) mixed WR particles at $\rho = 1.25$ (below the phase transition) and (b) demixed WR particles at $\rho = 2.5$ (above the phase transition). (c) Fraction of percolated configurations ($f_{\text{perc}}$) as a function of number density.}
\label{fig:TraditionalOP}
\end{figure}
Positional features of the form defined by Eqs.~\ref{eqn:multi_probe_features}--\ref{eqn:distance_features} are unable to detect the above demixing transition if compositional degrees of freedom are not taken into account--a consequence of the absence of large scale fluctuations in the packings of the particles (agnostic to particle type) as the phase transition occurs. However, for any multicomponent mixture, features can be constructed using particle type data as well as spatial information. For the WR model, one such strategy is to design a feature vector that only includes interparticle distances if the corresponding pair of particles meets some criterion based on particle type. One such choice (though others are possible) is to only include distances between two A particles in the feature vector defined by Eqs.~\ref{eqn:multi_probe_features}--\ref{eqn:distance_features}--akin to the liquid-gas formulation of the WR model. The outcome of PCA with the above feature vector is shown in Fig.~\ref{fig:LGFig}. The component weights $[\boldsymbol{q}_{1}]_{k}$ in Fig.~\ref{fig:LGFig}a indicate that the long-ranged positional correlations dominate, whereas the smallest interparticle separations with respect to a given probe are essentially meaningless to the PCA. Reminiscent of the fluid-nematic transition seen in ellipses and described in Sect.~\ref{subsec:ellipsesR}, some local clustering on the basis of particle type occurs at lower densities than phase separation does; see Fig.~\ref{fig:TraditionalOP}a for example. Therefore, it is the long-range correlations that change sharply as the phase transition occurs. Fig.~\ref{fig:LGFig}b depicts the explained variance $\lambda_{i}$ for the first 20 PCs, where the first PC accounts for $\sim10\%$ of the data variance--an order of magnitude more than the succeeding components.
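The type-filtered feature construction described above can be sketched as follows; this is a minimal illustration (the function name and interface are our own, and it assumes the probe particles are themselves of type A):

```python
import numpy as np

def aa_distance_features(pos, types, box, probe_idx, n_nn):
    """Positional feature vector built from A-A interparticle distances only.

    pos: (N, 2) coordinates; types: length-N labels 'A'/'B'; box: periodic
    box lengths; probe_idx: indices of the (A-type) probe particles; n_nn:
    number of nearest A neighbors kept per probe.  Distances to B particles
    never enter the feature vector.
    """
    pos = np.asarray(pos, dtype=float)
    box = np.asarray(box, dtype=float)
    a_idx = np.flatnonzero(np.asarray(types) == 'A')
    feats = []
    for p in probe_idx:
        dv = pos[a_idx] - pos[p]
        dv -= box * np.round(dv / box)         # minimum image
        d = np.sort(np.hypot(dv[:, 0], dv[:, 1]))
        feats.append(d[1:n_nn + 1])            # d[0] == 0 is the probe itself
    return np.concatenate(feats)
```

Restricting to A-A distances makes the feature vector blind to the B particles, so the PCA responds only to the effective single-species structure.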
The PCA-deduced OP is shown in Fig.~\ref{fig:LGFig}c. Relative to $f_{\text{perc}}$, the PCA-based OP varies more slowly and has a wider transition window. To identify the unique transition point, the standard deviation $\sigma_{1}$, as outlined in Sect.~\ref{subsec:ellipsesR}, is computed for every density. The maximum value of $\sigma_{1}$, denoting the region with the most variance in the feature vectors, is taken to mark the transition density $\rho_{t}$. For the above effectively one-component representation, a sharp peak (not shown) in the standard deviation is observed at $\rho_t = 1.66$, which is in excellent agreement with the value obtained through percolation arguments ($\rho_{t} = 1.68$), indicating successful identification of the demixing transition.
\begin{figure}
\includegraphics[width=3.37in,keepaspectratio]{LGFig.pdf}
\caption{For the first PC upon application of PCA to the WR model, (a) its component weights $[\boldsymbol{q}_{1}]_{k}$, (b) the explained variance $\lambda_{i}$, and (c) the PCA-deduced OP ($P_{1}$) and percolation-based OP ($f_{\text{perc}}$).}
\label{fig:LGFig}
\end{figure}
\section{Conclusions}
In this article, we extended the PCA framework introduced in Paper I to detect phase transitions in three new model systems, each characterized by very different physics. The success of the method in these cases highlights the importance of exploring various feature representations when seeking to detect such phase transitions and understand their underlying physics.
Moving forward, two avenues seem fruitful for developing a routine analysis toolkit: (1) curate a sufficiently diverse library of features, each focused on different physical aspects that may be relevant to various phase transitions of interest and (2) explore the ability of more sophisticated, nonlinear learners to autonomously extract the physical intuition underlying such transitions on-the-fly.
Finally, we comment on general trends that we observed which indicate whether a PCA calculation is usefully reporting on the phase transition of interest.
First, we note that meaningful dimensionality reduction into the first PC is generally indicated by a large relative explained variance in comparison to the higher-order PCs. Furthermore, we found that appropriate choices for the features resulted in an OP with strong convergence properties that required relatively small amounts of data to overcome sampling noise. We expect that these trends are relevant to other machine-learning approaches for the detection of phase transitions as well.
\label{sec:conclusions}
\section*{Acknowledgments}
The authors thank Michael P. Howard for valuable discussions and feedback. This research was primarily supported by the National Science Foundation through the Center for Dynamics and Control of Materials: an NSF MRSEC under Cooperative Agreement No. DMR-1720595 as well as the Welch Foundation (F-1696). We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources.
\setcounter{figure}{0}
\setcounter{equation}{0}
\renewcommand\thefigure{A\arabic{figure}}
\renewcommand{\thesection}{\thepart .\arabic{section}}
\renewcommand\theequation{A\arabic{equation}}
\renewcommand{\thesubsection}{\arabic{subsection}}
\renewcommand{\thesubsubsection}{\alph{subsubsection}}
\section*{#1},
notes-sep = 0pt,
format = \indent\normalfont,
number = \textsuperscript{#1}
}
\startlocaldefs
\providecommand{\glFnc}[1]{{\mathnormal{#1}}}
\providecommand{\glSFnc}[1]{{\mathrm{#1}}}
\newcommand{\fcaIR}{\mathtt{I}}
\newcommand{\plNStR}{\Omega_{r}}
\newcommand{\plNStJ}{\Omega_{j}}
\newcommand{\plNVCR}{\upsilon_{\mathrm{r}}}
\newcommand{\plNVCJ}{\upsilon_{\mathrm{j}}}
\newcommand{\plDrSR}{{\widetilde{\varepsilon}}_{\mathrm{r}}}
\newcommand{\plDrSJ}{{\widetilde{\varepsilon}}_{\mathrm{j}}}
\newcommand{\plZedR}{{\mathfrak{z}}_{\mathrm{r}}}
\newcommand{\plZedJ}{{\mathfrak{z}}_{\mathrm{j}}}
\DeclareMathOperator{\stSu}{Su}
\DeclareMathOperator{\stDr}{\Lambda}
\DeclareMathOperator{\glVa}{Va}
\DeclareMathOperator{\glWg}{\glFnc{w}}
\DeclareMathOperator{\glHm}{\glSFnc{h}}
\DeclareMathOperator{\glEta}{\glFnc{H}}
\DeclareMathOperator{\plDrR}{\widetilde{\Lambda}_{\mathrm{r}}}
\DeclareMathOperator{\plDrJ}{\widetilde{\Lambda}_{\mathrm{j}}}
\DeclareMathOperator{\sfhzeta}{\zeta}
\DeclareMathOperator{\sfWitten}{\mathcal{W}}
\DeclareMathOperator{\sfLerch}{\Phi}
\DeclareMathOperator{\sfWittenLerch}{{\mathcal{W}}_{\Phi}}%
\newcommand{\varnothing}{\varnothing}
\newcommand{\preccurlyeq}{\preccurlyeq}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\glbinom}[2]{\left[\!\genfrac{}{}{0pt}{0}{#1}{#2}\!\right]}
\newcommand{\gltrinom}[3]{\left[\!\begin{array}{@{}c@{}}#1\\#2\\#3\end{array}\!\right]}
\endlocaldefs
\renewcommand{\doi}[1]{\href{https://doi.org/\detokenize{#1}}{\texttt{https://doi.org/#1}}}
\begin{document}
\begin{frontmatter}
\begin{fmbox}
\dochead{Research\hfill\now}
\title{On the Perturbation of\protect\\Self-Organized Urban Street Networks}
\author[
addressref={NYUAD},
corref={NYUAD},
email={[email protected]}
]{\inits{JGMB}\fnm{J{\'e}r{\^o}me GM} \snm{Benoit}}
\author[
addressref={NYUAD,NYUTSE},
email={[email protected]}
]{\inits{SEGJ}\fnm{Saif Eddin G} \snm{Jabari}}
\address[id=NYUAD]{%
\orgname{New York University Abu~Dhabi},
\street{Saadiyat Island},
\postcode{POB 129188},
\city{Abu~Dhabi},
\cny{UAE}
}
\address[id=NYUTSE]{%
\orgname{New York University Tandon~School~of~Engineering},
\street{Brooklyn},
\postcode{NY 11201},
\city{New~York},
\cny{USA}
}
\end{fmbox}
\begin{abstractbox}
\begin{abstract}
We investigate urban street networks as a whole
within the frameworks of information physics and statistical physics.
Urban street networks are envisaged as evolving social systems subject to
a Boltzmann-mesoscopic entropy conservation.
For self-organized urban street networks,
our paradigm has already allowed us
to recover the
effectively observed
scale-free distribution of roads
and to foresee the distribution of junctions.
The entropy conservation is interpreted as
the conservation of the surprisal of the city-dwellers for their urban street network.
In view to extend our investigations to other urban street networks,
we consider to perturb our model
for self-organized urban street networks
by adding an external surprisal drift.
We obtain the statistics
for slightly drifted self-organized urban street networks.
Besides being practical and manageable,
this statistics separates
the macroscopic evolution scale parameter
from the mesoscopic social parameters.
This opens the door to observational investigations
on the universality of the evolution scale parameter.
Ultimately,
we argue that the strength of the external surprisal drift
might be an indicator for the disengagement of the city-dwellers for their city.
\end{abstract}
\begin{keyword}
\kwd{Urban street networks}
\kwd{Self-organizing networks}
\kwd{Entropic equilibrium}
\kwd{MaxEnt}
\kwd{power law}
\kwd{City science}
\kwd{Interdisciplinary physics}
\kwd{Information physics}
\kwd{Statistical physics}
\kwd{Surprisal}
\kwd{Wholeness}
\kwd{Big data}
\end{keyword}
\end{abstractbox}
\end{frontmatter}
\hypersetup{%
pdfdisplaydoctitle=true,
pdftitle={On the Perturbation of Self-Organized Urban Street Networks},
pdfauthor={J\'er\^ome~Benoit (ORCID: 0000-0003-1226-6757) and Saif~Eddin~Jabari (ORCID: 0000-0002-2314-5312)}
pdfsubject={%
Applied Network Science special issue:
ComplexNetwork 2018
(%
The 7th International Conference on Complex Networks and Their Applications
- Cambridge 11-13 December (United Kingdom)%
)
},
pdfcreator={\LaTeXe{} and its friends},
pdfpagelayout=SinglePage,
pdfpagemode=UseOutlines,
pdfstartpage=1,
pdfhighlight=/O,
pdfview=FitH,
pdfstartview=FitH,
colorlinks=true,
allcolors=RedOrange,
citecolor=RoyalBlue3,
urlcolor=RoyalBlue3,
linkcolor=RoyalBlue3,
bookmarksnumbered=true,
bookmarksopen=true,
bookmarksopenlevel=3,
}
\section*{\addcontentsline{toc}{section}{Introduction}Introduction}
We seek to understand the statistics of urban street networks.
Such an understanding will help urban designers and decision makers
to improve urban policies in general and urban transportation in particular.
In our work we investigate urban street networks as a whole
within the frameworks of information physics \citep{KHKnuth2011}
and statistical physics \citep{ETJaynes1957I,ETJaynes1957II}.
Although
the number of times that a \emph{natural road} crosses another one
has been widely observed to follow a
discrete Pareto probability distribution \citep{AClausetCRShaliziMEJNewman2009}
among self-organized cities \citep{CAlexanderACINAT1965,CrucittiCMSNUS2006,BJiangTSUSNPDC2014},
very few efforts have focused
on deriving
the statistics of urban street networks
from fundamental principles.
Here a natural road (or road) denotes an accepted substitute for a ``named'' street \citep{BJiangTSUSNPDC2014}.
In a recent work \citep{SESOPLUSN},
we introduce a statistical physics model that derives
the statistics of self-organized urban street networks
by applying Jaynes's \emph{maximum entropy principle} \citep{ETJaynes1957I,ETJaynes1957II}
through the information physics paradigm \citep{KHKnuth2011}.
Our approach explicitly emphasizes
the road-junction hierarchy of the initial urban street network
rather than implicitly splitting it accordingly in two dual but distinct networks.
Most of the investigations indeed seek
to cast the initial urban street network into a road-road
topological
network \citep{BJiangTSUSNPDC2014}
and to describe its valence probability distribution.
This holistic viewpoint adopted by the urban community \citep{CAlexanderACINAT1965,RHAtkin1974}
appears to fit well with the mindset of information physics \citep{KHKnuth2011},
which is built upon partial order relations \citep{BADaveyHAPriestleyILO,KHKnuth2011}.
Here
the partial order relation derives from the road-junction incidence relation.
The passage from the road-junction hierarchy to a Paretian coherence occurs
by imposing a Boltzmann-mesoscopic entropy conservation \citep{MMilakovic2001,YDover2004}.
The emerging statistical physics expresses better in terms of surprisal \citep{MTribusTT}.
Surprisal quantifies our astonishment and indecision whenever we face an arbitrary event.
Here surprisal betrays the perception of the city-dwellers for their own urban street network.
Then,
the passage to Paretian coherence simply expresses
the conservation on average of their perception-surprisal.
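For readers less familiar with the notion, the standard information-theoretic definitions behind this statement read (these are general facts, not results specific to our model):

```latex
% Surprisal of an outcome occurring with probability p_i:
I_i = -\ln p_i ,
\qquad
% so the conservation on average of the perception-surprisal,
\langle I \rangle = \sum_i p_i I_i = -\sum_i p_i \ln p_i = H ,
% is precisely the conservation of the Shannon entropy H.
```

That is, holding the mean surprisal fixed is one and the same constraint as holding the Shannon entropy fixed.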
Ultimately,
we are facing a Paretian statistical physics that challenges our Gaussian way of thinking.
The present work explores,
by hand,
how we can extend our Paretian statistical physics model for self-organized urban street networks
to \textit{`nearly'} self-organized urban street networks.
Basically,
we want to proceed by applying arbitrary small perturbations to our model,
and see what we get.
In the remaining,
the paper is organized as follows.
The second section articulates the passage
from raw urban street networks
to idealized self-organized urban street networks
within the framework of information physics.
Next,
the third section shifts to Jaynes's maximum entropy principle.
There,
along treatments and discussions,
we set
the idealized self-organized Shannon-Lagrangian for urban street networks
before we perturb it with an external surprisal drift.
Eventually,
after highlighting the two major practical properties of our theoretical work,
we point to future observational works
around the universality of self-organized urban street networks
as such and as reference.
\begin{figure}[bth!]
\includegraphics[width=0.95\linewidth]{osusn_jsi-figure-01}
\caption{\label{OSUSN/fig/USN/NotionalExample/RawMaterial}%
Notional urban street network%
\endnote{%
Notional example inspired by the \textit{`notional road network'}
in the paper by \citet*{BJiangSZhaoJYin2008}.%
}\label{OSUSN/edn/NotionalExample}
in black-and-white and colourized versions
used all along the paper.
This notional example
is meant to
pattern a
portion of a real-world city map.
The black-and-white version ($\mathrm{g}$) connotes a geometrical viewpoint
that leads to a Poissonian physics.
Whereas
the colourized version ($\mathrm{t}$) evinces a topological perception
that is subject to scale-free behaviours.
}
\end{figure}
\section*{\addcontentsline{toc}{section}{From Apparent Dullness to Living Coherence}From Apparent Dullness to Living Coherence}
\subsection*{\addcontentsline{toc}{subsection}{Structure to Quantify}Structure to Quantify}
\subsubsection*{\addcontentsline{toc}{subsubsection}{From Street-Junction Networks to Road-Road Networks}\label{sec/subsub/AD2LC/S2Q}From Street-Junction Networks to Road-Road Networks}
Everyone has seen black-and-white city maps drawn with lines of the same width
as shown in Figure~\ref{OSUSN/fig/USN/NotionalExample/RawMaterial}$\mathrm{g}$.
Each line intersection represents a street-junction (or junction),
each portion of line between two adjacent junctions may be identified
as a street-segment.
Basically,
an urban street network is composed of junctions bonded by street-segments.
That is,
junctions and street-segments constitute,
respectively,
the immediate nodes and links of a family of real-world networks
known as urban street networks
---
see Figure~\ref{OSUSN/fig/USN/NotionalExample/GeoTopo}.
As such,
these real-world networks are literally street-junction networks.
Construction rules readily impose that each junction ties together at least three street-segments.
On the other hand,
everyday observations tell us that,
anywhere in any city,
any junction joins mostly four street-segments,
occasionally five or six, rarely seven, and very exceptionally more.
Real data analysis shows that
the valence distribution for street-junction networks
essentially follows a Poisson law sharply centred at four \citep{BJiangSZhaoJYin2008,BJiangCLiu2009}.
In this sense,
the complexity of street-junction networks tends to be as trivial as a regular square lattice.
This first attempt to describe urban street environments
---
better known as the \emph{geometrical approach}
---
may appear to be too naive
\citep{BJiangTAUSN2004,BJiangSZhaoJYin2008,BJiangCLiu2009,BJiangTSUSNPDC2014,PortaTNAUSPA2006,MRosvall2005,APMsucci2009}.
As an alternative,
we may consider instead colourized city maps with lines of arbitrary colours
as shown in Figure~\ref{OSUSN/fig/USN/NotionalExample/RawMaterial}$\mathrm{t}$.
We have in mind street maps.
Basically,
a street map of a city has the particularity to exhibit
how the city-dwellers perceive the urban street network of their own city.
Explicitly,
it shows how they have gathered
along the time
the street-segments of their own city to form streets.
Implicitly,
it reveals that we,
as human city-dwellers,
reason in terms of streets rather than street-segments.
But over all,
deeply,
it betrays a topological mindset
that looks on street maps
essentially
for topological information.
Indeed,
to move from one place to another,
we seek for directional information
with the following three characteristic traits:
\newcounter{counterTMSTEnum}\setcounter{counterTMSTEnum}{1}%
(\roman{counterTMSTEnum})\stepcounter{counterTMSTEnum}~%
each pair of successive streets must critically share a common junction
---
whichever it is;
(\roman{counterTMSTEnum})\stepcounter{counterTMSTEnum}~%
each junction in itself plays a secondary role;
(\roman{counterTMSTEnum})\stepcounter{counterTMSTEnum}~%
neither position nor distance is important.
The \emph{topological approach} forces these three characteristic traits
by reducing road maps to
(topological)
road-road networks.
Here a {natural road}
(or {road}, for short)
is an accepted substitute for street
(more precisely, for ``named'' street).
A road-road network reduces roads to nodes and bonds each pair that shares a common junction
---
see Figure~\ref{OSUSN/fig/USN/NotionalExample/GeoTopo}.
Real data analysis shows that
the valence distribution for the road-road network
of a self-organized urban street network
typically follows an inverse-power scaling law,
namely,
a scale-free power law
\citep{BJiangTAUSN2004,CrucittiCMSNUS2006,PortaTNAUSPA2006,PortaTNAUSDA2006,BJiangATPUSN2007,BJiangSZhaoJYin2008,BJiangTSUSNPDC2014}.
This is scale-freeness.
We have a slight grasp of scale-freeness
for an urban street network
whenever we apprehend that
only a few streets cross a large number of them,
several streets cross an intermediate number of them,
and very many streets cross a small number of them.
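For concreteness, the scale-free behaviour invoked here is the inverse-power form already mentioned for road-road valence distributions, whose defining property (a general fact about power laws, not a result specific to the networks discussed here) is shape invariance under rescaling of the valence $k$:

```latex
P(k) \propto k^{-\lambda} ,
\qquad
P(ck) = c^{-\lambda}\, P(k)
\quad \text{for any scale factor } c > 0 .
```

In this sense no characteristic valence singles itself out, which is what the term scale-free conveys.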
As a matter of fact,
by contrast to street-junction networks,
road-road networks are subject to complex network behaviours.
Thusly,
the topological approach appears far more pertinent
than the geometrical one
for at least two reasons.
Firstly,
the topological description unveils that
urban street networks underlie complex behaviours generally observed in real complex networks
\citep{BJiangSZhaoJYin2008,BJiangCLiu2009,BJiangTSUSNPDC2014,PortaTNAUSPA2006,MRosvall2005,APMsucci2009,APMsucci2016};
the complexity induced by the geometrical description is trivial \citep{BJiangSZhaoJYin2008,BJiangCLiu2009}.
Secondly,
the topological approach permits us to isolate a category of real urban street networks that
shows evidence of a \textit{`pure'} scaling behaviour;
the geometrical approach renders all urban street networks equally \textit{`boring'} \citep{BJiangSZhaoJYin2008}.
This idealized category of urban street networks may serve as a reference from which
any general urban street network deviates.
\begin{figure}[bth!]
\includegraphics[width=0.95\linewidth]{osusn_jsi-figure-02}
\caption{\label{OSUSN/fig/USN/NotionalExample/GeoTopo}%
Geometrical versus topological approaches for urban street networks:
a four-step visual construction of their respective abstract networks.
Each construction is performed on the notional sample exhibited
in Figure~\ref{OSUSN/fig/USN/NotionalExample/RawMaterial}.
The left four-step sequence ($\mathrm{g}_{1}$)--($\mathrm{g}_{4}$)
and its right counterpart ($\mathrm{t}_{1}$)--($\mathrm{t}_{4}$)
sketch for this sample
the geometrical and topological abstract network constructions,
respectively.
At Step~$1$, street-segments and roads are identified:
the street-segments are labelled with indexed $s$ and coloured in distinct pallid colours;
the roads are labelled with indexed $r$ and coloured in distinct vivid colours.
Meanwhile,
the junctions and the impasses are coloured in grey and labelled with indexed $j$ and $i$, respectively.
In Subfigure~$\mathrm{g}_{2}$,
the extended junctions $j_{\ast}$ and $i_{\ast}$ and the street-segments $s_{\ast}$
spontaneously become nodes and edges, respectively.
In Subfigure~$\mathrm{t}_{2}$,
each road $r_{\ast}$ is reduced to a node
and each road-node pair $\{r_{\ast},r_{\star}\}$ is linked
whenever $r_{\ast}$ and $r_{\star}$ share
at least
a common junction.
At Step~$3$, the raw material is being dissolved to highlight the emerging abstract networks.
Finally, at Step~$4$,
the resulting abstract networks are rearranged to stress their relevant traits:
the size of each node is proportional to its valence;
the impasses $i_{\ast}$ are neglected because they are free ends rather than nodes;
the road-node $r_{h}$ was flipped to avoid a confusing edge crossing;
and so forth.
}
\end{figure}
\subsubsection*{\addcontentsline{toc}{subsubsection}{Road-Road Networks Mask Road-Junction Partial Orders}Road-Road Networks Mask Road-Junction Partial Orders}
Even though the topological approach leads to precious observations,
it remains mostly a descriptive tool.
The topological approach does not provide any explanation;
it is not concerned with the underlying principles
by which urban street networks emerge.
A \emph{structural approach} that does not bypass street-junctions
(or road-junctions)
allows us to establish a statistical physics foundation
for the \textit{`pure'} scaling behaviour
as effectively observed among self-organized urban street networks \citep{SESOPLUSN}.
It is fair to add that the structural approach may lead to alternative foundations,
but also that, unlike the topological approach,
it does not risk missing the \textit{`true structure'}
of urban street networks
by forcing the three above topological characteristic traits
a bit too early.
Here
urban street networks are envisioned as a whole
where roads literally tie together through junctions.
To begin with,
we represent the ties by an incidence relation
that gathers for each road all junctions through which it passes \citep{BJiangSZhaoJYin2008}
as exemplified in Table~\ref{OSUSN/tab/USN/NaturalRoads/IncidenceRelation}.
Then,
we interpret this road-junction incidence relation
as an object/attribute relation
for which any road acts as an object and any junction as an attribute
\citep{RHAtkin1974,BADaveyHAPriestleyILO,YSHoTPP1982D}.
Eventually,
by invoking the Formal Concept Analysis (\textsc{FCA}) paradigm,
this change of perspective allows us to establish bijectively a partial-order relation
\citep{BADaveyHAPriestleyILO,YSHoTPP1982D}.
In other words,
every urban street network
is subject to and bijectively representable by a partial-order.
\begin{table}[h!]
\colorlet{tblOPSOUSNGreyXCol}{lightgray}
\providecommand{\tblOPSOUSNCross}{$\CIRCLE$}%
\let\tblOPSOUSNGreyX\tblOPSOUSNCross
\providecommand{\tblOPSOUSNAntiX}{{\tiny{$\cdot$}}}%
\settowidth{\tabcolsep}{$\:$}
\caption{\label{OSUSN/tab/USN/NaturalRoads/IncidenceRelation}%
Road-junction incidence dot-chart
associated to the colourized notional urban street network
introduced in Figure~\ref{OSUSN/fig/USN/NotionalExample/RawMaterial}
with the labelling chosen in Figure~\ref{OSUSN/fig/USN/NotionalExample/GeoTopo}.
Here the incidence relation is represented as a Boolean array that stores its Boolean values:
a big dot \tblOPSOUSNCross\ stands for \texttt{true}, a tiny dot \tblOPSOUSNAntiX\ for \texttt{false};
each row represents a road $r_{\ast}$, each column a junction $j_{\ast}$;
$\fcaIR$ denotes the incidence relation.
Incidence relations are concretizations of object-attribute relations.
Here the objects are the roads $r_{\ast}$ while the attributes are the junctions $j_{\ast}$.
}
\begin{tabular}{l|ccccccccccccccccc}%
$\fcaIR$ &
$j_{1}$ & $j_{2}$ & $j_{3}$ & $j_{4}$ & $j_{5}$ & $j_{6}$ & $j_{7}$ & $j_{8}$ &%
$i_{1}$ & $i_{2}$ & $i_{3}$ & $i_{4}$ & $i_{5}$ & $i_{6}$ & $i_{7}$ & $i_{8}$ & $i_{9}$ \\
\hline
$r_{a}$ & \tblOPSOUSNCross & \tblOPSOUSNAntiX & \tblOPSOUSNCross & \tblOPSOUSNCross &%
\tblOPSOUSNCross & \tblOPSOUSNAntiX & \tblOPSOUSNCross & \tblOPSOUSNCross &%
\tblOPSOUSNGreyX & \tblOPSOUSNGreyX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{b}$ & \tblOPSOUSNAntiX & \tblOPSOUSNCross & \tblOPSOUSNCross & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNGreyX & \tblOPSOUSNGreyX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{c}$ & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNCross &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNGreyX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{d}$ & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNCross & \tblOPSOUSNCross & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{e}$ & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNCross & \tblOPSOUSNCross & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNGreyX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{f}$ & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNCross &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNGreyX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{g}$ & \tblOPSOUSNCross & \tblOPSOUSNCross & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX \\
$r_{h}$ & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNCross & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX &%
\tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNAntiX & \tblOPSOUSNGreyX & \tblOPSOUSNGreyX \\
\end{tabular}
\end{table}
More interestingly,
any partial-order can be represented by an abstract ordered structure
known as Galois lattice \citep{BADaveyHAPriestleyILO,YSHoTPP1982D}.
In general,
a Galois lattice organizes itself in layers
with respect to its partial-order,
so that it can give rise to sympathetic graphical representations
called Hasse diagrams \citep{BADaveyHAPriestleyILO}.
The Galois lattice corresponding to the incidence relation in Table~\ref{OSUSN/tab/USN/NaturalRoads/IncidenceRelation}
is represented by a Hasse diagram in Figure~\ref{OSUSN/fig/USN/NotionalExample/GL/HasseDiag}.
If we assume that two roads cross each other only once,
it appears then that
urban street networks reduce to intuitive two-layer Galois lattices:
the roads and the junctions make up the lower nontrivial layer and the upper nontrivial layer, respectively;
the \textit{`imply'} ordering relation
(or join operator)
is ``passing through'' (or ``crossing at'').
Figure~\ref{OSUSN/fig/USN/NotionalExample/GL/HasseDiag} exhibits clearly this property.
Roads that cross each other more than once form loops.
Since such loops are rare and mostly not spontaneous,
for the sake of simplicity and unless otherwise specified,
the remainder will consider sets of roads free of such loops.
Distributivity is an important property of Galois lattices \citep{BADaveyHAPriestleyILO}.
In particular,
for any finite Galois lattice,
distributivity allows us to claim that the elements of the first nontrivial layer
are the join-irreducible elements of the Galois lattice;
that is,
each upper element can be expressed as
a join chain composed with elements of the first nontrivial layer
---
while no element of the first nontrivial layer can be decomposed.
In our context,
distributivity corresponds to the intuition that
any junction is a crossing of only two roads.
Therefore,
any junction that joins more than two roads
renders the underlying Galois lattice nondistributive.
However,
first,
as we have seen above,
junctions mostly join two roads.
Second,
any junction that joins at least three roads can be replaced by a roundabout
so that there remain only junctions that join at most two roads.
For these two reasons,
we may qualify as \emph{canonical}
any urban street network whose junctions effectively join only two roads.
\begin{figure}[bth!]
\includegraphics[width=0.975\linewidth]{osusn_jsi-figure-03}
\caption{\label{OSUSN/fig/USN/NotionalExample/GL/HasseDiag}%
Road-junction Galois lattice
associated to the colourized notional urban street network
introduced in Figure~\ref{OSUSN/fig/USN/NotionalExample/RawMaterial}
with the labelling chosen in Figure~\ref{OSUSN/fig/USN/NotionalExample/GeoTopo}.
This Galois lattice is obtained by applying the Formal Concept Analysis (\textsc{FCA}) paradigm
to the incidence relation $\fcaIR$ whose chart representation is given
in Table~\ref{OSUSN/tab/USN/NaturalRoads/IncidenceRelation}.
This construction is one-to-one.
A Galois lattice is an algebraic structure that underlies a partial order relation $\preccurlyeq$ and two algebraic operators,
a join operator $\vee$ and a meet operator $\wedge$.
The partial order relation can be interpreted as an extended logical imply relation $\rightarrow$.
The arrows in the diagram inherit this interpretation.
For \textsc{FCA} lattices,
each element is a pair of sets $[R,J]$ where $R$ is a set of objects and $J$ a set of attributes.
Here the roads $r_{\ast}$ are the objects whose attributes are junctions $j_{\ast}$ and impasses $i_{\ast}$
(see Table~\ref{OSUSN/tab/USN/NaturalRoads/IncidenceRelation}).
Because the roads $r_{\ast}$ do not cross each other more than once,
the Galois lattice takes an intuitive two-layer form.
Indeed,
the join-irreducible elements $[\{r_{\ast}\},J]$ and the meet-irreducible elements $[R,\{j_{\ast}\}]$
readily identify themselves with their road $r_{\ast}$ and their junctions $j_{\ast}$, respectively.
So, the roads $r_{\ast}$ and the junctions $j_{\ast}$ immediately form, respectively,
the lower and upper nontrivial layers of the Galois lattice.
This also gives meaningful and intuitive interpretations to
the partial order relation $\preccurlyeq$ and to the operators $\vee$ and $\wedge$:
${r_{a}}\preccurlyeq{j_{7}}$
(or ${r_{a}}\rightarrow{j_{7}}$)
reads ``road $r_{a}$ passes through junction $j_{7}$'' or ``junction $j_{7}$ is along road $r_{a}$'';
${r_{a}}\vee{r_{b}}={j_{3}}$ reads ``roads $r_{a}$ and $r_{b}$ join at junction $j_{3}$'';
${j_{3}}\wedge{j_{7}}={r_{a}}$ reads ``junctions $j_{3}$ and $j_{7}$ meet road $r_{a}$''.
Each colourized arrow in the diagram bears the colour of its road.
The top element $\top$ is the urban street network as a whole,
while the bottom element $\bot$ is its absurd counterpart,
emptiness or the absence of urban street network.
}
\end{figure}
Furthermore,
it is noticeable that
the underlying Galois lattice proves
not only to reduce bijectively but also to reflect pertinently
the involved topological complexity.
Indeed,
each underlying Galois lattice assigns
a clear primary role to roads
and a clear secondary role to junctions
so that the three topological characteristic traits are valorized as they should be:
roads imply junctions;
roads are join-irreducible (or just irreducible, for short),
while junctions are join-reducible (reducible) to roads;
junctions are meet-irreducible,
while roads are meet-reducible to junctions.
[%
As an aside,
for roads that form loops with each other,
the \textsc{FCA} paradigm simply creates abstractions of roads and junctions:
roads and junctions may then be defined,
respectively,
as the join-irreducible and meet-irreducible elements
of the involved road-junction Galois lattice.%
]
By now,
most of us should recognize road-road networks as
either zeroth order approximations or projections
of road-junction Galois lattices
by employing either an analytic analogy or a geometric one,
respectively.
To summarize,
any urban street network bijectively reduces
its topological complexity
to an essentially distributive two-layer Galois lattice,
while its canonicalization renders the latter plainly distributive.
\subsubsection*{\addcontentsline{toc}{subsubsection}{Partial Orders Are Algebraic Structures}Partial Orders Are Algebraic Structures}
Actually,
Galois lattices are not only ordered structures but also algebraic structures
\citep{BADaveyHAPriestleyILO,YSHoTPP1982D}.
To put it another way,
the join operator
(or partial order)
not only permits us
to construct the entire Galois lattice from its join-irreducible elements
\citep{BADaveyHAPriestleyILO,YSHoTPP1982D}
but also
to consistently assign numbers to its elements
so that the algebra of these numbers
reflects the algebra
of the Galois lattice
while their order respects its partial-order
\citep{KHKnuth2011,KHKnuth2008,KHKnuth2009,KHKnuth2005,KHKnuth2014}.
The evaluation of partially ordered sets
(or Galois lattices)
is the main object of the theory of information physics
\citep{KHKnuth2011,KHKnuth2008,KHKnuth2009,KHKnuth2005,KHKnuth2014}.
Because
quantifying roads and their concomitant junctions enables us
to confront models motivated by principles against observed data,
the structural approach frees us from restricting ourselves
to performing sophisticated but nevertheless blind data analysis.
Given this,
the structural approach may no more appear as a gratuitous {\ae}sthetic step
to the most skeptical readers.
In brief,
the structural approach is a game changer.
In fact,
three Galois lattices are getting involved \citep{KHKnuth2008,KHKnuth2009}.
Let us now,
in order to forge for ourselves a better comprehensive picture,
succinctly describe them
and their respective \emph{valuation functions}.
The task is relatively easy since we are already familiar
with the Galois lattice of our system,
with the valuation function of the first extra Galois lattice,
and almost with the valuation function of the last Galois lattice.
Of course,
our first and foremost Galois lattice is the system itself,
so that our
unique
unknown valuation function $\glVa$ is simply meant to describe
the physics of our system.
Specifically,
the unknown valuation function $\glVa$ assigns a positive real number
indiscriminately
to all roads and junctions
so that each assigned positive real number characterises
the physical state of the involved road or junction.
Notice that valuation functions must be positive for consistency reasons.
As generic
Galois lattice
components,
roads and junctions organize themselves in downsets.
A downset is a set of elements which contains all the elements implying each of them
\citep{BADaveyHAPriestleyILO}.
If we mentally sketch our urban street network randomly by roads and junctions
under the unique rule that a junction can be dotted only when all of its joining roads are already lined,
then each downset represents a state of our mental picture
---
and vice versa.
The set of all downsets ordered according to set inclusion $\subseteq$ forms a distributive Galois lattice,
which is called the \emph{state space}.
The state space is an auxiliary Galois lattice which merely helps us to introduce the next relevant one.
The join-irreducibles of the state space are the downsets associated to every road or junction,
that is,
the singleton sets composed of one road and the sets composed of one junction together with all of its joining roads.
These join-irreducibles generate the state space with set union $\cup$ as join-operator.
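A small computational sketch may help here; on a hypothetical network of three roads and one junction (labels are illustrative only), the state space is generated from these join-irreducibles by closing them under set union:

```python
from itertools import combinations

# Hypothetical join-irreducible downsets of the state space:
# one singleton per road, and one set per junction with its joining roads.
irreducibles = [
    frozenset({"r"}), frozenset({"s"}), frozenset({"t"}),
    frozenset({"r", "s", "j(r,s)"}),   # junction j(r,s) with roads r and s
]

# Generate the state space: close the irreducibles under set union.
states = {frozenset()}
for k in range(1, len(irreducibles) + 1):
    for combo in combinations(irreducibles, k):
        states.add(frozenset().union(*combo))
```

Note that the junction never appears without its two joining roads, in accordance with the mental-sketching rule above.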
Nevertheless,
in reality,
given city-dwellers may not know precisely which state
their mental picture of the urban street network represents.
Even so,
they may have some information that exclude some states,
but not others.
Therefore,
the mental pictures of city-dwellers are more often sets of potential states than single states known with certainty.
A set of potential states is called a \emph{statement}.
The set of all possible statements is simply the powerset generated from the set of all states.
Once ordered according to set inclusion $\subseteq$,
the set of all statements becomes a distributive Galois lattice
whose join-irreducibles are the states.
This Galois lattice is known as the \emph{hypothesis space}.
Within the hypothesis space,
statements follow a logical deduction order as
each statement literally implies
(or is included in)
a statement with certainty.
The valuation function associated to any hypothesis space is recognized
to be a probability distribution.
So,
we are already very familiar with the algebra satisfied by the valuation functions associated to hypothesis spaces.
Among valuation functions associated to Galois lattices,
this algebra can be shown to be the only one possible
by imposing natural algebraic consistency restrictions.
Let us digress briefly to bring our attention back to the system valuation function $\glVa$:
as an immediate consequence,
for canonical urban street networks,
the evaluation $\glVa(j(r,s))$ of a junction $j(r,s)$ joining a pair of roads $(r,s)$
must be the sum of the evaluations $\glVa(r)$ and $\glVa(s)$ of the joined roads $r$ and $s$,
respectively;
we have
\begin{equation}\label{eq/USN/Evaluation/constraint/addition}
\glVa(j(r,s)) = \glVa(r) + \glVa(s)
.
\end{equation}
End of digression.
Because the hypothesis space is essentially a representation of the system,
it is reasonable to claim that
its valuation function $\Pr$ must be related to the valuation function $\glVa$ of our system,
that is, to the physics of our system.
Meanwhile,
Rota's theorem \citep[Thm.~1, Cor.~2]{GCRotaOCEC1971} asserts that,
for a finite distributive Galois lattice,
the valuation function is perfectly determined by the arbitrary values taken by its join-irreducibles.
In other words,
the valuation function $\Pr$ does not depend on the very structure of the hypothesis space;
rather,
it depends on the arbitrary values assigned to the join-irreducibles of the hypothesis space,
which are the states.
Accordingly,
the probability assigned to each state has to be
an arbitrary function of its evaluation by the valuation function $\glVa$;
this is a composition.
This arbitrary function interprets itself as a \textit{weight function} $\glWg$.
We read
\begin{equation}\label{eq/USN/HypothesisSpace/Evaluation/composition/Pr}
\Pr = \glWg \circ \glVa
.
\end{equation}
The weight function $\glWg$ constitutes our second unknown function.
The construction of the hypothesis space from the state space
corresponds technically to an exponentiation \citep{BADaveyHAPriestleyILO}.
The exponentiation of the hypothesis space brings up an \emph{inquiry space}.
The inquiry space is a distributive Galois lattice whose elements are \emph{questions}.
Thus, by construction, any question is a set of statements that answer it.
The quantification of the inquiry space leads to a measure,
coined \emph{relevance}.
In fact,
the inquiry space is Carrollian in the sense that it contains both
vain
(and fanciful)
and real questions.
A response to a real question is a true state of our system.
To wit, a real question permits us to know the configuration of our system exactly and without ambiguity.
A vain question can only lead to partial or ambiguous knowledge of the configuration.
The join chain of all the join-irreducible questions is the smallest real question;
it is called the \emph{central issue}.
The questions above the central issue form a Galois sublattice that contains all and only real questions.
The join-irreducible elements of the real Galois sublattice appear to partition their answers.
This property is reflected in the choice of the relevance
by coercing the relevance of a partition question to depend
on the probability of the greatest statements of its partitions.
This choice imposes the relevance to satisfy
the four natural properties of entropies \citep{JAczelBForteCTNg1974}.
This means that relevance is a generalized measure of information
with Shannon entropy as basis \citep{JAczelBForteCTNg1974}.
This is one of the major results of information physics.
The relevance of the central issue identifies itself with the entropy.
Therefore,
for canonical urban street networks,
the functional entropy $\glEta[\glVa,\glWg]$
takes the form
\begin{equation}\label{eq/USN/StructureEntropy}
\glEta[\glVa,\glWg] =
\!\sum_{r}\left(\glHm\circ\glWg\right)\left(\glVa(r)\right)
+%
\!\!\!\sum_{j(r,s)}\!\left(\glHm\circ\glWg\right)\left(\glVa(r)\!+\!\glVa(s)\right)
\end{equation}
where the first summation runs over the roads~$r$
and the second one over the junctions $j(r,s)$ joining the pair of roads $(r,s)$,
while $\glHm\colon{x}\mapsto-x\ln{x}$ is
the Shannon entropy function.
We will keep expressing information measures in \textsf{nat} units.
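As a hedged numerical illustration of the functional entropy \eqref{eq/USN/StructureEntropy} — the tiny canonical network, the valuations, and the weight function below are purely illustrative choices, not fitted quantities:

```python
import math

def h(x):
    """Shannon entropy function h(x) = -x ln(x), in nats."""
    return -x * math.log(x)

# Illustrative positive valuations of three roads (arbitrary numbers).
v = {"r": 0.10, "s": 0.15, "t": 0.05}
# Illustrative canonical junctions, each joining exactly two roads.
junctions = [("r", "s"), ("s", "t")]

# Illustrative weight function; any positive choice would do for the sketch.
w = lambda x: x / 2.0

# Functional entropy: roads contribute h(w(v(r))),
# junctions j(r, s) contribute h(w(v(r) + v(s))).
eta = (sum(h(w(v[r])) for r in v)
       + sum(h(w(v[r] + v[s])) for r, s in junctions))
```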
For further details on the theory of information physics,
we refer the reader to the work of \citet{KHKnuth2011,KHKnuth2008,KHKnuth2009,KHKnuth2005,KHKnuth2014}.
For now,
we have enough material to step forward.
\subsection*{\addcontentsline{toc}{subsection}{Quantify to Organize}Quantify to Organize}
\subsubsection*{\addcontentsline{toc}{subsubsection}{From Galoisean Hierarchy to Paretian Coherence}\label{sec/subsub/Q2O/GH2PC}From Galoisean Hierarchy to Paretian Coherence}
Network data analysis shows that city-dwellers have
a topological perception of their urban street networks.
On the other hand,
the topology of urban street networks hides
a simple road-junction partial order that
bijectively reduces to intuitive two-layer Galois lattices.
The Galoisean hierarchy is intuitive in the sense that its join-operator expresses
our intuition that two roads join to form a junction.
Nonetheless this intuitive hierarchy leads to
two layers whose cardinalities might be perceived as incommensurable.
Typical big cities count far more than a handful of roads and junctions.
The apparent simplicity of the underlying Galois lattices is
the result of an algorithmic thought.
Nonetheless the Galoisean hierarchy is three-fold.
While the ordering and algebraic perspectives are respectively structural and operational,
the whole is measurable.
The underlying algebraic structure leads unambiguously to a unique quantification modulo
two unknown functions that we are free to choose.
These two unknown functions are of different nature.
The valuation function $\glVa$ assigns to each road or junction of the urban street network
a numerical quantity that characterizes its physical state.
The weight function~$\glWg$,
or more precisely its composition with the valuation function~$\glVa$
as expressed in \eqref{eq/USN/HypothesisSpace/Evaluation/composition/Pr},
allows us to assign to each mental picture of the urban street network
a numerical quantity that characterizes its perception among the city-dwellers.
This assignment is simply the probability distribution $\Pr$ of our system.
Ultimately all these mental pictures are surrounded by all sorts of questions
whose pertinence can be measured.
The relevance of the most pertinent question is better known as the entropy of the system.
The most plausible probability $\Pr$,
that is,
the quantification which tends to represent at best
the perception of the city-dwellers for their own urban street network,
must also be the most relevant one.
In other words,
the most plausible probability $\Pr$ must maximize
the functional entropy \eqref{eq/USN/StructureEntropy} of their urban street network.
This is nothing other than Jaynes’s maximum entropy principle
\citep{KHKnuth2008,ETJaynes1957I,ETJaynes1957II,HKKesava2009,JNKapurHKKesava1992,ETJaynes1978SYLI}.
Thus,
our physical content shifts from an algorithmic order to a fluctuating organization.
Roads and junctions indiscriminately yield \emph{our initial ignorance} \citep{ETJaynes1978SYLI}.
The most we can tell is that roads and junctions are mesoscopic systems
with a finite number of possible configurations $\Omega$.
Besides,
we must assume \emph{our complete ignorance} about their respective inner worlds.
This means that,
to our eyes at least,
all their possible configurations are equally likely.
Thus,
roads and junctions are Boltzmannian mesoscopic systems.
Therefore,
the probability distribution $\Pr$ reduces to a function
that depends only on the number of possible configurations $\Omega$.
Meanwhile,
the functional entropy \eqref{eq/USN/StructureEntropy} simplifies
to take the more sympathetic form
\begin{equation}\label{eq/USN/StructureEntropy/Pr}
\glEta[\Pr] =
-\sum_{\Omega}\Pr(\Omega)\ln\left(\Pr(\Omega)\right)
.
\end{equation}
On the other hand,
here,
self-organized urban street networks are idealized as scale-free systems,
\textit{viz.},
as systems exhibiting no typical number of configurations but rather a typical scale $\lambda$.
Thus,
as suitable characterizing moments
to invoke Jaynes’s maximum entropy principle \citep{ETJaynes1957I,ETJaynes1957II,HKKesava2009},
we must discard any classical moment and may consider logarithmic moments instead.
It appears that imposing the first logarithmic moment
\begin{equation}\label{eq/USN/ConfigurationSpace/Pr/FirstLogaritmicMoment/rhs}
\sum\Pr(\Omega)\ln\Omega
\end{equation}
as sole characterizing constraint gives rise to
the scale-free probability distribution
\begin{equation}\label{eq/USN/ConfigurationSpace/Pr/approx}
\Pr(\Omega)\propto\Omega^{-\lambda}
.
\end{equation}
A practical normalization of this probability distribution leads to
the discrete Pareto probability distribution \citep{AClausetCRShaliziMEJNewman2009}.
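A brief numerical sketch of the resulting distribution — the evolution scale $\lambda$ is illustrative, and a finite cutoff stands in for the practical normalization:

```python
# Discrete Pareto (power-law) distribution Pr(Omega) ∝ Omega^(-lambda),
# normalized here over a finite range of configuration numbers.
lam = 2.0                  # illustrative evolution scale
omegas = range(1, 10001)   # finite cutoff for the sketch
z = sum(o ** -lam for o in omegas)   # normalization (truncated zeta(lam))
pr = {o: o ** -lam / z for o in omegas}

# Scale-freeness: the ratio Pr(a*Omega)/Pr(Omega) depends only on a.
ratio_1 = pr[20] / pr[10]
ratio_2 = pr[200] / pr[100]
```

The equality of the two ratios is the hallmark of the absence of a typical number of configurations.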
To sum up:
the passage from the underlying Galoisean hierarchy
to an underlying Paretian coherence occurs
by invoking
Jaynes’s maximum entropy principle
with the first logarithmic moment as sole characterizing moment
and with our complete ignorance as initial knowledge condition.
For every road or junction having $\Omega$ possible configurations,
the Boltzmann entropy $\ln\Omega$ measures nothing but our complete ignorance
on the configuration effectively taking place.
So,
our characterizing restriction simply claims that
an idealized self-organized urban street network evolves
by preserving our complete ignorance on average.
This characterizing scheme that induces a Paretian coherence has been interpreted as
some evolution-based mechanism
to maintain some opaque internal order \citep{MMilakovic2001,YDover2004}.
Note furthermore that \textit{``complete ignorance''} has rather remained,
so far,
a technical term.
A more intuitive interpretation might be considered instead.
If the Boltzmann entropy $\ln\Omega$ is interpreted as the \emph{surprisal} that
city-dwellers associate to every road or junction having $\Omega$ possible configurations,
then $\sum\Pr(\Omega)\ln\Omega$ becomes the amount of surprisal on average that
they associate to their own urban street network.
Surprisal
(or \emph{surprise})
$\stSu=-\ln\circ\Pr$
was introduced by \citet{MTribusTT} as a measure to quantify our astonishment and our indecision
whenever we face any arbitrary event.
Once adapted to our context,
surprisal somehow betrays the perception of the city-dwellers for their own urban street network.
Therefore,
the above Paretian characterizing constraint simply asserts that
an idealized self-organized urban street network evolves
by preserving on average the perception that its city-dwellers share for it.
This assertion renders city-dwellers the unconscious
but nevertheless active actors of their own urban street networks,
not the passive subjects of an obscure technical machinery.
Along this line,
the scale parameter $\lambda$ of the underlying scale-free probability distribution \eqref{eq/USN/ConfigurationSpace/Pr/approx}
interprets itself as an \emph{evolution scale}.
\subsubsection*{\addcontentsline{toc}{subsubsection}{Untangling the Underlying Coherence}Untangling the Underlying Coherence}
The underlying coherence,
Paretian or not,
does not reveal itself to city-dwellers as-is.
Technically,
we must still untangle the corresponding weight function $\glWg$ and valuation function $\glVa$
with respect to the underlying algebraic structure,
namely,
with respect to
composition \eqref{eq/USN/HypothesisSpace/Evaluation/composition/Pr}
and addition rule \eqref{eq/USN/Evaluation/constraint/addition}.
Practically,
we need a mesoscopic model to count the number of configurations $\Omega$ associated to every road or junction.
For the reason that
roads and junctions are likely driven by social interactions,
the mesoscopic model must typify social interactions.
To fulfill this purpose,
it appears convenient to adopt and adapt
the network of intraconnected agents model introduced by \citet{YDover2004}
for the distribution of cities in countries.
Thereby,
each road or junction becomes a hive of agents that connect to each other.
As agents,
we may consider the inhabitants that somehow
participate in the lively activity of roads:
drivers, cyclists, pedestrians,
suppliers,
institutional agents,
residents,
and so forth.
For each road $r$,
the number of agents
is assumed to be asymptotically proportional to the number of junctions $n_{r}$
that $r$ crosses
--- the ratio $A$ being constant and sufficiently large.
This expresses nothing but the extensive property of roads.
Here the very existence of every road relies
on the ability for each of its agents
to maintain a crucial number of intraconnections
which is crudely equal to a constant number $\plNVCR$ \citep{YDover2004,RIMDunbarSShultz2007},
called the \emph{number of vital connections} for roads.
The layout of these intraconnections is implicitly associated to
the internal order within each road,
while the total number of possible layouts
for each road
is simplistically considered as
its number of configurations \citep{YDover2004}.
\begin{subequations}\label{eq/USN/AgentBasedModel/NumberOfStates}
Therefore,
for each road~$r$,
the number of configurations $\Omega_{r}$ yields
\begin{equation}\label{eq/USN/AgentBasedModel/NumberOfStates/NaturalRoads}
\Omega_{r}
= \plNStR\left(n_{r}\right)
\simeq
\binom{\tfrac{1}{2}A\,n_{r}\left(A\,n_{r}-1\right)}{\plNVCR}
\simeq
\frac{A^{2\plNVCR}}{2^{\plNVCR}{\plNVCR}!}\,n_{r}^{2\plNVCR}
.
\end{equation}
As concerns each junction,
continuing along this spirit,
the involved agents are merely
the agents of the two joining natural roads combined.
Nevertheless,
as there is no apparent reason for roads and junctions to experience the same type of internal equilibrium,
we will assume two distinct numbers of vital connections,
$\plNVCR$ and $\plNVCJ$ respectively.
Then the same crude maneuvers give
\begin{equation}\label{eq/USN/AgentBasedModel/NumberOfStates/Junctions}
\Omega_{j(r,s)}
= \plNStJ\left(n_{j}=n_{r}+n_{s}\right)
\simeq
\frac{A^{2\plNVCJ}}{2^{\plNVCJ}{\plNVCJ}!}\,n_{j}^{2\plNVCJ}
.
\end{equation}
\end{subequations}
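For the reader's convenience, the asymptotic simplifications above rest on the elementary estimate
\begin{math}
\binom{N}{k} \simeq N^{k}/k!
\end{math}
for $N \gg k$;
for roads, explicitly,
\begin{equation*}
\binom{\tfrac{1}{2}A\,n_{r}\left(A\,n_{r}-1\right)}{\plNVCR}
\simeq
\frac{\left(\tfrac{1}{2}A^{2}n_{r}^{2}\right)^{\plNVCR}}{{\plNVCR}!}
=
\frac{A^{2\plNVCR}}{2^{\plNVCR}\,{\plNVCR}!}\,n_{r}^{2\plNVCR}
,
\end{equation*}
and similarly for junctions with $\plNVCJ$ in place of $\plNVCR$.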
Therefore,
the valuation function $\glVa$ clearly appears to assign
to each road or junction
the number of its agents,
and
the weight function $\glWg$ to asymptotically count
the number of possible vital intraconnection layouts
---
modulo normalization.
\section*{\addcontentsline{toc}{section}{Self-Organized Urban Street Networks as Reference}Self-Organized Urban Street Networks as Reference}
\subsection*{\addcontentsline{toc}{subsection}{Ideal Self-Organized Urban Street Networks}Ideal Self-Organized Urban Street Networks}
\subsubsection*{\addcontentsline{toc}{subsubsection}{Coherence Based on Boltzmannian Mesoscopic Surprisals}\label{sec/subsub/SOUSN/ISOUSN/derivation}Coherence Based on Boltzmannian Mesoscopic Surprisals}
It is time now to explicitly invoke Jaynes’s maximum entropy principle
for the functional entropy \eqref{eq/USN/StructureEntropy/Pr}
with the first logarithmic moment \eqref{eq/USN/ConfigurationSpace/Pr/FirstLogaritmicMoment/rhs}
as single characterizing constraint.
Promptly,
the corresponding Shannon Lagrangian reads
\begin{multline}%
\label{eq/USN/ShannonLagrangian}
\mathcal{L}\left(\left\{\Pr(\Omega)\right\};\nu,\lambda\right) =
-\sum_{\Omega} \Pr(\Omega)\,\ln\left(\Pr(\Omega)\right)
-\left(\nu-1\right) \left[
\sum_{\Omega} \Pr(\Omega) - 1
\right]
\\
\shoveright{%
\qquad\qquad\quad%
-\lambda \left[
\sum_{\Omega} \Pr(\Omega)\,\ln{\Omega} - {\left\langle{\mathrm{S}}\right\rangle}
\right]
.
}
\end{multline}
The constraint relative to the Lagrange multiplier $\lambda$ compels
to keep constant the first logarithmic moment
\eqref{eq/USN/ConfigurationSpace/Pr/FirstLogaritmicMoment/rhs}
of the probability distribution $\Pr$;
namely,
it imposes
the preservation on average of the amount of surprisal
that city-dwellers perceive for their roads and junctions.
Meanwhile,
the Lagrange multiplier $\nu$ ensures the normalization condition that
the probability distribution $\Pr$ must satisfy.
The constant ${\left\langle{\mathrm{S}}\right\rangle}$ stands for
the mean surprisal at which the system evolves
---
for now it plays a dummy role.
Extremizing expression \eqref{eq/USN/ShannonLagrangian} yields
\begin{equation}\label{eq/USN/ShannonLagrangian/equations}
\frac{\partial\mathcal{L}\left(\left\{\Pr\left(\Omega\right)\right\};\nu,\lambda\right)}{\partial\Pr(\Omega)} =
-\ln\left(\Pr(\Omega)\right)
-\nu
-\lambda\,\ln\Omega
= 0
,
\end{equation}
which immediately leads to the scale-free probability distribution
\begin{equation}\label{eq/USN/ShannonLagrangian/solution/calculus/intermediate}
\Pr(\Omega) =
\frac{\Omega^{-\lambda}}{{e}^{\nu}}
\end{equation}
as previously claimed.
Afterwards,
the normalization condition effortlessly gives us an expression
for the dependent exponential denominator $\exp(\nu)$,
which may be defined as the \emph{partition function} $Z(\lambda)$ of our system;
we have
\begin{equation}
{e}^{\nu} = \sum_{\Omega} \Omega^{-\lambda} \equiv Z(\lambda)
.
\end{equation}
Ultimately,
we write solution \eqref{eq/USN/ShannonLagrangian/solution/calculus/intermediate}
in the more familiar form
\begin{equation}\label{eq/USN/ShannonLagrangian/solution}
\Pr(\Omega) =
\frac{\Omega^{-\lambda}}{Z(\lambda)}
.
\end{equation}
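Incidentally, the maximizing character of this solution can be checked numerically on a finite toy support: any perturbation that preserves both characterizing constraints strictly lowers the entropy. The sketch below is merely illustrative — the support and the value of $\lambda$ are arbitrary choices, not fitted to any urban street network.

```python
# Sketch: numerical check that Pr(Omega) proportional to Omega^-lambda
# maximizes the entropy subject to fixed normalization and fixed <ln Omega>,
# on a finite toy support. All numbers below are illustrative only.

import math

support = list(range(1, 51))   # toy configuration numbers Omega
lam = 2.0                      # toy Lagrange multiplier lambda

Z = sum(w ** -lam for w in support)
p = [w ** -lam / Z for w in support]

def entropy(q):
    return -sum(x * math.log(x) for x in q if x > 0)

# Build a perturbation v with sum(v) = 0 and sum(v * ln Omega) = 0,
# supported on three points, so both characterizing constraints persist.
a, b, c = 0, 9, 29                       # indices into the support
la, lb, lc = (math.log(support[i]) for i in (a, b, c))
vb = (lc - la) / (lb - lc)
vc = -1.0 - vb
v = [0.0] * len(support)
v[a], v[b], v[c] = 1.0, vb, vc

for t in (1e-4, -1e-4):
    q = [pi + t * vi for pi, vi in zip(p, v)]
    assert entropy(q) < entropy(p)       # feasible perturbations lower H
print(entropy(p))
```

The first-order variation vanishes by construction of $v$, so the observed decrease is second order in the perturbation, as expected at a constrained maximum.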
The most plausible probability distribution \eqref{eq/USN/ShannonLagrangian/solution} found above
concerns the underlying coherence of our system.
As such,
this coherence can only be perceived indirectly by the city-dwellers of the urban street network.
The city-dwellers may rather perceive the coherence behind roads and junctions.
Their corresponding statistics are obtained as follows.
Substituting \eqref{eq/USN/AgentBasedModel/NumberOfStates/NaturalRoads}
into \eqref{eq/USN/ShannonLagrangian/solution},
we readily obtain for roads
\begin{subequations}\label{eq/USN/AgentBasedModel/DistributionFunction}
\begin{equation}\label{eq/USN/AgentBasedModel/DistributionFunction/NaturalRoads}
\Pr\left(n_{r}\right) \propto n_{r}^{-2\lambda\plNVCR}
,
\end{equation}
which is a scale-free probability distribution.
Injecting instead \eqref{eq/USN/AgentBasedModel/NumberOfStates/Junctions}
into \eqref{eq/USN/ShannonLagrangian/solution},
then gathering and counting with respect to
the precedent probability distribution \eqref{eq/USN/AgentBasedModel/DistributionFunction/NaturalRoads}
gives for junctions
\begin{equation}\label{eq/USN/AgentBasedModel/DistributionFunction/Junctions}
\Pr\left(n_{j}\right) \propto
\left(
\sum_{j(r,s)} \frac{\left[n_{j}=n_{r}+n_{s}\right]}{\left({n_{r}}{n_{s}}\right)^{2\lambda\plNVCR}}%
\right)
\,{n_{j}}^{-2\lambda\plNVCJ}
,
\end{equation}
\end{subequations}
which is a generalized power law probability distribution;
the summation in parentheses is simply the self-convolution of
the road probability distribution \eqref{eq/USN/AgentBasedModel/DistributionFunction/NaturalRoads}.
The bracket around the equality statement follows Iverson's convention \citep{CONCMATH,DEKnuth1992}:
the bracket has value one whenever the bracketed statement is true, zero otherwise.
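As a concrete illustration of this Iversonian counting, the unnormalized junction statistics can be computed by self-convolving the road statistics on a finite support. In the plain-Python sketch below, the exponents and the minimal valence are placeholders chosen for demonstration only, not fitted values.

```python
# Sketch: junction statistics as the self-convolution of the road statistics.
# Exponents and minimal valence are illustrative placeholders, not fits.

def road_weight(n_r, two_lambda_vr=2.6):
    # Unnormalized road probability: Pr(n_r) ~ n_r^(-2 lambda upsilon_r).
    return n_r ** -two_lambda_vr

def junction_weight(n_j, n_min=4, two_lambda_vr=2.6, two_lambda_vj=1.0):
    # Iverson-bracket sum over pairs (n_r, n_s) with n_r + n_s = n_j,
    # i.e. the self-convolution of the road weights, times the junction factor.
    conv = sum(road_weight(n, two_lambda_vr) * road_weight(n_j - n, two_lambda_vr)
               for n in range(n_min, n_j - n_min + 1))
    return conv * n_j ** -two_lambda_vj

def junction_pmf(n_min=4, n_max=200):
    # Crude normalization over the finite support [2 n_min, n_max].
    support = range(2 * n_min, n_max + 1)
    weights = {n_j: junction_weight(n_j, n_min) for n_j in support}
    total = sum(weights.values())
    return {n_j: w / total for n_j, w in weights.items()}

pmf = junction_pmf()
print(sum(pmf.values()))  # ~1.0 by construction
```

Normalizing over a finite support is, of course, only a crude stand-in for the zeta-function normalizations used in the text.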
The number of junctions $n_{r}$ that a road crosses is essentially the number of roads
with which it shares a common junction,
namely,
its valence number in the corresponding road-road network.
So the probability distribution \eqref{eq/USN/AgentBasedModel/DistributionFunction/NaturalRoads} predicts
the valence distribution for roads that has been widely observed empirically among self-organized cities
\citep{BJiangTAUSN2004,CrucittiCMSNUS2006,PortaTNAUSPA2006,PortaTNAUSDA2006,BJiangATPUSN2007,BJiangSZhaoJYin2008,BJiangTSUSNPDC2014}.
A similar argument dually applies for junctions.
However,
to the best of our knowledge,
the valence distribution for junctions has attracted no attention until now
---
except in our recent investigations.
For practical data analysis \citep{AClausetCRShaliziMEJNewman2009},
we need to assume that
the number of junctions per road ${n}_{r}$ spans from some minimal positive value $\underline{n}_{r}$.
Then the normalization of probability distributions \eqref{eq/USN/AgentBasedModel/DistributionFunction}
can be performed elegantly
by using natural generalizations of known special functions.
First,
the probability for a road to cross ${n}_{r}$ junctions becomes
\begin{subequations}\label{OSUSN/eq/USN/PDF}
\begin{equation}\label{OSUSN/eq/USN/PDF/NaturalRoads}
\Pr\left({n}_{r}\right) =
\frac{{n}_{r}^{-2\lambda\upsilon_{r}}}{\sfhzeta\left(2\lambda\upsilon_{r};\underline{n}_{r}\right)}
,
\end{equation}
where
\begin{math}
\sfhzeta\left(\alpha;a\right) =
\sum_{{n}=0}^{\infty} {(a+n)}^{-\alpha}
\end{math}
is the generalized
(or Hurwitz-)
zeta function \citep[\S~25.11]{HBMF}.
Second,
the probability
for a junction to see ${n}_{j}$ junctions through its joining roads
reads
\begin{equation}\label{OSUSN/eq/USN/PDF/Junctions}
\Pr\left({n}_{j}\right) =
\frac{%
\sum_{{n}=\underline{n}_{r}}^{{n}_{j}-\underline{n}_{r}}
\left[{n}\left({n}_{j}-{n}\right)\right]^{-2\lambda\upsilon_{r}}
\,%
{n}_{j}^{-2\lambda\upsilon_{j}}
}{%
\sfWitten\left(2\lambda\upsilon_{r},2\lambda\upsilon_{r},2\lambda\upsilon_{j};\underline{n}_{r}\right)%
}
,
\end{equation}
where
\begin{math}
\sfWitten\left(\alpha,\beta,\gamma;\underline{n}\right) =
\sum_{{m},{n}\geqslant\underline{n}}
{m}^{-\alpha} {n}^{-\beta} \left(m+n\right)^{-\gamma}
\end{math}
is the two-dimensional
generalized
(or Hurwitz-)
Mordell-Tornheim-Witten zeta function \citep{JMBorweinKDilcher2018}.
\end{subequations}
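For orientation, both normalizing functions can be approximated by brute force: direct truncation, supplemented for $\sfhzeta$ by the leading Euler--Maclaurin tail estimate. The pure-Python sketch below is nowhere near production quality — truncation points and exponents are arbitrary illustrative choices — but it shows the mechanics.

```python
# Sketch: brute-force evaluation of the two normalizing functions.
# zeta_H by truncation plus the leading Euler-Maclaurin tail estimate;
# the Witten sum by plain (and slowly converging) double truncation.
# Exponents and cut-offs are illustrative, not fitted values.

def hurwitz_zeta(alpha, a, n_terms=100000):
    # zeta_H(alpha; a) = sum_{n>=0} (a + n)^(-alpha), for alpha > 1.
    head = sum((a + n) ** -alpha for n in range(n_terms))
    x = a + n_terms
    tail = x ** (1.0 - alpha) / (alpha - 1.0) + 0.5 * x ** -alpha
    return head + tail

def witten_zeta(alpha, beta, gamma, n_min, n_terms=2000):
    # W(alpha, beta, gamma; n_min) = sum_{m,n >= n_min} m^-alpha n^-beta (m+n)^-gamma,
    # crudely truncated with no tail estimate: slow and imprecise.
    return sum(m ** -alpha * n ** -beta * (m + n) ** -gamma
               for m in range(n_min, n_min + n_terms)
               for n in range(n_min, n_min + n_terms))

# Illustrative normalizations (exponents loosely echo the fits discussed below):
Z_road = hurwitz_zeta(2.61, 4)
Z_junction = witten_zeta(2.61, 2.61, -1.3, 4, n_terms=500)
print(Z_road, Z_junction)
```

The slow convergence of the double sum already hints at the numerical bottleneck discussed in the case study that follows.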
As a conclusion,
let us remark that
statistics \eqref{OSUSN/eq/USN/PDF} for an ideal self-organized urban street network
does not separate
the macroscopic parameter $\lambda$
from the mesoscopic ones $\plNVCR$ and $\plNVCJ$
in the sense that,
at best,
we can only estimate the products $\lambda\plNVCR$ and $\lambda\plNVCJ$.
This separation of parameters is critical since it would allow us to distinguish quantitatively
the macroscopic phenomenon of evolution
from the mesoscopic phenomena of social interactions
that take place in urban street networks.
Notice that,
from a qualitative perspective,
two distinct behaviours are anticipated.
The numbers of vital connections $\plNVCR$ and $\plNVCJ$ certainly differ
from one cultural basin to another \citep{YDover2004},
whereas the evolution scale $\lambda$ might transcend cultures \citep{SCALE2017}.
A classical way to separate parameters in physics consists in introducing sufficiently small perturbations.
This is,
in its observational form,
the subject of the next subsection.
\subsubsection*{\addcontentsline{toc}{subsubsection}{Case Study of Central London}\label{sec/subsub/CaseStudy/CentralLondon}Case Study of Central London}
Figure~\ref{OSUSN/fig/USN/London} shows
the Relative Frequency Distributions (\textsc{RFD}) of the urban street network of Central London.
The probability distribution for roads
$\Pr\left({n}_{r}\right)$ \eqref{OSUSN/eq/USN/PDF/NaturalRoads} appears highly plausible,
as expected for any recognized self-organized city \citep{CAlexanderACINAT1965,BJiangTSUSNPDC2014}.
However,
for the time being,
the validation of the probability distribution for junctions $\Pr\left({n}_{j}\right)$ \eqref{OSUSN/eq/USN/PDF/Junctions}
appears more delicate.
This is due to the emergence of a numerical bottleneck as follows.
The state-of-the-art statistical method to either validate or reject a plausible hypothesis
for power law probability distributions
is based on Maximum Likelihood Estimations
(\textsc{MLE}) \citep{AClausetCRShaliziMEJNewman2009}.
Besides invoking a numerical minimizer \citep{WHPress2007},
this method requires sampling \citep{AClausetCRShaliziMEJNewman2009},
that is,
the input sample must be compared to a large set of randomly generated samples
---
the larger, the more precise.
In the present case,
this means that the numerical evaluation of the normalizing functions $\sfhzeta$ and $\sfWitten$
---
and of their respective logarithms and logarithmic derivatives
---
has to be efficient not only in terms of precision but also in terms of speed.
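Stripped of the goodness-of-fit sampling step, the estimation core of such an \textsc{MLE} can be sketched as a one-dimensional search over the log-likelihood. In the sketch below the sample is a synthetic placeholder and the Hurwitz zeta is evaluated by crude truncation, so it is a schematic of the method rather than a faithful rendering of the cited protocol.

```python
# Sketch: core of a maximum-likelihood fit of the discrete Pareto law
#   Pr(n) = n^-alpha / zeta_H(alpha; n_min)
# by a crude grid search; the goodness-of-fit sampling step is omitted
# and the sample below is a synthetic placeholder, not real valence data.

import math

def hurwitz_zeta(alpha, a, n_terms=10000):
    # Truncated zeta_H(alpha; a) with leading-order tail estimate.
    head = sum((a + n) ** -alpha for n in range(n_terms))
    x = a + n_terms
    return head + x ** (1.0 - alpha) / (alpha - 1.0) + 0.5 * x ** -alpha

def log_likelihood(alpha, sample, n_min):
    return (-alpha * sum(math.log(n) for n in sample)
            - len(sample) * math.log(hurwitz_zeta(alpha, n_min)))

def mle_exponent(sample, n_min):
    grid = [1.05 + 0.02 * k for k in range(250)]   # alpha in (1, ~6]
    return max(grid, key=lambda a: log_likelihood(a, sample, n_min))

# Synthetic sample loosely mimicking road valences:
sample = [4] * 120 + [5] * 60 + [6] * 35 + [8] * 15 + [12] * 6 + [20] * 2
print(mle_exponent(sample, n_min=4))
```

A real analysis would replace the grid search by a numerical minimizer and add the sampling-based $p$-value computation.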
Efficient numerical methods to evaluate the Hurwitz-zeta function $\sfhzeta$
can be found in the classical numerical literature
\citep{HBMF,KOldhamJCMylandJSpanier2009}
---
while they can easily be adapted to our specific usage.
By contrast,
the two-dimensional
Mordell-Tornheim-Witten zeta function $\sfWitten$
belongs to the specialized numerical literature and
its numerical computation is still a subject of investigation
\citep{JMBorweinKDilcher2018}.
In practice,
even the implementation of the corresponding Hurwitz generalization
with the same first two exponents
$\alpha$ and $\beta$
is rather tedious and very slow,
especially when the third exponent $\gamma$ becomes negative
---
as $2\lambda\upsilon_{j}$ appeared to be.
To work around this numerical bottleneck,
we performed a crude data analysis based on a Nonlinear Least-Squares Fitting
(\textsc{NLSF}).
Interestingly,
our \textit{ad hoc} crude data analysis reveals
a negative number of vital connections $\upsilon_{j}$,
which means that the associated generalized binomial combination number is smaller than one,
modulo a signed factor that drops out at normalization%
\endnote{%
We have
\begin{math}
\binom{N}{-\nu}
= \frac{\sin\pi\nu}{\pi\nu}{\binom{N+\nu}{\nu}}^{-1}
= \frac{\sin\pi\nu}{\pi\nu}{\binom{N}{\nu}}^{-1} (1+\mathcal{O}(\frac{\nu^2}{N}))
\end{math}%
.
}\label{OSUSN/edn/Combinatorics}%
.
We interpret this to mean that
the number of intraconnections for junctions
might be relatively much smaller than that for roads
in self-organized cities.
\begin{figure}[bth!]
\includegraphics[width=0.85\linewidth]{osusn_jsi-figure-04}
\caption{\label{OSUSN/fig/USN/London}%
Relative Frequency Distributions (\textsc{RFD}) for the urban street network of Central London:
circles represent relative frequencies for the valences of the road-road topological network;
crosses represent relative frequencies for the valences of the junction-junction topological network.
The red fitted curve
for the natural road statistics
describes the Maximum Likelihood Estimate (\textsc{MLE})
for the discrete Pareto probability distribution \eqref{OSUSN/eq/USN/PDF/NaturalRoads}
estimated according to the state of the art \citep{AClausetCRShaliziMEJNewman2009,CSGillespie2015}
(%
$\underline{n}_{r} = 4$,
$2\lambda\upsilon_{r} = 2.610(65)$,
$n=250\,000$ samples,
$p\text{-value} = 0.933(1)$%
).
The green fitted curve for the junction statistics
shows the best Nonlinear Least-Squares Fitting (\textsc{NLSF})
for the nonstandard discrete probability distribution \eqref{OSUSN/eq/USN/PDF/Junctions}
with $\underline{n}_r$ and $2\lambda\upsilon_{r}$ fixed to
their respective \textsc{MLE} values ($2\lambda\upsilon_{j} \approx -1.3$);
since
fast evaluation of the normalizing function $\sfWitten$ has yet to be found,
no \textsc{MLE} approach can be used for now.
A negative number of vital connections $\upsilon_{j}$ for junctions
is interpreted as expressing
a number of agent intraconnections
for junctions relatively much smaller than that for natural roads.
The sharp downturn at a valence of $10$ likely means that the model fails to capture
what occurs when valences are small.
In any case,
a proper \textsc{MLE} remains to be performed for confirmation.%
}
\end{figure}
\subsection*{\addcontentsline{toc}{subsection}{Drifted Self-Organized Urban Street Networks}Drifted Self-Organized Urban Street Networks}
\subsubsection*{\addcontentsline{toc}{subsubsection}{Coherence Based on Drifted Boltzmannian Mesoscopic Surprisals}Coherence Based on Drifted Boltzmannian Mesoscopic Surprisals}
Now let us regard the self-organized urban street networks studied in the previous section
as an ideal class of urban street networks,
namely,
as a reference from which \textit{`real'} urban street networks deviate.
The deviation is vanishing for self-organized urban street networks.
For arbitrary urban street networks,
the deviation might be of arbitrary magnitude.
Furthermore,
we presume that deviations are essentially caused by artificial means,
rather than by any change in the behaviours of the city-dwellers.
Artificial deviations are created by urban designers or decision makers
who remodel cities for arbitrary purposes but without regard to the laws that
might govern the spontaneous evolution of cities.
Meanwhile,
the topological mindset of city-dwellers and the social machinery
that governs roads and junctions remain unchanged.
Moreover,
\textit{a priori},
there is no apparent reason that the remodelling
affects one iota
the deep paradigm that constructs the perception of city-dwellers:
roads and junctions remain perceived as Boltzmannian mesoscopic systems.
Nevertheless,
the remodeled urban street networks might no longer reflect their perception
---
not vice versa.
In other words,
the deviations drift the surprisal of city-dwellers for their own urban street network.
Assuming a surprisal drift $\stDr(\Omega)$ that
generates an extra amount of surprisal ${\Delta{\left\langle{\mathrm{S}}\right\rangle}}$ on average,
the unique characterizing constraint bracket
in Shannon Lagrangian \eqref{eq/USN/ShannonLagrangian} becomes
\begin{equation}\label{eq/USN/drift/ShannonLagrangian/constraint/bracket/adhoc}
\left[
\sum_{\Omega}
\Pr(\Omega)
\left(%
\ln{\Omega}+\stDr(\Omega)%
\vphantom{\widetilde\Delta}%
\right)
-
\left(%
{\left\langle{\mathrm{S}}\right\rangle}+{\Delta{\left\langle{\mathrm{S}}\right\rangle}}%
\vphantom{\widetilde\Delta}%
\right)
\right]
.
\end{equation}
Carefully expanding \eqref{eq/USN/drift/ShannonLagrangian/constraint/bracket/adhoc}
gives rise to two apparent characterizing restrictions:
the first logarithmic moment characterizing constraint discussed above
and a new characterizing constraint,
respectively
\begin{equation}\label{eq/USN/drift/ShannonLagrangian/constraint/bracket/andsplit}
\left[
\sum_{\Omega}
\Pr(\Omega)\,\ln(\Omega) - {\left\langle{\mathrm{S}}\right\rangle}
\right]
\quad\text{and}\quad
\left[
\sum_{\Omega}
\Pr(\Omega)\,\stDr(\Omega) - {\Delta{\left\langle{\mathrm{S}}\right\rangle}}
\right]
.
\end{equation}
By adding this new characterizing restriction to Shannon Lagrangian \eqref{eq/USN/ShannonLagrangian},
we arrive at the deviant version
\begin{multline}%
\label{eq/USN/drift/ShannonLagrangian}
\mathcal{L}\left(\left\{\Pr(\Omega)\right\};\nu,\lambda,\varepsilon\right) =
-\sum_{\Omega} \Pr(\Omega)\,\ln\left(\Pr(\Omega)\right)
-\left(\nu-1\right) \left[
\sum_{\Omega} \Pr(\Omega) - 1
\right]
\\
\shoveright{%
\qquad\quad%
-\lambda \left[
\sum_{\Omega} \Pr(\Omega)\,\ln{\Omega} - {\left\langle{\mathrm{S}}\right\rangle}
\right]
-\varepsilon \left[
\sum_{\Omega}
\Pr(\Omega)\,\stDr(\Omega) - {\Delta{\left\langle{\mathrm{S}}\right\rangle}}
\right]
.
}
\end{multline}
The introduced Lagrange multiplier $\varepsilon$ tells us how
urban designers or decision makers
impose
a surprisal drift $\stDr(\Omega)$ on the surprisal perception of city-dwellers
for their own urban street network.
The constant ${\Delta{\left\langle{\mathrm{S}}\right\rangle}}$ corresponds to
the part of the apparent mean surprisal caused by the surprisal drift $\stDr(\Omega)$ itself
---
for now,
as the constant ${\left\langle{\mathrm{S}}\right\rangle}$,
it plays a dummy role.
Extremizing expression \eqref{eq/USN/drift/ShannonLagrangian} yields
\begin{equation}\label{eq/USN/drift/ShannonLagrangian/equations}
\frac{\partial\mathcal{L}\left(\left\{\Pr\left(\Omega\right)\right\};\nu,\lambda,\varepsilon\right)}{\partial\Pr(\Omega)} =
-\ln\left(\Pr(\Omega)\right)
-\nu
-\lambda\,\ln\Omega
-\varepsilon\,\stDr(\Omega)
= 0
,
\end{equation}
from which we readily find the power law probability distribution
\begin{equation}\label{eq/USN/drift/ShannonLagrangian/solution/calculus/intermediate}
\Pr(\Omega) =
\frac{\Omega^{-\lambda}\:{e}^{-\varepsilon\stDr(\Omega)}}{{e}^{\nu}}
.
\end{equation}
With the same easy manipulation as before,
the normalization condition allows us to define
the deviant partition function $Z(\Lambda;\lambda,\varepsilon)$ of our drifted system;
we get
\begin{equation}
{e}^{\nu} = \sum_{\Omega} \Omega^{-\lambda}\:{e}^{-\varepsilon\stDr(\Omega)} \equiv Z(\Lambda;\lambda,\varepsilon)
.
\end{equation}
So we end up by writing the most plausible probability distribution
associated to Shannon Lagrangian \eqref{eq/USN/drift/ShannonLagrangian} as
\begin{equation}\label{eq/USN/drift/ShannonLagrangian/solution}
\Pr(\Omega) =
\frac{\Omega^{-\lambda}\:{e}^{-\varepsilon\stDr(\Omega)}}{Z(\Lambda;\lambda,\varepsilon)}
.
\end{equation}
For non-vanishing surprisal drift $\varepsilon\stDr(\Omega)$,
as expected,
this probability distribution is obviously not scale-free.
In fact,
when the polynomial part of the asymptotic expansion of $\Lambda(\Omega)$ does not reduce to a constant,
the surprisal drift $\varepsilon\stDr(\Omega)$ acts as a cut-off function.
In other words,
in contrast to ideal self-organized urban street networks,
a typical deviant urban street network possesses a typical number of configurations
for its roads and junctions.
Our next task is to establish the statistics for roads and junctions in deviant urban street networks.
Substitution of \eqref{eq/USN/AgentBasedModel/NumberOfStates/NaturalRoads}
into \eqref{eq/USN/drift/ShannonLagrangian/solution} yields
\begin{subequations}\label{eq/USN/drift/AgentBasedModel/DistributionFunction}
\begin{equation}\label{eq/USN/drift/AgentBasedModel/DistributionFunction/NaturalRoads}
\Pr\left(n_{r}\right) \propto
n_{r}^{-2\lambda\plNVCR}\:\exp\left({-\varepsilon\plDrR(n_{r}^{2\plNVCR})}\right)
,
\end{equation}
once the surprisal drift $\stDr$ is suitably rescaled to the surprisal drift for roads $\plDrR$.
Afterwards,
substitution of \eqref{eq/USN/AgentBasedModel/NumberOfStates/Junctions}
into \eqref{eq/USN/drift/ShannonLagrangian/solution}
along Iversonian counting
with respect to \eqref{eq/USN/drift/AgentBasedModel/DistributionFunction/NaturalRoads}
gives
\begin{multline}%
\label{eq/USN/drift/AgentBasedModel/DistributionFunction/Junctions}
\Pr\left(n_{j}\right) \propto
\left(
\sum_{j(r,s)} \frac{\left[n_{j}=n_{r}+n_{s}\right]}{\left({n_{r}}{n_{s}}\right)^{2\lambda\plNVCR}}%
\exp\left({-\varepsilon\left[\plDrR(n_{r}^{2\plNVCR})+\plDrR(n_{s}^{2\plNVCR})\right]}\right)
\right)
\\
\times%
{n_{j}}^{-2\lambda\plNVCJ}\:\exp\left({-\varepsilon\plDrJ(n_{j}^{2\plNVCJ})}\right)
,
\end{multline}
\end{subequations}
with the same notation convention previously used.
The main interest of the deviant statistics \eqref{eq/USN/drift/AgentBasedModel/DistributionFunction} lies in showing
how surprisal drift formally separates
the evolution scale exponent $\lambda$
from the numbers of vital connections for roads and junctions,
$\plNVCR$ and $\plNVCJ$ respectively.
As seen in the previous subsection,
this separation of parameters is important as it means that
the macroscopic phenomenon of evolution
and the mesoscopic phenomena of social interactions
can be qualitatively studied
among drifted self-organized urban street networks.
Fortunately enough,
such qualitative investigations
among slightly drifted self-organized urban street networks
appear almost as manageable as the ideal case investigation
among self-organized urban street networks
as follows.
\subsubsection*{\addcontentsline{toc}{subsubsection}{Exploratory Study of Slightly Drifted Urban Street Networks}\label{sec/subsub/ExploratoryStudy/SDUDN}Exploratory Study of Slightly Drifted Urban Street Networks}
Let us first specify what we mean when a self-organized urban street network is slightly drifted.
Here it is important to bear in mind that
the numbers of configurations \eqref{eq/USN/AgentBasedModel/NumberOfStates}
result from asymptotic countings.
So,
the surprisal drifts for roads and junctions
introduced in \eqref{eq/USN/drift/AgentBasedModel/DistributionFunction},
$\plDrR$ and $\plDrJ$ respectively,
capture the asymptotic behaviour of the underlying surprisal drift $\stDr$.
Let us now assume that the underlying surprisal drift $\stDr(\Omega)$ admits
as asymptotic expansion a generic finite Laurent polynomial of the form
\begin{math}
a_{-p}\Omega^{-p}+a_{-p+1}\Omega^{-p+1}+\cdots%
+a_{0}+\cdots%
+a_{q-1}\Omega^{q-1}+a_{q}\Omega^{q}%
\end{math}%
.
The non-polynomial part is absorbed by the exponential function that the surprisal drifts feed,
and is hence irrelevant.
The zeroth order coefficient $a_{0}$ is eliminated
during the normalization by factorizing out its inverse exponentiation,
and is hence meaningless.
The remaining polynomial part
\begin{math}
a_{1}\Omega+\cdots%
+a_{q-1}\Omega^{q-1}+a_{q}\Omega^{q}%
\end{math}
is required,
by the normalization condition,
to be positive for large $\Omega$ values.
More importantly,
the remaining polynomial behaves as an asymptotic cut-off polynomial
whose strength lies in its leading term $a_{q}\Omega^{q}$.
We may consider surprisal drifts asymptotic to quadratic or higher-degree polynomials
as inducing too drastic cut-offs,
namely,
as altering too drastically self-organized urban street networks.
That is,
for the time being,
we consider as slight any surprisal drift that is asymptotic to
a monomial of degree one $a_{1}\Omega$ whose coefficient $a_{1}$ is arbitrarily small
---
and positive.
Thus,
we may assume,
without loss of generality,
that the slight surprisal drift $\stDr(\Omega)$ reduces to the canonical monomial $\Omega$
so that $\varepsilon\stDr(\Omega)=\varepsilon\,\Omega$.
Therefore
the parameter $\varepsilon$ simply expresses the strength of our slight surprisal drift.
Once properly rescaled,
the parameter $\varepsilon$ gives rise to the strengths $\plDrSR$ and $\plDrSJ$ associated to
the slight surprisal drifts for roads and junctions,
respectively;
we write
\begin{subequations}\label{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight}
\begin{align}
\varepsilon\plDrR(n_{r}^{2\plNVCR}) &= \plDrSR\:{n_{r}^{2\plNVCR}}%
\label{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight/NaturalRoads}%
\\
\varepsilon\plDrJ(n_{j}^{2\plNVCJ}) &= \plDrSJ\:{n_{j}^{2\plNVCJ}}
\label{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight/Junctions}%
.
\end{align}
\end{subequations}
Now we may work out the statistics for roads and junctions in slightly deviant urban street networks.
Substituting \eqref{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight}
into \eqref{eq/USN/drift/AgentBasedModel/DistributionFunction},
then making the change of parameters
\begin{equation}\label{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight/ChangeOfParameters}
\plZedR = \exp(-\plDrSR)
\qquad
\plZedJ = \exp(-\plDrSJ)
\end{equation}
for conciseness,
we obtain
\begin{subequations}\label{eq/USN/drift/AgentBasedModel/DistributionFunction/slight}
\begin{align}
\Pr\left(n_{r}\right) &\propto
{n}_{r}^{-2\lambda\plNVCR}\:{\plZedR}^{{n}_{r}^{2\plNVCR}}%
\label{eq/USN/drift/AgentBasedModel/DistributionFunction/slight/NaturalRoads}%
\\
\Pr\left(n_{j}\right) &\propto
\left(
\sum_{j(r,s)} \frac{\left[n_{j}=n_{r}+n_{s}\right]}{\left({n_{r}}{n_{s}}\right)^{2\lambda\plNVCR}}%
\:%
{\plZedR}^{n_{r}^{2\plNVCR}} {\plZedR}^{n_{s}^{2\plNVCR}}
\right)
\,{n_{j}}^{-2\lambda\plNVCJ}\:%
{\plZedJ}^{n_{j}^{2\plNVCJ}}
\label{eq/USN/drift/AgentBasedModel/DistributionFunction/slight/Junctions}%
.
\end{align}
\end{subequations}
For practical normalization,
the change of parameters \eqref{eq/USN/drift/AgentBasedModel/SurprisalDrift/slight/ChangeOfParameters}
proves valuable
for easily identifying the involved special functions.
First,
the probability \eqref{OSUSN/eq/USN/PDF/NaturalRoads}
for a road to cross ${n}_{r}$ junctions
in an idealized self-organized urban street network
takes
in a slightly deviant urban street network
the form
\begin{subequations}\label{OSUSN/eq/USN/drift/PDF/slight}
\begin{equation}\label{OSUSN/eq/USN/drift/PDF/slight/NaturalRoads}
\Pr\left({n}_{r}\right) =
\frac{{n}_{r}^{-2\lambda\plNVCR}\:{\plZedR}^{{n}_{r}^{2\plNVCR}}}%
{\sfLerch\left(\plZedR,2\lambda\upsilon_{r},\underline{n}_{r};2\plNVCR\right)}
,
\end{equation}
where
\begin{math}
\sfLerch\left(z,\alpha,a;\beta\right) =
\sum_{{n}=0}^{\infty} {(a+n)}^{-\alpha} {z}^{{(a+n)}^\beta}
\end{math}
is the generalization introduced by \citet{BRJohnson1974} of
the Lerch transcendent function
\begin{math}
\sfLerch\left(z,\alpha,a\right) =
\sum_{{n}=0}^{\infty} {(a+n)}^{-\alpha} {z}^{{n}}
\end{math}
\citep[\S~25.14]{HBMF}.
Second,
the concomitant probability \eqref{OSUSN/eq/USN/PDF/Junctions}
for a junction to see ${n}_{j}$ junctions through its joining roads
then transforms into
\begin{equation}\label{OSUSN/eq/USN/drift/PDF/slight/Junctions}
\Pr\left({n}_{j}\right) =
\frac{%
\sum_{{n}=\underline{n}_{r}}^{{n}_{j}-\underline{n}_{r}}
\left[{n}\left({n}_{j}-{n}\right)\right]^{-2\lambda\upsilon_{r}}
{\plZedR}^{n^{2\plNVCR}} {\plZedR}^{\left({n}_{j}-{n}\right)^{2\plNVCR}}
\;%
{n}_{j}^{-2\lambda\upsilon_{j}} {\plZedJ}^{n_{j}^{2\plNVCJ}}
}{%
\sfWittenLerch\left(%
[\plZedR,\plZedR,\plZedJ],%
[2\lambda\upsilon_{r},2\lambda\upsilon_{r},2\lambda\upsilon_{j}];%
\underline{n}_{r};%
[2\upsilon_{r},2\upsilon_{r},2\upsilon_{j}]
\right)%
}
,
\end{equation}
where
\begin{equation*}
\sfWittenLerch\left(%
[x,y,z],%
[\alpha,\beta,\gamma];%
\underline{n};%
[\iota,\kappa,\mu]%
\right)
=
\!\!\sum_{{m},{n}\geqslant\underline{n}}
{m}^{-\alpha} {n}^{-\beta} \left(m+n\right)^{-\gamma}
{x}^{{m}^{\iota}} {y}^{{n}^{\kappa}} {z}^{{(m+n)}^{\mu}}
\end{equation*}
is introduced for the sake of completeness.
\end{subequations}
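In contrast to $\sfWittenLerch$, the generalized Johnson--Lerch transcendent $\sfLerch$ is already tractable by plain summation for $0<z<1$, since the stretched-geometric factor forces rapid convergence. The sketch below uses illustrative placeholder parameters only.

```python
# Sketch: direct evaluation of the generalized (Johnson-)Lerch transcendent
#   Phi(z, alpha, a; beta) = sum_{n>=0} (a + n)^(-alpha) z^((a + n)^beta)
# for 0 < z < 1: the stretched-geometric factor makes plain summation
# converge quickly, so no acceleration is needed here.
# Parameters below are illustrative placeholders only.

def johnson_lerch(z, alpha, a, beta, tol=1e-15, max_terms=10 ** 6):
    total = 0.0
    for n in range(max_terms):
        term = (a + n) ** -alpha * z ** ((a + n) ** beta)
        total += term
        if term < tol * max(total, 1.0):
            break
    return total

# Road normalization under a weak drift, z_r = exp(-epsilon_r):
z_r = 0.999
Z_road = johnson_lerch(z_r, 2.6, 4, 2.0)
print(Z_road)
```

More sophisticated accelerated schemes, such as the condensation and Levin transformations mentioned below, become necessary only as $z$ approaches one.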
The validation of the slightly deviant statistics \eqref{OSUSN/eq/USN/drift/PDF/slight}
is more challenging than that of the ideal statistics \eqref{OSUSN/eq/USN/PDF}
from which it deviates,
for at least two reasons.
Firstly,
it is rather an exploratory work since we have no catalogue
of slightly deviant urban street networks from which we can pick relevant samples.
Secondly,
the involved normalizing functions $\sfLerch$ and $\sfWittenLerch$ are both computationally challenging.
Nonetheless,
the numerical bottleneck is
again
a bearer of both good news and bad news.
The bad news,
without surprise,
is that further investigations on
the deviant probability distribution
$\Pr\left({n}_{j}\right)$ \eqref{OSUSN/eq/USN/drift/PDF/slight/Junctions}
for junctions
must be postponed too.
This is simply because
its normalizing function $\sfWittenLerch$ combines
together the difficulties inherited from the normalizing functions $\sfWitten$ and $\sfLerch$ while,
for the least,
a fast numerical evaluation for the former has yet to be found.
The good news is that
a very efficient numerical evaluation already exists for the latter.
In fact,
this numerical evaluation was presented with the Lerch transcendent function
as illustration \citep{SVAksenov2003}.
Formally,
it consists in applying to the series
a condensation transformation followed by a Levin d-transformation \citep{WHPress2007,CBrezinskiMRedivoZaglia1991}.
Technically,
its adaptation to the generalized Johnson-Lerch transcendent function $\sfLerch$ is straightforward.
In practice,
a careful implementation written in \texttt{C} language
that uses the Levin transformation encoded in the \texttt{HURRY} procedure \citep[Algo.~602]{TFessler1983}
as implemented in the GNU Scientific Library \citep{GSL}
appears efficient in terms of both precision and speed.
\section*{\addcontentsline{toc}{section}{Conclusions and Future Works}Conclusions and Future Works}
The primary goal of our investigation is
to understand the statistics of urban street networks.
The objective of this research was twofold.
First,
to see how our recent results on self-organized urban street networks
can be broadened to \textit{`nearly'} self-organized urban street networks.
Second,
to learn what we can expect from this extension of our initial domain of investigation.
The implicit idea behind this approach is that most urban street networks
can be envisaged as a perturbation of a self-organized urban street network.
To start,
we present the surprisal statistical physics model that
we showed to govern self-organized urban street networks.
Afterwards,
by hand,
we perturb the model by introducing a surprisal drift.
We argue that the surprisal drift essentially results
from artificial remodellings imposed by urban designers or decision makers.
We obtain the generic statistics for arbitrarily drifted self-organized urban street networks,
and most importantly,
practical statistics for the slightly drifted ones among them.
All along we learn two important and practical properties.
First,
as expected whenever any perturbation occurs,
surprisal drift perturbations lead to a separation of parameters.
Here,
perturbations separate the macroscopic evolution scale parameter
from the mesoscopic social interaction parameters.
Second,
data analysis for validating the practical statistics
for slightly drifted self-organized urban street networks
remains manageable
---
modulo some numerical analysis efforts.
Future work must
first and foremost
validate the practical statistics for slightly drifted self-organized urban street networks
on a sufficiently large collection of \textit{`real'} urban street networks.
Thereafter,
the macroscopic and mesoscopic parameters must be estimated
for a representative set of slightly drifted self-organized urban street networks
in order to perform an observational qualitative investigation
on the involved phenomena.
This investigation will help to determine whether or not
the macroscopic phenomena of evolution and the mesoscopic phenomena of social interactions
transcend cultural basins.
There exists evidence that the latter are culturally dependent.
We believe,
in contrast,
that the macroscopic phenomenon of evolution is characterized
by a universal constant evolution scale that might reflect either
spatial spanning,
unconscious processes,
or both;
if so,
an observational estimation must be isolated
and ultimately some rationale must be found.
This sequence of observational investigations aims to confirm
for self-organized urban street networks the status of reference
among urban street networks.
Once confirmed,
it makes sense to compare the strengths of the surprisal drifts among
a representative set of urban street networks.
Because these strengths reflect the rational thoughts of urban designers and decision makers,
we expect to observe a random
(not to say irrational)
set of data.
On the other hand,
since surprisal drifts might be perceived as a source of stress by city-dwellers,
these strengths might have correlation with data that mark
their disengagement for their own urban street networks.
If such correlations are effectively observed,
then the strength of surprisal drifts might be interpreted as an indicator of their disengagement,
namely,
as a quality measure of their city.
\printendnotes*[OPSOUSN]
\begin{backmatter}
\section*{Abbreviations}
\textsc{FCA}: Formal Concept Analysis;
\textsc{MLE}: Maximum Likelihood Estimate;
\textsc{NLSF}: Nonlinear Least-Squares Fitting;
\textsc{RFD}: Relative Frequency Distribution.
\section*{Availability of data and materials}
The datasets generated and analysed during the current study are available
from the corresponding author on reasonable request.
\section*{Authors' contributions}
{JGMB}
conceived and designed the study,
programmed the map treatment/analysis tools,
collected and treated the map data,
performed the statistical analysis,
and
wrote the manuscript.
{SEGJ}
helped to shape the manuscript.
Both authors read and approved the final manuscript.
\section*{Competing interests}
The authors declare that they have no competing interests.
\bibliographystyle{spbasic}
\section{Introduction}
\label{sec:intro}
Extruded plastic scintillator bars with wavelength shifting (WLS) fibres and Silicon Photomultiplier (SiPM) readout
are considered an established technology for massive tracking calorimeters in long-baseline neutrino oscillation experiments.
The MINOS experiment~\cite{minos} employs extruded
bars of $(1\times 4.1 \times 800)$ cm$^3$ size with 9 m long WLS fibres. A fine-grained detector in the Miner$\nu$a
experiment~\cite{Minerva} is made of triangular-shaped 3.5 m long strips and WLS fibres of 1.2 mm
diameter. Other experiments that use the same technology are Belle II~\cite{Belle2} and T2K~\cite{T2K}.
This technology has been considered also as a viable option for the muon system of the SHiP experiment~\cite{ship}
proposed at the CERN SPS.
The SHiP muon detector comprises four muon stations interleaved by iron filters, each station with a
transverse dimension of $(6 \times 12)$ m$^2$ for a total active area of 288 m$^2$. Each station has to provide both spatial
and time information. The $x,y$ coordinates will be obtained from the crossing of horizontal and vertical bars 3 m long, with a granularity
to be defined between 5 and 10 cm. The time information is provided by the average of the times measured at both ends of the bars.
A time resolution better than $1$ ns per station is required.
This paper shows the results obtained on different types of extruded scintillating bars instrumented with
different types of WLS fibres and SiPMs
measured at a test beam held at the T9 area of the CERN Proton Synchrotron (PS) in the period October 14-28, 2015.
\section{The prototypes}
\label{sec:prototypes}
Given the relatively large area, a good choice for the SHiP muon system is the rather inexpensive scintillator
produced at the FNAL-NICADD facility~\cite{fnal1, fnal2}, which is fabricated by co-extrusion with a thin
layer of $TiO_2$ around the active core. Another possibility is the polystyrene scintillator bars
extruded at UNIPLAST plant (Vladimir, Russia)~\cite{uniplast}.
Since the attenuation length of the plastic scintillator is rather short,
the light produced by the particle interaction has to be collected, re-emitted, and transported to the photodetectors
efficiently by WLS fibres. These fibres need to have a good light yield to ensure a high detection
efficiency for fibre lengths of $\sim$ 3 m. Possible choices for WLS fibres are those produced by
Saint-Gobain~\cite{saint-gobain} and from Kuraray~\cite{kuraray} factories. Both companies
produce multiclad fibres with long attenuation length ($\sim$4 m) and good trapping efficiency
($\sim$ 5\%). The fibres from Kuraray have a higher light yield
while Saint-Gobain fibres have a faster response ($\sim$2.7 ns versus $\sim$10 ns of the Kuraray), which ensures
a better time resolution for the same light yield.
Scintillating bars from NICADD and UNIPLAST companies of different lengths, widths and thicknesses
were instrumented with different types and numbers of WLS fibres from Kuraray and
Saint-Gobain manufacturers and read out by different types of SiPMs from Hamamatsu and AdvanSiD (FBK) companies.
Table~\ref{tab:prototypes1} and Table~\ref{tab:prototypes2} show the main parameters of scintillating bars from NICADD and UNIPLAST
manufacturers, respectively.
\begin{table}[htbp]
\centering
\caption{\label{tab:prototypes1} Prototypes of extruded scintillator bars from NICADD manufacturer. All the bars were instrumented
with fibres Kuraray WLS Y11(200) S-type except the S2 bar that has been instrumented with fibres from the Saint Gobain company (BCF92).
The fibres in the L1, L2 and L4 bars were read out at both ends. The fibres in the S1, S2, S5 and S8 bars were read out only at one end.
The main parameters of the photosensors are shown in Table~\ref{tab:sipm}.}
\smallskip
\begin{tabular}{|l|c|c|c|c|}
\hline
& Bar dimensions & number of fibres/bar & fibre diameter & SiPM model \\
& (h $\times$ w $\times$ l) mm$^3$ & & [mm] & (AdvanSiD company) \\ \hline
L1 & ($10\times 45 \times 3000$) mm$^3$ & 1 fibre in 1 groove & 2 & ASD-NUV3S-P \\
L2 & ($20\times 40 \times 3000$) mm$^3$ & 1 fibre in 1 groove & 2 & ASD-NUV3S-P \\
L4 & ($20\times 40 \times 3000$) mm$^3$ & 1 fibre in 1 groove & 1.2 & ASD-NUV1S-P \\ \hline
S1 & ($10\times 45 \times 250$) mm$^3$ & 2 fibres in 1 groove & 1.2 & ASD-NUV3S-P \\
S2 & ($10\times 45 \times 250$) mm$^3$ & 2 fibres in 1 groove & 1.2 & ASD-NUV3S-P \\
S5 & ($20\times 40 \times 250$) mm$^3$ & 2 fibres in 1 groove & 1.2 & ASD-NUV3S-P \\
S8 & ($20\times 40 \times 250$) mm$^3$ & 1 fibre in 1 hole & 2 & ASD-NUV3S-P \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{\label{tab:prototypes2} Prototypes of extruded scintillating bars from UNIPLAST manufacturer. All the bars were instrumented
with fibres Kuraray WLS Y11(200) S-type. In the bars U1, U2 and U3 the fibres were read out at both ends,
in the U4 bar the two fibres were read out just at one end, opposite with respect to each other.
The main parameters of the photosensors are shown in Table~\ref{tab:sipm}.}
\smallskip
\begin{tabular}{|l|c|c|c|c|}
\hline
& Bar dimensions & number of fibres/bar & fibre diameter & SiPM model \\
& (h $\times$ w $\times$ l) mm$^3$ & & [mm] & (Hamamatsu company) \\ \hline
U1 & ($ 7 \times 30 \times 3000$) mm$^3$ & 1 fibre in 1 groove & 1 & MPPC S13081-050CS \\
U2 & ($ 7 \times 50 \times 3000$) mm$^3$ & 1 fibre in 1 groove & 1 & MPPC S13081-050CS \\
U3 & ($ 7 \times 100 \times 3000$) mm$^3$ & 2 fibres in 2 grooves & 1 & MPPC S13081-050CS \\
U4 & ($ 7 \times 100 \times 3000$) mm$^3$ & 2 fibres in 2 grooves & 1 & MPPC S13081-050CS \\
\hline
\end{tabular}
\end{table}
The L1, L2 and L4 prototypes from NICADD company are 3 m long, of different widths and thicknesses
as shown in Table~\ref{tab:prototypes1}.
These bars were machined with a single straight groove, on the top face, to host
1.2 mm or 2 mm diameter Kuraray Y11 (S300) fibres.
The S1, S2, S5 and S8 are 25 cm long bars, of different widths and thicknesses, and were used to test different configurations
with two fibres hosted in the same groove or one fibre hosted in a hole machined at the center of the bar.
These bars were read out only at one end. The fibres were all fixed with BC-600 optical cement from Saint-Gobain company.
An adhesive aluminum tape has been applied on top of the grooves to reflect the light emerging from the groove.
The photosensors used for NICADD bars are ASD-NUV3S-P or ASD-NUV1S-P from the Advansid company~\cite{advansid} whose main parameters are listed in
Table~\ref{tab:sipm}.
The 3 m long U1, U2, U3 and U4 prototypes of 0.7 cm thickness
were extruded at the UNIPLAST Factory (Vladimir, Russia) and then cut to
the 3, 5 and 10 cm wide bars.
The scintillator is polystyrene doped with 1.5\% of paraterphenyl (PTP)
and 0.01\% of POPOP. The bars were covered by a chemical reflector obtained by etching the scintillator surface in a chemical
agent, which results in the formation of a white micropore deposit over the polystyrene~\cite{Kudenko:2001qj}.
The chemical coating is an excellent reflector and, in addition, it dissolves the rough surface acquired during the cutting process.
A 2~mm deep and 1.1~mm wide groove has been machined along the bar central line in the 3 and 5 cm
wide bars to accommodate a WLS fibre. The 10 cm wide bars have two grooves running 5 cm apart.
The fibres of all prototypes are read out at both ends except for the U4 bar, where each fibre is read out only at one end.
The four prototypes of the UNIPLAST bars are sketched in Figure~\ref{fig:bars}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=13cm,angle=0]{bars.pdf}
\caption{\label{fig:bars} Four types of bars manufactured at UNIPLAST factory.}
\end{center}
\end{figure}
The fibres of one of the 10 cm wide bars (U4) have been read out only at one end in such a way that the bar is viewed from both ends
only by two photosensors, one per fibre. The fibres used for UNIPLAST bars are Kuraray WLS Y11 multi-clad fibres of 1~mm diameter.
The glue used to couple the fibres with a scintillator is optical cement EJ500 from Eljen Technology~\cite{cemento}.
The same glue has been used to embed optical connectors into the groove.
The plastic optical connector consists of two parts: a ferrule part glued into the scintillator and a
container to hold a Hamamatsu MPPC SiPM. Both parts are latched by a simple snap-like mechanism.
A foam spring inside the container provides reliable optical contact between the photosensor and the fibre end.
\section{The photosensors}
\label{sec:sipm}
Bars from NICADD and UNIPLAST manufacturers are read out by photosensors from Advansid and Hamamatsu companies, respectively.
The main parameters of the photosensors are shown in Table~\ref{tab:sipm}.
UNIPLAST bars were instrumented with low crosstalk Hamamatsu MPPC S13081-050CS~\cite{mppc}
with sensitive area size of (1.3$\times$1.3)~mm$^2$. NICADD bars were instrumented with ASD-NUV3S-P~\cite{asd3} and ASD-NUV1S-P~\cite{asd1}
with squared area of dimensions (3$\times$3)~mm$^2$ and circular area of 1.2 mm diameter, respectively.
%
\begin{table}[htbp]
\caption{Parameters of SiPMs from different manufacturers.}
\label{tab:sipm}
\vspace{.1cm}
\begin{center}
\begin{small}
\begin{tabular}{ccc}
\hline
Parameter & Hamamatsu & AdvanSiD \\
& MPPC & ASD \\
&S13081-050CS & -NUV3S-P \\
\hline
Pixel size, $\mu$m & 50 & 40 \\
Number of pixels & 667 & 5520 \\
Sensitive area, mm$^2$ & $1.3 \times 1.3$ & $3.0 \times 3.0$ \\
Gain & 1.5$\times 10^6$ & $2.6 \times 10^6$ (at +4 V over-voltage) \\
Dark rate, kHz/mm$^2$ & $\sim 90 $ & $< 100$ \\
(at T=300 K) & & \\
Crosstalk, \% & $\sim 1$ & $\sim 25$ (at +4 V over-voltage) \\
PDE & $ \sim 33 \%$ at 520 nm & 25\% at 520 nm \\
Voltage bias, V & 70 V at T = 23 $^{\circ}C$ & 30 V at T = 23 $^{\circ}C$ \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
\section{The beam line}
\label{sec:beamline}
The T9 beam at the CERN PS is a secondary beam line produced from a 24 GeV/c primary proton beam
slowly extracted from the PS. The line transports either positive or negative particles
in the momentum range between 0.5 and 10 GeV/c and with a momentum
resolution of $\sim 0.5\%$.
The beam is a mixed particle beam. Depending on the beam momentum and charge chosen
there are pions, (anti)-protons, $e^+$ or $e^-$ and, at the percent level, also kaons and muons.
For the negatively charged beam, the fraction of electrons can be as high as 80\% for $p=0.5$~GeV/c but
drops to $\sim$5\% at 5 GeV/c for the ``electron-enriched'' target, and to a few per mille when the ``hadron-enriched'' target is used.
The maximum particle rate per burst of $10^6$ is achieved for a $p=10$~GeV/c positive beam.
For negative beams the rates are typically 2-3 times lower and drop significantly at lower
energy. The beam is delivered uniformly over a burst of 0.4 seconds. Depending on scheduling
such a burst is provided typically once or twice every $\sim 15$ seconds.
A negative charged beam with momentum of 10 GeV/c produced with a ``hadron-enriched'' target
has been used for the measurements discussed in this paper. This choice allows us to have a beam dominated by minimum ionizing particles (MIPs) and to
minimize the fraction of electrons showering in the material of the experimental setup. A trigger rate of $\cal{O}$(100 Hz) has been obtained by closing the
beam collimators, in order to maximize the fraction of single-hit events.
\section{The experimental setup}
\label{sec:setup}
Bars from NICADD and UNIPLAST companies were tested simultaneously using
a common trigger made of two scintillators of $(13 \times 5 \times 1)$~cm$^3$ dimensions
read out by photomultipliers, put in cross one in front and the other behind the bars, and selecting an active
rectangular area of $(1 \times 5)$~cm$^2$.
Bars from different companies were read out by independent, similar setups as described below.
In both setups the signals were read out by digitizers which record the full
signal waveforms. This information will be also used for design of the front-end electronics (FEE) for the muon system of the SHiP experiment.
The experimental setup for NICADD bars is described in the following paragraph.
The coincidence of the two scintillators has been used to start the readout of a buffer of a 10-bit, 8 (4)
channels, 1 (2) GS/s VME digitizer CAEN V1751.
One of the two trigger scintillator signals has been sent to the digitizer for time reference.
Signals from the SiPMs were sent via 4 m long RG-174 cables to a 8-channels, 350~MHz bandwidth, 20~db gain, custom
preamplifier board based on AD8000 current feedback operational amplifier and then to the digitizer.
A VME interface has been used to send the data to a PC in the control room via a 30 m long optical fibre.
The signal charge has been measured by integrating the signal waveform within a 350~ns window.
In order to express the light yield in number of photoelectrons (p.e.),
the integrated charge spectra corresponding to dark noise events were registered on a scope and
used to extract the calibration constants for SiPMs. In fact, thanks to the high level of crosstalk ($\sim 25\%$) of the SiPM from the AdvanSiD company,
up to three single photoelectron peaks were clearly visible even in dark noise events and used to evaluate
the charge corresponding to one photoelectron.
A global uncertainty of 3\%
has been associated to the calibration constants, taking into account the fitting
procedure and the effect of temperature fluctuations (roughly 5 $^{\circ}$C day/night).
The optical crosstalk in the SiPMs has been statistically subtracted from the measured light yield.
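To make the conversion explicit, the following sketch converts an integrated charge into a number of photoelectrons using the single-p.e. charge extracted from the peak spacing of a dark-noise spectrum. The peak positions and charge value are hypothetical, and the division by $(1+P_{\rm xt})$ is one common way to statistically remove the average crosstalk contribution, not necessarily the exact procedure used here.

```python
def single_pe_charge(peak_positions):
    """Estimate the charge of one photoelectron from the spacing of
    consecutive single-p.e. peaks in a dark-noise charge spectrum."""
    gaps = [b - a for a, b in zip(peak_positions, peak_positions[1:])]
    return sum(gaps) / len(gaps)

def light_yield_pe(integrated_charge, pe_charge, crosstalk=0.25):
    """Convert an integrated charge to photoelectrons and statistically
    remove the average optical-crosstalk contribution (assumed model)."""
    n_measured = integrated_charge / pe_charge
    return n_measured / (1.0 + crosstalk)

# Hypothetical positions (ADC counts) of the first three p.e. peaks:
q_pe = single_pe_charge([120.0, 240.0, 360.0])  # 120 ADC counts per p.e.
ly = light_yield_pe(12000.0, q_pe)              # 100 p.e. measured, 80 after crosstalk
```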
\vskip 2mm
For UNIPLAST bars, signals from the MPPCs were sent through a 2.5~m long
twisted pair cable to a multi-channel custom made preamplifier with differential inputs.
The differential inputs suppressed the electronic pickup noise in the experimental hall, nevertheless
an additional screening has been required to obtain good separation between single photoelectron (p.e.)
peaks in the MPPC charge spectra. After shielding the twisted pair wires with Al-foil connected to ground,
up to 20 p.e. peaks were visible in the charge spectrum. An example of the spectrum used for the calibration of the light yield
is shown in Figure~\ref{fig:calib_spectrum}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{Calib_spec.pdf}
\caption{Example of MPPC spectrum used for the calibration of the light yield.\label{fig:calib_spectrum}}
\end{center}
\end{figure}
Then the signals were digitized by a 12-bit 5-GS/s switched capacitor desktop waveform digitizer CAEN DT5742.
Six readout channels were operated simultaneously.
The signal charge has been calculated by integrating the signal waveform within a 200~ns window.
The pulse rise time has been analyzed to obtain the timing parameters.
The calibration of the MPPCs to express the light yield in number of p.e. has been done in the
position where the beam hits a bar at the far end from the considered MPPC. The light output in this configuration was around
20 p.e. so 16 single p.e. peaks were averaged to obtain the calibration coefficients.
The calibration coefficients were measured once. No corrections for temperature fluctuations
(within roughly 5$^{\circ}$~C day/night) were made. This contributes to the systematic uncertainty
of the light yield measured from data collected over a few days. The specified value of optical crosstalk in the Hamamatsu
MPPCs is about 1\% and this factor has been neglected in the light yield determination.
\section{Results}
\label{sec:results}
\subsection{Light yield and attenuation length}
\label{ssec:light-yield}
The light yield for NICADD bars has been obtained by measuring the light yields at both ends
of 3 m long bars and at one end for 25 cm long bars.
An example of the light yield distribution is shown in Figure~\ref{fig:spectrum}.
The spectrum is fitted with a Gaussian plus a Landau function
with common mean and sigma values. The mean value and its uncertainty obtained from the fit
are used to determine the light yield and its uncertainty for a given beam position.
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{spectrum_fitted}
\qquad
\caption{\label{fig:spectrum} Example of distribution of the sum of the light yield collected at both ends
of a 2 cm thick NICADD bar.}
\end{figure}
For long bars, the attenuation of the light during the propagation along the fibre has been determined by measuring the light yield
as a function of the distance of the beam from each photosensor.
To perform this measurement the bar has been moved with respect to the trigger position by 25 cm steps.
The results are shown in Figures~\ref{fig:l1},~\ref{fig:l2} and \ref{fig:l4}
for bars L1, L2, and L4, respectively.
The attenuation behaviour of the Y11 fibre shows two components: an initial strong attenuation over a distance of $\sim$25 cm,
probably dominated by the absorption in the fibre cladding, followed by a much longer attenuation length ($\lambda \sim 4.5-5$ m).
This is consistent with previously published data~\cite{kudenko}.
The total light yield measured at both ends is constant within 20\% along the bar.
Figure~\ref{fig:l124} shows the total light yield for the three long bars from the NICADD company.
Table~\ref{tab:npe} shows the light yield measured at one end of the short bars S1,S2,S5 and S8 (as defined in Table~\ref{tab:prototypes1}) when the beam
impinges at $\sim$ 13 cm far from the SiPM. As comparison, the light yield measured at one end for long bars is also shown for the same beam position.
The highest light yield is measured for the S5 bar, which is 2 cm thick with two Kuraray fibres, 1.2 mm diameter each, embedded in the same groove.
The comparison of the results obtained with S1 and S2 bars shows that the Bicron fibres emit about half of the light produced by Kuraray fibres with the same
diameter. Another interesting result is that a fibre glued in a groove on the top face of the scintillator bar (L2) produces the same amount of light
as the same fibre glued in a hole in the middle of the bar (S8). In general, the light collected and re-emitted by a fibre is proportional to the transverse area
of the fibre itself (we measured about a factor three more light produced by a fibre of 2 mm diameter (L2) with respect to a fibre of
1.2 mm diameter (L4)) while the light yield obtained by doubling the thickness of the scintillator is only 30\% more (L2 versus L1).
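The two-component behaviour can be illustrated with a short sketch: light-yield points are generated from an assumed model with a 25 cm short component and a 450 cm long component, and the long attenuation length is then recovered by a log-linear least-squares fit restricted to points far from the photosensor, where the short component has died out. All numbers here are illustrative assumptions, not the measured values.

```python
import math

def long_attenuation_length(xs, ys, x_min=100.0):
    """Extract the long attenuation component via a log-linear least-squares
    fit to points with x >= x_min, where the short component is negligible.
    ln N = ln A - x/lambda, so lambda = -1/slope."""
    pts = [(x, math.log(y)) for x, y in zip(xs, ys) if x >= x_min]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(l for _, l in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * l for x, l in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# Illustrative two-component model: 25 cm short plus 450 cm long component.
xs = [25.0 * i for i in range(1, 12)]   # 25 ... 275 cm
ys = [40.0 * math.exp(-x / 25.0) + 60.0 * math.exp(-x / 450.0) for x in xs]
lam = long_attenuation_length(xs, ys)
```

Restricting the fit to $x \gtrsim 1$ m keeps the residual short-component contamination at the per-cent level, so the recovered length agrees with the 450 cm input to within a few per cent.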
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{LightYield_L1_new}
\qquad
\caption{\label{fig:l1} L1 bar: light yield measured at each end (red squares and blue solid circles) and the sum of the light yields at both ends
(black open circles) as a function of the incident beam position along the bar. }
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{LightYield_L2_new}
\qquad
\caption{\label{fig:l2} L2 bar: light yield measured at each end (red squares and blue solid circles) and the sum of the light yields measured at both ends
(black open circles) as a function of the incident beam position along the bar.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{LightYield_L4_new}
\qquad
\caption{\label{fig:l4} L4 bar: light yield measured at each end (red squares and blue solid circles) and the sum of the light yields measured at both ends
(black open circles) as a function of the beam position along the bar.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{LightYield_L124_new}
\qquad
\caption{\label{fig:l124} Sum of the light yields measured at the two ends as a function of the beam position along the bar for the L1, L2 and L4 bars.}
\end{figure}
\begin{table}[htbp]
\centering
\caption{\label{tab:npe} Light yield measured at one end of the short bars S1, S2, S5 and S8 defined in Table~\ref{tab:prototypes1} when the beam
impinges at $\sim$ 13 cm far from the SiPM. For comparison of different prototypes,
the light yield measured at one end for long bars is also shown for the same beam position. The photosensors used for S1, S2, S5, S8, L1, L2, L4
are different to those used for U1,U2,U3,U4, as shown in Table~\ref{tab:sipm}.
The highest light yield is measured for the S5 bar. The uncertainty is dominated by the systematic one.}
\smallskip
\begin{tabular}{|l|c|}
\hline
& light yield [p.e./MIP] \\ \hline
S1 & $78.0 \pm 2.3$ \\
S2 & $41.0 \pm 1.2$ \\
S5 & $133.0 \pm 4.0$ \\
S8 & $105.9 \pm 3.2$ \\ \hline
L1 & $77.4 \pm 2.3$ \\
L2 & $114.3 \pm 3.4$ \\
L4 & $36.7 \pm 1.1$ \\ \hline
U1 & $50.7 \pm 1.2$ \\
U2 & $45.3 \pm 1.2$ \\
U3 & $ 22.3 \pm 0.6$ \\
U4 & $ 19.8 \pm 0.6$ \\ \hline
\hline
\end{tabular}
\end{table}
\vskip 2mm
The light yield for UNIPLAST bars has been measured by using three samples of 3 cm wide bars (U1), three samples of 5 cm wide bars (U2)
and single prototypes of 10 cm wide bars (U3 and U4).
The position of the bar with respect to the beam has been changed in steps of 10 cm, collecting 29 points altogether.
Figure~\ref{fig:ly35} shows the scan results, averaged over the three tested bars of the same width, for the 3 cm (U1) and 5 cm (U2) wide bars.
The total light yield from both ends has been measured to be about 60 and 50 p.e. per minimum ionizing particle (MIP) for U1 and U2 respectively,
when the beam impinges at the center of the bars. The light yield is higher by 20\% when the beam impinges near the ends.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=15cm,angle=0]{ly35.pdf}
\caption{\label{fig:ly35} Light yield scan for 3 cm bars U1 (a) and 5 cm bars U2 (b).}
\end{center}
\end{figure}
Figure~\ref{fig:ly10} shows the scan results for 10 cm wide bars, U3 type (left) and U4 (right), respectively.
For U3 bars, the total light yield from all 4 MPPCs
is about 45 p.e./MIP when the beam impinges at the center of the bars. The plot on the right shows the light yield for the U4 bar,
where two WLS fibres are read out by two MPPCs, one per fibre at opposite bar ends.
This configuration gives the total light output of about 27 p.e./MIP in the center.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=15cm,angle=0]{ly10.pdf}
\caption{\label{fig:ly10} Light yield scan for 10 cm bars U3 (a) and U4 (b).}
\end{center}
\end{figure}
The scan has also been performed in the transverse dimension of the bar, to investigate how the light yield depends on the distance between the hit point and the WLS fibre.
This scan was done with the beam impinging in the middle of the bar length.
Bars 3, 5 and 10 cm wide were tested, with a single fibre read out at both ends.
Only events selected by a ($3\times 3$)~mm$^2$ area small plastic counter read out by a MPPC
in the same way as the bars under test were considered.
The results are shown in Figure~\ref{fig:ly_across} (a). The first point of the scan
was arbitrarily fixed near the bar edge.
The scan spanned over 5~cm towards the opposite edge crossing a fibre position at the coordinate of
around 20--25~mm. The light yield is the sum of the light measured by the two photosensors at both ends of the bar.
The attenuation of the scintillating light in the two opposite directions from a fibre is demonstrated in Figure~\ref{fig:ly_across}
(b) for the 10 cm wide bar. The light attenuation is asymmetrical because of the effect of the close reflective edge in one direction.
The points were fitted with the exponential function $f(x) = C\cdot \exp(S\cdot x)$, where $x$ is the position variable and
the attenuation length is $1/S$. An attenuation length of 76~mm has been obtained towards the near edge,
and 49~mm in the opposite direction, where the influence of the edges on the scintillating light collection is less important.
The second value can be considered, to a first approximation, as the attenuation length for scintillating
light propagation in a 7~mm thick extruded scintillator.
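As a minimal illustration of the fit model $f(x)=C\cdot\exp(S\cdot x)$, the attenuation length $1/S$ can be estimated from just two scan points on the same side of the fibre; the actual analysis fits all points, and the yields below are hypothetical.

```python
import math

def attenuation_length_two_point(x1, y1, x2, y2):
    """Estimate the attenuation length 1/|S| of f(x) = C*exp(S*x)
    from two light-yield measurements on the same side of the fibre."""
    s = (math.log(y2) - math.log(y1)) / (x2 - x1)
    return abs(1.0 / s)

# Hypothetical yields 20 mm and 40 mm away from the fibre:
lam = attenuation_length_two_point(20.0, 30.0, 40.0, 19.9)  # about 49 mm
```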
\begin{figure}[htb]
\begin{center}
\includegraphics[width=15cm,angle=0]{ly_across.pdf}
\caption{\label{fig:ly_across} Light yield scan across 3 cm, 5 cm and 10 cm bars (a). Attenuation of scintillating light in the opposite directions from a fibre in the 10 cm bar (b).}
\end{center}
\end{figure}
\subsection{Detection efficiency}
\label{ssec:det-eff}
Data obtained during the scan measurements across the bars have been used to calculate the detection efficiency.
The electronic trigger signal was produced by two trigger counters in coincidence.
An additional trigger counter of 3$\times$3~mm$^2$ active area allowed us to localize the position
of the impinging beam on the tested bars with high accuracy.
Only data collected with the beam impinging far from the bar edges have been considered for this measurement.
Events with the signals from the small counter with a pulse amplitude higher than 10~p.e. and time coordinate
within $\pm \, 4 \sigma_t$ with respect to the average time
were selected for measuring the detection efficiency.
The detection efficiency has been obtained with three different methods:
\begin{enumerate}
\item as the ratio between the number of hits within 4 $\sigma_t$ of the $(t_{\rm L}-t_{\rm R})/2$ spectrum and the total number of triggers,
where $t_{\rm L}$ and $t_{\rm R}$ are the times measured from the SiPMs situated at the left and right bar end with respect to the direction of the beam ({\it timing AND});
\item as the ratio between the number of hits within 4$\sigma_t$ of the time distribution of one of the two photosensors and the total number of triggers ({\it timing OR});
\item as the ratio between the number of hits with an integrated charge over some threshold at both bar ends and the total number of triggers ({\it charge AND}).
\end{enumerate}
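The three counting methods can be sketched as follows; the event tuples, spectrum means and time units are illustrative assumptions, not the test-beam selection itself.

```python
def detection_efficiencies(events, n_triggers, mu_diff, mu_single, sigma_t, q_thr=3.0):
    """Efficiency by the three counting methods listed above.
    Each event is (t_left, t_right, q_left, q_right); times in ns, charges
    in p.e.  mu_diff is the mean of the (t_L - t_R)/2 spectrum and
    mu_single the mean of a single photosensor's time spectrum
    (the 'OR' is taken here over either photosensor)."""
    win = 4.0 * sigma_t
    n_and = n_or = n_q = 0
    for tl, tr, ql, qr in events:
        if abs(0.5 * (tl - tr) - mu_diff) < win:
            n_and += 1                                   # timing AND
        if abs(tl - mu_single) < win or abs(tr - mu_single) < win:
            n_or += 1                                    # timing OR
        if ql > q_thr and qr > q_thr:
            n_q += 1                                     # charge AND
    return n_and / n_triggers, n_or / n_triggers, n_q / n_triggers

# Two in-time hits plus one noise-like trigger, out of three triggers:
evts = [(5.0, 5.2, 20.0, 18.0), (5.1, 4.9, 15.0, 16.0), (30.0, 5.0, 1.0, 25.0)]
effs = detection_efficiencies(evts, 3, 0.0, 5.0, sigma_t=0.5)
```

As in the text, the timing AND is the most stringent count, the timing OR accepts events with only one in-time sensor, and the charge AND depends only on the integrated charges.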
The {\it timing AND} method implies the most stringent selection criterion while the {\it timing OR} depends on the noise level of the photosensors.
The {\it charge AND} method is the loosest criterion and it is affected by accidentals within the 200~ns charge integration window.
The results are listed in Table~\ref{table:eff} for the three methods and for 3, 5 and 10 cm wide bars. The 10 cm wide bar
was read out by a single fibre at both ends. Only signals with more than 3 p.e. at each bar end have been considered in the analysis.
\begin{table}[h]
\caption{Detection inefficiency for different prototypes and for different methods of counting the detected events.
The uncertainty is purely statistical.}
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
Method of counting & (1-$\varepsilon$) (\%)& (1-$\varepsilon$) (\%) & (1-$\varepsilon$) (\%)\\
& width = 3 cm & width = 5 cm & width = 10 cm \\
\hline
Timing AND & 0.32$\pm$0.03 & 0.26$\pm$0.02 & 0.59$\pm$0.04 \\
Timing OR & 0.27$\pm$0.03 & 0.13$\pm$0.02 & 0.09$\pm$0.01 \\
Charge AND & 0.17$\pm$0.02 & 0.06$\pm$0.01 & 0.32$\pm$0.03 \\
\hline
\end{tabular}
\end{center}
\label{table:eff}
\end{table}
Figure~\ref{fig:ineff35} shows how the inefficiency depends on the detection threshold in the case of {\it Charge AND}. The threshold is applied at each bar end.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm,angle=0]{ineff35.pdf}
\caption{\label{fig:ineff35} The inefficiency for 3 and 5 cm wide bars as a function of the charge detection threshold.
The events are counted if the charge is over the specified threshold at each bar end. }
\end{center}
\end{figure}
\subsection{Time resolution}
\label{ssec:time-res}
The time resolution of bars L1, L2, and L4 was measured using the following procedure, repeated for each impact
position of the beam along the bar.
As a first step, the charge samples from the SiPMs ``left'' (L) and ``right'' (R) with respect to the beam direction at the two bar ends were scanned in 0.5 ns steps,
identifying a ``start'' time ($t_{\rm L}, t_{\rm R}$)
as the first time bin corresponding to an ADC count greater than 10 with respect to the average baseline.
An example of baseline-subtracted waveform is shown in Figure~\ref{fig:waveform}.
The same procedure has been applied to the trigger signals, obtaining a $t_0$ for each event
that was subtracted from $t_{\rm L}$ and $t_{\rm R}$.
As a second step, in order to compute the time-slewing corrections, the total charge $Q_{{\rm L,R}}$ distributions were calculated,
where $Q_{\rm L,R}$ are the charges integrated over 350 ns starting at $t'_{\rm L,R}=t_{\rm L,R}-t_0$.
The $Q_{\rm L,R}$ spectra were divided in 10 slices, each slice containing the same number of events, and for each slice a gaussian fit
of the $t'_{\rm L,R}$ distribution was done.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{waveform.pdf}
\qquad
\caption{\label{fig:waveform} Example of baseline-subtracted waveform for 2 cm thick NICADD bars.}
\end{figure}
The average values of these fits, together with the corresponding values of $Q_{\rm L,R}$, were used to find by linear interpolation
a time-slewing correction term, as a function of $Q_{\rm L,R}$, to be subtracted from the previous $t'_{\rm L}$ and $t'_{\rm R}$,
obtaining new ``start'' values $t''_{\rm L,R} = t_{\rm L,R} - t_0 - t^{\rm TS}_{\rm L,R}$.
The values of $t''_{\rm L,R}$ were histogrammed and gaussian fits were remade, this time for all charges $Q_{\rm L,R}$ together.
The averages of $t''_{\rm L,R}$ as a function of the beam impact position measure the speed of light propagation along the fibre,
and are shown in Figure~\ref{fig:L2_v} for bar L2.
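A minimal sketch of the piecewise-linear interpolation of the time-slewing term $t^{\rm TS}(Q)$ built from the charge slices; the slice calibration points below are hypothetical.

```python
def slewing_correction(q, calib):
    """Piecewise-linear interpolation of the time-slewing term t_TS(Q).
    `calib` is a list of (mean charge, mean start time) pairs, one per
    charge slice, sorted in charge; values outside the range are clamped."""
    if q <= calib[0][0]:
        return calib[0][1]
    if q >= calib[-1][0]:
        return calib[-1][1]
    for (q0, t0), (q1, t1) in zip(calib, calib[1:]):
        if q0 <= q <= q1:
            return t0 + (t1 - t0) * (q - q0) / (q1 - q0)

# Hypothetical slice calibration: larger pulses cross the threshold earlier.
calib = [(50.0, 1.20), (150.0, 0.80), (300.0, 0.55)]
t_corrected = 0.95 - slewing_correction(100.0, calib)  # t'' = t' - t_TS
```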
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{L2v_noSlewCorrs.pdf}
\qquad
\caption{\label{fig:L2_v} Speed of light propagation $v_f$ in L2 fibre, for signals read by SiPM left (red line) and right (blue line);
$v_f$ is measured as $(16.0\pm0.1)$ cm/ns.}
\end{figure}
The time resolution for each bar as a function of the position of the beam along the fibre length was measured as the $\sigma$ of a Gaussian function
used to fit the time distributions of each SiPM individually and of the mean time $0.5\cdot (t''_{\rm L} + t''_{\rm R})$.
The results are shown in Figures~\ref{fig:L1_s},~\ref{fig:L2_s},~\ref{fig:L4_s} respectively for bars L1, L2 and L4.
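The benefit of combining the two ends can be illustrated under the simplifying assumption that the left and right time jitters are uncorrelated and Gaussian (the numbers are illustrative):

```python
import math
import random

random.seed(1)
sigma = 0.8   # ns, assumed single-end jitter
n = 100000
tl = [random.gauss(0.0, sigma) for _ in range(n)]
tr = [random.gauss(0.0, sigma) for _ in range(n)]
tm = [0.5 * (a + b) for a, b in zip(tl, tr)]   # mean time per event

def std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))

# Uncorrelated end-to-end jitter: the mean time improves by about 1/sqrt(2).
ratio = std(tm) / std(tl)
```

In addition, since the propagation times towards the two ends are anticorrelated, the mean time is, to first order, independent of the impact position along the bar.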
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{L1s_piecewiseSL.pdf}
\qquad
\caption{\label{fig:L1_s} L1 bar time resolution using only SiPM $L$($R$), red circles(blue squares) and both SiPMs (green triangles).}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{L2s_piecewiseSL.pdf}
\qquad
\caption{\label{fig:L2_s} L2 bar time resolution using only SiPM $L$($R$), red circles(blue squares) and both SiPMs (green triangles).}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{L4s_piecewiseSL.pdf}
\qquad
\caption{\label{fig:L4_s} L4 bar time resolution using only SiPM $L$($R$), red circles(blue squares) and both SiPMs (green triangles).}
\end{figure}
From the point of view of the time resolution $\sigma_t$, the L2 bar provides the best results. The results obtained from bar L1
would allow a great economy of scintillating material. For the L4 bar, with a 1.2 mm diameter fibre, the time resolution appears
very marginal.
For completeness, we show in Table~\ref{tab:shortBars} the time resolution obtained in exactly the same way for short bars, read out only at one end,
and with the beam impact point about 13 cm from the photosensor.
\begin{table}[htbp]
\centering
\caption{\label{tab:shortBars}Time resolution for short bars S1, S2, S4, S5 and S8 defined in Table~\ref{tab:prototypes1} when the beam
impinges at $\sim$13 cm from the SiPM. Treatment of slewing corrections is the same as described in the text for long bars L1--L4.}
\smallskip
\begin{tabular}{|l|c|}
\hline
& time resolution [ns] \\ \hline
S1 & $0.756 \pm 0.006$ \\
S2 & $0.676 \pm 0.005$ \\
S4 & $0.820 \pm 0.007$ \\
S5 & $0.676 \pm 0.005$ \\
S8 & $0.730 \pm 0.005$ \\ \hline
\end{tabular}
\end{table}
\vskip 2mm
The measurement of the time resolution for UNIPLAST bars has been performed as follows.
The waveform digitizer captures the pulse shape with steps of 200~ps each as shown in Figure~\ref{fig:pulse_shape}.
The typical pulse shape after the slow preamplifier is stretched over 1000 time samples, that correspond to 200~ns.
The shoulder observed in the pulse is caused by the signal reflection in 2.5~m long twisted pair cable between MPPC and the preamplifier.
The rise time of the pulse was used to obtain the timing mark of the event. The pulse front typically contains more than
60 samples. This allows us to apply two different methods for calculating the timing,
obtaining similar results.
The first method consists of fitting straight lines to the rising edge of the pulse shape
and the baseline before the signal;
the rising edge of the pulse shape is fitted between 5\% and 85\% of its maximum height.
The crossing point of the lines gives the relative time coordinate. The method is illustrated in Figure~\ref{fig:pulse_shape}.
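A minimal sketch of this line-crossing method, assuming NumPy arrays of sample times and baseline-subtracted amplitudes; the 5\%--85\% window follows the text, while the number of baseline samples is our own choice:

```python
import numpy as np

def timing_mark(t, v, lo=0.05, hi=0.85, n_base=50):
    # Straight-line fit to the baseline before the signal.
    base = np.polyfit(t[:n_base], v[:n_base], 1)
    # Straight-line fit to the rising edge between lo and hi of the maximum.
    imax = int(v.argmax())
    edge = (v[:imax + 1] >= lo * v[imax]) & (v[:imax + 1] <= hi * v[imax])
    rise = np.polyfit(t[:imax + 1][edge], v[:imax + 1][edge], 1)
    # The crossing point of the two lines gives the timing mark.
    return (base[1] - rise[1]) / (rise[0] - base[0])
```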
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm,angle=0]{pulse_shape_inr.pdf}
\caption{ \label{fig:pulse_shape} Fitting of digitized pulse shape to obtain the timing mark. }
\end{center}
\end{figure}
The time resolution was obtained by fitting with a Gaussian function the time distributions
$0.5 \cdot (t''_R + t''_L)$ at different positions of the beam along the bar length.
The results for 3 and 5 cm wide bars are shown in Figure~\ref{fig:time35}. The points are the average time resolution
for the three tested bars of the same size. The average resolutions along the whole bar length are
$\sigma_t = 724$~ps and $\sigma_t = 820$~ps for 3 and 5 cm wide bars, respectively.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm,angle=0]{time35_sum.pdf}
\caption{\label{fig:time35} Time resolution for 3 cm (a) and 5 cm bars (b) vs position along the bars. }
\end{center}
\end{figure}
The results for 10 cm bars are presented in Figure~\ref{fig:time10}~(a) for the readout with two MPPCs (U4)
and in Figure~\ref{fig:time10}~(b) for the readout with four MPPCs (U3).
The time resolution for the U4 and U3 bars were determined by fitting with a Gaussian function the distributions $(t''_{\rm L} + t''_{\rm R}) \cdot 0.5$ and
$(t''_{\rm 1L}+t''_{\rm 1R} + t''_{\rm 2L} + t''_{\rm 2R}) \cdot 0.25$, respectively.
We find $\sigma_t = 1.4$ ns for the bar read out by two MPPCs (U4) and $\sigma_t = 1$ ns
for the bar read out by four MPPCs (U3).
The time resolution of the bar instrumented with four photosensors improves by a factor of $\sqrt{2}$, as expected.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm,angle=0]{time10_sum.pdf}
\caption{\label{fig:time10} Time resolution for 10 cm bars with 2 MPPC readout (a) and 4 MPPC readout (b). }
\end{center}
\end{figure}
These results were obtained for the whole spectrum of pulse amplitudes.
The time resolution of a single MPPC as a function of the light yield can be fitted as
\[
\sigma_t=6.6 \; {\rm ns}/\sqrt{L.Y.(p.e.)}-0.14 \; {\rm ns}.
\]
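For reference, this parametrization can be evaluated directly (coefficients taken from the fit above); at a light yield of 100 p.e. it gives $\sigma_t \simeq 0.52$ ns.

```python
def sigma_t_ns(light_yield_pe):
    # Single-MPPC time resolution (ns) vs light yield (photoelectrons),
    # using the fitted coefficients quoted in the text.
    return 6.6 / light_yield_pe ** 0.5 - 0.14
```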
The second method to determine the time resolution consists of simulating the behaviour of a constant fraction discriminator
on the pulse shape recorded by the digitizer.
The time mark was fixed to the clock sample where the pulse amplitude exceeds 15\% of the pulse height.
The results were compatible with those obtained with the first method within the uncertainties.
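The constant-fraction method can be sketched analogously; the 15\% fraction follows the text, and, as described, the mark is the clock sample itself rather than an interpolated crossing:

```python
import numpy as np

def cfd_time(t, v, fraction=0.15):
    # First clock sample on the leading edge whose amplitude exceeds
    # the chosen fraction of the pulse height.
    threshold = fraction * v.max()
    i = int(np.argmax(v >= threshold))
    return t[i]
```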
\section{Conclusions}
\label{sec:conclusions}
Parameters such as the light yield, time resolution and efficiency for minimum ionizing particles
of different types of 300 cm and 25 cm long scintillating bars from NICADD and UNIPLAST companies
instrumented with wavelength
shifting fibres and read out by different models of silicon photomultipliers have been measured at a test
beam at the T9 area at the CERN Proton Synchrotron.
A time resolution of 700--800~ps, constant along the bar length, and a light yield of 140 (70) photoelectrons
have been measured for 3~m long, 4.5 (5)~cm wide and
2 (0.7)~cm thick bars from the NICADD (UNIPLAST) company.
The difference in light yield is due to the different scintillator properties, different bar geometry and different photosensors.
The detection efficiency for minimum ionizing particles exceeds 99.5\% for the prototypes from UNIPLAST company.
The results collected so far nicely match the requirements for the SHiP muon detector.
\acknowledgments
The authors would like to thank H.~Wilkens, L.~Gatignon and M.~Jaeckel for continuous support and help.
This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 654168.
This work has been supported by the Grant \#14-12-00560 of the Russian Science Foundation.
\section{I. Introduction}
Fermion interactions play a central role in a wide range of physical systems. When these interactions are sufficiently strong, the physical properties of systems such as high-$T_c$ superconductors, neutron stars and cold atomic gases begin to exhibit universal behavior. Thanks to recent experimental progress, atoms and molecules can be trapped and cooled within optical or magnetic traps, which provide clean and highly controllable environments where cold atomic systems can be studied. One such system, the two-component Fermi gas, is an excellent candidate for the observation of strongly interacting phenomena. We note that for cold atomic gases, the interaction between atoms is varied by manipulating their two-body bound states via the technique known as Feshbach resonance (FBR).
At characteristic densities and ultracold temperatures, only isotropic and short-range \emph{s}-wave scattering between particles can take place. This scattering can be characterized by a single parameter, the \emph{s}-wave scattering length $a$. Experimentally, the \emph{s}-wave scattering length can be tuned by using FBR~\cite{fbr1, fbr2, fbr3, fbr4, fbr5, fbr6, fbr7, fbr8}. In the unitarity limit, where $a$ is tuned to $\pm \infty$, the system is strongly interacting and its physical properties are independent of the shape of the inter-particle potential. As such, the system is expected to exhibit universal properties~\cite{ug1, ug2} since its corresponding equilibrium properties depend only on the scaled temperature $T/T_F$ (which is set by the energy scale $E_F$ and length scale $l$), where $T_F$, $E_F$, and $l$ are the Fermi temperature, the Fermi energy, and the inter-particle distance, respectively. Thus, by studying unitary Fermi gases, one learns about the equation of state for strongly interacting systems in general.
Thanks to the elegant, but simple, mean-field-like theories proposed by Eagles and Leggett, and by Nozi\`eres and Schmitt-Rink, zero-temperature superfluid properties are well understood~\cite{nsr1, nsr2, nsr3}. In the attractive interaction regime (where $a < 0$), atoms form Bardeen, Cooper, and Schrieffer (BCS) pairs such that the ground state is a BCS superfluid. In the repulsive interaction regime (where $a > 0$), the atomic potential supports a two-body molecular bound state in vacuum such that the ground state is a Bose-Einstein Condensate (BEC) of these molecules. In between these two ground states, there is a smooth crossover where $a$ changes sign as it passes through $\infty$. Since the thermodynamics also evolve smoothly from the BCS limit to the BEC limit, a good understanding of the crossover regime comes from various interpolating schemes between these two limits~\cite{randeria}. However, due to the lack of theoretical techniques for taking into account strong interaction effects, the study of finite-temperature, non-superfluid (or normal) phases is challenging. In particular, the quantitative theoretical understanding of the strongly correlated problem in general is limited by the absence of any small physical parameters within the unitarity limit. While several heavily numerical approaches~\cite{nmug1, nmug2, nmug3, nmug4, nmug5} have been used to resolve this issue, the simplicity of mean-field theory has been lost in the process.
In this paper, we develop a simple, mean-field-like theory for strongly interacting, normal-phased Fermi gases at the unitarity limit. To do so, we begin in Section II by constructing a self-consistent theory to determine the self-energy of a spin-balanced, unitary Fermi gas, which can be accomplished by calculating the total energy, entropy and variational occupation numbers of the system. Continuing in Section II, we explore the thermodynamics of both the ``upper branch'' and the ``lower branch'' of a FBR by calculating the finite temperature equation of state, such that the pressure and the entropy of the system can be extracted and compared with experimental data. Finally, in Section III, we generalize an accurate theory concerning population balanced fermions to the case of population imbalanced fermions at the unitarity limit. This is done by writing the virial coefficients for population imbalanced fermions in terms of the virial coefficients for population balanced fermions, thereby allowing us to determine the grand thermodynamic potential for a system of population imbalanced, unitary fermions.
\section{II. A Self-Consistent Theory for Universal Thermodynamics}
In general, interaction effects in Fermi gases can be well described by including the appropriate self-energy term~\cite{se1, se2} in an expression for the total energy of the particles. Despite the presence of strong interaction, it has been argued that such a self-energy exists for unitary Fermi gases~\cite{stf1}; however, it is impossible to derive an expression for it from first principles alone. The main difficulty with constructing a self-energy for a degenerate, unitary Fermi gas is related to the strong interaction associated with an infinite \emph{s}-wave scattering length. To circumvent this problem, we start by writing the self-energy $\Sigma(k)$ in terms of a distribution function $n_k$ and a momentum-dependent coupling constant $g_{kk^\prime}$ as:
\begin{eqnarray}
\Sigma(k) = \frac{1}{2V}\sum_{k^{\prime}} g_{kk^\prime} n_{k^\prime}\label{SELFEN}
\end{eqnarray}
\noindent where $V$ is the system volume. Here, the distribution function $n_k$ is taken as a variational function. In this Hartree-Fock-like self-energy, $g_{kk^\prime}$ is given by:
\begin{eqnarray}
g_{kk^\prime} = -\frac{4\pi\hbar^2}{m} \frac{\delta(|k-k^\prime|/2)}{ |k-k^\prime|/2}\label{MDCC}
\end{eqnarray}
\noindent where the phase shift $\delta$ and the scattering length $a$ are related to one another by: $\delta = -\arctan(|k-k^\prime|a/2)$. Here, $m$ is the mass of a particle and $|k-k^\prime|/2 \equiv q$ is the relative momentum between a particle and the scattering particle. Note that in the limit where $|k-k^\prime||a|/2 \ll 1$, the momentum-dependent coupling constant reduces to the ordinary momentum-independent, mean-field coupling constant $g = 4\pi\hbar^2 a/m$, while at the unitarity limit (where $a \rightarrow \pm \infty$), $\delta \rightarrow \delta_{0} = \mp \pi/2$, a \emph{constant} value.
Now, by calculating the total energy:
\begin{eqnarray}
E = \sum_k\frac{\hbar^2k^2}{2m} n_k + \frac{1}{2V}\sum_{k, k^\prime} g_{k, k^\prime} n_k n_{k^\prime}
\end{eqnarray}
\noindent the entropy:
\begin{eqnarray}
S = -\sum_k[n_k \ln n_k + (1-n_k) \ln (1-n_k)]
\end{eqnarray}
\noindent and the number of particles: $N = 2\sum_k n_k$, the grand thermodynamic potential $\Omega = E-TS-\mu N$ is derived. Then, by using the relation: $N = -\partial \Omega/\partial \mu$, the variational occupation number $n_k$ is obtained as:
\begin{eqnarray}
n_k = \frac{1}{e^{\beta(\epsilon_k-\mu)}+1}
\end{eqnarray}
\noindent where $\epsilon_k = \hbar^2k/2m+\Sigma(k)$ is the total single particle energy of a particle with momentum \emph{k} and $\mu$ is the chemical potential of the system. Lastly, by inserting this occupation number and the unitary value of the momentum-dependent coupling constant into Eq. (\ref{SELFEN}), we complete the derivation of a self-consistent equation for the self-energy $\Sigma(k)$ at unitarity:
\begin{eqnarray}
\Sigma(k) = -\frac{1}{2V}\sum_{k^{\prime}}\frac{8\pi\hbar^2}{m} \frac{\delta_{0}}{|k-k^\prime|}\frac{1}{e^{\beta(\epsilon_{k^\prime}-\mu)}+1}\label{SELFEN1}
\end{eqnarray}
However, since ultracold Fermi gases are dilute in real space and the system volume is large, the momentum states form a quasi-continuum. As a result, we can convert the sum in Eq. (\ref{SELFEN1}) into an equivalent integral over all of momentum space. Then, by defining the dimensionless variables: $\beta \mu \equiv \eta$, $\beta \Sigma(k)\equiv V(\gamma)$ and $\gamma \equiv \hbar k\sqrt{\beta/(2m)}$ (where $\beta = 1/k_B T$), we have:
\begin{eqnarray}
V(\gamma) = -\frac{\delta_{0}}{\pi^2}\int \frac{{\gamma^{\prime}}^2 d{\gamma^{\prime}}}{e^{{\gamma^{\prime}}^2-\eta+V({\gamma^{\prime}})}+1} \frac{\sin\theta^\prime \, d\theta^\prime d\phi^\prime}{|\gamma-{\gamma^{\prime}}|}\label{SELFEN2}
\end{eqnarray}
\noindent By expanding 1/$|\gamma - \gamma^\prime|$ in terms of spherical harmonics and using their orthonormal properties, we can solve the angular part of Eq. (\ref{SELFEN2}) for both the $\gamma < \gamma^{\prime}$ case and the $\gamma > \gamma^{\prime}$ case. Hence, we obtain:
\begin{eqnarray}
\int \frac{\sin\theta^\prime d\theta^\prime d\phi^\prime}{|\gamma-\gamma^\prime|}=\begin{cases} 4\pi/\gamma^\prime, & \mbox{for } \gamma < \gamma^\prime \\ 4\pi/\gamma, & \mbox{for } \gamma > \gamma^\prime\end{cases}
\end{eqnarray}
\noindent Finally, we distribute Eq. (\ref{SELFEN2}) into two, separate integrals (corresponding to the two cases) and simplify the results to arrive at the concluding form of our self-consistent equation for the self-energy $V(\gamma)$ at unitarity:
\begin{eqnarray}
V(\gamma) = -\frac{4\delta_{0}}{\pi}\biggr[\frac{1}{\gamma}\int_0^\gamma\frac{y^2 dy}{e^{y^2-\eta+V(y)}+1} + \int_\gamma^\infty \frac{y dy}{e^{y^2-\eta+V(y)}+1}\biggr]\label{IESE}
\end{eqnarray}
\subsection{Upper Branch Thermodynamics of a Feshbach Resonance}
The BEC side of a Feshbach resonance is characterized by a positive scattering length ($a >0$) and by a ground state which involves bound pairs of fermionic molecules in condensate. This is known as the ``lower branch'' of a Feshbach resonance. By contrast, in the ``upper branch'' of a Feshbach resonance, the system's wavefunction consists of scattering states such that we can neglect these bound pairs and their corresponding binding energies~\cite{jump1}. Motivated by a series of recent experiments conducted in this (metastable) upper branch state \cite{jump2, exub1, exub2}, static and dynamic properties of Fermi gases have been theoretically studied \cite{thub1, thub2, thub3, thub4, thub5}. To further explore the thermodynamics of the upper branch, we set $\delta_{0}$ = $-\pi/2$ (i.e. its limiting value as $a \rightarrow +\infty$) in order to find the numerical solution of Eq. (\ref{IESE}) iteratively. We note that while Eq. (\ref{IESE}) converges very rapidly at first, it requires more iterations at relatively small values of $\gamma$. As such, the calculated self-energy $V(\gamma)$ for the upper branch is shown in FIG.~\ref{ubse} for three different values of $\gamma$, while the calculated occupation numbers $n(\gamma)$ for the upper branch are similarly shown in FIG.~\ref{occnum}. Additionally, we show the calculated pressure $P^*=P(\xi)\lambda^3/k_B T$ for the upper branch in FIG.~\ref{pressure}, where $\xi = \exp(-\mu/k_BT)$. It is important to note that the tail of the momentum distribution should decay with $\gamma^{-4}$ behavior as related to the Tan Contact~\cite{contact1, contact2, contact3}. However, the Contact density as a function of temperature was recently determined by Boettcher \emph{et al.} for the self-energy of ultracold fermions in the BEC-BCS crossover by using methods in non-perturbative quantum field theory~\cite{contact}.
Their findings indicate that the Contact density is not a monotonic function of temperature, and that its maximum occurs at approximately 1.25$T_{c}$ (where $T_{c}$ is the critical temperature for phase transition). Additionally, their results show that the Contact density becomes very small for temperatures greater than 2$T_{c}$, which is reinforced by the findings of Enss and Haussmann, who have determined that the Contact density for a unitary Fermi gas is $C = 0.086k_{F}^4$ at $T = 0.5T_{F}$~\cite{contactd}. Since our theory is valid for the normal phase of a Fermi gas at unitarity, we believe that the Contact plays a very small role in the actual decay of our momentum distribution function, especially at higher temperatures.
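The iterative solution of Eq. (\ref{IESE}) can be sketched as a damped fixed-point iteration on a momentum grid. The grid size, upper cutoff and damping factor below are our own numerical choices, and the tail beyond the cutoff is neglected:

```python
import numpy as np

def solve_self_energy(eta, delta0, gamma_max=8.0, n=400, n_iter=200, tol=1e-8):
    g = np.linspace(1e-4, gamma_max, n)
    V = np.zeros(n)
    for _ in range(n_iter):
        occ = 1.0 / (np.exp(g ** 2 - eta + V) + 1.0)
        dg = g[1:] - g[:-1]
        # Cumulative trapezoids for the two integrals of the self-consistent equation.
        I1 = np.concatenate(([0.0], np.cumsum(
            0.5 * dg * (g[1:] ** 2 * occ[1:] + g[:-1] ** 2 * occ[:-1]))))
        J = np.concatenate(([0.0], np.cumsum(
            0.5 * dg * (g[1:] * occ[1:] + g[:-1] * occ[:-1]))))
        I2 = J[-1] - J
        V_new = -(4.0 * delta0 / np.pi) * (I1 / g + I2)
        if np.max(np.abs(V_new - V)) < tol:
            return g, V_new
        V = 0.5 * V + 0.5 * V_new   # damped update for stability
    return g, V
```

For the upper branch ($\delta_0 = -\pi/2$) the resulting $V(\gamma)$ is positive (repulsive) and decays monotonically with momentum.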
\begin{figure}
\includegraphics[width=\columnwidth]{UBSE.png}
\caption{The self-energy of a unitary Fermi gas in the upper branch of a Feshbach resonance as related to its momentum. Values of $\gamma = 0.1$, 1 and 2 are shown in black, dark gray and light gray lines, respectively.} \label{ubse}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{UBON.png}
\caption{The occupation numbers of a unitary Fermi gas in the upper branch of a Feshbach resonance as related to momentum. Values of $\gamma = 0.1$, 1 and 2 are shown in black, dark gray and light gray lines, respectively. (Dashed lines of the same hue are the \emph{non}-interacting occupation numbers corresponding to these $\gamma$ values.)} \label{occnum}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{UBPR.png}
\caption{The pressure of a unitary Fermi gas as a function of $\xi$ in the upper branch of a Feshbach resonance. Here, $P^*=P(\xi)\lambda^3/k_B T$, where $\xi = \exp(-\mu/k_BT)$.} \label{pressure}
\end{figure}
\subsection{BCS Side Thermodynamics of a Feshbach Resonance}
The BCS side of a Feshbach resonance is characterized by a negative scattering length ($a < 0$) and by a ground state which involves Cooper pairs of fermionic atoms in condensate. To further explore the thermodynamics of this lower branch state, we set $\delta_{0}$ = $+\pi/2$ (i.e. its limiting value as $a \rightarrow -\infty$) in order to find the numerical solution of Eq. (\ref{IESE}) iteratively. However, upon doing so, we realize that Eq. (\ref{IESE}) does \emph{not} converge for relatively small values of $\gamma$ as it did before (for small values of $\xi$). In principle, we could introduce a lower cutoff value for the momentum to get around this problem, but the resulting numerical iteration may not be reliable. As such, we choose to solve Eq. (\ref{IESE}) via one-step iteration instead. The motivation for doing so is two-fold: not only do we want to avoid the non-convergence at low momenta, but we also want to find an approximate analytical expression for the self-energy. However, obtaining an accurate expression for the self-energy requires us to first choose an accurate starting point for our one-step iteration (i.e. the zeroth order self-energy).
In general, the unitary Fermi gas at zero temperature has been studied with both the heavily numerical Monte Carlo method~\cite{nmug4} and with renormalization group theory~\cite{stf2}. Monte Carlo calculations at zero temperature have shown that the self-energy is given by: $\hbar \Sigma(k) = A \mu$ (where $A \simeq -0.4045$~\cite{nmug2, lobo, chevy}), while renormalization group theory has shown that the self-energy has a weak momentum and frequency dependence at the unitarity limit~\cite{stf2}. Based on these zero-temperature theoretical results, we make the ansatz $\hbar \Sigma(k) = A \mu$ as the zeroth order self-energy. Thus, for one-step iteration, we define: $h = \eta - V \equiv \eta - A\eta$. Doing so (while expanding the denominator) allows us to write the first integral on the right hand side of Eq. (\ref{IESE}) as:
\begin{eqnarray}
I_1 = \sum_{n=0}^\infty (-1)^n \int_0^{\sqrt{h}} e^{-n(h-y^2)} y^2 dy \\ \nonumber
+ \sum_{n=1}^\infty (-1)^{n+1} \int_{\sqrt{h}}^{\gamma} e^{-n(y^2-h)} y^2 dy \label{I1a}
\end{eqnarray}
\noindent Completing these two integrations yields the expression:
\begin{eqnarray}
I_1 = \sum_{n=0}^\infty (-1)^n g_n(\sqrt{h}) + \sum_{n=1}^\infty (-1)^{n+1}[f_n(\gamma) - f_n(\sqrt{h})] \label{I1b}
\end{eqnarray}
\noindent where $g_n(x) = e^{-nh}[2e^{nx^2} x \sqrt{n} - \sqrt{\pi}\, Erfi(x\sqrt{n})]/(4 n^{3/2})$ and $f_n(x) = e^{nh}[-2e^{-nx^2} x \sqrt{n} + \sqrt{\pi}\, Erf(x\sqrt{n})]/(4 n^{3/2})$. The two functions, $Erf(x)$ and $Erfi(x)$, are the usual error and imaginary error functions, respectively. Note that for $h <0$, only the second term in Eq. (\ref{I1b}) contributes, whereas for $h > \gamma$, only the first term contributes. Thus, for $h <0$, we must set $f_n(h\rightarrow 0) = 0$ in the second term. The second integral on the right hand side of Eq. (\ref{IESE}) can be evaluated explicitly to obtain:
\begin{eqnarray}
I_2 = (\ln[e^h + e^{\gamma^2}] -\gamma^2)/2
\end{eqnarray}
Finally, within one-step iteration, we find an analytical expression for the momentum-dependent, finite temperature self-energy as: $V = -2(I_1/\gamma + I_2)$. Note that while we have completed the momentum integration, the self-energy now has the form of an infinite converging series. Now, in order to test the accuracy of our iterative analytical theory, we calculate the finite temperature equation of state and then compare our results with experimental data. In the unitary limit, the pressure can be written in a universal form as: $P(\mu,T) = P_1(\mu,T)h(\xi)$. Here, $P_1(\mu,T)$ is the pressure of a single-component, non-interacting Fermi gas:
\begin{eqnarray}
P_1(\mu,T) = \frac{k_B T}{\lambda^3}\frac{2}{\sqrt{\pi}}\int_0^\infty \sqrt{t}\ln[1+z_\sigma e^{-t}] dt
\end{eqnarray}
\noindent where $\lambda=\sqrt{2\pi \hbar^2/m k_B T}$ is the thermal wavelength and $\xi = \exp(-\mu/k_BT)$. Upon completion of the self-energy calculation, we compute the system pressure via the relation: $P(\mu,T) = -\partial \Omega/\partial V$, and then extract the universal function $h(\xi)$. This function is plotted in FIG.~\ref{hf} together with experimental data from Nascimbene \emph{et al.}~\cite{ens}. As can be seen, our theory is in reasonable agreement with experimental values at higher temperatures; however, a noticeable deviation appears at very low temperatures. This is due to the fact that our theory is only valid for the \emph{normal} phase, whereas the experimental system is in the superfluid phase.
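Extracting $h(\xi)$ requires the dimensionless ideal-gas normalization $P_1\lambda^3/k_BT$; a sketch of its numerical evaluation follows (grid cutoff and size are our own choices, and $z = 1/\xi$ since $\xi = \exp(-\mu/k_BT)$):

```python
import numpy as np

def p1_dimensionless(xi, tmax=60.0, n=30000):
    # P_1 * lambda^3 / (k_B T) = (2/sqrt(pi)) * Int sqrt(t) ln(1 + z e^{-t}) dt
    # with fugacity z = 1/xi, evaluated by the trapezoidal rule.
    t = np.linspace(0.0, tmax, n)
    f = np.sqrt(t) * np.log1p(np.exp(-t) / xi)
    return (2.0 / np.sqrt(np.pi)) * np.sum(0.5 * (f[1:] + f[:-1]) * (t[1] - t[0]))
```

In the classical limit ($z \ll 1$) this reduces to $P_1\lambda^3/k_BT \simeq z$, as expected.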
\begin{figure}
\includegraphics[width=\columnwidth]{FTPR.png}
\caption{The pressure of a unitary Fermi gas as a function of $\xi$ in the lower branch of a Feshbach resonance. Here, $h(\xi) = P(\mu,T)/ P_1(\mu,T) $, where $\xi = \exp(-\mu/k_BT)$. Our theory (black line) vs. experimental data~\cite{ens} (gray points).} \label{hf}
\end{figure}
We note that while the jump in $\delta_{0}$ by $\pi$ at unitarity will correspond to a jump in our self-energy, the thermodynamic quantities of the system will remain \emph{continuous} throughout the BEC-BCS evolution. This is due to our treatment of the upper branch on the BEC side, where we have chosen to neglect the binding energy of ground-state pairs. As such, the jump in our self-energy should vanish if we were to take these binding energies into account. However, since this was not the case, FIG.~\ref{pressure} and FIG.~\ref{hf} are qualitatively the same, but quantitatively \emph{different}. Note that this behavior has already been experimentally verified via upper branch energy measurements~\cite{jump2}, thus we believe that these two figures would also be quantitatively the same if the jump in our self-energy was not present. Regardless, we have chosen to validate our theory by using data~\cite{ens} from experiments performed on the BCS side of a Feshbach resonance, where its numerical predictions are most suited for comparison with thermodynamic measurements.
Similarly, the temperature dependence of the entropy and energy for harmonically trapped, unitary fermions was measured by Thomas's group at Duke University~\cite{duke}. To investigate this, we use the local density approximation (LDA) to evaluate the entropy and energy of these trapped fermions. In LDA, the local chemical potential $\mu$ is written in terms of the central chemical potential $\mu_0$ and the trapping potential $V(\vec{r}) = m[\omega_\bot^2(x^2+y^2)+\omega_z^2z^2]/2$ as: $\mu = \mu_0 -m\omega^2 r^2/2$, where $\omega = (\omega_\bot^2\omega_z)^{1/3}$ is the average trapping frequency. In doing so, the total number of particles:
\begin{eqnarray}
N = \int d^3\vec{r} n(\vec{r}) = \frac{4\pi}{m\omega^2}\int P(r) dr
\end{eqnarray}
\noindent and the total energy:
\begin{eqnarray}
E &=& 12 \pi\int r^2 P(r) dr
\end{eqnarray}
\noindent can easily be converted into integrals over the chemical potential as:
\begin{eqnarray}
N &=& \frac{4\pi}{\sqrt{2\beta m^3\omega^6}}\int \frac{P(\eta) d\eta}{\sqrt{\eta_0-\eta}}\label{ne} \\
E& = & \frac{12\sqrt{2}\pi}{\sqrt{\beta^3 m^3\omega^6}}\int P(\eta) \sqrt{\eta_0-\eta}d\eta \label{ee},
\end{eqnarray}
\noindent where $\eta = \beta \mu$ and $\eta_0 = \beta \mu_0$. The entropy is then given by: $S = 4E\beta/3 - \eta_0N$. Now, by defining the Fermi energy and Fermi temperature of an ideal Fermi gas as: $E_F \equiv k_BT_F = (3N)^{1/3}\hbar \omega$, we can combine Eq. (\ref{ne}) and (\ref{ee}) to yield the expression:
\begin{eqnarray}
\frac{S}{Nk_B} &=& \frac{4}{3}\frac{T_F}{T}\frac{E}{NE_F}.
\end{eqnarray}
\noindent Hence, for given values of $\eta_0$ and $T$, we can solve the above set of equations for the entropy and the energy. As such, the calculated entropy as a function of energy is shown in FIG. \ref{se} along with experimental data taken from Luo and Thomas~\cite{duke}.
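The LDA integrals above can be sketched for an arbitrary homogeneous pressure $P(\eta)$; the substitution $u = \sqrt{\eta_0-\eta}$ removes the square-root singularity. Grid size and lower cutoff are our own numerical choices, and the classical Boltzmann pressure $P\propto e^{\eta}$ serves only as a consistency check ($E = 3Nk_BT$ in a harmonic trap):

```python
import numpy as np

def trapped_N_E(P, eta0, beta, m_omega2, eta_min=-30.0, n=4000):
    # Substitute u = sqrt(eta0 - eta) so both integrands become regular.
    u = np.linspace(0.0, np.sqrt(eta0 - eta_min), n)
    p = P(eta0 - u ** 2)
    du = u[1] - u[0]
    trap = lambda f: np.sum(0.5 * (f[1:] + f[:-1])) * du
    int_N = 2.0 * trap(p)            # = Int P(eta) / sqrt(eta0-eta) d(eta)
    int_E = 2.0 * trap(p * u ** 2)   # = Int P(eta) * sqrt(eta0-eta) d(eta)
    N = 4.0 * np.pi / np.sqrt(2.0 * beta * m_omega2 ** 3) * int_N
    E = 12.0 * np.sqrt(2.0) * np.pi / np.sqrt(beta ** 3 * m_omega2 ** 3) * int_E
    return N, E
```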
\begin{figure}
\includegraphics[width=\columnwidth]{FTEN.png}
\caption{The entropy as a function of energy for harmonically trapped, unitary fermions. Our theory (black line) vs. experimental data~\cite{duke} (gray points).} \label{se}
\end{figure}
\section{III. Population Imbalanced Fermions at Unitarity}
Recent experiments concerning population imbalanced fermions~\cite{ru1, mit1} have triggered a new direction in theoretical research devoted to the study of unitary fermions in the presence of population imbalance~\cite{stf1, fc, liu}. For population balanced, two-component fermions at unitarity, R. K. Bhaduri, W. van Dijk and M. V. N. Murthy (BvDM) have proposed a parameter-free, high-temperature equation of state based on a virial cluster expansion~\cite{BvDM} that shows excellent agreement with experimental results over a wide range of fugacity. Their basic assumption is that higher order cluster integrals can be written in terms of two-particle clusters. This is justified because only two-body scattering effects are dominant for dilute atomic gases, even at unitarity where the virial coefficients are temperature independent. In this section, we generalize the BvDM approach to the case of population imbalanced, unitary fermions.
First, we summarize the original BvDM approach~\cite{BvDM} by noting that the grand thermodynamic potential of a population balanced Fermi system can be written as:
\begin{eqnarray}
\Omega - \Omega^{(0)} = -k_BTZ_1(\beta) \sum_{l=0}^\infty (\Delta b_l)z^l,
\end{eqnarray}
\noindent where $\Omega^{(0)}$ is the grand thermodynamic potential of an ideal Fermi gas, $Z_1(\beta)$ is the one-particle partition function and $\Delta b_l = b_l-b^{(0)}_l$ is the $l$-particle cluster integral relative to an ideal Fermi gas. As a FBR is related to the forming and dissolving of two-body pairs, BvDM proposed that higher order cluster integrals are expressible in terms of the two-body cluster $\Delta b_2$. Assuming the $l$-body cluster is one particle interacting with $l-1$ paired particles, the $l$-particle cluster integral is given by:
\begin{eqnarray}
\Delta b_l = (-1)^l \frac{\Delta b_2}{2^{\alpha_l}}
\end{eqnarray}
\noindent for $l \ge 2$, where $\alpha_l = (l-1)(l-2)/2$. As was mentioned earlier, the BvDM ansatz for a population balanced system of fermions shows excellent agreement with experimental results over a wide range of fugacity. Hence, for the remainder of this paper, we generalize this ansatz to the case of population imbalanced, unitary fermions.
The grand thermodynamic potential of a population imbalanced Fermi system can be written as:
\begin{eqnarray}
\Omega = -k_BTZ_1(\beta) \sum_{n=1}^\infty \sum_{k=0}^\infty b_{n,k}z_\uparrow^{n-k}z_\downarrow^k
\end{eqnarray}
\noindent where $b_{n,k}$ is the $n$-th virial coefficient for a configuration of $n-k$ spin-up fermions and $k$ spin-down fermions. We also note that $b_{n,k}$ has the properties: $b_{n,n-k} = b_{n,k}$ and $\sum_{k=0}^n b_{n,k} = b_n$. Now, by defining the virial coefficient difference relative to non-interacting fermions as: $\Delta b_{n,k} = b_{n,k}-b_{n,k}^{(0)}$, we have:
\begin{eqnarray}
\Omega - \Omega_0 = -k_BTZ_1(\beta) \sum_{n=1}^\infty \sum_{k=0}^\infty (\Delta b_{n,k})z_\uparrow^{n-k}z_\downarrow^k,
\end{eqnarray}
\noindent where $\Omega_0 = \sum_\sigma \Omega_{0\sigma}$. Here, we note that $\Omega_{0\sigma}$ is the grand thermodynamic potential for $\sigma$ non-interacting fermions, and is given by the expression:
\begin{eqnarray}
\Omega_{0\sigma}= -V\frac{k_B T}{\lambda^3}\frac{2}{\sqrt{\pi}}\int_0^\infty \sqrt{t}\ln[1+z_\sigma e^{-t}] dt.
\end{eqnarray}
\noindent Since the interaction occurs only between fermions with opposing spins, the virial coefficients for population imbalanced fermions can be written in terms of virial coefficients for population balanced fermions. Furthermore, $\Delta b_{n,0} = 0$, and we find that: $\Delta b_{n,k} = \Delta b_n/(n-1)$ for $n \ge 2$. Finally, by putting everything together, the grand thermodynamic potential for a population imbalanced, unitary Fermi system can be written as:
\begin{eqnarray}
\Omega - \Omega_0 = -k_BTZ_1(\beta)\sum_{n=2}^\infty \frac{(-1)^n \Delta b_2}{2^{\alpha_n}(n-1)} \sum_{k=1}^{n-1}z_\uparrow^{n-k}z_\downarrow^k\label{omega}
\end{eqnarray}
\noindent Hence, by using the relation: $P = -\partial \Omega/\partial V$, we see that the pressure for a population imbalanced, unitary Fermi system can be similarly written as:
\begin{eqnarray}
P - P_0 = \frac{k_B T}{\lambda^3} \sum_{n=2}^\infty \frac{(-1)^n \Delta b_2}{2^{\alpha_n}(n-1)} \sum_{k=1}^{n-1}z_\uparrow^{n-k}z_\downarrow^k\label{pres}
\end{eqnarray}
\begin{figure}
\includegraphics[width=\columnwidth]{PITA.png}
\caption{The finite temperature equation of state for a population imbalanced, unitary Fermi gas. Values of $\eta = 1.0$, 0.5 and 0.25 are shown in black, dark gray and light gray lines, respectively. The gray points represent experimental data~\cite{ens} for the population balanced case, which is recovered from Eq. (\ref{hh}) when $\eta = 1.0$.} \label{pita}
\end{figure}
\noindent Lastly, we extract the universal function $h(\eta,\xi)$ from Eq. (\ref{pres}) to obtain:
\begin{eqnarray}
h(\eta,\xi) = 1 + \frac{1}{\Omega_0}\sum_{n=2}^\infty \frac{(-1)^n \Delta b_2}{2^{\alpha_n}(n-1)} \sum_{k=1}^{n-1}z_\uparrow^{n-k}z_\downarrow^k,\label{hh}
\end{eqnarray}
\noindent where $\eta\equiv\mu_\downarrow/\mu_\uparrow$ is the ratio between the two chemical potentials. We note that when $\eta = 1.0$, the chemical potentials are equal and Eq. (\ref{hh}) reduces to its population balanced form as depicted in FIG. \ref{hf}. By contrast, $h(\eta,\xi)$ is plotted in FIG. \ref{pita} as a function of $\beta\mu_\downarrow$ for three different values of $\eta$, along with experimental data~\cite{ens} pertaining to the population balanced case for verification purposes.
\section{IV. Conclusions and Remarks}
Two recent results are of particular interest: the experimental findings by Zwierlein's group~\cite{imb1} and the theoretical findings by Van Houcke \emph{et al.}~\cite{imb2}. Zwierlein's group observed the superfluid phase transition in a strongly-interacting Fermi gas using high-precision measurements of the local compressibility, density and pressure. Their data completely describe the universal thermodynamics of such Fermi gases without the use of any fit or external thermometer. Similarly, Van Houcke \emph{et al.} computed and measured the equation of state for a normal, unitary Fermi gas. Their data show excellent agreement with their theory, in which a series of Feynman diagrams is controllably resummed in a non-perturbative regime using a Bold Diagrammatic Monte Carlo approach. We note that while the newer data from Zwierlein's group are highly accurate~\cite{imb1}, their normal-phase measurements are nearly identical to the measurements made by Nascimbene \emph{et al.}~\cite{ens}.
In conclusion, we have presented a self-consistent theory for the self-energy of a strongly interacting, normal-phase Fermi gas at unitarity. We have also shown that this self-energy can be used to calculate universal thermodynamic properties of a Fermi gas for both the upper and lower branches of a Feshbach resonance. In addition, we have demonstrated that higher-order virial expansion coefficients for population imbalanced fermions can be written in terms of virial expansion coefficients for population balanced fermions, which makes calculating the grand thermodynamic potential for population imbalanced Fermi systems at unitarity a much less cumbersome task. Overall, we find that our theory is in good agreement with currently available experimental data, which is promising for further theoretical research on fermion interactions at the unitarity limit.
\section{V. Acknowledgements}
We are grateful to Sylvain Nascimbene for sending us experimental data regarding their universal function measurements, and we would like to thank John Thomas for pointing out their experimental data in Ref.~\cite{duke}.
\section{Introduction}
Recent progress in our understanding of non-interacting Bloch electrons~\cite{classification1, classification2, mb, fkm, hm, qi1} reveals a large class of gapped topological phases, the so-called topological insulators and superconductors~\cite{hm, QiRev, KaneRev}. For example, a time-reversal-symmetric topological insulator is a band insulator that cannot be continuously tuned into a trivial atomic insulator as long as time-reversal symmetry is respected. A topological insulator features a single Dirac cone in its surface-state spectrum. Typically, these topological phases are realized in systems with strong spin-orbit coupling. It is known that when topological insulators are combined with superconductivity via the proximity effect~\cite{fu1} or via phonon-mediated attractive interaction~\cite{fu2}, the interesting interplay between electron pairing and the spin-orbit coupling results in exotic superconductivity. For instance, when a topological insulator is doped and turned into a superconductor, an odd-parity topological superconductor is obtained~\cite{fu2}, with a single gapless Majorana surface state protected by time-reversal symmetry.
Another type of gapless `topological matter', the Weyl semimetal, is currently being studied intensively and is proposed to be realized in experiments~\cite{Ashvin, Burkov, Yang, cho, latticeWeyl}. Its electronic structure has an {\it even} number of Weyl nodes -- linearly dispersing 3D cones that touch at a point, the Weyl point -- which carry non-trivial winding numbers ensuring their stability. These Weyl nodes can be thought of as 3D analogs of the two-component Dirac fermions in graphene and at the surface of a 3D topological insulator. They exhibit spin-momentum locking and thus require strong spin-orbit coupling to be realized. In analogy with superconductivity in topological insulators, it is natural to expect interesting superconducting states to emerge in these systems upon doping, resulting from their non-trivial topological winding numbers. To realize a Weyl semimetal phase requires either time-reversal symmetry~\cite{Ashvin} or inversion symmetry~\cite{murakamiweyl} to be broken. In this paper we concentrate on the inversion-symmetric case, in which the two nodes connected by the inversion symmetry carry opposite chirality. Upon slight doping there are at least two disconnected components to the Fermi surface around the nodes, shifted in momentum space from the inversion-symmetric high-symmetry points such as the $\Gamma$-point. We will show that the interplay between the finite momentum shift and the non-trivial winding numbers around each Weyl node leads to interesting superconducting states.
The finite momentum shift of the Fermi surface motivates the study of finite-momentum pairing states or Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states~\cite{FF,LO}. FFLO states break translational symmetry and have interesting physical properties~\cite{FF,LO, disorder3}. In the Weyl semimetals, the center of momentum of the FFLO pairs is fixed by the momentum of the Weyl nodes. Similarly, the non-trivial winding around the nodes and the broken time-reversal symmetry suggests the possibility of realizing even/odd-parity BCS states that are electronic analogues of the $^{3}$He-A phase~\cite{He3a1,He3a2}. Since the $^{3}$He-A phase has nodes with non-trivial winding number which guarantees the existence of a dispersionless surface states~\cite{He3a2}, the Weyl semimetal in these phases is also expected to support zero-energy surface flat bands, similar to a Weyl semimetal in proximity to a superconductor~\cite{He3a2,WeylSc,Fa,nodal}.
Surprisingly, when the attractive interaction is completely local in real space and represents a phonon-mediated interaction, we find from a self-consistent mean-field calculation that the {\it fully-gapped} finite-momentum pairing is energetically favored over the even-parity BCS state (both pairing states can be thought of as spin-`singlet' pairings though `singlet' is not a very exact terminology since spin-rotational symmetry is broken) and is stable against weak disorder. Hence, there is a good chance of experimentally observing these exotic phases. To be concrete, we concentrate on a specific lattice model realizing an inversion-symmetric Weyl semimetal and solve the gap equation of the lattice model in the BCS approximation. We also discuss the applicability of our result to other models realizing Weyl semimetals.
The proximity effect of an s-wave superconductor on an undoped Weyl semimetal has been studied by Meng and Balents in Ref.~\onlinecite{WeylSc}. In contrast to this work, where the superconductivity is {\it extrinsic}, we are interested in the {\it intrinsic} superconductivity of the doped Weyl semimetal. Note that if the Weyl semimetal is undoped, the intrinsic superconducting gap and the critical temperature are expected to be vanishingly small since the density of states goes to zero at the Weyl point.
\section{Model}
The model we consider in this work is given by the Hamiltonian
\begin{equation}
H = H_0 + V_{\rm ee}
\label{eq:Model}
\end{equation}
where $V_{\rm ee}$ is an electron-electron interaction term to be specified below. For the kinetic term, $H_0$, we take the minimal two-band lattice model~\cite{Yang}
\begin{align}
H_0 = t& (\sigma^{x} \sin k_{x} + \sigma^{y} \sin k_{y})+t_{z}(\cos k_{z} - \cos Q)\sigma^{z} \nonumber \\ &+m(2-\cos k_{x}-\cos k_{y})\sigma^{z} -\mu.
\label{lattice}
\end{align}
This model realizes a Weyl semimetal with two Weyl points at momenta ${\vec P}_\pm = (0,0,\pm Q)$. $\sigma^{x,y,z}$ are the Pauli matrices (for later use we define $\sigma^0$ to be the $2\times2$ unit matrix), and $t$ and $t_z \sin Q$ are the Fermi velocities at the Weyl points in the $x,y$ and $z$ directions, respectively. Without loss of generality, we assume $t = t_{z}\sin(Q)$ such that the Fermi velocity around the Weyl points is isotropic. We have explicitly included the chemical potential $\mu$ in the kinetic term. We are primarily interested in the parameter range $0 < |\mu/t| \ll Q$, when the Fermi surface consists of two disconnected spherical components around the Weyl points (see Fig.~\ref{Fig1}). In this case, the states on the Fermi surface have spin-momentum locking similar to the surface states of a strong topological insulator. This property will play an important role in our discussion of pairing states below.
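As a minimal numerical sanity check (a sketch with illustrative parameters $Q=0.7$, $m=1$, not taken from the text), one can verify that the direct band gap of Eq.~(\ref{lattice}) closes only at ${\vec P}_\pm=(0,0,\pm Q)$, and that the velocity there is isotropic once $t = t_z\sin Q$ is imposed:

```python
import math

def band_gap(kx, ky, kz, t=1.0, tz=None, Q=0.7, m=1.0):
    # direct gap 2|d(k)| of H_0 at mu = 0, with d the coefficient of the Pauli vector
    if tz is None:
        tz = t / math.sin(Q)  # enforce t = t_z sin(Q), as assumed in the text
    dx, dy = t * math.sin(kx), t * math.sin(ky)
    dz = tz * (math.cos(kz) - math.cos(Q)) + m * (2.0 - math.cos(kx) - math.cos(ky))
    return 2.0 * math.sqrt(dx * dx + dy * dy + dz * dz)
```

The gap vanishes at $(0,0,\pm Q)$, stays open elsewhere on the $k_z$-axis, and grows at the same rate for small displacements along $k_x$ and $k_z$ away from a node.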
The electron-electron interaction is short ranged and takes the form
\begin{equation}
V_{\rm ee} = V_{0} \sum_{i} n_{i}n_{i} + V_{1} \sum_{\langle ij\rangle}n_{i}n_{j} = \sum_{{\vec k}} V({\vec k})n_{{\vec k}}n_{-{\vec k}},
\label{interaction}
\end{equation}
where $n_{i} = \sum_{\sigma} c^{\dagger}_{i, \sigma}c_{i,\sigma}$ is the number of electrons on site $i$, and the second sum is over nearest neighbors only. $V({\vec k}) = V_{0}+V_{1}(\cos k_{x} + \cos k_{y} + \cos k_{z})$ is the Fourier transform of the real-space interaction. $V_{0}$ represents a phonon-mediated attractive on-site interaction and $V_{1}$ the nearest-neighbor interaction. We are mainly interested in the case $V_{0}<0$ and $|V_0|\gg |V_{1}|$ when electrons form Cooper pairs and condense. The phenomenological interaction term~\eqref{interaction} captures the tendency of d-wave pairing for $V_{1}<0$ and $V_{0}>0$ in the context of high-Tc superconductors such as cuprates~\cite{MacDonald}.
The point group symmetry of the model Hamiltonian, $H$, is $C_{4h} =\{I^{\eta_I}C_4^{\eta_4}|\eta_I=0,1;\eta_4=0,1,2,3\}$, where
\begin{align}
I&: \sigma^{z} H(-{\vec k})\sigma^{z} = H({\vec k}), \nonumber \\
C_{4}&: S^{\dagger} H[R_{\pi/2}({\vec k)}]S = H({\vec k}),\label{symmetry}
\end{align}
with $S = \frac{1}{\sqrt{2}}(\sigma^{0}+i\sigma^{z})$ and $R_{\pi/2}$ a rotation by an angle $\pi/2$ around the $z$-axis. $I$ is the inversion symmetry that takes ${\vec r} \rightarrow -{\vec r}$. Each spatial rotation is accompanied by an equal spin rotation, manifesting the spin-momentum locking due to the presence of strong spin-orbit coupling. The $C_4$-rotation symmetry in the $xy$-plane is not required to realize a Weyl semimetal, but is present in this model and similar lattice rotations are present in other models we discuss later.
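Both relations in Eq.~(\ref{symmetry}) can be verified numerically. A small sketch (setting $t = t_z = 1$, which does not affect the symmetry properties; the rotation $R_{\pi/2}({\vec k}) = (k_y, -k_x, k_z)$ is assumed, consistent with the spin rotation $S$):

```python
import cmath, math

def H(kx, ky, kz, Q=0.7, m=1.0, mu=0.3):
    # Bloch Hamiltonian of the lattice model as a 2x2 complex matrix (t = t_z = 1)
    dx, dy = math.sin(kx), math.sin(ky)
    dz = (math.cos(kz) - math.cos(Q)) + m * (2.0 - math.cos(kx) - math.cos(ky))
    return [[dz - mu, dx - 1j * dy],
            [dx + 1j * dy, -dz - mu]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def same(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

SZ = [[1.0, 0.0], [0.0, -1.0]]                 # sigma^z
w = cmath.exp(1j * math.pi / 4)
S = [[w, 0.0], [0.0, w.conjugate()]]           # (sigma^0 + i sigma^z)/sqrt(2)
Sdag = [[w.conjugate(), 0.0], [0.0, w]]

def inversion_ok(kx, ky, kz):
    # sigma^z H(-k) sigma^z == H(k)
    return same(mul(SZ, mul(H(-kx, -ky, -kz), SZ)), H(kx, ky, kz))

def c4_ok(kx, ky, kz):
    # S^dag H(R_{pi/2} k) S == H(k), with R_{pi/2}(k) = (ky, -kx, kz)
    return same(mul(Sdag, mul(H(ky, -kx, kz), S)), H(kx, ky, kz))
```

Both checks hold at generic momenta, confirming that each spatial operation must be accompanied by the corresponding spin rotation.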
To make a connection to previous work~\cite{Ashvin, Yang, cho, Burkov} and to obtain a general understanding of Weyl semimetal phases, we derive the low energy effective theory corresponding to the lattice model~\eqref{eq:Model}. Expanding $H(\vec k)$ in the small momentum ${\vec q} = {\vec k} - {\vec P}_\pm$ around the two Weyl Points denoted by $\pm$, we obtain
\begin{equation}
H_0 = \sum_{{\vec k}}c^{\dagger}({\vec k}) H_0({\vec k}) c({\vec k}) \approx \sum_{a= \pm} \psi^{\dagger}_{a}({\vec q}) h_{a}({\vec q}) \psi_{a} ({\vec q}).
\label{weyl}
\end{equation}
The effective kinetic Hamiltonian $h_{\pm}({\vec q})$ is given by
\begin{equation}
h_{\pm}({\vec q}) = t (q_{x}\sigma^{x}+q_{y}\sigma^{y} \mp q_{z}\sigma^{z}) - \mu.
\label{low}
\end{equation}
Similarly, we obtain for the interaction term
\begin{equation}
V_{\rm ee} = \sum_{{\vec k},{\vec p}, {\vec q}}V^{ab;cd}({\vec q})\psi^{\dagger}_{a, \sigma}({\vec k}+{\vec q})\psi^{\dagger}_{b,\tau}({\vec p}-{\vec q})\psi_{c,\tau}({\vec p})\psi_{d,\sigma} ({\vec k}),
\end{equation}
where roman letters denote the nodal indices $\pm$ and $\sigma,\tau$ are spin indices. Here and henceforth, repeated indices are summed over. In the BCS channel (see App.~\ref{app:interactions} for details)
\begin{equation}
V_{\rm ee} =\sum_{{\vec k},{\vec l}}V^{ab;cd}\psi^{\dagger}_{a, \sigma}({\vec k})\psi^{\dagger}_{b,\tau}(-{\vec k})\psi_{c,\tau}(-{\vec l})\psi_{d,\sigma} ({\vec l}),
\label{eq:VeeBCS}
\end{equation}
with
\begin{align}
&V^{-+;+-}=V^{+-;-+}= V_{0} + 3V_{1} - \frac{V_{1}}{2}({\vec k}-{\vec l})^{2},\nonumber \\
&V^{-+;-+}= V_{0}+2V_{1} +V_{\perp} + V^{+}_{\parallel},\label{ContinuumInteraction}\\
&V^{+-;+-} =V_{0}+2V_{1} +V_{\perp} + V^{-}_{\parallel}, \nonumber
\end{align}
and
\begin{align}
V_{\perp} &= -\frac{V_{1}}{2} ({\vec k}_{\perp}-{\vec l}_{\perp})^{2},\\
V^{+}_{\parallel}/V_{1} &= [1 - \frac{1}{2}(k_{z}-l_{z})^{2}]\cos2Q + (k_{z}-l_{z})\sin2Q,\nonumber \\
V^{-}_{\parallel}/V_{1} &= [1 - \frac{1}{2}(k_{z}-l_{z})^{2}]\cos2Q - (k_{z}-l_{z})\sin2Q,\nonumber
\end{align}
where ${\vec k}_{\perp} = (k_{x},k_{y},0)$. These expressions will be used in the next section.
\section{Mean field theory and pairing channels}
\label{MFEnergy}
We treat the interaction term $V_{\rm ee}$ in a mean-field approximation and solve the resulting gap equations self-consistently. In addition to the more standard BCS pairing, we also study finite-momentum or FFLO pairing.
\subsection{BCS pairing}
\begin{figure}
\includegraphics[width=1\columnwidth]{SC_Weyl3}
\caption{A schematic diagram of the spin texture around the Weyl nodes in momentum space and the pairing states. (a) The spin direction of the eigenstates is given by thick arrows. The double-headed arrows labeled with (1) and (2) indicate the partner states in the BCS pairing. The spin state is maximally anti-parallel for (1) and parallel for (2), indicating that there will be nodes in the latter case if the pairing is in the singlet channel. Contrary to the BCS pairing, the FFLO pairing (3) connects two states within the same node (`intra-nodal' pairing). The two states connected by the FFLO pairing have the opposite spin directions. (b) Position of the nodes for the even-parity state. The nodes of the same chirality are on the same component of the Fermi surface, with their partner nodes of opposite chirality on the other. The filled circle represents a node of chirality $+1$ and the crossed circle represents a node of chirality $-1$. Hence, there are four nodal points on the Fermi surface of the even-parity paired BCS state.}
\label{Fig1}
\end{figure}
The symmetry classification of different BCS pairing order parameters in a doped Weyl semimetal, according to the lattice symmetry~\eqref{symmetry}, is summarized in Table \ref{table1} (see App.~\ref{app:symm} for more details). Three of the BCS pairing order parameters ($\Gamma^1$ and $\Gamma^{3,\pm}$) have point nodes, and one has nodal lines ($\Gamma^2$).
\begin{table}
\begin{tabular} {l|l|l|l}
\hline
{}&IRR&$C_{4h}$&pairing function\\ \hline
$\Gamma^{1}$&$A$&$1$& $Z^2(i\sigma^{y}),(X-iY)(\sigma^0+\sigma^z),(X+iY)(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{2}$&$B$&$-1$&$XY(i\sigma^y),(X+iY)(\sigma^0+\sigma^z),(X-iY)(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{3,+}$&$E_{+}$&$i$&$(X+iY)Z(i\sigma^y),Z(\sigma^0+\sigma^z),XYZ(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{3,-}$&$E_{-}$&$-i$&$(X-iY)Z(i\sigma^y),XYZ(\sigma^0+\sigma^z),Z(\sigma^0-\sigma^z)$\\
\hline
\end{tabular}
\caption{Symmetry classification of the BCS pairing order parameters for the model (\ref{lattice}) according to the different irreducible representations (IRRs) of the group $C_{4h}$. $X$, $Y$, and $Z$ are basis functions for the momentum-space pairing function which we take to be $\sin p_{x}$, $\sin p_{y}$, and $\sin p_{z}$ respectively, and can be realized by nearest-neighbor pairings. Among the pairing functions, $i\sigma^y$ denotes singlet pairing, $\sigma^x$ the spinful triplet pairing, $\sigma^0+\sigma^z$ the triplet pairing for polarized $\uparrow$ spins, and $\sigma^0-\sigma^z$ the triplet pairing for polarized $\downarrow$ spins. The paired state $\Gamma^2$ has nodal lines, while the other three states are gapless with point nodes.}
\label{table1}
\end{table}
In the continuum theory, the pairing terms of Table~\ref{table1} take the form
\begin{equation}
\sum_{{\vec k}}\Delta_{\sigma\tau}({\vec k})c^{\dagger}_{\sigma}({\vec k})c^{\dagger}_{\tau}(-{\vec k})\approx \sum_{{\vec q}}\Delta^{a,b}_{\sigma\tau} ({\vec q}) \psi^{\dagger}_{a,\sigma}({\vec q})\psi^{\dagger}_{b,\tau} (-{\vec q}).
\end{equation}
The standard BCS pairing term connects two Weyl nodes in the effective theory. The explicit form of $\Delta^{a,b}_{\sigma\tau}$ and $\Delta_{\sigma\tau}$ can be found in Table \ref{table1} and in Eq.~\eqref{Pair} in App.~\ref{app:interactions}. The self-consistent gap equation takes the form
\begin{equation}
\Delta^{ab}_{\sigma\tau}({\vec p}) = \sum_{{\vec k}}V^{ab;cd}({\vec p} - {\vec k}) \langle\psi_{c,\tau}(-{\vec k})\psi_{d,\sigma}({\vec k})\rangle,
\end{equation}
where the expectation value is taken with respect to the mean-field superconducting state (see App.~\ref{app:interactions} for more explicit expressions for the gap equations).
\subsection{FFLO pairing}
In the doped Weyl semimetal, the Fermi surface forms around the Weyl points ${\vec P}_\pm$, and it is natural to expect a finite-momentum pairing to compete with the standard BCS-paired states. We therefore introduce an FFLO state with center of momentum $2{\vec P}_\pm$, whose pairing function satisfies
\begin{equation}
\Delta^{\pm}_{\rm FFLO}({\vec r}) \propto \exp(2i{\vec P}_+\cdot {\vec r}) \pm \exp(2i{\vec P}_-\cdot {\vec r}).
\label{fflo}
\end{equation}
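Written out with ${\vec P}_\pm = (0,0,\pm Q)$, the two sign choices are simply standing waves along $z$ (a short aside, not in the original text):

```latex
\Delta^{+}_{\rm FFLO}({\vec r}) \propto \cos(2Qz), \qquad
\Delta^{-}_{\rm FFLO}({\vec r}) \propto \sin(2Qz),
```

i.e.\ Larkin-Ovchinnikov-type order parameters with spatial period $\pi/Q$ set by the Weyl-node separation.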
The self-consistent equations for these pairing order parameters take the form
\begin{equation}
\Delta_{\sigma\tau}({\vec p}; \pm {\vec P}) = \sum_{{\vec k}}V({\vec p} -{\vec k}) \langle \psi_{\pm,\tau}(-{\vec k})\psi_{\pm,\sigma}({\vec k})\rangle,
\end{equation}
where the two nodes $\pm$ are decoupled. These FFLO states correspond to the {\it intra-node} pairing, in contrast to the BCS case which is {\it inter-node} pairing (see Fig.~\ref{Fig1}). The two states of the pairings $\Delta^{\pm}_{\rm FFLO}$ in Eq.~\eqref{fflo} with a relative phase of $\pm1$ between the two components of the Fermi surface have the {\it same} mean-field energy since the two nodes are decoupled in the mean-field theory.
\begin{table}
\begin{tabular} {l|l|l}
\hline
{}&$C_{4}$&Pairing function\\ \hline
$\Gamma^{1}$&$1$& $Z^2(i\sigma^{y}),(X-iY)(\sigma^0+\sigma^z),(X+iY)(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{2}$&$-1$&$XY(i\sigma^y),(X+iY)(\sigma^0+\sigma^z),(X-iY)(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{3,+}$&$i$&$(X+iY)Z(i\sigma^y),Z(\sigma^0+\sigma^z),XYZ(\sigma^0-\sigma^z)$\\ \hline
$\Gamma^{3,-}$&$-i$&$(X-iY)Z(i\sigma^y),XYZ(\sigma^0+\sigma^z),Z(\sigma^0-\sigma^z)$\\
\hline
\end{tabular}
\caption{Classification of the FFLO states of superconducting Weyl fermions based on the lattice symmetry $C_{4}$. The notation is the same as in Table~\ref{table1}. We assume that the center of momentum for the pairing is at $2{\vec P}_\pm$. Note that the symmetry is only the $C_{4}$ rotation in the $xy$-plane, without inversion, because `inversion' is already encoded in the ansatz Eq.~\eqref{fflo}. This classification is essentially the same as that of the BCS-type pairing order parameters.}
\label{table2}
\end{table}
\subsection{Mean field energy}
Having identified the possible superconducting states, we compute their free energy by solving the self-consistent gap equations numerically. We are interested in the case $|V_0|\gg |V_1|$ with $V_0 < 0$, where the spin singlet is preferred. We denote the pairing term $\propto i\sigma^y$ in Table~\ref{table1} as `singlet' and the other terms $\propto i{\vec \sigma}\sigma^{y}$ as `triplet'. The singlet and triplet components have a different dependence on the interaction parameters $V_{0}$ and $V_{1}$: the gap of the triplet components depends only on the value of $V_{1}$, while the singlet component depends only on $V_{0}+3V_{1} \approx V_{0}$ or $V_{0}+ V_{1}(5 + \cos 2Q)/2 \approx V_{0}$. We therefore consider in the following only the singlet component $\propto i\sigma^y$ of $\Gamma^1$ for the BCS and FFLO states.
For these two states the BCS mean-field approximation is
\begin{equation}
H = H_0 + V_{\rm ee}^{\rm pair},
\label{eq:Hmf}
\end{equation}
where $H_0$ is given by Eq.~\eqref{lattice} and $V_{\rm ee}^{\rm pair}$ is the effective {\it projected} pair potential derived from the lattice interaction in Eq.~\eqref{interaction}. For the $\Gamma^{1}$-BCS state we have
\begin{align}
V_{\rm ee}^{\rm pair} &= -U_{\rm BCS}\sum_{{\vec k}, {\vec p}}P^{\dagger}_{{\vec k}}P_{-{\vec p}}, \notag \\
U_{\rm BCS} &= V_{0}+V_{1}\frac{5 + \cos(2Q)}{2},\\
P^{\dagger}_{{\vec k}} &= \psi^{\dagger}({\vec k}) \tau^{x}i\sigma^{y} \psi^{*}(-{\vec k}), \notag
\end{align}
and the gap equation is
\begin{equation}
\Delta = -\frac{U_{\rm BCS}}{4} \int_{{\vec k}} \langle\psi_{a,\alpha}({\vec k}) (\tau^{x})^{ab}(-i\sigma^{y})^{\alpha\beta}\psi_{b,\beta}(-{\vec k})\rangle.
\end{equation}
For the $\Gamma^{1}$-FFLO state we obtain
\begin{align}
V_{\rm ee}^{\rm pair} &= -U_{\rm FFLO}\sum_{{\vec k}, {\vec p}}P^{\dagger}_{{\vec k}}P_{{\vec p}} \notag \\
U_{\rm FFLO} &= V_{0}+3V_{1}, \\
P^{\dagger}_{{\vec k}} &= \psi^{\dagger}({\vec k}) i\sigma^{y} \psi^{*}(-{\vec k}), \notag
\end{align}
with a gap equation
\begin{equation}
\Delta = -\int_{{\vec k}} \frac{U_{\rm FFLO}}{2} \langle\psi_{a,\alpha}({\vec k})(-i\sigma^{y})^{\alpha\beta}\psi_{a,\beta}(-{\vec k})\rangle.
\end{equation}
In this standard BCS-type approximation, we can evaluate the energy $E$ of the pairing states with the pairing amplitude $\Delta_{\vec k} = -U\langle P_{-{\vec k}}\rangle$ (with the effective pairing interaction strength $U$) by computing
\begin{align}
E &= E_{\rm el} + E_{\rm sc}, \nonumber\\
E_{\rm el} &= \sum_{{\vec k} \in d\Omega, \epsilon_{\rm sc} <0} \epsilon_{\rm sc}({\vec k})n_{e}[\epsilon_{\rm sc}({\vec k})] \nonumber \\
& -\sum_{{\vec k} \in d\Omega, \epsilon_{\rm fs}<0} \epsilon_{\rm fs}({\vec k})n_{e}[\epsilon_{\rm fs}({\vec k})], \nonumber\\
E_{\rm sc} &= - \sum_{{\vec k} \in d\Omega}\frac{\Delta_{{\vec k}}\Delta^{*}_{-{\vec k}}}{2U} + h.c.
\label{Energy}
\end{align}
Here $n_{e}[\epsilon({\vec k})]$ is the electron filling of the state at ${\vec k}$ with energy $\epsilon({\vec k})$, $\epsilon_{\rm sc}$ is the energy of the filled band of the BdG quasiparticle with mean-field gap $\Delta$, and $\epsilon_{\rm fs}$ is the energy of the filled bands of the free Weyl electrons without pairing, i.e.\ the energy of the normal state. Thus, the second line of Eq.~\eqref{Energy} represents the energy gain of the superconducting state relative to the normal state from opening up a gap near the Fermi surface. The last line of Eq.~\eqref{Energy} represents the contribution from the pairing interaction labeled by the momentum ${\vec k}$. The range of the summation is restricted to a shell $d\Omega$ around the Fermi surface, whose width is determined by the strength of the attractive interaction.
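The gap equations above are solved by fixed-point iteration. As a generic illustration of the procedure (a single-channel toy version with a constant density of states, not the actual two-band calculation of this section), one can iterate the textbook BCS gap equation $\Delta = \lambda \int_0^{\omega_c} d\xi\, \Delta/\sqrt{\xi^2+\Delta^2}$ and check it against its analytic solution $\Delta = \omega_c/\sinh(1/\lambda)$:

```python
import math

def solve_gap(lam, omega_c=0.2, n_xi=4000, tol=1e-12, max_iter=20000):
    # fixed-point iteration of Delta = lam * Delta * int_0^{omega_c} dxi / sqrt(xi^2 + Delta^2)
    # (midpoint rule for the shell integral; lam plays the role of U * N(0) > 0)
    h = omega_c / n_xi
    delta = 0.01 * omega_c  # small seed gap
    for _ in range(max_iter):
        integral = sum(h / math.sqrt(((i + 0.5) * h) ** 2 + delta ** 2)
                       for i in range(n_xi))
        new = lam * delta * integral
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta
```

The iteration converges linearly to the weak-coupling result; the same scheme, with the projected pair potentials above, yields the gaps entering Eq.~\eqref{Energy}.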
In Fig.~\ref{Fig2} we plot the mean-field energy for the two pairings, as obtained from Eq.~\eqref{Energy}, as a function of the interaction strength $V_0$. The $\Gamma^{1}$-FFLO state has a larger gap than the $\Gamma^{1}$-BCS state and is energetically favored. This result can be understood by considering the spin-momentum locking around the Fermi surface. For the even-parity pairing state, the state $|{\vec k}, \alpha\rangle$ ($\alpha$ is the spin state) is paired with the inversion partner state $|-{\vec k}, \sigma^{z}\alpha\rangle$. The pairing amplitude is of the form $\sim \langle c^{\dagger}({\vec k})i\sigma^{y}c^{*}(-{\vec k}) \rangle$, which takes its maximum value if the two states at ${\vec k}$ and $-{\vec k}$ have opposite spins. However, the spins at ${\vec k}$ and $-{\vec k}$ are not anti-parallel, and even become parallel at the poles (which is the origin of the nodes, see Fig.~\ref{Fig1}), which tends to reduce the superconducting gap. In contrast, the FFLO state connects the states $|{\vec k}+{\vec Q}, \alpha \rangle$ and $|-{\vec k}+{\vec Q}, \beta \rangle$ via the spin singlet channel with $\beta = -\alpha$ (anti-parallel spins). A gap opens up everywhere at the Fermi surface, with a larger gap than in the even-parity BCS state. This is very similar to the surface of a topological insulator, which we discuss in Appendix B (see also Ref.~[\onlinecite{FFLOsurface}]). A similar finite-momentum pairing (``intra-valley'' or ``Kekul\'e'' pairing) can occur in graphene in the presence of a nearest-neighbor attractive interaction~\cite{Gr1, Gr2}.
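This spin-overlap argument can be made quantitative. In the continuum limit the conduction-band spin at node $\pm$ points along $(q_x, q_y, \mp q_z)/|{\vec q}|$, and the singlet amplitude between two states polarized along $\hat n_1$ and $\hat n_2$ has magnitude $\sqrt{(1-\hat n_1\cdot\hat n_2)/2}$ (1 for anti-parallel, 0 for parallel spins). A short sketch (illustrative only, not the mean-field calculation):

```python
import math

def singlet_amp(n1, n2):
    # |singlet projection| between spin-1/2 states polarized along n1, n2:
    # 1 for antiparallel spins, 0 for parallel spins
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.sqrt(max(0.0, (1.0 - dot) / 2.0))

def spin_plus(q):   # conduction-band spin direction at node +
    r = math.sqrt(sum(c * c for c in q))
    return (q[0] / r, q[1] / r, -q[2] / r)

def spin_minus(q):  # conduction-band spin direction at node -
    r = math.sqrt(sum(c * c for c in q))
    return (q[0] / r, q[1] / r, q[2] / r)

def bcs_amp(q):     # inter-node pairing: node + at q with node - at -q
    return singlet_amp(spin_plus(q), spin_minus([-c for c in q]))

def fflo_amp(q):    # intra-node pairing: node + at q with node + at -q
    return singlet_amp(spin_plus(q), spin_plus([-c for c in q]))
```

`bcs_amp` vanishes at the poles $(0,0,\pm|q|)$, reproducing the point nodes of Fig.~\ref{Fig1}(b), while `fflo_amp` equals 1 everywhere on the Fermi sphere, consistent with a fully gapped FFLO state.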
\section{Discussion}
In this section, we discuss the nodal structure of the $\Gamma^{1}$-BCS state and the effect of disorder on the $\Gamma^{1}$-FFLO state. As in the previous section, we consider only the singlet components of these states, assuming $|V_{0}|\gg |V_{1}|$.
\subsection{$\Gamma^{1}$-BCS state}
\label{BCS state}
The pair potential term in the mean field Hamiltonian of the singlet component of the $\Gamma^1$-BCS state is
\begin{align}
H^{\rm BCS}_{\rm pair} &= \sum_{{\vec k}}\Delta c^{\dagger}_{\alpha}({\vec k}) (i\sigma^{y})^{\alpha\beta} c^{\dagger}_{\beta}(-{\vec k}) + h.c. \\
&= \sum_{\vec q}\Delta \psi^{\dagger}_{a,\alpha}({\vec q})(\tau^{x})^{ab}(i\sigma^{y})^{\alpha\beta} \psi^{\dagger}_{b,\beta}(-{\vec q})+h.c. \notag
\label{singlet1}
\end{align}
The second form is obtained in the low-energy theory. This superconducting state is an even-parity state and has four point nodes on the Fermi surface at $q_{x}=q_{y}=0$ and $q_{z} = \pm \sqrt{\Delta^{2}+\mu^{2}}$ (see Fig.~\ref{Fig1}). The nodes survive even when the triplet pairings of the $\Gamma^{1}$-BCS state in Table~\ref{table1} are included.
\begin{figure}
\includegraphics[width=1\columnwidth]{SC_Energy5}
\caption{Mean-field energy $E$ of the even-parity $\Gamma^{1}$-BCS and $\Gamma^{1}$-FFLO states as a function of interaction strength $V_0$. Other model parameters used to obtain this plot are $\mu/t = 0.3$, $Q = 0.7$, $V_1 = 0$, and $d\Omega = 0.2$.}
\label{Fig2}
\end{figure}
The two nodal points near the Weyl node ${\vec P}_+$ (${\vec P}_-$) carry a winding number of $+1$ ($-1$). To demonstrate this we write down the Bogoliubov-de Gennes (BdG) Hamiltonian $H = \sum_{{\vec k}} \Phi^{\dagger}_{{\vec k}} {\tilde H}_{{\vec k}} \Phi_{{\vec k}}$ for $\Phi_{{\vec k}} = (c_{{\vec k}}, i\sigma^{y}c^{*}_{-{\vec k}})^{T}$. In the continuum limit at ${\vec P}_+$ (similar expressions are obtained for ${\vec P}_-$)
\begin{equation}
{\tilde H} =
\begin{pmatrix}
h_{+}({\vec q})& \Delta \sigma^{0} \\
\Delta\sigma^{0}& -h_{-}({\vec q})
\end{pmatrix},
\label{nodes}
\end{equation}
with $h_{\pm}({\vec q})$ defined in Eq.~\eqref{low}. The quasiparticle spectrum corresponding to this BdG Hamiltonian is
\begin{equation}
E({\vec q}) = \pm [q^{2} + \Delta^{2} + \mu^{2} \pm 2(\Delta^{2}q^{2}_{z} + \mu^{2}q^{2})^{1/2}]^{1/2}
\label{spec}
\end{equation}
which has nodes at $q_x = q_y = 0$, $q_{z} =\pm \sqrt{\Delta^{2}+\mu^{2}}$, both with chirality of $+1$. Near the nodes $|q_x|,|q_y| \ll |q_z|,|\mu|$, we obtain the anisotropic Weyl spectrum
\begin{equation}
E({\vec q}) \approx \pm \left[ (q_{z} \pm \sqrt{\Delta^{2}+\mu^{2}})^{2} + q^{2}_{\perp} \left(1- \frac{\mu^{2}}{\mu^{2}+\Delta^{2}}\right)\right]^{1/2},
\end{equation}
with $q_\perp = (q_x,q_y)$. At zero chemical potential, this is similar to the results of Meng and Balents~\cite{WeylSc}, who considered the proximity effect in undoped Weyl semimetals. The effect of a nonzero chemical potential is simply to shift the Weyl nodes from $q_{z} = \pm \Delta$ at $\mu =0$ to $q_{z} = \pm \sqrt{\Delta^{2}+\mu^{2}}$.
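As a quick check of the spectrum above (a sketch with $t=1$ and illustrative values $\mu=0.3$, $\Delta=0.1$), the lower quasiparticle branch vanishes exactly at $q_x=q_y=0$, $q_z=\pm\sqrt{\Delta^2+\mu^2}$, with transverse velocity squared $\Delta^2/(\Delta^2+\mu^2)$ at the node:

```python
import math

def E_lower(qx, qy, qz, mu=0.3, Delta=0.1):
    # lower positive quasiparticle branch of the BdG spectrum (inner sign negative), t = 1
    q2 = qx * qx + qy * qy + qz * qz
    inner = math.sqrt(Delta ** 2 * qz ** 2 + mu ** 2 * q2)
    return math.sqrt(max(0.0, q2 + Delta ** 2 + mu ** 2 - 2.0 * inner))
```

Evaluating `E_lower` along the $q_z$-axis locates the two point nodes, and a small transverse displacement gives the reduced in-plane velocity of the anisotropic Weyl cone.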
Because of the non-trivial winding number carried by the nodes, the nodal points are robust against small perturbations. The only way to gap out the nodes is through a pair-annihilation of nodes with opposite winding numbers, so the nodal points are {\it topologically} stable as long as they are sufficiently separated in momentum space. Strikingly, this nodal structure implies that there will be a zero-energy state on the surface, which should be detectable in experiment. This is similar to the $^{3}$He-A phase, which is an odd-parity pairing state, whereas our superconducting phase is realized by even-parity pairing.
\subsection{$\Gamma^{1}$-FFLO state}
\label{FFLO state}
The singlet component of the $\Gamma^{1}$-FFLO state is fully gapped, with the mean-field pair potential
\begin{equation}
H^{\rm FFLO}_{\rm pair} = \sum_{{\vec k}}\Delta c^{\dagger}_{\alpha}({\vec k}+{\vec P}_+)(i\sigma^{y})^{\alpha\beta}c^{\dagger}_{\beta}(-{\vec k}+{\vec P}_+) \pm ({\vec P}_+\leftrightarrow {\vec P}_-),
\label{FFLO}
\end{equation}
with center-of-momentum of $2{\vec P}_{\pm}$. In the low-energy theory, it can be represented by the {\it intra-node} pairing $\sim \Delta \sum_{{\vec q}}\psi^{\dagger}_{a,\alpha}({\vec q}) (i\sigma^{y})^{\alpha\beta} \psi^{\dagger}_{a,\beta}(-{\vec q})$.
It is known that some two-dimensional FFLO states with strong spin-orbit coupling and parallel magnetic field are unstable against weak disorder~\cite{disorder1,disorder3}. In contrast, the FFLO state discussed in this paper is found to be robust against weak disorder. In fact, the structure of the FFLO state, Eqs.~\eqref{fflo} and \eqref{FFLO}, is more similar to the even/odd-parity state of the doped topological insulators studied in Ref.~\onlinecite{disorder2} than to the usual FFLO states in two spatial dimensions. This similarity is manifest if we write down the pairing for the Weyl fermions in the continuum limit in the helicity eigenstates
\begin{equation}
\Delta_{\pm} \propto e^{i\phi} [\langle \psi_{+}({\vec q})\psi_{+}(-{\vec q}) \rangle \pm \langle \psi_{-}({\vec q})\psi_{-}(-{\vec q}) \rangle],
\label{FFLO2}
\end{equation}
which corresponds to Eq.~(5) of Ref.~\onlinecite{disorder2}. Within this Cooper channel, we add a scalar disorder term to the Hamiltonian
\begin{equation}
H_{\rm imp} = V_{\rm imp} \sum_{{\vec k}, {\vec p} \in FS}c^{\dagger}_{{\vec k},\sigma}c_{{\vec p},\sigma} = \sum_{{\vec q}, {\vec l} \in FS} V^{ab}_{{\vec q}, {\vec l}} \psi^{\dagger}_{a,{\vec q}}\psi_{b,{\vec l}}.
\end{equation}
The matrix element $V^{ab}_{{\vec q}, {\vec l}}$ is given by
\begin{equation}
V^{ab}_{{\vec q}, {\vec l}} = V_{\rm imp}
\begin{pmatrix}
\langle{\hat q}|{\hat l}\rangle& \langle{\hat q} | {\bar l}\rangle \\
\langle {\bar q}|{\hat l}\rangle & \langle{\bar q}|{\bar l}\rangle
\end{pmatrix},
\label{disorders}
\end{equation}
where we have used the standard normalized spin states ${\hat q} \cdot {\vec \sigma}|{\hat q}\rangle = |{\hat q}\rangle$ and ${\bar q} \cdot {\vec \sigma} |{\bar q}\rangle = |{\bar q}\rangle$ with ${\hat q} = {\vec q}/|{\vec q}|$ and ${\bar q} = ({\vec q}_{\perp}, -q_{z})/ |{\vec q}|$. With this impurity scattering, the self-energy can be worked out in the self-consistent Born approximation, and we find that the corrections to the self-energy and the Cooperon diagram due to disorder are exactly of the same form as obtained by Michaeli and Fu~\cite{disorder2}. In fact, the only difference between the FFLO state $\Delta_{\pm}$ in Eq.~\eqref{FFLO} and the even/odd-parity paired states of Ref.~\onlinecite{disorder2} is the phase factors in the matrix elements of $V^{ab}_{{\vec k}, {\vec l}}$ in Eq.~\eqref{disorders}, which do not show up in the corrections to the self-energy, the Cooperon diagram, or the pairing susceptibilities. We thus conclude that the critical temperature of the $\Gamma^{1}$-FFLO state is not affected by disorder, i.e., the $\Gamma^{1}$-FFLO state is robust.
\section{Conclusion}
In conclusion, we have studied the possible superconducting states of doped inversion-symmetric Weyl semimetals. We considered a concrete lattice model realizing a Weyl semimetal and found that the FFLO state has a lower energy than the even-parity state if the interaction is phonon-mediated, and the phase is argued to be stable against disorder. Though the even-parity state is less favored in energy than the FFLO state, it interestingly provides an electronic analogue of $^{3}$He-A phase.
We remark briefly on the implication of our work for superconducting states of Weyl semimetal models other than the one studied in this paper. Among the many proposals for the Weyl semimetal phase, we restrict ourselves to the models based on topological insulators~\cite{cho, Burkov} with a time-reversal-breaking perturbation:
\begin{equation}
H = v \tau^{z}{\vec \sigma}\cdot {\vec k}_{\perp} + \tau^{x}k_{z} + m\sigma^{z}
\label{real}
\end{equation}
The typical symmetry of the model is $I \times C_{n}$ ($\times M$, Mirror symmetry) where $I$ is the inversion symmetry and $C_{n}$ is the $n$-fold lattice rotation symmetry along a certain axis (for the model based on Bi$_{2}$Se$_{3}$~\cite{cho}, we have $n=3$). Due to the strong spin-orbit interaction, the spatial symmetry operation involves the spin/orbital operations, e.g., $I: {\vec k}\rightarrow -{\vec k}$ should involve $\tau^{y}$, and $\tau^{y}H(-{\vec k}) \tau^{y} = H({\vec k})$ (the lattice rotation will involve a spin rotation). Note that these symmetry considerations already manifest the similarity between the realistic model and the simplified model Eq.~\eqref{lattice}, and this similarity becomes much clearer if we go to the low-energy theory of Eq.~\eqref{real}. It is not difficult to confirm that the low-energy theory is identical to Eq.~\eqref{low}, and hence we will have similar superconducting states, FFLO and electronic analogues of $^{3}$He-A, in the more realistic model. Hence, we predict that the superconducting states we found should show up in other proposals for Weyl semimetals.
Note that the FFLO state Eq.~\eqref{FFLO} shows a density modulation pinned by the momentum of the Weyl nodes (which is reminiscent of the field-induced charge density wave~\cite{Yang} of Weyl semimetals). Many experimentally available Weyl semimetals have a large number of Weyl nodes,
for example the iridates, which have 24 nodes~\cite{Ashvin} (and an inversion-symmetry-broken Weyl semimetal has {\it at least} four Weyl nodes~\cite{murakamiweyl}). While our minimal model calculation here does not guarantee that the FFLO state will be the lowest-energy state in such systems,
at minimum it suggests that it will be a competing state. In this case the FFLO state can have multiple centers of momenta. This directly implies that there will be interesting density modulation patterns which are fully determined by the positions of the Weyl nodes. (This is true at least at the level of mean-field theory, which ignores the effect of $O(\Delta^{4})$ terms in the Landau-Ginzburg theory; $O(\Delta^{4})$ terms can potentially {\it melt} this pattern.)
We also note that the FFLO state can host the interesting half-quantum vortices discussed in Ref.~\onlinecite{VortexFFLO}. In the FFLO state, we have {\it two} independent superconducting order parameters $\Delta(\pm {\vec P}) \propto \exp(\pm 2i{\vec P}\cdot {\vec r})$, i.e., the order parameter space is $S^{1}\times S^{1}$. The half-quantum vortex corresponds to a unit ``winding'' of the phase of $\Delta({\vec P})$ while the phase of $\Delta(-{\vec P})$ does not wind. On the other hand, the Fermi surface around the Weyl node at ${\vec P}$ encloses the $\pi$-Berry phase~\cite{Hosur_Vishwanath}, which signals that there will be a gapless ``chiral'' Majorana mode at the core of the half-quantum vortex. Furthermore, this implies that a full quantum vortex will be a composite of the two half-quantum vortices, and each half-quantum vortex will have a chiral mode. Thus the full quantum vortex will host a helical Majorana mode. In contrast to the related case~\cite{Hosur_Vishwanath}, this helical Majorana mode is not symmetry protected and is therefore generally gapped out. Furthermore, the helical Majorana mode can be understood as the critical point between a weak pairing state and a strong pairing state in a $1$D p-wave superconductor~\cite{Kitaev}. There are two possible phases for the full quantum vortex depending on the sign of the ``mass gap'' for the helical mode~\cite{Kitaev}, and in the nontrivial phase there will be a Majorana fermion at the end of the vortex.
\acknowledgements
The authors thank Pavan Hosur, Ashvin Vishwanath, Sid Parameswaran, Eun Gook Moon, Yong Baek Kim, and Tarun Grover for helpful discussions, and Daniel Agterberg for useful comments from which we learned about the half-quantum vortex in FFLO states. The authors acknowledge support from NSF DMR-1206515 (G. Y. C. and J. E. M.), Office of BES, Materials Sciences Division of the U.S. DOE under contract No.~DE-AC02-05CH1123 (Y. M. L.), and the LBNL Thermoelectrics Program (J. H. B.) of DOE BES.
\section{Introduction}\label{intro}
Graph neural networks (GNNs) have shown much success in learning from graph-structured data.
Amongst these noteworthy GNNs, attention-based GNNs \cite{velivckovic2018graph} have drawn increasing interest lately, and have been applied to solve a plethora of real-world problems competently, including node classification \cite{velivckovic2018graph,kipf2016semi}, image segmentation \cite{wang2019graph}, and social recommendations \cite{song2019session}.
Empirical attention mechanisms adopted by GNNs leverage the node features (node embeddings) to compute normalized correlations between pairs of nodes that are observed to be connected.
Treating the normalized correlations (attention scores/coefficients) as relative weights between node pairs, attention-based GNNs typically perform a weighted sum of node features, which is subsequently propagated to higher layers.
Compared with other GNNs, especially those that aggregate node features with predefined strategies \cite{kipf2016semi,atwood2016diffusion,klicpera2019diffusion}, attention-based GNNs provide a dynamic way of feature aggregation, which enables highly correlated features from neighboring nodes to be propagated in the multi-layer neural architecture.
Representations that embed with multi-layer correlated features are consequently learned by attention-based GNNs, and can be used for various downstream tasks.
Though effective, present empirical graph attention has several shortcomings when aggregating node features.
First, the computation of attention coefficients is limited solely to the correlations of internal factors, i.e., layer-wise node features within the neural nets.
External factors such as cluster structure and higher-order structural similarities, which encode heterogeneous node-node relevance, remain underexplored as ingredients for computing more purposeful attention scores.
Second, empirical attention that leans heavily on the node features may cause over-fitting during the training of the neural nets \cite{wang2019improving}.
The predictive power of attention-based GNNs is consequently limited.
To overcome the mentioned challenges, in this paper, we propose a class of generic graph attention mechanisms, dubbed here as Conjoint Attentions (CAs).
Given CAs, we construct Graph conjoint attention networks (CATs) for different downstream analytical tasks.
Different from previous graph attentions, CAs are able to flexibly compute the attention coefficients by not solely relying on layer-wise node embeddings, but also allowing the incorporation of purposeful interventions brought by factors external to the neural net, e.g., node cluster embeddings.
With this, CATs are able to learn representations from features that are found as significant by diverse criteria, thus increasing the corresponding predictive power.
The main contributions of the paper are summarized as follows.
\begin{itemize}
\item We propose Conjoint Attentions (CAs) for GNNs.
Different from popular graph attentions that rely solely on node features, CAs are able to incorporate heterogeneous learnable factors that can be internal and/or external to the neural net to compute purposeful and more appropriate attention coefficients.
The learning capability and hence performance of CA-based GNNs is thereby enhanced with the proposed novel attention mechanisms.
\item For the first time, we theoretically analyze the expressive power of graph attention layers that consider heterogeneous factors for node feature aggregation, and the discriminative capacity of such attention layers, i.e., CA layers, is validated.
\item Given CA layers, we build and demonstrate the potential of Graph conjoint attention networks (CATs) for various learning tasks.
The proposed CATs are comprehensively investigated on established and extensive benchmarking datasets with comparison studies to a number of state-of-the-art baselines.
The notable results obtained are presented to verify and validate the effectiveness of the newly proposed attention mechanisms.
\end{itemize}
\section{Related works}\label{related-works}
To effectively learn low-dimensional representations in graph structured data, many GNNs have been proposed to date.
According to the ways through which GNNs define the layer-wise operators for feature aggregation, GNNs can generally be categorized as spectral or spatial \cite{wu2020comprehensive}.
\textbf {Spectral GNNs}-The layer-wise function for feature aggregation in spectral GNNs is defined according to the spectral representation of the graph.
For example, Spectral CNN \cite{bruna2014spectral} constructs the convolution layer based on the eigen-decomposition of graph Laplacian in the Fourier domain.
However, such layer is computationally demanding.
To reduce such computational burden, several approaches adopting the convolution operators which are based on simplified or approximate spectral graph theory have been proposed.
First, parameterized filters with smooth coefficients are introduced for Spectral CNN to incorporate spatially localized nodes in the graph \cite{henaff2015deep}.
Chebyshev expansion \cite{defferrard2016convolutional} is then introduced to approximate graph Laplacian rather than directly performing eigen-decomposition.
Finally, the graph convolution filter is further simplified by only considering first or higher order of connected neighbors \cite{kipf2016semi,wu2019simplifying}, so as to make the convolution layer more computationally efficient.
\textbf {Spatial GNNs}-In contrast, spatial GNNs define the convolution operators for feature aggregation by directly making use of local structural properties of the central node.
The essence of spatial GNNs consequently lies in designing an appropriate function for aggregating the effect brought by the features of candidate neighbors selected according to an appropriate sampling strategy.
Achieving this sometimes requires learning a weight matrix that accords with the node degree \cite{duvenaud2015convolutional}, utilizing powers of the transition matrix to preserve neighbor importance \cite{atwood2016diffusion,busch2020pushnet,klicpera2019diffusion,xu2018representation,klicperapredict}, extracting normalized neighborhoods \cite{niepert2016learning}, or sampling a fixed number of neighbors \cite{hamilton2017inductive, zhang2019adaptive}.
As representative spatial GNNs, attention-based GNNs (GATs) \cite{velivckovic2018graph,gulcehre2018hyperbolic} have shown promising performances on various learning tasks.
What makes them effective in graph learning is the attention mechanism, which has been successfully used in machine reading and translation \cite{cheng2016long,luong2015effective} and video processing \cite{xu2015show}, to compute node-feature-based attention scores between a central node and its one-hop neighbors (including the central node itself).
Then, attention-based GNNs use the attention scores to obtain a weighted aggregation of node features which are subsequently propagated to the next layer.
As a result, those neighbors possessing similar features may induce a greater impact on the central node, and meaningful representations can be inferred by GATs.
Having investigated previous efforts on graph neural networks, we observe that the computation of empirical graph attentions relies heavily on layer-wise node features, while other factors, e.g., structural properties that can be learned outside of the neural net, have been overlooked.
This motivates us in proposing novel attention mechanisms in this paper to alleviate the shortcomings of existing attention-based graph neural networks.
\section{Graph conjoint attention networks}
In this section, we elaborate the proposed Conjoint Attention mechanisms, which are the cornerstones for building layers of novel attention-based graph neural networks.
Mathematical preliminaries and notations used in the paper are first illustrated.
Then, we introduce how to construct neural layers utilizing various Conjoint Attention mechanisms.
Given the formulated Conjoint Attention layers, we finally construct the Graph conjoint attention networks (CATs).
\subsection{Notations and preliminaries}
Throughout this paper, we assume a graph $G = \lbrace V, E \rbrace$ containing $N$ nodes, $|E|$ edges, and $C$ classes ($C\ll N$) to which the nodes belong, where $V$ and $E$ respectively represent the node and edge set.
We use $\mathbf A \in \lbrace 0, 1\rbrace^{N \times N}$ and $\mathbf X \in \mathbb R^{N \times D}$ to represent graph adjacency matrix and input node feature matrix, respectively.
$\mathcal N_i$ denotes the union of node $i$ and its one-hop neighbors.
$\mathbf W^l$ and $\lbrace \mathbf h^l_i \rbrace_{i = 1, ... N}$ denote the weight matrix and features (embeddings) of node $i$ at $l$th layer of CATs, respectively, and $\mathbf h^0$ is set to be the input feature, i.e., $\mathbf X$.
For the nodes in $\mathcal N_i$, their possible feature vectors form a multiset $M_i = (S_i, \mu_i)$, where $S_i = \lbrace s_1, ... s_n\rbrace$ is the ground set of $M_i$ which contains the distinct elements existing in $M_i$, and $\mu_i : S_i \rightarrow \mathbb N^\star$ is the multiplicity function indicating the frequency of occurrence of each distinct $s$ in $M_i$.
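To make the multiset notation concrete, the following small Python sketch represents $M_i = (S_i, \mu_i)$ with a standard \texttt{Counter}; the toy feature tuples are illustrative placeholders, not data from our experiments.

```python
from collections import Counter

# Toy illustration of the multiset M_i = (S_i, mu_i): the ground set S_i
# holds the distinct feature vectors occurring in N_i, and mu_i records
# each element's multiplicity. The feature tuples below are placeholders.
neighbor_features = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 0.0)]

mu = Counter(neighbor_features)   # multiplicity function mu_i
S = set(mu)                       # ground set S_i of distinct elements
```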
\subsection{Structural interventions for Conjoint Attentions}\label{enlight}
As aforementioned, the proposed Conjoint Attentions are able to make use of factors that are either internal or external to the neural net to compute new attention coefficients.
Internal factors refer to the layer-wise node embeddings in the GNN.
External factors include various parameters that can be learned outside of the graph neural net and can potentially be used to compute the attention scores.
Taking the cue from cognitive science, where contextual interventions have been identified as effective external factors that may improve attention and cognitive abilities \cite{jones2004joint}, we henceforth refer to these external structural properties as \textit {structural interventions} for the computation of attention coefficients.
Next, we propose a simple but effective way for CAs to capture diverse structural interventions external to the GNNs.
Let $\mathbf {C}_{ij}$ be some structural intervention between $i$th and $j$th node in the graph. It can be obtained with the following generic generating function:
\begin{equation}\label{gen}
\begin{aligned}
\mathbf {C}_{ij} = \mathop{\arg \min}_{\phi(\mathbf {C})_{ij}} \Psi(\phi(\mathbf {C})_{ij},\mathbf Y_{ij}),
\end{aligned}
\end{equation}
where $\Psi(\cdot)$ represents a distance function and $\phi(\cdot)$ stands for an operator transforming $\mathbf {C}$ to the same dimensionality as $\mathbf Y$.
Given the generic generating function in Eq. (\ref{gen}), it is known that many effective paradigms for learning latent features can be used for the subsequent computation of conjoint attentions, if the prior feature matrix $\mathbf Y$ is appropriately provided.
Taking $\mathbf {A}$ as the prior feature matrix, in this paper, we consider two generation processes that can capture two unique forms of structural interventions.
Let $\Psi(\cdot)$ be the Euclidean distance; when $\phi(\mathbf {C})_{ij} \doteq \mathbf V \mathbf V^{T}_{ij}$, we have:
\begin{equation}\label{local}
\mathbf {C}_{ij} = \mathop{\arg \min}_{\mathbf V \mathbf V^{T}_{ij}}(\mathbf A_{ij} - \mathbf V \mathbf V^{T}_{ij})^2,
\end{equation}
where we use an $N$-by-$C$ matrix $\mathbf V$ to approximate $\mathbf {C}$ to reduce the computational burden.
As Eq. (\ref{local}) shows, $\mathbf {C}_{ij}$ captures the structural correlation pertaining to node cluster embeddings, based on matrix factorization (MF).
A higher $\mathbf {C}_{ij}$ learned by Eq. (\ref{local}) indicates that the pair of nodes is very likely to belong to the same cluster.
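To make Eq. (\ref{local}) concrete, the following numpy sketch fits the factor matrix $\mathbf V$ by plain gradient descent on a four-node toy graph; the graph, learning rate, and iteration budget are illustrative assumptions rather than part of the pipeline used in our experiments.

```python
import numpy as np

# Sketch of the MF-based intervention: fit V (N x C) by gradient descent
# on ||A - V V^T||_F^2 and read C_ij = (V V^T)_ij as a cluster-level
# relevance score. Graph, step size, and iteration count are toy choices.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])   # nodes 0,1,2 form a triangle; 3 hangs off 2

N, C = 4, 2
V = 0.1 * rng.standard_normal((N, C))
for _ in range(2000):
    R = A - V @ V.T                # reconstruction residual (symmetric)
    V += 0.05 * (R @ V)            # descent step on ||R||_F^2

C_mat = V @ V.T                    # structural intervention C
```

On this toy graph the learned score between the triangle members (nodes 0 and 1) comes out larger than between the weakly related pair (nodes 0 and 3), matching the cluster-level reading of $\mathbf C$.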
If $\phi(\mathbf {C})_{ij} \doteq \sum_{j}\mathbf V \mathbf V^{T}_{ij}\mathbf A_{ij}$, we have:
\begin{equation}\label{global}
\mathbf {C}_{ij} = \mathop{\arg \min}_{\mathbf V \mathbf V^{T}_{ij}}(\mathbf A_{ij} - \sum_{j}\mathbf V \mathbf V^{T}_{ij}\mathbf A_{ij})^2.
\end{equation}
As shown in Eq. (\ref{global}), $\mathbf {C}_{ij}$ is the coefficient of self-expressiveness \cite{elhamifar2013sparse} (SC), which describes the global relation between nodes $i$ and $j$.
A higher $\mathbf {C}_{ij}$ inferred by Eq. (\ref{global}) means the global structure of node $i$ can be better represented by that of node $j$; consequently, this pair of nodes is more structurally correlated.
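Eq. (\ref{global}) admits an equally short sketch: fit $\mathbf V$ so that $\mathbf C = \mathbf V \mathbf V^T$ reconstructs each adjacency row from the others. The toy graph, step size, and iteration count below are again illustrative assumptions.

```python
import numpy as np

# Sketch of the self-expressiveness intervention: minimize
# ||A - V V^T A||_F^2 over V and read C = V V^T as the coefficient matrix.
rng = np.random.default_rng(1)
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

N, C = 4, 2
V = 0.1 * rng.standard_normal((N, C))
for _ in range(4000):
    E = A - (V @ V.T) @ A                    # self-expression residual
    V += 0.02 * ((E @ A.T + A @ E.T) @ V)    # descent step on ||E||_F^2

C_mat = V @ V.T                              # self-expressiveness coefficients
```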
Note that neither of the aforementioned properties has been considered by previous empirical graph attention mechanisms.
We believe considering either of them as structural interventions for the Conjoint Attentions could lead to better attention scores for feature aggregation.
Note that other types of $\mathbf {C}$ may also be feasible for the proposed attention mechanisms, as long as they capture a meaningful property that is not already possessed by the node embeddings of the GNN.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{fig1}
\caption{Graphical illustration of the Conjoint Attention layer used in CATs. Left: CA mechanism using \textit{Implicit direction} strategy (CAT-I). Right: CA mechanism using \textit{Explicit direction} strategy (CAT-E). Both two mechanisms consider learnable structural interventions.}\label{attm}
\end{figure}
\subsection{Conjoint attention layer}
Having obtained a proper $\mathbf {C}$, we next present the Conjoint Attention layer, which is the core module for building CATs and will be used in our experimental study.
Different from the attention layers considered in other GNNs, the attention layer proposed here adopts novel attention mechanisms, i.e., Conjoint Attentions.
It is known that empirical graph attentions concern solely the correlations pertaining to internal factors, e.g., the node embeddings located in each layer of the neural net.
How diverse forms of relevance, e.g., correlations in terms of node cluster embeddings and self-expressiveness, may affect representation learning has therefore not been investigated in previous works.
Besides utilizing the correlations pertaining to node embeddings, the proposed Conjoint Attentions additionally take into consideration the structural interventions brought by diverse node-node relevance, which are learned outside of the neural network.
As a result, each Conjoint Attention layer may pay more attention to the similar embeddings of neighbors, as well as to the ones that share other forms of relevance with the central node.
Node representations possessing heterogeneous forms of relevance can now be learned by the CATs.
Given a set of node features $\lbrace \mathbf h^l_i\rbrace_{i = 1, ... N}$, each $\mathbf h^l_i \in R^{D^l}$, the Conjoint Attention layer maps them into $D^{l+1}$ dimensional space $\lbrace \mathbf h^{l+1}_i\rbrace_{i = 1, ... N}$, according to the correlations of node features and aforementioned structural interventions.
The contextual correlation between two connected nodes, say $v_i$ and $v_j$, is obtained first.
To do so, we directly adopt the feature-based attention mechanism considered by existing graph attention networks (GAT) \cite{velivckovic2018graph}:
\begin{equation}\label{f-att}
f_{ij}=\frac{\exp(\text{LeakyReLU}(\mathbf {\vec{a}}^T(\mathbf {W}^l\mathbf h^l_i\parallel \mathbf {W}^l\mathbf h^l_j )))}{\sum_{k\in \mathcal{N}_i} \exp(\text{LeakyReLU}(\mathbf {\vec{a}}^T(\mathbf {W}^l\mathbf h^l_i\parallel \mathbf {W}^l\mathbf h^l_k )))},
\end{equation}
where $\mathbf {\vec{a}}\in \mathbb{R}^{2D^{l+1}}$ is a vector of parameters of the feedforward layer, $\parallel$ stands for the concatenation operation, and $\mathbf W^l$ is a $D^{l+1}\times D^l$ parameter matrix for feature mapping.
Given Eq. (\ref{f-att}), the proposed CA layer captures the feature correlations between connected nodes (first-order neighbors) by computing the similarities w.r.t. node features mapped to the next layer.
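For concreteness, the feature-based score of Eq. (\ref{f-att}) can be reproduced in a few lines of numpy; the weights, features, and adjacency below are random toy placeholders rather than values from our experiments.

```python
import numpy as np

# Toy reproduction of the feature-based scores:
# e_ij = LeakyReLU(a^T [W h_i || W h_j]), then a softmax restricted to
# each node's neighborhood N_i (which includes the node itself).
rng = np.random.default_rng(0)
N, D_in, D_out = 4, 3, 2
H = rng.standard_normal((N, D_in))        # layer-l features h^l
W = rng.standard_normal((D_out, D_in))    # shared weight matrix W^l
a = rng.standard_normal(2 * D_out)        # attention vector
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
mask = A + np.eye(N)                      # N_i = {i} plus one-hop neighbors

Z = H @ W.T                               # W h^l for every node
e = (Z @ a[:D_out])[:, None] + (Z @ a[D_out:])[None, :]
e = np.where(e > 0, e, 0.2 * e)           # LeakyReLU, slope 0.2
e = np.where(mask > 0, e, -np.inf)        # keep scores inside N_i only
f = np.exp(e - e.max(axis=1, keepdims=True))
f = f / f.sum(axis=1, keepdims=True)      # row i holds f_ij over j in N_i
```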
As mentioned, determining the attention scores solely based on node features internal to a GNN may result in overlooking other important factors.
To overcome this issue, the proposed CA layer attempts to learn the structural interventions as described in Section \ref{enlight}, and presents the additional new information for computing the attention scores.
Given any learnable parameter $\mathbf C_{ij}$ (structural intervention) between two nodes, CA layer additionally obtains a supplementary correlation as follows:
\begin{equation}
s_{ij} = \frac{\exp{(\mathbf C_{ij})}}{\sum_{k \in \mathcal N_i} \exp{(\mathbf C_{ik})}}.
\end{equation}
Given $f_{ij}$ and $s_{ij}$, we propose two different strategies to compute the Conjoint Attention scores, aiming at allowing CATs to depend on the structural intervention at different levels.
The first mechanism is referred to here as \textit {Implicit direction}.
It computes attention scores for which the relative significance of structural and feature correlations is automatically acquired.
To do so, each CA layer introduces two learnable parameters, $g_f$ and $g_s$, to determine the relative significance between feature and structural correlations; these are obtained as follows:
\begin{equation}\label{pen}
r_f = \frac{\exp(g_f)}{\exp(g_s)+\exp(g_f)}, \quad r_s = \frac{\exp(g_s)}{\exp(g_s)+\exp(g_f)},
\end{equation}
where $r_f$ and $r_s$ represent the normalized significance of the two types of correlations. Given these, CAT computes the attention score based on the \textit {Implicit direction} strategy:
\begin{equation}\label{att-ip}
\alpha_{ij} = \frac{r_f\cdot f_{ij}+r_s\cdot s_{ij}}{\sum_{k \in \mathcal N_i}[r_f\cdot f_{ik}+r_s\cdot s_{ik}]}=r_f\cdot f_{ij}+r_s\cdot s_{ij}.
\end{equation}
Given the attention mechanism shown in Eq. (\ref{att-ip}), $\alpha_{ij}$ attempts to capture the weighted mean attention in terms of the various node-node correlations, which may be internal or external to the GNN.
Compared with the attention mechanism solely based on features of one-hop neighbors, $\alpha_{ij}$ computed by Eq. (\ref{att-ip}) may be adapted according to the implicit impact brought by different structural interventions, e.g., correlations pertaining to node cluster embeddings and self-expressiveness coefficients.
Moreover, the relative significance $r$ can also be automatically inferred through the backpropagation process.
Smoother and more appropriate attention scores can thereby be computed by the CA layer for learning meaningful representations.
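The following short sketch of Eqs. (\ref{pen})-(\ref{att-ip}) shows how the gate softmax preserves row-normalization; the score matrices and gate values are toy, row-stochastic examples, not learned quantities.

```python
import numpy as np

# Implicit-direction combination: two learnable gates g_f, g_s are
# softmax-normalized into r_f, r_s, and the conjoint score is the convex
# mix r_f * f + r_s * s, which stays row-normalized by construction.
f = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])            # feature-based scores f_ij
s = np.array([[0.2, 0.2, 0.6],
              [0.4, 0.4, 0.2]])            # intervention-based scores s_ij
g_f, g_s = 1.0, 0.5                         # gates (learned in practice)

r_f = np.exp(g_f) / (np.exp(g_f) + np.exp(g_s))
r_s = np.exp(g_s) / (np.exp(g_f) + np.exp(g_s))
alpha = r_f * f + r_s * s                   # normalization is preserved
```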
To enhance the impact of structural intervention, the CA layer has another strategy, named here as \textit{Explicit direction}, to compute attention scores between neighbors.
Given $f_{ij}$ and $s_{ij}$, the attention scores obtained via the \textit{Explicit direction} strategy are defined as follows:
\begin{equation}\label{att-dp}
\alpha_{ij} = \frac{f_{ij}\cdot s_{ij} }{\sum_{k \in \mathcal N_i}f_{ik}\cdot s_{ik}}.
\end{equation}
Compared with Eq. (\ref{att-ip}), $s_{ij}$ explicitly influences the magnitude of $f_{ij}$, so that node pairs that are irrelevant in terms of $\mathbf C_{ij}$ will never be assigned high attention weights.
Based on the \textit{Explicit direction} strategy, the CA layer becomes more structurally dependent when passing messages to the higher layers in the neural architecture.
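A minimal sketch of Eq. (\ref{att-dp}) illustrates the multiplicative veto; the toy matrices below include a stylized zero intervention (in practice $s_{ij}$ produced by a softmax is strictly positive, so the zero is an idealized limit).

```python
import numpy as np

# Explicit-direction combination: s_ij multiplicatively rescales f_ij
# before renormalization, so a (stylized) vanishing intervention vetoes a
# pair regardless of how similar the node features are.
f = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])
s = np.array([[0.2, 0.2, 0.6],
              [0.4, 0.0, 0.6]])             # s[1,1] = 0: structurally irrelevant

alpha = f * s
alpha = alpha / alpha.sum(axis=1, keepdims=True)
```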
Having obtained the Conjoint Attention scores, the CA layer is now able to compute a linear combination of features corresponding to each node and its neighbors as output, which will be either propagated to the higher layer, or be used as the final representations for subsequent learning tasks.
The described output features can be computed as follows:
\begin{equation}\label{att-aggregation}
\mathbf h^{l+1}_i = (\alpha_{ii}+\epsilon\cdot\frac{1}{\vert \mathcal N_i \vert})\mathbf {W}^l\mathbf h^l_i+ \sum_{j \in \mathcal N_i, j \ne i} \alpha_{ij} \mathbf {W}^l\mathbf h^l_j,
\end{equation}
where $\epsilon \in (0, 1)$ is a learnable parameter that improves the expressive capability of the proposed CA layer.
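The aggregation rule of Eq. (\ref{att-aggregation}) can be written as a single matrix product, as the following numpy sketch shows; the attention matrix, weights, features, $\epsilon$, and degrees are toy placeholders.

```python
import numpy as np

# The self term receives an extra eps/|N_i| boost on top of its attention
# score before the weighted sum of mapped features; in matrix form this is
# (alpha + diag(eps / deg)) @ (H @ W^T).
rng = np.random.default_rng(0)
N, D_in, D_out = 3, 4, 2
H = rng.standard_normal((N, D_in))
W = rng.standard_normal((D_out, D_in))
alpha = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.7, 0.2],
                  [0.3, 0.3, 0.4]])         # conjoint attention scores
eps = 0.5                                   # learnable, in (0, 1)
deg = np.array([3., 3., 3.])                # |N_i| on a fully connected toy graph

Z = H @ W.T                                 # W h^l for every node
H_next = (alpha + np.diag(eps / deg)) @ Z   # aggregation in matrix form
```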
\subsection{Construction of Graph conjoint attention networks (CATs)}
In Fig. \ref{attm}, the Conjoint Attention layers that use the attention mechanisms proposed in this paper are graphically illustrated.
We are now able to construct Graph conjoint attention networks (CATs) using a number of the proposed CA layers.
In practice, we also adopt the multi-head attention strategy \cite{vaswani2017attention} to stabilize the learning process.
CATs may either concatenate the node features obtained by multiple attention heads as the input for the next layer, or compute the average of node features obtained by multiple output-layer heads as the final node representations.
For the details on implementing multi-head attention in graph neural networks, the reader is referred to \cite{velivckovic2018graph}.
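The shape bookkeeping of the multi-head strategy is summarized by the toy sketch below; the head count, dimensions, and values are illustrative placeholders only.

```python
import numpy as np

# Multi-head strategy: K independent heads each emit an (N, D) output;
# hidden layers concatenate head outputs, the final layer averages them.
rng = np.random.default_rng(0)
K, N, D = 3, 5, 4
heads = [rng.standard_normal((N, D)) for _ in range(K)]

hidden = np.concatenate(heads, axis=1)    # (N, K*D): input to the next layer
output = np.mean(heads, axis=0)           # (N, D): final node representations
```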
\section{Theoretical analysis}\label{theory}
The study of the expressive power of various GNNs has drawn much attention recently.
It concerns whether a given GNN can distinguish different structures whose vertices possess various vectorized features.
It has been found that the neighborhood aggregation functions of all message-passing GNNs aim to mimic the injective relabeling operation iterated in the 1-dimensional Weisfeiler-Lehman test (1-WL test) of the Weisfeiler-Lehman algorithm \cite{weisfeiler1968reduction, xu2018powerful, zhang2020improving}.
As a result, all message-passing GNNs are at most as powerful as the 1-WL test \cite{xu2018powerful}.
The theoretical validation of the expressive power of a given GNN thereby lies in whether its aggregation/readout functions are as discriminative as the 1-WL test.
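The 1-WL test itself is easy to sketch. The following toy Python routine (the graphs and round budget are illustrative assumptions) iterates the injective relabeling and also illustrates the test's known blind spot: a $6$-cycle and two disjoint triangles receive identical color histograms, while a path and a star are separated.

```python
from collections import Counter

# One-dimensional Weisfeiler-Lehman color refinement: each node's color is
# repeatedly replaced by a fresh label for (own color, sorted multiset of
# neighbor colors). Graphs with different stable histograms are certainly
# non-isomorphic; equal histograms are inconclusive.
def wl_histogram(adj, rounds=3):
    colors = {v: 0 for v in adj}                   # uniform initial coloring
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        relabel = {sig: k for k, sig in enumerate(sorted(set(sigs.values())))}
        colors = {v: relabel[sigs[v]] for v in adj}
    return Counter(colors.values())

cycle6 = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```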
One may naturally be interested in whether the expressive power of the proposed CA layers matches that of the 1-WL test, which upper-bounds the expressive power of all message-passing GNNs.
To answer this, we first show that the neighborhood aggregation function (Eq. (\ref{att-aggregation})) without the term for improving expressive capability (i.e., $\epsilon\cdot\frac{1}{\vert \mathcal N_i \vert} \mathbf {Wh}^l_i$ in Eq. (\ref{att-aggregation})) still fails to discriminate some graph structures possessing certain topological properties.
Then, with the term for improving expressive capability integrated, the proposed CA layers are able to distinguish all those graph structures that previously could not be discriminated.
For the function of neighborhood aggregation solely utilizing the strategy shown in Eq. (\ref{att-ip}), we have the following theorem pointing out the conditions under which the aggregation function fails to distinguish different structures.
\begin{theorem}\label{theorem-ip}
Assume the feature space $\mathcal X$ is countable and the aggregation function using the weights computed by Eq. (\ref{att-ip}) is represented as $h(c, X) = \sum_{x\in X} \alpha_{cx} g(x)$, where $c$ is the feature of center node, $X \in \mathcal X$ is a multiset containing the feature vectors from nodes in $\mathcal N_i$, $g(\cdot)$ is a function for mapping input feature $X$, and $\alpha_{cx}$ is the weight between $g(c)$ and $g(x)$. For all $g$ and the strategy in Eq. (\ref{att-ip}), $h(c_1, X_1) = h(c_2, X_2)$ if and only if $c_1 = c_2$, $X_1 = \lbrace S, \mu_1 \rbrace$, $X_2 = \lbrace S, \mu_2 \rbrace$, and $\sum_{y=x, y\in X_1} f_{c_1y}-\sum_{y=x, y\in X_2} f_{c_2y} = q[\sum_{y=x, y\in X_2} s_{c_2y} - \sum_{y=x, y\in X_1} s_{c_1y}]$, for $q = \frac{r_s}{r_f}$ and $x \in S$. In other words, $h$ will map different multisets into the same embedding iff the multisets have same central node feature, same underlying set, and the difference in feature-based scores is proportional ($ \frac{r_s}{r_f}$) to the opposite of that in the weights corresponding to the structural interventions.
\end{theorem}
We leave the proof of all the theorems and corollaries in the appendix.
For the aggregation function utilizing the strategy shown in Eq. (\ref{att-dp}), we have the following theorem indicating the structures which cannot be correctly distinguished.
\begin{theorem}\label{theorem-dp}
Under the same assumptions shown in Theorem \ref{theorem-ip}, for all $g$ and the strategy in Eq. (\ref{att-dp}), $h(c_1, X_1) = h(c_2, X_2)$ if and only if $c_1 = c_2$, $X_1 = \lbrace S, \mu_1 \rbrace$, $X_2 = \lbrace S, \mu_2 \rbrace$, and $q\cdot\sum_{y=x, y\in X_1} \phi(\mathbf C_{c_1x}) = \sum_{y=x, y\in X_2} \phi(\mathbf C_{c_2y})$, for $q > 0$ and $x \in S$, where $\phi(\cdot)$ is a function for mapping values to $\mathbb R^+$.
In other words, $h$ will map different multisets into the same embedding iff the multisets have the same central node feature and the same node features, whose corresponding mapped structural interventions are proportional.
\end{theorem}
Theorems \ref{theorem-ip} and \ref{theorem-dp} indicate that the CA layers may still fail to distinguish some structures, if they exclude the improving term shown in Eq. (\ref{att-aggregation}).
However, GNNs utilizing Eqs. (\ref{att-ip}) or (\ref{att-dp}) can still be more expressively powerful than classical GATs.
As node features and structural interventions are heterogeneous, intuitively, structures satisfying the stated conditions should be infrequent.
This may well explain why GNNs that incorporate external factors, e.g., some structural properties, into the computation of attention coefficients may experimentally perform better than GATs.
However, when distinct multisets with corresponding properties meet the conditions mentioned in Theorems \ref{theorem-ip} and \ref{theorem-dp}, the attention mechanisms solely based on Eqs. (\ref{att-ip}) or (\ref{att-dp}) cannot correctly distinguish such multisets.
Thus, GNNs only utilizing Eqs. (\ref{att-ip}) or (\ref{att-dp}) as the feature aggregation function fail to reach the upper bound of expressive power of all message-passing GNNs, i.e., the 1-WL test.
However, we are able to readily improve the expressive power of CATs to meet the condition of the 1-WL test by slightly modifying the aggregation function as Eq. (\ref{att-aggregation}) shows.
Then, the newly obtained Conjoint Attention scores can be used to aggregate the node features passed to the higher layers.
Next, we prove that the proposed Conjoint Attention mechanisms (Eqs. (\ref{att-ip})-(\ref{att-aggregation})) reach the upper bound of expressive power of message-passing GNNs by showing that they can distinguish those structures possessing the properties mentioned in Theorems \ref{theorem-ip} and \ref{theorem-dp}.
\begin{corollary}\label{coro-att}
Let $\mathcal T$ be the attention-based aggregator shown in Eq. (\ref{att-aggregation}) that considers one of the strategies in Eq. (\ref{att-ip}) or (\ref{att-dp}) and operates on a multiset $H \in \mathcal H$, where $\mathcal H$ is a node feature space mapped from the countable input feature space $\mathcal X$.
There exists an $\mathcal H$ such that, utilizing the attention-based aggregator shown in Eq. (\ref{att-aggregation}), $\mathcal T$ can distinguish all distinct multisets in aggregation that it previously could not discriminate.
\end{corollary}
Based on the performed analysis, the expressive power of CATs is theoretically stronger than state-of-the-art attention-based GNNs, e.g., GATs \cite{velivckovic2018graph}.
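To make this concrete, the toy sketch below is a hypothetical illustration (not the paper's exact Eqs. (\ref{att-ip})-(\ref{att-aggregation})): an attention-weighted mean maps the multisets $\{h\}$ and $\{h, h\}$ to the same output, while a cardinality-aware improving term (a GIN-style scaling, assumed here purely for illustration) separates them.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def att_mean(H):
    """Plain attention-weighted mean: for identical features the
    softmax scores are uniform, so the output ignores multiset size."""
    H = np.asarray(H, dtype=float)
    scores = H @ H.mean(axis=0)  # toy score function
    return softmax(scores) @ H

def att_improved(H, eps=0.1):
    """Hypothetical cardinality-aware variant (GIN-style scaling,
    NOT the paper's exact improving term)."""
    H = np.asarray(H, dtype=float)
    return (1.0 + eps * len(H)) * att_mean(H)

H1 = [[1.0, 2.0]]                     # multiset {h}
H2 = [[1.0, 2.0], [1.0, 2.0]]         # multiset {h, h}
```

Here `att_mean` yields the same vector for both multisets, whereas `att_improved` does not, mirroring why the aggregation in Eq. (\ref{att-aggregation}) strengthens discriminative power.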
\section{Experiments and analysis}\label{exp}
In this section, we evaluate the proposed Graph conjoint attention networks against a variety of state-of-the-art and popular baselines, on widely used network datasets.
\subsection{Experimental set-up}
\textbf{Baselines for comparison}-To validate the effectiveness of the proposed CATs, we compare them with a number of state-of-the-art baselines, including Arma filter GNN (ARMA) \citep{bianchi2021graph}, Simplified graph convolutional Networks (SGC) \citep{wu2019simplifying}, Personalized Pagerank GNN (APPNP) \citep{klicperapredict}, Graph attention networks (GAT) \citep{velivckovic2018graph}, Jumping knowledge networks (JKNet) \citep{xu2018representation}, Graph convolutional networks (GCN) \citep{kipf2016semi}, GraphSAGE \citep{hamilton2017inductive}, Mixture model CNN (MoNet) \citep{monti2017geometric}, and Graph isomorphism network (GIN) \cite{xu2018powerful}.
As GAT can alternatively consider graph structure by augmenting original node features (i.e., $\mathbf X$) with structural properties, we use prevalent methods for network embedding, including $k$-eigenvectors of graph Laplacian ($k$-Lap) \cite{qiu2018network}, Deepwalk \cite{perozzi2014deepwalk}, and Matrix factorization-based network embedding (NetMF) \cite{qiu2018network} to learn structural node representations and concatenate them with $\mathbf X$ as the input feature of GAT.
Thus, three variants of GAT, i.e., GAT-$k$-Lap, GAT-Deep, and GAT-NetMF are additionally constructed as compared baselines.
Based on the experimental results previously reported, these baselines may represent the most advanced techniques for learning in graph structured data.
\textbf{Testing datasets}-Five widely-used network datasets, which are Cora, Cite, Pubmed \cite{lu2003link,sen2008collective}, CoauthorCS \cite{shchur2018pitfalls}, and OGB-Arxiv \cite{hu2020open}, are used in our experiments.
Cora, Cite, and Pubmed are three classical network datasets for validating the effectiveness of GNNs.
However, it has recently been found that these three datasets sometimes may not effectively validate the predictive power of different graph learning approaches, due to their relatively small size and data leakage \cite{hu2020open,shchur2018pitfalls}.
Thus, larger datasets with better data quality have been proposed to evaluate the performance of different approaches \cite{dwivedi2020benchmarking,hu2020open}.
In our experiment, we additionally use CoauthorCS and OGB-Arxiv as testing datasets.
The details of all benchmarking sets can be checked in the appendix.
\textbf{Evaluation and experimental settings}-Two learning tasks, semi-supervised node classification and semi-supervised node clustering are considered in our experiments.
For the training paradigms of both learning tasks, we closely follow the experimental scenarios established in the related works \cite{hu2020open,kipf2016semi,velivckovic2018graph,yang2016revisiting}.
For the testing phase of different approaches, we use the test splits that are publicly available for classification tasks, and all nodes for clustering tasks.
The effectiveness of all methods is validated through evaluating the classified nodes using $Accuracy$.
In the training stage, we construct the two-layer network structure (i.e., one hidden layer possessed) for all the baselines and different versions of CATs.
In each testing dataset, all approaches are run ten times to obtain statistically stable performance.
Other details of the experimental settings are provided in the appendix.
\begin{table*}[htbp]
\centering
\caption{Average $Accuracy$ on semi-supervised node classification. Bold fonts mean CAT obtains a better performance than any other baseline.}
\label{classification}
\begin{tabular}{c|ccccc}
\hline\hline
&\bf Cora&\bf Cite&\bf Pubmed&\bf CoauthorCS& \bf OGB-Arxiv\\
\hline
MoNet&81.96 $\pm$ 0.50&64.22 $\pm$ 0.16&79.78 $\pm$ 0.33&91.96 $\pm$ 0.75&47.71 $\pm$ 0.27\\
GCN&81.42 $\pm$ 0.19&71.60 $\pm$ 0.73&79.66 $\pm$ 0.39&91.54 $\pm$ 0.43&71.78 $\pm$ 0.16\\
GraphSAGE&81.12 $\pm$ 0.41&71.06 $\pm$ 0.64&79.04 $\pm$ 0.62&93.06 $\pm$ 0.80&69.07 $\pm$ 0.27\\
JKNet&78.34 $\pm$ 0.02& 65.88 $\pm$ 0.01&79.88 $\pm$ 0.01&89.62 $\pm$ 0.01&64.91 $\pm$ 0.01\\
APPNP&82.80 $\pm$ 0.32& 72.38 $\pm$ 0.50& 82.62 $\pm$ 0.37& 89.16 $\pm$ 0.65& 63.16 $\pm$ 0.54\\
SGC&81.90 $\pm$ 0.01&71.40 $\pm$ 0.01& 82.42 $\pm$ 0.04& 93.60 $\pm$ 0.01&61.06 $\pm$ 0.09\\
ARMA& 80.06 $\pm$ 0.57& 70.00 $\pm$ 0.66&76.46 $\pm$ 0.58& 86.28 $\pm$ 0.75& 68.77 $\pm$ 0.17\\
GIN &81.58 $\pm$ 0.62 &66.90 $\pm$ 0.16 &80.76 $\pm$ 0.33 &93.03 $\pm$ 0.74 &64.02 $\pm$ 0.18\\
\hline
GAT&83.84 $\pm$ 0.61& 70.36 $\pm$ 0.42& 81.50 $\pm$ 0.47& 92.80 $\pm$ 0.41& 72.39 $\pm$ 0.07\\
GAT-$k$-Lap&84.10 $\pm$ 0.24&71.18 $\pm$ 0.52 &82.56 $\pm$ 0.30 &92.70 $\pm$ 0.31&72.47 $\pm$ 0.06\\
GAT-NetMF&84.44 $\pm$ 0.19 &70.94 $\pm$ 0.16 &81.90 $\pm$ 0.33&93.16 $\pm$ 0.27 &72.42 $\pm$ 0.08\\
GAT-Deep&83.68 $\pm$ 0.67 & 69.70 $\pm$ 0.57 & 80.13 $\pm$ 0.26 & 92.93 $\pm$ 0.17 & 72.79 $\pm$ 0.09 \\
\hline
CAT-I-MF& \bf85.38 $\pm$ 0.16& \bf73.22 $\pm$ 0.19&\bf83.90 $\pm$ 0.24&\bf93.74 $\pm$ 0.14 &\bf72.89 $\pm$ 0.06\\
CAT-I-SC & \bf85.50 $\pm$ 0.22& \bf73.18 $\pm$ 0.22& \bf84.28 $\pm$ 0.20& \bf93.70 $\pm$ 0.11& \bf72.85 $\pm$ 0.04\\
CAT-E-MF& \bf85.56 $\pm$ 0.19& \bf73.24 $\pm$ 0.21& \bf83.60 $\pm$ 0.17& 93.40 $\pm$ 0.12& \bf72.81 $\pm$ 0.09\\
CAT-E-SC& \bf85.40 $\pm$ 0.36& \bf73.02 $\pm$ 0.24& \bf84.02 $\pm$ 0.24& 93.30 $\pm$ 0.11& \bf72.83 $\pm$ 0.11\\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{Average $Accuracy$ on semi-supervised node clustering. Bold fonts mean CAT obtains a better performance than any other baseline.}
\label{clustering}
\begin{tabular}{c|ccccc}
\hline\hline
&\bf Cora&\bf Cite&\bf Pubmed&\bf CoauthorCS& \bf OGB-Arxiv\\
\hline
MoNet&79.42 $\pm$ 0.86& 63.07 $\pm$ 0.11&79.39 $\pm$ 0.61& 88.75 $\pm$ 0.54 &53.08 $\pm$ 0.15\\
GCN&74.25 $\pm$ 0.13& 63.36 $\pm$ 0.87& 77.83 $\pm$ 0.75& 89.74 $\pm$ 0.53& 75.02 $\pm$ 0.07\\
GraphSAGE&78.46 $\pm$ 0.56&69.00 $\pm$ 0.17& 79.52 $\pm$ 1.13& 90.16 $\pm$ 0.53& 73.50 $\pm$ 0.13\\
JKNet&75.95 $\pm$ 0.01&65.12 $\pm$ 0.03& 79.52 $\pm$ 0.01& 86.66 $\pm$ 0.01&71.28 $\pm$ 0.01\\
APPNP&79.93 $\pm$ 0.82& 70.55 $\pm$ 0.85& 82.81 $\pm$ 0.32& 85.93 $\pm$ 0.39& 69.73 $\pm$ 0.67\\
SGC&79.38 $\pm$ 0.02&69.71 $\pm$ 0.02& 81.64 $\pm$ 0.01& 90.13 $\pm$ 0.01& 71.09 $\pm$0.37\\
ARMA&77.70 $\pm$ 0.99& 68.38 $\pm$ 0.87& 77.29 $\pm$ 1.11& 84.72 $\pm$ 0.29&69.24 $\pm$ 0.12\\
GIN &78.25 $\pm$ 0.46 & 67.83 $\pm$ 0.15 & 79.31 $\pm$ 0.35 & 89.97 $\pm$ 0.26 & 63.85 $\pm$ 0.18\\
\hline
GAT&81.39 $\pm$ 0.18& 69.20 $\pm$ 0.28& 80.88 $\pm$ 0.33& 90.09 $\pm$ 0.15& 76.04 $\pm$ 0.38\\
GAT-$k$-Lap&80.66 $\pm$ 0.31 & 69.56 $\pm$ 0.34 & 81.59 $\pm$ 0.09 & 89.83 $\pm$ 0.18 & 76.21 $\pm$ 0.06 \\
GAT-NetMF &81.75 $\pm$ 0.26 & 68.96 $\pm$ 0.21 & 81.74 $\pm$ 0.18 & 89.85 $\pm$ 0.21 & 76.06 $\pm$ 0.07\\
GAT-Deep &81.08 $\pm$ 0.41 & 68.27 $\pm$ 0.06 & 80.55 $\pm$ 0.11 & 89.70 $\pm$ 0.27 & 76.91 $\pm$ 0.15\\
\hline
CAT-I-MF& \bf82.17 $\pm$ 0.11& \bf71.15 $\pm$ 0.12& 82.77 $\pm$ 0.07& \bf90.26 $\pm$ 0.22 &\bf77.72 $\pm$ 0.07\\
CAT-I-SC& \bf82.26 $\pm$ 0.13&\bf71.17 $\pm$ 0.15& \bf 82.86 $\pm$ 0.07& \bf90.29 $\pm$ 0.21& \bf77.01 $\pm$ 0.16\\
CAT-E-MF&\bf81.98 $\pm$ 0.19& \bf71.21 $\pm$ 0.12&82.40 $\pm$ 0.08& 89.66 $\pm$ 0.22& \bf76.93 $\pm$ 0.16\\
CAT-E-SC& \bf82.01 $\pm$ 0.24& \bf71.11 $\pm$ 0.24& 82.61 $\pm$ 0.14& 89.72 $\pm$ 0.15 &\bf76.98 $\pm$ 0.08\\
\hline\hline
\end{tabular}
\end{table*}
\subsection{Results on node classification}
The results on semi-supervised node classification are summarized in Table \ref{classification}.
As the table shows, CATs utilizing different attention strategies generally perform better than any other baseline in all the testing datasets.
Specifically, CAT utilizing \textit{Implicit direction} (CAT-I-MF and CAT-I-SC) performs better than all the compared baselines in all the five datasets.
CAT utilizing \textit {Explicit direction} (CAT-E-MF and CAT-E-SC) is better than other compared baselines in four datasets out of five, except the case of CoauthorCS.
In that dataset, CAT-E ranks the second-best when compared with other baselines.
\subsection{Results on node clustering}
Node clustering can be more challenging as all the nodes containing various potential structures in the graph are used in the testing phase.
The results obtained show that CATs still perform robustly when compared with other baselines on this challenging task.
As Table \ref{clustering} shows, the \textit{Implicit direction} strategy utilized by CAT can still enable the proposed neural architecture to outperform other compared baselines in all the datasets.
As for CAT utilizing \textit{Explicit direction}, it ranks best on three datasets out of five.
On the remaining two datasets, Pubmed and CoauthorCS, the performance of CAT-E is competitive with the best.
Based on the robust performance shown in Tables \ref{classification} and \ref{clustering}, CAT is observed to be one of the most effective GNNs for various graph learning tasks.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\textwidth]{ab-cla.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\textwidth]{ab-clu.jpg}
\end{subfigure}
\caption{Performance comparison on computing attention scores using different factors}\label{ablation}
\end{figure}
\subsection{Ablation study}
To further investigate whether the proposed CA mechanisms are effective in improving the predictive power of CATs, we compared the performance of CAs with that obtained by attention mechanisms considering various factors.
Specifically, we let GAT compute attention coefficients using either structural interventions, i.e., node-node correlations pertaining to node cluster embeddings (MF in Eq. (\ref{clustering})) and self-representation coefficients (SC in Eq. (\ref{global})), or node features (Eq. (\ref{f-att}), F).
Then GAT utilizing different attention strategies is used to perform node classification and clustering tasks on all the testing datasets.
The performance comparisons between CATs and GAT utilizing the aforementioned attentions have been summarized in Fig. \ref{ablation}.
As the figure shows, on both classification and clustering tasks, the proposed CA mechanisms perform statistically better than other attention strategies utilized by GAT.
It is also observed that the consideration of structural attentions (SC and MF) may also improve the performance of attention-based GNNs.
Such results thus experimentally agree with and validate our theoretical analysis in Section \ref{theory}.
\section{Discussions}
In this section, some further discussions that may provide better understandings to the proposed attention mechanisms are presented.
\subsection{Comparisons between CATs and GATs with augmented node features}
Besides the proposed Conjoint Attention mechanisms, directly concatenating input node features and structural embeddings \cite{perozzi2014deepwalk,grover2016node2vec,ribeiro2017struc2vec,tang2015line,qiu2018network} is another effective way to make current GNNs more structure-aware.
In our experiments, it is shown that the learning performance improves in most datasets when GAT uses the concatenation of original node features and various structural embeddings.
However, in most datasets, such performance improvement is not as significant as that obtained by CATs.
Different from directly concatenating the original node input features and structural embeddings for learning node representations, CATs provide a smooth means to compute attention coefficients that jointly consider the diverse relevance regarding both layer-wise node embeddings and external factors, such as the structural interventions pertaining to the correlations generated by the node cluster embeddings considered in this paper.
As a result, CATs can learn node representations from those nodes that are heterogeneously relevant and attain notable learning performance on all the testing datasets.
\subsection{Potential limitations of the proposed approach}
Although the proposed Conjoint Attentions are very effective in improving the learning performance of attention-based GNNs, they also have potential shortcomings.
First, the predictive power of the proposed CATs may be determined by the quality of the structural interventions.
As the proposed Conjoint Attentions attempt to compute attention scores considering heterogeneous factors, their performance might be negatively affected by contaminated factors.
However, some potential methods may mitigate the side-effect brought by possible false/noisy external factors.
One is to utilize adversarial learning modules \cite{lowd2005adversarial} to alleviate the model vulnerability to external contamination. The other method is to consider learning effective factors from multiple data sources (i.e., multi-view).
In previous works \cite{xu2013survey}, multi-view learning has been shown to be effective even when some views of the data are contaminated.
Second, both the space and time complexity of the proposed Conjoint Attentions can be higher than those of empirical attention-based GNNs, e.g., GAT.
Thus, how to select a simple but effective strategy for capturing the interventions is crucial for the proposed Conjoint Attentions.
In our experiments, we recorded the memory consumption of CATs when performing different learning tasks in all the testing datasets and the corresponding results are provided in the appendix.
We find that some simple learning paradigms, e.g., matrix factorization (Eq. (\ref{local}) in the manuscript), can enable CATs to outperform other baselines, while the space and time complexity does not increase much compared with GATs.
As for the SC strategy, its space complexity is relatively high, if it uses a single-batch optimization method.
Therefore, more efficient optimization methods should be considered when CATs use SC to learn $\mathbf C_{ij}$.
Third, the expressive power of the proposed Conjoint Attention-based GNNs reaches the upper bound of the 1-WL test in a countable feature space, but such discriminative capacity may not always hold for CATs in an uncountable feature space.
Previous works have proved that a single function for feature aggregation in a GNN can surely reach the upper bound of the 1-WL test only in a countable feature space; multiple categories of functions for feature aggregation are required to maintain the expressive power of a GNN when the feature space is uncountable \cite{corso2020principal}.
As all Conjoint Attentions belong to one category of function, i.e., the mean aggregator, in this paper we perform the theoretical analysis on the expressive power of CATs assuming the feature space is countable.
Ideally, the expressive power of CATs can be further improved in uncountable space, if the proposed Conjoint Attentions are appropriately combined with other types of feature aggregators.
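The single-aggregator limitation can be illustrated with a minimal numpy sketch (an assumed example in the spirit of \cite{corso2020principal}, not code from this paper): a mean aggregator alone maps two different multisets to the same vector, while combining several aggregator categories separates them.

```python
import numpy as np

# Two different multisets of neighbour features that a mean
# aggregator alone cannot tell apart.
A = np.array([[1.0], [3.0]])                 # {1, 3}
B = np.array([[1.0], [1.0], [3.0], [3.0]])   # {1, 1, 3, 3}

mean_a, mean_b = A.mean(axis=0), B.mean(axis=0)  # both equal [2.0]

def multi_agg(X):
    """Combine several aggregator categories (mean, max, sum);
    the sum component is sensitive to multiset cardinality."""
    return np.concatenate([X.mean(axis=0), X.max(axis=0), X.sum(axis=0)])
```

The means coincide, but the concatenated aggregation differs (sum is 4 vs. 8), which is why mixing Conjoint Attentions with other aggregator types could further improve expressiveness in uncountable spaces.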
\section{Conclusion}\label{conclusion}
In this paper, we have proposed a class of novel attention strategies, known as Conjoint Attentions (CAs) to construct Graph conjoint attention networks (CATs).
Different from empirical graph attentions, CAs offer flexible incorporation of both layer-wise node features and structural interventions that can be learned outside of the GNN to compute appropriate weights for feature aggregation.
Besides, the expressive power of CATs is theoretically validated to reach the upper bound of all message-passing GNNs.
The proposed CATs have been compared with a number of prevalent approaches in different learning tasks.
The obtained notable results verify the CATs' model effectiveness.
In future, we will further improve the effectiveness of CATs in the following ways.
First, besides node cluster embeddings and self-expressiveness, more structural interventions will be explored to compute more compelling attention coefficients for node representation learning.
Second, appropriate adversarial strategies to reduce model vulnerability to contaminated factors that are used for computing attention scores will be considered.
Last but not least, the proposed Conjoint Attentions will be extended to learn node representations in multi-view contexts and heterogeneous graph data.
\begin{ack}
The authors would like to thank the anonymous reviewers for their constructive comments and suggestions.
This work is supported in part by the Data Science \& Artificial Intelligence Research Center (DSAIR), Nanyang Technological University, and in part by the Agency for Science, Technology and Research (A*STAR).
\end{ack}
\bibliographystyle{plain}
\section{Introduction}
Inspired by the great success of transformer \cite{vaswani2017attention} in natural language processing (NLP) \cite{devlin2018bert}, there is an increasing effort to apply it to computer vision. Following vision transformer (ViT) \cite{dosovitskiy2020image}, the first attempt to apply transformer to vision, plenty of studies \cite{wang2021pyramid, liu2021swin} have adopted transformer for dense image prediction tasks, such as object detection, semantic segmentation and instance segmentation.
However, scale variation in dense image prediction remains a key challenge, even for transformer-based methods. The pyramidal structure represented by Feature Pyramid Network (FPN) \cite{lin2017feature} is an effective method to tackle this problem, but few attempts have been made to apply the multi-scale technique in transformer. On one hand, the vanilla transformer fails to explore the diversity of high-level features' semantic information, because of the fixed receptive field of patches and the limitations of the self-attention mechanism. On the other hand, establishing attention directly on low-level feature maps, which generally have large spatial size, is infeasible due to the quadratic computational cost, let alone building attention and interaction among different levels.
To alleviate this problem, we first revisit the multi-scale problem in dense prediction and obtain several findings. Through the decomposition experiments of FPN, we find that the semantic level (i.e., $C_5$) plays a significant role in dense prediction, and even the single input of $C_5$ can achieve comparable performance. Besides the semantic information, multi-level interaction is indispensable, which can promote the mutual learning among different levels and suppress the redundant information in low levels. In addition, we make various attempts to adapt transformer to FPN. The results show that in the proposed decoupled space, transformer saves large computation and achieves high performance.
Based on these findings, we propose a novel Semantic-aware Decoupled Transformer Pyramid (SDTP) for dense image prediction, consisting of three components: Intra-level Semantic Promotion (ISP), Cross-level Decoupled Interaction (CDI) and Attention Refinement Function (ARF). ISP exploits the semantic diversity in various receptive spaces to fully mine the semantic information, flexibly integrating local to global information in transformer. CDI builds global attention and interaction across different feature levels with the help of the proposed decoupled technique, which also solves the problem of heavy computation. Besides, the ARF module is further embedded in the attention module of transformer to refine the attention map. These three components are all pluggable and can be embedded in various methods.
In summary, this work makes the following contributions:
\begin{itemize}
\item The multi-scale problem in dense image prediction is revisited with insightful findings of key factors leading to the success of multi-scale prediction: sufficient semantic information and effective interaction computation. Besides, one simple yet effective solution is presented to reduce the computational cost of multi-scale transformer.
\item We propose SDTP to alleviate scale variation problem in dense image prediction, which is an enhanced drop-in replacement of FPN with transformer, for generating more representative multi-scale features.
\item SDTP consists of three components: Intra-level Semantic Promotion (ISP), Cross-level Decoupled Interaction (CDI) and Attention Refinement Function (ARF). ISP makes full use of semantic information in various receptive space. CDI builds a global attention and interaction among different levels in decoupled space, and ARF is embedded in transformer block to refine the attention map. Each of these components can be separately utilized in various methods.
\end{itemize}
\section{Related Works}
\subsection{Dense image prediction task}
\noindent{\textbf{Object detection.}} Most detectors can be divided into two types: one-stage detectors (e.g., RetinaNet \cite{lin2017focal}, OneNet \cite{sun2020onenet}) and multi-stage detectors (e.g., Faster R-CNN \cite{ren2015faster}, Cascade R-CNN \cite{cai2018cascade}). Recently, inspired by the excellent performance of transformer in NLP tasks, some studies applied transformer to object detection tasks. DETR \cite{carion2020end} is proposed to utilize transformer for end-to-end detection for the first time. Deformable DETR \cite{zhu2020deformable} further embeds an effective attention module, leading to better performance than DETR.
\noindent{\textbf{Semantic segmentation.}} FCN \cite{long2015fully} utilizes a fully convolutional network to obtain segmentation maps. Inspired by FCN, U-Net \cite{ronneberger2015u} is widely used in medical segmentation, fusing multi-level information for prediction. Besides, the series of DeepLab \cite{liu2019auto, chen2017deeplab} apply dilated convolution to obtain large receptive fields for more spatial information.
\noindent{\textbf{Instance segmentation.}} By adding a paralleled mask head, Mask R-CNN \cite{he2017mask} extends Faster R-CNN for instance segmentation task. To refine the edge of instances, PointRend \cite{kirillov2020pointrend} provides a more sophisticated mask heads with a set of points. In HTC \cite{chen2019hybrid}, apart from a semantic segmentation branch for contextual information, a cascade structure is used to combine the object detection and instance segmentation for multi-stage prediction. All these dense image prediction tasks are faced with scale variation problem and need multi-scale features for precise prediction.
\subsection{Method for scale variation}
Scale variation of object instances is a giant obstacle in dense image prediction tasks and most current studies rely on multi-scale technique. Feature pyramid network (FPN) \cite{lin2017feature} is a classical structure that contains a top-down pathway to fuse the adjacent features. After that, a series of studies are proposed to further improve FPN. For example, PANet \cite{liu2018path} introduces a bottom-up pathway to shorten information path among different levels and FPG \cite{chen2020feature} utilizes a deep multi-pathway feature pyramid to make the feature fusion in various directions. Different from FPN and its variants, YOLOF \cite{chen2021you} is proposed to utilize a single level feature to alleviate the scale variation through multi-receptive field blocks. Therefore, how to deal with multi-scale and multi-receptive field features effectively is the key for scale variation.
\subsection{Vision transformer}
Recently, a resurgence of work in transformer has led to major advances in vision tasks. ViT \cite{dosovitskiy2020image} constructs a pure transformer backbone for image classification. Following ViT, a series of works were presented. For example, T2T-ViT \cite{yang2019reppoints} splits the image into tokens of overlapping patches to strengthen the interaction of tokens. In addition, shifted windows for self-attention are proposed in Swin transformer \cite{liu2021swin}, providing local connections among different windows. Besides the methods above, some other methods explore integrating CNN and transformer, taking advantage of both sides. For instance, PVT \cite{wang2021pyramid} applies the transformer to the pyramid structure used in ResNet, and CVT \cite{wu2021cvt} introduces convolutions to the vision transformer, combining the advantages of both. However, most previous studies concentrate on building attention within the same scale of the backbone, ignoring multi-scale interaction among different levels. Contrastively, the proposed method designs a multi-scale transformer, which can build long-range relationships among different levels to alleviate scale variation in dense prediction tasks.
\section{Revisiting dense multi-scale prediction}
Before proposing our multi-scale transformer, in this section, we revisit FPN and transformer, exploring the factors that influence the performance of dense multi-scale prediction. Specifically, we first disassemble FPN and examine the effectiveness of each component. Then the transformer is analyzed to discuss the adaptability on multi-scale integration. Based on these analysis, a number of corresponding findings are obtained to give insight into dense multi-scale detection. Here, the experiments are implemented with Faster R-CNN and RetinaNet based on ResNet-50 in object detection task, as shown in Fig \ref{revisit fpn}. Note that $ \left\lbrace C_i \right\rbrace_{i=2}^{5}$ represent the output features from different stages with a down-sampling rate of $\left\lbrace 4, 8, 16, 32 \right\rbrace$ and $ \left\lbrace P_i \right\rbrace_{i=2}^{5}$ denote the features for final prediction. RetinaNet is implemented without the use of $C_2$.
\noindent{\textbf{\textit{Finding 1}: \textit{The high semantic level ($C_5$) plays a significant role on dense multi-scale prediction performance. Single input of $C_5$ can even achieve a comparable accuracy.}}}
As shown in Fig \ref{revisit fpn} (b), we select different levels as the single input of FPN and keep the multi-level output through down-sampling and up-sampling operations, in order to validate the importance of different level input features. The experimental results are reported in Fig \ref{revisit2}, which shows that single input from any level causes different degrees of performance degradation in Faster R-CNN, and RetinaNet likewise exhibits the same phenomenon. Among them, interestingly, the model with single $C_5$ input only has slight accuracy degradation, which is comparable to the baseline with FPN. This observation suggests that $C_5$ has sufficient semantic information, which is vital for detection performance. Based on this, we further enhance $C_5$ by embedding multiple receptive fields, which leads to a high performance gain. As shown in Fig \ref{revisit fpn} (c), a dilated convolution with a rate of 3 is utilized to enlarge the receptive field of $C_5$. Thanks to the extension of the receptive field, the model achieves better performance by 0.3 and 0.2 points on Faster R-CNN and RetinaNet. This indicates that adopting multiple receptive fields can help the semantic level complementarily learn abundant semantic scale information, expanding the diversity of semantic features.
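The receptive-field enlargement from dilation can be made explicit with a short sketch (standard dilated-convolution arithmetic, not code from our implementation):

```python
def effective_kernel(k, dilation):
    """Effective kernel extent of a single dilated convolution:
    dilation * (k - 1) + 1."""
    return dilation * (k - 1) + 1

def stacked_rf(layers):
    """Receptive field of a stack of (kernel, dilation) conv layers,
    assuming stride 1 throughout."""
    rf = 1
    for k, d in layers:
        rf += d * (k - 1)
    return rf
```

A $3\times3$ convolution with dilation 3, as used on $C_5$ above, covers a $7\times7$ extent, so even a single such layer markedly widens the receptive field at no extra parameter cost.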
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{revisit1.pdf}
\caption{Illustrations of our experiments on FPN. (a) is the FPN baseline. Single input of $C_5$ is taken as an example in (b). \textit{DC} denotes the dilated convolution in (c). (d) represents the pyramidal structure without interaction.}
\label{revisit fpn}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{revisit2new.pdf}
\caption{Experimental results in analyzing factors of FPN with Faster R-CNN and RetinaNet on COCO. $AP_S$ denotes the average precision of small objects.}
\label{revisit2}
\end{figure}
\noindent{\textbf{\textit{Finding 2}: \textit{Although FPN with input of single high semantic level achieves good performance, the interaction among multiple levels is indispensable.}}}
As presented in Fig \ref{revisit fpn} (d), we directly remove the interaction among different levels to explore its necessity. The results in Fig \ref{revisit2} show that the detection performance drops by approximately 11 \textit{mAP} in Faster R-CNN and about 5 \textit{mAP} in RetinaNet. Particularly, the small-object performance of both detectors decreases steeply. Furthermore, although single input of $C_5$ can achieve a result comparable with the baseline, the performance with four levels' input drops dramatically in both detectors without interaction. Thus, these phenomena indicate that: (1) the interaction among different levels is indispensable to fully exploit the complementary roles of each feature level for dense multi-scale prediction; (2) there exists redundant information in low levels affecting the performance.
\begin{table}[!t]
\centering
\caption{Results of different attempts of applying transformer to FPN.}
\resizebox{0.4 \textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Method& $AP$ & $AP_S$& $AP_M$&$AP_L$&Flops (G)\\
\hline
baseline &37.4&21.2&41.0&48.1&207.07\\
\hline
p-MSA &-&-&-&-&-\\
s-MSA &37.3&19.9&41.7&49.7&229.87 \\
d-MSA &37.7&22.1&41.5&48.5&218.46 \\
\hline
\end{tabular}}
\label{attempt}
\end{table}
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.45]{framework2.pdf}
\caption{The framework of the proposed SDTP.}
\label{pipe}
\end{figure*}
\noindent{\textbf{\textit{Finding 3}: \textit{Feature interaction in the proposed decoupled space is promising to promote the efficiency of transformer with high performance.}}}
Transformer is good at capturing the global relationship by multi-head self attention (MSA), but there is no suitable method to solve the scale variation problem in dense image prediction tasks. Here, we conduct three attempts to apply transformer to multi-scale detection: (1) primitive MSA (p-MSA): the primitive transformer block is directly applied to different levels of FPN; (2) stride-MSA (s-MSA): in order to reduce the computation, ($C_2$, $C_3$, $C_4$) are down-sampled with strides of (8, 4, 2) to achieve the same size with $C_5$, respectively; (3) decoupled MSA (d-MSA): we propose a decoupled style of features \footnote{We use pooling operator to decouple $h$ and $w$.} to represent features with low flops. Given an image containing $h \times w$ patches with $c$ channels, the computational complexity of three attempts can be described as:
\begin{equation}
\small
\mathcal{O}(p\verb|-|MSA)=\sum_{i=2}^{5}(4h_iw_ic_{i}^2+2(h_iw_i)^2c_i) ,
\end{equation}
\begin{equation}
\small
\mathcal{O}(s\verb|-|MSA)=\sum_{i=2}^{5}(4\frac{h_iw_i}{s_{i}^2}c_{i}^2+2(\frac{h_iw_i}{s_{i}^2})^2c_i) ,
\end{equation}
\begin{equation}
\small
\mathcal{O}(d\verb|-|MSA)=\sum_{i=2}^{5}(4(h_i+w_i)c_{i}^2+2(h_i^2+w_i^2)c_i).
\end{equation}
As shown in Tab \ref{attempt}, it can be seen that the implementation of p-MSA fails due to its huge computation cost. s-MSA and d-MSA can significantly decrease the heavy computation, which verifies their efficiency. However, s-MSA performs worse than d-MSA, and d-MSA, which has the smallest computation, even obtains the best performance. This may be because features of low levels are generally of large spatial size and contain redundant information, while our decoupled attempt achieves computational savings with better performance and removes some redundancy. Thus, the decoupled-style method gives us insight for designing an efficient and effective transformer to enable interaction among representative features.
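These complexities can be compared numerically with a small sketch of the three formulas above (the level sizes for an $800\times1344$ input and $c=256$ channels are illustrative assumptions, not measured configurations):

```python
# FLOP counts of the three MSA variants, per the complexity formulas above.
def p_msa_flops(levels, c):
    return sum(4 * h * w * c**2 + 2 * (h * w)**2 * c for h, w in levels)

def s_msa_flops(levels, strides, c):
    return sum(4 * ((h * w) // s**2) * c**2 + 2 * ((h * w) // s**2)**2 * c
               for (h, w), s in zip(levels, strides))

def d_msa_flops(levels, c):
    return sum(4 * (h + w) * c**2 + 2 * (h**2 + w**2) * c for h, w in levels)

# Assumed patch-grid sizes for C2..C5 of an 800x1344 input, 256 channels.
levels = [(200, 336), (100, 168), (50, 84), (25, 42)]
strides = [8, 4, 2, 1]  # down-sample C2..C4 to C5's size; C5 unchanged
```

Under these assumptions d-MSA is cheapest and p-MSA is orders of magnitude more expensive, matching the ordering observed in Tab \ref{attempt}.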
\section{The Proposed Method}
According to the above findings, diverse and sufficient semantic information, effective multi-scale interaction and proper feature dimension reduction are all important for applying transformer in a multi-scale way. Hence, we propose a novel and effective method called SDTP, as depicted in Fig \ref{pipe}. In the proposed SDTP framework, the encoder extracts multi-scale features and the decoder transmits the semantic and scale information from high levels to low levels. Besides, three main components, including ISP, CDI and ARF, are designed to apply the multi-scale technique to transformer. In particular, the ISP transformer encoder is applied to the high semantic level ($C_5$) to explore intra-level semantic diversity. CDI adopts the decoupled style of features to enable sufficient interaction among tokens from various levels via a cross-level transformer encoder. Besides, ARF is embedded in the attention module of the transformer block for a precise correlation result.
\subsection{Intra-level Semantic Promotion (ISP)}
Transformer is proficient in capturing global information, relying on its core self-attention module. However, self-attention generally acts on a single feature to build its long-range information. This mechanism only focuses on the current feature's state and neglects feature diversity (e.g., multiple receptive fields of one feature). Even in a pyramid structure, the high semantic feature has a strong representation but still loses diverse semantic scale details. As shown in Finding 1, the semantic level (i.e., $C_5$) contains useful context that is crucial to performance, and the enhancement of $C_5$ (i.e., multi-receptive enhancement) yields a satisfactory performance gain. Thus, we design the ISP transformer encoder to explore the diverse semantic information of the high semantic level in receptive space, which can flexibly integrate local and global information in the transformer.
In particular, as shown in Fig \ref{isp}, the whole process of ISP can be formulated as:
\begin{equation}
\small
\hat{C_5}=Attn_{\mathrm{ISP}}(LN(C_5))+C_5,
\end{equation}
\begin{equation}
\small
{C_5^\star}=MLP(LN(\hat{C_5}))+\hat{C_5},
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.43]{ISP2.pdf}
\caption{Illustration of the ISP transformer encoder. $ARF$ represents the attention refinement function module.}
\label{isp}
\end{figure}
where $LN (\cdot)$ denotes the normalization layer and $MLP (\cdot)$ is a multi-layer perceptron. $Attn_{\mathrm{ISP}} (\cdot)$ is the key module that enhances semantic diversity through multi-receptive tokens, and can be written as:
\begin{equation}
\small
Attn_{\mathrm{ISP}}(C_5) =MMA[\mathcal{G}(C_5)],
\end{equation}
in which $\mathcal{G}(\cdot)$ in Eq. \ref{gmrt} is the function that generates multi-receptive tokens and $MMA (\cdot)$ denotes the multi-head multi-receptive attention in Eq. \ref{mma}. We explain these two components in the following:
\begin{equation}
\small
\mathcal{G}(C_5)={\left\lbrace q_{s}, k_s, v_s\right\rbrace}_{s=1}^{S} ={\left\lbrace Reshape(M_s)\right\rbrace}_{s=1}^{S} \in \mathbb{R}^{c\times (hw)},
\label{gmrt}
\end{equation}
\begin{equation}
\small
s.t. \quad M_{s} =\sum\limits_{\Phi\in \mathscr{O}_s}\Phi(C_5),\quad s=1,\dots,S.
\end{equation}
$M_s\in \mathbb{R}^{c\times h\times w}$ denotes the various states of the feature with different receptive fields. ${\mathscr{O}}_s$ represents the operation set, including $3 \times 3$ dilated convolutions with different rates and the position embedding. $S$ represents the number of dilated rates, which we set to $3$ in our method. To keep the original local information without loss, we always include a rate of 1. Having obtained the query $q$, key $k$ and value $v$ as shown, we utilize the initial state's query $q_1$ to explore and search for complementary semantic scale information of the other states via $MMA(\cdot)$, formulated as follows:
\begin{equation}
\small
MMA=\mathbb{C} [\left\lbrace Attn(q_{1}w_{d}^{q},k_{s}w_{d}^{k},v_{s}w_{d}^{v})\right\rbrace_{d=1}^{D}],\quad s=(1,2,3).
\label{mma}
\end{equation}
Here, $\mathbb{C}(\cdot)$ denotes concatenation and $D$ is the number of heads. $Attn (\cdot)$ computes the token-wise correlation among the inputs. $w_{d}^{q},w_{d}^{k},w_{d}^{v}$ are the linear projection parameters. Our design gives the query $q_1$ high priority to avoid information loss. We utilize $q_1$ to search for diverse semantic messages in a local-to-global way, which helps mine the diversity and make up the deficiency. Besides, due to the tiny spatial size of $C_5$, the computational cost is limited.
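The MMA computation can be sketched in a few lines of numpy. This is a toy version under explicit assumptions: the learned projections $w_d^q, w_d^k, w_d^v$ are replaced by identity slices per head, and the per-state attention outputs are averaged (the paper leaves the aggregation over $s$ implicit); only the fixed choice of $q_1$ as the query matches the text exactly.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mma(states, n_heads=2):
    """Toy multi-head multi-receptive attention.

    `states` is a list of S reshaped multi-receptive maps M_s, each of
    shape (hw, c). The query always comes from the first (rate-1)
    state, as in ISP; keys/values come from every state.
    """
    hw, c = states[0].shape
    dh = c // n_heads
    heads = []
    for d in range(n_heads):
        cols = slice(d * dh, (d + 1) * dh)
        q = states[0][:, cols]                  # q_1: query from state 1
        out = np.zeros((hw, dh))
        for m_s in states:                      # attend to every state s
            k = v = m_s[:, cols]
            out += softmax(q @ k.T / np.sqrt(dh)) @ v
        heads.append(out / len(states))
    return np.concatenate(heads, axis=1)        # (hw, c), same as C_5 tokens
```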
In summary, the ISP transformer encoder realizes intra-level semantic promotion by aggregating multi-receptive information in a local-to-global way. ISP makes full use of the superiority and diversity of the high semantic level, flexibly exploring its different scale states to search for effective information in receptive space.
\subsection{Cross-level Decoupled Interaction (CDI)}
Transformer has a dominant performance in NLP tasks due to its ability to capture long-range relationships. Hence, it is desirable to establish cross-level interaction based on transformer. However, there are two fatal obstacles. The first is the huge computation cost. In transformer-based vision tasks, image patches are regarded as tokens, and the resulting sequence length is generally large. Worse, compared with classification, dense image prediction tasks like object detection need images of higher resolution to gain precise predictions. Thus, applying transformer to high-resolution features in dense prediction tasks is difficult, let alone the interaction among different levels. Secondly, the popular interaction style in FPN is insufficient and rigid, since non-adjacent levels cannot learn from each other. To address these problems, we propose the cross-level decoupled interaction (CDI) module.
On the one hand, it reduces the input dimension for transformer in the decoupled space, making multi-scale interaction practical and efficient. On the other hand, the proposed decoupled operation effectively decreases the redundancy in the original features, which deepens the multi-scale interaction and improves the performance.
As shown in Fig \ref{pipe}, the proposed CDI decouples one feature map into a vertical feature and a horizontal one with much smaller dimensions. For clarity, we present the decoupled process in Eq. \ref{10} and Eq. \ref{11}.
\begin{equation}
\small
Y_i=\mathcal{F}_{3*1}[\sum_{j}^{w} \Psi [\mathcal{F}_{1*1}(C_i)] \cdot \sum_{j}^{w}(C_i)] \in \mathbb{R}^{c\times h \times 1},
\label{10}
\end{equation}
\begin{equation}
\small
X_i=\mathcal{F}_{1*3}[\sum_{j}^{h} \Psi [\mathcal{F}_{1*1}(C_i)] \cdot \sum_{j}^{h}(C_i)] \in \mathbb{R}^{c\times 1 \times w},
\label{11}
\end{equation}
in which $\Psi$ denotes the activation function and $\mathcal{F}$ denotes convolution with its kernel size shown in the subscript, added to enhance learnability. $Y_i$ and $X_i$ are the decoupled features of $C_i$.
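The decoupling of Eq. \ref{10}--\ref{11} can be sketched as attention-weighted pooling over the collapsed axis. In this simplified stand-in, the learned $1\times1$, $3\times1$ and $1\times3$ convolutions are omitted (an assumption); only the shape contract of the decoupled features is demonstrated.

```python
import numpy as np

def decouple(C):
    """Collapse a (c, h, w) map into Y of shape (c, h, 1) and X of
    shape (c, 1, w) via softmax-weighted pooling over the dropped axis.
    """
    w_att = np.exp(C - C.max(axis=2, keepdims=True))
    w_att /= w_att.sum(axis=2, keepdims=True)        # softmax over width
    Y = (w_att * C).sum(axis=2, keepdims=True)       # (c, h, 1)
    h_att = np.exp(C - C.max(axis=1, keepdims=True))
    h_att /= h_att.sum(axis=1, keepdims=True)        # softmax over height
    X = (h_att * C).sum(axis=1, keepdims=True)       # (c, 1, w)
    return Y, X
```

The point of the design is the token count: a level with $h \times w$ positions contributes only $h + w$ decoupled tokens to the cross-level transformer.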
Considering the decoupled features from multiple levels as tokens, we leverage them for flexible and sufficient interaction in the cross-level transformer, as shown in Fig \ref{cdi}. First, we obtain $q, k, v$ from $Y_{i}$ and $X_{i}$:
\begin{equation}
\small
\left\lbrace q_{ih}, k_{ih}, v_{ih}\right\rbrace_{i=2}^{5}=\left\lbrace Reshape( Y_{i})\right\rbrace_{i=2}^{5} \in \mathbb{R}^{c\times h},
\end{equation}
\begin{equation}
\small
\left\lbrace q_{iw}, k_{iw}, v_{iw}\right\rbrace_{i=2}^{5}=\left\lbrace Reshape( X_{i})\right\rbrace_{i=2}^{5}\in \mathbb{R}^{c\times w}.
\end{equation}
Then we design a multi-head global attention (MGA) module to make the features learn cross-level knowledge from each other. Note that the MGA for tokens from the decoupled vertical and horizontal features is implemented separately. For tokens from $X_i$, this process can be formulated as below; the MGA for $Y_i$ is implemented similarly.
\begin{equation}
\small
MGA=\mathbb{C} [ Attn(q_{i}w^{q},\left\lbrace k_{i}w^{k}\right\rbrace_{i=2}^{5} ,\left\lbrace v_{i} w^{v}\right\rbrace_{i=2}^{5})]_{d=1}^{D}.
\end{equation}
The outputs of MGA are $\hat{X_i}$ and $\hat{Y_i}$, which carry scale information from all levels owing to the sufficient interaction. In contrast to the traditional top-down fusion style in FPN, our design is more flexible since it allows interaction between features of any two levels. Finally, we recouple $\hat{X_i}$ and $\hat{Y_i}$ into $\hat{C_i}$ for the subsequent processing, such as the $MLP$ in the transformer block.
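The cross-level attention itself is a standard attention with keys and values gathered from every level, which is what allows non-adjacent levels to interact directly. A single-head numpy sketch (projections replaced by identities, an assumption):

```python
import numpy as np

def mga(tokens):
    """Single-head sketch of multi-head global attention (MGA).

    `tokens` is a list of per-level decoupled token matrices of shape
    (len_i, c); every level's queries attend to the concatenation of
    all levels' keys/values.
    """
    c = tokens[0].shape[1]
    kv = np.concatenate(tokens, axis=0)        # cross-level keys/values
    outs = []
    for q in tokens:
        logits = q @ kv.T / np.sqrt(c)
        logits -= logits.max(axis=1, keepdims=True)
        attn = np.exp(logits)
        attn /= attn.sum(axis=1, keepdims=True)
        outs.append(attn @ kv)                 # same shape as q
    return outs
```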
Besides, in order to avoid significant information loss, we introduce an effective loss function for decoupled process to achieve the end-to-end optimization:
\begin{equation}
\small
\mathcal{L}_{dep}=\sum_{i=2}^{5}||C_i-(Y_i \circledast X_{i})||_2 .
\end{equation}
Note that $\circledast$ is the Kronecker (outer) product operator, which recouples $Y_i$ and $X_i$ into a $c\times h \times w$ tensor. The total loss can be described as:
\begin{equation}
\small
\mathcal{L}=\mathcal{L}_{org}+\lambda \mathcal{L}_{dep} ,
\end{equation}
in which $\mathcal{L}_{org}$ represents the original loss of the dense prediction task. We leverage $\lambda$ to balance the loss terms; it is set to 0.01 in our experiments.
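A minimal sketch of $\mathcal{L}_{dep}$ and the combined loss, assuming $\circledast$ is read as the channel-wise outer product that broadcasts $Y_i\in\mathbb{R}^{c\times h\times 1}$ against $X_i\in\mathbb{R}^{c\times 1\times w}$:

```python
import numpy as np

def decouple_loss(C, Y, X):
    """L_dep for one level: distance between C_i and the outer-product
    recoupling of its decoupled parts, ||C_i - Y_i (*) X_i||_2."""
    return np.linalg.norm(C - Y * X)   # (c,h,1)*(c,1,w) broadcasts to (c,h,w)

def total_loss(l_org, levels, lam=0.01):
    """L = L_org + lambda * sum over levels of L_dep."""
    return l_org + lam * sum(decouple_loss(C, Y, X) for C, Y, X in levels)
```

If a level's feature map happens to be exactly rank-one per channel, the recoupling is lossless and the penalty vanishes; otherwise the loss pushes the decoupled parts toward the most faithful factorization.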
Because of the large spatial size and redundant information in the lower levels of dense prediction tasks, directly applying transformer to capture long-range relationships is impractical. The proposed CDI adopts the decoupled style to obtain more representative features with low dimension, and makes tokens learn adequate cross-level messages through multi-head global attention, which proves to be efficient and effective.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{CDI2.pdf}
\caption{Illustration of the cross-level transformer encoder. We use $C_2$ as an example to show the process.}
\label{cdi}
\end{figure}
\subsection{Attention Refinement Function (ARF)}
As the core block in transformer, the self-attention module generates an attention map that depicts the correlation between different tokens. Previous studies (e.g., \cite{dosovitskiy2020image, wang2021pyramid}) adopt Softmax as the activation function in the correlation calculation. However, Softmax takes all tokens into account together, ignoring the independence and inconsistency of the various tokens. In dense prediction tasks, there are enormous numbers of tokens, and the use of Softmax is likely to distort the correlation map by enhancing some relations and weakening others. We instead desire a correlation map that reflects the relations more accurately.
To alleviate this problem, we propose a flexible and effective activation function, which is formulated as:
\begin{equation}
\small
\mathcal{U}(x)=\max\left[\frac{e^x-e^{-x}}{e^x+e^{-(x+2\tau)}},0\right],
\label{activation}
\end{equation}
where $\tau$ is a hyper-parameter. Our proposed activation function considers each correlation individually and eliminates redundant relations, avoiding the limitations of Softmax. Moreover, Tanh is a special case of this function when $\tau$ is set to zero and the activation value is positive. Compared with Tanh, this function can boost values that are close to zero by adjusting $\tau$, and suppresses to zero the redundant relations whose activation values are negative. Rather than using Softmax as the activation function, we rethink the calculation of attention in self-attention and propose ARF to refine the correlation result. ARF can be plugged into other transformer-based methods as a component to refine the attention.
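The activation of Eq. \ref{activation} is one line of numpy, and its two advertised properties (Tanh as the $\tau=0$ special case on the positive side, and suppression of negative correlations) can be checked directly:

```python
import numpy as np

def arf(x, tau=2.0):
    """Attention refinement function:
    U(x) = max((e^x - e^{-x}) / (e^x + e^{-(x + 2*tau)}), 0)."""
    return np.maximum((np.exp(x) - np.exp(-x))
                      / (np.exp(x) + np.exp(-(x + 2.0 * tau))), 0.0)
```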
\section{Experiments}
\subsection{Settings}
\noindent{\textbf{Data and evaluation.}}
Our experiments are implemented on MS COCO 2017 \cite{lin2014microsoft} for object detection and instance segmentation and on ADE20K \cite{zhou2017scene} for semantic segmentation. MS COCO 2017 contains 80 object categories for detection and instance segmentation, and consists of 115k images for training \textit{(train2017)} and 5k images for validation \textit{(val2017)}. We train on \textit{train2017} and report results on \textit{val2017}. Performance is evaluated by the standard COCO-style Average Precision (AP) metrics on small, medium and large objects, i.e., AP$_s$, AP$_m$ and AP$_l$; AP$^b$ and AP$^m$ denote the AP of the bounding box and mask, respectively. ADE20K is a challenging scene parsing benchmark containing 150 fine-grained semantic categories, consisting of 20k images for training and 2k images for validation. The mean Intersection-over-Union (mIoU) is the primary metric to evaluate semantic segmentation performance.
\noindent{\textbf{Implementation details.}}
We implement our method based on mmdetection \cite{chen2019mmdetection}. To ensure fair comparisons, we also re-implement the baseline methods on mmdetection \cite{chen2019mmdetection}. The results of our re-implementation are generally better than those in the reference papers. Besides, the backbones used in our experiments are pre-trained on ImageNet \cite{deng2009imagenet}. Unless otherwise specified, all baseline methods are equipped with FPN and the ablation studies are conducted with Faster R-CNN based on ResNet50. All other hyper-parameters in our work follow the settings in mmdetection.
\subsection{Performance}
To demonstrate the generality of our method, we implement our framework on three dense prediction tasks: object detection, semantic segmentation and instance segmentation.
\noindent{\textbf{Object detection.}}
The results on common object detectors are shown in Tab \ref{detection}. The proposed SDTP achieves consistent improvement on both single-stage and two-stage detectors. When paired with strong detectors, SDTP still shows its superiority with better performance. In particular, the results on small objects improve significantly, thanks to the effective utilization of high-resolution features enabled by the sufficient interaction among multiple levels.
\begin{table}
\centering
\caption{\textbf{Object Detection:} Performance comparisons with popular detectors. "SDTP" denotes our method. ``$\surd$" means the baselines integrated with our transformer pyramid.}
\resizebox{0.45\textwidth }{!}{
\begin{tabular}{|c|c|c|cccc|}
\hline
Method&Backbone&SDTP& $AP^b$ & $ AP^b_{S}$ & $AP^b_{M}$ & $AP^b_{L}$ \\
\hline
\multirow{4}{*}{RetinaNet}&\multirow{2}{*}{R50} & &36.5&20.4&40.3&48.1\\
& &$\surd$&\textbf{38.1}&\textbf{21.8}&\textbf{41.8}&\textbf{49.1} \\
\cline{2-7}
&\multirow{2}{*}{R101}&&38.5&21.7&42.8&50.4 \\
&&$\surd$&\textbf{40.0}&\textbf{22.5}&\textbf{44.1}&\textbf{52.1} \\
\hline
\hline
\multirow{4}{*}{Faster R-CNN}&\multirow{2}{*}{R50} & &37.4&21.2&41.0&48.1\\
& &$\surd$&\textbf{39.4}&\textbf{22.7}&\textbf{42.7}&\textbf{51.0}\\
\cline{2-7}
&\multirow{2}{*}{R101}&&39.4&22.4&43.7&51.1\\
&&$\surd$&\textbf{40.8} & \textbf{23.3} & \textbf{44.9} & \textbf{54.0} \\
\cline{2-7}
\hline
\hline
\multirow{4}{*}{Cascade R-CNN}&\multirow{2}{*}{R50} & &40.3&22.5&43.8&52.9\\
& &$\surd$&\textbf{41.7}& \textbf{24.2}& \textbf{45.0} & \textbf{54.9}\\
\cline{2-7}
&\multirow{2}{*}{R101}&&42.0&23.4&45.8&55.7\\
&&$\surd$&\textbf{43.2}&\textbf{25.3}&\textbf{47.1}&\textbf{57.3} \\
\hline
\end{tabular}}
\label{detection}
\end{table}
\noindent{\textbf{Semantic segmentation.}}
We also conduct experiments to prove the effectiveness of SDTP on the semantic segmentation task. As shown in Tab \ref{semantic seg}, we compare our method with Semantic FPN \cite{kirillov2019panoptic} and PointRend. With the help of SDTP, we again outperform the baselines. In particular, the $mIoU$ of PointRend with ResNet50 increases by nearly 4 points with SDTP. Since semantic segmentation is more sensitive to multi-resolution information, the performance margin brought by SDTP is remarkable.
\begin{table}[!h]
\centering
\caption{\textbf{Semantic Segmentation:} Performance comparisons with common semantic segmentation methods of different backbones on ADE20K validation set.}
\resizebox{0.45\textwidth }{!}{
\begin{tabular}{|c|c|c|ccc|}
\hline
Method&Backbone&SDTP& $mIoU$ & $ mAcc$ & $aAcc$ \\
\hline
\multirow{4}{*}{Semantic FPN}&\multirow{2}{*}{R50} & &37.48&47.57&78.02\\
& &$\surd$&\textbf{38.77}&\textbf{49.35}&\textbf{79.13} \\
\cline{2-6}
&\multirow{2}{*}{R101}&&39.35&49.43&79.19 \\
&&$\surd$&\textbf{41.52}&\textbf{51.68}&\textbf{80.13} \\
\hline
\hline
\multirow{4}{*}{PointRend}&\multirow{2}{*}{R50} & &37.63&48.14&77.80\\
& &$\surd$&\textbf{41.53}&\textbf{51.82}&\textbf{79.56}\\
\cline{2-6}
&\multirow{2}{*}{R101}&&40.01&50.56&79.09 \\
&&$\surd$&\textbf{42.39}&\textbf{53.44}&\textbf{80.35} \\
\hline
\end{tabular}}
\label{semantic seg}
\end{table}
\begin{table*}
\centering
\caption{\textbf{Comparisons with the state-of-the-art methods:} The symbol ``*'' means our re-implemented results on mmdetection.}
\resizebox{0.9 \textwidth}{!}{
\begin{tabular}{|c|cc|cccccc|}
\hline
Method & Backbone & Schedule & $AP$ & $AP_{50}$ & $AP_{75} $&$ AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\
\hline
Faster R-CNN* & ResNeXt101-32$\times$4d&12 & 41.2 & 62.1 & 45.1 & 24.0 & 45.5 & 53.5 \\
Faster R-CNN* & ResNeXt101-64$\times$4d &12 & 42.1 & 63.0 & 46.3 & 24.8 & 46.2 & 55.3 \\
Mask R-CNN* & ResNeXt101-32$\times$4d &12 & 41.9 & 62.5 & 45.9 & 24.4 & 46.3 & 54.0 \\
Cascade R-CNN* & ResNeXt101-32$\times$4d&12 & 43.7 & 62.3 & 47.7 & 25.1 & 47.6 & 57.3 \\
DETR\cite{carion2020end} & ResNet50&500 & 42.0 & 62.4 & 44.2 & 20.5 & 45.8 & 61.1 \\
DETR\cite{carion2020end} & ResNet101&500 & 43.5 & 63.8 & 46.4 & 21.9 & 48.0 & \textbf{61.8} \\
Deformable DETR\cite{zhu2020deformable} & ResNet50&50 & 43.8 & 62.6 & 47.7 & 26.4 & 47.1 & 58.0 \\
Sparse R-CNN\cite{sun2020sparse} & ResNet101 &36 & 44.1 & 62.1 & 47.2 & 26.1 & 46.3 & 59.7 \\
Cascade Mask R-CNN* & ResNeXt101-32$\times$4d &20 & 45.0 & 63.2 & 49.1 & 26.7 & 48.9 & 59.0 \\
HTC* & ResNet101 &20 & 44.8 & 63.3 & 48.8 & 25.7 & 48.5 & 60.2 \\
\hline
\hline
SDTP Faster R-CNN(ours) & ResNeXt101-32$\times$4d & 12 & 42.3 & 63.7 & 46.1 & 25.3 & 46.6 & 54.8 \\
SDTP Faster R-CNN(ours) & ResNeXt101-64$\times$4d&12 & 43.0 & 63.9 & 46.6 & 25.3 & 46.9 & 55.8\\
SDTP Mask R-CNN(ours) & ResNeXt101-32$\times$4d &12 & 43.2 & 64.3 & 47.1 & 25.9 & 47.1 & 56.5\\
SDTP Cascade R-CNN(ours) & ResNeXt101-32$\times$4d &12 & 44.6 & {63.8} & {48.6} & {26.1} & {48.7} & 57.8\\
SDTP Sparse R-CNN(ours) & ResNet101 &36 & 44.6 &{63.1} & {48.3} & {27.3} & {47.6} & 60.0\\
SDTP Cascade Mask R-CNN (ours) & ResNeXt101-32$\times$4d &20 & 45.7 & 64.5 & 49.8 & 27.1 & 49.1 & 59.6 \\
SDTP HTC* (ours) & ResNet101 &20 & \textbf{45.8} & \textbf{64.9} & \textbf{49.8} & \textbf{27.3} & \textbf{49.3} & 60.5 \\
\hline
\end{tabular}}
\label{sota}
\end{table*}
\noindent{\textbf{Instance segmentation.}}
We continue to validate the generalization of SDTP on the instance segmentation task, as shown in Tab \ref{instance}. Our method improves the baseline on both detection and instance segmentation by a large margin. Even with a strong method such as HTC, SDTP still yields a significant increase of 1.5 points. Besides, benefiting from the diverse semantic information, the performance on large objects in instance segmentation shows a dominant improvement.
\begin{table}[!ht]
\centering
\caption{\textbf{Instance Segmentation:} Performance comparisons with strong instance segmentation methods.}
\resizebox{0.45\textwidth }{!}{
\begin{tabular}{|c|c|c|cccc|}
\hline
Method&Backbone&SDTP& $AP^{b}$ & $ AP^{b}_{S}$ & $AP^{m}$ & $AP^{m}_{L}$ \\
\hline
\multirow{4}{*}{Mask R-CNN}&\multirow{2}{*}{R50} & &38.2&21.9&34.7&47.2\\
& &$\surd$&\textbf{40.0}&\textbf{22.8}&\textbf{36.2}&\textbf{53.1} \\
\cline{2-7}
&\multirow{2}{*}{R101}&&40.0&22.6&36.1&49.5 \\
&&$\surd$&\textbf{41.6}&\textbf{24.5}&\textbf{37.2}&\textbf{54.7} \\
\hline
\hline
\multirow{2}{*}{PointRend}&\multirow{2}{*}{R50} & &38.4&22.8&36.3&48.5\\
& &$\surd$&\textbf{40.9}&\textbf{25.5}&\textbf{38.0}&\textbf{50.8}\\
\hline
\hline
\multirow{2}{*}{HTC}&\multirow{2}{*}{R50} & &42.3&23.7&37.4&51.7\\
& &$\surd$&\textbf{43.8}&\textbf{25.7}&\textbf{38.9}&\textbf{57.2}\\
\hline
\end{tabular}}
\label{instance}
\end{table}
\noindent{\textbf{Comparison on transformer-based method.}}
Apart from the above experiments, we further assess the superiority and generality of SDTP on transformer-based backbones. As seen in Tab \ref{transformer}, we apply our method to Mask R-CNN based on two versions of the PVT backbone. SDTP still obtains better performance on both dense prediction tasks.
\begin{table}[!h]
\centering
\caption{\textbf{Comparison with transformer-based backbone:} Performance comparisons paired with Mask R-CNN.}
\resizebox{0.45\textwidth }{!}{
\begin{tabular}{|c|c|c|cccc|}
\hline
Method&Backbone&SDTP& $AP^b$ & $ AP^b_{S}$ & $AP^{m}$ & $AP^{m}_{L}$ \\
\hline
\multirow{4}{*}{Mask R-CNN}&\multirow{2}{*}{PVT-Tiny} & &36.7&21.6&35.1&48.5\\
& &$\surd$&\textbf{38.3}&\textbf{23.5}&\textbf{36.2}&\textbf{54.2} \\
\cline{2-7}
&\multirow{2}{*}{PVT-Small}&&40.4&22.9&37.8&53.6 \\
&&$\surd$&\textbf{41.4}&\textbf{23.3}&\textbf{38.3}&\textbf{58.0} \\
\hline
\end{tabular}}
\label{transformer}
\end{table}
\noindent{\textbf{Comparison with State-of-the-art methods.}}
As shown in Tab \ref{sota}, SDTP achieves consistently non-negligible improvements even with more powerful backbones and more training epochs. For example, when applying ResNeXt101-32$\times$4d and ResNeXt101-64$\times$4d as the feature extractors of Faster R-CNN, SDTP still improves performance by 1.1 and 0.9 points, respectively. Besides, in the comparison between HTC and its SDTP variant, both trained for 20 epochs, SDTP still wins. SDTP brings consistent improvements across various backbones, methods and learning schedules, which proves its generalization and superiority.
\subsection{Ablation studies}
\noindent{\textbf{Ablation studies on each component.}} To analyze the importance of each module in SDTP, we gradually apply them to the model. As shown in Tab \ref{each}, all three parts are essential. In particular, ISP alone increases performance by 1.3 points, and CDI alone brings a boost of 1.0 point. These results are consistent with our first two findings that both semantic diversity and interaction across multi-level features are at the core of the multi-scale technique. Furthermore, the proposed ARF provides a more accurate attention map when building the intra-level and cross-level transformers, and exploiting ARF raises the performance to 39.4.
\begin{table}[!h]
\centering
\caption{\textbf{Effectiveness of each component.}}
\resizebox{0.45 \textwidth}{!}{
\begin{tabular}{|c|cccc|}
\hline
Method& $AP$ & $ AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\
\hline
baseline &37.4&21.2&41.0&48.1\\
\hline
baseline+ISP &38.7&22.3&42.5&50.8 \\
baseline+CDI & 38.4&21.3&42.0&49.9 \\
baseline+ISP+CDI & 39.0&22.6&42.6&50.7 \\
baseline+ISP+CDI+ARF & \textbf{39.4}&\textbf{22.7}&\textbf{42.7}&\textbf{51.0} \\
\hline
\end{tabular}}
\label{each}
\end{table}
\noindent{\textbf{Ablation studies on different dilated rates in ISP.}}
The results illustrated in Tab \ref{ISP} show that exploring the receptive space brings improvements to SDTP. In detail, the combination of rates 1, 3, 6 performs best. Another observation is that the performances in the first two rows are lower, which indicates that the selected dilation rates should differ substantially so that the semantic features can be diverse.
\begin{table}[!ht]
\centering
\caption{\textbf{Different settings of dilated rates in ISP.}}
\resizebox{0.30 \textwidth}{!}{
\begin{tabular}{|c|cccc|}
\hline
Rates& $AP$ & $ AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\
\hline
1,2,3 &38.1&21.5&41.6&49.6\\
1,2,4 &37.9&21.4&41.2&49.1 \\
\textbf{1,3,6} & \textbf{38.7}&\textbf{22.3}&\textbf{42.5}&\textbf{50.8 } \\
2,4,6 & 38.4&21.9&42.2&49.1 \\
3,6,12 & {38.3}&{21.4}&{41.6}&{50.0} \\
\hline
\end{tabular}}
\label{ISP}
\end{table}
\noindent{\textbf{Ablation studies on $\tau$ in ARF.}} Experimental results related to the settings of ARF are presented in Tab \ref{ARF}. Paired with our proposed function, the suppression of irrelevant information helps achieve better performance. We choose $\tau=2$, which achieves the best performance.
\begin{table}[!ht]
\centering
\caption{\textbf{Comparisons among different $\tau$ of ARF}.}
\resizebox{0.33 \textwidth}{!}{
\begin{tabular}{|c|c|cccc|}
\hline
Method&$\tau$& $AP$ & $ AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\
\hline
Softmax &- &39.0&22.6&42.6&50.7\\
Tanh &-&39.1&22.4&42.6&51.0 \\
\hline
\multirow{4}{*}{Proposed} & 1& 39.0&22.1&42.5&50.9 \\
&\textbf{2} & \textbf{39.4}&\textbf{22.7}&\textbf{42.7}&{51.0 } \\
&3 & {39.1}&{22.4}&{42.2}&{50.8} \\
&4 & {39.2}&{22.6}&{42.5}&\textbf{{51.4}} \\
\hline
\end{tabular}}
\label{ARF}
\end{table}
\section{Conclusion}
In this paper, we focus on dealing with the scale variation problem in dense image prediction tasks with the aid of multi-scale and transformer techniques. To begin with, we revisit the dense multi-scale prediction, and obtain important insights that semantic diversity and interaction among different levels are the key elements. Based on these findings, we propose a novel semantic-aware decoupled transformer pyramid, which includes three simple yet effective components, i.e., Intra-level Semantic Promotion, Cross-level Decoupled Interaction and Attention Refinement Function. SDTP has shown the generality and effectiveness on various dense image prediction tasks. Additionally, SDTP and its three key components can be easily extended to other methods.
{
\bibliographystyle{ieee_fullname}
}
\subsection{Proof of the Claim from Lemma~\ref{lem:extreme-noisy}}
\section{Technical Spectral Lemmas} \label{app:spectral}
\begin{prop}[\cite{davis1970rotation} $\sin\theta$ theorem] \label{prop:kahan}
Let $B, \hat B \in \mathbb{R}^{p\times p}$ be symmetric, with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_p$ and
$\hat \lambda_1 \geq \cdots \geq \hat \lambda_p$, respectively.
Fix $1 \leq r \leq s \leq p$ and let $V = (\vec v_r, \dots, \vec v_s)$ and $\hat V = ({\hat{\vec v}}_r , \dots, {\hat{\vec v}}_s)$ be the orthonormal eigenvectors corresponding to $\lambda_r, \dots, \lambda_s$ and $\hat \lambda_r, \dots,\hat \lambda_s$.
Let $\delta = \inf \{ |\hat \lambda - \lambda |: \lambda \in [\lambda_s, \lambda_r], \hat \lambda \in (-\infty, \hat \lambda_{s+1}] \cup [\hat \lambda_{r-1}, \infty) \} > 0$. Then,
\[ \| \sin \Theta(V, \hat V) \|_2 \leq \frac{ \| \hat B - B \|_2 }{\delta}.
\]
where $\sin \Theta(V, \hat V) = P_V - P_{\hat V}$, and $P_V$ and $P_{\hat V}$ are the orthogonal projection matrices onto the column spans of $V$ and $\hat V$.
\end{prop}
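As a numerical sanity check of the $\sin\theta$ bound, one can take $r=s=1$ (the top eigenvector) on a fixed symmetric matrix with a well-separated leading eigenvalue; the specific matrix and perturbation below are illustrative choices.

```python
import numpy as np

B = np.diag([5.0, 3.0, 2.0, 1.0, 0.0])            # lambda_1 = 5, clear gap
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 5)) * 0.01
Bh = B + (E + E.T) / 2                             # small symmetric perturbation

def top_eig(M):
    w, V = np.linalg.eigh(M)                       # ascending eigenvalues
    return np.sort(w)[::-1], V[:, np.argmax(w)]

lam, v = top_eig(B)
lamh, vh = top_eig(Bh)
P, Ph = np.outer(v, v), np.outer(vh, vh)           # projections P_V, P_{Vhat}
delta = lam[0] - lamh[1]                           # eigengap from the theorem
lhs = np.linalg.norm(P - Ph, 2)                    # ||sin Theta(V, Vhat)||_2
rhs = np.linalg.norm(Bh - B, 2) / delta
```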
\begin{prop}[Corollary 5.50~\citep{vershynin2010introduction}] \label{prop:covariance-gauss}
Consider a Gaussian distribution in $\mathbb{R}^n$ with covariance matrix $\Sigma$. Let $A \in \mathbb{R}^{n\times m}$ be a matrix whose columns are drawn i.i.d.\ from this distribution, and let $\Sigma_m = \frac 1m A A^\top$. For every $\epsilon \in (0,1)$ and $t$, if $m \geq c n (t / \epsilon)^2 $ for some constant $c$, then with probability at least $1 - 2 \exp(-t^2 n)$, $\| \Sigma_m - \Sigma \|_2 \leq \epsilon \| \Sigma \|_2$.
\end{prop}
\begin{prop}[Matrix Bernstein~\citep{tropp2015introduction}] \label{prop:bernstein}
Let $S_1, \dots, S_n$ be independent, centered random matrices with
common dimension $d_1 \times d_2$, and assume that each one is uniformly bounded. That is, $\mathbb{E} S_i = 0$ and $\| S_i \|_2 \leq L$ for all $i\in[n]$.
Let $Z = \sum_{i=1}^n S_i$, and let $v(Z)$ denote the matrix variance:
\[ v(Z) = \max \left\{ \left\| \sum_{i=1}^n \mathbb{E}[S_i S_i^\top] \right\|, \left\| \sum_{i=1}^n \mathbb{E}[S_i^\top S_i] \right\| \right\}.
\]
Then,
\[
\P [ \|Z \| \geq t ] \leq (d_1+d_2) \exp \left(\frac{-t^2/2}{ v(Z) + Lt/3} \right).
\]
\end{prop}
\begin{prop}[Theorem 4.10 of \cite{stewart1990matrix}] \label{prop:diffeigen}
Let $\hat A = A + E$ and let $\lambda_1, \dots, \lambda_n$ and $\lambda'_1, \dots, \lambda'_n$ be the eigenvalues of $A$ and $A+E$, respectively. Then, $\max_i | \lambda'_i - \lambda_i | \leq \| E\|_2$.
\end{prop}
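For symmetric matrices the pairing bound above is Weyl's inequality, which is easy to verify numerically on an illustrative random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2       # symmetric A
E = rng.standard_normal((6, 6)); E = (E + E.T) / 2 * 0.05
lam = np.sort(np.linalg.eigvalsh(A))
lam_p = np.sort(np.linalg.eigvalsh(A + E))
max_shift = np.max(np.abs(lam_p - lam))                  # worst eigenvalue move
```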
\begin{prop}[Theorem 3.3 of \cite{stewart77perturbation}]\label{prop:inverseperturb}
For any $A$ and $B = A + E$, $$\| B^+ - A^+\| \leq 3 \max \left\{ \| A^{+} \|^2, \|B^{+}\|^2 \right\} \|E\|,$$
where $\|\cdot\|$ is an arbitrary norm.
\end{prop}
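A numerical spot-check of the pseudo-inverse perturbation bound in the spectral norm, assuming the small perturbation keeps $B$ full rank (the random instance below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))                # generic full-rank A
E = rng.standard_normal((5, 3)) * 1e-3         # tiny perturbation
B = A + E
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
lhs = np.linalg.norm(Bp - Ap, 2)
rhs = 3 * max(np.linalg.norm(Ap, 2), np.linalg.norm(Bp, 2)) ** 2 \
      * np.linalg.norm(E, 2)
```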
\section{Omitted Proof from Section~\ref{sec:phase1} --- Phase 1}
\subsection{Proof of Claim~\ref{claim:DE-estimate}} \label{app:claim:DE-estaimte}
Let $\vec e_i$ and $\vec d_i$ be the $i^{th}$ row of $E$ and $D$. Then $E D^\top = \sum_{i=1}^m \vec e_i \vec d_i^\top$ and $D E^\top = \sum_{i=1}^m \vec d_i \vec e_i^\top$.
Let $S_i = \frac 1m \begin{bmatrix} 0 & \vec e_i \vec d_i^\top\\
\vec d_i \vec e_i^\top & 0 \\
\end{bmatrix}$.
Then, $\| \frac 1m D E^\top + \frac 1m E D^\top \|_2 \leq 2 \| \sum_{i=1}^m S_i \|_2$. We will use matrix Bernstein to show that $\sum_{i\in[m]} S_i$ is small with high probability.
First note that the distribution of $\vec e_i$ is a Gaussian centered at $0$, therefore, $\mathbb{E}[S_i] = 0$.
Furthermore, for each $i$, with probability $1-\delta$, $\|\vec e_i\|_2 \leq \sigma \sqrt{n} \log\frac 1\delta$. So, with probability $1-\delta$, for all samples $i \in [m]$, $\| \vec e_i\|_2 \leq \sigma \sqrt{n} \log\frac m\delta$. Moreover, by assumption $\|\vec d_i \|=\| \vec x_i^1 - \vec x_i^2\| \leq 2M$.
Therefore, with probability $1-\delta$,
\[
L = \max_i \|S_i\|_2 = \frac 1m \max_i \|\vec e_i \| \|\vec d_i\| \leq \frac{2}{m} \sigma \sqrt{n} M ~\mathrm{polylog}\frac{n}{\epsilon\delta}.
\]
Note that $\left\| \mathbb{E}[S_i S_i^\top] \right\| \leq \max_i \| S_i \|^2 \leq L^2$.
Since $S_i$ is Hermitian, the matrix variance defined in the Matrix Bernstein inequality is
\[
v(Z) = \max \left\{ \left\| \sum_{i=1}^m \mathbb{E}[S_i S_i^\top] \right\|, \left\| \sum_{i=1}^m \mathbb{E}[S_i^\top S_i] \right\| \right\} = \left\| \sum_{i=1}^m \mathbb{E}[S_i S_i^\top] \right\| \leq m L^2.
\]
If $\epsilon \leq v(Z) / L$ and $m\in \Omega( \frac{n \sigma^2 M^2}{\epsilon^2} \mathrm{polylog}\frac{n}{\epsilon\delta} )$
or $\epsilon \geq v(Z) / L$ and $m\in \Omega( \frac{\sqrt n \sigma M}{\epsilon} \mathrm{polylog}\frac{n}{\epsilon\delta} )$,
using Matrix Bernstein inequality (Proposition~\ref{prop:bernstein}), we have
\[ \Pr\left[ \left\| \frac 1m D E^\top + \frac 1m E D^\top \right\| \geq \epsilon \right] = \Pr\left[ \left\| \sum_{i=1}^m S_i \right\| \geq \frac \epsilon 2 \right] \leq \delta.\]
\subsection{Proof of Claim~\ref{claim:DD-estimate}} \label{app:estimateDD}
Let $\vec d_i$ be the $i^{th}$ row of $D$. Then $D D^\top = \sum_{i=1}^m \vec d_i \vec d_i^\top$.
Let $S_i = \frac 1m \vec d_i \vec d_i^\top - \frac 1m \mathbb{E}[\vec d_i \vec d_i^\top]$.
Then, $\| \frac 1m D D^\top - \mathbb{E}\left[ \frac 1m D D^\top \right] \|_2 = \| \sum_{i=1}^m S_i \|_2$.
Since, $\vec d_i = \vec x_i^1 - \vec x_i^2$ and $\|\vec x_i^j\|\leq M$, we have that for any $i$, $\| \vec d_i \vec d_i^\top - \mathbb{E}[\vec d_i \vec d_i^\top] \| \leq 4M^2$. Then,
\[ L = \max_i \|S_i\|_2 = \frac 1m \max_i \| \vec d_i \vec d_i^\top - \mathbb{E}[\vec d_i \vec d_i^\top] \|_2 \leq \frac 4m M^2,
\]
and $\| \mathbb{E}[S_i S_i^\top] \| \leq L^2$. Note that $S_i$ is Hermitian, so the matrix variance is
\[
v(Z) = \max \left\{ \left\| \sum_{i=1}^m \mathbb{E}[S_i S_i^\top] \right\|, \left\| \sum_{i=1}^m \mathbb{E}[S_i^\top S_i] \right\| \right\} = \left\| \sum_{i=1}^m \mathbb{E}[S_i S_i^\top] \right\| \leq m L^2.
\]
If $\delta_0 \leq 4M^2$ and $m \in \Omega( \frac{M^4}{\delta_0^2} \log \frac n \delta)$ or $\delta_0 \geq 4M^2$ and $m \in \Omega( \frac{M^2}{\delta_0} \log \frac n \delta)$, then by Matrix Bernstein inequality (Proposition~\ref{prop:bernstein}), we have
\[ \Pr\left[ \left\| \sum_{i=1}^m S_i \right\| \geq \frac {\delta_0}{ 2} \right] \leq \delta.\]
\section{Omitted Proof from Section~\ref{sec:denoise} --- Denoising}
\subsection{Proof of Claim~\ref{claim:high-dense}} \label{app:claim:high-density}
Recall that for any $i\in[k]$, with probability $\gamma= g(\epsilon'/(8 k \alpha))$ a nearly pure weight vector $\vec w$ is generated from $\P$, such that $\| \vec w - \vec e_i\| \leq \epsilon'/(8 k \alpha)$.
And independently, with probability $p_0$ the point is not noisy. Therefore, there is $p_0 \gamma$ density on non-noisy points that are almost purely of class $i$. Note that for such points, $\vec x$,
\[
\| P\vec x - \vec a_i \| = \left\| \sum_{j=1}^k w_j \vec a_j - \vec a_i \right\| \leq k ( \epsilon'/(8 k \alpha))(\alpha) \leq \frac{\epsilon'}{8}.
\]
Since $\| P - \hat P \| \leq \epsilon' / 8M$, we have
\[ \| \vec a_i - \hat P \vec x \| \leq \| \vec a_i - P \vec x \| + \| P \vec x - \hat P \vec x \| \leq \frac{\epsilon'}{8} +\frac{\epsilon'}{8} \leq \frac{\epsilon'}{4}.
\]
The claim follows immediately.
\section{Omitted Proof from Section~\ref{sec:phase2} --- Phase 2}
\subsection{Omitted proof from Claim~\ref{claim:extreme-noisy}}
\label{app:CH_claim}
Here, we prove that if $\hvec x \in \mathrm{CH}(\hat S_\parallel\setminus B_{d+\epsilon'}(\hvec a_i))$, then there exists $\vec x\in \mathrm{CH}(\Delta \setminus B_{d}(\hvec a_i))$ such that $\| \vec x - \hvec x\|\leq \epsilon'$.
Let $\hvec x = \sum_j \alpha_j \hvec z_j$ be a convex combination of $\hvec z_1, \dots, \hvec z_\ell \in \hat S_\parallel \setminus B_{d+\epsilon'}(\hvec a_i)$.
By Lemma~\ref{lem:phase-denoise}, there are $\vec z_1, \dots, \vec z_\ell\in \Delta$ such that $\|\vec z_j - \hvec z_j\|\leq \epsilon'$ for all $j\in[\ell]$. Furthermore, by the proximity of $\vec z_j$ to $\hvec z_j$, we have that $\vec z_j \not\in B_{d}(\hvec a_i)$. Therefore,
$\vec z_1, \dots, \vec z_\ell\in \Delta \setminus B_{d}(\hvec a_i)$. Then,
$\vec x = \sum_j \alpha_j \vec z_j \in \mathrm{CH}(\Delta \setminus B_{d}(\hvec a_i))$ is within distance $\epsilon'$ of $\hvec x$.
\section{Proof of Theorem~\ref{thm:lower} --- Lower Bound} \label{app:lower-bound}
For ease of exposition assume that $n$ is a multiple of $k$. Furthermore, in this proof we adopt the notion $(\vec x_i, \vec x'_i)$ to represent the two views of the $i^{th}$ sample. For any vector $\vec u\in \mathbb{R}^n$ and $i\in [k]$, we use $(\vec u)_i$ to denote the $i^{th}$ $\frac nk$-dimensional block of $\vec u$, i.e., coordinates $u_{(i-1)\frac nk+1},\dots, u_{i\frac nk}$.
Consider the $\frac n k$-dimensional vector $\vec u_j$, such that $u_{j\ell} = 1$ if $\ell = 2j-1$ or $2j$, and $u_{j\ell} = 0$, otherwise.
And consider $\frac n k$-dimensional vectors $\vec z_j$ and $\vec z'_j$, such that $z_{ j\ell } = -1$ if $\ell = 2j$ and $z_{ j\ell } =1$ otherwise, and $z'_{j \ell} = -1$ if $\ell = 2j-1$ and $z'_{j \ell} = 1$ otherwise.
Consider a setting where $\vec v_i$ is restricted to the set of candidates $C_i = \{\vec v^j_i \mid (\vec v^j_i)_i = \vec u_j /\sqrt{2} \text{ and } (\vec v^j_i)_{i'} = \vec 0 \text{ for } i'\neq i \}$. In other words, the $\ell^{th}$ coordinate of $\vec v^j_i$ is $1/\sqrt{2}$ if $\ell = (i-1)\frac nk+2j-1$ or $(i-1)\frac nk+2j$, else $0$.
Furthermore, consider instances $( \vec x^j_i, \vec x'^j_i)$ such that $(\vec x^j_i)_i = \vec z_j / \sqrt{2}$ and $(\vec x'^j_i)_i = \vec z'_j/\sqrt{2}$ and for all $i'\neq i$, $(\vec x^j_i)_{i'} = (\vec x'^j_i)_{i'} = \vec 0$. In other words,
\begin{align*}
\vec x^j_i &= \frac {1}{\sqrt 2}~(0, \dots, 0, \ \ 1, \dots, 1, \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \overbrace{1 , -1 }^{(i-1)\frac nk+2j -1, (i-1)\frac nk + 2j} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!, 1, \dots, 1,\ \ 0, \dots, 0),\\
\vec x'^j_i &= \frac {1}{\sqrt 2}~(0, \dots, 0, \ \ 1, \dots, 1, -1 ,\ \ 1\ , 1, \dots, 1, \ \ 0, \dots, 0),\\
\vec v^j_i &= \frac {1}{\sqrt 2}~(0, \dots, 0, \ \underbrace{\ 0, \dots, 0, \ \ \, 1 ,\ \ 1\ , 0 , \dots,0, \ }_{i^{th}\ block} \ 0, \dots, 0).
\end{align*}
First note that, for any $i, i'\in [k]$ and any $j, j'\in [\frac{n}{2k}]$, $\vec v_i^j \cdot \vec x^{j'}_{i'} = \vec v_i^j \cdot \vec x'^{j'}_{i'}$. That is, the two views of all instances are consistent with each other with respect to all candidate vectors.
Furthermore, for any $i$ and $i'$ such that $i\neq i'$, for all $j, j'$, $\vec v_i^j \cdot \vec x^{j'}_{i'} = 0$. Therefore, any observed sample $(\vec x^j_i, \vec x'^j_i)$ is consistent with being purely of type $i$.
For a given $i$, consider all the samples $(\vec x^j_i, \vec x'^j_i)$ that are observed by the algorithm.
Note that $\vec v_i^j \cdot \vec x^j_i = \vec v_i^j \cdot \vec x'^j_i =0$. And for all $j' \neq j$, $\vec v_i^{j'} \cdot \vec x^j_i = \vec v_i^{j'} \cdot \vec x'^j_i =1$. Therefore, observing $(\vec x^j_i, \vec x'^j_i)$ only rules out $\vec v_i^j$ as a candidate, while this sample is consistent with candidates $\vec v_i^{j'}$ for $j'\neq j$. Therefore, even after observing $\leq \frac {n}{2k} -2$ samples of this type, at least $2$ possible choices for $\vec v_i$ remain valid. Moreover, the distance between any two $\vec v^j_i,\vec v^{j'}_i \in C_i$ is $\sqrt{2}$. Therefore, $\frac {n}{2k} -1$ samples are needed to learn $\vec v_i$ to an accuracy better than $\sqrt{2}/2$.
Note that consistency of the data with $\vec v_{i'}$ is not affected by the samples of type $\vec x_i^j$ that are observed by the algorithms when $i'\neq i$. So, $\Omega(k \frac nk)= \Omega(n)$ samples are required to approximate all $\vec v_i$'s to an accuracy better than $\sqrt{2}/2$.
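The inner-product claims in this construction can be verified mechanically for small hypothetical dimensions (here $n = 8$ and $k = 2$, so there are $\frac{n}{2k} = 2$ candidates per class; the coordinate conventions below follow the explicit description above):

```python
import numpy as np

n, k = 8, 2
b = n // k                 # block length n/k
J = n // (2 * k)           # number of candidate vectors per class

def v(i, j):
    """Candidate vector v_i^j (i, j are 1-indexed)."""
    u = np.zeros(n)
    u[(i - 1) * b + 2 * j - 2] = 1 / np.sqrt(2)
    u[(i - 1) * b + 2 * j - 1] = 1 / np.sqrt(2)
    return u

def views(i, j):
    """The two views (x_i^j, x'_i^j)."""
    x = np.zeros(n)
    xp = np.zeros(n)
    x[(i - 1) * b:i * b] = xp[(i - 1) * b:i * b] = 1 / np.sqrt(2)
    x[(i - 1) * b + 2 * j - 1] = -1 / np.sqrt(2)    # -1 at coordinate 2j of block i
    xp[(i - 1) * b + 2 * j - 2] = -1 / np.sqrt(2)   # -1 at coordinate 2j-1 of block i
    return x, xp

for i in range(1, k + 1):
    for j in range(1, J + 1):
        x, xp = views(i, j)
        for ii in range(1, k + 1):
            for jj in range(1, J + 1):
                # the two views are consistent with every candidate
                assert np.isclose(v(ii, jj) @ x, v(ii, jj) @ xp)
                if ii != i:
                    # the sample looks purely of type i
                    assert np.isclose(v(ii, jj) @ x, 0)
        # observing (x, xp) rules out exactly the candidate v_i^j
        assert np.isclose(v(i, j) @ x, 0)
        assert np.isclose(v(i, j % J + 1) @ x, 1)
```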
\section{Omitted Proof from Section~\ref{sec:no-noise} --- No Noise} \label{app:no-noise}
\subsection{Proof of Lemma~\ref{lem:rank}} \label{app:no-noise-rank}
For all $j\leq n-k$, let $Z_{j} = \{(\vec x^1_i - \vec x^2_i)\mid i \leq \frac{j}{\zeta} \ln\frac n \delta\}$.
We prove by induction that for all $j$, $\mathrm{rank}(Z_j) < j$ with probability at most $j\frac{\delta}{n}$.
For $j=0$, the claim trivially holds.
Now assume that the induction hypothesis holds for some $j$.
Furthermore, assume that $\mathrm{rank}(Z_j) \geq j$. Then, $\mathrm{rank}(Z_{j+1}) < j+1$ only if the additional $\frac{1}{\zeta} \ln\frac n \delta$ samples in $Z_{j+1}$ all belong to $\mathrm{span}(Z_j)$.
If $\mathrm{span}(Z_j)$ already has dimension $n-k \geq j+1$, we are done; otherwise it is a proper subspace of the $(n-k)$-dimensional span of the samples, so each fresh sample belongs to it with probability at most $1-\zeta$, and hence all of them do with probability at most $(1 - \zeta)^{\frac{1}{\zeta} \ln\frac n \delta} \leq \frac \delta n$. Together with the induction hypothesis that $\mathrm{rank}(Z_j) < j$ with probability at most $j\frac{\delta}{n}$, we have that
$\mathrm{rank}(Z_{j+1}) < j+1$ with probability at most $\frac{(j+1) \delta}{n}$.
Therefore $\mathrm{rank}(Z) = \mathrm{rank}(Z_{n-k}) = n-k$ with probability at least $1-\delta$.
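The induction above can be sanity-checked numerically, assuming a generic continuous distribution over the $(n-k)$-dimensional span of the difference vectors (so a fresh sample avoids any fixed proper subspace almost surely; all dimensions below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, m = 10, 3, 50
B = rng.standard_normal((n, n - k))       # basis of the span of the x^1 - x^2
Z = B @ rng.standard_normal((n - k, m))   # columns: simulated difference vectors
# with generic coefficients, m >> n-k samples reach full rank n-k
assert np.linalg.matrix_rank(Z) == n - k
```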
\subsection{Proof of Lemma~\ref{lem:sum-alpha-u}} \label{app:no-noise-sum-alpha-u}
First note that $V$ is the pseudo-inverse of $A$, so their spans are equal. Hence, $\sum_{i\in[k]}(\vec v_i \cdot \vec x) \vec a_i \in \mathrm{span}\{ \vec v_1, \dots, \vec v_k\} $. It remains to show that $\left( \vec x - \sum_{i\in[k]}(\vec v_i \cdot \vec x) \vec a_i \right) \in \mathrm{null}\{\vec v_1, \dots, \vec v_k\}$. We do so by showing that this vector is orthogonal to $\vec v_j$ for all $j$. We have
\begin{align*}
\left( \vec x - \sum_{i=1}^k (\vec v_i \cdot \vec x) \vec a_i \right) \cdot \vec v_j &= \vec x \cdot \vec v_j- \sum_{i=1}^k (\vec v_i \cdot \vec x) (\vec a_i \cdot \vec v_j) \\
& = \vec x \cdot \vec v_j - \sum_{i\neq j} (\vec v_i \cdot \vec x) (\vec a_i\cdot \vec v_j) - (\vec v_j \cdot \vec x) (\vec a_j\cdot \vec v_j) \\
&= \vec x \cdot \vec v_j - \vec x \cdot \vec v_j = 0.
\end{align*}
where the second equality follows from the fact that when $A = V^+$, for all $i$, $\vec a_i\cdot \vec v_i = 1$ and $\vec a_j\cdot \vec v_i = 0$ for $j\neq i$.
Therefore, $\sum_{i\in[k]}(\vec v_i \cdot \vec x) \vec a_i$ is the projection of $\vec x$ on $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$.
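The identity in this lemma is easy to verify numerically; in the sketch below the columns of \texttt{V} play the role of $\vec v_1, \dots, \vec v_k$ and the rows of its pseudo-inverse play the role of $\vec a_1, \dots, \vec a_k$ (all dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
V = rng.standard_normal((n, k))   # columns are v_1, ..., v_k
A = np.linalg.pinv(V)             # rows are a_1, ..., a_k, so a_i . v_j = delta_ij
x = rng.standard_normal(n)

# sum_i (v_i . x) a_i, computed term by term
recon = sum((V[:, i] @ x) * A[i] for i in range(k))

# orthogonal projection of x onto span{v_1, ..., v_k}
P = V @ np.linalg.pinv(V)
assert np.allclose(recon, P @ x)
```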
\subsection{Proof of Lemma~\ref{lem:extreme-no-noise}}
\label{app:no-noise-extreme-no-noise}
Assume that $S$ includes samples that are purely of type $i$ for all $i\in[k]$. That is, for all $i\in [k]$ there is $j\leq m$, such that $\vec v_i \cdot \vec x_j^1 = \vec v_i \cdot \vec x_j^2=1$ and $\vec v_{i'} \cdot \vec x_j^1 = \vec v_{i'} \cdot \vec x_j^2=0$ for $i' \neq i$.
By Lemma~\ref{lem:sum-alpha-u}, the set of projected vectors form the set $ \{ \sum_{i=1}^k (\vec v_i \cdot \vec x_j) \vec a_i \mid j\in [m] \}$.
Note that $\sum_{i=1}^k (\vec v_i \cdot \vec x_j) \vec a_i$ is in the simplex with vertices $\vec a_1, \dots, \vec a_k$. Moreover, for each $i$, there exists a pure sample in $S$ of type $i$. Therefore, $\mathrm{CH}\{ \sum_{i=1}^k (\vec v_i \cdot \vec x_j) \vec a_i \mid j\in [m] \}$ is the simplex on linearly independent vertices $\vec a_1, \dots, \vec a_k$. As a result, $\vec a_1, \dots, \vec a_k$ are the extreme points of it.
It remains to prove that with probability $1-\delta$, the sample set has a document purely of type $j$, for all $j\in[k]$. By the assumption on the probability distribution $\P$, with probability at most $(1- \xi)^m$, there is no document purely of type $j$. Using the union bound, we get the final result.
\section{Relaxing the Assumptions} \label{sec:noise}
In this section, we relax the two main simplifying assumptions from Section~\ref{sec:no-noise}.
We relax the assumption on non-noisy documents and allow a large fraction of the documents to not satisfy $\vec v_i \cdot \vec x^1= \vec v_i \cdot \vec x^2$. In the standard topic model, this corresponds to having a large fraction of short documents.
Furthermore, we relax the assumption on the existence of pure documents to an assumption on the existence of ``almost-pure'' documents.
We further develop the approach discussed in the previous section and introduce efficient algorithms that approximately recover the topic vectors in this setting.
\noindent\textbf{The Setting:}~
We assume that any sampled document has a non-negligible probability of being non-noisy and with the remaining probability,
the two views of the document are perturbed by additive Gaussian noise, independently. More formally, for a given sample $(\vec x^1, \vec x^2)$, with probability $p_0>0$ the algorithm receives $(\vec x^1, \vec x^2)$ and with the remaining probability $1-p_0$, the algorithm receives
$(\hvec{x}^1, \hvec{x}^2)$, such that $\hat{\vec x}^j = {\vec x}^j + \vec e^j$, where $\vec e^j \sim \mathcal{N}(\vec 0, \sigma^2 I_n)$.
We assume that for each topic the probability that a document is mostly about that topic is non-negligible. More formally, for any topic $i\in[k]$, $\Pr_{\vec w\sim \P} [ \| \vec e_i - \vec w\|_1 \leq \epsilon] \geq g(\epsilon)$, where $g$ is a polynomial function of its input.
A stronger form of this assumption, better known as the \emph{dominant admixture assumption}, assumes that every document is mostly about one topic and has been empirically shown to hold on several real world data sets~\citep{bansal2014provable}.
Furthermore, in the Latent Dirichlet Allocation model, $\Pr_{\vec w\sim \P} [\max_{i\in[k]} w_i \geq 1-\epsilon] = \Omega(\epsilon^2)$ for typical values of the concentration parameter.
We also make mild assumptions on the distribution over instances.
We assume that the covariance of the distribution over $(\vec x_i^1 - \vec x_i^2) (\vec x_i^1 - \vec x_i^2)^\top$ is significantly larger than the noise covariance $\sigma^2$. That is, for some $\delta_0>0$, the least significant non-zero eigenvalue of $\mathbb{E}_{(\vec x_i^1 , \vec x_i^2)} [ (\vec x_i^1 - \vec x_i^2) (\vec x_i^1 - \vec x_i^2)^\top ]$, equivalently its $(n-k)^{th}$ eigenvalue, is greater than $6 \sigma^2 + \delta_0$.
At a high level, these assumptions are necessary, because if $\| \vec x_i^1 - \vec x_i^2 \|$ is too small
compared to $\|\vec x_i^1\|$ and $\| \vec x_i^2\|$,
then even a small amount of noise affects the structure present in $\vec x_i^1 - \vec x_i^2$ completely. Moreover, we assume that the $L_2$ norm of each view of a sample is bounded by some $M>0$.
We also assume that for all $i\in [k]$, $\| \vec a_i \| \leq \alpha$ for some $\alpha>0$. At a high level, $\| \vec a_i \|$s are inversely proportional to the non-zero singular values of $V = (\vec v_1, \dots, \vec v_k)$. Therefore, $\| \vec a_i\| \leq \alpha$ implies that the $k$ topic vectors are sufficiently different.
\medskip
\noindent\textbf{Algorithm and Results:}~
Our approach follows the general theme of the previous section: First, recover
$\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ and then recover $\vec a_1, \dots, \vec a_k$ by taking the extreme points of the projected samples.
In this case, in the first phase we recover $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ approximately, by finding a projection matrix $\hat P$ such that $\|P - \hat P\|\leq \epsilon$ for an arbitrarily small $\epsilon$, where $P$ is the projection matrix on $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$.
At this point in the algorithm, the projection of samples on $\hat P$ can include points that are arbitrarily far from $\Delta$. This is due to the fact that the noisy samples are perturbed by $\mathcal{N}(\vec 0, \sigma^2I_n)$, so, for large values of $\sigma$ some noisy samples map to points that are quite far from $\Delta$. Therefore, we have to detect and remove these samples before continuing to the second phase.
For this purpose, we show that the low density regions of the projected samples can safely be removed such that the convex hull of the remaining points is close to $\Delta$.
In the second phase, we consider projections of each sample using $\hat P$.
To approximately recover $\vec a_1, \dots, \vec a_k$, we recover samples, $\vec x$, that are far from the convex hull of the remaining points, when $\vec x$ and a ball of points close to it are removed. We then show that such points are close to one of the pure class vectors, $\vec a_i$. Algorithm~\ref{alg:noise-gaussian} and the details of the above approach and its performance are as follows.
\begin{algorithm}[ht]
\caption {\textsc{Algorithm for Generalized Topic Models --- With Noise}}
\label{alg:noise-gaussian}
\textbf{Input:} A sample set $\{ ({\hat{\vec x}}_i^1 , {\hat{\vec x}}_i^2) \mid i \in[m] \}$ such that for each $i$, first a vector $\vec w$ is drawn from $\P$, then $(\vec x_i^1, \vec x_i^2)$ is drawn from $\mathcal{D}^{\vec w}$, then with probability $p_0$, $\hvec x_i^j = \vec x_i^j$, else with probability $1-p_0$, ${\hat{\vec x}}_i^j = \vec x_i^j + \mathcal{N}(\vec 0, \sigma^2 I_n)$ for $i\in [m]$ and $j\in \{1,2\}$.
\\
\textbf{Phase 1:}
\begin{enumerate}
\item Take $m_1 = \Omega \left(\frac {n-k}{\zeta} \ln(\frac 1 \delta)
+ \frac{n \sigma^4 r^2 M^2}{\delta_0^2 \epsilon^2} \ln(\frac 1 \delta)
+ \frac{n \sigma^2 M^4 r^2}{\delta_0^2 \epsilon^2} \mathrm{polylog}(\frac{nrM}{\epsilon\delta})
+ \frac{M^4}{\delta_0^2} \ln(\frac n \delta)
\right)$ samples.
\item Let $\hat X^1$ and $\hat X^2$ be matrices where the $i^{th}$ column is $\hvec x^1_i$ and $\hvec x^2_i$, respectively.
\item Let $\hat P$ be the projection matrix on the last $k$ left singular vectors of $\hat X^1 - \hat X^2$.
\end{enumerate}
\textbf{Denoising Phase:}
\begin{enumerate} \setcounter{enumi}{3}
\item Let $\epsilon' = \frac{\epsilon}{8 r}$ and $\gamma = g\left(\frac {\epsilon'}{8 k\alpha} \right)$.
\item
Take $m_2 = \Omega\left( \frac{k}{p_0\gamma} \ln \frac1\delta \right)$ fresh samples\footnotemark\ and let $\hat S_\parallel= \left\{\hat P\hvec x_i^1 \mid \forall i\in[m_2] \right\}$.
\item Remove $\hvec x_\parallel$ from $\hat S_\parallel$ if there are fewer than $p_0 \gamma m_2/2$ points of $\hat S_\parallel$ within distance $\frac{\epsilon'}{2}$ of it. \label{item:S||}
\end{enumerate}
\textbf{Phase 2:}
\begin{enumerate} \setcounter{enumi}{5}
\item For all $\hvec x_\parallel$ in $\hat S_\parallel$, if $
\mathrm{dist}(\hvec x_\parallel, \mathrm{CH}(\hat S_\parallel \setminus B_{6r\epsilon'}(\hvec x_\parallel))) \geq 2\epsilon'$, add $\hvec x_\parallel$ to $C$.
\item Cluster $C$ using single linkage with threshold $16r\epsilon'$. Assign any point from cluster $i$ as $\hat {\vec a}_i$. \end{enumerate}
\textbf{Output:}
Return $\hvec a_1, \dots, \hvec a_k$.
\end{algorithm}
\footnotetext{For the denoising step, we use a fresh set of samples that were not used for learning the projection matrix. This guarantees that the noise distribution in the projected samples remains Gaussian.}
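The single-linkage step of the algorithm can be sketched as follows (a minimal, unoptimized implementation on hypothetical 2-D points; in the algorithm it would be run on $C$ with threshold $16r\epsilon'$):

```python
import numpy as np

def single_linkage(points, thresh):
    """Greedy single-linkage: repeatedly merge clusters whose closest pair
    of points is within thresh, until no such pair of clusters remains."""
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(p - q)
                        for p in clusters[a] for q in clusters[b])
                if d <= thresh:
                    clusters[a] += clusters.pop(b)
                    merged = True
                    break
            if merged:
                break
    return clusters

# two well-separated groups of hypothetical candidate points
pts = [np.array([0.0, 0.0]), np.array([0.1, 0.0]),
       np.array([5.0, 5.0]), np.array([5.0, 5.1])]
groups = single_linkage(pts, thresh=0.5)
assert len(groups) == 2
```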
\begin{theorem}\label{thm:noise-a_i}
Consider any $\epsilon, \delta>0$ such that
$\epsilon \leq O\left(r \sigma \sqrt{ k} \right)$,
where $r$ is a parameter that depends on the geometry of the simplex $\vec a_1, \dots, \vec a_k$ and will be defined later.
There is an efficient algorithm for which an unlabeled sample set of size
\[
m = O\left(\frac {n-k}{\zeta} \ln(\frac 1 \delta)
+ \frac{n \sigma^4 r^2 M^2}{\delta_0^2 \epsilon^2} \ln(\frac 1 \delta)
+ \frac{n \sigma^2 M^4 r^2}{\delta_0^2 \epsilon^2} \mathrm{polylog}(\frac{nrM}{\epsilon\delta})
+ \frac{M^4}{\delta_0^2} \ln(\frac n \delta) +\frac{k~\ln(1 / \delta)}{p_0 ~g\!\left(\epsilon / (k r \alpha)\right)}
\right)
\]
is sufficient to recover $\hvec a_i$ such that $\| \hvec a_i - \vec a_i\|_2 \leq \epsilon$ for all $i\in[k]$, with probability $1-\delta$.
\end{theorem}
The proof of Theorem~\ref{thm:noise-a_i} involves the next three lemmas on the performance of the phases of the above algorithm. We formally state these lemmas here, but defer their proofs to Sections~\ref{sec:phase1}, \ref{sec:denoise}, and \ref{sec:phase2}.
\begin{lemma}[Phase 1]\label{lem:phase1-noise}
For any $\sigma>0$ and $\epsilon>0$, an unlabeled sample set of size
\[
m= O \left(\frac {n-k}{\zeta} \ln(\frac 1 \delta)
+ \frac{n \sigma^4}{\delta_0^2 \epsilon^2} \ln(\frac 1 \delta)
+ \frac{n \sigma^2 M^2}{\delta_0^2 \epsilon^2} \mathrm{polylog}(\frac{n}{\epsilon\delta})
+ \frac{M^4}{\delta_0^2} \ln(\frac n \delta)
\right).
\]
is sufficient, such that with probability $1-\delta$, Phase 1 of Algorithm~\ref{alg:noise-gaussian} returns a projection matrix $\hat P$, such that $\| P - \hat P\|_2 \leq \epsilon$.
\end{lemma}
\begin{lemma}[Denoising]\label{lem:phase-denoise}
Let $\epsilon' \leq \frac 13\sigma \sqrt{k}$,
$\|P - \hat P \| \leq \epsilon'/8 M$, and $\gamma = g\left(\frac {\epsilon'}{8 k\alpha} \right)$.
An unlabeled sample size of $m = O\left( \frac{k}{p_0 \gamma} \ln(\frac1\delta) \right)$ is sufficient such that for
$\hat S_\parallel$ defined in Step~\ref{item:S||} of Algorithm~\ref{alg:noise-gaussian} the following holds with probability $1-\delta$: For any $\vec x \in \hat S_\parallel$, $\mathrm{dist}(\vec x, \Delta)\leq \epsilon'$, and, for all $i\in [k]$, there exists $\hvec a_i \in \hat S_\parallel$ such that $\| \hvec a_i - \vec a_i\|\leq \epsilon'$.
\end{lemma}
\begin{lemma}[Phase 2]\label{lem:phase2-noise}
Let $\hat S_\parallel$ be a set of points for which the conclusion of Lemma~\ref{lem:phase-denoise} holds with the value of $\epsilon' = \epsilon / 8r$.
Then, Phase 2 of Algorithm~\ref{alg:noise-gaussian} returns $\hvec a_1, \dots, \hvec a_k$ such that for all $i\in [k]$, $\| \vec a_i - \hvec a_i \| \leq \epsilon$.
\end{lemma}
We now prove our main Theorem~\ref{thm:noise-a_i} by directly leveraging the three lemmas we just stated.
\begin{proof}[Proof of Theorem~\ref{thm:noise-a_i}]
By Lemma~\ref{lem:phase1-noise}, a sample set of size $m_1$ is sufficient such that Phase 1 of Algorithm~\ref{alg:noise-gaussian} leads to $\| P - \hat P \| \leq \frac{\epsilon}{32 M r}$, with probability $1-\delta/2$.
Let $\epsilon' = \frac {\epsilon}{8r}$ and take a fresh sample of size $m_2$.
By Lemma~\ref{lem:phase-denoise}, with probability $1-\delta/2$, for any $\vec x \in \hat S_\parallel$, $\mathrm{dist}(\vec x, \Delta)\leq \epsilon'$, and, for all $i\in [k]$, there exists $\hvec a_i \in \hat S_\parallel$ such that $\| \hvec a_i - \vec a_i\|\leq \epsilon'$.
Finally, applying Lemma~\ref{lem:phase2-noise} we have that Phase 2 of Algorithm~\ref{alg:noise-gaussian} returns $\hvec a_i$, such that for all $i\in[k]$, $\| \vec a_i - \hvec a_i\| \leq\epsilon$.
\end{proof}
Theorem~\ref{thm:noise-a_i} discusses the approximation of $\vec a_i$ for all $i\in[k]$. It is not hard to see that such an approximation also translates to the approximation of class vectors, $\vec v_i$ for all $i\in[k]$. That is, using the properties of perturbation of pseudoinverse matrices (see Proposition~\ref{prop:inverseperturb}) one can show that $\| \hat A^{+} - V \| \leq O(\| \hat A - A\| )$. Therefore, $\hat V = \hat A^+$ is a good approximation for $V$.
\subsection{Proof of Lemma~\ref{lem:phase1-noise} --- Phase 1} \label{sec:phase1}
For $j\in \{1, 2\}$, let $X^j$ and $\hat X^j$ be $n\times m$ matrices with the $i^{th}$ column being $\vec x^j_i$ and $\hvec x^j_i$, respectively.
As we demonstrated in Lemma~\ref{lem:rank}, with high probability $\mathrm{rank}(X^1 - X^2) = n-k$.
Note that the nullspace of columns of $X^1 - X^2$ is spanned by the left singular vectors of $X^1 - X^2$ that correspond to its $k$ zero singular values. Similarly, consider the space spanned by the $k$ least left singular vectors of $\hat X^1 - \hat X^2$.
We show that the nullspace of columns of $X^1 - X^2$ can be approximated within any desirable accuracy by the space spanned by the $k$ least left singular vectors of $\hat X^1 - \hat X^2$, given a sufficiently large number of samples.
Let $D = X^1 - X^2$ and $\hat D = \hat X^1 - \hat X^2$.
For ease of exposition, assume that all samples are perturbed by Gaussian noise $\mathcal{N}(\vec 0,\sigma^2 I_n)$.\footnote{The assumption that with a non-negligible probability a sample is non-noisy is not needed for the analysis and correctness of Phase 1 of Algorithm~\ref{alg:noise-gaussian}. This assumption only comes into play in the denoising phase.}
Since each view of a sample is perturbed by an independent draw from a Gaussian noise distribution, we can view $\hat D = D + E$, where each column of $E$ is drawn i.i.d from distribution $\mathcal{N}(\vec 0, 2 \sigma^2 I_n)$. Then,
$\frac 1m \hat D\hat D^\top = \frac 1m D D^\top + \frac 1m D E^\top + \frac 1m ED^\top +\frac 1m EE^\top.$
As a thought experiment, consider this equation in expectation. Since $\mathbb{E}[\frac 1m EE^\top] = 2 \sigma^2 I_n$ is the covariance matrix of the noise and $\mathbb{E}[DE^\top + E D^\top ] = 0$, we have
\begin{equation} \label{eq:shrinkage}
\frac 1m \mathbb{E} \left[ \hat D \hat D^\top \right] - 2\sigma^2 I_n= \frac 1m\mathbb{E}\left[D D^\top \right].
\end{equation}
Moreover, the eigenvectors and their order are the same in $\frac 1m\mathbb{E}[\hat D \hat D^\top ]$ and $\frac 1m\mathbb{E}[\hat D \hat D^\top ] -2 \sigma^2 I_n$. Therefore, one can recover the nullspace of $\frac 1m\mathbb{E}[D D^\top]$ by taking the span of the $k$ least significant eigenvectors of $\frac 1m\mathbb{E}[\hat D \hat D^\top]$.
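This shift-invariance of the eigenvectors is elementary and easy to confirm numerically (a minimal sketch with a hypothetical PSD matrix standing in for $\frac 1m\mathbb{E}[D D^\top]$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 5, 0.3
B = rng.standard_normal((n, n))
C = B @ B.T                          # hypothetical stand-in for (1/m) E[D D^T]
Chat = C + 2 * sigma**2 * np.eye(n)  # stand-in for (1/m) E[Dhat Dhat^T]

w, U = np.linalg.eigh(C)             # eigh returns eigenvalues in ascending order
what, Uhat = np.linalg.eigh(Chat)

# every eigenvalue moves up by exactly 2*sigma^2 ...
assert np.allclose(what, w + 2 * sigma**2)
# ... while the eigenvectors (up to sign) and their order are unchanged
assert np.allclose(np.abs(U.T @ Uhat), np.eye(n), atol=1e-6)
```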
Next, we show how to recover the nullspace using $\hat D \hat D^\top$, rather than $\mathbb{E}[\hat D \hat D^\top]$.
Assume that the following properties hold:
\begin{enumerate}
\item Equation~\ref{eq:shrinkage} holds not only in expectation, but also with high probability.
That is, with high probability,
$\| \frac 1m \hat D \hat D^\top - 2\sigma^2 I_n - \frac 1m D D^\top \|_2 \leq \epsilon.$
\item With high probability $\lambda_{n-k} (\frac 1m \hat D \hat D^\top) > 4\sigma^2 + \delta_0 / 2$, where $\lambda_i(\cdot)$ denotes the $i^{th}$ most significant eigenvalue.
\end{enumerate}
Let $D = U \Sigma V^\top$ and $\hat D = \hat U \hat \Sigma \hat V^\top$ be SVD representations.
We have that $\frac 1m \hat D \hat D ^\top - 2\sigma^2 I_n = \hat U (\frac 1m \hat \Sigma^2 - 2\sigma^2 I_n) \hat U^\top$. By property 2, $\lambda_{n-k}(\frac 1m\hat \Sigma^2) > 4 \sigma^2 +\delta_0/2$. That is, the eigenvectors and their order are the same in $\frac 1m \hat D\hat D^\top - 2 \sigma^2 I_n$ and $\frac 1m \hat D\hat D^\top $. As a result, the projection matrix $\hat P$ on the $k$ least significant eigenvectors of $\frac 1m \hat D \hat D^\top$ is the same as the projection matrix $Q$ on the $k$ least significant eigenvectors of $\frac 1m \hat D \hat D^\top - 2 \sigma^2 I_n$.
Recall that $\hat P$, $P$, and $Q$ are the projection matrices on the $k$ least significant eigenvectors of $\frac 1m \hat D \hat D^\top$, $\frac 1m D D^\top$, and $\frac 1m \hat D \hat D^\top- 2\sigma^2I_n$, respectively. As we discussed, $\hat P = Q$. Now, using the Davis--Kahan~\citep{davis1970rotation} or Wedin~\citep{wedin1972perturbation} $\sin\theta$ theorem (see Proposition~\ref{prop:kahan}) from
matrix perturbation theory, we have,
\begin{align*}
\| P - \hat P \|_2 = \| P - Q \| \leq \frac{\|\frac 1m \hat D \hat D^\top - 2\sigma^2 I_n - \frac 1m D D^\top\|_2 }{ \left| \lambda_{n-k}(\frac 1m \hat D \hat D^\top) - 2\sigma^2 - \lambda_{n-k+1}(\frac 1m D D^\top )\right|} \leq \frac{2\epsilon}{\delta_0}
\end{align*}
where we use Properties 1 and 2 and the fact that $\lambda_{n-k+1}(\frac 1m D D^\top ) =0$, in the last transition.
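The following sketch illustrates this $\sin\theta$ bound on a toy symmetric matrix with a hand-picked spectrum (the matrix, its dimensions, and the factor-$2$ slack in the final assertion are hypothetical choices for numerical safety, not part of the theorem):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
# symmetric matrix with a rank-(n-k) spectrum, like (1/m) D D^T
M = Q @ np.diag([0.0, 0.0, 1.0, 2.0, 3.0, 4.0]) @ Q.T
Delta = 0.01 * rng.standard_normal((n, n))
Mhat = M + (Delta + Delta.T) / 2        # small symmetric perturbation

def bottom_k_proj(S, k):
    """Projection onto the k least significant eigenvectors of S."""
    _, U = np.linalg.eigh(S)            # eigenvalues in ascending order
    return U[:, :k] @ U[:, :k].T

P, Phat = bottom_k_proj(M, k), bottom_k_proj(Mhat, k)
err = np.linalg.norm(Mhat - M, 2)
# lambda_{n-k}(Mhat) - lambda_{n-k+1}(M), the denominator in the bound
denom = np.linalg.eigvalsh(Mhat)[k] - np.linalg.eigvalsh(M)[k - 1]
# sin(theta)-style bound, with a factor-2 slack for numerical safety
assert np.linalg.norm(P - Phat, 2) <= 2 * err / denom
```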
\subsubsection{Concentration}
It remains to prove Properties 1 and 2. We briefly describe
our approach for obtaining concentration results and prove that when the number of samples $m$ is large enough, with high probability $\| \frac 1m \hat D \hat D^\top - 2\sigma^2 I_n - \frac 1m D D^\top \|_2 \leq \epsilon$ and $\lambda_{n-k} (\frac 1m \hat D \hat D^\top) > 4\sigma^2 + \delta_0 / 2$.
Let us first describe $\frac 1m \hat D \hat D^\top - 2 \sigma^2 I_n - \frac 1m D D^\top$ in terms of the error matrices. We have
\begin{equation} \label{eq:hatDexpansion}
\frac 1m \hat D \hat D^\top - 2\sigma^2 I_n - \frac 1m D D^\top = \left( \frac 1m E E^\top -2 \sigma^2 I_n \right) + \left( \frac 1m D E^\top + \frac 1m E D^\top \right).
\end{equation}
It suffices to show that for large enough $m > m_{\epsilon, \delta}$, $\Pr[ \| \frac 1m E E^\top - 2\sigma^2 I_n \|_2 \geq \epsilon] \leq \delta$ and $\Pr[ \| \frac 1m D E^\top + \frac 1m E D^\top \|_2 \geq \epsilon] \leq \delta$. In the former, note that $\frac 1m E E^\top$ is the sample covariance of the Gaussian noise matrix and $2\sigma^2 I_n$ is the true covariance matrix of the noise distribution.
The next claim is a direct consequence of the convergence properties of sample covariance of the Gaussian distribution (see Proposition~\ref{prop:covariance-gauss}).
\begin{claim} \label{claim:EE-estimate}
For $m > n \frac{\sigma^4}{\epsilon^2} \log(\frac 1 \delta)$, with probability $1-\delta$, $ \| \frac 1m E E^\top - 2\sigma^2 I_n \|_2 \leq \epsilon$. \footnote{At first sight, the dependence of this sample complexity on $\sigma$ might appear unintuitive. But, note that even without seeing any samples we can approximate the noise covariance within $2\sigma^2 I_n$. Therefore, if $\epsilon = 2\sigma^2$ our work is done.}
\end{claim}
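A quick empirical illustration of this concentration, with hypothetical small $n$ and a fixed seed:

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma, m = 5, 1.0, 50_000
# columns of E are i.i.d. N(0, 2*sigma^2 I_n), as in the noise model
E = rng.normal(scale=np.sqrt(2) * sigma, size=(n, m))
# spectral-norm distance between the sample covariance and the truth
err = np.linalg.norm(E @ E.T / m - 2 * sigma**2 * np.eye(n), 2)
assert err < 0.2   # shrinks roughly like sqrt(n/m) as m grows
```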
We use the Matrix Bernstein inequality~\citep{tropp2015introduction}, described in Appendix~\ref{app:spectral}, to demonstrate the concentration of $ \| \frac 1m D E^\top + \frac 1m E D^\top\|_2$. The proof of the next Claim is relegated to Appendix~\ref{app:claim:DE-estaimte}.
\begin{claim} \label{claim:DE-estimate}
$m = O(\frac{n \sigma^2 M^2}{\epsilon^2} \mathrm{polylog}\frac{n}{\epsilon\delta})$ is sufficient so that with probability $1-\delta$, $\left\| \frac 1m D E^\top + \frac 1m ED^\top \right\|_2 \leq \epsilon$.
\end{claim}
Next, we prove that $\lambda_{n-k} (\frac 1m \hat D \hat D^\top) > 4\sigma^2 + \delta_0 / 2$. Since for any two matrices, the difference in $\lambda_{n-k}$ can be bounded by the spectral norm of their difference (see Proposition~\ref{prop:diffeigen}), using Equation~\ref{eq:hatDexpansion}, we have
{\small
\begin{align*}
\left| \lambda_{n-k}\left(\frac 1m \hat D \hat D^\top\right) - \lambda_{n-k}\left(\frac 1m D D^\top\right) \right|
\leq \left\| 2 \sigma^2 I_n + \left( \frac 1m E E^\top - 2\sigma^2 I_n \right) + \left( \frac 1m D E^\top + \frac 1m E D^\top\right) \right\|
\leq 2 \sigma^2 + \frac{ \delta_0}{4},
\end{align*}}where in the last transition we use Claims~\ref{claim:EE-estimate} and \ref{claim:DE-estimate} with the value of $\delta_0/8$ to bound the last two terms by a total of $\delta_0/4$.
Since $\lambda_{n-k}(\mathbb{E}[\frac 1m D D^\top]) \geq 6 \sigma^2 + \delta_0$, it is sufficient to show that $|\lambda_{n-k}(\mathbb{E}[\frac 1m D D^\top]) - \lambda_{n-k}(\frac 1m D D^\top) | \leq \delta_0/4$. As before, this is bounded by $\| \frac 1m D D^\top - \mathbb{E}[\frac 1m D D^\top] \|$. We use the Matrix Bernstein inequality (Proposition~\ref{prop:bernstein}) to prove this concentration result. The rigorous proof of this claim appears in Appendix~\ref{app:estimateDD}.
\begin{claim}\label{claim:DD-estimate}
$m = O\left( \frac{M^4}{\delta_0^2} \log \frac n \delta \right)$ is sufficient so that with probability $1-\delta$,
$\left\| \frac 1m D D^\top - \mathbb{E}\left[ \frac 1m D D^\top \right] \right\|_2 \leq \frac {\delta_0}{4}$.
\end{claim}
This completes the analysis of Phase 1 of our algorithm and the proof of Lemma~\ref{lem:phase1-noise} follows directly from the above analysis and the application of Claims~\ref{claim:EE-estimate} and \ref{claim:DE-estimate} with the error of $\epsilon \delta_0$, and Claim~\ref{claim:DD-estimate}.
\subsection{Proof of Lemma~\ref{lem:phase-denoise} --- Denoising Step} \label{sec:denoise}
Having approximately recovered a projection matrix $\hat P$ for $\mathrm{span} \{\vec v_1, \dots, \vec v_k \}$, we can now use this subspace to partially denoise the samples while approximately preserving $\Delta = \mathrm{CH}(\{\vec a_1, \dots, \vec a_k\})$.
At a high level, when considering the projection of samples on $\hat P$, one can show that 1)
the regions around $\vec a_i$ have sufficiently high density, and, 2) the regions that are far from $\Delta$ have low density.
We claim that if $\hvec x_\parallel \in \hat S_\parallel$ is \emph{non-noisy and corresponds almost purely to one class} then $\hat S_\parallel$ also includes a non-negligible number of points within $O(\epsilon')$ distance of $\hvec x_\parallel$.
This is due to the fact that a non-negligible number of points (about $p_0 \gamma m$ points) correspond to non-noisy and almost-pure samples that using $P$ would get projected to points within a distance of $O(\epsilon')$ of each other.
Furthermore, the inaccuracy in $\hat P$ can only perturb the projections up to $O(\epsilon')$ distance. So, the projections of all non-noisy samples that are purely of class $i$ fall within $O(\epsilon')$ of $\vec a_i$. The following claim, whose proof appears in Appendix~\ref{app:claim:high-density}, formalizes this.
In the following claims, let $D$ denote the flattened distribution of the first views. That is, the distribution over $\hvec x^1$ where we first take $\vec w\sim \P$, then take $(\vec x^1, \vec x^2) \sim \mathcal{D}^{\vec w}$, and finally observe the (possibly noisy) $\hvec x^1$.
\begin{claim} \label{claim:high-dense}
For all $i\in [k]$,
$\Pr_{\vec x \sim D}\left[ \hat P\vec x \in B_{\epsilon'/4}(\vec a_i)\right] \geq p_0 \gamma.$
\end{claim}
On the other hand, any projected point that is far from the convex hull of $\vec a_1, \dots, \vec a_k$ has to be noisy, and as a result, has been generated by a Gaussian distribution with variance $\sigma^2$. For a choice of $\epsilon'$ that is small with respect to $\sigma$, such points do not concentrate well within any ball of radius $\epsilon'$.
In the next lemma we show that the regions that are far from the convex hull have low density.
\begin{claim} \label{claim:low-dense}
For any $\vec z$ such that $\mathrm{dist}(\vec z, \Delta) \geq \epsilon'$, we have
$\Pr_{\vec x \sim D}\left[ \hat P \vec x \in B_{\epsilon'/2}(\vec z)\right] \leq \frac{p_0 \gamma}{4}.
$
\end{claim}
\noindent{\it Proof.}~
We first show that $B_{\epsilon'/2}(\vec z)$ does not include any non-noisy points. Take any non-noisy sample $\vec x$. Note that $P\vec x = \sum_{i=1}^k w_i \vec a_i$, where $w_i$ are the mixture weights corresponding to point $\vec x$. We have,
\[ \left\| \vec z - \hat P \vec x \right\| = \left\| \vec z - \sum_{i=1}^k w_i \vec a_i + (P - \hat P) \vec x \right\| \geq
\left\| \vec z - \sum_{i=1}^k w_i \vec a_i \right\| - \| P - \hat P\| \|\vec x\| \geq \epsilon' - \frac{\epsilon'}{8} \geq \frac{\epsilon'}{2}.
\]
\begin{wrapfigure}[11]{r}{0.35\textwidth}
\vspace{-1cm}
\begin{center}
\includegraphics[width=0.29\textwidth]{gauss2.png}
\end{center}
\vspace{-0.6cm}\caption{\small Density is maximized when blue and red gaussians coincide and the ball is at their center.}
\label{fig:gauss2}
\end{wrapfigure}
Therefore, $B_{\epsilon'/2}(\vec z)$ only contains noisy points.
Since noisy points are perturbed by a spherical
Gaussian, the projections of these points onto any $k$-dimensional subspace can be thought of as points generated from $k$-dimensional Gaussian distributions with variance $\sigma^2$ and potentially different centers.
One can show that the densest ball of any radius is at the center of a Gaussian.
Here, we prove a slightly weaker claim.
Consider one such Gaussian distribution, $\mathcal{N}(\vec 0, \sigma^2 I_k)$. Note that the pdf of the Gaussian distribution decreases as we get farther from its center. By a coupling argument on the densities, $B_{\epsilon'/2}(\vec 0)$ has higher density than any $B_{\epsilon'/2}(\vec c)$ with $\|\vec c\|_2> \epsilon'$.
Therefore,
\[ \sup_{\vec c} \Pr_{\vec x\sim \mathcal{N}(\vec 0, \sigma^2I_k)}[\vec x\in B_{\epsilon'/2}(\vec c)] \leq \Pr_{\vec x\sim \mathcal{N}(\vec 0, \sigma^2I_k)}[\vec x \in B_{3\epsilon'/2}(\vec 0)].
\]
So, over $D$ this value is maximized when the Gaussians share the same center (see Figure~\ref{fig:gauss2}). Moreover, in $\mathcal{N}(\vec 0, \sigma^2 I_k)$,
$\Pr[\|\vec x\|_2 \leq \sigma \sqrt{ k (1- t)}]\leq \exp(-k t^2/ 16).
$
Since
$3 \epsilon'/2 \leq \sigma\sqrt{k} /2 \leq \sigma \sqrt{ k (1- \sqrt{\frac{16}{k} \ln\frac{4}{p_0 \gamma}})}$ we have
\[ \Pr_{\vec x \sim D} [\hat P \vec x\in B_{\epsilon'/2}(\vec c)] \leq \Pr_{\vec x\sim \mathcal{N}(\vec 0, \sigma^2I_k)}[\|\vec x\|_2 \leq 3\epsilon'/2] \leq \frac{p_0 \gamma}{4}.\]
\qed
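The fact used above---that a ball of fixed radius captures the most Gaussian mass when centered at the mean---can be checked with a quick Monte Carlo sketch; the dimension, radius, and offset below are illustrative choices, not the parameters of the proof:

```python
import random

def ball_mass(center, radius, sigma, k, n_samples, rng):
    """Monte Carlo estimate of Pr[x in B_radius(center)] for x ~ N(0, sigma^2 I_k)."""
    hits = 0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, sigma) for _ in range(k)]
        if sum((xi - ci) ** 2 for xi, ci in zip(x, center)) <= radius ** 2:
            hits += 1
    return hits / n_samples

rng = random.Random(0)
k, sigma, r = 3, 1.0, 1.0
m_center = ball_mass([0.0] * k, r, sigma, k, 20000, rng)       # ball at the mean
m_offset = ball_mass([2.0] + [0.0] * (k - 1), r, sigma, k, 20000, rng)  # offset ball
assert m_center > m_offset  # the centered ball holds noticeably more mass
```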
The next claim shows that in a large sample set, the fraction of samples that fall within any of the described regions in Claims~\ref{claim:high-dense} and \ref{claim:low-dense} is close to the density of that region. The proof of this claim follows from VC dimension of the set of balls.
\begin{claim} \label{claim:vc}
Let $D$ be any distribution over $\mathbb{R}^k$ and $\vec x_1, \dots, \vec x_m$ be $m$ points drawn i.i.d from $D$. Then $m = O(\frac{k}{\gamma} \ln \frac{1}{\delta})$ is sufficient so that with probability $1-\delta$, for any ball $B \subseteq \mathbb{R}^k$ such that
$\Pr_{\vec x\sim D}[\vec x\in B] \geq 2\gamma$, $| \{\vec x_i \mid \vec x_i \in B\} |> \gamma m$ and for any
ball $B \subseteq \mathbb{R}^k$ such that
$\Pr_{\vec x\sim D}[\vec x\in B] \leq \gamma/2$, $| \{\vec x_i \mid \vec x_i \in B\} | < \gamma m$.
\end{claim}
Therefore, upon seeing $\Omega(\frac{k}{p_0 \gamma} \ln \frac1\delta)$ samples, with probability $1-\delta$, for all $i\in[k]$ there are more than $p_0 \gamma m/2$ projected points within distance $\epsilon'/4$ of $\vec a_i$ (by Claims~\ref{claim:high-dense} and \ref{claim:vc}), and, no point that is $\epsilon'$ far from $\Delta$ has more than $p_0 \gamma m/2$ points in its $\epsilon'/2$-neighborhood (by Claims~\ref{claim:low-dense} and \ref{claim:vc}).
Phase 2 of Algorithm~\ref{alg:noise-gaussian} leverages these properties of the set of projected points for denoising the samples while preserving $\Delta$: Remove any point from $\hat S_\parallel$ that has fewer than $p_0 \gamma m/2$ neighbors within distance $\epsilon'/2$.
We conclude the proof of Lemma~\ref{lem:phase-denoise} by noting that the remaining points in $\hat S_\parallel$ are all within distance $\epsilon'$ of $\Delta$. Furthermore, any point in $B_{\epsilon'/4}(\vec a_i)$ has more than $p_0 \gamma m/2$ points within distance of $\epsilon'/2$. Therefore, such points remain in $\hat S_\parallel$ and any one of them can serve as $\hvec a_i$ for which $\| \vec a_i - \hvec a_i\|\leq \epsilon'/4$.
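The denoising rule of Phase 2 is just a neighbor count. A minimal sketch of the rule on synthetic two-dimensional data, where the cluster, radius, and threshold are illustrative stand-ins for $\hat S_\parallel$, $\epsilon'/2$, and $p_0\gamma m/2$:

```python
import random

def denoise(points, radius, min_neighbors):
    """Keep the points that have at least `min_neighbors` other points
    within distance `radius` (the Phase-2 filtering rule)."""
    kept = []
    for p in points:
        count = sum(
            1 for q in points
            if p is not q and sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
        )
        if count >= min_neighbors:
            kept.append(p)
    return kept

rng = random.Random(1)
# A dense cluster near a "pure" point a_1 = (1, 0), plus isolated noisy points.
cluster = [(1.0 + rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05)) for _ in range(50)]
outliers = [(5.0, 5.0), (-4.0, 3.0)]
kept = denoise(cluster + outliers, radius=0.2, min_neighbors=10)
assert all(o not in kept for o in outliers) and len(kept) == 50
```

All cluster points survive the filter (each has 49 neighbors within the radius), while the isolated noisy points are removed.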
\subsection{Proof of Lemma~\ref{lem:phase2-noise} --- Phase 2} \label{sec:phase2}
\begin{figure}
\centering
\begin{subfigure}{0.55\textwidth}
\centering
\vspace*{-0.5cm}
\includegraphics[width=0.85\textwidth]{noiseysimplex5.png}
\caption{}
\label{fig:noisysimplex}
\end{subfigure}
~
\begin{subfigure}{0.4 \textwidth}
\centering
\includegraphics[width=0.75\textwidth]{skew2.png}
\caption{}
\label{fig:skew}
\end{subfigure}
\caption{\small (a)~Demonstrating the distinction between points close to and far from the $\vec a_i$'s. The convex hull $\mathrm{CH}(\hat S_{\parallel} \setminus B_{r_2}(\hvec x))$, which is a subset of the blue and gray region, intersects $B_{r_1}(\hvec x)$ only for $\hvec x$ sufficiently far from the $\vec a_i$'s. (b)~Parameter $r$ is determined by the geometry of $\Delta$.}
\end{figure}
At a high level, we consider two balls around each projected sample point $\hvec x \in \hat S_\parallel$ with an appropriate choice of radii $r_1< r_2$ (see Figure~\ref{fig:noisysimplex}).
Consider the set of projections $\hat S_\parallel$ after the points in $B_{r_2}(\hvec x)$ are removed from it.
For $\hvec x$ that is far from all $\vec a_i$, this set still includes points that are close to $\vec a_i$ for every topic $i\in [k]$.
So, the convex hull of $\hat S_\parallel \setminus B_{r_2}(\hvec x)$ is close to $\Delta$, and in particular, intersects $B_{r_1}(\hvec x)$.
On the other hand, for $\hvec x$ that is close to some $\vec a_i$, $\hat S_\parallel \setminus B_{r_2}(\hvec x)$ includes neither that extreme point of $\Delta$ nor the points close to it. So,
the convex hull of $\hat S_\parallel \setminus B_{r_2}(\hvec x)$ is considerably smaller than $\Delta$, and in particular, does not intersect $B_{r_1}(\hvec x)$.
The geometry of the simplex and the angles between $\vec a_1, \dots, \vec a_k$ play an important role in choosing appropriate $r_1$ and $r_2$. Note that when the samples are perturbed by noise, $\vec a_1, \dots, \vec a_k$
can only be approximately recovered if they are sufficiently far apart and the angle of the simplex at each $\vec a_i$ is far from flat.
That is, we assume that for all $i\neq j$, $\| \vec a_i - \vec a_j \| \geq 3 \epsilon$.
Furthermore, define $r\geq 1$ to be the smallest value such that
the distance between $\vec a_i$ and $\mathrm{CH}( \Delta \setminus B_{ r \epsilon}(\vec a_i))$ is at least $\epsilon$.
Note that such a value of $r$ always exists and depends entirely on the angles of the simplex defined by the class vectors. Therefore, the number of samples needed for our method depends on the value of $r$. The smaller the value of $r$, the larger the separation between the topic vectors and the easier it is to identify them.
See Figure~\ref{fig:skew} for a demonstration of this concept.
\begin{claim} \label{claim:extreme-noisy}
Let $\epsilon' = \epsilon/8r$.
Let $\hat S_{\parallel}$ be the set of denoised projections, as in step~\ref{item:S||} of Algorithm~\ref{alg:noise-gaussian}.
For any $\hvec x\in \hat S_{\parallel}$ such that for all $i$, $\| \hvec x - \vec a_i \| > 8r\epsilon'$,
$\mathrm{dist}(\hvec x, \mathrm{CH}(\hat S_\parallel \setminus B_{6r\epsilon'}(\hvec x) ) ) \leq 2\epsilon'$.
Furthermore, for all $i\in[k]$ there exists $\hvec a_i\in \hat S_\parallel$ such that $\| \hvec a_i - \vec a_i \| < \epsilon'$ and
$\mathrm{dist}(\hvec a_i , \mathrm{CH}(\hat S_\parallel \setminus B_{6r\epsilon'}(\hvec a_i ))) > 2\epsilon'$.
\end{claim}
\begin{proof}
Recall that by Lemma~\ref{lem:phase-denoise}, for any $\hvec x\in \hat S_\parallel$ there exists $\vec x\in \Delta$ such that $\| \hvec x - \vec x\| \leq \epsilon'$ and for all $i\in[k]$, there exists $\hvec a_i \in \hat S_\parallel$ such that $\| \hvec a_i - \vec a_i\|\leq \epsilon'$.
For the first part, let $\vec x = \sum_i \alpha_i \vec a_i \in \Delta$ be the corresponding point to $\hvec x$, where $\alpha_i$'s are the coefficients of the convex combination.
Furthermore, let $\vec x' = \sum_i \alpha_i \hvec a_i$.
We have,
\[ \| \vec x' - \hvec x\| = \left\| \sum_{i=1}^k \alpha_i (\hvec a_i - \vec a_i) + \vec x - \hvec x \right\| \leq \max_{i\in[k]} \left\| \hvec a_i - \vec a_i \right\| + \left\|\vec x - \hvec x \right\| \leq 2\epsilon'.
\]
The first claim follows from the fact that $\| \hvec x - \vec a_i\| > 8r\epsilon'$ and as a result $\vec x' \in \mathrm{CH}(\hat S_\parallel \setminus B_{6r\epsilon'}(\hvec x))$.
Next, note that $B_{4r\epsilon'}(\vec a_i) \subseteq B_{5r\epsilon'} (\hvec a_i)$.
So, by the fact that $\| \vec a_i - \hvec a_i\| \leq \epsilon'$,
\[ \mathrm{dist} \left( \hvec a_i, \mathrm{CH}(\Delta \setminus B_{5r\epsilon'}(\hvec a_i) ) \right)\geq
\mathrm{dist}\left( \vec a_i, \mathrm{CH}(\Delta \setminus B_{4r\epsilon'}(\vec a_i)) \right) - \epsilon'\geq 3\epsilon'.
\]
Furthermore, we argue that if there is $\hvec x \in \mathrm{CH}(\hat S_\parallel\setminus B_{5r\epsilon'}(\hvec a_i))$ then there exists $\vec x\in \mathrm{CH}(\Delta \setminus B_{4r\epsilon'}(\hvec a_i))$, such that $\| \vec x - \hvec x\|\leq \epsilon'$. The proof of this claim is relegated to Appendix~\ref{app:CH_claim}. Using this claim, we have
$\mathrm{dist}\left( \hvec a_i, \mathrm{CH}(\hat S_\parallel \setminus B_{6r\epsilon'}(\hvec a_i)) \right) \geq 2\epsilon'.
$
\end{proof}
Given the above structure, it is clear that the points in $C$ are all within $\epsilon$ of one of the $\vec a_i$'s. So, we can cluster $C$ using single linkage with threshold $\epsilon$ to recover each $\vec a_i$ up to accuracy $\epsilon$.
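The two-ball test described above can be sketched as follows. The point set and radii are illustrative, and the distance to the convex hull is computed with a simple projected-gradient solver rather than the linear program used by the algorithm:

```python
import math

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def dist_to_hull(x, pts, steps=3000, lr=0.05):
    """Projected-gradient estimate of dist(x, CH(pts))."""
    lam = [1.0 / len(pts)] * len(pts)
    for _ in range(steps):
        y = [sum(l * p[d] for l, p in zip(lam, pts)) for d in range(len(x))]
        resid = [yd - xd for yd, xd in zip(y, x)]
        grad = [sum(resid[d] * p[d] for d in range(len(x))) for p in pts]
        lam = project_to_simplex([l - lr * g for l, g in zip(lam, grad)])
    y = [sum(l * p[d] for l, p in zip(lam, pts)) for d in range(len(x))]
    return math.dist(x, y)

def looks_extreme(x, sample, r1, r2):
    """Two-ball test: x is a candidate corner of the simplex iff
    CH(sample minus B_{r2}(x)) stays farther than r1 from x."""
    far = [p for p in sample if math.dist(p, x) > r2]
    return dist_to_hull(x, far) > r1

sample = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),              # corners of Delta
          (0.3, 0.3), (0.5, 0.2), (0.2, 0.5), (0.4, 0.4)]  # interior points
assert looks_extreme((1.0, 0.0), sample, 0.3, 0.35)        # corner is flagged
assert not looks_extreme((0.3, 0.3), sample, 0.3, 0.35)    # interior point is not
```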
\section{Additional Results, Extensions, and Open Problems}
\subsection{Sample Complexity Lower bound} \label{sec:lower-bound}
As we observed, the number of samples required by our method is $\mathrm{poly}(n)$.
However, since the number of classes can be much smaller than the number of features, one might hope to recover $\vec v_1, \dots, \vec v_k$ with a number of samples that is polynomial in $k$ rather than in $n$.
Here, we show that in the general case $\Omega(n)$ samples are needed to learn $\vec v_1, \dots, \vec v_k$, regardless of the value of $k$.
For ease of exposition, let $k=1$ and note that in this case every sample should be purely of one type.
Assume that the class vector, $\vec v$, is promised to be in the set $C = \{\vec v^j \mid v^j_\ell = 1/ \sqrt{2}, \text{ if } \ell = 2j-1 \text{ or } 2j, \text{ else } v^j_\ell = 0\}$.
Consider instances $(\vec x_j^1, \vec x^2_j)$ such that the $\ell^{th}$ coordinate of $\vec x_j^1$ is $x^1_{j \ell} = -1/\sqrt{2}$ if $\ell = 2j-1$ and $1/\sqrt{2}$ otherwise, and $x^2_{j\ell} = -1/\sqrt{2}$ if $\ell = 2j$ and $1/\sqrt{2}$ otherwise.
For a given $(\vec x_j^1, \vec x_j^2)$, we have that $\vec v^j \cdot \vec x_j^1 = \vec v^j \cdot \vec x_j^2 = 0$. On the other hand, for all $\ell\neq j$, $\vec v^\ell \cdot \vec x_j^1 = \vec v^\ell \cdot \vec x_j^2 = 1$. Therefore, sample $(\vec x_j^1, \vec x_j^2)$ is consistent with $\vec v = \vec v^\ell$ for any $\ell\neq j$, but not with $\vec v = \vec v^j$. That is, each instance $(\vec x_j^1, \vec x^2_j)$ renders only one candidate of $C$ invalid.
Even after observing at most $\frac n 2 -2$ samples of this type, at least $2$ possible choices for $\vec v$ remain. So, $\Omega(n)$ samples are indeed needed to find the appropriate $\vec v$.
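The computations behind this construction are easy to verify numerically; a minimal sketch with an illustrative value of $n$ (recall that for $k=1$, consistency with a pure sample requires $\vec v \cdot \vec x^1 = \vec v \cdot \vec x^2 = 1$):

```python
import math

def v(j, n):
    """Candidate vector v^j: 1/sqrt(2) at coordinates 2j-1 and 2j (1-indexed)."""
    s = 1.0 / math.sqrt(2.0)
    return [s if ell in (2 * j - 1, 2 * j) else 0.0 for ell in range(1, n + 1)]

def views(j, n):
    """The two views (x_j^1, x_j^2) of sample j from the construction."""
    s = 1.0 / math.sqrt(2.0)
    x1 = [-s if ell == 2 * j - 1 else s for ell in range(1, n + 1)]
    x2 = [-s if ell == 2 * j else s for ell in range(1, n + 1)]
    return x1, x2

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = 8
for j in range(1, n // 2 + 1):
    x1, x2 = views(j, n)
    # sample j rules out v^j (dot product 0, not 1) ...
    assert abs(dot(v(j, n), x1)) < 1e-9 and abs(dot(v(j, n), x2)) < 1e-9
    # ... but is consistent with every other candidate v^ell
    for ell in range(1, n // 2 + 1):
        if ell != j:
            assert abs(dot(v(ell, n), x1) - 1.0) < 1e-9
            assert abs(dot(v(ell, n), x2) - 1.0) < 1e-9
```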
The next theorem, whose proof appears in Appendix~\ref{app:lower-bound}, generalizes this construction and result to arbitrary $k$.
\begin{theorem} \label{thm:lower}
For any $k \leq n$, any algorithm that for all $i\in [k]$ learns $\vec v'_i$ such that $\| \vec v_i - \vec v'_i \|_2\leq 1/\sqrt 2$, requires $\Omega(n)$ samples.
\end{theorem}
Note that in the above construction samples have large components in the irrelevant features.
It would be interesting to see if this lower bound can be circumvented using additional natural assumptions in this model, such as assuming that the samples have length $\mathrm{poly}(k)$.
\subsection{Alternative Noise Models} \label{sec:agnostic}
Consider the problem of recovering $\vec v_1, \dots, \vec v_k$ in the presence of agnostic noise, where for an $\epsilon$ fraction of the samples $(\vec x^1, \vec x^2)$, $\vec x^1$ and $\vec x^2$ correspond to different mixture weights.
Furthermore, assume that the distribution over the instance space is rich enough such that any subspace other than $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ is inconsistent with a set of instances of non-negligible density.\footnote{This assumption is similar to the richness assumption made in the standard case, where we assume that there is enough ``entropy'' between the two views of the samples such that even in the non-noisy case the subspace can be uniquely determined by taking the nullspace of $X_1 - X_2$.}
Since the VC dimension of the set of $k$-dimensional subspaces in $\mathbb{R}^n$ is $\min\{ k , n-k\}$, from an information-theoretic point of view, one can recover $\mathrm{span}\{ \vec v_1, \dots, \vec v_k\}$ as it is the only subspace that is inconsistent with fewer than an $O(\epsilon)$ fraction of $\tilde O(\frac{k}{\epsilon^2})$ samples. Furthermore, we can detect and remove any noisy sample for which the two views are not consistent with $\mathrm{span}\{ \vec v_1, \dots, \vec v_k \}$. Finally, we can recover $\vec a_1, \dots, \vec a_k$ using Phase 2 of Algorithm~\ref{alg:noisefree}.
In the above discussion, it is clear that once we have recovered $\mathrm{span}\{ \vec v_1, \dots, \vec v_k\}$, denoising and finding the extreme points of the projections can be done in polynomial time.
For the problem of recovering a $k$-dimensional nullspace, \cite{hardt2013algorithms} introduced an efficient algorithm that tolerates agnostic noise up to $\epsilon = O(k/n)$. Furthermore, they provide evidence that this result might be tight. It would be interesting to see whether the additional structure present in our model, such as the fact that samples are convex combinations of classes, can allow us to efficiently recover the nullspace in the presence of more noise.
Another interesting open problem is whether it is possible to handle the case of $p_0=0$.
That is, when \emph{every document} is affected by Gaussian noise $\mathcal{N}(0, \sigma^2I_n)$, for $\sigma \gg \epsilon$. A simpler form of this problem is as follows. Consider a distribution induced by first drawing $\vec x \sim D$, where $D$ is an arbitrary and unknown distribution over $\Delta = \mathrm{CH}(\{\vec a_1, \dots, \vec a_k\})$, and taking $\hvec x = \vec x + \mathcal{N}(0, \sigma^2I_n)$. \emph{Can we learn $\vec a_i$'s within error of $\epsilon$ using polynomially many samples?}
Note that when $D$ is only supported on the corners of $\Delta$, this problem reduces to learning mixture of Gaussians, for which there is a wealth of literature on estimating Gaussian means and mixture weights~\citep{dasgupta2002pac,kalai2012disentangling,moitra2010settling}. It would be interesting to see under what regimes $\vec a_i$ (and not necessarily the mixture weights) can be learned when $D$ is an arbitrary distribution over $\Delta$.
\subsection{General function $f(\cdot)$}
Consider the general model described in Section~\ref{sec:model}, where $f_i(x) = f(\vec v_i \cdot \vec x)$ for an unknown strictly increasing function $f:\mathbb{R}^+ \rightarrow [0,1]$ such that $f(0) = 0$.
We describe how variations of the techniques discussed up to now can extend to this more general setting.
For ease of exposition, consider the non-noisy case.
Since $f$ is a strictly increasing function, $f(\vec v_i \cdot \vec x^1) = f(\vec v_i \cdot \vec x^2)$ if and only if
$\vec v_i \cdot \vec x^1 = \vec v_i \cdot \vec x^2$. Therefore, we can recover $\mathrm{span}(\vec v_1, \dots, \vec v_k)$ by the same approach as in Phase 1 of Algorithm~\ref{alg:noisefree}.
Although, by definition of pseudoinverse matrices, the projection of $\vec x$ is still represented by $\vec x_\parallel = \sum_i (\vec v_i \cdot \vec x) \vec a_i$, this is not necessarily a convex combination of $\vec a_i$'s anymore. This is due to the fact that $\vec v_i \cdot \vec x$ can add up to values larger than $1$ depending on $\vec x$.
However, $\vec x_\parallel$ is still a \emph{non-negative combination} of $\vec a_i$'s.
Moreover, $\vec a_i$'s are linearly independent, so $\vec a_i$ cannot be expressed by a nontrivial non-negative combination of other samples. Therefore, for all $i$, $\vec a_i / \| \vec a_i \|$ can be recovered by taking \emph{the extreme rays of the convex cone} of the projected samples. So, we can recover $\vec v_1, \dots, \vec v_k$ by taking the pseudoinverse of the matrix with columns $\vec a_i / \| \vec a_i \|$ and re-normalizing the outcome so that $\| \vec v_i\|_2=1$. When samples are perturbed by noise, a similar argument that also takes into account the smoothness of $f$ proves similar results.
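The extreme-ray step can be illustrated for $k=2$ in the projected plane, where the extreme rays of the cone are simply the directions of extreme angle; for general $k$ one would certify extreme rays with a linear program. The vectors and scales below are illustrative:

```python
import math
import random

rng = random.Random(3)
a1, a2 = (1.0, 0.0), (0.0, 1.0)   # projections of pure samples (illustrative)

# projected samples are non-negative (not convex) combinations c1*a1 + c2*a2
coeffs = [(rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)) for _ in range(30)]
samples = [(c1 * a1[0] + c2 * a2[0], c1 * a1[1] + c2 * a2[1]) for c1, c2 in coeffs]
samples += [(0.7, 0.0), (0.0, 1.3)]   # pure samples of each class, arbitrary scale

# for k = 2 the extreme rays of the cone are the directions of extreme angle
angles = [math.atan2(y, x) for x, y in samples]
lo = samples[angles.index(min(angles))]
hi = samples[angles.index(max(angles))]

def normalize(p):
    nrm = math.hypot(*p)
    return (p[0] / nrm, p[1] / nrm)

# the normalized extreme rays recover a_i / ||a_i||
assert math.dist(normalize(lo), a1) < 1e-12
assert math.dist(normalize(hi), a2) < 1e-12
```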
It would be interesting to see whether a more general class of similarity functions, such as kernels, can be also learned in this context.
\section{Introduction}
Topic modeling is an area with significant recent work in the intersection of algorithms and machine learning \citep{arora2012computing,arora2012learning,arora2013practical,anandkumar2012spectral,anandkumar2014tensor, bansal2014provable}. In topic modeling, a topic (such as sports, business, or politics) is modeled as a probability distribution over words, expressed as a vector $\vec a_i$. A document is generated by first selecting a mixture $\vec w$ over topics, such as 80\% sports and 20\% business, and then choosing words i.i.d. from the associated mixture distribution, which in this case would be $0.8 \vec a_{sports} + 0.2 \vec a_{business}$. Given a large collection of such documents (and some assumptions about the distributions $\vec a_i$ as well as the distribution over mixture vectors $\vec w$) the goal is to recover the topic vectors $\vec a_i$ and then to use the $\vec a_i$ to correctly classify new documents according to their topic mixtures.
Algorithms for this problem have been developed with strong provable guarantees even when documents consist of only two or three words each \cite{arora2012learning,anandkumar2012spectral,papadimitriou1998latent}. In addition, algorithms based on this problem formulation perform well empirically on standard datasets \cite{blei2003latent,hofmann1999probabilistic}.
As a theoretical model for document generation, however, an obvious problem with the standard topic modeling framework is that documents are not actually created by independently drawing words from some distribution. Better would be a model in which {\em sentences} are drawn i.i.d. from a distribution over sentences (this would at least produce grammatical objects and allow for meaningful correlation among related words within a topic, like {\sf shooting} a {\sf free throw} or {\sf kicking} a {\sf field goal}). Even better would be {\em paragraphs} drawn i.i.d. from a distribution over paragraphs (this would at least produce coherent paragraphs). Or, even better, how about a model in which paragraphs are drawn non-independently, so that the second paragraph in a document can depend on what the first paragraph was saying, though presumably with some amount of additional entropy as well? This is the type of model we study here.
Note that an immediate problem with considering such a model is that now the task of learning an explicit distribution (over sentences or paragraphs) is hopeless. While a distribution over words can be reasonably viewed as a probability vector, one could not hope to learn or even represent an explicit distribution over sentences or paragraphs. Indeed, except in cases of plagiarism, one would not expect to see the same paragraph twice in the entire corpus. Moreover, this is likely to be true even if we assume paragraphs have some natural feature-vector representation.
Instead, we bypass this issue by aiming to directly learn a predictor for documents---that is, a function that given a document, predicts its mixture over topics---without explicitly learning topic distributions. Another way to think of this is that our goal is not to learn a model that could be used to {\em write} a new document, but instead just a model that could be used to {\em classify} a document written by others. This is much as in standard supervised learning where algorithms such as SVMs learn a decision boundary (such as a linear separator) for making predictions on the labels of examples without explicitly learning the distributions $D_+$ and $D_-$ over positive and negative examples respectively. However, our setting is {\em un}supervised (we are not given labeled data containing the correct classifications of the documents in the training set) and furthermore, rather than each data item belonging to one of the $k$ classes (topics), each data item belongs to a {\em mixture} of the $k$ topics. Our goal is given a new data item to output what that mixture is.
We begin by describing our high level theoretical formulation. This formulation can be viewed as a generalization both of standard topic modeling and of a setting known as {\em multi-view learning} or {\em co-training} \cite{blum1998combining,dasgupta2002pac,SSL10,balcan2004co,sun13}. We then describe several natural assumptions under which we can indeed efficiently solve the problem, learning accurate topic mixture predictors.
\section{An Easier Case with Simplifying Assumptions} \label{sec:no-noise}
We make two main simplifying assumptions in this section, both of which will be relaxed in Section~\ref{sec:noise}: 1) The documents are not noisy, i.e., $\vec x^1\cdot \vec v_i = \vec x^2\cdot\vec v_i$; 2) There is
non-negligible probability density on instances that belong purely to one class.
In this section we demonstrate ideas and techniques, which we will develop further in the next section, to learn the topic vectors from a corpus of unlabeled documents.
\medskip
\noindent\textbf{The Setting:}~
We make the following assumptions.
The documents are not noisy, that is for any document $(\vec x^1, \vec x^2)$ and for all $i\in[k]$, $\vec x^1\cdot \vec v_i = \vec x^2\cdot\vec v_i$.
Regarding distribution $\P$, we assume that a non-negligible probability density is assigned to pure samples for each class. More formally, for some $\xi > 0$, for all $i\in[k]$, $\Pr_{\vec w \sim \P}[ \vec w = \vec e_i] \geq \xi$.
Regarding distribution $\mathcal{D}^{\vec w}$, we allow the two paragraphs in a document, i.e., the two views $(\vec x^1, \vec x^2)$ drawn from $\mathcal{D}^{\vec w}$, to be correlated as long as
for any subspace $Z \subset \mathrm{null}\{ \vec v_1, \dots, \vec v_k \}$ of dimension strictly less than $n-k$, $\Pr_{(\vec x^1, \vec x^2)\sim \mathcal{D}^{\vec w}} [(\vec x^1 - \vec x^2) \not\in Z] \geq \zeta$ for some non-negligible $\zeta$.
One way to view this in the context of topic modeling is that if, say, ``sports'' is a topic, then it should not be the case that the second paragraph always talks about the exact same sport as the first paragraph; else ``sports'' would really be a union of several separate but closely-related topics. Thus, while we do not require independence we do require some non-correlation between the paragraphs.
\medskip
\noindent\textbf{Algorithm and Analysis:}~
The main idea behind our approach is to use the consistency of the two views of the samples to first recover the subspace spanned by $\vec v_1, \dots, \vec v_k$ (Phase 1). Once this subspace is recovered, we show that a projection of a sample on this space corresponds to the convex combination of class vectors using the appropriate mixture weight that was used for that sample. Therefore, we find vectors $\vec a_1, \dots, \vec a_k$ that purely belong to each class by taking the extreme points of the projected samples (Phase 2). The class vectors $\vec v_1, \dots, \vec v_k$ are the unique vectors (up to permutations) that classify $\vec a_1, \dots, \vec a_k$ as pure samples. Phase 2 is similar to that of \cite{arora2012learning}.
Algorithm~\ref{alg:noisefree} formalizes the details of this approach.
\begin{algorithm}
\caption {\textsc{Algorithm for Generalized Topic Models --- No noise}}
\label{alg:noisefree}
\textbf{Input:} A sample set $S = \{ (\vec x_i^1 , \vec x_i^2) \mid i\in[m] \}$ such that for each $i$, first a vector $\vec w$ is drawn from $\P$ and then $(\vec x_i^1, \vec x_i^2)$ is drawn from $\mathcal{D}^{\vec w}$.\\
\textbf{Phase 1:}
\begin{enumerate}
\item Let $X^1$ and $X^2$ be matrices where the $i^{th}$ column is $\vec x^1_i$ and $\vec x^2_i$, respectively.
\item Let $P$ be the projection matrix on the last $k$ left singular vectors of $(X^1 - X^2)$.
\end{enumerate}
\textbf{Phase 2:}
\begin{enumerate}
\item Let $S_\parallel = \{P\vec x_i^j \mid i\in[m], j\in\{1,2\} \}$.
\item Let $A$ be a matrix whose columns are the extreme points of the convex hull of $S_\parallel$. (This can be found using farthest traversal or linear programming.)
\end{enumerate}
\textbf{Output:} Return columns of $A^+$ as $\vec v_1, \dots, \vec v_k$.
\end{algorithm}
In Phase $1$ for recovering $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$, note that for any sample $(\vec x^1, \vec x^2)$ drawn from $\mathcal{D}^{\vec w}$, we have that $\vec v_i\cdot \vec x^1 = \vec v_i\cdot \vec x^2= w_i$. Therefore, regardless of what $\vec w$ was used to produce the sample, we have that $\vec v_i\cdot (\vec x^1 - \vec x^2) = 0$ for all $i\in [k]$.
That is, $\vec v_1, \dots, \vec v_k$ are in the null space of all such $(\vec x^1 - \vec x^2)$. So, if the samples $(\vec x_i^1 - \vec x_i^2)$ span an $(n-k)$-dimensional subspace, then $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ can be recovered by taking $\mathrm{null}\{ (\vec x^1 - \vec x^2) \mid (\vec x^1, \vec x^2) \in \mathcal{X}^{\vec w} \times \mathcal{X}^{\vec w},~ \forall \vec w\in \mathbb{R}^k\}$.
Using singular value decomposition, this null space is spanned by the last $k$ singular vectors of $X^1 - X^2$, where $X^1$ and $X^2$ are matrices with columns $\vec x_i^1$ and $\vec x_i^2$, respectively.
This is where the assumptions on $\mathcal{D}^{\vec w}$ come into play. By assumption, for any strict subspace $Z$ of $\mathrm{span}\{ (\vec x^1 - \vec x^2) \mid (\vec x^1, \vec x^2) \in \mathcal{X}^{\vec w} \times \mathcal{X}^{\vec w},~ \forall \vec w\in \mathbb{R}^k\}$,
$\mathcal{D}^{\vec w}$ has non-negligible probability on instances with $(\vec x^1 - \vec x^2) \notin Z$. Therefore, after seeing sufficiently many samples we can recover the space of all $(\vec x^1 - \vec x^2)$. The next lemma, whose proof appears in Appendix~\ref{app:no-noise-rank}, formalizes this discussion.
\begin{lemma}\label{lem:rank}
Let $Z = \mathrm{span} \{(\vec x^1_i - \vec x^2_i)\mid i \in[m]\}$. Then, $m = O( \frac {n-k}{\zeta} \log(\frac 1 \delta))$ is sufficient such that with probability $1-\delta$, $\mathrm{rank}(Z) = n-k$.
\end{lemma}
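As a sanity check of Phase 1, the following sketch builds noise-free two-view samples for randomly generated class vectors (all dimensions are illustrative) and recovers $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ from the last $k$ left singular vectors of $X^1 - X^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 6, 2, 40

V = rng.standard_normal((k, n))   # rows: class vectors v_1, ..., v_k
A = np.linalg.pinv(V)             # columns: a_1, ..., a_k, so that V @ A = I_k
_, _, Vt = np.linalg.svd(V)
null_basis = Vt[k:]               # orthonormal basis of null{v_1, ..., v_k}

X1, X2 = [], []
for _ in range(m):
    w = rng.dirichlet(np.ones(k))     # mixture weights for this document
    base = A @ w                      # shared projection sum_i w_i a_i
    # each view = base point plus an arbitrary component in the null space
    X1.append(base + null_basis.T @ rng.standard_normal(n - k))
    X2.append(base + null_basis.T @ rng.standard_normal(n - k))
X1, X2 = np.array(X1).T, np.array(X2).T   # columns are samples

U, _, _ = np.linalg.svd(X1 - X2)
P = U[:, -k:] @ U[:, -k:].T       # projection onto the recovered span{v_1, ..., v_k}

assert np.allclose(P @ V.T, V.T, atol=1e-8)          # the span contains every v_i
assert np.allclose(P @ X1, A @ (V @ X1), atol=1e-8)  # P x = sum_i (v_i . x) a_i
```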
\begin{wrapfigure}[14]{r}{0.25\textwidth}
\vspace{-1cm}
\begin{center}
\includegraphics[width=0.2\textwidth]{a-and-v-2.png}
\vspace{-0.5cm}
\end{center}
\caption{\small $\vec v_1, \vec v_2$ correspond to class $1$ and $2$, and $\vec a_1$ and $\vec a_2$ correspond to canonical vectors that are purely of class $1$ and $2$, respectively.}
\label{fig:w-and-u}
\end{wrapfigure}
Using Lemma~\ref{lem:rank}, Phase 1 of Algorithm~\ref{alg:noisefree} recovers $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$. Next, we show that pure samples are the extreme points of the convex hull of all samples when projected on the subspace $\mathrm{span} \{\vec v_1, \dots, \vec v_k\}$.
Figure~\ref{fig:w-and-u} demonstrates the relation between the class vectors, $\vec v_i$, projection of samples, and the projection of pure samples $\vec a_i$. The next lemma,
whose proof appears in Appendix~\ref{app:no-noise-sum-alpha-u}, formalizes this claim.
\begin{lemma} \label{lem:sum-alpha-u}
For any $\vec x$, let $\vec x_\parallel$ represent the projection of $\vec x$ on $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$. Then, $\vec x_\parallel = \sum_{i\in[k]}(\vec v_i \cdot \vec x) \vec a_i.$
\end{lemma}
With $\sum_{i\in[k]}(\vec v_i \cdot \vec x) \vec a_i$ representing the projection of $\vec x$ on $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$, it is clear that the extreme points of the set of projected instances belonging to $\mathcal{X}^{\vec w}$, over all ${\vec w}$, are $\vec a_1, \dots, \vec a_k$. Since, with high probability, a large enough sample set contains a pure sample of type $i$ for every $i\in[k]$, the extreme points of the set of projected samples are also $\vec a_1, \dots, \vec a_k$. The following lemma, whose proof appears in Appendix~\ref{app:no-noise-extreme-no-noise}, formalizes this discussion.
\begin{lemma} \label{lem:extreme-no-noise}
Let $m = c(\frac 1\xi \log(\frac k \delta))$ for a large enough constant $c>0$.
Let $P$ be the projection matrix for $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ and $S_\parallel = \{P\vec x_i^j \mid i\in[m], j\in\{1,2\} \}$ be the set of projected samples.
With probability $1-\delta$,
$\{ \vec a_1, \dots, \vec a_k\}$ is the set of extreme points of $\mathrm{CH}(S_\parallel)$.
\end{lemma}
Therefore, $\vec a_1, \dots, \vec a_k$ can be learned by taking the extreme points of the convex hull of all samples projected on $\mathrm{span}\{\vec v_1, \dots, \vec v_k\}$. Furthermore, $V = A^+$ is unique, so $\vec v_1, \dots, \vec v_k$ can be found by taking the pseudoinverse of the matrix $A$. Together with Lemmas~\ref{lem:rank} and \ref{lem:extreme-no-noise}, this proves the next theorem regarding learning class vectors in the absence of noise.
\begin{theorem}[No Noise] \label{thm:no-noise}
There is a polynomial time algorithm for which
$ m = O\left( \frac {n-k}{\zeta} \ln(\frac 1 \delta) + \frac 1\xi \ln(\frac k \delta) \right)
$
is sufficient to recover $\vec v_i$ exactly for all $i\in[k]$, with probability $1-\delta$.
\end{theorem}
\section{Preliminaries} \label{sec:model}
We assume that paragraphs are described by $n$ real-valued features and so can be viewed as points $\vec x$ in an instance space $\mathcal{X} \subseteq \mathbb{R}^n$.
We assume that each document consists of at least two paragraphs and denote it by $(\vec x^1, \vec x^2)$.
Furthermore, we consider $k$ topics and partial membership functions $f_1, \dots, f_k: \mathcal{X} \rightarrow[0,1]$, such that $f_i(\vec x)$ determines the degree to which paragraph $\vec x$ belongs to topic $i$, and, $\sum_{i=1}^k f_i(\vec x) = 1$.
For any vector of probabilities $\vec w \in \mathbb{R}^k$ --- which we sometimes refer to as mixture weights --- we define $\mathcal{X}^{\vec w} = \{ \vec x\in \mathbb{R}^n \mid \forall i,~ f_i(\vec x) = w_i\}$ to be the set of all paragraphs with partial membership values $\vec w$.
We assume that both paragraphs of a document have the same partial membership values, that is $(\vec x^1, \vec x^2) \in \bigcup_{\vec w} \mathcal{X}^{\vec w}\times \mathcal{X}^{\vec w}$, although we also allow some noise later on. To better relate to the literature on multi-view learning, we will also refer to topics as ``classes'' and refer to paragraphs as ``views'' of the document.
Much like the standard topic models, we consider an unlabeled sample set that is generated by a two-step process. First, we consider a distribution $\P$ over vectors of mixture weights and draw $\vec w$ according to $\P$.
Then we consider distribution $\mathcal{D}^{\vec w}$ over the set $\mathcal{X}^{\vec w} \times \mathcal{X}^{\vec w}$ and draw a document $(\vec x^1, \vec x^2)$ according to $\mathcal{D}^{\vec w}$.
We consider two settings. In the first setting, which is addressed in Section~\ref{sec:no-noise}, the learner receives the instance $(\vec x^1, \vec x^2)$.
In the second setting, the learner receives samples $(\hvec x^1, \hvec x^2)$ that have been perturbed by some noise. We discuss two noise models in Sections~\ref{sec:noise} and \ref{sec:agnostic}.
In both cases, the goal of the learner is to recover the partial membership functions $f_i$.
More specifically, in this work we consider partial membership functions of the form
$f_i(\vec x) = f(\vec v_i \cdot \vec x)$, where $\vec v_1, \dots, \vec v_k \in \mathbb{R}^n$ are linearly independent and $f:\mathbb{R} \rightarrow [0,1]$ is a monotonic function.
For the majority of this work, we consider $f$ to be the identity function, so that $f_i(\vec x) = \vec v_i \cdot \vec x$.
Define $\vec a_i \in \mathrm{span}\{\vec v_1, \dots, \vec v_k\}$ such that $\vec v_i \cdot \vec a_i=1$ and $\vec v_j \cdot \vec a_i = 0$ for all $j\neq i$. That is, $\vec a_i$ can be viewed as the projection of a paragraph that is purely of topic $i$ onto the span of $\vec v_1, \dots, \vec v_k$. Define $\Delta = \mathrm{CH}(\{\vec a_1, \dots, \vec a_k\})$ to be the convex hull of $\vec a_1, \dots, \vec a_k$.
Throughout this work, we use $\|\cdot \|_2$ to denote the spectral norm of a matrix or the $L_2$ norm of a vector. When it is clear from the context, we simply use $\|\cdot \|$ to denote these quantities.
We denote by $B_r(\vec x)$ the ball of radius $r$ around $\vec x$.
For any matrix $M$, we use $M^+$ to denote the pseudoinverse of $M$.
\subsection*{Generalization of Standard Topic Modeling}
Let us briefly discuss how the above model is a generalization of the standard topic modeling framework.
In the standard framework, a topic is modeled as a probability distribution over $n$ words, expressed as a vector $\vec a_i\in [0,1]^n$, where $a_{ij}$ is the probability of word $j$ in topic $i$.
A document is generated by first selecting a mixture $\vec w\in[0,1]^k$ over $k$ topics, and then choosing words i.i.d. from the associated mixture distribution $\sum_{i=1}^k w_i \vec a_i$.
The document vector $\hvec x$ is then the vector of word counts, normalized by dividing by the number of words in the document so that the $L_1$ norm of $\hvec x$ is 1.
As a thought experiment, consider infinitely long documents. In the standard framework, all infinitely long documents of a mixture weight $\vec w$ have the same representation $\vec x = \sum_{i=1}^k w_i \vec a_i$.
This representation implies $\vec x \cdot \vec v_i = w_i$ for all $i\in [k]$, where $V = (\vec v_1, \dots, \vec v_k)$ is the pseudo-inverse of matrix $A = (\vec a_1, \dots, \vec a_k)$.
Thus, by partitioning the document into two halves (views) $\vec x^1$ and $\vec x^2$,
our \emph{noise-free model} with $f_i(\vec x) =\vec v_i \cdot \vec x$ generalizes the standard topic model for long documents.
However, our model is substantially more general:
features within a view can be arbitrarily correlated, the views themselves can be correlated with each other, and, even in the zero-noise case, documents of the same mixture can look very different so long as they have the same projection onto the span of $\vec a_1, \dots, \vec a_k$.
For a shorter document $\hvec x$, each feature $\hat x_i$ is drawn according to a distribution with mean $x_i$, where $\vec x = \sum_{i=1}^k w_i \vec a_i$. Therefore, $\hvec x$ can be thought of as a noisy measurement of $\vec x$. The fewer the words in a document, the larger is the noise in $\hvec x$. Existing work in topic modeling, such as~\cite{arora2012learning,anandkumar2014tensor}, provide elegant procedures for handling large noise that is caused by drawing only $2$ or $3$ words according to the distribution induced by $\vec x$. As we show in Section~\ref{sec:noise}, our method can also tolerate large amounts of noise under some conditions.
While our method cannot deal with documents that are only $2$- or $3$-words long, the benefit is a model that is much more general in many other respects.
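As a concrete, purely illustrative check of the identities above, the following Python sketch builds a small column-stochastic topic matrix $A$, takes $V = A^+$, and verifies both the weight-recovery relation $\vec v_i \cdot \vec x = w_i$ and the fact that perturbations orthogonal to the topic span leave the recovered memberships unchanged. The dimensions and the random matrix are arbitrary choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3  # vocabulary size and number of topics (arbitrary choices)

# Topic matrix A: column a_i is a probability distribution over n words.
A = rng.random((n, k))
A /= A.sum(axis=0)

# V = A^+ (pseudoinverse); its rows v_1, ..., v_k satisfy v_i . a_j = delta_ij.
V = np.linalg.pinv(A)
assert np.allclose(V @ A, np.eye(k))

# An "infinitely long" document with mixture w has representation x = A w,
# and the partial memberships are recovered as f_i(x) = v_i . x = w_i.
w = np.array([0.5, 0.3, 0.2])
x = A @ w
assert np.allclose(V @ x, w)

# Adding any component orthogonal to span{a_1, ..., a_k} changes the
# document but not the recovered memberships (the zero-noise generality).
z = rng.standard_normal(n)
z_perp = z - A @ (V @ z)  # remove the projection onto the topic span
assert np.allclose(V @ (x + z_perp), w)
```

The last assertion is exactly the generality discussed above: two documents differing by a component outside the topic span carry identical partial memberships.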
\section{Introduction}
The discovery and characterization of extrasolar terrestrial planets
in the habitable zone (HZ) of their central star is one of the most
exciting prospects of exoplanetary science. Such planets are
extremely good candidates for the search for extraterrestrial life.
The HZ is usually defined as the shell around a star where a planet
could retain liquid water on the surface \citep{kasting1993}. This
definition is motivated by the fact that liquid water seems to be the
fundamental requirement for life as we know it on Earth. Being
located inside the HZ as defined by \citet{kasting1993} for an
Earth-like planet, however, not necessarily implies habitability for
a specific planetary scenario (see, e.g., Mars in our own solar
system). The potential habitability of a planet depends critically
on atmospheric composition and surface pressure. Still, for a planet
located well inside this classical HZ, habitability is achievable
for a much broader range of atmospheric conditions (greenhouse
effect etc.) than for a planet near one of the boundaries.
Among the more than 500 extrasolar planets discovered so far, some
orbit their central star inside or near the HZ (e.g.,
\citealp{mayor2004}, \citealp{lovis2006}, \citealp{fischer2008},
\citealp{haghighipour2010}). Most of these planets are Neptune- or
Jupiter-like gas planets. The planetary system \object{Gliese 581}
(GL 581), however, contains at least four planets
(\citealp{bonfils2005}, \citealp{udry2007},
\citealp{mayor2009gliese}), one of which is a potentially habitable
Super-Earth, GL 581 d. This was shown by \citet{wordsworth2010},
\citet{vparis2010gliese}, \citet{hu2011} and
\citet{kaltenegger2011} who presented 1D modeling studies of
different atmospheric scenarios of GL 581 d. They found habitable
surface conditions (i.e., surface temperatures above 273 K) with
CO$_2$ partial pressures as low as 1 bar, depending on CO$_2$
concentration. These results imply that the GL 581 planetary system
contains indeed at least one potentially habitable, possibly
terrestrial planet.
Orbital simulations presented by \citet{zollinger2009}
showed that between the orbits of GL 581 c and d (i.e., inside the
classical HZ), another Super-Earth planet would be dynamically
stable. Specifically, they stated a stability range for a
low-eccentricity planet of not more than 2.6 Earth masses
(m$_{\oplus}$) ranging from 0.126 AU to 0.17 AU.
Recently, \citet{vogt2010gliese} claimed the detection of two more
planets in the GL 581 system, one of them (called GL 581 g in \citealp{vogt2010gliese}) with a minimum mass of 3.1
m$_{\oplus}$ and an orbital distance of 0.146 AU, hence
inside the stability range calculated by
\citet{zollinger2009}. These detections are controversial and
disputed by further analysis of the radial velocity data
\citep{tuomi2011}.
Nevertheless, we use this claimed discovery as a starting
point to investigate the habitability of planets in the GL 581
system. Such model calculations aim at supporting the selection of
future targets for detailed observational programs of habitable
planets which is probably needed in the future \citep{horner2010},
given the expected number of targets. Such potentially habitable
planets are expected to be discovered in the near future by on-going
ground-based programs such as MEarth \citep{nutzman2008} or space
missions like Kepler (see, e.g.,
\citealp{borucki2011} for an overview of Kepler candidates) and the
planned PLATO mission \citep{catala2009}. First attempts at
characterizing the atmospheres of transiting Super-Earth planets
have already been made (CoRoT-7 b, \citealp{guenther2011}, and GJ
1214 b, \citealp{bean2010}, \citealp{desert2011}, \citealp{croll2011_gj1214}).
For the claimed planet in the HZ of GL 581, dedicated
modeling studies have been performed by \citet{pierrehumbert2011},
\citet{heng2011} and \citet{bloh2011}. \citet{pierrehumbert2011} presented
several possible atmospheric scenarios (airless planet, pure N$_2$,
mixed CO$_2$/H$_2$O atmospheres) and discussed potential
implications for surface conditions, without detailed calculations
of the atmospheric structure for the mixed CO$_2$/H$_2$O cases. On the other hand,
\citet{heng2011} used a general circulation model of Earth to
simulate the dynamics and circulation on GL 581 g, however did not
investigate surface conditions and habitability in detail.
The study of \citet{bloh2011} used a geodynamic box model to
assess planetary habitability, coupling geophysical and atmospheric
processes in a simplified approach.
Previous modeling studies of habitability in the GL 581 system either focused on the existing planets GL 581 c and d (e.g., \citealp{selsis2007gliese}, \citealp{bloh2007}), used a very simple model to simulate atmospheric processes and surface conditions (e.g., \citealp{bloh2011}) or investigated only a very small subset of potential atmospheric scenarios in terms of CO$_2$ level and surface pressure when varying orbital distance \citep{kaltenegger2011}. We present here model calculations for
possible terrestrial planets in the GL 581 system
along the same line of reasoning as in \citet{vparis2010gliese}, using a consistent 1D atmosphere model. We vary the surface pressure and CO$_2$
level over a large range. For these scenarios, we calculate temperature and pressure profiles in order to
assess the habitability of up to now hypothetical
planets, assuming different planetary masses and orbital distances.
The paper is organized as follows: Sect. \ref{plansys} states the
stellar and planetary parameters. The model used is described in
Sect. \ref{model}. A description of the runs is given in Sect.
\ref{modinput}. Results are described and discussed in Sect.
\ref{resultsect}. We give our conclusions in Sect. \ref{concl}.
\section{Model planets around GL 581}
\label{plansys}
GL 581 is a very quiet M3 star \citep{bonfils2005}. The stellar
spectrum of GL 581 is taken from \citet{vparis2010gliese}. It was
derived from an UV spectrum measured by the IUE (International
Ultraviolet Explorer) satellite and a synthetic Nextgen model
spectrum \citep{hauschildt1999}.
Atmospheric simulations were performed for a subset of
probable planet scenarios, defined by mass and orbital distance. We
varied the orbital distance from 0.117 to 0.175 AU, covering the
stability range found by \citet{zollinger2009}. In terms of
insolation in the solar system, this translates roughly into the
present-day insolation at the orbit of Mars and the solar flux at
Earth about 1.3 billion years ago (e.g., \citealp{gough1981}). The
planetary mass was varied between 2.6 and 3.1 m$_{\oplus}$. For all
scenarios, orbits were assumed to be circular.
The planetary radius is taken from a mass-radius relationship by
\citet{sotin2007}, yielding the surface gravity. Changing
the planetary mass from 3.1 to 2.6 m$_{\oplus}$ decreases the
gravity from 16.4 ms$^{-2}$ to 15.1 ms$^{-2}$, i.e. by roughly 9 \%.
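The quoted gravities follow from a power-law mass--radius relation. The sketch below assumes the rocky-planet scaling $R/R_\oplus = (M/M_\oplus)^{0.274}$ often attributed to \citet{sotin2007} (the exact exponent is our assumption here, chosen because it reproduces the quoted numbers), which gives $g \propto M/R^2 \propto M^{0.452}$:

```python
def surface_gravity(mass_earth, beta=0.274, g_earth=9.81):
    """Surface gravity (m s^-2) for a rocky planet of the given mass in
    Earth masses, assuming R/R_E = (M/M_E)**beta, so g = g_E * M**(1-2*beta)."""
    return g_earth * mass_earth ** (1.0 - 2.0 * beta)

for m in (3.1, 2.6):
    print(f"{m} Earth masses -> g = {surface_gravity(m):.1f} m s^-2")
# 3.1 Earth masses -> g = 16.4 m s^-2
# 2.6 Earth masses -> g = 15.1 m s^-2
```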
As in \citet{vparis2010gliese}, the measured Earth surface albedo,
i.e. the reflectivity of the planetary surface with respect to
incoming stellar radiation, was taken as the model surface albedo
($A_{\rm{surf}}$=0.13, \citealp{rossow1999}) for all
scenarios. In doing so, the effect of clouds is explicitly excluded
from our simulations. Table \ref{planpar} summarizes the planetary
parameters. Note that scenario 2 in Table \ref{planpar}
corresponds to the claimed planet GL 581 g of \citet{vogt2010gliese}.
\begin{table}[H]
\caption{Planetary parameters}
\label{planpar}
\begin{center}
\resizebox{\hsize}{!}{\begin{tabular}{lrr}
\hline
& Mass [m$_{\oplus}$] & Orbital distance [AU] \\
\hline
Scenario 1 & 3.1 & 0.117 \\
Scenario 2 & 3.1 & 0.146\\
Scenario 3 & 3.1 & 0.175\\
Scenario 4 & 2.6 & 0.146\\
\end{tabular}}
\end{center}
\end{table}
\section{Atmospheric model}
\label{model}
A cloud-free 1D radiative-convective model was used to calculate the
atmospheric structure, i.e. the temperature, water and pressure
profiles.
The model is originally based on the climate model described by
\citet{kasting1984water} and \citet{kasting1984}. Further
developments are described by e.g. \citet{kasting1988} and
\citet{mischna2000}. The model version used here is based on the
version of \citet{vparis2008} and \citet{vparis2010gliese} where
more details on the model are given.
The model considers N$_2$, H$_2$O, and CO$_2$ as atmospheric
species. H$_2$O and CO$_2$ are the two most important greenhouse
gases on present Earth, and N$_2$ is present in significant amounts
in all terrestrial atmospheres of the solar system.
Temperature profiles from the surface up to a pressure of 6.6
$\cdot$ 10$^{-5}$ bar are calculated by solving the equation of
radiative transfer and performing convective adjustment, if
necessary. Convective adjustment means that the lapse rate in the
atmosphere is adjusted to the convective lapse rate instead of using
the radiative lapse rate if the atmosphere is unstable against
convection. The convective lapse rate is assumed to be adiabatic
with contributions of latent heat release by condensing water or
carbon dioxide. The water profile is calculated based on the
relative humidity distribution of \citet{manabewetherald1967}. Above
the cold trap, the water profile is set to an isoprofile of the cold
trap value.
\section{Atmospheric scenarios}
\label{modinput}
We performed a parameter study to investigate the influence of
surface pressure and CO$_2$ level on the potential habitability of
terrestrial planets in the GL 581 system, as summarized in
Table \ref{planpar}. The initial surface pressure (1, 2, 5, 10, 20
bar) and the CO$_2$ volume mixing ratio (0.95, 0.05, 3.55 $\cdot$ 10$^{-3}$
and 3.55 $\cdot$ 10$^{-4}$, i.e. 355 ppm) were varied. N$_2$ was used as the
background gas. Table \ref{listofruns} summarizes the considered
atmospheric scenarios.
\begin{table}[H]
\caption{Atmospheric scenarios (PAL: Present
Atmospheric Level)
}\label{listofruns}
\begin{center}
\resizebox{\hsize}{!}{\begin{tabular}{lcc}
\hline
Set & $p$ [bar] & CO$_2$ vmr \\
\hline
G1 (low CO$_2$) & 1,2,5,10,20 &3.55 $\cdot$ 10$^{-4}$ \\
G2 (10 PAL CO$_2$) & 1,2,5,10,20 &3.55 $\cdot$ 10$^{-3}$ \\
G3 (medium CO$_2$) & 1,2,5,10,20 &0.05 \\
G4 (high CO$_2$)& 1,2,5,10,20 &0.95 \\
\end{tabular}}
\end{center}
\end{table}
\section{Results and discussion}
\label{resultsect}
\subsection{Temperature profiles}
The resulting temperature-pressure profiles of scenario 2 in
Table \ref{planpar} for the sets G1-G4 are shown in Figs.
\ref{temperature_low}-\ref{temperature_high}. The equilibrium
temperature $T_{\rm{eq}}$ of the planet and the melting temperature
of water (273 K) are indicated by vertical lines. A global mean
surface temperature of 273 K or higher is generally used as the
criterion for surface habitability in exoplanet science. This
criterion is purely based on the phase diagram of water where the
liquid phase of water needs temperatures above 273 K at almost all
pressures (the melting line is nearly isothermal in the p-T
diagram). Note that, of course, on Earth life is found in areas with
mean annual temperatures far below the freezing point of water.
\begin{figure}[H]
\includegraphics[width=200pt]{GLIESE_581_G_LOWtemperature_all}\\
\caption[Temperature-pressure profiles for the low CO$_2$ runs]
{Temperature-pressure profiles for the low CO$_2$ (355 ppm CO$_2$) runs of scenario 2.
Equilibrium temperature of the planet (dashed) and melting temperature of water (dotted) are indicated as vertical lines. }
\label{temperature_low}
\end{figure}
The low CO$_2$ runs result in uninhabitable surface conditions for
all assumed surface pressures (see Fig. \ref{temperature_low}),
although the 20 bar run with a surface temperature of 272 K is very
close to being considered habitable.
\begin{figure}[H]
\includegraphics[width=200pt]{GLIESE_581_G_10PALtemperature_all}\\
\caption[Temperature-pressure profiles for the 10 PAL CO$_2$ runs]
{Temperature-pressure profiles for the 10 PAL CO$_2$ (3.55 $\cdot$ 10$^{-3}$ CO$_2$) runs of scenario 2.
}
\label{temperature_pal}
\end{figure}
When increasing the CO$_2$ content by a factor of 10, calculated
surface temperatures were ranging between 258 and 320 K (Fig.
\ref{temperature_pal}). For surface pressures of 5 bar or more,
surface temperatures were larger than 273 K, indicating potentially
habitable surface conditions. The rather high surface temperature of
320 K for the 20 bar run is the result of the stronger greenhouse
effect due to CO$_2$ and a positive water vapor feedback (increasing
surface temperature leads to increased water vapor in the
atmosphere, hence more greenhouse effect).
\begin{figure}[H]
\includegraphics[width=200pt]{GLIESE_581_G_MEDIUMtemperature_all}\\
\caption[Temperature-pressure profiles for the medium CO$_2$ runs]
{Temperature-pressure profiles for the medium CO$_2$ (5\% CO$_2$) runs of scenario 2.
}
\label{temperature_medium}
\end{figure}
The medium CO$_2$ runs (see Fig. \ref{temperature_medium}) result in
habitable conditions on the surface for pressures of 2 bar and
higher. Calculated surface temperatures are as high as 378 K for the
20 bar case.
The high CO$_2$ runs (see Fig. \ref{temperature_high}) all show
habitable conditions on the surface. Calculated surface temperatures
range from 290 to 401 K upon increasing the surface pressure from 1
to 20 bar.
In all runs presented here, model atmospheres show convective
tropospheres and radiative stratospheres, in contrast to purely
radiative atmospheres encountered in some cases in
\citet{vparis2010gliese}. Also, CO$_2$ condensation is absent for
all runs, even in the high CO$_2$ cases, due to the high
temperatures in the middle atmosphere.
\begin{figure}[H]
\includegraphics[width=200pt]{GLIESE_581_G_HIGHtemperature_all}\\
\caption[Temperature-pressure profiles for the high CO$_2$ runs]
{Temperature-pressure profiles for the high CO$_2$ (95\% CO$_2$) runs of scenario 2.}
\label{temperature_high}
\end{figure}
\subsection{Variation of gravity}
The main effect upon decreasing the gravity $g$ at fixed
surface pressure $p_s$ is an increase in column density $D_{\rm{col}}$ (since
$D_{\rm{col}}\sim \frac{p_s}{g}$). This then translates into more greenhouse
effect, hence the first-order influence of decreasing gravity will
be an increase in surface temperature.\newline However, for the
small variations in gravity considered here (see Sect.
\ref{plansys}), the resulting increase in surface temperatures was
only of the order of 0.4 to 2.8 K. Such small variations of surface
temperatures are not critical in the assessment of habitability in
the frame of this work and are therefore not shown.
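To make the first-order effect concrete: under hydrostatic balance the column mass per unit area is $p_s/g$, so at fixed surface pressure the roughly 9 \% decrease in gravity increases the absorber column by the same relative amount. A minimal sketch (the 5 bar case is an arbitrary illustrative choice):

```python
def column_mass(p_surface, g):
    """Hydrostatic column mass per unit area (kg m^-2): p_s / g."""
    return p_surface / g

# 5 bar = 5e5 Pa; lowering g from 16.4 to 15.1 m s^-2 raises the column
# mass, and hence the greenhouse absorber amount, by roughly 9 %.
increase = column_mass(5e5, 15.1) / column_mass(5e5, 16.4) - 1.0
assert 0.08 < increase < 0.10
```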
\subsection{Variation of orbital distance}
The immediate effect of increasing the orbital distance is a
reduction of the stellar flux $S$ received by the planet. Hence, the
amount of CO$_2$ required to achieve surface temperatures above 273
K increases with orbital distance.
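Quantitatively, the inverse-square law gives $S/S_\oplus = (L_*/L_\odot)\,(d/1\,\mathrm{AU})^{-2}$. The sketch below adopts $L_* \approx 0.013\,L_\odot$ for GL 581 (a literature value we assume here, not taken from this paper); it is consistent with the comparison of Sect. \ref{plansys}, since present-day Mars receives about $0.43\,S_\oplus$:

```python
L_STAR = 0.013  # assumed bolometric luminosity of GL 581, in solar units

def insolation(d_au, lum=L_STAR):
    """Stellar flux at orbital distance d_au (AU), in units of the present
    solar constant at Earth (inverse-square law)."""
    return lum / d_au ** 2

for d in (0.117, 0.146, 0.175):
    print(f"d = {d} AU -> S = {insolation(d):.2f} S_Earth")
# d = 0.117 AU -> S = 0.95 S_Earth
# d = 0.146 AU -> S = 0.61 S_Earth
# d = 0.175 AU -> S = 0.42 S_Earth
```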
In Table \ref{table_summary_results}, we show the surface temperatures as a
function of surface pressure and CO$_2$ concentration for the scenarios 1-3
in Table \ref{planpar}.\newline It is clearly seen that for an orbital distance of 0.117 AU even the low CO$_2$ runs
result in habitable surface conditions over the entire range of
surface pressures considered. It can also be inferred that
the same value of surface temperature can be achieved for several
combinations of surface pressure and CO$_2$ concentration.\newline
As was already demonstrated by the temperature
profiles shown above, for an orbital distance of 0.146 AU, several model scenarios resulted in surface
conditions which were uninhabitable (e.g., Fig.
\ref{temperature_low}). However, in general, habitability can be
achieved over the entire range of CO$_2$ concentrations with accordingly high surface pressures.
\begin{table}[H]
\caption{Surface temperature in K for variation of orbital distance $d$, surface pressure $p$ in bar and CO$_2$ concentration $C$. Increased grey shading between 260 and 380 K in 30 K steps. White indicates cold scenarios.}\label{table_summary_results}
\begin{tabular}{cp{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}}
\hline\hline
\backslashbox{$C$}{$p$} & 1 & 2&5 & 10 & 20\\\hline
\multicolumn{6}{c}{$d$=\unit[0.117]{AU}}\\\hline
\unit[355]{ppm} &\cellcolor[gray]{0.9}288 &\cellcolor[gray]{0.8}296 &\cellcolor[gray]{0.8}309 &\cellcolor[gray]{0.7}323 &\cellcolor[gray]{0.7}341\\
\unit[3550]{ppm} & \cellcolor[gray]{0.8}295&\cellcolor[gray]{0.8}305 & \cellcolor[gray]{0.7}326 &\cellcolor[gray]{0.7}347 &\cellcolor[gray]{0.6}372\\
\unit[5]{\%} & \cellcolor[gray]{0.8}307 &\cellcolor[gray]{0.7}324 &\cellcolor[gray]{0.6}351&\cellcolor[gray]{0.6}375 &\cellcolor[gray]{0.5}404\\
\unit[95]{\%} & \cellcolor[gray]{0.8}320&\cellcolor[gray]{0.7}340&\cellcolor[gray]{0.6}369&\cellcolor[gray]{0.5}395&\cellcolor[gray]{0.5}424\\
\hline
\multicolumn{6}{c}{$d$=\unit[0.146]{AU}}\\
\hline
\unit[355]{ppm} &254 & 257 &\cellcolor[gray]{0.9}263 &\cellcolor[gray]{0.9}267 &\cellcolor[gray]{0.9}272\\
\unit[3550]{ppm} & 259& \cellcolor[gray]{0.9}264&\cellcolor[gray]{0.9} 274 &\cellcolor[gray]{0.9}289 &\cellcolor[gray]{0.7}321\\
\unit[5]{\%} & \cellcolor[gray]{0.9}269 & \cellcolor[gray]{0.9}283 &\cellcolor[gray]{0.8}317&\cellcolor[gray]{0.7}347 &\cellcolor[gray]{0.6}378\\
\unit[95]{\%} & \cellcolor[gray]{0.8}291&\cellcolor[gray]{0.8}313&\cellcolor[gray]{0.7}345&\cellcolor[gray]{0.6}372&\cellcolor[gray]{0.5}401\\
\hline
\multicolumn{6}{c}{$d$=\unit[0.175]{AU}}\\\hline
\unit[355]{ppm} & 221 &223 &225 &227 &231\\
\unit[3550]{ppm} & 224 &226 &232 &242 &260\\
\unit[5]{\%} &230 &240 &\cellcolor[gray]{0.9}270&\cellcolor[gray]{0.8}309 & \cellcolor[gray]{0.7}347\\
\unit[95]{\%} &254 &\cellcolor[gray]{0.9}281&\cellcolor[gray]{0.8}320&\cellcolor[gray]{0.7}350 &\cellcolor[gray]{0.5}381\\
\hline
\end{tabular}
\end{table}
Also towards the outer boundary of
the HZ around GL 581, at an orbital distance of 0.175 AU, habitable scenarios could be
found. However, allowed CO$_2$ concentrations for surface temperatures above freezing are now limited to
the medium and high CO$_2$ case. Model scenarios with less CO$_2$
were found to be uninhabitable, independent of surface
pressure.\newline These results confirm the findings of
\citet{vparis2010gliese} that medium CO$_2$ scenarios need to be
taken into account when assessing the habitability of planets
orbiting near the outer boundary of the habitable zone.
\section{Conclusions}
We presented 1D radiative-convective calculations for a
subset of potential atmospheric conditions on hypothetical
Super-Earth planets orbiting GL 581. We varied parameters such as
orbital distance, CO$_2$ concentration and surface pressure. In contrast to previous studies of habitability
in the GL 581 system, we considered a much larger parameter space and used a consistent atmospheric
model to assess surface conditions.
Our model results imply that habitable surface
conditions (here T$_{\rm{surf}}>$273 K) could be obtained for a
large part of the considered parameter space. For the smallest orbital
distance of 0.117 AU, habitability was achieved
independent of the considered atmospheric parameters. For an orbital distance
of 0.146 AU, depending on surface pressure, CO$_2$
concentrations as low as 10 times the present Earth's value were
found to be sufficient for surface habitability. However, for the
largest orbital distance considered (0.175 AU), surface conditions were only habitable with
CO$_2$ concentrations of 5 \% and more. Hence, our simulations show
that an additional Super-Earth planet in the GL 581 system in the
dynamical stability range calculated by \citet{zollinger2009} would
indeed be considered a potentially habitable planet.
The model calculations presented here for a subset of the
possible parameter space (orbital distance, CO$_2$ concentration,
surface pressure) illustrate how such investigations can be helpful
to select potentially habitable planets for further, more detailed
studies from a future larger sample of known Super-Earths.
\label{concl}
\begin{acknowledgements}
This research has been supported by the Helmholtz Gemeinschaft
through the research alliance "Planetary Evolution and Life".
Helpful discussions with J.W. Stock, J.L. Grenfell, A.B.C. Patzer and A.
H\"{o}lscher are gratefully acknowledged.
We thank the anonymous referee and Tristan Guillot for their
comments which helped clarify the paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
In this paper we first present a perfect simulation
algorithm for a multicolor system on ${\mathbb Z}^d$ with interactions of
infinite range. This perfect simulation algorithm is the basis of the construction of a
finitary coding from a finite-valued i.i.d. process to the invariant
probability measure of the multicolor system.
By a perfect simulation algorithm we mean a simulation which samples
precisely from the stationary law of the process. More precisely, for
any finite set of sites $F$ and any finite time interval $[0, t]$ we
want to sample the stationary time evolution of the coloring of sites
in $F$ during $[0,t]$.
By coding we mean
a translation invariant deterministic measurable map from the
finite-valued i.i.d. process to the invariant probability
measure of the system. Finitary means that the value of the map
at the origin depends only on a finite subset of the random
variables. This finite subset is a function of the realization of the
family of independent random variables.
The process we consider is an interacting particle
system with finite state space. The elements of this finite state
space are called {\sl colors}. To each site in ${\mathbb Z}^d$ is assigned a
color. The coloring of the sites changes as time goes by. The rate at which
the color of a fixed site $i$ changes from a color $a$ to a new color
$b$ is a function of the entire configuration and depends on $b$.
We do not assume that the system has a dual, or is attractive, or
monotone in any sense. Our system is not even spatially
homogeneous. The basic assumptions are the continuity of the infinite
range change rates together with a fast decay of the long range
influence on the change rate. These two properties imply that the change rates can be represented as
a countable mixture of local change rates of increasing range. This
decomposition (see Theorem \ref{theo:decomp}) extends to the case of
interacting particle systems the notion of random Markov chains
appearing explicitly in Kalikow (1990) and Bramson and Kalikow (1993)
and implicitly in Ferrari et al. (2000) and Comets et al. (2002).
The decomposition of the change rate of infinite range as a countable
mixture of finite range change rates suggests the construction of any
cylindrical time evolution of the stationary process by the
concatenation of two basic algorithms. First we construct a backward
black and white sketch of the process. Then in a second forward
algorithm we assign colors to the black and white picture.
The proof that the backward black and white algorithm stops after a
finite number of steps follows ideas presented in Bertein and Galves
(1977) to study dual processes. Using these ideas we prove the
existence of our process in a self-contained way. The same ideas
appear again in the construction of the finitary coding.
This type of construction is similar in spirit to procedures
adopted in Ferrari (1990), Ferrari et al. (2002), Garcia and Mari\'c
(2006) and Van den Berg and Steif (1999). However all these papers
only consider particular models, satisfying restrictive assumptions
which are not assumed in the present paper.
Our Theorem \ref{theo:5} shows the existence of a finitary coding from
an i.i.d. finite-valued process to the invariant probability measure
of the multicolor system. This can be seen as an extension to the
infinite range processes of Theorem 3.4 of Van den Berg and Steif
(1999).
H\"aggstr\"om and Steif (2000) construct a finitary coding for Markov
fields. This result follows, under slightly stronger assumptions, as a
corollary of our Theorem \ref{theo:5}, which also holds for non-Markovian
infinite-range fields. These authors conclude the above-mentioned
paper by observing that the extension of their results to
``infinite-range Gibbs measures appears to be a more difficult
matter''. Our Theorem \ref{theo:5} is an attempt in this direction.
This paper is organized as follows. In Section 2 we present the model
and state a preliminary result, Theorem \ref{theo:decomp}, which
gives the representation of the change rate as a countable
mixture of local change rates. In Section 3, we present the perfect
simulation algorithm and Theorem \ref{theo:nstop} which ensures that
the algorithm stops after a finite number of steps. Theorem
\ref{theo:nstop} also guarantees the exponential ergodicity of the
process. The definitions and results concerning the finitary coding
are presented in Section 4. The proofs of the theorems are presented
in Sections 5 to 11.
\section{Definitions, notation and basic results}
In what follows, $A$ will be a finite set of colors, the initial
lowercase letters $a$, $b$, $c, \ldots$ will denote elements of $A.$
We will call configuration any element of $A^{{\mathbb Z}^d} .$
Configurations will be denoted by letters
$\eta, \zeta, \xi , ...$ A point $ i \in {\mathbb Z}^d $ will be
called a site.
As usual, for any $i \in {\mathbb Z}^d$, $\eta(i)$ will denote the
value of the configuration $\eta$ at site $i$. By extension, for any
subset $V \subset {\mathbb Z}^d$, $\eta(V)\in A^V$ will denote the restriction
of the configuration $\eta$ to the set of positions in $V.$ For any
$\eta ,$ $i$ and $a,$ we shall denote $\eta^{i,a}$ the modified
configuration
$$\eta^{i,a}(j) = \eta(j) \mbox{, for all $j \neq i,$ and $\eta^{i,a}(i)
= a.$}$$ For any $i \in {\mathbb Z}^d,$ $\eta \in A^{{\mathbb Z}^d}$ and $a \in A$ with
$a \neq \eta(i),$ we
denote by $c_i (a, \eta) $ a positive real number. We suppose that
there exists a constant $\Gamma_i < + \infty $ such that
\begin{equation}\label{eq:boundedrate}
c_i (a, \eta) \le \Gamma_i ,
\end{equation}
for every $a $ and $ \eta$ such that $a \neq \eta(i).$
A multicolor system with interactions of infinite range is a Markov
process on $A^{{\mathbb Z}^d} $ whose generator is defined on cylinder functions by
\begin{equation}
\label{eq:generator}
L \,f(\eta) \,=\, \sum_{i \in {\mathbb Z}^d} \sum_{a \in A, a \neq \eta(i)}
c_i ( a, \eta) [f(\eta^{i,a}) - f(\eta)] \, .
\end{equation}
Intuitively, this form of the generator means that the site $i$ will
be updated to the symbol $a,$ $ a \neq \eta(i),$ at a rate $c_i (a, \eta ) $ whenever the
configuration of the system is $\eta .$ The choice of $c_i (a, \eta) $ for
$ a = \eta(i) $ does not affect the generator (\ref{eq:generator}) and
represents a degree of freedom in our model. In what follows we choose $c_i (\eta(i), \eta) $
in such a way that
\begin{equation}\label{eq:rateprobability}
c_i (a, \eta) = M_i \, p_i (a| \eta) .
\end{equation}
In the above formula $ M_i < + \infty $ is a suitable constant and for every fixed configuration $\eta
,$ $ p_i (\cdot | \eta) $ is a probability measure on $A.$ Condition
(\ref{eq:boundedrate}) implies that such a choice is always possible, for instance by taking $M_i =
|A| \, \Gamma_i $ and defining
\begin{equation}\label{eq:fixedchoice}
c_i ( \eta (i), \eta) = M_i - \sum_{a: a \neq \eta(i)} c_i (a, \eta) .
\end{equation}
We shall call
$(c_i)_{i \in {\mathbb Z}^d} $ a family of rate functions for this fixed choice
(\ref{eq:fixedchoice}).
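As a sanity check of this normalization (an illustrative sketch with made-up rates, not part of the model specification): since $\sum_{a \neq \eta(i)} c_i(a,\eta) \le (|A|-1)\,\Gamma_i < |A|\,\Gamma_i = M_i,$ the diagonal term (\ref{eq:fixedchoice}) is strictly positive, so $p_i(\cdot|\eta) = c_i(\cdot,\eta)/M_i$ is a genuine probability measure on $A$:

```python
def transition_probs(rates, current, alphabet, Gamma):
    """Uniformized kernel p_i(.|eta): p(a) = c_i(a,eta)/M_i for a != eta(i),
    with M_i = |A| * Gamma_i and the diagonal fixed so that p sums to 1."""
    M = len(alphabet) * Gamma
    p = {a: rates[a] / M for a in alphabet if a != current}
    p[current] = 1.0 - sum(p.values())  # this is c_i(eta(i), eta) / M_i
    return p

# Three colors, made-up off-diagonal rates bounded by Gamma_i = 2.0.
p = transition_probs({"b": 1.5, "c": 0.5}, current="a",
                     alphabet=("a", "b", "c"), Gamma=2.0)
assert abs(sum(p.values()) - 1.0) < 1e-12
assert all(v >= 0 for v in p.values())
```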
Our first aim is to give sufficient conditions on $c_i(a,\eta)$
implying the existence of a perfect simulation algorithm of the process having generator (\ref{eq:generator}).
To state these conditions,
we need some extra notation. Let $V_{i} (k) = \{j \in {\mathbb Z}^d; 0 \le \|j
- i \| \le k\},$ where $\|j\| = \sum_{u=1}^d |j_u|$ is the usual
$L_1$-norm of
${\mathbb Z}^d$. We will impose the following continuity condition on the family of rate functions $c .$\\
{\bf Continuity condition.} For any symbol $a,$ we will assume that
\begin{equation}
\label{eq:continuity}
\sup_{i \in {\mathbb Z}^d} \sup_{\eta(V_i(k)) = \zeta(V_i(k))} | c_i (a, \eta) -
c_i (a, \zeta )| \rightarrow 0 \, ,
\end{equation}
as $k \rightarrow \infty .$
Define
\begin{equation}
\label{eq:alpha0}
\alpha_{i} (-1) \,=\, \sum_{a \in A} \min \left( \inf_{\zeta \in A^{{\mathbb Z}^d}, \zeta(i ) \neq a} c_i(a, \zeta)
, \, M_i - \sup_{ \zeta \in A^{{\mathbb Z}^d}, \zeta(i )= a} \sum_{b \neq a} c_i (b, \zeta) \right) \, ,
\end{equation}
and for any $k \ge 0 , $
\begin{equation}
\label{eq:alpha}
\alpha_i (k) \,=\, \min_{w \in A^{V_i (k) }} \left( (\sum_{a \in A, a \neq w(i)}
\inf_{\zeta: \zeta(V_i (k)) = w} c_i( a,\zeta)) + M_i - \sup_{\zeta: \zeta(V_i (k)) = w} \sum_{ b \neq w(i)} c_i (b, \zeta)\right) .
\end{equation}
In order to clarify the role of $\alpha_i(k)$, we
present an example that is a spatial version of the chain that
regenerates in $1:$ the evolution of a site depends on the random
ball around that site, where the radius of this ball is the smallest
one such that the color $1$ appears in the ball (excluding the center itself).
\begin{ex}
Let $A = \{ 0, 1 \} , $
$$ l_i (\eta) = l, \mbox{ if } \max_{j \in V_i(l), j \neq i } \eta (j) = 0 \mbox{ and } \max_{j
\in V_i (l+1), j \neq i} \eta(j) = 1$$
and define
$$ c_i(1, \eta ) = q_{l_i (\eta)}, \, c_i(0, \eta ) = 1 - c_i(1, \eta ), $$
where $0 < q_k < 1 $ for all $k.$
Note that in this case,
$$ \sup_i \sup_{\eta(V_i (k)) = \zeta(V_i(k))} | c_i(1, \eta) -
c_i(1, \zeta )| = \sup_{l, m \geq k } |q_l - q_m|,$$ and thus the
process is continuous if and only if $\lim_k q_k$ exists.
Observe that if $q_k \downarrow q_{\infty},$ as $k \to \infty ,$ then $M_i = 1$ and
\begin{eqnarray*}
\alpha_i (-1) &=& \min \left( \inf_{\zeta \in A^{{\mathbb Z}^d}, \zeta(i
) = 1} c_i(0, \zeta), \, 1 - \sup_{ \zeta \in A^{{\mathbb Z}^d}, \zeta(i )=
0} c_i (1, \zeta) \right) \\
&& + \, \min \left( \inf_{\zeta \in A^{{\mathbb Z}^d}, \zeta(i) = 0} c_i(1,
\zeta), \, 1 - \sup_{ \zeta \in A^{{\mathbb Z}^d}, \zeta(i )= 1} c_i (0,
\zeta) \right) \\
&=& \inf_{\zeta \in A^{{\mathbb Z}^d}} c_i(0, \zeta) + \inf_{ \zeta \in A^{{\mathbb Z}^d}} c_i (1,
\zeta) \\
&=& 1 - q_0 + q_{\infty}.
\end{eqnarray*}
Also,
\begin{eqnarray*}
\alpha_i (0) &=& \min \Big( \inf_{\zeta \in
A^{{\mathbb Z}^d}, \zeta(i) = 0} c_i(1, \zeta) + 1 - \sup_{\zeta \in
A^{{\mathbb Z}^d}, \zeta(i) = 0} c_i(1, \zeta), \\
&& \quad \quad \inf_{\zeta \in
A^{{\mathbb Z}^d}, \zeta(i) = 1} c_i(0, \zeta) + 1 - \sup_{\zeta \in
A^{{\mathbb Z}^d}, \zeta(i) = 1} c_i(0, \zeta) \Big) \\
&=& \min (q_{\infty} + 1 - q_0, 1 - q_0 + q_{\infty} )\\
&=& \alpha_i (-1) .
\end{eqnarray*}
Finally,
\begin{eqnarray*}
\alpha_i (k) &=& \min_{w \in A^{V_i(k)}} \left( \inf_{\zeta:
\zeta(V_i (k)) = w} c_i( 1 - \zeta(i),\zeta) + 1 - \sup_{\zeta:
\zeta(V_i (k)) = w} c_i (\zeta(i), \zeta) \right) \\
&=& \min_{w \in A^{V_i(k)}} \left( \inf_{\zeta:
\zeta(V_i (k)) = w} c_i( 0,\zeta) + \inf_{\zeta:
\zeta(V_i (k)) = w} c_i( 1,\zeta) \right) \\
&=& \min_{w \in A^{V_i(k)}} \left( \inf_{\zeta:
\zeta(V_i (k)) = w} q_{l_i (\zeta)} + \inf_{\zeta:
\zeta(V_i (k)) = w} (1-q_{l_i (\zeta)}) \right)\\
&=& (1 - q_k) + q_{\infty} .
\end{eqnarray*}
\end{ex}
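The closed form $\alpha_i(k) = (1-q_k) + q_{\infty}$ of the example can be checked by brute force in a truncated version of the model. The Python sketch below (with hypothetical numerical values for $q$) works in $d=1$ and ignores sites beyond a radius $R$, so that $q_R$ stands in for $q_{\infty}$; the function `alpha` evaluates (\ref{eq:alpha}) directly with $M_i = 1$.

```python
import itertools

# Truncated d = 1 version of the example: sites beyond radius R are
# ignored, so q_R plays the role of q_infinity (an assumption of this
# sketch); q is strictly decreasing, as required.
R = 5
q = [0.2 + 0.6 * 0.5 ** l for l in range(R + 1)]

def ell(zeta):
    # l_i(zeta) at i = 0: the nearest off-center 1 sits at distance
    # l + 1; capped at R when no 1 occurs in the truncation window.
    for dist in range(1, R + 1):
        if zeta[dist] == 1 or zeta[-dist] == 1:
            return dist - 1
    return R

def c(a, zeta):
    p1 = q[ell(zeta)]            # c_0(1, zeta) = q_{l_0(zeta)}
    return p1 if a == 1 else 1.0 - p1

def alpha(k):
    # (eq:alpha) with M_i = 1 and A = {0, 1}: the sum over a != w(i)
    # reduces to the single symbol 1 - w(i).
    inside = list(range(-k, k + 1))
    outside = [j for j in range(-R, R + 1) if abs(j) > k]
    best = float("inf")
    for w in itertools.product([0, 1], repeat=len(inside)):
        vals = []
        for out in itertools.product([0, 1], repeat=len(outside)):
            zeta = dict(zip(inside, w))
            zeta.update(zip(outside, out))
            vals.append(c(1 - zeta[0], zeta))
        best = min(best, min(vals) + 1.0 - max(vals))
    return best

for k in range(3):   # matches alpha_i(k) = (1 - q_k) + q_infinity
    assert abs(alpha(k) - (1.0 - q[k] + q[R])) < 1e-12
```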
Let us introduce some more notation. Note that, by the continuity condition (\ref{eq:continuity}), for each site $i,$
\begin{equation}\label{eq:m}
M_i = \lim_{k \to \infty }
\alpha_i (k) .
\end{equation}
Hence to each
site $i$ we can associate a probability distribution $ \lambda_i$ by
\begin{equation}
\label{eq:lambda0}
\lambda_{i} (-1)\,=\, \frac{\alpha_{i} (-1)}{M_i} ,
\end{equation}
and for $k \ge 0 $
\begin{equation}
\label{eq:lambdak}
\lambda_i (k)\,=\, \frac{\alpha_i (k) \,-\, \alpha_{i} (k-1)}{M_i } .
\end{equation}
We will see that for each $i$ the family of rate functions $c_i(.,.)$
can be represented as a mixture of local rate functions weighted by
$(\lambda_i (k))_{k \geq -1}.$ More formally, we have the following
theorem.
\begin{theo} \label{theo:decomp} Let $(c_i)_{i \in {\mathbb Z}^d} $ be a family
of rate functions satisfying the conditions (\ref{eq:boundedrate}), (\ref{eq:rateprobability}),
the continuity condition
(\ref{eq:continuity}) and the summability condition
(\ref{eq:condition1}). Then for any site $i$ there exists a family
of conditional probabilities $p_i^{[k]}$ on $A$ depending on the
local configurations $\eta(V_i (k))$ such that
\begin{equation}
\label{cmmc}
c_i (a, \eta) \,=\, M_i \, p_i (a|\eta) , \mbox{ where } p_i(a|\eta) = \sum_{k \geq -1 } \lambda_i (k) p_i^{[k]} (a | \eta(V_i (k))).
\end{equation}
As a consequence, the infinitesimal generator $L$ given by
(\ref{eq:generator}) can be rewritten as
\begin{equation}
\label{eq:generator2}
L \,f(\eta) \,=\, \sum_{i \in {\mathbb Z}^d} \sum_{a \in A} \sum_{k \ge -1}
M_i \lambda_i (k) p_i^{[k]}(a| \eta(V_i (k))) [f(\eta^{i,a}) - f(\eta)]\, .
\end{equation}
\end{theo}
\begin{rem}
Note that for $k = -1,$ $V_i (k) = \emptyset $ and hence $
p_i^{[-1]} (a | \eta(V_i (k))) = p_i^{[-1]} (a)$ does not depend on
the configuration. Therefore, $\lambda_{i} (-1)$ represents the
spontaneous self-coloring rate of site $i$ in the process.
\end{rem}
The representation given by (\ref{eq:generator2}) provides a clearer
description of the time evolution of the process. We start with an
initial configuration $\eta$ at time zero. This configuration is
updated in a c\`adl\`ag way as follows. For each site $ i \in {\mathbb Z}^d ,$
consider a rate $M_i$ Poisson point process $N^i .$ The Poisson
processes corresponding to distinct sites are all independent. If at
time $t,$ the Poisson clock associated to site $i$ rings, we choose a
range $k$ with probability $\lambda_i (k)$ independently of everything
else. We then update the value of the configuration at this site by
choosing a symbol $a$ with probability $p_i^{[k]} (a |
\xi^{\eta}_t(V_i(k)))$.
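This forward description is straightforward to implement on a finite volume. The following Python sketch simulates the dynamics on a ring of $n$ sites with uniform rates $M_i = 1$; the range distribution `lam` and the local kernels `p_k` are toy choices invented for the sketch, not taken from the text.

```python
import random

random.seed(0)
n, T = 10, 5.0                      # n sites on a ring (an assumption:
A = (0, 1)                          # the paper works on all of Z^d)
M = 1.0                             # uniform total rate M_i = M
lam = {-1: 0.5, 0: 0.3, 1: 0.2}     # toy range distribution lambda_i

def p_k(k, i, eta):
    # Toy local update kernels p_i^{[k]}; k = -1 ignores the
    # configuration entirely (spontaneous self-coloring).
    if k == -1:
        return 0.7                  # P(new symbol = 1)
    nbhd = [eta[(i + j) % n] for j in range(-k, k + 1)]
    return 0.1 + 0.8 * sum(nbhd) / len(nbhd)

eta, t = [0] * n, 0.0
while True:
    t += random.expovariate(n * M)  # next ring of the superposed clocks
    if t > T:
        break
    i = random.randrange(n)         # uniform since all M_i are equal
    k = random.choices(list(lam), weights=lam.values())[0]
    eta[i] = 1 if random.random() < p_k(k, i, eta) else 0

assert len(eta) == n and all(a in A for a in eta)
```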
\section{Perfect simulation of the stationary process}
The decomposition (\ref{cmmc}) provided by Theorem \ref{theo:decomp}
suggests an algorithm of perfect simulation for the
multicolor long range interacting system. This is the main
result of this article. The goal is to sample under
equilibrium the time evolution of any
finite set of sites $F$ during any fixed finite time interval.
We first introduce a simulation procedure to sample
the time evolution of any
finite set of sites $F$ during any fixed finite time interval $[0, t],$ when starting from
a fixed initial configuration $\eta .$ This simulation
procedure has two stages. First, we draw a backward black and white
sketch in order to determine the set of sites and the succession of
choices affecting the configuration of the set of sites $F$ at time $t.$
Then, in the second stage, a forward coloring procedure
assigns colors to every site involved in the black and white
sketch. This will be formally described in Algorithms 1 and 2 below.
Let us describe the mathematical ideas behind this algorithm. Our goal
is to simulate the configuration of the fixed set of sites $F$ during the
time interval
$[0, t]$ when the process starts from an initial configuration $\eta .$
We go backwards in time along the rate $M_j$ Poisson processes $N^j, j \in F ,$ until we find the
last occurrence time before time $t$ at which one of the Poisson clocks
rang. Note that the probability that the clock of site $i$ rings first
among all these clocks is given by
$$ \frac{M_i}{\sum_{j \in F} M_j } .$$
Then we have to inspect the configuration at the sites belonging to
the finite set $V_i (k) $ which is chosen at that time. $V_i (k) $
is chosen with probability $\lambda_i (k), k \geq -1.$ If $k = -1 $ is chosen, this
means that the value of $\xi (i) $ at that time is chosen according
to $p_i^{[-1]} ,$ independently of the other sites, and thus site
$i$ can be removed from the set $F .$
Otherwise, if $ k \geq 0,$ we have to include all the sites in $V_i
(k) $ to the set of sites $F$ and to continue the algorithm. The
reverse-time checking continues for each point reached previously
until we find an occurrence time before time $0.$ In this
case the algorithm stops.
In the second stage, the algorithm assigns colors to all the sites
that have been involved in the first stage. To begin with,
all sites for which no range $-1$ has been chosen are colored according
to the initial configuration $\eta $ at time $0.$
Then,
successively going forwards in time, we assign colors to the
remaining sites according to $ p^{[ k ]}_i ( \cdot \,|\, \zeta(V_i (k))),$
where all sites in $V_i (k) $ have already been colored in a
previous step of the algorithm. Finally we obtain the colors of
the set of sites $F$ at time $t .$
The finite time simulation Algorithms 1 and 2 use the following variables.
\begin{itemize}
\item $N$ is an auxiliary variable taking values in the set
of non-negative integers
$ \{ 0, 1,2, \ldots \} $
\item $N_{STOP}$ is a counter taking values in the set of non-negative integers
$ \{ 0, 1, 2, \ldots \} $
\item $T_{STOP} $ is an element of $[0, + \infty )$
\item
$I $ is a variable taking values in ${\mathbb Z}^d$
\item
$K$ is a variable taking values in $\{ -1, 0, 1, \ldots \}$
\item
$T$ is an element of $( 0, + \infty ) $
\item
$B = (B_1, B_2, B_3)$ where
\begin{itemize}
\item
$B_1 $ is an array of
elements of ${\mathbb Z}^d $
\item
$B_2$ is an array of
elements of $\{ -1, 0 , 1 , \ldots \} $
\item
$B_3$ is an array of elements of $ (0 , + \infty ) $
\end{itemize}
\item
$C$ is a variable taking values in the set of finite subsets of ${\mathbb Z}^d$
\item $ W $ is an auxiliary variable taking values in $ A$
\item $V$ is an array of elements of $A $
\item $\zeta $ is a function from ${\mathbb Z}^d $ to $A \cup \{ \Delta \} ,$
where $\Delta $ is some extra symbol that does not belong to $A$
\end{itemize}
\begin{algorithm}[h]
\caption{Backward black and white sketch without deaths}
\begin{algorithmic}[1]
\STATE {\it Input:} $F$; {\it Output:} $N_{STOP}$, $B$, $ C$, $T_{STOP} $
\STATE $N \leftarrow 0,$ $N_{STOP} \leftarrow 0 ,$ $ B \leftarrow \emptyset ,$ $ C \leftarrow F, $ $T_{STOP} \leftarrow 0 $
\WHILE {$ T_{STOP} < t \mbox{ and } C \neq \emptyset $}
\STATE Choose a time $T \in (0, +\infty) $ randomly according to the
exponential distribution with parameter
$ \sum_{j \in C} M_j .$ Update
$$ T_{STOP} \leftarrow T_{STOP} + T .$$
\STATE $N \leftarrow N+1 .$
\STATE Choose a site $I \in C$ randomly according to the distribution
$$P ( I = i) = \frac{M_i }{\sum_{j \in C } M_j}$$
\STATE Choose $ K \in \{ -1, 0, 1, \ldots \}$ randomly according to
the distribution
$$P( K = k) = \lambda_I ( k)$$
\STATE $ C \leftarrow C \cup V_I (K)$
\STATE $ B (N) \leftarrow (I, K, T_{STOP} )$
\ENDWHILE
\STATE $N_{STOP} \leftarrow N $
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{ Forward coloring procedure}
\begin{algorithmic}[1]
\STATE {\it Input:} $N_{STOP}$, $B$, $C$, $\eta (C)$; {\it Output:} $ V$
\STATE $ N \leftarrow N_{STOP } $
\STATE $\zeta(j) \leftarrow \eta (j) $ for all $j \in C;$ $\zeta(j) \leftarrow \Delta $ for all $ j \in {\mathbb Z}^d \setminus C $
\WHILE {$N \ge 1$}
\STATE $ (I,K,T) \leftarrow B(N) .$
\IF {$K= -1 $} \STATE Choose $W $ randomly in $A$
according to the probability distribution
$$ P( W = v) = p_I^{[-1]} (v )$$
\ELSE \STATE{Choose $W $ randomly in $A$
according to the probability distribution
$$ P( W = v) = p_I^{[K]} ( v | \zeta ( V_I (K)))$$}
\ENDIF
\STATE $ \zeta (I) \leftarrow W $
\STATE $ V (N) \leftarrow W $
\STATE { $ N \leftarrow N-1 $}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
Using the output $V$ of Algorithm 2
and the output $B$ of Algorithm 1, we can
construct the time evolution $(\xi_s (F) , 0 \le s \le t) $ of
the process. This is done as follows.
Denote by $I(N)$ and $T(N)$ the first and the third coordinates of
the array $B(N),$ respectively. Introduce
the following random times: for any $1 \le n \le
N_{STOP},$
$$S_n = t - T ( N_{STOP} - n +1 ) .$$
\begin{itemize}
\item
For $0 \le s < S_1 $ define $\xi_s (F) = \zeta(F ) .$
\item
For $1 \le n \le N_{STOP},$ for $S_n \le s < S_{n+1}\wedge t $ (with the convention $S_{N_{STOP}+1} = + \infty $), we put
\begin{itemize}
\item
for all $ i \in F $ such that $i \neq I( N_{STOP} - n + 1 ), $ $\xi_s (i) = \xi_{S_n} (i) ; $
\item
for $ i = I( N_{STOP} - n + 1 ) ,$ $ \xi_s (i) = V(n) .$
\end{itemize}
\end{itemize}
We summarize the above discussion in the following proposition.
\begin{prop} \label{theo:1} Let $(c_i)_{i \in {\mathbb Z}^d}$ be a family of
continuous rate functions satisfying the conditions of Theorem \ref{theo:decomp}. If
\begin{equation}
\label{eq:condition1}
\sup_{i \in {\mathbb Z}^d} \sum_{k \ge 0} |V_i (k)| \lambda_i (k) \,< \, +\infty\,,
\end{equation}
then Algorithm 1 stops
almost surely after a finite number of steps, i.e.
$$ P( N_{STOP} < + \infty ) = 1 .$$
Moreover, for any initial configuration $\eta$, there exists a unique
Markov process $(\xi^{\eta}_t)_{t \ge 0}$ such that $\xi^\eta_0 = \eta$ and with
infinitesimal generator
\begin{equation}
\label{eq:generator1}
L \,f(\eta) \,=\, \sum_{i \in {\mathbb Z}^d} \sum_{a \in A} c_i(a , \eta) [f(\eta^{i,a}) - f(\eta)] \, .
\end{equation}
The cylindrical time
evolution $
(\xi_s (F) , 0 \le s \le t ) $ simulated in Algorithms 1 and
2 is a sample from this process $\xi^{\eta} .$
\end{prop}
We now turn to the main object of this paper, the perfect simulation of
the multicolor long range interacting system under equilibrium. The goal is to sample under
equilibrium the time evolution of any
finite set of sites $F$ during any fixed finite time interval.
We first introduce a simulation procedure to sample from equilibrium
the cylindrical configuration at a fixed time. As before, this simulation
procedure has two stages: first, we draw a backward black and white
sketch in order to determine the set of sites and the succession of
choices affecting the configuration of the set of sites at
equilibrium. Then, in the second stage, a forward coloring procedure
assigns colors to every site involved in the black and white
sketch. This will be formally described in Algorithms 3 and 4 below.
The following variables will be used.
\begin{itemize}
\item $N$ is an auxiliary variable taking values in the set
of non-negative integers
$ \{ 0, 1,2, \ldots \} $
\item $N_{STOP}$ is a counter taking values in the set of non-negative integers
$ \{ 0, 1, 2, \ldots \} $
\item
$I $ is a variable taking values in ${\mathbb Z}^d$
\item
$K$ is a variable taking values in $\{ -1, 0, 1, \ldots \}$
\item
$B $ is an array of
elements of ${\mathbb Z}^d \times \{ -1, 0 , 1 , \ldots \} $
\item
$C$ is a variable taking values in the set of finite subsets of ${\mathbb Z}^d$
\item $ W $ is an auxiliary variable taking values in $ A$
\item $\eta $ is a function from ${\mathbb Z}^d $ to $A \cup \{ \Delta \} ,$
where $\Delta $ is some extra symbol that does not belong to $A$
\end{itemize}
{\begin{algorithm}[h]
\caption{Backward black and white sketch}
\label{algo1}
\begin{algorithmic}[1]
\STATE {\it Input:} $F$; {\it Output:} $N_{STOP}$, $B$
\STATE $N \leftarrow 0,$ $N_{STOP} \leftarrow 0 ,$ $ B \leftarrow \emptyset ,$ $ C \leftarrow F, $
\WHILE {$C \neq \emptyset $}
\STATE $N \leftarrow N+1 .$
\STATE Choose a site $I \in C$ randomly according to the distribution
$$P ( I = i) = \frac{M_i }{\sum_{j \in C } M_j}$$
\STATE Choose $ K \in \{ -1, 0, 1, \ldots \}$ randomly according to
the distribution
$$P( K = k) = \lambda_I ( k)$$
\IF {$K = -1$} \STATE {$C \leftarrow C \setminus \{ I \}$}
\ELSE \STATE $ C \leftarrow C \cup V_I (K)$ \ENDIF
\STATE $ B (N) \leftarrow (I, K )$
\ENDWHILE
\STATE $N_{STOP} \leftarrow N $
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{ Forward coloring procedure}
\begin{algorithmic}[1]
\STATE {\it Input:} $N_{STOP}$, $B$; {\it Output:} $\{(i,\eta(i)),
i \in F\}$
\STATE $ N \leftarrow N_{STOP } $
\STATE $\eta(j) \leftarrow \Delta $ for all $ j \in {\mathbb Z}^d $
\WHILE {$N \ge 1$}
\STATE $ (I,K) \leftarrow B(N) .$
\IF {$K= -1 $} \STATE Choose $W $ randomly in $A$
according to the probability distribution
$$ P( W = v) = p_I^{[-1]} (v )$$
\ELSE \STATE{Choose $W $ randomly in $A$
according to the probability distribution
$$ P( W = v) = p_I^{[K]} ( v | \eta ( V_I (K)))$$}
\ENDIF
\STATE $ \eta (I) \leftarrow W $
\STATE { $ N \leftarrow N-1 $}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
}
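For concreteness, here is a minimal Python rendering of Algorithms 3 and 4 for a hypothetical $d = 1$ model on $A = \{0,1\}$ with equal rates $M_i$ (so the site $I$ is uniform on $C$) and a toy range distribution satisfying $\sum_k |V_i(k)| \lambda_i(k) = 3 \cdot 0.2 = 0.6 < 1$, as required by (\ref{eq:condition2}); the update kernels are invented for the sketch.

```python
import random

random.seed(1)
A = (0, 1)
lam = {-1: 0.8, 1: 0.2}       # toy lambda_i; sum_k |V(k)| lam(k) = 0.6 < 1

def V(i, k):
    return {i + j for j in range(-k, k + 1)}

# Algorithm 3: backward black and white sketch.
F = {0, 1}
C, B = set(F), []
while C:
    i = random.choice(sorted(C))   # all M_i equal, so I is uniform on C
    k = random.choices(list(lam), weights=lam.values())[0]
    if k == -1:
        C.discard(i)
    else:
        C |= V(i, k)
    B.append((i, k))

# Algorithm 4: forward coloring, replaying B from its last entry backwards.
eta = {}
for i, k in reversed(B):
    if k == -1:
        eta[i] = 1 if random.random() < 0.6 else 0   # toy p_i^{[-1]}
    else:
        # every j in V_i(k) was removed (hence colored) at a later
        # backward step, i.e. an earlier forward step
        s = sum(eta[j] for j in V(i, k))
        eta[i] = 1 if random.random() < 0.1 + 0.8 * s / len(V(i, k)) else 0

assert all(eta[i] in A for i in F)
```

Since the sketch process is subcritical under (\ref{eq:condition2}), the backward stage empties $C$ after finitely many steps almost surely.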
Let us call $\mu $ the distribution on $A^{{\mathbb Z}^d} $ whose projection on
$A^F $ is the law of $\eta (F)$ printed at the end of
Algorithm 4.
The following theorem gives a sufficient condition
ensuring that Algorithm 3 stops after a finite number of steps
and shows that $\mu $ is actually the invariant measure of the process.
\begin{theo}\label{theo:nstop}
Let $(c_i)_{i \in {\mathbb Z}^d} $ be a family
of rate functions satisfying the conditions of Theorem
\ref{theo:decomp}. If
\begin{equation}
\label{eq:condition2}
\sup_{i \in {\mathbb Z}^d} \sum_{k \ge 0} \, |V_i (k)| \lambda_i (k) \,< \, 1\,,
\end{equation}
then
$$ P ( N_{STOP} < + \infty ) = 1.$$
The law of the set $\{ (i,\eta (i)) : i \in F \} $ printed at the
end of Algorithms 3 and 4 is the projection on $A^F$ of the unique
invariant probability measure $\mu $
of the process. Moreover, the law of the process starting
from any initial configuration
converges weakly to $\mu $ and this convergence takes place
exponentially fast.
\end{theo}
\begin{rem}
In the literature, we say that the process is {\it ergodic}, if it
admits a unique invariant
measure which is the weak limit of the law of the process starting
from any initial configuration. If this convergence takes place exponentially fast,
we say that the process is {\it exponentially ergodic}. Therefore, Theorem
\ref{theo:nstop} says that the multicolor system is exponentially ergodic.
\end{rem}
Algorithms 3 and 4 show how to sample the invariant probability measure of
the process. We now pursue a more ambitious goal: how to sample
the stationary time evolution of any fixed finite set of sites $F$
during any fixed interval of time $ [0, t ] .$ This is done using
Algorithms 1 and 2 as well.
Algorithm 1 produces a backward black and white sketch without
removing the spontaneously coloring sites. We start at time $t$ with
the set of sites $F$ and run backward in time until time $0 .$ This
produces as part of its output the set of sites $C$ whose coloring at
time $0$ will affect the coloring of the sites in $F$ during $[0, t].$
We then use the output set $C$ of Algorithm 1 as input set of
positions in Algorithms 3 and 4. Algorithms 3 and 4 will give us as
output the configuration $\eta (C)$ that will be used as input
configuration for Algorithm 2.\\
\begin{theo}\label{theo:4}
Under the conditions of Proposition \ref{theo:1}, Algorithm 1 stops
almost surely after a finite number of steps, i.e.
$$ P( N_{STOP} < + \infty ) = 1 .$$
Moreover, under the conditions of Theorem \ref{theo:nstop}, for any
$ t> 0 ,$ the cylindrical time
evolution $
(\xi_s (F) , 0 \le s \le t ) $ simulated in Algorithms 1, 2, 3 and
4 is a sample from the stationary process.
\end{theo}
\section{Finitary coding}
The perfect simulation procedure described in Algorithms 1--4 gives
the basis for the construction of a finitary coding for the invariant
probability measure of the multicolor system $\xi_t $. By this we mean
the following. Let $(Y(i) , i \in {\mathbb Z}^d)$ be a family of i.i.d. random
variables assuming values on a finite set $S$. Let $ (\xi_0 (i), i \in
{\mathbb Z}^d ) $ be the configuration sampled according to the invariant
probability measure $\mu $ obtained as output of Algorithm 4.
\begin{defin}
We say that there exists a {\it finitary coding} from $(Y(i) ,i \in
{\mathbb Z}^d) $ to $(\xi_0 (i) , i \in {\mathbb Z}^d) $ if there exists a
deterministic function $f: S^{{\mathbb Z}^d} \rightarrow A^{{\mathbb Z}^d}$
such that almost surely the following holds:
\begin{itemize}
\item $f$ commutes with the shift operator, that is, $f(T_i(y)) =
T_i(f(y))$ for any $i \in {\mathbb Z}^d$;
\item $\xi_0 = f( (Y (j) ), j \in {\mathbb Z}^d ) $; and
\item there exists a finite subset $\bar{F} $ of ${\mathbb Z}^d$ satisfying
$$ f ( (Y (j) ), j \in {\mathbb Z}^d) = f ( (Y' (j) ), j \in {\mathbb Z}^d)$$
whenever
$$ Y' (j) = Y (j) \mbox{ for all } j \in \bar{F} .$$
\end{itemize}
\end{defin}
In the first condition of the definition, the notation $T_i$
denotes the translation by $i,$ both in $S^{{\mathbb Z}^d}$
and in $A^{{\mathbb Z}^d}$. More precisely, for any $i \in {\mathbb Z}^d$, if $y \in
S^{{\mathbb Z}^d}$ then $T_i(y)$ is the element of $S^{{\mathbb Z}^d}$ such that
$T_i(y)(j) = y(j-i),$ with the analogous definition for $\xi \in
A^{{\mathbb Z}^d}$.
\begin{theo}\label{theo:5}
Under the conditions of Theorem \ref{theo:nstop}
there exists a finitary coding from an independent and identically
distributed family of finite-valued variables $ (Y
(i), i \in {\mathbb Z}^d)$ to $(\xi_0 (i) , i \in {\mathbb Z}^d ) .$
\end{theo}
Theorem 1.1 of H\"aggstr\"om and Steif (2000) follows as a corollary
of Theorem \ref{theo:5} under a slightly stronger condition. In order to
state this corollary, we need to introduce the notion of Markov random field.
\begin{defin} A Markov random field $X$ on ${\mathbb Z}^d$ with values in a
finite alphabet $A$ has distribution $\mu$ if $\mu$ admits a
consistent set of conditional probabilities
$$ \mu(X(\Lambda) = \xi(\Lambda)| X({\mathbb Z}^d \setminus \Lambda) = \xi({\mathbb Z}^d
\setminus \Lambda)) \,=\,
\mu(X(\Lambda) = \xi (\Lambda)| X(\partial \Lambda) = \xi(\partial
\Lambda))$$
for all finite $\Lambda \subset {\mathbb Z}^d$, $\xi \in A^{{\mathbb Z}^d}$. Here,
$ \partial \Lambda = \{ j \in {\mathbb Z}^d : \inf_{ i \in \Lambda } \| i -j \| = 1 \} .$
Such a set of conditional probabilities is
called the specification of the random field and denoted by ${\cal
Q}$.
\end{defin}
\begin{cor} \label{cor:HS} For any Markov random field $X$ on ${\mathbb Z}^d$ with
specification ${\cal Q}$ satisfying
\begin{equation}
\label{eq:HS}
\sum_{a \in A} \min_{\zeta(\partial 0) \in A^{\partial 0}} {\cal Q}(X(0)=a|
X(\partial 0) = \zeta(\partial 0)) > \frac{2d}{2d +1 },
\end{equation}
there exists an i.i.d. sequence $ (Y (i) , i \in {\mathbb Z}^d ) $ of finite
valued random variables such that there exists a finitary coding from
$ (Y (i) , i \in {\mathbb Z}^d ) $ to the Markov random field.
\end{cor}
\begin{rem}
Just for comparison, Condition (\ref{eq:HS}) is equivalent to
$$\alpha_0(-1) > \frac{2d}{2d +1 },$$
while {\it Condition HN} in Theorem 1.1 of H\"aggstr\"om and Steif
(2000) can be rewritten in our notation as
$$ \alpha_0(-1) > \frac{2d -1 }{2d}.$$
This does not seem too high a price to pay in
order to be able to treat the general case of long range interactions.
\end{rem}
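As an illustration, condition (\ref{eq:HS}) can be evaluated in closed form for a nearest-neighbor Ising-type specification in $d = 1$; the inverse temperature `beta` below is a hypothetical value chosen so that the condition holds (it does exactly when $\beta < \log 2 / 4$).

```python
import math, itertools

# Nearest-neighbor Ising-type specification on Z (d = 1), spins {-1, +1};
# beta is a hypothetical inverse temperature for this sketch.
d, beta = 1, 0.1

def Q(a, boundary):
    # Q(X(0) = a | X(boundary of 0) = boundary), boundary = (left, right)
    h = sum(boundary)
    w = {b: math.exp(beta * b * h) for b in (-1, 1)}
    return w[a] / (w[-1] + w[1])

# Left-hand side of condition (eq:HS)
lhs = sum(
    min(Q(a, b) for b in itertools.product((-1, 1), repeat=2 * d))
    for a in (-1, 1)
)

# Closed form: the worst boundary fully opposes a, giving
# min_b Q(a|b) = 1 / (1 + e^{4 beta}) for each of the two symbols.
assert abs(lhs - 2.0 / (1.0 + math.exp(4 * beta))) < 1e-12
# For beta < log(2)/4 the condition lhs > 2d/(2d+1) = 2/3 holds.
assert lhs > 2 * d / (2 * d + 1)
```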
\section{Proof of Theorem \ref{theo:decomp}}
The countable mixture representation provided by Theorem
\ref{theo:decomp} is the basis of all the other results presented in
this paper. Therefore it is only fair that its proof comes first.
Recall that $c_i (a, \eta ) = M_i \, p_i (a|\eta ). $ Therefore, it is
sufficient to provide a decomposition for $p_i (a|\eta ).$ Put
\begin{eqnarray*}
r_i^{[-1]} (a) &=& \inf_{\zeta} p_i(a |\zeta),\\
\Delta^{[-1]}_i (a) &= &r_i^{[-1]} (a), \\
r_i^{[0]} ( a| \eta (V_i (0)))& =& \inf_{\zeta : \zeta (V_i (0) ) =
\eta (V_i (0)) }p_i (a| \zeta) ,\\
\Delta^{[0]}_i ( a | \eta (V_i (0) ))&=
& r_i^{[0]} ( a |\eta(V_i (0) )) - r^{[-1]}_i (a) .
\end{eqnarray*}
For any $k \geq 1,$ define
$$ r_i^{[k]} ( a| \eta (V_i(k))) = \inf_{ \zeta : \zeta (V_i (k)) =
\eta (V_i (k))} p_i(a | \zeta) , $$
$$ \Delta_i^{[k]} (a | \eta (V_i (k)))=
r_i^{[k]} (a |\eta(V_i (k))) - r^{[k-1]}_{i} ( a | \eta ( V_i ({k-1}) )). $$
Then we have that
$$ p_i(a|\eta) = \sum_{j=-1}^k \Delta^{[j]}_i (a| \eta(V_i(j))) +
\left[ p_i(a|\eta) - r_i^{[k]} (a|\eta (V_i(k)))\right].
$$
By continuity of $c_i(a,\eta),$ hence of $p_i (a|\eta),$
$$ r_i^{[k]} (a|\eta (V_i(k))) \to p_i ( a| \eta) \mbox{ as } k \to \infty .$$
Hence by monotone convergence, we conclude that
$$ \sum_{j=-1}^\infty \Delta_i^{[j]} (a| \eta(V_i (j))) = p_i(a| \eta ) .$$
Now, put
$$ \lambda_i (k,\eta (V_i(k))) = \sum_a \Delta_i^{[k]} (a| \eta(V_i (k)))$$
and for any $i,k$ such that $ \lambda_i (k,\eta (V_i(k))) > 0,$ we define
$$ \tilde{p}_i^{[k]} (a | \eta(V_i(k))) = \frac{\Delta_i^{[k]} (a|
\eta(V_i (k)))}{ \lambda_i (k,\eta (V_i(k)))}.$$ For $i, k$ such that
$ \lambda_i (k,\eta (V_i(k))) = 0,$ define $\tilde{p}_i^{[k]} (a |
\eta(V_i(k)))$ in an arbitrary fixed way.
Hence
\begin{equation}\label{eq:almost}
p_i(a|\eta) = \sum_{k=-1}^\infty \lambda_i (k,\eta(V_i(k)))
\tilde{p}_i^{[k]} (a| \eta(V_i(k))).
\end{equation}
In (\ref{eq:almost}) the factors $\lambda_i (k,\eta(V_i(k) ))$ still
depend on $\eta (V_i(k)) .$ To obtain the decomposition as in the
theorem, we must rewrite it as follows.
For any $i,$ take $M_i$ as in (\ref{eq:m}) and the sequences $\alpha_i
(k), \lambda_i (k), k \geq -1,$ as defined in (\ref{eq:alpha}) and
(\ref{eq:lambdak}), respectively. Define the new quantities
$$\alpha_i (k,\eta(V_i(k))) = M_i \, \sum_{l \le k} \lambda_i (l, \eta( V_i(l))).$$
Finally put $p_i^{[-1]} (a) = \tilde{p}_i^{[-1]} (a),$ and for any $k \geq 0,$
\begin{eqnarray*}
&& p_i^{[k]} ( a| \eta(V_i({k}))) = \\
&& \sum_{-1 \le l' \le l \le k-1} 1_{\{
\alpha_i (l' - 1 ,\eta(V_i (l' -1))) < \alpha_i ({k-1}) \le \alpha_i({l'},
\eta(V_i({l' })))\}} 1_{\{
\alpha_i (l,\eta(V_i (l))) < \alpha_i ({k}) \le \alpha_i({l+1},
\eta(V_i({l+1})))\}} \\
&& \quad \quad \quad
\left[ \frac{\alpha_i (l',\eta (V_i(l'))) - \alpha_i(k-1) }{M_i \,
\lambda_i({k})} \tilde{p}_i^{[l']} (a | \eta (V_i(l')))\right. \\
&& \quad \quad \quad + \sum_{m = l'+1}^{l} \frac{\lambda_i (m , \eta (V_i (m)))}{M_i \lambda_i (k) }
\tilde{p}_i^{[m]} (a | \eta (V_i(m)))
\\
&&
\quad \quad \quad \left. + \; \frac{\alpha_i({k}) - \alpha_i (l,\eta (V_i(l)))}{M_i \,
\lambda_i({k})} \tilde{p}_i^{[l+1]} (a| \eta (V_i({l+1}))) \right] .
\end{eqnarray*}
This concludes our proof. \hfill $\square$
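The telescoping construction underlying the proof can be verified numerically for a hypothetical finite-range rate, for which the decomposition terminates after finitely many terms. In the Python sketch below, $p_i(a|\eta)$ depends only on $\eta(V_i(1))$ in $d = 1$, so $r_i^{[k]} = p_i$ for all $k \ge 1$.

```python
import itertools

# Hypothetical finite-range toy: p_i(a|eta) depends only on the window
# w = (eta(-1), eta(0), eta(1)), so the decomposition stops at k = 1.
windows = list(itertools.product([0, 1], repeat=3))

def p(a, w):
    p1 = 0.1 + 0.25 * sum(w)        # toy P(new symbol = 1 | neighborhood)
    return p1 if a == 1 else 1.0 - p1

def r(k, a, w):
    # r_i^{[k]}(a|.) = inf of p over configurations agreeing on V_i(k)
    if k == -1:
        return min(p(a, v) for v in windows)
    if k == 0:
        return min(p(a, v) for v in windows if v[1] == w[1])
    return p(a, w)                  # k >= 1: the window determines p

for w in windows:
    for a in (0, 1):
        deltas = [r(-1, a, w),
                  r(0, a, w) - r(-1, a, w),
                  r(1, a, w) - r(0, a, w)]
        assert all(x >= -1e-12 for x in deltas)    # Delta^{[k]} >= 0
        assert abs(sum(deltas) - p(a, w)) < 1e-12  # telescopes to p(a|w)
    lam = [sum(r(k, a, w) - (r(k - 1, a, w) if k >= 0 else 0.0)
               for a in (0, 1)) for k in (-1, 0, 1)]
    assert abs(sum(lam) - 1.0) < 1e-12             # lambda_i(k,.) sums to 1
```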
\section{The black and white time-reverse sketch process} \label{sec:bw}
The {\it black and white time-reverse sketch process} gives the
mathematically precise description of the backward black and white
sketch of Algorithm 1 given above. We start
by introducing some more notation. For each $ i \in {\mathbb Z}^d, $ denote
by $\ldots T_{-2}^i <T_{-1}^i < T_{0}^i < 0 < T_1^i < T_2^i <
\ldots$ the occurrence times of the rate $M_i$ Poisson point process
$N^i $ on the real line. The Poisson point processes associated to
different sites are independent. To each point $T_n^i$ associate an
independent mark $K^i_n$ according to the probability distribution
$(\lambda_i(k))_{k \ge -1}$. As usual, we identify the Poisson point
processes and the counting measures through the formula
$$N^i[s,t] \,=\, \sum_{n \in {\mathbb Z}} {\bf 1}\hskip-.5mm_{\{ s \le T_n^i \le t\}}.$$
It follows from this identification that for any $t > 0$ we have
$T^i_{N^i(0,t]} \le t < T^i_{N^i(0,t]+1},$ and for any $t \le 0,$
$T^i_{-N^i(t,0]} \le t < T^i_{-N^i(t,0]+1}$.
For each $i \in {\mathbb Z}^d$ and $t \in {\mathbb R}$ we define the time-reverse point
process starting at time $t,$ associated to site $i,$
\begin{eqnarray}
\label{eq:tildet}
\tilde{T}^{(i,t)}_n &=& t \,-\, T^i_{N^i(0,t]-n+1}, \quad t \ge 0,\nonumber \\
\tilde{T}^{(i,t)}_n & = & t \,-\, T^i_{-N^i(t,0]-n+1}, \quad t < 0 .
\end{eqnarray}
We also define the associated marks
\begin{eqnarray}
\label{eq:tildet1}
\tilde{K}^{(i,t)}_n &=& K^i_{N^i(0,t]-n+1}, \quad t \ge 0,\nonumber \\
\tilde{K}^{(i,t)}_n & = & K^i_{-N^i(t,0]-n+1}, \quad t < 0.
\end{eqnarray}
For each site $i \in {\mathbb Z}^d$, $k \ge -1$, the reversed $k$-marked
Poisson point process returning from time $t$ is defined as
\begin{equation}
\label{eq:tilden}
\tilde{N}^{(i,t,k)}[s,u] \,=\, \sum_{n} {\bf 1}\hskip-.5mm_{\{s \le \tilde{T}^{(i,t)}_{n} \le u\}} {\bf 1}\hskip-.5mm_{\{\tilde{K}^{(i,t)}_n = k\}}.
\end{equation}
To define the black and white time-reverse sketch process we need to
introduce a family of transformations $\{\pi^{(i,k)}, i \in {\mathbb Z}^d, k
\ge 0\}$ on the set of finite subsets of ${\mathbb Z}^d,$ $ {\cal F}({\mathbb Z}^d),$
defined as follows. For any unitary set $\{j\}$,
\begin{equation}
\label{eq:pij}
\pi^{(i,k)}(\{j\}) \,=\, \left\{ \begin{array}{ll}
V_i (k), & \mbox{ if } j=i \\
\{j\}, & \mbox{ otherwise}
\end{array} \right. .
\end{equation}
Notice that for $k=-1,$ $ \pi^{(i,k)}(\{i\}) = \emptyset.$ For any
finite set $F \subset {\mathbb Z}^d$, we define similarly
\begin{equation}
\label{eq:pif}
\pi^{(i,k)}(F) \,=\, \cup_{j \in F} \pi^{(i,k)}(\{j\}) .
\end{equation}
The black and white time-reverse sketch process starting at site $i$
at time $t$ will be denoted by $(C_s^{(i,t)})_{s \geq 0}.$
$C_s^{(i,t)}$ is the set of sites at time $s$ whose colors affect the
color of site $i$ at time $t.$ The evolution of this process is
defined through the following equation: $C_0^{(i,t)} := \{i\},$ and
\begin{equation}
\label{eq:ct}
f( C_s^{(i,t)}) \,=\, f(C_0^{(i,t)}) \,+\, \sum_{k \ge -1} \sum_{j \in {\mathbb Z}^d} \int_0^s [f(\pi^{(j,k)} (C_{u-}^{(i,t)})) - f(C_{u-}^{(i,t)})]\, \tilde{N}^{(j,t,k)}(du),
\end{equation}
where $f: {\cal F}({\mathbb Z}^d) \rightarrow {\mathbb R}$ is any bounded cylindrical
function. This family of equations characterizes completely the time
evolution $\{C_s^{(i,t)}, s \ge 0\}$. For any finite set $F \subset
{\mathbb Z}^d$ define
$$C_s^{(F,t)} \,=\, \cup_{i \in F} C_s^{(i,t)}.$$
The following proposition summarizes the properties of the family of
processes defined above.
\begin{prop}
For any finite set $F \subset {\mathbb Z}^d$, $C_s^{(F,t)}$ is a Markov jump
process having as infinitesimal generator
\begin{equation}
\label{eq:generatord}
L f(C) \,=\, \sum_{i \in C} M_i \left( \sum_{k \ge 0} \lambda_i (k) [f(C \cup V_i(k)) - f(C)] + \lambda_i (-1) [f(C \setminus \{i\}) - f(C)] \right) ,
\end{equation}
where $f$ is any bounded function.
\end{prop}
{\bf Proof:} The proof follows in a standard way from the construction
\reff{eq:ct}.
\section{Proof of Proposition \ref{theo:1}}
The existence issue addressed by Proposition \ref{theo:1} can be
reformulated in terms of the black and white time-reverse sketch
process described above. The process is well defined if for each site
$i$ and each time $t$, the time-reverse procedure $C^{(i,t)} $
described above is a non-explosive Markov jump process. This means
that for each time $t,$ the number of operations needed to determine
the value of $\xi^{\eta}_t(i)$ is finite almost surely. Note that by
equation (\ref{eq:ct}), the jumps of $C^{(i, t)} $ occur at total rate
$$ \sum_{j \in C_s^{(i,t)}} M_j \, \sum_{ k \geq - 1 } \lambda_j (k) \le (\sup_j M_j ) \, | C_s^{(i,t)} |,$$
where $| \cdot | $ denotes the cardinality of a set. Hence it suffices to show that the cardinality of
$C^{(i, t) }$ remains finite.
More precisely, fix some $N \in {\mathbb N}.$ Let $L_s = | C_s^{(i,t)}| $ and
$$ T_N = \inf \{ s : L_s \geq N \} .$$ Then by (\ref{eq:ct}),
\begin{eqnarray}\label{eq:ub}
L_{s \wedge T_N} & \le& 1 + \sum_{k \geq 1} \sum_{j \in {\mathbb Z}^d} \int_0^{s \wedge T_N} [|V_j(k)| - 1] 1_{\{ j \in C^{(i,t)}_{u-} \}} \, \tilde{N}^{(j,t,k)}(du)\nonumber\\
&& -\sum_{j \in {\mathbb Z}^d} \int_0^{s \wedge T_N} 1_{\{ j \in C^{(i,t)}_{u-} \}} \, \tilde{N}^{(j,t,-1 )}(du) .
\end{eqnarray}
Passing to expectation and using that by condition
(\ref{eq:condition1}),
$$ m = \sup_i \sum_{k \geq 1} M_i \, \lambda_i (k) |V_i(k)| < + \infty,$$
this yields
\begin{eqnarray}\label{eq:upperbound2}
E (L_{s \wedge T_N})& \le& 1 + \sum_{j \in {\mathbb Z}^d}\, M_j \left( (\sum_{k \geq 1}\, \lambda_j (k) [ |V_j(k)| - 1] ) - \lambda_j (-1)\right) \nonumber \\
&& \quad \quad \quad \quad \quad \quad \quad \quad \times E \int_0^{s\wedge T_N} 1_{\{ j \in C^{(i,t)}_{u-} \}} du \nonumber \\
&\le& 1 + m \, E \int_0^{s\wedge T_N} L_u du .
\end{eqnarray}
Letting $N \to \infty ,$ we thus get that
$$ E(L_s) \le 1 + m \int_0^s E(L_u) du,$$
and Gronwall's lemma yields
\begin{equation}
\label{eq:gron}
E(L_s) \le e^{ms} .
\end{equation}
This implies that the number of sites that have to be determined in
order to know the value of site $i$ at time $t$ is finite almost
surely. This means that the process $C_s^{(F, t)}
$ admits only a finite number of
jumps on any finite time interval. Hence, we have necessarily
$N_{STOP} < + \infty $ almost surely, which means that the algorithm
stops almost surely after a finite time. This concludes the proof of Proposition \ref{theo:1}.
\section{Proof of Theorem \ref{theo:nstop}}
We show that under condition (\ref{eq:condition2}),
Algorithm 3 stops after a finite
time almost surely. Write $L^i_s$ for the cardinality of
$C_s^{(i, t)} .$ Using once more the upper-bound (\ref{eq:upperbound2})
and the fact that under condition (\ref{eq:condition2}),
$$ M_j \left( (\sum_{k \geq 1}\, \lambda_j (k) [ |V_j(k)| - 1] ) - \lambda_j (-1)\right) \le - \varepsilon < 0 ,$$
Gronwall's lemma yields that
$$ E( L^i_s) \le e^{- \varepsilon s} .$$
Hence, since $ |C_s^{(F, t)} | \le \sum_{i \in F } |C_s^{(i, t)}| =
\sum_{i \in F } L_s^i,$
$$ E ( |C_s^{(F, t)} |) \le |F| e^{ - \varepsilon s} . $$
This implies that $ \inf \{ s: C_s^{(F, t)} = \emptyset \} $ is finite
almost surely. Due to Proposition \ref{theo:1}, the process $C_s^{(F, t)}
$ is non-explosive, which means that it admits only a finite number of
jumps on any finite time interval. Hence, we have necessarily
$N_{STOP} < + \infty $ almost surely, which means that the algorithm
stops almost surely after a finite time.
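The exponential decay of $E|C_s^{(F,t)}|$ driving this argument can be illustrated by Monte Carlo for a toy $d = 1$ model with hypothetical parameters; with $\lambda(-1) = 0.8$, $\lambda(1) = 0.2$ and $M_i = 1$ one gets $\varepsilon = 0.4$, so the bound predicts $E|C_5| \le 2 e^{-2} \approx 0.27$ for $|F| = 2$.

```python
import random

# Monte Carlo illustration of E|C_s^{(F,t)}| <= |F| e^{-eps s} for a
# hypothetical d = 1 model: M_i = 1, lambda(-1) = 0.8, lambda(1) = 0.2,
# so sum_k |V(k)| lambda(k) = 0.6 < 1 and eps = 0.4.
random.seed(2)
lam = {-1: 0.8, 1: 0.2}
M, s_max, trials = 1.0, 5.0, 200

def surviving_sites(F):
    C, s = set(F), 0.0
    while C:
        s += random.expovariate(len(C) * M)   # next jump of the sketch
        if s > s_max:
            break
        i = random.choice(sorted(C))          # uniform: all M_i equal
        k = random.choices(list(lam), weights=lam.values())[0]
        if k == -1:
            C.discard(i)
        else:
            C |= {i - 1, i, i + 1}            # V_i(1) in d = 1
    return len(C)

mean = sum(surviving_sites({0, 1}) for _ in range(trials)) / trials
# The empirical mean should comfortably stay below the initial size.
assert mean < 1.0
```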
In order to show that the measure $\mu $ that we have simulated in this
way is necessarily the unique invariant probability measure of the
process, we prove the following lemma.
\begin{lem}\label{lemma:1}
Fix a time $t > 0,$ some finite set of sites $F \subset {\mathbb Z}^d $ and two
initial configurations $ \eta $ and $\zeta \in A^{{\mathbb Z}^d} .$ Then there exists a
coupling of the two processes $(\xi^\eta_s)_s$ and $(\xi^\zeta_s)_s $ such that
$$
P( \xi_t^\eta (F) \neq \xi^\zeta_t (F) ) \le |F| e^{- \varepsilon t } .
$$
\end{lem}
From this lemma, it follows immediately that $\mu$ is the unique
invariant measure of the process and that the convergence towards the
invariant measure takes place exponentially fast.
{\bf Proof of Lemma \ref{lemma:1}}.
We use a slight modification of Algorithm 1 and 2 in order to
construct $\xi_t^\eta $ and $\xi_t^\zeta . $ The modification is defined
as follows. Replace step 8 of Algorithm 1 by
$$ \mbox{ \bf if } K = -1 ,\mbox{ \bf then } $$
$$ C \leftarrow C \setminus \{ I \}$$
$$ \mbox{ \bf else } $$
$$ C \leftarrow C \cup V_I (K) $$ We use the same realizations of $T,
I $ and $ K $ for the construction of $\xi_t^\eta $ and $\xi_t^\zeta
$. Write $L_s$ for the cardinality of $C_s^{(F,t)}.$ Clearly, both
realizations of $\xi_t^\eta $ and $\xi_t^\zeta $ do not depend on the
initial configuration $\eta,$ $\zeta$ respectively if and only if the
output $C$ of Algorithm 1 is void. Thus,
\begin{eqnarray*}
P ( \xi_t^\eta ( F) \neq \xi_t^\zeta (F) )
& \le & P( T_{STOP} \geq t ) \\
& =& {\mathbb P} ( L_t \geq 1 )\\
& \le & E (L_t) \le |F| e^{- \varepsilon t } .
\end{eqnarray*}
This concludes the proof of Lemma \ref{lemma:1}.
\section{Proof of Theorem \ref{theo:4}}
The proof of Theorem \ref{theo:4} goes according to the following
lines.
We start at time $t$ with the sites in $F$ and go back
into the past until time 0 following the backward black and white
sketch without deaths described in Algorithm 1. The set $C$ of points
reached by this procedure at time 0 are the only ones which coloring
affects the evolution during the interval of time $[0,t]$ of the sites
belonging to $F$.
We need to know that $C$ is a finite set. This follows from a slight
modification of the proof of Theorem \ref{theo:1}. Notice that in the
construction by Algorithm 1, even if $K = -1 ,$ the corresponding site
is not removed from the set $C. $ This implies that in the upper bound
(\ref{eq:ub}) the negative term on the right hand side disappears.
This modification does not affect the upper bound
(\ref{eq:upperbound2}) which remains true.
Using Theorem \ref{theo:nstop} we assign colors to the sites
belonging to $C$ using the invariant distribution at the origin of the
multicolor system. Then we apply Algorithm 2 to describe the time evolution
of the coloring of the sites in $F$. This evolution depends on the
colors of the sites in $C$ at time zero as well as on the successive
choices of sites and ranges made during the backward steps
starting at time $t$. This concludes the proof.
\section{Proof of Theorem \ref{theo:5} }
The construction of the finitary coding can be better understood if we
do it using two intermediate steps based on families of
infinite-valued random variables.
Using a slightly abusive terminology, let us introduce the following
definitions of a finitary coding from families of piles of
i.i.d. random variables.
\begin{defin}\label{def:finitarycoding}
We say that there exists a {\it finitary coding} from a family of
i.i.d. uniform random variables $(U_n (i) ,i
\in {\mathbb Z}^d, n \in {\mathbb N} )$ to the configuration $(\xi_0 (i), i \in {\mathbb Z}^d)
$ sampled with respect to the invariant probability measure $\mu$,
if there exists a deterministic function $f:[0,1]^{{\mathbb Z}^d \times {\mathbb N}}
\rightarrow A^{{\mathbb Z}^d}$ such that, almost surely, the following holds
\begin{itemize}
\item $f$ commutes with the shift operator in ${\mathbb Z}^d$;
\item $\xi_0 = f( (U_n (j) ), j \in {\mathbb Z}^d, n \in {\mathbb N} ) $; and
\item for each site $i \in {\mathbb Z}^d$, there exists a finite subset
$\bar{F}_i $ of ${\mathbb Z}^d$ and $\bar{n}_i \ge 1$ such that if
$$ U'_n (j) = U_n (j) \mbox{ for all } j \in \bar{F}_i, n \le
\bar{n}_i$$
then
$$ f ( (U_n (j) ), j \in {\mathbb Z}^d, n \in {\mathbb N})(i) = f ( (U'_n (j) ), j \in
{\mathbb Z}^d, n \in {\mathbb N})(i).$$
\end{itemize}
\end{defin}
\begin{defin}\label{def:finitarycoding2}
We say that there exists a {\it finitary coding} from a family of
i.i.d. fair Bernoulli random variables $(Y_{n,r} (i) ,i \in {\mathbb Z}^d, (n,r)
\in {\mathbb N}^2 )$ to the configuration $(\xi_0 (i), i \in {\mathbb Z}^d) $ sampled
with respect to the invariant probability measure $\mu$, if there
exists a deterministic function $f:\{0,1\}^{{\mathbb Z}^d \times {\mathbb N}^2}
\rightarrow A^{{\mathbb Z}^d}$ such that, almost surely, the following holds
\begin{itemize}
\item $f$ commutes with the shift operator in ${\mathbb Z}^d$;
\item $\xi_0 = f( (Y_{n,r} (j) ), j \in {\mathbb Z}^d, (n,r) \in {\mathbb N}^2 ) $; and
\item for each site $i \in {\mathbb Z}^d$, there exists a finite subset
$\bar{F}_i $ of ${\mathbb Z}^d$ and $\bar{n}_i \ge 1$ such that if
$$ Y'_{n,r} (j) = Y_{n,r} (j) \mbox{ for all } j \in \bar{F}_i, 1 \le n \le
\bar{n}_i, 1 \le r \le \bar{n}_i$$
then
$$ f ( (Y_{n,r} (j) ), j \in {\mathbb Z}^d, (n,r) \in {\mathbb N}^2)(i) = f (
(Y'_{n,r} (j) ), j \in {\mathbb Z}^d, (n,r) \in {\mathbb N}^2)(i).$$
\end{itemize}
\end{defin}
We will first prove the existence of a finitary coding in this
extended definition from the sequence $(U_n(i), i \in {\mathbb Z}^d , n \in {\mathbb N}
)$ to the configuration $(\xi_0 (i), i \in {\mathbb Z}^d)$.
\begin{prop} \label{prop:uni-fini} Under the conditions of Theorem
\ref{theo:nstop}, there exists a finitary coding from a family of
i.i.d. uniform random variables $(U_n (i) ,i \in {\mathbb Z}^d, n \in {\mathbb N} )$
to the configuration $(\xi_0 (i), i \in {\mathbb Z}^d)$.
\end{prop}
\proof Our goal is to choose the color of site $i$ at time 0 using
Algorithms 3 and 4. For notational convenience, we will represent the
sequence $U_n(j) $ as
$$U_n^v (j), \, v \in {\cal V} \, ,$$
where ${\cal V} = \{I, K, W\}$.
The first sequence of uniform variables $U_n^I (j) $ is used for the
choice of successive points $I$ at Step 5 of Algorithm 3. The second
sequence $U_n^K(j)$ will be used to construct the sequence of ranges
$K$ of Step 6 of Algorithm 4. Finally, the third sequence $U_n^W(j)$ will be
used to construct the corresponding colors $W$ at Steps 7 and 9 of the forward
procedure described in Algorithm 4.
The fact that $N_{STOP } $ is finite almost surely implies that the
backward Algorithm 1 must run only a finite number of steps for
any fixed $i$. Thus only a finite set of sites $\bar{F}_i $ is
involved in this procedure. This also implies that the number of
uniform random variables we must use is finite, thus $\bar{n}_i$ is
finite. The definition of the function $f$ is explicitly given by
Algorithms 3 and 4.
This concludes the proof of Proposition \ref{prop:uni-fini}.
Proposition \ref{prop:uni-fini} can be rewritten using piles of piles
of Bernoulli random variables instead of piles of uniform random variables.
\begin{prop} \label{prop:bern-fini} Under the conditions of Theorem
\ref{theo:nstop}, there exists a finitary coding from a family of
i.i.d. fair Bernoulli random variables $(Y_{n,r} (i) ,i \in {\mathbb Z}^d,
(n,r) \in {\mathbb N}^2 )$ to the configuration $(\xi_0 (i), i \in {\mathbb Z}^d)$.
\end{prop}
\proof As before, for notational convenience, we will represent the
sequence $Y_{n,r}(j) $ as $$Y_{n,r}^v (j), \, v \in {\cal V} .$$
All we need to prove is that the successive uniform random variables
$\{U^v_n(j), j \in \bar{F}_i, 1 \le n \le \bar{n}_i\}$ used in Proposition
\ref{prop:uni-fini} can be generated
using a finite number of Bernoulli random variables from the pile
$Y_{n,r}^v (j), r \in {\mathbb N}$.
In all the steps, the uniform random variables were used to generate
random variables taking values in a countable set. Let us identify this
countable set with ${\mathbb N}$. In the successive steps of Algorithms 3 and
4, this selection could generate either $K$, $I$ or $W$. The selection
is made by defining in each case a suitable partition of $[0, 1 ] =
\cup_{l=1}^{\infty} [\theta(l), \theta(l+1))$ and then choosing the
index $l$ whenever $U^v_n(j) \in [\theta(l), \theta(l+1))$. It is easy to see
that $U_n^v(j)$ has the same law as $\sum_{r=1}^{\infty} 2^{-r}
Y_{n,r}^v (j)$.
We borrow from Knuth and Yao (1976) the following algorithm to
generate the discrete random variables $I$, $K$ and $W$ which appear
in the perfect sampling procedure (see also Harvey {\it et al.}, 2005).
For any $m \geq 1,$ we define
$$ S_m (j,n,v) = \sum_{r=1}^m 2^{-r} Y_{n,r}^v (j) .$$
Now, put
$$ J(S_m(j,n,v)) = \sup \{ k \ge 1 : \theta(k) \le S_m(j,n,v) \} ,$$
and finally define
\begin{equation}
\label{eq:N(v,n,j)}
N_n^v(j) = \inf \{ m \ge 1 : J(S_m(j,n,v)) = J(S_{m'}(j,n,v) )\,
\forall m' \geq m \} .
\end{equation}
Notice that $ N_n^v(j)$ is a finite stopping time with respect to the
$\sigma$-algebra generated by $\{Y_{n,r}^v(j), r \ge 1 \}$.
Therefore, the total number of piles used at site $j$, given by $ N(j)= \sum_{v \in {\cal V}}
\sum_{n=1}^{\bar{n}_j}N_n^v(j), $ where $\bar{n}_j$ is defined in
Proposition \ref{prop:uni-fini}, is finite, and the
event $[N(j) = \ell]$ is measurable with respect to the
$\sigma$-algebra generated by $\{Y_{n,r}^v(j), v \in {\cal V}, 1 \le n
\le \ell, 1 \le r \le \ell\}$. Since the set of sites used is $\bar{F}_i$
(the same one as in Proposition \ref{prop:uni-fini}), the proof is
complete.
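To make the bit-consumption mechanism concrete, the following Python sketch (illustrative only, with zero-indexed cells and a finite partition; not part of the formal construction) samples a discrete random variable from fair Bernoulli bits and reports the number of bits consumed, playing the role of the stopping time $N_n^v(j)$:

```python
import random

def sample_discrete(theta):
    """Sample an index l with P(l) = theta[l+1] - theta[l] from fair bits.
    S_m is the dyadic approximation sum_{r<=m} 2^{-r} Y_r of a uniform
    variable; we stop at the first m for which the interval
    [S_m, S_m + 2^{-m}) lies inside a single cell [theta[l], theta[l+1)),
    so that the selected cell J(S_m) can no longer change.  Returns
    (l, m), where m plays the role of the stopping time N in the text."""
    s, width = 0.0, 1.0
    m = 0
    while True:
        m += 1
        width /= 2.0
        s += width * random.getrandbits(1)
        # J(S_m): the index of the cell containing the current approximation
        l = max(i for i in range(len(theta) - 1) if theta[i] <= s)
        if s + width <= theta[l + 1]:
            return l, m
```

With a dyadic partition such as $[0,1/2) \cup [1/2,1)$ a single bit always suffices; for general cut points the number of bits consumed is finite almost surely, as guaranteed by the stopping-time argument above.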
We are finally ready to prove Theorem \ref{theo:5}. The difficulty
is to show that the construction achieved in Proposition
\ref{prop:bern-fini} using the random sized piles $(Y_{n,r}^v (i) ,
v\in {\cal V} , i \in {\mathbb Z}^d, n \le N(i), r \le N(i) )$ can actually be
done using finite piles of fair Bernoulli random
variables. Specifically, for $i \in {\mathbb Z}^d ,$ let us call
$$Z(i) = (Y_{n,r}^v (i) , v\in {\cal V}, n \in \{0,\ldots, M\}, r \in
\{0,\ldots, M\} ),$$
where $M$ is a suitable fixed positive integer.
The proof that there exists a finitary coding from the family of
finite-valued i.i.d. random variables $\{Z(i), i \in {\mathbb Z}^d\}$ to
$\{\xi_0(i), i \in {\mathbb Z}^d\}$ follows from the construction in Van den
Berg and Steif (1999) if we can take
$$ M > \sup_{j \in {\mathbb Z}^d} {\mathbb E}[N(j)]. $$
It follows from the definition of $N_n^v(j)$ given by
(\ref{eq:N(v,n,j)}) that
\begin{equation}
\label{eq:tail1}
{\mathbb P}[ N_n^v(j) > k] \le {\mathbb P} \left( \cup_{i=1}^{m_k} \left[ \theta(i)
- \frac{1}{2^k} < S_k (j,n,v) \le \theta(i) \right] \right) + {\mathbb P} \left( 1
- \frac{1}{2^{k}} < S_k(j,n,v) \le 1 \right)
\end{equation}
where
$$m_k = \sup\{ i \ge 1; \theta(i) < 1 - \frac{1}{2^k} \}.$$
In the above formula, the partition of $[0, 1 ] =
\cup_{i=1}^{\infty} [\theta(i), \theta(i+1))$ was used to
simulate the countable-valued random variable at stake at that level
(either $I$, $K$ or $W$).
Since $ S_k (j,n,v)$ converges in law to a uniform random variable as
$k \rightarrow \infty$, the right hand side of (\ref{eq:tail1}) is
bounded above by $$\frac{m_k+1}{2^{k-1}}.$$
For piles choosing colors, the result is obvious since the set of
possible colors is finite and therefore all the corresponding $m_k =
|A|$ for all $k \ge |A|$.
For piles choosing sites in the backward black and white sketch,
the result follows from inequality (\ref{eq:gron}) as in the conclusion of the proof of Theorem \ref{theo:1}.
Finally, for piles choosing ranges, the result follows from the following
two lemmas and Wald's inequality observing that $ \bar{n}_j \le N_{STOP}$.
\begin{lem} $\sup_j {\mathbb E}[N_n^K(j)] < \infty.$ \end{lem}
\proof It follows from Knuth and Yao (1976) that
\begin{equation}
\label{eq:entropy}
{\mathbb E}[N_n^K(j)] \le H(\{ \lambda_j(k) \}_{k \ge -1}) + 2,
\end{equation}
where $H(\{ \lambda_j(k)\}_{k \ge -1 })$ is the entropy of the discrete
distribution $\{ \lambda_j(k)\}_{k \ge -1 }$ of the random variable $K$.
On the other hand, the condition $\sum_k |V_j(k)| \lambda_j(k) < 1$ in
Theorem \ref{theo:nstop} implies that $\sum_k k \lambda_j(k) = m_j < 1$
for all $j \in {\mathbb Z}^d$. We want to compare $\{ \lambda_j(k)\}_{k \ge -1 }$
to a geometric distribution. For that sake, we introduce a distribution $\nu_j (k)
$ on $\{ 1, 2, \ldots \} $ by $ \nu_j (k) = \lambda_j (k-2) .$ Then
$$ \sum_{k \geq 1 } k \nu_j (k) = m_j + 2 - \lambda_j (-1) =: \tilde m_j.$$
By a direct comparison with the geometric
distribution of mean $\tilde m_j$ we have that
\begin{equation}
\label{eq:geom}
H(\{ \lambda_j(k)\}_{k \ge -1}) \le - \log(p_j) - \log (1-p_j) (\tilde m_j-1) < \infty,
\end{equation}
where $p_j = 1/(\tilde m_j) $.
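The bound (\ref{eq:geom}) rests on the standard fact that, among distributions on $\{1,2,\ldots\}$ with a fixed mean, the geometric distribution maximizes entropy; the following numerical check (ours, purely illustrative) confirms the comparison:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in probs if q > 0)

def geometric_entropy_bound(mean):
    """Right-hand side of the bound: -log(p) - (mean - 1) * log(1 - p)
    with p = 1/mean; this equals the entropy of the geometric
    distribution on {1, 2, ...} with the given mean."""
    p = 1.0 / mean
    return -math.log(p) - (mean - 1.0) * math.log(1.0 - p)
```

For instance, the distribution $(1/2, 1/4, 1/4)$ on $\{1,2,3\}$ has mean $7/4$, and its entropy stays below the geometric bound with the same mean.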
\begin{lem} ${\mathbb E}[N_{STOP}] < \infty$. \end{lem}
\proof Without loss of generality we can consider $F = \{0\}$ to start
Algorithm 3. Define
$$ L_n := | C_n|,$$
the cardinality of the set $C_n$ after $n$ steps of Algorithm 3. Let
$(K^{i}_n)_{n \geq 0, i \in {\mathbb Z}^d} $ be the i.i.d. marks defined in
Section \ref{sec:bw}, taking values in $\{ -1 , 0 , 1, 2, \ldots
\} $ such that
$$ P( K^i_n = k ) = \lambda_i (k) .$$
Define $X^i_n = |V_i(K_n^i) | - 1$. Note that by condition
(\ref{eq:condition2}),
$$\sup_{i \in {\mathbb Z}^d } E(X^i _1) \le ({\bar{\lambda}} -1) <
0,$$
where ${\bar{\lambda}} = \sup_{i \in {\mathbb Z}^d} \sum_{k \ge 0} \, |V_i (k)|
\lambda_i (k) $.
Consider the sequence $I_n $ which gives the site of the particle
chosen at the $n$th step of Algorithm 3. Put
$$S_n \,=\, \sum_{k= 0}^n X_k^{I_k}.$$
Note that by construction, $S_n + n (1 - {\bar{\lambda}}) $ is a super-martingale.
Then a very rough upper bound is
$$ L_n \le 1 + S_n \mbox{ as long as } n \le V_{STOP} ,$$
where $V_{STOP} $ is defined as
$$ V_{STOP} = \min \{ k : S_k = -1 \} .$$
By construction
$$ N_{STOP} \le V_{STOP}. $$
Fix a truncation level $N.$ Then by the stopping rule for super-martingales, we have that
$$ E( S_{V_{STOP} \wedge N } ) + (1 - {\bar{\lambda}}) E( V_{STOP} \wedge N ) \le 0 .$$
But notice that
$$ E( S_{V_{STOP} \wedge N }) = - 1 \cdot P( V_{STOP} \le N ) + E (S_N ; V_{STOP } > N ) .$$
On $ V_{STOP } > N , $ $S_N \geq 0,$ hence we have that $ E( S_{V_{STOP} \wedge N }) \geq - P( V_{STOP} \le N) .$
We conclude that
\begin{eqnarray*}
E( V_{STOP} \wedge N ) & \le & \frac{1}{(1 - {\bar{\lambda}}) } P( V_{STOP} \le N ) .
\end{eqnarray*}
Now, letting $N \to \infty ,$ we get
$$ E( V_{STOP}) \le \frac{1}{(1-{\bar{\lambda}})} ,$$
and therefore
$$ E( N_{STOP}) \le \frac{1}{(1-{\bar{\lambda}})}. $$
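As a numerical sanity check (ours, not part of the proof), in the toy case of a $\pm 1$ random walk with up-probability $p < 1/2$ one has $\bar{\lambda} = 2p$, and the hitting time of $-1$ has mean exactly $1/(1-2p)$, so the bound above is attained:

```python
import random

def hitting_time(p, rng, cap=100_000):
    """First n with S_n = -1 for a walk with steps +1 (prob. p) and -1
    (prob. 1-p); the cap keeps the loop finite in pathological runs."""
    s = 0
    for n in range(1, cap + 1):
        s += 1 if rng.random() < p else -1
        if s == -1:
            return n
    return cap

rng = random.Random(0)
p = 0.3          # drift 2p - 1 = -0.4, i.e. 1 - lambda_bar = 0.4
mean_v = sum(hitting_time(p, rng) for _ in range(20_000)) / 20_000
# mean_v is close to 1/(1 - 2p) = 2.5
```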
\vspace{.5cm}
This concludes the proof.
\section{Proof of Corollary \ref{cor:HS}}
The strategy of the proof is the following. We will consider a
multicolor system having the law of the Markov random field as
invariant measure. Then the corollary follows from the second
assertion of Theorem \ref{theo:5}.
A standard way to obtain a system having the law $\mu$ as invariant
measure is to ask for reversibility. Usually, in the statistics
literature such dynamic is known as Gibbs sampler. In the statistical
physics literature, where it first appeared, it is known as Glauber
dynamics. We will use a particular case of the Glauber dynamics called
the heat bath algorithm. The idea is that at each site there is a
Poisson clock which rings independently of all other sites. Each
time its clock rings the color of the site is updated according to
the specification of the Markov random field ${\cal Q}$.
We are only considering the Markov spatially homogeneous
case. This means that the rates satisfy $c_i ( a , T_i \xi ) = c_0 ( a, \xi ),$
where $(T_i \xi ) (j) = \xi ( j-i ),$ and $c_0(a, \xi) $ only depends on
$\xi (\partial 0 ) $. Moreover, in this case $ M = M_i = 1$.
With the heat bath algorithm, the rates are defined as
$$c_0(a, \xi) = {\cal Q}(X(0)=a|
X(\partial 0) = \xi(\partial 0)) $$ where ${\cal Q}$ is the {\it
specification} of the random field $X$.
Since we are considering the homogeneous case, we will drop the
subscript from the notation. Therefore, we have
\begin{eqnarray*}
\alpha(-1) &=& \sum_{a \in A} \min \left( \inf_{\zeta \in A^{{\mathbb Z}^d},
\zeta(i ) \neq a} c_0(a, \zeta)
, \, 1 - \sup_{ \zeta \in A^{{\mathbb Z}^d}, \zeta(i )= a} \sum_{b \neq a}
c_0 (b, \zeta) \right) \\
&=& \sum_{a \in A} \min \left( \inf_{\zeta \in A^{{\mathbb Z}^d},
\zeta(i ) \neq a} c_0(a, \zeta)
, \, \inf_{ \zeta \in A^{{\mathbb Z}^d}, \zeta(i )= a} c_0 (a, \zeta)
\right) \\
& =& \sum_{a \in A} \min_{\zeta(\partial 0) \in A^{\partial 0}} {\cal Q}(X(0)=a|
X(\partial 0) = \zeta(\partial 0)). \\
\end{eqnarray*}
Also,
\begin{eqnarray*}
\alpha (0) &=& \min_{w \in A } \left( \sum_{a \in A, a \neq w}
\inf_{\zeta: \zeta(0) = w} c_0( a,\zeta) + 1 - \sup_{\zeta:
\zeta(0) = w} \sum_{ b \neq w} c_0 (b, \zeta)\right) \\
&=& \min_{w \in A } \sum_{a \in A} \inf_{\zeta: \zeta(0) = w} c_0( a,\zeta) \\
&=& \min_{w \in A } \sum_{a \in A} \inf_{\zeta;\zeta(0) = w} {\cal Q}(X(0)=a|
X(\partial 0) = \zeta(\partial 0)) \\
&=& \sum_{a \in A} \min_{\zeta(\partial 0) \in A^{\partial 0}} {\cal Q}(X(0)=a|
X(\partial 0) = \zeta(\partial 0)).
\end{eqnarray*}
The last equality follows from the fact that $ \inf_{\zeta;\zeta(0) =
w} {\cal Q}(X(0)=a| X(\partial 0) = \zeta(\partial 0))$ only
depends on the value of the random field at $\partial 0$.
Finally, for all $k \ge 1$
\begin{eqnarray*}
\alpha (k) &=& \min_{w \in A^{V_0 (k) }} \left( \sum_{a \in A, a \neq w(0)}
\inf_{\zeta: \zeta(V_0 (k)) = w} c_0( a,\zeta) + 1 - \sup_{\zeta:
\zeta(V_0 (k)) = w} \sum_{ b \neq w(0)} c_0 (b, \zeta)\right) \\
&=& \min_{w \in A^{V_0 (k) }} \left( \sum_{a \in A}
\inf_{\zeta: \zeta(V_0 (k)) = w} c_0( a,\zeta) \right) \\
&=& \min_{w \in A^{\partial 0 }} \left( \sum_{a \in A}
{\cal Q}(X(0)=a|
X(\partial 0) = w) \right) = 1.
\end{eqnarray*}
Observe that $\alpha(0) = \alpha(-1)$. Therefore, condition
(\ref{eq:condition1}) reduces to
$$\alpha(-1) > \frac{2d}{2d +1 }.$$
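As a concrete (hypothetical) example, consider heat-bath Glauber dynamics for the two-color Ising model on ${\mathbb Z}^d$ at inverse temperature $\beta$: the conditional probability of either spin is minimized when all $2d$ neighbors disagree, so $\alpha(-1)$ and the condition above can be evaluated in closed form:

```python
import math

def alpha_minus_one_ising(beta, d):
    """alpha(-1) for heat-bath dynamics of the Ising model on Z^d:
    each of the two colors contributes its worst-case conditional
    probability, attained when all 2d neighbors carry the opposite spin."""
    worst = math.exp(-2 * d * beta) / (math.exp(-2 * d * beta) + math.exp(2 * d * beta))
    return 2.0 * worst

def high_temperature_condition(beta, d):
    """Check alpha(-1) > 2d / (2d + 1)."""
    return alpha_minus_one_ising(beta, d) > 2 * d / (2 * d + 1)
```

For $d = 1$ the condition holds if and only if $e^{4\beta} < 2$, i.e. $\beta < (\log 2)/4 \approx 0.173$.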
\section*{Acknowledgments}
We thank Pablo Ferrari, Alexsandro Gallo, Yoshiharu Kohayakawa, Servet
Martinez, Enza Orlandi and Ron Peled for many comments and
bibliographic suggestions. We also thank the anonymous Associate
Editor who pointed out an incomplete definition in an earlier version of
this manuscript.
This work is part of PRONEX/FAPESP's project \emph{Stochastic
behavior, critical phenomena and rhythmic pattern identification in
natural languages} (grant number 03/09930-9), CNRS-FAPESP project
\emph{Probabilistic phonology of rhythm} and CNPq's projects
\emph{Stochastic modeling of speech} (grant number 475177/2004-5) and
\emph{Rhythmic patterns, prosodic domains and probabilistic modeling
in Portuguese Corpora} (grant number 485999/2007-2). AG and NLG are
partially supported by a CNPq fellowship (grants 308656/2005-9 and
301530/2007-6, respectively).
\section{Introduction}
\vspace{-0.1in}
\label{sec:intro}
In this paper, we focus on developing a deep learning-based speech enhancement model for real-world applications that meets the following criteria:
1. a small and fast model that can reduce the single-frame real-time factor (RTF) as much as possible while keeping competitive performance against state-of-the-art deep learning networks, and 2. a model that can perform both denoising and dereverberation simultaneously.
To address the first issue, we aim to improve a popular neural architecture, U-Net \cite{ronneberger2015u}, which has proven its superior performance on speech enhancement tasks \cite{choi2019phase,isik2020poconet, hu2020dccrn}.
Previous approaches that use U-Net for source separation apply the convolution kernel not only along the frequency-axis but also along the time-axis.
This non-causal nature of U-Net increases computational complexity because additional computations are required on past and future frames to infer the current frame.
Therefore, it is not suitable for online inference scenarios where the current frame needs to be processed in real-time.
In addition, the time-axis kernel makes the network computationally inefficient because there exists redundant computation between adjacent frames in both the encoding and decoding path of U-Net.
To tackle this problem, we propose a new neural architecture, Tiny Recurrent U-Net (TRU-Net), which is suitable for online speech enhancement.
The architecture is designed to enable efficient decoupling of the frequency-axis and time-axis computations, which makes the network fast enough to process a single frame in real-time. The number of parameters of the proposed network is only 0.38 million (M), which is small enough to deploy the model not only on a laptop but also on a mobile device and even on an embedded device combined with a quantization technique \cite{integeronlyquantization}.
The details of TRU-Net are described further in section \ref{sec:trunet}.
Next, to suppress the noise and reverberation simultaneously, we propose a phase-aware $\beta$-sigmoid mask (PHM).
The proposed PHM is inspired by \cite{wang2019deep}, in which the authors propose to estimate phase by reusing an estimated magnitude mask value from a trigonometric perspective.
The major difference between PHM and the approach in \cite{wang2019deep} is that PHM is designed to respect the triangular relationship between the mixture, the target source, and the remaining part, hence the sum of the estimated target source and the remaining part is always equal to the mixture.
We extend this property into a quadrilateral by producing two different PHMs simultaneously, which allows us to effectively deal with both denoising and dereverberation.
We will discuss PHM in further detail in section \ref{sec:phm}.
\vspace{-0.1in}
\section{Tiny Recurrent U-Net}
\label{sec:trunet}
\vspace{-0.2in}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{figures/TRU-Net.png}
\vspace{-0.2in}
\caption{The network architecture of TRU-Net}
\label{fig:trunet}
\end{figure}
\vspace{-0.25in}
\subsection{PCEN feature as an input}
\label{ssec:input}
A spectrogram is perhaps the most popular input feature for many speech enhancement models.
Per-channel energy normalization (PCEN) \cite{wang2017trainable} combines dynamic range compression and automatic gain control, which reduces the variance of foreground loudness and suppresses background noise when applied to a spectrogram \cite{lostanlen2018per}.
PCEN is also suitable for online inference scenarios as it includes a temporal integration step, which is essentially a first-order infinite impulse response filter that depends solely on a previous input frame.
In this work, we employ the trainable version of PCEN.
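A minimal NumPy sketch of the PCEN computation (the constants below are common defaults from the literature, not the trained values used in this work); note that the smoother $M$ only needs the previous frame, which is what makes the feature streamable:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a magnitude spectrogram E of
    shape (frames, bins).  The smoother M is a first-order IIR filter
    along time; each output frame depends only on the current frame
    and the previous smoother state."""
    M = np.zeros_like(E)
    M[0] = E[0]
    for t in range(1, len(E)):
        M[t] = (1.0 - s) * M[t - 1] + s * E[t]      # temporal integration
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r
```

In the trainable variant, $s$, $\alpha$, $\delta$, and $r$ become learnable parameters optimized jointly with the network.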
\vspace{-0.1in}
\subsection{Network architecture}
\label{ssec:architecture}
TRU-Net is based on the U-Net architecture, except that the convolution kernel does not span the time-axis. Therefore, it can be considered a frequency-axis U-Net with 1D Convolutional Neural Networks (CNNs) and recurrent neural networks in the bottleneck layer.
The encoder is composed of 1D Convolutional Neural Network (1D-CNN) blocks and a Frequency-axis Gated Recurrent Unit (FGRU) block.
Each 1D-CNN block is a sequence of pointwise convolution and depthwise convolution similar to \cite{howard2017mobilenets}, except the first layer, which uses the standard convolution operation without a preceding pointwise convolution.
To keep the network small, we use six 1D-CNN blocks, which downsample the frequency-axis size from 256 to 16 using strided convolutions.
This results in a small receptive field (1,750Hz) which may be detrimental to the network performance.
To increase the receptive field, we use a bi-directional GRU layer \cite{cho-etal-2014-learning} along the frequency-axis instead of stacking more 1D-CNN blocks.
That is, the sequence of 16 vectors from 1D-CNN blocks is passed into the bi-directional GRU to increase the receptive field and share the information along the frequency-axis. We call this frequency-axis bi-directional GRU layer an FGRU layer.
A pointwise convolution, batch normalization (BN), and rectified linear unit (ReLU) are used after the FGRU layer, composing an FGRU block.
We used 64 hidden dimensions for each forward and backward FGRU cell.
The decoder is composed of a Time-axis Gated Recurrent Unit (TGRU) block and 1D Transposed Convolutional Neural Network (1D-TrCNN) blocks.
The output of the encoder is passed into a uni-directional GRU layer to aggregate the information along the time-axis. We call this GRU layer a TGRU layer.
While one can apply a different GRU cell to each frequency-axis index of the encoder output, we share the same cell across all frequency-axis indices to reduce the number of parameters.
A pointwise convolution, BN, and ReLU follow the TGRU layer, composing a TGRU block.
We used 128 hidden dimensions for the TGRU cell.
Finally, 1D-TrCNN blocks are used to upsample the output from the TGRU block to the original spectrogram size.
The 1D-TrCNN block takes two inputs - 1. a previous layer output, 2. a skipped tensor from the encoder at the same hierarchy - and upsamples them as follows.
First, the two inputs are concatenated and projected to a smaller channel size (192 $\rightarrow$ 64) using a pointwise convolution.
Then, 1D transposed convolution is used to upsample the compressed information.
This procedure reduces both the number of parameters and the computation compared to the usual U-Net implementation, where the two inputs are concatenated and immediately upsampled using the transposed convolution operation. Note that we did not use depthwise convolution in the 1D-TrCNN block, as we empirically observed that it degrades the performance significantly when used in the decoding stage.
Every convolution operation used in the encoder and decoder is followed by BN and ReLU.
We denote the convolution configurations as follows, $l$-th: ($\kappa$, $s$, $c$)\,, where $l$, $\kappa$, $s$, $c$ denotes layer index, kernel size, strides, and output channels, respectively.
The detailed configurations of the encoder and decoder are as follows,
EncoderConfig = \{1-th: (5,2,64), 2-th: (3,1,128), 3-th: (5,2,128), 4-th: (3,1,128), 5-th: (5,2,128), 6-th: (3,2,128)\},
DecoderConfig = \{1-th: (3,2,64), 2-th: (5,2,64), 3-th: (3,1,64), 4-th: (5,2,64), 5-th: (3,1,64), 6-th: (5,2,10)\}.
Note that the pointwise convolution operations share the same output channel configuration with the exception that $\kappa$ and $s$ are both 1. The overview of TRU-Net and the number of parameters used for 1D-CNN blocks, FGRU block, TGRU block, and 1D-TrCNN blocks are shown in Fig. \ref{fig:trunet}.
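The saving from the pointwise-plus-depthwise factorization can be made explicit by counting weights (biases ignored; a single input channel is assumed for the first, standard convolution layer, which is our illustrative assumption rather than a statement from the text):

```python
def separable_params(c_in, c_out, k):
    """Pointwise 1x1 convolution (c_in * c_out weights) followed by a
    depthwise convolution with kernel size k (c_out * k weights)."""
    return c_in * c_out + c_out * k

def standard_params(c_in, c_out, k):
    """A plain 1D convolution with kernel size k."""
    return c_in * c_out * k

# (c_in, c_out, kernel) taken from EncoderConfig; layer 1 is standard.
encoder = [(1, 64, 5), (64, 128, 3), (128, 128, 5),
           (128, 128, 3), (128, 128, 5), (128, 128, 3)]
total = standard_params(*encoder[0]) + sum(separable_params(*c) for c in encoder[1:])
```

For a $128 \rightarrow 128$ layer with kernel size 5, the separable pair uses roughly a fifth of the weights of the equivalent standard convolution.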
\section{Single-stage Denoising and Dereverberation}
\label{sec:phm}
A noisy-reverberant mixture signal $\bm{x}$ is commonly modeled as the sum of additive noise $\bm{y}^{(n)}$ and reverberant source $\tilde{\bm{y}}$, where $\tilde{\bm{y}}$ is a result of convolution between room impulse response (RIR) $\bm{h}$ and dry source $\bm{y}$ as follows,
\begin{equation}
\bm{x} = \tilde{\bm{y}} + \bm{y}^{(n)} = \bm{h} \circledast \bm{y} + \bm{y}^{(n)} .
\end{equation}
More concretely, we can break down $\bm{h}$ into two parts. First, the direct path part $\bm{h}^{(d)}$, which does not include the reflection path, and second, the rest of the part $\bm{h}^{(r)}$ including all the reflection paths as follows,
\begin{equation}
\bm{x} = \bm{h}^{(d)} \circledast \bm{y} + \bm{h}^{(r)} \circledast \bm{y} + \bm{y}^{(n)} = \bm{y}^{(d)} + \bm{y}^{(r)} + \bm{y}^{(n)},
\end{equation}
where $\bm{y}^{(d)}$ and $\bm{y}^{(r)}$ denotes a direct path source and reverberation, respectively.
In this setting, our goal is to separate $\bm{x}$ into three elements $\bm{y}^{(d)}$, $\bm{y}^{(r)}$, and $\bm{y}^{(n)}$.
Each of the corresponding time-frequency $(t,f)$ representations computed by short-time Fourier transform (STFT) is denoted as $X_{t,f} \in \mathbb{C}$, $Y^{(d)}_{t,f}\in \mathbb{C}$, $Y^{(r)}_{t,f} \in \mathbb{C}$, $Y^{(n)}_{t,f} \in \mathbb{C}$, and the estimated values will be denoted by the hat operator $\hat{\, \cdot \,}$.
\vspace{-0.1in}
\subsection{Phase-aware \texorpdfstring{$\beta$}{beta}-sigmoid mask}
The proposed phase-aware $\beta$-sigmoid mask (PHM) is a complex-valued mask which is capable of systematically restricting the sum of the estimated complex values to be exactly the value of the mixture, $X_{t,f} = Y^{(k)}_{t,f} + Y^{(\lnot k)}_{t,f}$.
The PHM separates the mixture $X_{t,f}$ in the STFT domain into two parts in a \textit{one-vs-rest} manner, that is, the signal $Y^{(k)}_{t,f}$ and the sum of the rest of the signals $Y^{(\lnot k)}_{t,f} = X_{t,f}-Y^{(k)}_{t,f}$, where index $k$ could be one of the direct path source ($d$), reverberation ($r$), and noise ($n$) in our setting, $k \in \{d, r, n\}$.
The complex-valued mask $M^{(k)}_{t,f} \in \mathbb{C}$ estimates the magnitude and phase value of the source of interest $k$.
Computing PHM requires two steps. First, the network outputs the magnitude part of two masks $\lvert M^{(k)}_{t,f} \rvert$ and $\lvert M^{(\lnot k)}_{t,f} \rvert$ with sigmoid function $\sigma^{(k)}(\bm{z}_{t,f})$ multiplied by coefficient $\beta_{t,f}$ as follows, $\lvert M^{(k)}_{t,f} \rvert = \beta_{t,f} \cdot \sigma^{(k)}(\bm{z}_{t,f}) = \beta_{t,f} \cdot (1+e^{-(z^{(k)}_{t,f} - z^{(\lnot k)}_{t,f})})^{-1}$,
where $z^{(k)}_{t,f}$ is the output located at $(t,f)$ from the last layer of neural-network function $\psi^{(k)}(\phi)$, and $\phi$ is a function composed of network layers before the last layer.
$\lvert M^{(k)}_{t,f} \rvert$ serves as a magnitude mask to estimate source $k$ and its value ranges from 0 to $\beta_{t,f}$.
The role of $\beta_{t,f}$ is to design a mask that is close to optimal values with a flexible magnitude range so that the values are not bounded between 0 and 1, unlike the commonly used sigmoid mask.
In addition, because the sum of the complex valued masks $M^{(k)}_{t,f}$ and $M^{(\lnot k)}_{t,f}$ must compose a triangle, it is reasonable to design a mask that satisfies the triangle inequalities, that is, $\lvert M^{(k)}_{t,f} \rvert + \lvert M^{(\lnot k)}_{t,f} \rvert$ $\geq 1$ and $\abs{ \lvert M^{(k)}_{t,f} \rvert - \lvert M^{(\lnot k)}_{t,f} \rvert} \leq 1$.
To address the first inequality we designed the network to output $\beta_{t,f}$ from the last layer with a softplus activation function as follows, $\beta_{t,f} = 1+ \texttt{softplus}((\psi_{\beta}(\phi))_{t,f})$, where $\psi_{\beta}$ denotes an additional network layer to output $\beta_{t,f}$. The second inequality can be satisfied by clipping the upper bound of the $\beta_{t,f}$ by $1 \mathbin{/} \lvert \, \sigma^{(k)}(\bm{z}_{t,f}) - \sigma^{(\lnot k)}(\bm{z}_{t,f}) \rvert$.
Once the magnitude masks are decided, we can construct a phase mask $e^{j\theta_{t,f}^{(k)}}$.
Given the magnitudes as three sides of a triangle, we can compute the cosine of the absolute phase difference $\Delta \theta_{t,f}^{(k)}$ between the mixture and source $k$ as follows,
$\cos(\Delta \theta_{t,f}^{(k)}) = \nicefrac{ (1+{\lvert M_{t,f}^{(k)} \rvert}^2 - {\lvert M_{t,f}^{(\lnot k)}\rvert}^2 ) } {(2 \, \lvert M_{t,f}^{(k)}\rvert )}$.
Then, the rotational direction $\xi_{t,f} \in \{1, -1\}$ (clockwise or counterclockwise) for phase correction is estimated for the phase mask as follows, $e^{j\theta_{t,f}^{(k)}} = \cos(\Delta \theta_{t,f}^{(k)}) + j \xi_{t,f}\sin(\Delta \theta_{t,f}^{(k)})$.
A two-class straight-through Gumbel-softmax estimator is used to estimate $\xi_{t,f}$ \cite{DBLP:conf/iclr/JangGP17}.
$M^{(k)}_{t,f}$ is defined as follows, $ M^{(k)}_{t,f} = \lvert M^{(k)}_{t,f} \rvert \cdot e^{j\theta_{t,f}^{(k)}}$.
Finally, $M^{(k)}_{t,f}$ is multiplied with $X_{t,f}$ to estimate the source $k$ as follows, $\hat{Y}^{(k)}_{t,f} = M^{(k)}_{t,f} \cdot X_{t,f}$.
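The construction can be summarized for a single time-frequency bin by the following NumPy sketch (the rotation direction $\xi$ is supplied as an input here instead of being sampled with the Gumbel-softmax estimator, and variable names are ours):

```python
import numpy as np

def phm_pair(z_k, z_not_k, beta_raw, xi):
    """Phase-aware beta-sigmoid mask for one time-frequency bin.
    z_k, z_not_k: network outputs; beta_raw: pre-activation for beta;
    xi in {+1, -1}: rotation direction.  Returns complex masks
    (M_k, M_not_k) satisfying M_k + M_not_k = 1, so the separated
    components always sum back to the mixture."""
    sig_k = 1.0 / (1.0 + np.exp(-(z_k - z_not_k)))
    sig_nk = 1.0 - sig_k
    beta = 1.0 + np.log1p(np.exp(beta_raw))      # softplus, so beta >= 1
    diff = abs(sig_k - sig_nk)
    if diff > 0:                                 # enforce | |M_k|-|M_nk| | <= 1
        beta = min(beta, 1.0 / diff)
    m_k, m_nk = beta * sig_k, beta * sig_nk
    # law of cosines on the triangle with sides (1, m_k, m_nk)
    cos_k = np.clip((1.0 + m_k ** 2 - m_nk ** 2) / (2.0 * m_k), -1.0, 1.0)
    sin_k = np.sqrt(1.0 - cos_k ** 2)
    M_k = m_k * (cos_k + 1j * xi * sin_k)
    return M_k, 1.0 - M_k
```

Because the two returned masks sum to one, the reconstruction $\hat{Y}^{(k)}_{t,f} + \hat{Y}^{(\lnot k)}_{t,f} = X_{t,f}$ holds by construction.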
\subsection{Masking from the perspective of a quadrilateral}
\begin{figure}[H]
\vspace{-0.1in}
\centering
\includegraphics[scale=0.4]{figures/quadrangle.png}
\vspace{-0.1in}
\caption{The illustration of PHM masking method on a quadrilateral}
\label{fig:quadrilateral}
\end{figure}
\vspace{-0.1in}
Because we wish to extract both the direct and the reverberant source, two pairs of PHMs are used.
The first pair of masks, $M^{(d)}_{t,f}$ and $M^{(\lnot d)}_{t,f}$, separates the mixture into the direct source and the rest of the components, respectively.
The second pair of masks, $M^{(n)}_{t,f}$ and $M^{(\lnot n)}_{t,f}$, separates the mixture into the noise and the remaining components (the direct and reverberant source), respectively.
Since each PHM pair guarantees that the mixture and the separated components form a triangle in the complex STFT domain, the outcome of the separation can be viewed from the perspective of a quadrilateral, as shown in Fig.~\ref{fig:quadrilateral}.
In this setting, because three sides and two adjacent angles are already determined by the two pairs of PHMs, the fourth side of the quadrilateral, $M^{(r)}_{t,f}$, is uniquely determined.
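A hypothetical sketch of this last step: since the three component masks close the quadrilateral against the normalized mixture (i.e., assuming $M^{(d)}_{t,f} + M^{(r)}_{t,f} + M^{(n)}_{t,f} = 1$), the fourth side follows by complex subtraction:

```python
def reverberant_mask(m_direct, m_noise):
    """Sketch: the fourth side of the quadrilateral.

    With the mixture normalized to 1 and the three component masks
    summing to it, the reverberant-source mask is what remains after
    removing the direct source and the noise.
    """
    return 1.0 - m_direct - m_noise
```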
\vspace{-0.1in}
\subsection{Multi-scale objective}
Recently, a multi-scale spectrogram (MSS) loss function has been successfully used in a few audio synthesis studies \cite{wang2019neural, Engel2020DDSP}.
We incorporate this multi-scale scheme not only in the spectral domain but also in the waveform domain similar to \cite{Yao2019}.
Learning to maximize the cosine similarity can be regarded as maximizing the signal-to-distortion ratio (SDR) \cite{choi2019phase}. The cosine similarity loss $C$ between the estimated signal $\hat{\bm{y}}^{(k)} \in \mathbb{R}^{N}$ and the ground-truth signal $\bm{y}^{(k)} \in \mathbb{R}^{N}$ is defined as follows, $C(\bm{y}^{(k)},\hat{\bm{y}}^{(k)}) = -\frac{\langle\bm{y}^{(k)},\hat{\bm{y}}^{(k)}\rangle}{\norm{\bm{y}^{(k)}} \norm{\hat{\bm{y}}^{(k)}}}$,
where $N$ denotes the temporal dimensionality of a signal and $k$ denotes the type of signal ($k \in \{d,r,n\}$).
Consider a sliced signal $\bm{y}^{(k)}_{[\frac{N}{M}(i-1):\frac{N}{M}i]}$, where $i$ denotes the segment index and $M$ denotes the number of segments.
By slicing the signal and normalizing it by its norm, each sliced segment is considered a unit for computing $C$.
Therefore, we hypothesize that it is important to choose a proper segment length unit $\frac{N}{M}$ when computing $C$.
In our case, we used multiple settings of segment lengths $g_{j}=\frac{N}{M_{j}}$ as follows,\useshortskip
\begin{equation}
\label{eq:multiscale}
\mathcal{L}_{wav}^{(k)} = \sum_{j} \frac{1}{M_j}\sum_{i=1}^{M_j} C(\bm{y}^{(k)}_{[g_{j}(i-1):g_{j}i]},\hat{\bm{y}}^{(k)}_{[g_{j}(i-1):g_{j}i]}),
\end{equation}
\setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt}
\setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt}
where $M_{j}$ denotes the number of sliced segments.
In our case, the set of $g_j$\textquotesingle s was chosen as follows, $g_j \in \{4064, 2032, 1016, 508\}$.
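A minimal pure-Python sketch of Eq.~(\ref{eq:multiscale}) (assuming, for simplicity, that the signal length is divisible by every segment length):

```python
def cosine_loss(a, b, eps=1e-8):
    """Negative cosine similarity between two equal-length sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return -dot / (na * nb + eps)

def multiscale_cosine_loss(y, y_hat, segment_lengths=(4064, 2032, 1016, 508)):
    """Average the segment-wise cosine loss at each scale, sum over scales."""
    total = 0.0
    for g in segment_lengths:
        m = len(y) // g
        total += sum(
            cosine_loss(y[g * i: g * (i + 1)], y_hat[g * i: g * (i + 1)])
            for i in range(m)
        ) / m
    return total
```

A perfect estimate gives a loss of about $-1$ per scale ($-4$ in total for the four scales above).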
Next, the multi-scale loss on spectral domain is defined as follows,
\useshortskip
\begin{equation}
\mathcal{L}_{spec}^{(k)} = \sum_{i} \norm{ \, {\lvert \text{STFT}_i(\bm{y}^{(k)}) \rvert}^{0.3} - {\lvert \text{STFT}_i(\hat{\bm{y}}^{(k)}) \rvert}^{0.3} }^2,
\end{equation}
\useshortskip
where $i$ denotes the FFT size used in $\text{STFT}_{i}$.
The only difference from the original MSS loss is that we replaced the log transformation with power-law compression, as the latter has been used successfully in previous speech enhancement studies \cite{erdogan2018investigations, wilson2018exploring}.
We used FFT sizes of (1024, 512, 256) with 75\% overlap.
The final loss function is defined by adding all the components as follows, $\mathcal{L}_\text{final} = \sum_{k \in \{d,r,n\}} \mathcal{L}_{wav}^{(k)} + \mathcal{L}_{spec}^{(k)}$.
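The spectral term can be sketched the same way; for illustration we use a naive magnitude DFT (rectangular window) in place of a real STFT implementation, with the hop fixed at one quarter of the FFT size for the 75\% overlap:

```python
import cmath

def dft_mag(frame):
    """Magnitude spectrum of one frame via a naive DFT (illustration only)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2 + 1)]

def spec_mags(x, n_fft, hop):
    """Magnitude spectrogram: one DFT per hop-spaced frame."""
    return [dft_mag(x[s:s + n_fft]) for s in range(0, len(x) - n_fft + 1, hop)]

def multiscale_spectral_loss(y, y_hat, fft_sizes=(1024, 512, 256)):
    """Squared error between power-law-compressed (exponent 0.3)
    magnitude spectrograms, summed over several FFT sizes (hop = n // 4)."""
    total = 0.0
    for n in fft_sizes:
        for fa, fb in zip(spec_mags(y, n, n // 4), spec_mags(y_hat, n, n // 4)):
            total += sum((a ** 0.3 - b ** 0.3) ** 2 for a, b in zip(fa, fb))
    return total
```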
\section{Experiments}
\label{sec:pagestyle}
\begin{table*}[t]
\begin{center}
\scalebox{0.68}{
\centering
\begin{tabular}{lcc|ccccccc|ccccccc}
\toprule \\[-2ex]
$\,$ & $\,$ & $\,$ & \multicolumn{7}{c|}{Synthetic without Reverb} & \multicolumn{7}{c}{Synthetic with Reverb}\\
\midrule \\[-2ex]
Methods & Size(M/MB) & RT &{PESQ1} &{PESQ2} &{CBAK} &{COVL} &{CSIG} &{SI-SDR} &{STOI} &{PESQ1} &{PESQ2} &{CBAK} &{COVL} &{CSIG} &{SI-SDR} &{STOI} \\
\midrule
Noisy & - & - & 2.45 & 1.58 & 2.53 & 2.35 & 3.19 & 9.07 & 91.52 & 2.75 & 1.82 & 2.80 & 2.64 & 3.50 & 9.03 & 86.62\\
NSnet \cite{xia2020weighted} & 1.27/4.84 & \ding{51} & 2.68 & 1.81 & 2.00 & 2.24 & 2.78 & 12.47 & 90.56 & 2.45 & 1.52 & 1.94 & 1.95 & 2.52 & 9.18 & 82.15 \\
DTLN \cite{westhausen2020dual} & 0.99/3.78 & \ding{51} & 3.04 & - & - & - & - & 16.34 & 94.76 & 2.70 & - & - & - & - & 10.53 & 84.68\\
ConvTasNet \cite{koyama2020exploring} & 5.08/19.38 & \ding{55} & - & 2.73 & 3.64 & 3.41 & 4.07 & - & - & - & 2.71 & \bf{3.67} & 3.47 & \bf{4.21} & - & - \\
PoCoNet1 \cite{isik2020poconet} & 50/190.73 & \ding{55} & - & 2.71 & 3.02 & 3.29 & 3.85 & - & - & - & \bf{2.83} & 3.21 & 3.35 & 3.83 & - & - \\
PoCoNet2 \cite{isik2020poconet} & 50/190.73 & \ding{55} & - & 2.75 & 3.04 & 3.42 & 4.08 & - & - & - & - & - & - & - & - & - \\
DCCRN-E \cite{hu2020dccrn} & 3.7/14.11 & \ding{51} & 3.27 & - & - & - & - & - & - & 3.08 & - & - & - & - & - & -\\
DCCRN-CL \cite{hu2020dccrn} & 3.7/14.11 & \ding{55} & 3.26 & - & - & - & - & - & - & 3.10 & - & - & - & - & - & -\\
\midrule
\bf{TRU-Net} (FP32) & 0.38/1.45 & \ding{51} & \bf{3.36} & \bf{2.86} & \bf{3.66} & \bf{3.55} & \bf{4.21} & \bf{17.55} & \bf{96.32} & \bf{3.35} & 2.74 & 3.62 & \bf{3.48} & 4.17 & \bf{14.87} & \bf{91.29}\\
\bf{TRU-Net} (INT8) & 0.38/0.36 & \ding{51} & 3.35 & 2.84 & 3.62 & 3.53 & 4.18 & 17.23 & 96.12 & 3.31 & 2.70 & 3.56 & 3.45 & 4.16 & 14.47 & 91.01\\
\bottomrule
\end{tabular}
}
\vspace{-0.1in}
\caption{Objective evaluation results on DNS-challenge synthetic development sets. PoCoNet2 denotes the model with partial dereverberation described in \cite{isik2020poconet}, and PoCoNet1 is the model trained without it. We denote the network size (Size) in two aspects, the number of parameters in million (M) and the actual model size in megabyte (MB). The models with real-time (RT) capability are marked with \ding{51}, otherwise \ding{55}.
}
\label{tab:dns}
\vspace{-0.2in}
\end{center}
\end{table*}
\subsection{Implementation details}
Since our goal is to perform both denoising and dereverberation, we used pyroomacoustics \cite{scheibler2018pyroomacoustics} to simulate artificial reverberation with randomly sampled absorption, room size, source location, and microphone distance.
We used 2-second speech and noise segments and mixed them with a uniformly distributed source-to-noise ratio (SNR) ranging from -5 dB to 25 dB.
The input features were a channel-wise concatenation of the log-magnitude spectrogram, the PCEN spectrogram, and the real/imaginary parts of the demodulated phase.
We used the AdamW optimizer \cite{DBLP:conf/iclr/ReddiKK18}, and the learning rate was halved when the validation score did not improve for three consecutive epochs. The initial learning rate was set to 0.0004.
The window size and hop size were set to 512 (32 ms) and 128 (8 ms), respectively.
We also quantized the proposed model into INT8 format and compared the model size with prior works. The purpose of our quantized model experiments is to reduce the model size and computational cost for embedded environments. We adopted the computation flow using quantized numbers suggested in \cite{integeronlyquantization} to quantize the neural network. In addition, the uniform symmetric quantization scheme \cite{googlewhitepaper}, which uses uniform quantization and restricts zero-point to 0, was applied for efficient hardware implementation. In the experiments, all the layers in the neural network are processed using quantized weights, activations, and inputs; only bias values are represented in full precision. Other processing steps such as feature extraction and masking are computed in full precision. For encoder and decoder layers, we observe the scale statistics of intermediate tensors during training. Then, during inference, we fix the scales of activations using the average of the observed minimum and maximum values. Only GRU layers are dynamically quantized during the inference time due to the large dynamic range of internal activations at each time step.
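For illustration, the weight-quantization part of this scheme can be sketched as follows (a simplified per-tensor version; names are ours, not from the deployed implementation):

```python
def quantize_symmetric_int8(weights):
    """Uniform symmetric quantization: zero-point fixed to 0; the scale
    maps the largest |w| to 127, values are rounded and clipped."""
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0.0:
        return [0] * len(weights), 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from quantized integers."""
    return [v * scale for v in q]
```

With the zero-point fixed to 0, zero weights map exactly to the quantized value 0, and the round-off error is bounded by half the scale.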
\vspace{-0.1in}
\subsection{Ablation study}
\label{ssec:ablation study}
In order to confirm the effect of PCEN, the multi-scale objective, and the FGRU block, we trained and validated the model using the CHiME2 training set and development set, respectively.
An ablation study was conducted on the CHiME2 test set.
TRU-Net-A denotes the proposed method. TRU-Net-B denotes the model trained without multi-scale objective. TRU-Net-C denotes the model trained without the PCEN feature. TRU-Net-D denotes the model trained without FGRU block.
We used the original SDR \cite{vincent2006performance} to compare our model with other models.
The results are shown in Table \ref{tab:chime2}.
It is clearly observable that all the proposed methods contribute to the performance improvement. Note that the FGRU block contributes significantly to the performance.
We also compared the proposed model with other models using the CHiME2 test set.
The proposed model showed better performance than not only the recent lightweight model Tiny-LSTM (TLSTM) and its pruned version (PTLSTM) \cite{fedorov2020tinylstms}, but also the large-sized model \cite{wilson2018exploring}.
\begin{table}[htbp]
\renewcommand{\tabcolsep}{1.6mm}
\begin{center}
\scalebox{0.68}{
\centering
\begin{tabular}{lc|ccccccc}
\toprule \\[-2ex]
$\,$ & $\,$ & \multicolumn{7}{c}{Input SNR}\\
\midrule \\[-2ex]
Methods & Size (M/MB) & {-6} &{-3} &{0} &{3} &{6} &{9} & Avg.\\
\midrule
TLSTM (FP32) \cite{fedorov2020tinylstms} & 0.97/3.70 & 10.01 & 11.54 & 13.08 & 14.23 & 15.85 & 17.46 & 13.70 \\
PTLSTM (FP32) \cite{fedorov2020tinylstms} & 0.52/1.97 & 10.07 & 11.59 & 13.10 & 14.31 & 15.89 & 17.50 & 13.74 \\
PTLSTM (INT8) \cite{fedorov2020tinylstms} & 0.61/0.58 & 9.82 & 11.37 & 12.91 & 14.20 & 15.74 & 17.44 & 13.58\\
PTLSTM (INT8) \cite{fedorov2020tinylstms} & 0.33/0.31 & 9.33 & 10.91 & 12.46 & 13.79 & 15.46 & 17.16 & 13.18 \\
Wilson et al. \cite{wilson2018exploring} & 65/247.96 & 12.17 & 13.44 & 14.70 & 15.83 & 17.30 & 18.78 & 15.37 \\
\midrule
TRU-Net-A (FP32) & 0.38/1.45 & \bf{12.36} & \bf{13.62} & \bf{15.08} & \bf{16.21} & \bf{17.70} & \bf{19.39} & \bf{15.73}\\
TRU-Net-B (FP32) & 0.38/1.45 & 12.21 & 13.39 & 14.91 & 16.09 & 17.53 & 19.24 & 15.56\\
TRU-Net-C (FP32) & 0.38/1.45 & 11.96 & 13.24 & 14.69 & 15.97 & 17.47 & 19.18 & 15.42\\
TRU-Net-D (FP32) & 0.31/1.18 & 11.83 & 13.14 & 14.63 & 15.85 & 17.28 & 18.97 & 15.28\\
TRU-Net-A (INT8) & 0.38/0.36 & 12.35 & 13.62 & 15.03 & 16.18 & 17.62 & 19.30 & 15.68\\
TRU-Net-B (INT8) & 0.38/0.36 & 12.23 & 13.40 & 14.91 & 16.08 & 17.51 & 19.21 & 15.56\\
TRU-Net-C (INT8) & 0.38/0.36 & 11.96 & 13.20 & 14.64 & 15.94 & 17.42 & 19.11 & 15.38\\
TRU-Net-D (INT8) & 0.31/0.30 & 11.79 & 13.13 & 14.56 & 15.78 & 17.19 & 18.85 & 15.22\\
\bottomrule
\end{tabular}
}
\vspace{-0.1in}
\caption{Objective evaluation results on the CHiME2 test set.}
\label{tab:chime2}
\vspace{-0.4in}
\end{center}
\end{table}
\vspace{-0.1in}
\subsection{Denoising results}
\label{ssec:denoising_results}
We further evaluated the denoising performance of our model by training it on the large-scale DNS-challenge dataset \cite{reddy2020icassp} and an internally collected dataset.
It was tested on two non-blind DNS development sets, 1) synthetic clips without reverb (Synthetic without Reverb) and 2) synthetic clips with reverb (Synthetic with Reverb).
We compared our model with the recent models \cite{isik2020poconet, hu2020dccrn, xia2020weighted, westhausen2020dual, koyama2020exploring} submitted to the previous 2020 Interspeech DNS-challenge.
Six evaluation metrics, PESQ, CBAK, COVL, CSIG, SI-SDR, and STOI \cite{recommendation2001perceptual, loizou2013speech, le2019sdr, taal2010short}, were used.
Note that although it is recommended to use the ITU-T P.862.2 wide-band version of PESQ (PESQ2), a few studies reported their scores using ITU-T P.862.1 (PESQ1).
Therefore, we used both PESQ versions to compare our model with other models.
The results are shown in Table \ref{tab:dns}.
We can see that TRU-Net shows the best performance in the Synthetic without Reverb set while having the smallest number of parameters.
In the Synthetic with Reverb set, TRU-Net showed competitive performance using orders of magnitude fewer parameters than other models.
\vspace{-0.1in}
\subsection{Dereverberation results}
\label{ssec:dereverb_results}
The performance of simultaneous denoising and dereverberation was tested on the \textit{min} subset of the WHAMR dataset, which contains 3,000 audio files.
The WHAMR dataset is composed of noisy-reverberant mixtures and the direct sources as ground truth.
TRU-Net models (FP32 and INT8) in Table \ref{tab:dns} were used for the test.
We show the denoising and dereverberation performance of our model in Table \ref{tab:whamr} along with two other models that were tested on the same WHAMR dataset.
Our model achieved the best results compared to the baseline models, which shows the parameter efficiency of TRU-Net on the simultaneous denoising and dereverberation task.
\vspace{-0.1in}
\begin{table}[htbp]
\begin{center}
\scalebox{0.68}{
\centering
\begin{tabular}{l|c|ccc}
\toprule
Method & Size (M/MB) & PESQ1 & SI-SDR & STOI \\
\midrule
Noisy & - & 1.83 & -2.73 & 73.00 \\
NSnet \cite{xia2020weighted} & 1.27/4.84 & 1.91 & 0.34 & 73.02 \\
DTLN \cite{westhausen2020dual} & 0.99/3.78 & 2.23 & 2.12 & 80.40 \\
\midrule
TRU-Net (FP32) & 0.38/1.45 & \bf{2.51} & \bf{3.51} & \bf{81.22} \\
TRU-Net (INT8) & 0.38/0.36 & 2.49 & 3.03 & 80.56 \\
\bottomrule
\end{tabular}
}
\vspace{-0.1in}
\caption{
Objective evaluation of simultaneous denoising and dereverberation results on the WHAMR dataset.
}
\label{tab:whamr}
\end{center}
\vspace{-0.1in}
\end{table}
\vspace{-0.2in}
\subsection{Listening test results}
\label{ssec:listening}
Using the proposed model (TRU-Net (FP32)) in Table \ref{tab:dns}, we participated in 2021 ICASSP DNS Challenge Track 1 \cite{reddy2020icassp}.
For better perceptual quality, we mixed the estimated direct source and reverberant source at 15 dB, and applied a zero-delay dynamic range compression (DRC).
The average computation time to process a single frame (including FFT, iFFT, and DRC) was 1.97 ms and 1.3 ms on 2.7 GHz Intel i5-5257U and 2.6 GHz Intel i7-6700HQ CPUs, respectively.
The lookahead of TRU-Net is 0 ms.
The listening test was conducted based on ITU-T P.808.
The results are shown in Table \ref{tab:p808}.
The model was tested on various speech sets, including singing voice, tonal language, non-English (including tonal), English, and emotional speech. The results show that TRU-Net achieves better performance than the baseline model, NSnet2 \cite{braun2020data}.
\begin{table}[htbp]
\renewcommand{\tabcolsep}{1.6mm}
\begin{center}
\scalebox{0.68}{
\centering
\begin{tabular}{lc|cccccc}
\toprule
Method & Size (M/MB) & Singing & Tonal & Non-English & English & Emotional & Overall \\
\midrule
Noisy & - & 2.96 & 3.00 & 2.96 & 2.80 & 2.67 & 2.86 \\
NSnet2 \cite{braun2020data} & 2.8/10.68 & \bf{3.10} & 3.25 & 3.28 & 3.30 & \bf{2.88} & 3.21\\
\midrule
TRU-Net & 0.38/1.45 & 3.08 & \bf{3.38} & \bf{3.43} & \bf{3.41} & \bf{2.88} & \bf{3.32} \\
\bottomrule
\end{tabular}
}
\vspace{-0.1in}
\caption{
MOS results on the DNS-challenge blind test set
}
\label{tab:p808}
\end{center}
\vspace{-0.1in}
\end{table}
\vspace{-0.3in}
\section{Relation to prior works}
\vspace{-0.1in}
\label{sec:related_work}
Recently, there has been increasing interest in phase-aware speech enhancement because of the sub-optimality of reusing the phase of the mixture signal.
While most of these works tried to estimate the clean phase using a phase mask or an additional network, the absolute phase difference between the mixture and a source can actually be computed using the law of cosines \cite{mowlaee2012phase}.
Inspired by this, \cite{wang2019deep} proposed to estimate a rotational direction of the absolute phase difference for speech separation.
The FGRU and TGRU used in TRU-Net are similar to the work in \cite{grzywalski2019using}, which used bidirectional long short-term memory (bi-LSTM) networks on the frequency axis and the time axis combined with a 2D-CNN-based U-Net.
The difference is that bi-LSTM was utilized to increase performance in \cite{grzywalski2019using}, whereas we employ an FGRU and a uni-directional TGRU, combined with the proposed lightweight 1D-CNN-based (frequency-axis) U-Net, to better handle the online inference scenario.
\vspace{-0.1in}
\section{Conclusions}
\vspace{-0.1in}
\label{sec:print}
In this work, we proposed TRU-Net, which is an efficient neural network architecture specifically designed for online inference applications.
Combined with the proposed PHM, we successfully demonstrated a single-stage denoising and dereverberation in real-time.
We also showed that using PCEN and multi-scale objectives improves the performance further.
Experimental results confirm that our model achieves performance comparable with state-of-the-art models that have a significantly larger number of parameters.
For future work, we plan to employ modern pruning techniques on an over-parameterized model to develop a big-sparse model which may provide better performance than a small-dense model with the same number of parameters.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{\label{sec:level1} Introduction}
According to a recent study, the number of people choosing to shop at a physical department store is on the decline; more and more people today prefer the convenience of online shopping \cite{nytimes}. The growth of sites like Amazon and Ebay is evidence of this emerging trend. Stores also tend to encourage buyers with discounts and e-coupons for shopping online, since this means they do not have to stock goods at their brick-and-mortar stores but may ship them as needed from the warehouse directly to the buyer. This is now a well-proven business model for lowering operational costs and maximizing profits.
As a result of this trend, online stores are able to profile buyers based on their preferences. Stores may use such information to recommend products to other buyers with a similar profile, or may cleverly advertise the products that a buyer is more likely to buy. In either case, the advertising engine needs to analyze large datasets containing past records of buyers, the products they bought, and the timing of their purchases. Leskovec et al. \cite{Leskovec} used such a dataset from a large online retailer to study the effectiveness of viral marketing techniques. They researched the effectiveness of personal recommendations between buyers and modeled these with a simple stochastic model. Viral marketing is a proven technique, as demonstrated by the growth of email services like Yahoo and Hotmail, which based their marketing on it: each email sent from their services contained a footer advertising the service. Simple, and very effective.
Referral marketing is surely the future of web advertising. Free services online tend to lose their value if they are easily available even to those who do not need them. Google used a referral technique to launch their GMail service by admitting users only by invitation. This requires some effort from the potential user of the free service, who must actively seek out existing users to obtain a referral. It also gives current users a value perception of the free product they are using, because it is not widely available to everybody.
This article examines the Amazon product co-purchasing network on both a micro and a macro scale. Understanding interactions at a localized, micro scale may help isolate the noise and point to interesting trends that can be extrapolated to the larger scale. Motifs have been used extensively to study interactions in biological networks, especially in bacteria such as E.~coli. This article shows how motifs can be used in a product co-purchasing network to gain insights into relations between products; both 3-node and 4-node motifs are analyzed.
Community structures have long been used in network analysis and the article by Leskovec \cite{Leskovec} identified communities and the products that these communities preferred. In this article, we suggest using community detection methods to make the motif-based analysis more accurate and pertain to a select list of products.
Section II describes the source and the structure of the dataset we will use for analysis. Section III concentrates on the motifs discovered in the dataset, their significance and their frequency of occurrence. Section IV discusses algorithms for detecting community structures in the dataset for identifying trends between products frequently co-purchased together.
\section{\label{sec:level2} Nature and Description of the Dataset}
The dataset for this article was obtained from the publicly available Stanford Network Analysis Platform \cite{snap}. The description reads as follows: ``Network was collected by crawling Amazon website. It is based on Customers Who Bought This Item Also Bought feature of the Amazon website. If a product \texttt{i} is frequently co-purchased with product \texttt{j}, the graph contains a directed edge from \texttt{i} to \texttt{j}.'' Along with the edge-list representation of co-purchased products, metadata is also present for each product, summarized in \texttt{Table I}.
\begin{table}
\caption{\label{tab:table1} Sample Amazon Product Metadata}
\begin{tabular}{|l|c|}
\hline
\bf{Property} & \bf{Value}\\ \hline
{\bf Id} & 1\\ \hline
{\bf ASIN} & 0827229534\\ \hline
{\bf Title } & Patterns of Preaching: A Sermon Sampler\\ \hline
{\bf Group } & Book\\ \hline
{\bf SalesRank } & 396585\\ \hline
{\bf Similar Products } & 5 0804215715 156101074X \\ & 0687023955 0687074231 082721619X\\ \hline
{\bf Categories } & 2
|Books[283155]|Subjects[1000]|\\ & Religion \& Spirituality[22]| \\ & Christianity[12290]
|Clergy[12360]|\\ & Preaching[12368]\\ \hline
{\bf Reviews } & Total: 2 downloaded: 2 avg rating: 5\\ &
2000-7-28 cutomer: A2JW67OY8U6HHK \\ & rating: 5 votes: 10 helpful: 9 \\ &
2003-12-14 cutomer: A2VE83MZF98ITY \\ & rating: 5 votes: 6 helpful: 5\\ \hline
\end{tabular}
\end{table}
This article's experiments were carried out on a subset of the original dataset to limit the required computational resources. The portion of the dataset used for analysis in this article was collected on March 02, 2003. Some statistics of this dataset are presented in \texttt{Table II}. As the statistics show, the network is quite dense, with a clustering coefficient of about 0.42. The fraction of nodes belonging to the largest strongly connected component is very high - 0.922. This implies that the sampled products are very closely related to each other.
\begin{table}
\caption{\label{tab:table2} Dataset Statistics}
\begin{tabular}{|l|c|}
\hline
\bf{Property} & \bf{Value}\\ \hline
{\bf Nodes } & 262111 \\ \hline
{\bf Edges} & 1234877\\ \hline
{\bf Nodes in largest WCC } & 262111 (1.000)\\ \hline
{\bf Edges in largest WCC } & 1234877 (1.000)\\ \hline
{\bf Nodes in largest SCC } & 241761 (0.922)\\ \hline
{\bf Edges in largest SCC } & 1131217 (0.916)\\ \hline
{\bf Average clustering coefficient }& 0.4240\\ \hline
{\bf Number of triangles } & 717719\\ \hline
{\bf Fraction of closed triangles } & 0.2361\\ \hline
{\bf Diameter (longest shortest path)} & 29\\ \hline
{\bf 90-percentile effective diameter } & 11\\ \hline
\end{tabular}
\end{table}
\section{\label{sec:level3} Motifs : Description and Significance}
\subsection{\label{sec:level1} Introduction to Motifs}
Motifs can be described as recurring, significant patterns of interconnections present in a network. These motifs have been found to occur more frequently than in comparable random graphs, and a number of biological networks are largely composed of such motifs. Each type of network seems to display its own set of characteristic motifs (ecological networks have different motifs than gene-regulation networks, etc.). Milo et al. analyzed such network motifs in their famous article \cite{milo}, where they showed that motifs form the basic building blocks of a number of biological networks, and they also formulated methods to detect the commonly occurring motifs in a network.
\subsection{\label{sec:level2} Market Segmentation }
Buyers online, as anywhere else, are usually classified into profiles that are indicative of their past buying trends. Market segmentation is a marketing term referring to the aggregating of prospective buyers into groups (segments) that have common needs and will respond similarly to a marketing action. Market segmentation enables companies to target different categories of consumers who perceive the full value of certain products and services differently from one another. Generally three criteria can be used to identify different market segments:
1) Homogeneity (common needs within segment)
2) Distinction (unique from other groups)
3) Reaction (similar response to market)
\subsection{\label{sec:level3} Analysis Functions for Motifs}
In this section, we define a product purchasability function for a 3-node motif that indicates the probability of a product being purchased in the context of the motif, using the in-degree ${V^{i}_{in}}$ at vertex \texttt{i} and ${|E_{motif}|}$, the total number of edges in the motif:
\begin{equation}
f(P_{i}) = \frac{{V^{i}_{in}}}{|E_{motif}|}
\end{equation}
The product purchasability function is not defined for a vertex in the motif with in-degree zero. For all other vertices, this function returns the fraction of edges entering the product relative to the total number of edges in the motif. To rate each motif, we then declare a Motif Rank function that indicates the fraction of nodes with a finite, positive product purchasability. Contrary to what its name implies, this function does not identify the importance or significance of a motif; instead, it measures the fraction of nodes (products) whose purchasability is defined, and thus indicates how large an actionable product set the motif provides.
\begin{equation}
\textnormal{Motif Rank} = \frac{\textnormal{Number of Nodes with Positive $f(P_{i})$}}{\textnormal{Total Nodes in Motif}}
\end{equation}
It must be noted that the Product Purchasability and the Motif Rank functions are comparable only within the particular class of k-node motifs (where k is usually 3 or 4) for which the functions were defined. We will now look at some of the most commonly occurring motifs discovered in the product co-purchasing network in the next sections.
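Both quantities are straightforward to compute from a motif's edge list; a small sketch (function names are ours):

```python
def purchasability_and_rank(edges):
    """Compute f(P_i) = in-degree / |E| for each node of a motif, and the
    Motif Rank = fraction of nodes with a positive f(P_i).

    edges: list of directed (u, v) pairs describing one motif.
    """
    nodes = {u for e in edges for u in e}
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    m = len(edges)
    # f(P_i) is undefined for nodes with in-degree zero.
    f = {n: indeg[n] / m for n in nodes if indeg[n] > 0}
    rank = len(f) / len(nodes)
    return f, rank
```

For example, Motif ID 1 (one node pointing to two others) yields $f(P_i)=0.5$ for both sinks and a Motif Rank of $2/3$, matching the values discussed below.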
\subsection{\label{sec:level4} Analysis of 3-Node Motifs in Network}
The statistics of the 3-node motifs present in the network studied in this article are summarized in \texttt{Table III}. As seen in biological networks, the probability of occurrence of some motifs is much higher (by an order of magnitude) than others. This is indicative of a structural similarity present in the network. Figure \ref{motif000} shows a 3-node motif with a frequency count of 131613 and MotifID 1. Motif IDs were assigned to the commonly occurring 3-node and 4-node motifs in the original article on motifs by Milo et al. \cite{milo}.
\begin{table}
\caption{\label{tab:table3} Frequency of 3-Node Motifs in the Network}
\begin{tabular}{|c|c|c|c|}
\hline
{\bf MotifId} & {\bf Nodes} & {\bf Edges} & {\bf Count}\\ \hline
1 & 3 & 2 & 131613\\ \hline
2 & 3 & 2 & 78578\\ \hline
3 & 3 & 3 & 104071\\ \hline
4 & 3 & 2 & 217566\\ \hline
5 & 3 & 3 & 16090\\ \hline
6 & 3 & 4 & 20400\\ \hline
7 & 3 & 4 & 32962\\ \hline
8 & 3 & 4 & 3685\\ \hline
9 & 3 & 3 & 2\\ \hline
10 & 3 & 3 & 135904\\ \hline
11 & 3 & 4 & 21579\\ \hline
12 & 3 & 5 & 28397\\ \hline
13 & 3 & 6 & 19319\\ \hline
\end{tabular}
\end{table}
MotifID 1, shown in Figure \ref{motif000}, represents the case where, having bought one product, a customer went on to buy either or both of the other two products. The $f(P_{i})$ of the two nodes at the bottom is 0.5, and it is undefined for the node at the top. We assume that the instant at which the product represented by the top node is purchased is the present instant. The customer may then purchase both of the other products, either of them, or neither. The Motif Rank can then be computed as 0.66.
\begin{figure}[h!]
\caption{Motif ID : 1}
\centering
\includegraphics[scale=0.2]{motif-000.png}
\label{motif000}
\end{figure}
MotifID 3, shown in Figure \ref{motif002}, is an interesting motif with a relatively high frequency of occurrence. There is a reciprocating relation between two of the nodes. This is a strongly connected component, indicating the close similarity between the nodes. The $f(P_{i})$ values of the nodes in this motif are all 0.33. The Motif Rank of this motif is 1.
\begin{figure}[h!]
\caption{Motif ID : 3}
\centering
\includegraphics[scale=0.2]{motif-002.png}
\label{motif002}
\end{figure}
MotifID 4, shown in Figure \ref{motif003}, has the highest frequency of occurrence among all the 3-node motifs; the next highest frequency is about 40\% lower. This is a converging motif: consumers who bought largely unrelated products (the two nodes at the top) also bought the product represented by the node at the bottom. The unrelated nature of the two product nodes at the top stands out in this motif. The high frequency of this motif also points to a converging factor in the network, implying that a large number of customers end up purchasing a small subset of products. Identifying this small subset can help the online store improve services and reduce operational costs by smartly stocking the identified products. The Motif Rank of this motif is 0.33, due to the presence of just one node with a positive $f(P_{i})$.
\begin{figure}[h!]
\caption{Motif ID : 4}
\centering
\includegraphics[scale=0.2]{motif-003.png}
\label{motif003}
\end{figure}
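The dominance of this converging pattern can be probed directly from the edge list; the sketch below counts non-induced occurrences of the pattern (unordered pairs of predecessors sharing a successor), a rough proxy for the Motif ID 4 count in Table III:

```python
from collections import defaultdict

def count_convergence(edges):
    """Count non-induced occurrences of the converging pattern:
    unordered pairs of nodes that share a common successor."""
    indeg = defaultdict(int)
    for _, v in edges:
        indeg[v] += 1
    # A node with in-degree d is the apex of d*(d-1)/2 converging pairs.
    return sum(d * (d - 1) // 2 for d in indeg.values())
```

An exact motif census would additionally check that the subgraph induced by each triple contains no other edges; the simplified count above upper-bounds it.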
MotifID 10 shown in Figure \ref{motif009} contains a strongly connected component showing a close correlation between the products. The $f(P_{i})$ of the node at the top is undefined. The node in the centre has a $f(P_{i})$ of 0.66 and the node at the bottom has one of 0.33. Customers buying the product represented by the node at the top may buy the one in the middle too. The argument can be extended to the product at the bottom too. With increasing edge separation between product nodes, the probability that the customer will buy the product falls exponentially. This is due to the fact that most customers place budgets and prioritize the items to purchase depending on the ones they need the most.
\begin{figure}[h!]
\caption{Motif ID : 10}
\centering
\includegraphics[scale=0.2]{motif-009.png}
\label{motif009}
\end{figure}
\subsection{\label{sec:level5} Analysis of 4-Node Motifs in Network}
In this section, we used a motif-searching algorithm to detect all 4-node motifs in the network and obtained their frequencies of occurrence. Motif IDs range from 1 to 200, encompassing all the possible permutations of 4-node motifs. Figure \ref{4-motif-hist} shows the frequency distribution of the 4-node motifs. The three tallest bars belong to Motif IDs 59, 25, and 5.
\begin{figure}[h!]
\caption{4-Node Motif Frequency Distribution}
\centering
\includegraphics[scale=0.45]{4-motif-hist.png}
\label{4-motif-hist}
\end{figure}
Figures \ref{4-motif004}, \ref{4-motif024} and \ref{4-motif058} show the three most frequently occurring 4-node motifs in the product co-purchasing network. Motif IDs 25 and 59 share a characteristic observed in the most frequently occurring 3-node motif, MotifID 4: they are both converging motifs, indicating the high probability of a customer buying the product to which the edges in the motif converge. An interesting resemblance to the 3-node analysis is that the 4-node motif with Motif ID 59, shown in Figure \ref{4-motif058}, also has the highest frequency of occurrence among all 4-node motifs.
\begin{figure}[h!]
\caption{Motif ID : 5}
\centering
\includegraphics[scale=0.2]{4-motif-004.png}
\label{4-motif004}
\end{figure}
Figure \ref{4-motif004} shows the third most frequently occurring 4-node motif in the network. The $f(P_{i})$ values of the two nodes at the bottom are 0.66 and 0.33. This motif is neither converging nor diverging, but there is a tendency within it toward the bottom node with the higher $f(P_{i})$, indicating a higher probability that the product represented by this node is purchased.
It is interesting to observe that Motif ID 25, shown in Figure \ref{4-motif024}, is statistically the second most frequent. It can be identified as a converging motif, with every node tracing a path to the bottom node. Again, the $f(P_{i})$ of the bottom node is the highest in the motif. This motif may also represent a similarity in tastes among customers belonging to different schools of thought or having different backgrounds.
\begin{figure}[h!]
\caption{Motif ID : 25}
\centering
\includegraphics[scale=0.2]{4-motif-024.png}
\label{4-motif024}
\end{figure}
Motif ID 59 is truly indicative of the convergence trend mentioned previously. It is also statistically the most significant 4-node motif, just like its similar-looking counterpart among the 3-node motifs.
\begin{figure}[h!]
\caption{Motif ID : 59}
\centering
\includegraphics[scale=0.2]{4-motif-058.png}
\label{4-motif058}
\end{figure}
\subsection{\label{sec:level6} Extrapolating the Micro to the Macro }
Motifs by themselves do not describe the large-scale structure of the network. To infer properties of the larger picture, we have to aggregate the perspectives provided by the individual motif analyses. A suggested approach is to break the original network into a number of communities using the Girvan-Newman algorithm \cite{gir-new} or the faster Clauset-Newman-Moore algorithm \cite{clauset}. Each of these communities then represents a single class of objects (nodes). If the network is large and varied, we can continue this process until we reach the required coarseness of sub-categorization. The resulting networks can then be subjected to motif analysis as carried out in the previous sections.
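This community-then-motif pipeline can be sketched with the \texttt{networkx} implementation of the Clauset-Newman-Moore method; the helper name and the size threshold below are illustrative, not from the article.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_subgraphs(G, min_size=3):
    """Split G into communities and return each as a subgraph for motif analysis."""
    # Modularity maximization is defined on undirected graphs, so the
    # co-purchasing digraph is symmetrized for the split only.
    communities = greedy_modularity_communities(G.to_undirected())
    return [G.subgraph(c).copy() for c in communities if len(c) >= min_size]
```

Each returned subgraph keeps its original edge directions, so the motif census can be run on it unchanged.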
In the case of the product co-purchasing network used in this article, such granularity will enable us to study the dynamics of particular products more accurately. Since buyer profiles are built from progressive purchases that may not belong to the same community of products, we cannot extrapolate the findings obtained from these community-centric entities to build customer profiles; what we can use them for is identifying product purchasing trends. We can monitor the motif statistics over time to gather useful information about such trends. For example, if the frequency of occurrence of converging motifs increases, we deduce that a larger set of customers is looking to buy a smaller subset of products. Having adjusted the coarseness of the community sub-categorization, we can make a good guess as to what these products are and whether, as online retailers, we need to make any operational changes to match the demand.
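One hedged sketch of such a trend indicator is the share of connected 3-node motifs whose edges all converge on a single node; the function name and the convergence test are illustrative, not taken from the article.

```python
from itertools import combinations
import networkx as nx

def converging_share(G):
    """Fraction of connected 3-node motifs where one node receives both edges."""
    converging = total = 0
    for nodes in combinations(G.nodes(), 3):
        sub = G.subgraph(nodes)
        if not nx.is_weakly_connected(sub):
            continue
        total += 1
        # a motif converges if some node has in-degree 2 within the motif
        if max(d for _, d in sub.in_degree()) == 2:
            converging += 1
    return converging / total if total else 0.0
```

Tracking this ratio across snapshots of the network would make the "more customers, fewer products" signal described above quantitative.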
\section{\label{sec:level4} Community Detection for Motif Analysis}
Social networks are a product of the contexts that bring people together. The context can be a shared interest in a particular topic or kind of book. Sometimes there are circumstances, such as a specific job or religious affiliation, that make people more likely to be interested in the same type of book or DVD.
Community discovery algorithms presented in \cite{gir-new} and \cite{clauset} provide mechanisms for community detection in complex systems. The Girvan-Newman algorithm \cite{gir-new} focuses on identifying the weakest, least central edges in the network; these are typically the edges that connect communities. Communities are then detected by progressively removing such edges from the original graph. The modularity maximization method \cite{clauset} detects communities by searching over possible divisions of a network for one or more with particularly high modularity. Since exhaustive search over all possible divisions is usually intractable, practical algorithms are based on approximate optimization methods such as greedy algorithms, simulated annealing, or spectral optimization, with different approaches offering different balances between speed and accuracy. The usefulness of modularity optimization is, however, questionable: on the one hand, it has been shown that modularity optimization often fails to detect clusters smaller than some scale, depending on the size of the network (the resolution limit); on the other hand, the landscape of modularity values is characterized by a huge degeneracy of partitions with high modularity, close to the absolute maximum, which may be very different from each other. The choice of algorithm for community detection may thus be made at the behest of the investigator, who knows the nature of the network being studied.
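The Girvan-Newman edge-removal scheme is available directly in \texttt{networkx} as a generator that yields successively finer partitions. The following sketch shows the first split on a toy graph of two triangles joined by a single bridge, which is the highest-betweenness edge and therefore removed first.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

def first_partition(G):
    """Communities produced by the first Girvan-Newman split of G."""
    # girvan_newman removes the highest edge-betweenness edge(s) repeatedly
    # and yields a partition each time the component count increases.
    return [sorted(c) for c in next(girvan_newman(G))]
```

On larger networks, one would iterate the generator until the desired coarseness (or a modularity peak) is reached rather than stopping at the first split.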
\section{\label{sec:level7} Conclusions and Future Work}
This article defined functions for analyzing the occurrence and nature of motifs present in a network. Motifs by themselves are, as observed, not especially helpful in feature extraction, and we must combine such analysis with other methods like community detection. Nevertheless, we have shown that motifs can be used to visualize trends, extract interesting large-scale features and, on the whole, be useful in predicting user behavior in a network such as a product co-purchasing network.
With reference to a co-purchasing network, we were able to show that the following traits may be extracted using motifs:
\begin{enumerate}
\item Prediction of product demand
\item Purchasing trends in product categories
\item Relations between different products
\end{enumerate}
The extraction of these trends is vital from the perspective of online retailers hoping to capitalize on the growing number of customers choosing to shop online. One striking trend this article found is the large presence of converging motifs, indicating that most customers tend to buy a select few products. This does not contradict the ``long tail'' phenomenon; on the contrary, the long tail is a large-scale phenomenon that would not be visible in a micro-level sampling of the network. As \cite{Leskovec} has shown, the long tail is the future of online retailing; selling less of more is more pronounced in online retailing than in physical retailing.
In the future, one can look more deeply into the interactions between motifs and into the cascading effects of these motifs in the context of the network neighborhood. Various trends may be identified and used effectively for improving marketing and advertising of products by the management of a retailing organization.
\bibliographystyle{plain}